Imagine you’re running a company that specializes in building big machinery. Your engineers have just finished their latest creation: a machine capable of sorting freshly harvested produce. It removes all foreign material such as rocks, cans and pieces of plastic, a job that until now had to be done by hand by many workers standing alongside a conveyor belt. It also recognizes sub-par produce, which is sorted out and used for processing rather than direct consumption; dented apples don’t sell, after all. Your machine does all this at lightning speed, with barely any setup required.
You’re rightfully very proud of your company’s creation, and want to show it off at trade shows across the globe. Your mind’s eye drifts off to potential customers gawking at the extreme precision and efficiency with which tons upon tons of green beans are sorted automatically: good, good, cigarette butt, good, a little bit yellow, good, good, …
As your head of marketing steps into your office, you wake from your daydream. Still excited, you share your vision, but her raised eyebrow quickly brings you back down to earth. If only it were possible to bring along a four-by-two-by-two-meter machine weighing hundreds of kilos, along with a set of conveyors and stacks of fresh produce, to show just how the sorter works…
At CREATE I developed a tool for TOMRA to do just that. Once the user puts on the headset, she’s standing in a factory hall and given the choice between two machines: one that sorts potatoes using pinball-like flippers, and one that sorts smaller produce such as peas and french fries with blasts of air.
With a machine selected, the user gets to see it in action. The selected produce comes down the conveyors, is collected in a shaker to ensure an even spread and correct alignment, and at the business end of the machine, it’s sorted appropriately. The user is free to stick her head into the machine to get a close-up look at the sorting process, and can read some useful bits of info while she’s in there.
The application was developed in Unity, for both the Vive and the Gear VR. The main challenge was managing the large amount of produce that had to be sorted. I chose to use a particle system: this removes the overhead of having each item run its own logic individually, and it reduces rendering cost because particles are batched into a single draw call.
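To illustrate why a particle system beats per-object logic, here is a minimal NumPy sketch of the same idea outside of Unity. All names and numbers are my own assumptions for the example; the point is simply that one vectorized update over an array of particles replaces thousands of individual per-object update calls.

```python
import numpy as np

N = 10_000                                  # hypothetical particle count
positions = np.zeros((N, 3))                # one row per piece of produce
velocities = np.tile([0.0, 0.0, 1.0], (N, 1))  # all moving along the belt
dt = 1.0 / 90.0                             # one frame at the Vive's 90 Hz

# One vectorized step advances every particle at once, instead of
# calling an Update() method on N separate objects each frame.
positions += velocities * dt
```

The same principle carries over to rendering: because the particles share one material and one update path, the engine can draw them in a single batch rather than issuing a draw call per item.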
There was still a fair bit of optimization required: the conveyors determine the direction and velocity of the particles, but naively checking which conveyor each particle is on means thousands of bounding box checks each frame. With a budget of 11 milliseconds per frame (the Vive renders at 90 Hz), this wouldn’t do. I ended up using the extra Vector4 of custom data you get per particle to store which conveyor the particle had last passed, so only the current bounding box had to be checked. I also stored in that Vector4 which sorting path (acceptance, rejection or reuse) the particle would take.
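A rough sketch of that caching trick, again in Python with an invented two-conveyor layout: instead of testing a particle against every conveyor's bounding box, we test only the cached one and, failing that, the next conveyor down the line. In the actual Unity version the cached index lives in the particle's custom Vector4; the function and box coordinates below are hypothetical.

```python
import numpy as np

# Hypothetical conveyor layout: axis-aligned boxes (min_xyz, max_xyz),
# listed in the order the produce travels over them.
conveyors = [
    (np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 5.0])),
    (np.array([0.0, 0.0, 5.0]), np.array([1.0, 1.0, 10.0])),
]

def inside(box, p):
    """Axis-aligned bounding box containment test."""
    lo, hi = box
    return bool(np.all(p >= lo) and np.all(p <= hi))

def update_conveyor_index(p, cached_index):
    """Check only the cached conveyor; advance the index when the
    particle has moved on, instead of testing every box each frame."""
    if cached_index < len(conveyors) and inside(conveyors[cached_index], p):
        return cached_index
    nxt = cached_index + 1
    if nxt < len(conveyors) and inside(conveyors[nxt], p):
        return nxt
    return cached_index  # off the belt; the sorting paths take over
```

Because produce only ever moves forward along the belt, each particle does at most two box tests per frame instead of one per conveyor, which is what kept the whole simulation inside the frame budget.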
TOMRA brings along a Vive and several Gears to each trade show now, to show their machines in action. Once your lead puts his head onto a virtual conveyor belt and opens his mouth for the oncoming fries, you know you’ve got him hooked.