The components on the left-hand side of Figure A run on a computer (PC/Mac/Linux) and animate the display by sending new volumes (like frames in 2D animation) to the display over WiFi. On the right, the software components draw out the slices of the current volume, maintaining the image through persistence of vision.
This means that the display itself does not animate; it just shows one volume at a time. If no new volumes are sent, the last one will stick around until the display loses power.
Here We Go (3D)
If you wish to draw/paint/sculpt into the display, at some point you're going to have to create 3D data. Unlike "3D" cinema (really stereoscopic cinema), this display genuinely needs that extra dimension of depth information. Placing 2D video or images into the display will work, but since you can't limit the audience's movement or point of view, they will see them for what they are: 2D frames hanging in space. So you might want to extrude or rotate a 2D source in 3D, or capture your data with a Kinect, and of course model in 3D from primitives such as text, dots, lines & spheres. To glue this all together we use code, and to make it as easy as possible I've chosen Processing as the environment. (Maybe Cycling74's Max or PureData one day, if you ask for it, or better yet if you write it 😉 ).
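To make the idea of "true 3D data" concrete, here is a minimal sketch (in plain Java, for portability; the same loop translates directly to Processing) of modelling a primitive — a sphere — as a voxel volume. The 16×16×16 resolution is a made-up placeholder, not the display's actual resolution.

```java
// Minimal sketch: voxelizing a sphere primitive into a 3D boolean volume.
// N = 16 is a hypothetical resolution, not the real display's dimensions.
public class SphereVolume {
    static final int N = 16; // placeholder voxels per axis

    public static boolean[][][] sphere(double radius) {
        boolean[][][] v = new boolean[N][N][N];
        double c = (N - 1) / 2.0; // geometric center of the volume
        for (int x = 0; x < N; x++)
            for (int y = 0; y < N; y++)
                for (int z = 0; z < N; z++) {
                    double dx = x - c, dy = y - c, dz = z - c;
                    // a voxel is lit if it falls inside the sphere
                    v[x][y][z] = dx * dx + dy * dy + dz * dz <= radius * radius;
                }
        return v;
    }

    public static void main(String[] args) {
        boolean[][][] v = sphere(6.0);
        int lit = 0;
        for (boolean[][] plane : v)
            for (boolean[] row : plane)
                for (boolean b : row)
                    if (b) lit++;
        System.out.println("lit voxels: " + lit);
    }
}
```

The same pattern works for any primitive: loop over every voxel, test it against the shape, and you end up with a full volume rather than a flat frame.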
It is also possible for you to send volumes directly to the display via UDP; one volume fits in one UDP packet.
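As a rough sketch of that path, the following Java snippet sends a byte buffer as a single UDP packet. The display's address, port, and the payload layout are all placeholders — the actual packet format is not documented here yet (see the TODO below), so only the transport mechanics are meant literally.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Minimal sketch: one volume -> one UDP packet.
// Host, port, and payload contents are hypothetical placeholders;
// the real volume byte layout is not specified in this document.
public class VolumeSender {
    public static void send(byte[] volume, String host, int port) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                volume, volume.length, InetAddress.getByName(host), port);
            socket.send(packet); // the whole volume goes out in this one datagram
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] volume = new byte[512];      // placeholder payload; real layout TBD
        send(volume, "127.0.0.1", 9999);    // hypothetical display address and port
        System.out.println("sent " + volume.length + " bytes");
    }
}
```

Note that UDP gives no delivery guarantee, which suits this design: if a packet is lost the display simply keeps showing its last volume.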
Lastly (to my shame), the Processing version that was used for testing is Processing 1.5.1, so use that and everything will work, or bug me and I'll update it.
[TODO: add formatting info on UDP packets]