This is the math version of the previous post (but don’t worry, you don’t have to understand the difficult part). It tells you which picture you touched (i.e. the closest one).

It is much faster than my approach of drawing objects and looking up pixels, but you have to know the shape and orientation of the object you are touching, and have code for that particular shape. This code is only for flat 2D images.

You might want to see it working first. Run this code, then touch the red and yellow pictures. The program will print out the touch positions – so for example, if you touch the top right of one of the pictures, the result should be something like (400,600), because they are 400 pixels wide and 600 pixels high.

There are two pieces of magic in the code, which are used in the function PictureIsTouchedBy.

The screentoplane function (which I’m not going to explain, because I don’t understand it) finds the 2D location of your touch on a picture drawn in 3D (i.e. with perspective), and returns it as a value from -0.5 to +0.5, where 0 is the centre of the picture. We adjust that range to get pixel values.
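The adjustment is just a shift and a scale. Here is a small plain-Lua sketch of the idea (the picture size of 400 x 600 matches the demo; `toPixels` is my name for it, not something from the actual code):

```lua
-- Convert screentoplane's normalised result (-0.5 to 0.5, with 0 at
-- the picture centre) into pixel coordinates on the picture.
local w, h = 400, 600   -- picture size, as in the demo

local function toPixels(nx, ny)
    -- shift the range from -0.5..0.5 to 0..1, then scale by the size
    return (nx + 0.5) * w, (ny + 0.5) * h
end

-- touching the top right corner gives the full width and height
local px, py = toPixels(0.5, 0.5)  -- 400, 600
```

So the centre of the picture comes out as (200, 300), and the top right as (400, 600), which is what the demo prints.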

The second half of PictureIsTouchedBy takes the touch location that we just calculated, and applies the translation and rotation of the picture, to get its position in 3D. The z value of this point tells us how far away it is, and we can use that to figure out which touched object is closer.
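To make the depth idea concrete, here is a simplified plain-Lua sketch. The actual code applies the picture's full 3D transform; this only handles a rotation about the y axis followed by a translation, which is enough to show where the z value comes from:

```lua
-- The touch point starts on the picture's plane, so its z is 0.
-- Rotating about the y axis gives z' = -x * sin(angle), and
-- translating the picture then shifts that by the picture's own z.
local function touchDepth(x, angleDegrees, pictureZ)
    local z = -x * math.sin(math.rad(angleDegrees))
    return z + pictureZ
end
```

Comparing this value across all the pictures that were touched tells us which one is nearest the camera.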

### How to include this in your own project

It is pretty simple, once you have added the Touch class code.

You only need three additional lines of code.

First, you need to tell the Touch class about each picture you want to touch, and give it some basic information – position and size, and some kind of id reference, such as a name or number (because when Touch finds the object you touched, it needs to give you back an id for it). This can be done anywhere, but setup is a good place.
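As a sketch of what that looks like (the function name `addImage` and the argument order here are placeholders, not the Touch class's actual API, so check the demo code for the real names):

```lua
function setup()
    img = readImage("Planet Cute:Character Princess Girl")
    touch = Touch()  -- placeholder: create the Touch object however the demo does
    -- register the picture: position, size, and an id Touch can hand back later
    touch:addImage(vec3(0, 0, -5), img.width, img.height, "princess")
end
```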

You also need to tell Touch about the current rotation, which is just one line of code. The demo code does this in the draw function, but it doesn’t need to be done in each frame unless the rotation is changing constantly. If the rotation is fixed, you could just do this in setup.
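Something along these lines (again, `setRotation` is a placeholder name for whatever the Touch class actually calls it):

```lua
function draw()
    -- keep Touch informed of the current rotation
    -- (move this to setup if the rotation never changes)
    touch:setRotation(angle)
end
```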

Finally, the touched function gets Touch to figure out which object was touched, and to return its id and the touch position.
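Codea calls touched for us whenever the screen is touched, so the last line is a lookup inside that callback. Roughly like this (the method name is a placeholder for whatever the Touch class provides):

```lua
function touched(t)
    -- ask Touch which registered picture (if any) is under this touch
    local id, pos = touch:touched(t)
    if id then
        print(id, pos.x, pos.y)  -- the id we registered, plus the pixel position
    end
end
```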

### Other objects

Thanks to LoopSpace, I have code for cubes and spheres which was built for a demonstration project, but I haven’t adapted it yet.

### Math vs kludge

So is this better than my previous post, where I drew an offscreen picture and looked up the pixel?

Clearly.

But I think that if you need to do this with more complex shapes, beyond pictures, and beyond even cubes and spheres, the picture kludge can be adapted to cope, whereas the math approach may just get too hard.