
70a. Touching an object on a 3D screen

May 30, 2013

I’m retro-adding this post to my 3D series. It’s about how you can identify which object was touched on a 3D screen. This is really hard unless you are a math wizard, but in this post, I’ll show you a workaround. (If I ever understand the math, I will post that option as well).

The objective

You have a 3D scene, with lots of objects all over the screen, some in front of others. You want to be able to touch the screen to select an object (maybe to kill it, in an action game).

So I’ll set this objective.

I will draw dozens of numbered circles on the screen at random x,y,z positions, and I want Codea to correctly report the number of each circle I touch. If I touch anything that isn’t a numbered circle, Codea should ignore it. The touches should be reasonably accurate.
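In Codea, the setup might look something like the sketch below. All the names are mine, and `circleTexture` is a hypothetical helper that draws a numbered circle into an image:

```lua
-- Sketch of the scene setup; names are illustrative, circleTexture is hypothetical.
function setup()
    circles = {}
    for i = 1, 30 do
        local m = mesh()
        m:addRect(0, 0, 10, 10)         -- a quad to carry the circle image
        m.texture = circleTexture(i)    -- hypothetical: an image of circle number i
        table.insert(circles, {
            id = i,                     -- unique id between 1 and 255
            pos = vec3(math.random(-50, 50),
                       math.random(-50, 50),
                       math.random(-150, -30)),
            m = m
        })
    end
end
```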

This video shows what I want to achieve: http://www.youtube.com/watch?v=rv_mtrMYyHc

The problem

Actually, there are quite a few problems. In defining them below, I will refer to how a mathematical solution deals with them (to the best of my ability).

Not enough information
Imagine you are looking through a window at your 3D scene. You touch the window to show which object you want to select. Your touch has an x (width) and y (height) value, but no z (depth) value.

So this is not enough information to figure out what you touched, because we need an x,y and z value in a 3D world. We need to draw a line from your eye to the touch point, and through the window into the 3D scene, and see which object that line hits first. That object is what you were pointing at.

Thus our first problem is not enough information, and we have to figure out the z value ourselves.

Object shape and size
The next problem is the shape and size of our objects. They may be rotated about any of the x, y or z axes, and they may be near or far. So even if they started off as simple rectangles, they may look nothing like that when they are drawn. So this problem is about figuring out what your object looks like to the camera, ie where the pixels of that object end up on the 2D window we are looking through.

You see, there is no point drawing a line from our eye to our touch point, and from there into the 3D scene, if we don’t know how our objects will appear. If an object is all squashed because it is side on, then we have to know that when testing if our line of sight hits that object.

It gets much harder if your object isn’t just a 2D shape like a circle or rectangle, but a 3D solid like a box or a sphere.

And this is where the math gets complex (to non-mathematicians, anyway), with lots of matrices and so on. It may require some approximations, eg defining a touchable rectangle around an object that isn’t exact, but is close enough.

Transparency
There is potentially a third problem. Your 3D meshes may have textures that contain transparent pixels. For example, most of the images in the libraries provided with Codea consist of an irregularly shaped picture in the middle of a rectangular image. The pixels around the central picture are transparent, ie the alpha value is zero.

Let’s suppose you have two of these images, one overlapping the other, and you press on the further image, through the transparent pixels of the closer image. You want Codea to correctly identify the touch as being on the further image.

However, if you use math to identify the object, it will come back with the closer image, because it doesn’t realise that transparent pixels should be ignored. To get around this, if you are using math, you need to minimise the number of transparent pixels, perhaps by creating extra mesh vertices to hug the image you want more closely than a simple rectangle would, or by accepting that touch will only be approximately correct.

A non-mathematical solution

I am working on mathematically based code that you and I can use without having to understand higher math, and will post it if I succeed. In the meantime, here is an alternative that is reasonably quick and practical.

This is how it works.

When you touch the screen, Codea stores the x,y value. When it next draws, it doesn’t draw to the screen, but to an image in memory (using setContext).
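In outline, that might look like this (a sketch only; `pickImage`, `touchPos` and `drawScene` are my names, not from the code linked further down):

```lua
-- Sketch of the touch-and-redraw logic; names are illustrative.
function touched(touch)
    if touch.state == ENDED then
        touchPos = vec2(touch.x, touch.y)   -- remember where we touched
    end
end

function draw()
    if touchPos then
        pickImage = image(WIDTH, HEIGHT)    -- an image in memory
        setContext(pickImage)               -- redirect drawing to the image
        drawScene(true)                     -- true = use id colours for touchables
        setContext()                        -- restore normal drawing
        -- the pixel under touchPos can now be inspected
        touchPos = nil
    end
    drawScene(false)                        -- the normal, visible frame
end
```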

And when it does this, it doesn’t draw the normal images for our numbered circles (ie for anything we want to be able to touch). Each of these circles has its own mesh, with a unique id number between 1 and 255. What Codea does is draw every pixel of those circles with the colour (i,0,0,255), where i is the id number. So circle #3 is drawn entirely in the colour (3,0,0,255), and so on. Importantly, it only does this for pixels which are not transparent.
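A fragment shader is the natural tool for this per-pixel colouring. Here is a simplified sketch of the idea (my own version, not the exact shader from the gist). Note that in GLSL, colour channels run from 0 to 1, so object i is drawn with red = i/255:

```lua
-- A simplified id-colour shader (my own sketch, not the gist's exact code).
idShader = shader(
[[
uniform mat4 modelViewProjection;
attribute vec4 position;
attribute vec2 texCoord;
varying highp vec2 vTexCoord;
void main() {
    vTexCoord = texCoord;
    gl_Position = modelViewProjection * position;
}
]],
[[
uniform lowp sampler2D texture;
uniform lowp vec4 idColour;     // set to (i/255, 0, 0, 1) for object i
varying highp vec2 vTexCoord;
void main() {
    lowp vec4 c = texture2D(texture, vTexCoord);
    if (c.a == 0.0) discard;    // transparent pixels are not drawn at all
    gl_FragColor = idColour;    // every solid pixel gets the id colour
}
]])
```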

Anything without an id number is drawn normally.

Also, before we do any drawing, we use the clip command to restrict drawing to a couple of pixels around the touch point, so Codea doesn’t waste time drawing the whole image.
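Something like this, again with illustrative names:

```lua
-- Clip drawing to a small square around the touch point (sketch).
local r = 2                                  -- a couple of pixels each side
clip(touchPos.x - r, touchPos.y - r, 2*r + 1, 2*r + 1)
drawScene(true)                              -- only pixels inside the clip get drawn
clip()                                       -- remove the clipping region
```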

When the drawing is done, Codea looks at the x,y position that was touched. If it was a numbered circle, the pixel value will be (i,0,0,255), where i is the object number. So Codea now knows which object was touched.

If you had touched a transparent pixel around a numbered circle, a blank part of the screen, or an object that isn’t numbered, Codea will still check the pixel value, but will ignore it because it isn’t of the form (i,0,0,255).
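The check itself is a single pixel read, which could be sketched like this (Codea’s image:get expects integer coordinates, hence the rounding):

```lua
-- Read the pixel under the touch and decode the object id (sketch).
local r, g, b, a = pickImage:get(math.floor(touchPos.x), math.floor(touchPos.y))
if r > 0 and g == 0 and b == 0 then
    selectedId = r      -- r holds the id number of the touched object
else
    selectedId = nil    -- background, transparency, or an unnumbered object
end
```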

Having done all this, Codea goes back to normal drawing immediately afterwards, and it all happens so fast that you shouldn’t notice anything.

An advantage of this approach is that it is pixel perfect, that is, it will give you the precise object you touched, even if you touched it through the transparent pixels of another object. It doesn’t care what mathematics was used to draw the screen; all it cares about is what is actually on the screen. So if one pixel is poking out from behind another object, you can in theory touch it, if you have sharp fingers! Of course, in reality, touch doesn’t usually need to be that accurate.

However, the main advantage is that it avoids the need for any heavy mathematics, or any approximations to object shape or errors due to transparent pixels. Basically, it just works.

The downside is that it is a brute force way of getting the right answer. Maths should be quicker and more elegant, if less precise.

Still, it is good to have choices. So you may want to make a note of this, if you ever plan to work in 3D.

The code

The code is here: https://gist.github.com/dermotbalson/5907888

Please note, it does use a shader to do the fancy pixel drawing, but you can treat that as a black box if you want. Just copy the code in the ThreeDT tab to your project, as per the instructions at the top of the code. Then you only need 3 lines of extra code in your project to start identifying touches.

To tell Codea that you want an object to be touchable, give it an id number and pass this number through with the ThreeDT.drawMesh function (one of the three extra lines).
