
176. 3D Dungeon – Weapons and shooting

October 24, 2014

In this post, I learn to use a revolver to defend myself against the evil monsters in the dungeon. At least, I figure out most of how to do it.

This post covers some very interesting aspects of 3D.

Picking up and holding a revolver

If you play any FPS, you pick up weapons, and the barrel or blade appears in front of you, as it would in real life.

This is not as easy to program as it sounds.

I started out with a revolver, for which I have a nice model. When I “collide” with it, I want the barrel to appear in front of me, pointing forwards, as though I’m holding it.

This is not as easy as it sounds (working with 3D continually surprises me, because it’s the simple things that are sometimes so hard).

First, I need to position the weapon so that just the right amount of barrel sticks out. This is just a matter of fiddling, seeing how it looks, and fiddling again. It was a lot of work, and I had just got it right when SkyTheCoder, who has begun collaborating with me on the project, looked over my shoulder (actually he is on the other side of the world, but that is what it felt like) and said “That image never changes. Why not just fake it with a 2D image?”

Doh. So obvious when you see it.

We looked at different ways of doing it. One is to create an image during the game, by positioning the weapon and drawing it to an image in memory. But this is still fiddly, and I happened to have a testing project with parameter controls which I could use to position 3D objects, then take screencaps and crop them to the smallest size. Then I could simply include those images in the dungeon project.

Sky also had another idea, which is to include recoil when you fire the revolver. Again, we looked at maybe having multiple images in different positions, but simply rotating the image a bit to the right and then back, using tweens, seems to work pretty well.
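Tween-based recoil boils down to nudging the drawn image's rotation angle out and then back over a fraction of a second. The actual project uses Codea's tween library in Lua; here is a rough, language-agnostic sketch of the idea in Python, with made-up duration and angle values:

```python
def recoil_angle(t, duration=0.2, max_angle=15.0):
    """Recoil rotation (degrees) at time t seconds after firing.

    The gun image rotates out to max_angle over the first half of the
    tween, then eases back to rest over the second half. Returns 0
    once the tween has finished, so the weapon sits still between shots.
    """
    if t < 0 or t >= duration:
        return 0.0
    half = duration / 2
    if t < half:
        return max_angle * (t / half)               # kick out to the right
    return max_angle * (1 - (t - half) / half)      # settle back to rest

# Each frame, draw the weapon image rotated by
# recoil_angle(current_time - time_of_last_shot).
```

A real tween would typically use an easing curve rather than straight lines, but even linear interpolation reads convincingly at these speeds.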

So that’s what the video above shows. It’s a bit rough, but I think we’ve solved the big problems and just need to tidy up.

Aiming and shooting

I have (at least) a couple of choices when it comes to aiming. I can

  • touch the screen where I want to shoot, or
  • use a joystick to aim, plus a firing button

The first has the advantage of aiming and firing in a single touch, whereas the second requires fiddling the joystick to position the aim, then pressing a button with the other hand.

I started by programming an aiming joystick instead. Clearly, I didn’t think that through very well.

So I replaced it with a touch on the screen, i.e. when you touch the screen (anywhere other than the joystick that moves you around), the program treats that as a shot and checks whether you hit anything.
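The touch routing itself is simple enough to sketch: touches inside the movement joystick's circle steer the player, and anything else counts as a shot. The joystick position and radius below are invented for illustration; the real project's layout will differ:

```python
def classify_touch(x, y, joystick_center=(100, 100), joystick_radius=80):
    """Decide what a screen touch means.

    Touches inside the movement joystick's circle steer the player;
    any other touch is treated as a shot at (x, y). The joystick
    centre and radius here are made-up values for illustration.
    """
    jx, jy = joystick_center
    inside = (x - jx) ** 2 + (y - jy) ** 2 <= joystick_radius ** 2
    return "move" if inside else "shoot"
```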

And that is the next problem.

Detecting what was touched

This is a very difficult problem. If you click on the screen in any desktop app, it knows what object you clicked on.

But if you touch the iPad screen, Codea has absolutely no idea what object you touched. That’s because there are no objects on the screen. Codea has passed all the stuff you wanted to draw to OpenGL, which has drawn it all, except for what can’t be seen. All that comes back from OpenGL is a mess of coloured pixels, with no x, y or z locations attached. So if I touch a pixel, how do I find out what I am shooting at?

Fortunately, this is something I have explored and written about before. There are at least two solutions.

  1. Use mathematics to determine which object was touched
  2. Cheat

Mathematics is way too difficult for the strange shapes of our objects, so how do we cheat?

When the screen is touched, we can “steal” the next frame, drawing it to an image in memory instead of to the screen. And instead of drawing the normal colours and textures for each object, we can use a colour which contains a code for each object, e.g. the red colour value is 1 for the first object, 2 for the second, and so on.

When the image is complete, we look at the pixel that was touched previously, and its code will tell us which object it is.

And that is what I’ve programmed. This is how it works.

  1. When you touch the screen, the program stores the touch point.
  2. The next time draw runs, it checks if there is a touch point. If so, it doesn’t draw to the screen at all. Instead, it creates an image in memory, and draws all the objects which are within the light range and which have a “touchable” property (which is only assigned to enemy objects).
  3. If it finds an object to draw, it increments a counter, and stores that object in a table. If an object has several meshes, it gives each of them a separate number, and stores the part name (e.g. “head”) as well.
  4. It passes the counter number through to a special shader which simply draws the whole mesh in the flat colour (r, 0, 0, 255), where r is the counter.
  5. The program looks up the pixel that was touched, and extracts the red value. This tells it which object and body part was touched.
  6. The program goes back to drawing normally.
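The colour-coding steps above can be sketched outside Codea. In the real project the flat colour comes from a shader and the pixel is read back from an in-memory image; the Python sketch below fakes the off-screen render with a dictionary, and ignores depth ordering (in a real render pass, the depth buffer sorts overlapping meshes):

```python
def pick_object(touch, touchables):
    """Colour-code picking, roughly as in steps 1-6 above.

    touch:      the (x, y) pixel that was touched.
    touchables: list of (object_name, part_name, pixel_set) tuples,
                where pixel_set stands in for the pixels that mesh
                would cover in the off-screen render.

    Each mesh is "drawn" with a flat colour whose red channel is its
    1-based counter value; later draws overwrite earlier ones, as a
    naive render pass would. Returns (object_name, part_name) for a
    hit, or None for a miss.
    """
    framebuffer = {}    # (x, y) -> red value of the flat colour
    registry = {}       # red value -> (object_name, part_name)
    for counter, (obj, part, pixels) in enumerate(touchables, start=1):
        registry[counter] = (obj, part)
        for xy in pixels:
            framebuffer[xy] = counter       # flat colour (counter, 0, 0, 255)
    red = framebuffer.get(touch, 0)         # read back the touched pixel
    return registry.get(red)                # red 0 = background = miss
```

For example, `pick_object((5, 5), [("bug", "head", {(5, 5), (5, 6)}), ("bug", "body", {(9, 9)})])` identifies a hit on the bug's head. One pleasant property of the scheme is that occlusion is free: whatever colour wins at that pixel is, by definition, what the player could see and shoot.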

Detecting how fatally an object was touched

This is difficult. How do I know if I touched that bug’s head or just a feeler? Was it a fatal shot?

I’ve hinted at it just above, by saying some objects have been built with separate meshes for different parts. All I need to do is give them standard code names like “head” and “body”, and I can tell what I hit.
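Once a part name comes back from the picking pass, deciding how serious the hit was can be a simple lookup. The part names and damage values below are entirely hypothetical, just to show the shape of it:

```python
# Hypothetical damage table keyed by the standard part names
# assigned to each mesh; anything unrecognised counts as a graze.
DAMAGE = {"head": 100, "body": 40, "feeler": 5}

def damage_for(part):
    """Damage dealt by a hit on the named body part."""
    return DAMAGE.get(part, 0)
```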

However, some objects, like my spider, are just a single object. And identifying spider parts is pretty important because the head and body are only a small part of the total image. It’s mostly legs!

So Sky and I are figuring out how to deal with that. The most obvious option is to take the model into Blender and break it into different meshes.
