Touch Detection in Cocos2d iPhone

Ivan Moen left a comment asking for an explanation of detecting which sprites have been touched in Cocos2d for the iPhone, and it was something I had intended to write about (eventually), so it seemed like a ripe time to address it.

Before we start, I'd like to mention that Luke Hatcher created much of the code that these snippets are inspired by.


Broadly, there are three different approaches to adding touch detection to pixels in Cocos2d iPhone. Which one you should choose depends on the needs of your application. While considering this topic, it's important to keep in mind that you're not just detecting touches, you're integrating a user interface management system into your application.

The three approaches are:

  1. Dumb input management. This isn't dumb in the sense of stupid, but instead is dumb in the sense of a dumb missile that keeps flying straight until it hits something. A more precise description would be "ignorant of global state."

    While usually not usable as-is in non-demo applications, this approach underpins the other two approaches, and is thus important.

    Simply subclass CocosNode and implement any or all of these three methods (you don't have to declare them in your interface; they're already defined by a superclass).

    -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        UITouch *touch = [touches anyObject];
        CGPoint location = [touch locationInView:[touch view]];
        [self doWhateverYouWantToDo];
        [self doItWithATouch:touch];
    }
    -(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        UITouch *touch = [touches anyObject];
        CGPoint location = [touch locationInView:[touch view]];
        [self doWhateverYouWantToDo];
        [self doItWithATouch:touch];
    }
    -(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
        UITouch *touch = [touches anyObject];
        CGPoint location = [touch locationInView:[touch view]];
        [self doWhateverYouWantToDo];
        [self doItWithATouch:touch];
    }
    

    The distinction between the three methods: touchesBegan fires when the user first presses their finger to the screen, touchesMoved fires as the finger moves across the screen (before it is lifted), and touchesEnded fires when the finger is lifted.

    Using these three methods, you can easily fire actions whenever a Sprite (or any other CocosNode subclass) is touched. For a simple application that may be sufficient.
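
    To make the dumb approach concrete, here's a sketch of a node that only reacts when a touch actually lands inside it. The rect and doWhateverYouWantToDo methods are the same hypothetical ones used above, and Director's convertCoordinate: is an assumption about your Cocos2d version (later releases renamed it); adjust the names to match your setup:

    -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        UITouch *touch = [touches anyObject];
        CGPoint location = [touch locationInView:[touch view]];
        // UIKit's origin is top-left while Cocos2d's is bottom-left;
        // Director's convertCoordinate: flips the touch into
        // Cocos2d's coordinate space
        location = [[Director sharedDirector] convertCoordinate:location];
        if (CGRectContainsPoint([self rect], location)) {
            // only react when the touch actually landed on this node
            [self doWhateverYouWantToDo];
        }
    }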

  2. Top-down global input management. The next approach allows a very high level of control over handling input, but is prone to creating a monolithic method that handles all input management for your application.

    First, it requires that you have references to all Sprite objects that you are interested in detecting input for. You can do that by managing the references manually, or can set up the subclass to track all of its instances.

    You can track instance references fairly easily, modeling after this code:

    @interface MySprite : Sprite {}
    +(NSMutableArray *)allMySprites;
    +(void)track: (MySprite *)aSprite;
    +(void)untrack: (MySprite *)aSprite;
    @end
    

    And the implementation:

    @implementation MySprite
    
    static NSMutableArray * allMySprites = nil;
    
    +(NSMutableArray *)allMySprites {
        // synchronize on the class: locking on allMySprites itself
        // would be a no-op while it is still nil
        @synchronized(self) {
            if (allMySprites == nil)
                allMySprites = [[NSMutableArray alloc] init];
            return allMySprites;
        }
    }
    
    +(void)track: (MySprite *)aSprite {
        @synchronized(self) {
            [[MySprite allMySprites] addObject:aSprite];
        }
    }
    
    +(void)untrack: (MySprite *)aSprite {
        @synchronized(self) {
            [[MySprite allMySprites] removeObject:aSprite];
        }
    }
    
    -(id)init {
        self = [super init];
        if (self) [MySprite track:self];
        return self;
    }
    
    -(void)dealloc {
        [MySprite untrack:self];
        [super dealloc];
    }
    

    So, maybe this is a bit of a pain to set up, but it can be pretty useful in other situations as well (like discovering which instances of MySprite are within a certain distance of a point).
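
    As an illustration of that side benefit, a hypothetical class method could use the tracked instances to find every MySprite within a given distance of a point. Only the standard CocosNode position property is assumed here; the plain CGPoint math avoids depending on any particular Cocos2d point-helper functions:

    // Hypothetical helper: return every tracked MySprite whose
    // position lies within `distance` of `aPoint`.
    +(NSArray *)spritesNear: (CGPoint)aPoint within: (float)distance {
        NSMutableArray *nearby = [NSMutableArray array];
        for (MySprite *sprite in [MySprite allMySprites]) {
            float dx = sprite.position.x - aPoint.x;
            float dy = sprite.position.y - aPoint.y;
            if (sqrtf(dx * dx + dy * dy) <= distance)
                [nearby addObject:sprite];
        }
        return nearby;
    }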

    Then, you implement the three methods from above in your Scene object, and use them to handle and route touches.

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        UITouch *touch = [touches anyObject];
        CGPoint location = [touch locationInView: [touch view]];
    
        NSArray * mySprites = [MySprite allMySprites];
        NSUInteger i, count = [mySprites count];
        for (i = 0; i < count; i++) {
            MySprite * obj = (MySprite *)[mySprites objectAtIndex:i];
            if (CGRectContainsPoint([obj rect], location)) {
                // code here is only executed if obj has been touched
            }
        }
    }
    

    The advantage of this approach is that you have an extremely granular level of control over input management. If you only wanted to perform actions on touches that hit two instances of MySprite at once, you could do that. Or you could only perform actions when a certain global condition holds, and so on. This approach lets you make decisions at the point in your application that has the most information.

    But it can get unwieldy depending on the type of logic you want to implement for your user input management. To help control that, I usually roll a simple system for user input modes.

    The implementation depends on your specific app, but you'd start by subclassing NSObject to create a UIMode class.

    @interface UIMode : NSObject {}
    -(id)init;
    -(void)setupWithObject: (id)anObject;
    -(void)tearDown: (UIMode *)nextMode;
    -(void)tick: (ccTime)dt;
    -(BOOL)touchBeganAt: (CGPoint)aPoint;
    -(BOOL)touchMovedAt: (CGPoint)aPoint;
    -(BOOL)touchEndedAt: (CGPoint)aPoint;
    @end
    

    The implementation of all those methods in UIMode should be inert stubs that can then be overridden in subclasses as appropriate. My system is to have the touchBeganAt:/touchMovedAt:/touchEndedAt: methods return YES if they decide to handle a specific touch, and NO otherwise. This lets user interface modes implement custom logic, or let a touch pass through to your default touch handling.
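
    For reference, the inert base implementation might be nothing more than empty bodies that decline every touch (this is a sketch of the imagined class, not canonical code):

    @implementation UIMode
    
    -(id)init {
        return [super init];
    }
    
    // called when this mode becomes active; override as needed
    -(void)setupWithObject: (id)anObject {}
    
    // called before switching away; nextMode lets subclasses
    // coordinate the transition
    -(void)tearDown: (UIMode *)nextMode {}
    
    -(void)tick: (ccTime)dt {}
    
    // return NO so the touch falls through to default handling
    -(BOOL)touchBeganAt: (CGPoint)aPoint { return NO; }
    -(BOOL)touchMovedAt: (CGPoint)aPoint { return NO; }
    -(BOOL)touchEndedAt: (CGPoint)aPoint { return NO; }
    
    @end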

    Next, update the interface for your subclass of Scene like this:

    @interface MyScene : Scene {
        UIMode * currentMode;
    }
    -(UIMode *)currentMode;
    -(void)setCurrentMode: (UIMode *)aMode;
    @end
    

    Then, in your implementation you'd add some code along these lines:

    -(UIMode *)currentMode {
        return currentMode;
    }
    
    -(void)setCurrentMode: (UIMode *)aMode {
        [aMode retain];   // retain first, in case aMode == currentMode
        if (currentMode != nil) {
            // this tearDown method is part of the imagined
            //  UIMode class, and lets a UIMode disable itself
            //  with knowledge of the subsequent UIMode for proper
            //  transitions between modes
            [currentMode tearDown:aMode];
            [currentMode release];
        }
        currentMode = aMode;
    }
    

    Finally, you'd need to update the touchesBegan:withEvent: method to ask the current UIMode whether it wants to handle each specific touch.

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        UITouch *touch = [touches anyObject];
        CGPoint location = [touch locationInView: [touch view]];
    
        // forward the specified location to the UIMode, and abort
        // standard click handling if the UIMode decides to handle
        // the click
        UIMode * uim = [self currentMode];
        if (uim != nil && [uim touchBeganAt:location]==YES) return;
    
        NSArray * mySprites = [MySprite allMySprites];
        NSUInteger i, count = [mySprites count];
        for (i = 0; i < count; i++) {
            MySprite * obj = (MySprite *)[mySprites objectAtIndex:i];
            if (CGRectContainsPoint([obj rect], location)) {
                // code here is only executed if obj has been touched
            }
        }
    }
    

    This is the approach I prefer, because it is fairly simple, and allows an extremely high amount of flexibility. I realize that I dumped a ton of code here, and apologize. Hopefully you can still find the thread of thought intertwined into the jumble.

  3. Bottom-up global input management. I won't provide much code for this approach, as it isn't one that I use, but it's a compromise between the first and second approaches.

    For each instance of some MySprite class, override the touchesBegan:withEvent: method (and the moved and ended variants as well, if you want them), and then notify a global object that the touch is occurring.

    It would look something like this:

    -(void)touchesBegan: (NSSet *)touches withEvent: (UIEvent *)event {
        CurrentScene * s = [self currentScene];   // Not a real method.
        [s mySpriteTouched:self];
    }
    

    Of course, this means you'd need to pass a reference to the current scene to each instance of MySprite, or you can use a singleton to simplify.

    static CurrentScene *sharedScene = nil;
    
    +(CurrentScene *)sharedScene {
        @synchronized(self) {
            if (sharedScene == nil)
                [[self alloc] init];  // assignment happens in allocWithZone:
        }
        return sharedScene;
    }
    
    +(void)releaseSharedScene {
        @synchronized(self) {
            [sharedScene release];
            sharedScene = nil;
        }
    }
    
    +(id)allocWithZone: (NSZone *)zone {
        @synchronized(self) {
            if (sharedScene == nil) {
                sharedScene = [super allocWithZone:zone];
                return sharedScene;
            }
        }
        return nil;
    }
    
    -(id)retain {
        return self;
    }
    
    -(NSUInteger)retainCount {
        return UINT_MAX;  // denotes an object that cannot be released
    }
    
    -(void)release {}
    
    -(id)autorelease {
        return self;
    }
    

    The code is a bit of a clusterfuck, in my humble opinion, but it is still quite convenient, as it allows us to convert the touchesBegan:withEvent: method to this:

    -(void)touchesBegan: (NSSet *)touches withEvent: (UIEvent *)event {
        [[CurrentScene sharedScene] mySpriteTouched:self];
    }
    

    And we don't have to explicitly pass a reference to the CurrentScene instance to each instance of MySprite. Objective-C has a lot of these painful pieces of code that are rather annoying to implement, but that can save a lot of effort once they are in place. My advice is to use them, early and infrequently.

Well, there you have it, three approaches to handling touch detection for Cocos2d iPhone, presented in a confusing and at most halfway organized article.

Let me know if you have any questions, but I hope this is enough to get you moving in the right direction.

All Rights Reserved, Will Larson 2007 - 2014.