Sunday, March 24, 2013

iOS face detection

While playing around with drawing text over images, the results often looked bad because the text obscured faces, so I had the idea of detecting the faces and then avoiding them.

You can see the end result on the right. The software finds the faces in the image and returns a box containing each face's features. I draw a green box that encloses all of the faces, then find the largest area between the edge of that enclosing box and the edge of the image and use it to place the text, as sketched below.
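Here's a minimal sketch of that placement step, assuming `features` is the array returned by the detector code further down and `imageRect` is the full image bounds (those names, and the strip-picking logic, are illustrative rather than my exact code):

// Union all of the face boxes into one enclosing rect.
CGRect faceBox = CGRectNull;
for (CIFaceFeature *face in features) {
    faceBox = CGRectUnion(faceBox, face.bounds);  // grow the green enclosing box
}
if (CGRectIsNull(faceBox)) {
    // No faces found - the whole image is available for the text.
    faceBox = CGRectMake(CGRectGetMinX(imageRect), CGRectGetMinY(imageRect), 0, 0);
}

// Four candidate strips between the enclosing box and the image edges.
CGRect strips[4] = {
    CGRectMake(CGRectGetMinX(imageRect), CGRectGetMinY(imageRect),
               CGRectGetMinX(faceBox) - CGRectGetMinX(imageRect),
               CGRectGetHeight(imageRect)),                         // left of the box
    CGRectMake(CGRectGetMaxX(faceBox), CGRectGetMinY(imageRect),
               CGRectGetMaxX(imageRect) - CGRectGetMaxX(faceBox),
               CGRectGetHeight(imageRect)),                         // right of the box
    CGRectMake(CGRectGetMinX(imageRect), CGRectGetMinY(imageRect),
               CGRectGetWidth(imageRect),
               CGRectGetMinY(faceBox) - CGRectGetMinY(imageRect)),  // one side vertically
    CGRectMake(CGRectGetMinX(imageRect), CGRectGetMaxY(faceBox),
               CGRectGetWidth(imageRect),
               CGRectGetMaxY(imageRect) - CGRectGetMaxY(faceBox))   // the other side
};

// The text goes in whichever strip has the largest area.
CGRect textRect = strips[0];
for (int i = 1; i < 4; i++) {
    if (strips[i].size.width * strips[i].size.height >
        textRect.size.width * textRect.size.height) {
        textRect = strips[i];
    }
}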

The code on iOS is incredibly simple (although, this being Cocoa, the identifiers get rather long).


// Needs Core Image (#import <CoreImage/CoreImage.h>).
// Accuracy is a creation option: CIDetectorAccuracyHigh is slower but
// more thorough than CIDetectorAccuracyLow.
NSDictionary *detectorOptions = @{CIDetectorAccuracy: CIDetectorAccuracyHigh};

CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];

// image is assumed to be a UIImage in the default (up) orientation.
CIImage *ciImage = [[CIImage alloc] initWithImage:image];
NSDictionary *imageOptions = @{CIDetectorImageOrientation: @(1)};

NSArray *features = [faceDetector featuresInImage:ciImage options:imageOptions];

Each CIFaceFeature in the resulting array has a .bounds CGRect that is the location of the face.
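One wrinkle: Core Image puts the origin of its coordinate system at the bottom-left of the image, while UIKit draws from the top-left, so the face rects need their y axis flipped before you can draw over a UIImageView. Something like this, using the same ciImage and features as above:

CGFloat imageHeight = ciImage.extent.size.height;

for (CIFaceFeature *face in features) {
    CGRect faceRect = face.bounds;
    // Flip from Core Image (bottom-left origin) to UIKit (top-left origin).
    faceRect.origin.y = imageHeight - CGRectGetMaxY(faceRect);
    NSLog(@"Face in view coordinates: %@", NSStringFromCGRect(faceRect));
}

Each CIFaceFeature also carries leftEyePosition, rightEyePosition and mouthPosition points, with hasLeftEyePosition and friends to check before using them.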

The face detector needs a full face: it doesn't recognise heads turned to the side so that one eye is hidden, but interestingly it does detect a few cartoon characters.


It's very fast and can even be used passably on video - I guess the code is already in there for the face detection in the Camera app. CIDetector has only one concrete subclass, but I wonder how hard it would be to implement detectors for other things?
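For the video case, the trick is to create the detector once and reuse it per frame. A rough sketch, assuming an AVCaptureVideoDataOutput delegate is already wired up (and, if I'm reading the iOS 6 docs right, CIDetectorTracking is a creation option that helps the detector follow faces between frames):

// Created once, e.g. in -viewDidLoad; low accuracy keeps it fast enough for video.
_videoDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                    context:nil
                                    options:@{CIDetectorAccuracy: CIDetectorAccuracyLow,
                                              CIDetectorTracking: @YES}];

// AVCaptureVideoDataOutput delegate - called once per frame on the capture queue.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    NSArray *faces = [_videoDetector featuresInImage:frame
                                             options:@{CIDetectorImageOrientation: @(1)}];
    // ...do something with faces here, remembering to hop back to the
    // main queue before touching any UI.
}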
