Testing OpenCV and Face.com API

Face.com offers a robust API for facial recognition. In short, you send it an image, it finds the faces within it, and it returns information about what it finds. That information can be used to locate the features of each face as well as to describe other characteristics, including age and gender and the presence of glasses or a smile.

As a test, I built a standalone app that sends faces appearing in an attached webcam’s view to the API for analysis.

First, I built a basic Flash AIR application to upload images to the face.com API. For this, I used Jean Nawratil’s Flash ActionScript 3.0 Client Library and sent an image each time a key was pressed. The only modification I made to Jean’s library was to add a parameter to the POST made to the API. The parameter (attributes=all) tells the API to return additional characteristics.
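
In case it’s useful, here is roughly what that request boils down to. This is a simplified sketch: the endpoint URL is as I recall it, the key constants are placeholders, and the real library builds a multipart POST so the JPEG bytes ride along with these variables.

```actionscript
import flash.events.Event;
import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.net.URLRequestMethod;
import flash.net.URLVariables;

const API_KEY:String    = "YOUR_API_KEY";    // your face.com credentials
const API_SECRET:String = "YOUR_API_SECRET";

var vars:URLVariables = new URLVariables();
vars.api_key    = API_KEY;
vars.api_secret = API_SECRET;
vars.attributes = "all"; // the added parameter: return age, gender, glasses, smiling, etc.

var request:URLRequest = new URLRequest("http://api.face.com/faces/detect.json");
request.method = URLRequestMethod.POST;
request.data   = vars;

var loader:URLLoader = new URLLoader();
loader.addEventListener(Event.COMPLETE, onDetectComplete);
loader.load(request);

function onDetectComplete(e:Event):void {
    trace(URLLoader(e.target).data); // raw JSON describing the detected faces
}
```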

With this in place, I integrated OpenCV to locally detect when a face is present in the frame. This way, I could automatically send images to the API, but only those with something for face.com to find, reducing the load on their servers (and my rate limit).

I’ve used an AS3 implementation of OpenCV in the past, but the performance was terrible. For this project, I used Wouter Verweirder’s AIR Native Extension, which offloads the work to native code. It is very responsive.

Using a standard face cascade, OpenCV recognizes faces and indicates approximately where they are in the frame. Based on this detection, my app sends a copy of the image to face.com for further analysis. The round trip to the face.com API generally takes less than a second, two at most. I also added a timer to the app to further limit how often I send an image off to the API.
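
The glue between detection and upload is simple. Here is a minimal sketch of the idea (onFacesDetected and faceClient are hypothetical stand-ins for the ANE’s and client library’s actual APIs, and the capture settings are illustrative):

```actionscript
import com.adobe.images.JPEGEncoder; // from as3corelib
import flash.display.BitmapData;
import flash.media.Camera;
import flash.media.Video;
import flash.utils.ByteArray;
import flash.utils.Timer;

var camera:Camera = Camera.getCamera();
camera.setMode(640, 480, 15);
var video:Video = new Video(camera.width, camera.height);
video.attachCamera(camera);

var cooldown:Timer = new Timer(3000, 1); // enforce a minimum gap between uploads

// Called whenever the OpenCV extension reports at least one face in the frame.
function onFacesDetected():void {
    if (cooldown.running) return; // too soon since the last upload

    var frame:BitmapData = new BitmapData(video.width, video.height);
    frame.draw(video); // grab the current webcam frame
    var jpeg:ByteArray = new JPEGEncoder(80).encode(frame);
    faceClient.detect(jpeg); // hypothetical wrapper around the POST shown earlier

    cooldown.reset();
    cooldown.start();
}
```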

Below are some captures from this basic proof of concept, with the characteristics detected by the API shown beneath each one.

[Screen captures of detected faces with the characteristics returned by the API]

In addition to basic face detection, the face.com API allows you to identify and tag people based on images you use to train the algorithm and/or images in a Facebook account. For more information about the API and what it can do, be sure to check out their online demo.

Odopod acquired by Nurun

When I started Odopod with Tim and Jacquie, my personal intention was to do good work with great people. That was really it. I had just wrapped up an 18-month contract with Rare Medium after they acquired my first agency, and I had a renewed respect for these critical aspects of work.

Over the past ten-odd years, I’ve been able to live the dream. I’ve learned an incredible amount from our ever-expanding team of talented and dedicated folks. Even at the most intense moments, I would not have traded this opportunity for any other. Until now.

The Nurun deal is a great opportunity for both companies and you can read more about our special chocolate-and-peanut-butter combination on our blog.

The strategic match of experience and talent between Odopod and Nurun is huge, but it was the people and our shared values that sealed the deal for me. Over the past months, I’ve met people from several Nurun offices and they all share the friendly, ego-neutral personalities that characterize the folks at Odopod. What’s more, they share a passion for strategy, design and technology working together to deliver innovative work that is useful and enjoyable.

I’m really looking forward to the next 10 years, working with our new global team, vendors and clients to do great work.

An incredible number of people have contributed to the success of Odopod, and I’m truly grateful for everyone’s help getting us to the point where we could make this deal happen. Thank you!

AIR for mobile remote camera app – YouTube

This video shows an AIR desktop app that is connected (peer-to-peer) with an AIR for mobile app running on 4 different devices.

The mobile apps stream video back to the desktop application and the desktop app can be used to send messages to the camera operators.

All of the devices tested handle capturing and streaming 640×480 video at 15 fps. The wifi network has a little trouble keeping up with four streams, but works well with fewer (or when I’m alone on the network).
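
For anyone curious about the mobile side, the capture-and-publish portion boils down to the standard RTMFP pattern, sketched below. The Cirrus key and stream name are placeholders, and error handling is omitted.

```actionscript
import flash.events.NetStatusEvent;
import flash.media.Camera;
import flash.net.NetConnection;
import flash.net.NetStream;

const CIRRUS_KEY:String = "YOUR_CIRRUS_DEVELOPER_KEY";

var nc:NetConnection = new NetConnection();
nc.addEventListener(NetStatusEvent.NET_STATUS, onNetStatus);
nc.connect("rtmfp://p2p.rtmfp.net/" + CIRRUS_KEY); // Adobe's rendezvous service

function onNetStatus(e:NetStatusEvent):void {
    if (e.info.code == "NetConnection.Connect.Success") {
        var cam:Camera = Camera.getCamera();
        cam.setMode(640, 480, 15); // the capture settings tested above

        // DIRECT_CONNECTIONS publishes peer-to-peer rather than through a server.
        var ns:NetStream = new NetStream(nc, NetStream.DIRECT_CONNECTIONS);
        ns.attachCamera(cam);
        ns.publish("camera1"); // the desktop app plays this stream by name
    }
}
```

The desktop app does the reverse: it connects to the same rendezvous service, creates a NetStream pointed at each device’s peer ID to play the published stream, and uses the connection in the other direction for messages to the camera operators.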

Arc Diagram of Mentions in @Odopod Tweets

Lately I’ve been collecting and analyzing Twitter data. I’ve been looking at the networks formed by friends and followers of a set of people, tracking the path of tweets and generally building on my Python skills.

I’m working toward a pretty ambitious goal but, inspired by the arc diagrams in the NYTWrites project, I decided to take a short break and render out one of my own.

[Arc diagram of mentions in @Odopod tweets]

Interactive Arc Diagram

The diagram shows all Twitter users mentioned in tweets by @Odopod, sorted by the number of times they have been mentioned. Arcs connect users who were mentioned together in the same tweet, and the width of each arc indicates the number of those links. The size of each node represents the number of links associated with it.

First attempt to control eggbot from Flash: halftone pictures

This weekend, I was able to get Flash talking to the EiBotBoard of my Egg Bot via Tinkerproxy 2.0. Fun stuff.

The trickiest part was finding the right file within /dev to point Tinkerproxy at. The EiBotBoard shows up as /dev/cu.usbmodemNNN, which is quite different from the way an Arduino board shows up.

With the proxy in place, I set up a Flash file to load images, transform them into halftone dot patterns and then plot them onto eggs. Plotting is done with three different commands: one to raise and lower the pen, one to turn the egg (x-axis), and one to move the pen (y-axis).
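
Each of those commands is just a short string written to Tinkerproxy’s TCP socket, which relays it to the board’s serial port. A minimal sketch follows; the port number depends on your Tinkerproxy config, and which SP value means pen-up depends on how the servo is set up.

```actionscript
import flash.net.Socket;

var ebb:Socket = new Socket();
ebb.connect("127.0.0.1", 5331); // the TCP port Tinkerproxy maps to /dev/cu.usbmodemNNN

function send(cmd:String):void {
    ebb.writeUTFBytes(cmd + "\r"); // EiBotBoard commands are carriage-return terminated
    ebb.flush();
}

function penUp():void   { send("SP,1"); } // toggle the pen servo
function penDown():void { send("SP,0"); }

// SM,duration,axis1,axis2 steps the motors over <duration> milliseconds;
// leaving one axis at 0 gives the separate egg-turn (x) and pen-move (y) commands.
function turnEgg(ms:int, steps:int):void { send("SM," + ms + "," + steps + ",0"); }
function movePen(ms:int, steps:int):void { send("SM," + ms + ",0," + steps); }
```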

[Photos of halftone pictures plotted onto eggs]