This past weekend I had the pleasure of being a mentor for a team competing in the USC SS12 competition. SS12 is a weekend-long coding event organized around the theme of software aiding the disabled. Eight teams worked on eight projects (a few teams tackled different instances of the same project), competing against teams at UCLA.

This competition highlights a very interesting use of AI for aiding the disabled. All of the projects basically come down to using AI to stand in for a missing sense:

- Helping the color-blind see color.
- Bridging between the deaf and the hearing (through a sign-language reading system).
- Reading for the blind - my team's project.

Reading currency, in particular. Here's an example of someone *else's* currency reader in action, doing pretty much what ours (mostly) did. Here's the idea: someone who is visually impaired has difficulty telling what denomination of US currency they're holding. The reason is that in the US, unlike in other nations, all bill denominations are the same size. So our task was to create something for your phone which, when pointed at a bill, pronounces what denomination you're holding (see the video for the idea in action).

We did this using the Scale-Invariant Feature Transform (SIFT) algorithm. SIFT did the following for us: given an unidentified image, its key features are identified and compared with the key features of the images of US bills we have on file. The reference bill with the highest number of matched key features is reported as the most likely match. Even if the image is rotated, crumpled, or otherwise messed with, SIFT is able to extract the essential elements that make that image unique.
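To make that concrete, here's a minimal sketch of this kind of SIFT matching using OpenCV in Python. This is not our actual Android code; the reference filenames, the denominations listed, and the ratio-test threshold are just illustrative assumptions.

```python
import cv2

# Hypothetical reference images of US bills (one template per denomination).
REFERENCE_BILLS = {
    "$1": "one_dollar.png",
    "$5": "five_dollar.png",
    "$20": "twenty_dollar.png",
}

def identify_bill(photo_path):
    """Return the denomination whose reference image shares the most SIFT features."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher()

    # Detect key features in the unidentified photo.
    photo = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    _, photo_desc = sift.detectAndCompute(photo, None)

    best_label, best_score = None, 0
    for label, ref_path in REFERENCE_BILLS.items():
        ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
        _, ref_desc = sift.detectAndCompute(ref, None)

        # Count "good" matches, using Lowe's ratio test to drop ambiguous ones.
        matches = matcher.knnMatch(photo_desc, ref_desc, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]

        # Keep whichever reference bill matches the most key features.
        if len(good) > best_score:
            best_label, best_score = label, len(good)

    return best_label

if __name__ == "__main__":
    print(identify_bill("photo_of_bill.jpg"))
```

Because SIFT features are scale- and rotation-invariant, the same matching logic works whether the bill is photographed close up, at an angle, or slightly crumpled.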

Interestingly, we got the currency recognition working straight away, and spent the rest of the time struggling with the Android API.
