Whether by accident or design, the details of Google's plans for artificial intelligence (AI) have been elusive. In some cases, there's no real mystery, just nothing all that exciting to talk about. AI technology is the foundation of the company's search engine, and the most obvious reason for Google's high-profile, $400M acquisition of DeepMind in 2014 is to use the UK firm's expertise in deep learning (a subset of AI research, but more on that later) to bolster that core capability. But the Googleplex has absorbed other bright minds from the field of AI, as well as some of the most buzzed-about companies in robotics, with only part of that collective brain trust officially allocated to driverless cars, delivery drones, or other publicly announced robotics or AI-related projects. What, exactly, are Google's AI experts up to?
In a word: food.
At this week's Rework Deep Learning Summit in Boston, Google research scientist Kevin Murphy unveiled a project that uses sophisticated deep learning algorithms to analyze a still photo of food and estimate how many calories are on the plate. It's called Im2Calories, and in one example, the system looked at an image and counted two eggs, two pancakes, and three strips of bacon. Since those aren't exactly universal units of measurement, the system gauged the size of each piece of food relative to the plate, as well as any condiments. And Im2Calories doesn't require carefully captured high-res images. Any standard Instagram-quality shot should do.
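To make the idea concrete, here is a minimal, purely illustrative sketch of the back half of such a pipeline. The deep-learning detector itself is assumed and stubbed out; this toy code only shows how detected items, sized against the plate, might be converted into a calorie estimate. All food names, typical sizes, and calorie figures are hypothetical placeholders, not values from Google's system.

```python
# Illustrative sketch of an Im2Calories-style calorie estimate.
# A CNN detector (not shown) is assumed to output labeled items and
# their apparent diameters as fractions of the plate's diameter,
# the plate acting as the in-image size reference.

# Approximate calories per gram for a few foods (illustrative values).
CALORIES_PER_GRAM = {"egg": 1.55, "pancake": 2.27, "bacon": 5.41}

# Typical serving weight in grams and typical diameter in cm,
# used as baselines for scaling portion size (illustrative values).
BASELINE_GRAMS = {"egg": 50, "pancake": 80, "bacon": 12}
TYPICAL_DIAMETER_CM = {"egg": 9.0, "pancake": 12.0, "bacon": 10.0}

def estimate_calories(detections, plate_diameter_cm=26.0):
    """detections: list of (label, size_fraction) pairs, where
    size_fraction is the item's apparent diameter divided by the
    plate's apparent diameter in the photo."""
    total = 0.0
    for label, size_fraction in detections:
        diameter_cm = size_fraction * plate_diameter_cm
        # Scale the baseline weight by apparent area (diameter squared)
        # relative to the food's typical diameter.
        scale = (diameter_cm / TYPICAL_DIAMETER_CM[label]) ** 2
        grams = BASELINE_GRAMS[label] * scale
        total += grams * CALORIES_PER_GRAM[label]
    return total

# The example meal from the talk: two eggs, two pancakes, three bacon strips,
# each here at its assumed "typical" size on a 26 cm plate.
meal = ([("egg", 9.0 / 26)] * 2
        + [("pancake", 12.0 / 26)] * 2
        + [("bacon", 10.0 / 26)] * 3)
print(round(estimate_calories(meal), 2))  # prints 712.96 with these toy numbers
```

The real system would replace both lookup tables with learned models and segment condiments as well, but the plate-as-ruler trick for recovering absolute portion size is the part this sketch tries to capture.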