Google’s apps and services have been getting smarter over time as they learn from the habits of their users. This process is usually handled by cloud services, and while one of the main advantages of that approach is the ability to synchronize data across many devices and platforms, there are situations where cloud services are unavailable, leaving the process stalled and the data unusable. Google is now teaming up with Movidius, a company specializing in machine vision and deep learning, to bring machine intelligence that runs locally on mobile devices.
Under the agreement, Google gains access to Movidius processors and its software development environment, while Movidius benefits from Google’s contributions to its neural network technology roadmap. The engine on Movidius’ ultra-low-power platform allows deep learning to run within the mobile device itself, so it can work without an internet connection and avoids the latency issues of cloud processing. Future devices will be able to recognize images and audio quickly and accurately, providing a more personal computing experience. Movidius’ flagship chip, the MA2450, is the latest member of the Myriad 2 family of vision processors, itself an improvement over the processors announced last year. The MA2450 is intended for commercial products, and it can perform complex neural network computations in a very compact, power-efficient form factor.
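To make the idea of “neural network computation running locally” more concrete, here is a minimal, illustrative sketch (not Movidius or Google code, and the layer sizes and weights are placeholder assumptions) of a tiny feed-forward network performing inference entirely on the device, with no cloud round trip:

```python
# Illustrative sketch only: a tiny feed-forward classifier evaluated entirely
# on-device, with no network call. Weights are random placeholders; a real
# deployment would load a trained model into the accelerator's runtime.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "trained" parameters for a 2-layer classifier
# (e.g. a 128-dim image embedding mapped to 10 labels).
W1, b1 = rng.standard_normal((128, 64)), np.zeros(64)
W2, b2 = rng.standard_normal((64, 10)), np.zeros(10)

def classify_locally(features: np.ndarray) -> int:
    """Run the forward pass on the device itself, avoiding cloud latency."""
    hidden = np.maximum(features @ W1 + b1, 0.0)  # ReLU hidden layer
    logits = hidden @ W2 + b2                     # output scores per label
    return int(np.argmax(logits))                 # predicted label index

# Example: classify one locally captured feature vector.
sample = rng.standard_normal(128)
print("predicted label:", classify_locally(sample))
```

The point of running this step on dedicated low-power silicon rather than in the cloud is exactly what the announcement describes: the result is available immediately, even offline, without sending the user’s data anywhere.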
This is actually not the first time Google has collaborated with Movidius; the company already provided components for Project Tango. The two collaborations are independent, however: while Project Tango devices will be able to sense the space around them, devices with deep learning capabilities will be able to recognize the elements within that space. “Movidius’ mission is to bring visual intelligence to devices so that they can understand the world in a more natural way. This partnership with Google will allow us to accelerate that vision in a tangible way,” the company said. There are still no details about which devices will integrate the technology or when they might become available; they are referred to only as “next-gen” devices. Given that Project Tango devices are taking some time to develop, it could be a while before we see devices with “visual intelligence.”
https://www.youtube.com/watch?v=GEy-vtev1Bw