Google has created and released a series of mobile-focused computer vision models for its TensorFlow framework, dubbed MobileNets. The new models are open-source and built on TensorFlow's existing framework, delivering a set of predefined machine learning models aimed at the rise of machine learning in mobile form factors and use cases. Essentially, the models focus on bringing various computer vision capabilities to an ecosystem where devices are relatively weak, storage is limited, and network constraints can hamper traditional neural models. The models are built to run as efficiently as possible and take advantage of Google's Cloud Vision API. They are made to work with a number of mobile solutions, including TensorFlow Lite and TensorFlow-Slim, and can also be used on less powerful traditional hardware.
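To illustrate the TensorFlow Lite path mentioned above, here is a minimal sketch of converting a MobileNet into the compact `.tflite` format that mobile runtimes consume. The `TFLiteConverter` interface shown is the modern TensorFlow API, an assumption for illustration rather than the exact tooling of the original release:

```python
# Sketch: converting a MobileNet into TensorFlow Lite's compact format
# for on-device inference. Assumes a modern TensorFlow install; the
# original MobileNets release predates this converter API.
import tensorflow as tf

# Build the MobileNet architecture (weights=None avoids a network
# download; pass weights="imagenet" for the pre-trained checkpoint).
model = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), weights=None
)

# Convert to a flat .tflite byte buffer suitable for mobile runtimes.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# The result is just bytes; write it out for bundling with an app.
with open("mobilenet.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting file can then be loaded by the TensorFlow Lite interpreter on a phone, keeping both the binary size and the runtime footprint small.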
These new models come pre-trained on the ImageNet classification dataset and ship with model definition data that makes them easy to import into any existing TensorFlow project or instance. There are 16 models in total, spanning areas of computer vision such as landmark recognition, facial recognition, object detection, and fine-grained classification. Because the models are open-source, they can be modified and forked however a user wants, allowing multiple types of detection to be trained into a single model, provided the user can optimize the process enough for their target devices to have the power to spare.
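As a rough illustration of how little code it takes to pull one of these pre-trained definitions into a project, here is a hedged sketch using the MobileNet builder bundled with TensorFlow's Keras API; the specific function and its defaults are assumptions about a modern TensorFlow install, not part of Google's release notes:

```python
# Sketch: instantiating MobileNet via the Keras API bundled with
# TensorFlow. The width multiplier (alpha) and the input resolution are
# the two knobs on which the released variants differ.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3),  # standard MobileNet input resolution
    alpha=1.0,                  # width multiplier; shrink for weaker devices
    weights=None,               # "imagenet" would load the pre-trained weights
)

# Classify a dummy image: the head outputs 1000 ImageNet class scores.
dummy = np.zeros((1, 224, 224, 3), dtype=np.float32)
probs = model.predict(dummy, verbose=0)
print(probs.shape)  # (1, 1000)
```

Dropping `alpha` below 1.0 or lowering the input resolution trades accuracy for a smaller, faster model, which is exactly the tuning a developer would do for a constrained device.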
This development is a clear step toward bringing traditional machine learning and neural networking applications to mobile hardware, and it comes as Qualcomm's Snapdragon 835 rises to prominence on the back of glowing reviews of its performance in Samsung's Galaxy S8 flagship. The powerful processor is the first consumer-facing mobile chipset to feature onboard machine learning technology, is capable of gigabit network speeds for neural networking, and can even emulate x86 instructions at near-native speeds, though its power is still far below the latest and greatest from the likes of Intel and AMD. More chips with the same capabilities are sure to come along in due time, and 5G promises to enable near-simultaneous processing for neural networks across mobile devices. Modern hardware thus lets developers test their creations on real devices rather than just building theoretical models. Put simply, machine learning is now more accessible than it has ever been, and the field will likely grow significantly in the near future.