
Developer Outs Google Glass Sensors; Shows Augmented Reality Potential

Before Google Glass was first announced at last year's Google I/O, most reports about the niche product from Google's X lab centered on Glass' potential to be an augmented reality powerhouse, with features such as map overlays and games that would use Glass as a second screen. Since then, though, many have been a bit let down by the feature set Google actually shipped. One thing missing from every spec list so far, official or otherwise, has been the actual sensors that Google Glass could potentially take advantage of. That is, until today. Lance Nanek is a Glass developer who has been digging deep into the innards of Glass to find out exactly what can be done with the new product. Nanek's work points to Google Glass being able to perform many more tasks and functions than its current software allows, including some AR functionality at some point in the future.

Nanek used good old-fashioned ingenuity to obtain the sensor list that Google has decided not to supply, even to developers. He did it by pushing an application to the device in debug mode and querying it for a list of sensors. While the APIs currently available to developers do not allow them to take full advantage of this hardware, here is the complete list of sensors that are inside and that, presumably, will someday be available for developers to use, maybe even in AR deployments (a quick sketch of what that kind of query looks like follows the list):


  • MPL Gyroscope
  • MPL Accelerometer
  • MPL Magnetic Field
  • MPL Orientation
  • MPL Rotation Vector
  • MPL Linear Acceleration
  • MPL Gravity
  • LTR-506ALS Light sensor
  • Rotation Vector Sensor
  • Gravity Sensor
  • Linear Acceleration Sensor
  • Orientation Sensor
  • Corrected Gyroscope Sensor

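For readers curious what Nanek's side-loaded app likely had to do, the sketch below shows the standard Android way to ask a device for every sensor it exposes. This is not Nanek's actual code, just a minimal example built on the stock SensorManager API; the activity and log-tag names are made up for illustration.

    import android.app.Activity;
    import android.hardware.Sensor;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.util.Log;
    import java.util.List;

    // Minimal sketch: dump every sensor the device reports to logcat.
    public class SensorDumpActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            SensorManager sensorManager =
                    (SensorManager) getSystemService(SENSOR_SERVICE);
            List<Sensor> sensors = sensorManager.getSensorList(Sensor.TYPE_ALL);
            for (Sensor sensor : sensors) {
                Log.i("GlassSensors", sensor.getName()
                        + " (vendor: " + sensor.getVendor() + ")");
            }
        }
    }

Run on a normal Android phone this prints the phone's own sensors; Nanek's point is that the same kind of query on Glass returns the list above.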

The location providers add plenty of potential as well, and they are especially important for next-generation maps and/or AR (again, a short sketch of how an app enumerates them follows the list):


  • network
  • passive
  • gps

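Enumerating those providers is just as simple from an app pushed over debug mode. Here is a minimal, hypothetical sketch using the standard LocationManager API (again, not Nanek's code):

    import android.app.Activity;
    import android.location.LocationManager;
    import android.os.Bundle;
    import android.util.Log;

    // Minimal sketch: log every location provider the device exposes.
    // On Glass this reportedly includes "network", "passive" and "gps".
    public class ProviderDumpActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            LocationManager locationManager =
                    (LocationManager) getSystemService(LOCATION_SERVICE);
            for (String provider : locationManager.getAllProviders()) {
                Log.i("GlassLocation", "Provider: " + provider);
            }
        }
    }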

Nanek describes a little of how these sensors could assist with the AR aspects of applications on his Tumblr blog, NeatoCode.

“The combination of orientation, where the user is looking, and location, allows augmented reality features of Android apps like the Yelp monocle to work. Look around you and have the Glass show info on cool restaurants just as you look toward them, look at an outlet in a Starbucks and tag it on a map for others to find and take a picture at the same time, look left and right and see the subway stations you are looking for displayed in the heads up even if buildings are the way, etc..” Lance Nanek – NeatoCode

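To make that orientation-plus-location idea concrete, here is a rough, hypothetical sketch of the check an app like Yelp's Monocle performs: compute the bearing from the wearer's GPS fix to a point of interest and compare it with the compass azimuth from the orientation or rotation-vector sensor. The field-of-view value below is an assumption for illustration, not a Glass spec.

    import android.location.Location;

    // Hypothetical helper: decide whether a point of interest sits inside
    // the wearer's field of view by combining a GPS fix with the compass
    // azimuth reported by the orientation/rotation-vector sensors.
    public final class PoiVisibility {

        // Assumed half field of view, for illustration only.
        private static final float HALF_FOV_DEGREES = 15f;

        public static boolean isInView(Location user, Location poi,
                                       float azimuthDegrees) {
            // Bearing from the user to the POI, normalized to 0..360 degrees.
            float bearing = (user.bearingTo(poi) + 360f) % 360f;
            float heading = (azimuthDegrees + 360f) % 360f;

            // Smallest angular difference between where the user is looking
            // and where the POI actually is.
            float diff = Math.abs(heading - bearing);
            if (diff > 180f) {
                diff = 360f - diff;
            }
            return diff <= HALF_FOV_DEGREES;
        }
    }

If the difference is small enough, the app draws the restaurant card (or subway station marker) in the heads-up display; otherwise it stays hidden until the wearer turns toward it.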
Nanek has also provided a short video that demonstrates a bit more of how the sensors could be used in an augmented reality deployment:


While we really wish that Google had made all of these sensors usable, or at least accessible, when Glass was released, they are something very cool to look forward to in Glass' future. Maybe we will learn more about what Google has in store for the next versions of the Explorer Edition's software, as well as future hardware iterations of Google Glass, at Google I/O next week. Until then, join our conversation on the future of Google Glass on the Android Headlines Google+ page, or leave us a comment below.

Source: Lance Nanek – NeatoCode (Tumblr)
