Google is getting serious about exploring new interaction methods for a wide variety of gadgets, including Chromebooks and mobile devices, based on recently reported patents obtained by the company. Designs centered on the concept, which relies on radar, have been around for several years, but at least two of the patents have now gone public. In each case, the patented invention uses 3D radar sensor technology to enable in-air gesture controls.
In short, the patents describe using the sensors to map out a gadget's surroundings, marking 'landmarks' that are then algorithmically compared against 3D context models. That allows the system to recognize when a user is present and waving a hand or making other gestures at a distance to control the device.
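The patents don't spell out an implementation, but the flow they describe is easy to picture: take the landmarks pulled from a radar frame, compare them against a library of stored 3D context models, and pick whichever model best explains what the sensor sees. The sketch below is a rough, hypothetical illustration of that idea in Python; the model names, landmark format, and nearest-point scoring are assumptions for illustration only, not anything Google has published.

```python
# Hypothetical sketch of the pipeline described above: radar landmarks are
# compared against stored 3D context models to decide what the device "sees".
from dataclasses import dataclass
from math import dist

Point3D = tuple[float, float, float]

@dataclass
class ContextModel:
    name: str                 # e.g. "empty_room" or "hand_wave" (made-up labels)
    landmarks: list[Point3D]  # reference landmark positions for this context

def match_score(observed: list[Point3D], model: ContextModel) -> float:
    """Average distance from each observed landmark to its nearest model landmark."""
    if not observed or not model.landmarks:
        return float("inf")
    return sum(min(dist(p, q) for q in model.landmarks) for p in observed) / len(observed)

def classify_context(observed: list[Point3D], models: list[ContextModel]) -> str:
    """Return the name of the stored context model that best explains the frame."""
    return min(models, key=lambda m: match_score(observed, m)).name

if __name__ == "__main__":
    models = [
        ContextModel("empty_room", [(0.0, 0.0, 2.0), (1.0, 0.0, 2.0)]),
        ContextModel("hand_wave", [(0.1, 0.3, 0.4), (0.2, 0.35, 0.45)]),
    ]
    # Landmarks a radar frame might yield with a hand hovering near the device.
    frame_landmarks = [(0.12, 0.31, 0.42), (0.19, 0.36, 0.44)]
    print(classify_context(frame_landmarks, models))  # -> "hand_wave"
```

A real system would obviously work with far richer radar returns and learned models rather than hand-picked points, but the matching step illustrates how presence and gestures could be distinguished from the surrounding environment.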
The patents showcase the technology in use with laptops, smartphones, smartwatches, smart TVs, and other electronics.
How could this be used?
A wide range of use cases could stem from the most recent patents, which were approved in January and February.
Not least among the possible features showcased is the technology's most obvious implementation: general gestures similar to those possible on LG's latest smartphone, the LG G8 ThinQ. That phone's gesture feature, marketed as Air Motion, recognizes a hand hovering over the display and allows a plethora of interactions without touching it, controlling everything from media playback to volume adjustments and other system-level tweaks.
LG's solution isn't based on the same radar technology touted in Google's latest patents, though, and if the patented approach does move toward real-world use, it will likely be more accurate at even greater distances. The two technologies could also feasibly be used in conjunction with one another for even wider functionality.
Google may go even further and implement the patent in a way that allows multiple radar gesture-enabled devices to interact. A smartwatch or smartphone and either a smart TV or Chromebook might be used independently or in conjunction with one another. The user might, for instance, make gestures on one device to trigger actions on the other.
That example could work as a kind of extension to Chrome OS's current Better Together features, which allow users to keep a Chromebook unlocked or share data and information via a connection to a smartphone. Extending that further, it isn't out of the question that the same could work with any combination of the gadgets covered under the latest patents from the search giant.
Uses beyond those comparatively mundane features might include facial recognition tied into device-specific features at a distance. Radar-based detection could also enable any number of previously non-existent features, such as optimizing audio output from a device using both acoustic and radar-derived measurements for higher accuracy.
…if it’s used at all
As hinted above, the patents in question won't necessarily find their way into (or out of, in this case) consumers' hands any time soon. Google has been patenting related inventions and similar features since as early as 2014, with various iterations cropping up over the years; the most recent prior to the latest designs appeared in 2016. None of that means Google will use the patents at all, particularly if a better way to enable the hinted-at features emerges in the meantime.
There is also a high likelihood that the technology wouldn't appear in Chrome OS or associated gadgets first, since features for those devices tend to arrive on Android or desktop platforms before making their way over. Regardless of whether it lands in Chrome 74 or another release much further down the road, Google does at least appear to be taking touch-free controls seriously.