
Google's Patent Application for Glass Now Published; We Take a Look

There has been a lot of talk about Google Glass lately, from the rumors floating around to its fashion and style, among several other topics we've covered.  One thing we haven't touched on is its patent application: it was filed in August of 2011 and was published yesterday, so now we have all the details.

Speaking of details, the patent description in question is enough to make your eyes glaze over, packed as it is with language describing Google's latest project.  To put it simply (as if anything patent-related is simple), Google described Glass in as much detail as possible, which helps ensure that the original design concept remains theirs and covers future advancements.  Now, on to the details.


As you can see from the image, there is a lot to go over here.  Don't worry, we're not going to cover every single item, but I'd like to talk about a couple that I think are important.  First, each of those numbers has its own description attached, ranging from how the bridge rests on the user's nose to the flexibility of the camera's placement.

Number 26 relates to the lens elements and indicates that any material that can project an image or display a graphic may be used, provided it is “…sufficiently transparent to allow a user to see through the lens element.” That might refer to the option of colored lenses, and it leaves the choice of lens material wide open.  The frame design and materials are also left open, allowing for a hollow body to route the connecting wires.

Number 28 details probably the most important factor: the on-board computing system.  This item mentions not only a processor and memory, but also video cameras, sensors, and “finger-operated” touch pads.  The computing system may be connected to the head-mounted device by a direct wire or a wireless connection, and it may sit remotely; in other words, the video camera wouldn't need to be worn on the face.  The sensors include, for example, one or more of a gyroscope and an accelerometer.


Those are just a couple of items worth noting out of the nearly one hundred that describe this device.  Also worth mentioning is the ability to connect external hardware in addition to the device's built-in features, such as separate video cameras, sensors, and finger-operated touch pads, which could be provided by smartphones and tablets.

Going back to the video camera for a moment, Google describes using more than one at a time: “…more video cameras may be used, and each may be configured to capture the same view, or to capture different views. For example, the video camera…may be forward facing to capture at least a portion of the real-world view perceived by the user. This forward facing image captured by the video camera…may then be used to generate an augmented reality where computer generated images appear to interact with the real-world view perceived by the user.”  That part makes me go hmm.  Is it just me, or is Google trying to gamify the way we view the world?  Let us know what you think.
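
If you're curious what that kind of compositing looks like in practice, here is a minimal, purely illustrative sketch in Python using OpenCV.  None of this code comes from the patent or from Google, and it assumes nothing about Glass hardware; it simply blends a computer-generated graphic over frames from a forward-facing camera, which is the basic idea the quoted passage describes.

```python
import cv2

# Illustrative sketch only: open a forward-facing camera (index 0 is a
# placeholder) and overlay a generated graphic on each captured frame.
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # The "computer generated image": here just a translucent box and label,
    # standing in for graphics that appear to interact with the real view.
    overlay = frame.copy()
    cv2.rectangle(overlay, (50, 50), (250, 150), (0, 200, 0), -1)
    cv2.putText(overlay, "AR label", (60, 110),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)

    # Blend the generated graphic with the captured real-world view.
    composited = cv2.addWeighted(overlay, 0.4, frame, 0.6, 0)
    cv2.imshow("augmented view", composited)

    # Quit on 'q'.
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```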

[Source: USPTO]