
How Artificial Intelligence Differs By Concept Levels

A.I. is a relatively well-known term in the modern world, but that doesn't necessarily mean it's well understood by the general public. That's thanks in no small part to the widespread use of narrow A.I., artificial intelligence with strict operating parameters that nearly everybody uses at one point or another. In fact, in the strictest sense, the current forms of A.I. arguably don't meet the definition of a "true" A.I.; the technology would need to become as adaptable and "smart" as humans are to reach that point. That definition is subjective, however, and the A.I. that does exist is nothing to scoff at. DeepMind, for example, was only founded in 2010 and acquired by Google in 2014, yet a quick glance at the company's accomplishments is all that's needed to see the progress made in that time. That rapid rise, though, makes understanding what exactly A.I. is a much more difficult task.

Fortunately, it's not overly complicated and can be broken down into four distinct categories. The first and most important part of A.I., which effectively lays the foundation for the others, is machine learning. Machine learning provides a set of base algorithms that let a computer examine and analyze specific objects or text in order to learn what they are. Recognition of a task, object, or text isn't pre-programmed; it's trained by feeding in examples that allow the program to find what they have in common, so the software can later identify those things outside of what it has already been shown. A small sketch of that idea follows below. Machine learning feeds into two other pieces of the puzzle, deep learning and neural networks. Those two tend to be used in conjunction, with deep learning acting as a progression of machine learning.
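
To make that learn-from-examples idea concrete, here is a minimal sketch using the scikit-learn library (an assumption chosen purely for illustration; the article doesn't name any particular tool, and the categories and sample sentences below are made up). The program is never told the rules for its two categories; it only sees labeled examples and is then asked about text it has never seen.

```python
# Minimal sketch of "learning from examples": no recognition rules are
# pre-programmed, the model only infers commonality from labeled samples.
# scikit-learn and the toy "sports"/"weather" categories are assumptions
# for illustration, not anything named in the article.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Training examples: the only "knowledge" the model receives.
texts = [
    "the team won the final match",
    "a late goal decided the game",
    "heavy rain is expected tomorrow",
    "sunny skies with a light breeze",
]
labels = ["sports", "sports", "weather", "weather"]

# Turn the text into numeric features, then fit a simple classifier.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(texts)
model = MultinomialNB()
model.fit(features, labels)

# Identify text the model was never shown.
new_text = ["the forecast calls for snow", "the striker scored twice"]
predictions = model.predict(vectorizer.transform(new_text))
print(list(zip(new_text, predictions)))
```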

That progression expands the prior concept, moving outside the narrow scope of the machine learning box and applying what has been learned to completely different tasks and scenarios. A prime example exists in DeepMind's video game-playing A.I. and in its health-related endeavors. The system effectively doesn't require any interaction with humans to beat them at even the most complex games, such as Go. On the other side of the equation, it can learn to surpass humans in discovering new parameters for detecting hard-to-diagnose diseases or illnesses. Underpinning that is the neural network, which is made up of a series of coded nodes or "neurons" spread across an input layer, one or more hidden layers, and an output layer, loosely modeled on how human brains function. Each connection between neurons carries a "weight," and the weights applied to any information taken in determine how strongly a neuron's signal is passed along and whether it carries through from the input layer to the output layer. Meanwhile, each layer can contain up to millions of individual nodes.
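
As a rough illustration of that layered structure, the sketch below builds a toy network with NumPy: an input layer, a single small hidden layer, and an output layer, with the weights adjusted repeatedly until the network learns a simple pattern (the classic XOR problem, chosen here as an assumption for illustration). It is only a teaching example; networks like DeepMind's are vastly larger and trained very differently.

```python
# A toy feed-forward neural network mirroring the structure described above:
# input layer -> hidden layer -> output layer, with learned weights deciding
# how strongly each node's signal carries forward. NumPy-only sketch for
# illustration; real networks can have many layers and millions of nodes.
import numpy as np

rng = np.random.default_rng(0)

# XOR: a tiny problem that needs a hidden layer to solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input layer: 2 nodes
y = np.array([[0], [1], [1], [0]], dtype=float)              # output layer: 1 node

# Weights and biases for input -> hidden (4 nodes) and hidden -> output.
w1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
w2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass: signals flow from the input layer through the hidden
    # layer to the output layer, scaled by the weight on each connection.
    hidden = sigmoid(X @ w1 + b1)
    output = sigmoid(hidden @ w2 + b2)

    # Backward pass: nudge every weight to shrink the prediction error.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ w2.T) * hidden * (1 - hidden)
    w2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    w1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # should approach [[0], [1], [1], [0]]
```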

Finally, the last form A.I. is currently thought to take, and the most advanced if Kimera Systems is to be believed, will be exhibited for the first time on May 3. The company has announced an A.I. it calls Nigel Artificial General Intelligence, and the project may turn out to be the closest anyone has come to creating a true A.I. as defined above. That's because its goal is to create a single artificial general intelligence (AGI) algorithm, one that would effectively allow the A.I. to build its own AGI program meeting or exceeding human levels of intelligence and adaptability. Whether that will actually happen remains to be seen, but it would hardly be surprising given the level of progress so far.