According to neuroscientist Zachary Mainen, it’s entirely plausible that future artificial intelligence programs, as they approach human-level intelligence and function, may suffer many of the same neurological and psychological glitches that humans do. The human mind depends on a great number of variables, and even a small disturbance in brain chemistry, stress levels, or outside influences can cause migraines, depression, hallucinations, and other ill effects. In much the same way, the self-modifying, ever-growing codebases on which neural-network-based AI is built could develop similar issues.
Mainen says that his ideas mostly come from a field of study called computational psychiatry, in which psychiatrists study AI programs that are configured with particular conditions and made as similar as possible to the human brain. While this work is currently done mostly on an experimental and theoretical basis, the potential applications in AI engineering are strong. He explains that serotonin is a regulating chemical in the brain that can produce depression and hallucinations when something goes wrong with it, such as the wrong amount being released or it reaching the wrong place. Something similar, Mainen says, could happen in an AI program that models its operation on neurochemistry in order to function at a humanlike level.
While this sounds like science fiction at first, it’s not entirely unthinkable; serotonin is a regulator, and a script that reins in expectations or tones down a response could easily substitute for it. If that regulator were miscalibrated, an AI’s perception of a situation could be distorted by its own expectations, approach, processing method, or other factors. Dopamine, as another example, is a reward signal in the human brain, and an AI taught with a reward system could easily have a positive stimulus of some sort stand in for dopamine; two rough sketches of these analogies follow below. All of the basic concepts needed to build an AI that operates in a fashion similar to the human brain are already present in mainstream AI research, but any such AI would require an incredible, perhaps even impractical, amount of training just to regulate its own innate processes well enough to begin gathering and processing information.
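To make the serotonin analogy concrete, here is a minimal sketch, not anything from Mainen's own work: a single global gain parameter scales how sharply a model commits to its expectations, playing the role of a serotonin-like regulator. The function name, preference values, and gain settings are all hypothetical illustrations.

```python
import numpy as np

def choose_response(preferences: np.ndarray, gain: float) -> np.ndarray:
    """Softmax over raw preferences; `gain` plays the serotonin-like
    role of toning the response up or down."""
    scaled = preferences * gain
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

prefs = np.array([2.0, 1.5, 0.5])  # hypothetical raw preferences

print(choose_response(prefs, gain=1.0))   # well-regulated: moderate confidence
print(choose_response(prefs, gain=10.0))  # gain far too high ("wrong dosage"):
                                          # the model all but ignores alternatives,
                                          # seeing only what it already expects
print(choose_response(prefs, gain=0.05))  # gain far too low: responses flatten
                                          # out and the model barely commits
                                          # to anything
```

With a sensible gain the model weighs its options; push the same parameter out of range and its output is dominated by expectation or collapses into indecision, which is the kind of inappropriate regulation the paragraph above describes.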
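The dopamine analogy can be sketched the same way. Below is a minimal, purely illustrative example, again not Mainen's model: a scalar prediction error drives learning, standing in for dopamine as a reward signal. The state names, learning rate, and reward values are assumptions made for the example.

```python
values = {"state_a": 0.0, "state_b": 0.0}  # learned value estimates
learning_rate = 0.1

def update(state: str, reward: float) -> float:
    """One temporal-difference-style update; the prediction error
    `delta` plays the dopamine-like role of a reward signal."""
    delta = reward - values[state]          # reward prediction error
    values[state] += learning_rate * delta  # nudge the estimate toward reward
    return delta

for _ in range(20):
    update("state_a", reward=1.0)  # a consistent positive "stimulus"

print(values["state_a"])  # the estimate converges toward 1.0 as surprises shrink
# A miscalibrated reward channel (e.g., reward=100.0 where 1.0 was intended)
# would distort every value the system learns, loosely analogous to a
# dopamine imbalance.
```

The point of the sketch is only that the reward channel is a single, global signal: get its scale or routing wrong and everything downstream of it learns the wrong lessons, much as the article suggests for the brain.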