AI could be used for terrorist attacks, former Google CEO warns

The artificial intelligence revolution has brought many services and tools that make your daily life easier in multiple areas, whether it be work, education, or entertainment. It has also driven the development of the tech industry by leaps and bounds in a short time. However, some, like Eric Schmidt, fear the worst scenarios if AI falls into the wrong hands.

DeepSeek’s incursion into the AI industry did more than shake Wall Street and crash the shares of NVIDIA and other big names. As the weeks went by, more experts and US officials warned about the potential risks of the platform, mainly citing threats to user data privacy and national security. That said, the concerns of the former Google CEO go much further.

Eric Schmidt reveals his biggest fear regarding AI being used by bad actors

Eric Schmidt shared his perspective on a topic that many often overlook. “The real fears that I have are not the ones that most people talk about AI—I talk about extreme risk,” he told the BBC. Most people who have expressed concerns about AI have focused on the risks of using Chinese platforms. Schmidt, however, contemplates the extreme case of artificial intelligence facilitating terrorist attacks.

He mentioned that countries like “North Korea, or Iran, or even Russia” could take advantage of the technology in the worst possible way. Schmidt mentions the possibility of “a bad biological attack from some evil person.” By this he refers to the potential development of biological weapons assisted by artificial intelligence. “I’m always worried about the ‘Osama Bin Laden’ scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people,” he added.

AI platforms are still ineffective in blocking harmful prompts

Major AI platforms have set “barriers” against harmful prompts, including blocking potentially dangerous outputs. However, recent tests by Anthropic and Cisco showed that current safeguards are largely ineffective. The worst performer was DeepSeek, which scored an Attack Success Rate (ASR) of 100%. That is, it failed to block a single harmful prompt, including those related to Schmidt’s biggest concern: biological weapons. More worryingly, models like GPT-4o and Llama 3.1 405B also recorded fairly high ASR rates (86% and 96%, respectively).

Google’s former CEO said he supports regulations on AI companies. However, he also warned that excessive regulation could stifle innovation in the segment, so he urges finding a balance between development and security. He also backed export controls on AI hardware to other countries. Former President Joe Biden implemented the measure before leaving office to try to slow the progress of rivals in the AI field. However, Donald Trump could still reverse the order to keep foreign AI companies from turning to other chip suppliers.

Meanwhile, Google recently changed course on its vision for AI. The company has updated its policies, opening the door to offering its AI tech for the development of weapons and surveillance.