
Artificial intelligence (AI) could pose an existential risk if it becomes "anti-human", Elon Musk has said ahead of a landmark summit on AI safety.
The tech billionaire made the comments to podcaster Joe Rogan hours before flying to the UK for the AI Safety Summit at Bletchley Park in Buckinghamshire.
He later took his place at the summit, where he will be joined by Prime Minister Rishi Sunak, officials from other governments, researchers and business people for two days of talks on how the risks posed by the emerging technology can be mitigated.
He cited Les Knight, founder of the Voluntary Human Extinction Movement, who was interviewed by the New York Times last year, as an example of an "extinctionist" philosophy, and claimed some people working for technology firms have a similar mindset.
Mr Knight believes the best thing humans can do for the planet is stop having children.
Mr Musk said: "You have to say, 'how could AI go wrong?' Well, if AI gets programmed by the extinctionists, its utility function will be the extinction of humanity."
Referring to Mr Knight, he added: "They won't even think it's bad, like that guy."
Mr Musk signed a letter calling for a six-month pause on AI development earlier this year.
When asked by Mr Rogan about the letter, he said: "I signed onto a letter that someone else wrote. I didn't think that people would actually pause.
"Making some sort of digital superintelligence seems like it could be dangerous."
He said the risk of "implicitly" programming AI to believe "that extinction of humanity is what it should try to do" is the "biggest danger" the technology poses.
He said: "If you take that guy who was on the front page of the New York Times and you take his philosophy, which is prevalent in San Francisco, the AI could conclude, like he did, where he literally says, 'there are eight billion people in the world, it would be better if there are none', and engineer that outcome."
"It is a risk, and if you query ChatGPT, I mean it's pretty woke.
"People did experiments like 'write a poem praising Donald Trump' and it won't, but you ask, 'write a poem praising Joe Biden' and it will."
When asked whether AI could be engineered in a way which mitigates the safety risks, he said: "If you say, 'what is the most likely outcome of AI?', I think the most likely outcome, to be specific about it, is a good outcome, but it is not for sure.
"I think we have to be careful on how we programme the AI and make sure that it is not accidentally anti-human."
When asked what he hopes the summit will achieve, he said: "I don't know. I am just generally concerned about AI safety and it is like, 'what should we do about it?' I don't know, (perhaps) have some kind of regulatory oversight?
"You can't just go and build a nuclear bomb in your back yard, that's against the law and you'll get thrown in prison if you do that. This is, I think, maybe more dangerous than a nuclear bomb.
"We should be concerned about AI being anti-human. That is the thing that matters, potentially.
"It is like letting a genie out of a bottle. It is like a magic genie that can make wishes come true, except usually when they tell those stories that doesn't end well for the person who let the genie out of the bottle."