Microsoft calls on Congress to outlaw abusive AI deepfakes

AI technology has its share of downsides, and one of the most troubling is AI deepfakes. Microsoft acknowledges this, and it's calling on Congress to help outlaw abusive AI deepfakes.

An AI deepfake is a piece of media that has been manipulated to make it look like the subject did or said something they never did. Say there's a video of you giving a speech about ending world hunger. If someone manipulates your lip movements and audio to make it sound like you want to withhold food from other countries, that's a deepfake. And that's only one form it can take.

Deepfakes can also be harmless fun; YouTube is full of videos where people swap actors in and out of famous movie scenes. So, the technology itself isn't bad. The issue comes when people use it maliciously.

Microsoft wants Congress to police the use of AI deepfakes

With great power come people who'll abuse it and ruin a fun technology for everyone. Deepfakes have been around for a long time, but generative AI has caused abusive deepfakes to explode. We all remember the recent drama over the sexually explicit AI-generated images of Taylor Swift.

That's far from the only example of this sort of behavior. It's bad enough when it happens to Taylor Swift, who is 34; it's worse when the targets are less than half her age. AI-generated CSAM is another major (and not unexpected) consequence of AI technology.

According to a new report, Microsoft's vice chair and president, Brad Smith, is calling on Congress to formally outlaw abusive AI deepfakes. In a blog post, he called for a "deepfake fraud statute." This would give people the legal grounds to prosecute those who create abusive AI-deepfaked content, covering any fraudulent media that makes it appear someone did something they didn't.

It’s an important time

This is especially crucial now, as 2024 is a major election year. AI deepfakes have been running rampant, and they're only going to increase as voters head to the polls. In the U.S., we've seen deepfakes targeting both major political parties.

"One of the most important things the U.S. can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans," Smith said in his blog post.

Hopefully, the government can come up with some sort of framework to police AI deepfakes.