The artificial intelligence boom took the tech industry by surprise. The adoption of AI brought numerous positive outcomes, such as significant advances in key technologies and the development of impressive tools that previously seemed impossible. However, it also brought challenges that, years later, the industry is still trying to address properly. Scams and deceptions based on AI-powered deepfakes rank among the most worrying of them.
One of AI's main virtues is also the source of one of its biggest drawbacks: the boost it has given to the pace of development in the tech industry. While this is positive for the industry itself, it has put regulators in a bind. AI companies themselves have been caught up in the problem, since not all of them have tools that can accurately identify when a file is a deepfake and notify the user.
What is a deepfake?
Perhaps you are still not entirely clear about what a deepfake is. Basically, it’s content generated by AI models in which a person’s face or body is digitally altered, usually by superimposing it on someone else’s. In this way, the victim appears to be involved in situations they were never part of. The same applies to audio files in which malicious actors “clone” someone’s voice to make them say whatever they want.
In the right hands, these types of tools are quite useful and allow you to exploit your creativity to the fullest. However, as AI deepfakes have become more and more realistic, malicious actors have started using them for their scams.
Initially, there were obvious errors in AI-generated images, making them easy to identify with the human eye. You just had to look at things like the fingers; yes, drawing hands is difficult even for powerful AIs. However, as generative AI models have evolved, the output has become increasingly realistic. Today, many AI-generated images are indistinguishable from reality. In fact, we could say that you can more easily identify AI-generated artwork than an AI-generated photorealistic image or video.
Deepfake tools are now available to everyone, for better and for worse
As if that weren’t enough, companies aren’t satisfied with just image-generation tools. In recent years, we’ve even seen video-generation models with impressive results. This month, ByteDance—parent company of TikTok—launched an AI video-generation model with the most realistic results we’ve ever seen. The model, called OmniHuman-1, promises video deepfakes up to 10x better than before. We’re talking about an incredible generational leap in quality, and we’ll see things like this every year. Below, you can see a small example of what OmniHuman-1 is capable of.
China is on 🔥 ByteDance drops another banger AI paper!
OmniHuman-1 can generate realistic human videos at any aspect ratio and body proportion using just a single image and audio. This is the best i have seen so far. 10 incredible examples and the research paper Link👇 pic.twitter.com/5OjNj0797t
— AshutoshShrivastava (@ai_for_success) February 4, 2025
Better deepfakes and the democratization of AI tools have led to more people using these platforms. Students can find them useful for creating complementary images for projects or presentations. Artists can use them as a starting point for inspiration. You can also generate an image just to have fun with friends.
But what happens when malicious actors come into play? Deception, manipulation, extortion… basically all the forms of scams that already existed, but made much easier by AI deepfakes.
Incidents related to AI-powered deepfake scam attempts
There have been countless instances of scams, in all sorts of contexts, in which AI deepfakes played the leading role. Resemble.AI maintains a huge database of incidents related to the use of generative AI. Among the most recent cases is the promotion of stocks or crypto assets using the face and voice of investor Vijay Kedia. It is also very common to see Elon Musk’s image in videos where he supposedly promotes a cryptocurrency. Practically every hacked YouTube channel shows the same video of “Elon Musk” trying to scam people out of money or crypto.
Another major risk related to deepfakes is the potential cases of sextortion. This is a practice where malicious third parties create fake nudes to try to blackmail people by threatening to spread them publicly or send them to their families. Teenagers are usually the main target of these situations, as they are more likely to interact with strangers on social networks. Platforms such as Instagram have taken measures to try to stop this type of interaction. However, the teen’s exposure begins the moment they decide to chat with a stranger on social media.
There are also those who use deepfakes to try to manipulate public opinion. These cases have multiple origins, although they often come from the political sphere. For example, bad actors can manipulate videos to encourage voting for or against a candidate in an election. These videos may try to make a particular candidate look bad or, on the contrary, make them look extremely good in people’s eyes.
Similarly, there are instances where people use deepfake images or videos as a tool to spread hate. There have been cases of manipulated media designed to stir up aversion toward a particular community or group. These videos can show people saying things they never said or doing things they never did.
Impersonation is another possibility in the AI era thanks to the power of deepfakes. Resemble.AI lists a case from mid-February in which scammers impersonated legitimate claim holders. AI tools allowed them to create realistic videos impersonating others, enabling them to steal about $5.6 million in assets. In this case, the scammers also had some of the victims’ personal data, probably obtained through social engineering. This scenario therefore combines the risk of deepfakes with the need to safeguard all your personal information.
People are increasingly looking for ways to detect deepfakes
There are countless cases of deepfakes being used to deceive, scam, or threaten, so people should be increasingly cautious about the multimedia content they see on the internet. In fact, the rising concern is already reflected in statistics. Online searches related to “deepfake images” increased by 15% worldwide in 2024, with spikes occurring just after news of hoaxes or scams using AI-powered videos or images emerged. Some of the most popular related search terms were “how to detect deepfake images” and “real vs. AI-generated images.” The growing concern is real.
Attempts at hoaxes or scams based on AI deepfakes are likely to only increase from now on. Nowadays, even smartphones have apps that can generate this type of content. Plus, the smartphone itself makes it easy to spread manipulated videos or images, which makes it a double-edged sword in the age of artificial intelligence.
You could even act as a “bad actor” unwittingly, without really being one. You don’t need to be the one who creates a particular deepfake file. All it takes is receiving a manipulated image, video, or audio clip and sharing it with your contacts or on your social networks as if it were real. This is often not entirely your fault since, as we said before, deepfake content can be extremely realistic.
To avoid cases where you act as a “deepfake disseminator” without knowing it or fall for potential scams promoted by bad actors—which could even put your security at risk—AI-focused companies must act. After all, they offer the tools to create this type of content. So, they also have the intrinsic responsibility to facilitate ways to detect it.
“Content Credentials” and “SynthID” seek to boost transparency in the era of generative AI
The tech industry has not stood still in the face of this situation. The growing number of undesirable incidents involving deepfakes led to the creation of the Coalition for Content Provenance and Authenticity (C2PA). From this coalition was born “Content Credentials,” a standard that seeks transparency in the era of generative AI. Content Credentials basically embeds specific metadata in AI-generated or AI-manipulated images so they can be identified as such.
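To give an idea of what this looks like in practice, below is a minimal, illustrative Python sketch that only checks whether a JPEG file appears to carry a Content Credentials manifest. It assumes the manifest is embedded as JUMBF data inside APP11 segments labeled “c2pa,” which is how C2PA typically stores it in JPEG files. The sketch detects presence only; it does not verify the credential’s cryptographic signature, which is what the official C2PA tools (such as c2patool) are designed to do.

```python
# Heuristic check for Content Credentials (C2PA) metadata in a JPEG.
# Assumption: the manifest is stored as JUMBF data in APP11 (0xFFEB)
# segments whose payload mentions the "c2pa" label. Presence check only,
# not signature verification.
import struct
import sys

def has_c2pa_segment(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):          # SOI marker missing: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                        # lost sync with the marker stream
            break
        marker = data[i + 1]
        if marker == 0xDA:                         # SOS: compressed image data follows
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 segment carrying the C2PA label
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    found = has_c2pa_segment(sys.argv[1])
    print("Content Credentials metadata found" if found
          else "No Content Credentials metadata found")
```

A real reader or verifier would parse the JUMBF boxes and validate the manifest’s signature chain rather than just scanning for the label, but the sketch shows where the credential lives inside the file.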
While this sounds simple and quick to implement, it has not been so far. That’s because both device manufacturers and AI developers must adopt the standard. That makes the process slow, especially compared to the pace of the evolution of artificial intelligence capabilities. Last year, Sony launched the Alpha 1 II flagship camera with support for Content Credentials. The latest Samsung Galaxy S25 series is also compatible with the standard.
In mid-September 2024, Google announced its entry into the C2PA committee. The Mountain View giant will contribute its resources and experience to try to extend the range of tools that facilitate transparency in the AI era. Google has “SynthID” as its own system to identify edited content using AI-powered tools. In fact, this month, the company announced that photos generated through “Reimagine,” the feature in Google Photos’ Magic Editor suite, will feature a SynthID AI watermark. The “AI watermark” will be available from the “AI Info” section in the photo/video details.
That said, SynthID is a less capable standard than Content Credentials. Currently, in the smartphone industry, Samsung has taken the lead in transparency regarding deepfakes by integrating Content Credentials natively. No other phone brand has done this so far, so it’s a big step. It remains to be seen whether Content Credentials will come to older Galaxy devices via software updates.
Tips to avoid falling for an AI deepfake scam
While many deepfakes may be indistinguishable from reality, there are some steps you can take on your own to avoid falling victim to them. The first is to remain skeptical of media content involving public figures. If an actor promotes a fake cryptocurrency or a politician is in a surreal situation, your first reaction should be to doubt the file’s authenticity.
If the content allegedly comes from a particular public figure, check their official social media accounts. Videos of public figures often come from their own channels, so if you can’t find the content there, stay alert. You should also enable AI content identification features on your device and apps; in some cases these tools are enabled by default, but in others they are not.
Lastly, there are also deepfake detection tools you can turn to. While they may fail occasionally, they are usually quite effective. Multiple platforms of this kind are available, some open to the public and others exclusive to a particular system or hardware. Deepware Scanner is one you can use freely from a browser, while others, such as DuckDuckGoose, Intel FakeCatcher, and Reality Defender, are designed with businesses in mind. The sketch below illustrates, in general terms, the upload-and-score workflow these services tend to follow.
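The following Python sketch is purely hypothetical: the endpoint, header, field names, and response keys are placeholders and do not correspond to the real API of Deepware Scanner, DuckDuckGoose, Intel FakeCatcher, or Reality Defender. It only shows the general pattern of uploading a file and reading back a confidence score; consult each vendor’s documentation for the actual interface.

```python
# Hypothetical sketch only: the URL, credential, and response keys below
# are placeholders, not the real API of any detection product mentioned
# in this article.
import requests

DETECTION_ENDPOINT = "https://api.example-detector.com/v1/analyze"  # placeholder URL
API_KEY = "your-api-key-here"                                       # placeholder credential

def check_media(path: str) -> float:
    """Upload a media file and return the service's deepfake score (0 to 1)."""
    with open(path, "rb") as media:
        response = requests.post(
            DETECTION_ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": media},
            timeout=120,
        )
    response.raise_for_status()
    result = response.json()
    # Treat the score as a hint, not proof: detectors can miss content
    # produced by newer generative models.
    return float(result.get("deepfake_probability", 0.0))

if __name__ == "__main__":
    score = check_media("suspicious_clip.mp4")
    print(f"Deepfake probability: {score:.2f}")
```

Whatever tool you use, treat its verdict as one signal among several, alongside checking the source and the official accounts of the people involved.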
In short, AI-powered deepfakes are here to stay, but that’s not necessarily a bad thing. The impressive capabilities of artificial intelligence are also very useful for boosting productivity, seeking additional inspiration, optimizing workflow, saving time, and unleashing your creativity. However, you should be aware of the context we’re in and be more vigilant about how you handle the content you consume on the internet.
Hopefully, more AI companies and tool developers will adopt standards like Content Credentials. This would be a great help to the general public, especially the less tech-savvy.