Nowadays, it’s getting harder and harder to tell what’s been written by a human and what’s been spat out by an AI chatbot. People have been grappling with this since the early days of the AI boom. Well, one of the most popular AI companies has a tool that could help cut down on the confusion, but it isn’t using it. OpenAI has a fingerprinting tool that could help people identify when text was generated with ChatGPT.
OpenAI has a fingerprinting tool for ChatGPT
We all know the major issues with AI generation. Why spend time writing your college report when you can just ask Professor ChatGPT? Countless students have cheated their way through their classes thanks to AI, and no one has come up with a reliable way to detect it.
Things were bad at the beginning, and they’ve only gotten worse in the nearly two years since ChatGPT launched. Well, the company behind ChatGPT, OpenAI, actually has a powerful tool that could greatly cut down on this sort of practice: a fingerprinting tool. Basically, when someone generates text with ChatGPT, the tool applies a “fingerprint” to the output.
This fingerprint is a pattern embedded in the output that’s invisible to human readers. However, software built to look for that pattern can pick it up. In theory, you’d only need to run a piece of text through such a detection tool to find out whether it was generated.
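OpenAI hasn’t published how its fingerprint actually works, so treat the following as a rough illustration only. One well-known approach from the research world is statistical watermarking: the generator quietly favors words from a pseudorandom “green list” derived from a secret key, and a detector checks whether a text contains far more green words than chance would allow. Every name and number in this sketch (the key, the threshold, the word-level hashing) is a made-up assumption, not OpenAI’s method.

```python
import hashlib

# Hypothetical sketch of the *detection* side of a statistical watermark.
# Unmarked text should land near a 50% "green" rate; text generated with a
# matching watermarked sampler would skew noticeably higher.

SECRET_KEY = "not-the-real-key"  # assumed shared secret between generator and detector


def is_green(previous_word: str, word: str) -> bool:
    """Deterministically mark roughly half of all word pairs as 'green'."""
    digest = hashlib.sha256(f"{SECRET_KEY}:{previous_word}:{word}".encode()).digest()
    return digest[0] % 2 == 0


def green_fraction(text: str) -> float:
    """Fraction of consecutive word pairs that land on the green list."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return hits / (len(words) - 1)


def looks_fingerprinted(text: str, threshold: float = 0.65) -> bool:
    """Flag text whose green rate is well above the ~0.5 expected by chance."""
    return green_fraction(text) > threshold


print(looks_fingerprinted("The quick brown fox jumps over the lazy dog."))
```

The key design point is that the pattern is statistical rather than visible: no single word gives it away, but across a whole essay the skew becomes hard to miss, which is how a detector could plausibly reach very high accuracy on longer texts.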
The fingerprinting method reportedly boasts a stellar 99.9% detection rate. That sounds like something that should already be on the market. So, why isn’t it?!
Well, according to the report, the company isn’t launching the tool because of internal conflict; it’s as if the company can’t make up its mind. Among other concerns, a recent survey revealed that about a third of OpenAI’s customers said they don’t want fingerprinting applied to their generations.
So, OpenAI is sitting on a golden opportunity, but it’s hesitating to share it with the world. If OpenAI won’t release the tool, let’s hope some other company builds one.