
GPT-4o is of 'medium risk', according to the company

There’s a ton of buzz surrounding generative AI nowadays, but there’s also a ton of fear, because the technology has the potential to cause major damage if it ends up in the wrong hands. Well, according to a new report, you should only be moderately worried about OpenAI’s flagship model: GPT-4o is of medium risk.

Microsoft-backed OpenAI has been in the news lately for its new safety team. Previously, the team included members from outside the company, which is the best route; outsiders can evaluate safety protocols with an unbiased mind. The new safety team, however, consists entirely of people within the company, and it’s a bit tough to trust the opinions of people whose livelihoods are tied to the company they’re evaluating.

GPT-4o is of medium risk

OpenAI just released the System Card for GPT-4o. This is basically a report card full of information about how the company is keeping its model safe. It outlines the safety measures the company has taken and the risks it has identified.

According to the card, GPT-4o got a medium risk score. It’s not quite as bad as it sounds. The model’s safety was evaluated in four areas: Persuasion, Cybersecurity, Biological Threats, and Model Autonomy. The model received a Low risk score on the latter three. As for Persuasion, the model scored a Medium. Since the overall score is defined by the highest score of any area, the model has an overall score of Medium.

What does this mean? At times, this model can produce text that’s more persuasive than human-written text. That’s scary, as people could use text written by ChatGPT to fool the masses for their own ends. So, that’s something the company is going to need to address.

Can we trust this?

The question here is whether or not we can trust this evaluation. Again, it was conducted primarily by an internal team. No matter how strongly the company insists on its accountability, we can’t overlook that fact. It doesn’t matter how good a student is; you’re going to be at least a bit skeptical if they grade their own math test.