Europe is lagging behind America in the AI arms race — and that’s a good thing

When it comes to man vs machines, Serge Sharoff knows which one we should pick
Serge Sharoff

As we all look intently to the US and China in the AI arms race, we seem to be ignoring Europe. The continent is too rarely part of the conversation, either left out entirely or castigated for what has been branded a suffocating, red-tape-laden approach. Many of us working in AI, however (I am Professor of Language Technology at the University of Leeds), take the view that Europe, which prioritises responsibility and equality over sheer power, might be making the right decision.

While OpenAI, operating in the US and the UK, has recently slashed the time it spends testing its most powerful models, the EU has put ethics, safety and accountability at the heart of its approach. The bloc's AI Act was the world's first legal framework for AI. It has been designed with clear boundaries in place to ensure responsible development, and it categorises AI by risk: banning applications deemed unacceptable and tightly regulating high-risk ones, such as those in healthcare, education and law enforcement.

This is a red rag to libertarian bulls, who decry it as stifling government overreach. But this more cautious approach does precisely the opposite: it protects rather than stifles. Take, for example, the issue of biometric surveillance in public spaces. In Europe, scanning an individual's face without their consent at a train station would be deemed an "unacceptable risk" by law, and the practice is largely banned across the continent. Compare this to the likes of China, where facial recognition is used constantly in public surveillance, or even the US, where regulation has been so patchy that private companies such as Clearview AI have been able to develop surveillance systems with minimal oversight.

The EU’s stance sends a clear message: technological innovation must have people and human development at its core.

Man vs machines

The EU's regulatory framework is there to ensure our liberties are respected. As a language and technology expert, I have found a great deal of comfort in this approach, and I have worked with researchers to see how we can use AI to further the democratic process on the continent. In recent months, my colleagues and I at iDem, a Horizon Europe project, have been working to understand exactly what makes certain texts, notably public documents, harder to understand for people with intellectual disabilities, and to explore whether AI can help simplify those texts to make them more accessible.

We did important work last year with the Maltese government, which launched 58 public consultations and invited citizens to share their views on national policies and reforms. iDem worked to create technology for automatic text simplification, editing official documents to make them simpler and more readable for people with intellectual disabilities. In short, we used AI to give a public voice to those who wouldn’t otherwise have had one.

The UK should adopt a similar approach, though it finds itself at a crossroads. Should we take the business-first approach of America, or follow the template set by the EU, with AI first and foremost in service to society?

Recent research by the consultancy Accenture indicates that the UK has more to gain from AI than any other country, even more than the US. Accenture estimates that average annual GDP growth between 2023 and 2038 could rise from 1.6 per cent to 3 per cent if we properly harness the sector's potential.

Putting people at the heart of our AI strategy is a high-stakes call, and, I would argue, the right one.

Smash the glass ceiling

There is already strong work being undertaken across the country in this regard. The issue of school attendance, for example, has been addressed by researchers at De Montfort University working alongside Willen Primary School in Milton Keynes. Using AI, they have been able to track attendance patterns, identifying Monday as the day that so-called "problem students" are most likely to skip. Willen Primary then launched a "Monday Matters" initiative, enticing students to attend through activities. The school recorded a 96 per cent attendance rate last year, for the first time since 2020.

The UK has a real opportunity to continue on this path and use AI for social change. This reflects a growing consensus among researchers and investors, including AI entrepreneur Rotem Farkash, who said recently: “If harnessed properly, AI presents the opportunity to bridge divides we didn’t think were possible.”

Large firms in the UK already recognise the importance of putting people at the heart of AI development, not least because doing so goes hand in hand with the benefits AI is meant to bring. Deborah Honig, chief customer officer at Samsung UK, said at a conference in Las Vegas that she hopes AI will help women smash the glass ceiling by relieving them of the grunt work, allowing them to do their jobs more efficiently and, for those who are also mothers, leaving them more time to spend with their children.

We need to fundamentally change how we debate and legislate on AI, and remember the people we develop this technology for. The EU may be trailing the US and China on basic metrics, but its focus is the right one. The UK would do well to follow suit.