Yes, this is another article on the internet about ChatGPT. But rest assured, it is written by a human. Promise!
The AI-powered chatbot has taken the world by storm, captivating everyone from people in need of relationship advice to students using it to write essays.
But more importantly, and perhaps more worryingly for the chatbot’s creator, OpenAI, ChatGPT has been attracting snowballing concern from unwelcome quarters.
For years, AI rumbled on in the background, producing developments that were eye-catching but largely superficial. After a slow start, regulators are now dusting off their legal manuals and considering how best to keep a lid on it all.
Which countries have banned ChatGPT?
As soon as it launched, ChatGPT was banned in some places. You guessed it: the list is made up of the usual suspects, countries with pre-existing internet censorship laws.
China, Iran, North Korea and Russia have all blocked access to ChatGPT, fearing its content may act as propaganda. For them, ChatGPT is a question of control; after all, these are the same countries that continue to ban Facebook. So it’s no surprise that an AI free to ‘say’ what it wants is a step too far.
But this raises an interesting contradiction. In the West and elsewhere, internet censorship is heavily frowned upon; freedom from it is a point of pride.
However, ChatGPT is presenting a challenge to this ideal. Countries are being forced to grapple with a sophisticated technology that is ambiguous and uncontrollable by design. In the process, their anxieties around AI are being exposed and confronted in record time.
ChatGPT – The World Tour
Last week, Italy chose the nuclear option, flat-out banning the AI chatbot over data protection concerns. If I’m honest, this seemed inevitable. For a technology whose very existence relies on processing data gathered from users, the likelihood of privacy problems was always going to be high.
Italy’s decision started a chain reaction of worry across the wider EU. Germany, France and Ireland are all now considering bans as they await the evidence of wrongdoing Italy supposedly possesses. Canada has also launched a review along similar lines.
The UK, on the other hand, has taken a slightly more relaxed approach, issuing a warning to OpenAI about keeping data safe while planning to use existing laws to deal with AI rather than making new ones. The US Government has remained largely silent on the matter, although tech figures like Elon Musk, who make use of AI themselves, have surprisingly called for a pause on development due to what they call “profound risks to society and humanity”.
Which brings me to my next question…
Is all this fuss actually about data protection?
How OpenAI stores and uses public data is of course a real concern, just as it is for any other company. Look at Meta, which was recently fined an eye-watering sum by EU regulators for data protection breaches.
However, all the noise coming from authorities suggests that a deeper, more historical anxiety is driving the recent fuss over ChatGPT.
As the quote above hints, AI is perceived as a risk not only to transparency but also to the relevance of humanity on Earth.
In relation to the Italy ban, the European Consumer Organisation, which plays a role in how the EU decides to regulate products including artificial intelligence, had some strong words:
“They [consumers] don’t realise how manipulative, how deceptive it [ChatGPT] can be. They don’t realise that the information they get is maybe wrong.”
This goes far beyond data privacy concerns. It positions AI as an actively malicious thing, something to be feared for how it could (get your bingo cards ready) ‘go rogue’. Above all, it speaks to a real unease about what AI companies are unleashing and how unprepared everyone is to deal with it.
Putting AI on the naughty step
Humans have always been cautious about AI. Just look at the myriad of movies that revolve around some form of crazy, self-aware robot on a rampage. But real-world concerns are more, well, realistic.
In this case, it’s not murderous German-accented robots, but the misuse of information, straight-up disinformation and potential loss of jobs.
When you type a question into Google, you get curated results ranked by relevance and authority. ChatGPT doesn’t do that. It generates an answer from the finite body of text it was fed during training.
This means its answers can be patchy, downright wrong, or even deceptive. I say ‘deceptive’ deliberately, because ChatGPT is built to give you an answer whether or not that answer is true. It will confidently tell you the wrong thing simply because telling you something is what it is designed to do. It’s in its blood/code.
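To make that concrete, here is a minimal sketch of what asking ChatGPT programmatically looks like, written in Python against the openai package’s v0.27-era interface; the model name, question and environment variable are illustrative assumptions, not anything OpenAI or this article prescribes.

# A minimal sketch of querying ChatGPT programmatically.
# Assumptions: the 'openai' Python package (v0.27-era interface) and an
# API key stored in the OPENAI_API_KEY environment variable. The model
# name and question are illustrative, not prescribed by OpenAI.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Who won the 1997 Ballon d'Or?"}],
)

# Unlike a search engine, the reply is generated text: no ranked sources,
# no citations, no confidence score. A fluent wrong answer looks exactly
# like a fluent right one.
print(response["choices"][0]["message"]["content"])

The details of the call don’t matter; what matters is that nothing in the response tells you whether the answer is true.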
In just one example, an Australian mayor declared his intention to sue OpenAI last week after its chatbot wrongly said he was involved in a bribery scandal. If it goes ahead, it will be the first case of its kind and could set a precedent for how we as a society treat AI going forward.
And, as a quick aside, others have managed to get it to write effective malware code, even though it’s designed to refuse. So, that’s just great!
As a result, regulators are now asking themselves: in a world where it’s already difficult to tell fact from fiction, do we really need more inconsistency?
Final thoughts – the future of AI regulation
As yet, there is no middle ground between banning ChatGPT and letting it say whatever it wants, no matter how inappropriate.
The same Irish Data Protection Commission that fined Meta is now looking into OpenAI, which probably means a fine is on the way. There is also talk of an ‘EU Artificial Intelligence Act’ to set the rules out clearly.
One thing is clear from all this: an oddly low level of preparedness from regulators, and a suspicious lack of transparency from creators.
OpenAI has still not detailed how personal information is used to train its AI models. Even Microsoft, the company’s chief backer, has no idea.
On the flip side, regulators should have seen this coming. I could go online in the 2000s and talk to loads of, admittedly pretty bad, AI-powered chatbots. The knee-jerk reaction to ChatGPT is part failure to prepare for an obvious ethical and legal headache, and part indication of how threatening some think AI is.
In the end, this strikes at the very heart of the issue: how big a part do we want AI to play in our lives?
Well, I think we’re all about to find out whether we like it or not.