Italy May Fine OpenAI $20 Million, Block ChatGPT For Violating Data Protection Laws

There is no denying that the fast-paced development of advanced AI systems has increased the chances of these systems out-competing humans at general tasks. Given the speed of that development, there is growing agreement that widespread deployment of AI systems could lead to millions of job losses. Several organizations and individuals have also called for regulation of the AI industry so that we understand the true implications these technologies could have on human society. Ethical issues aside, questions have also been raised about how many of these generative AI chatbots source and present their information.


Amid calls to pause the development of advanced AI systems, Italy, an EU member state with strict data protection laws, recently raised concerns that OpenAI may have unlawfully processed people's data. In fact, the country's Privacy Guarantor went a step further and issued a notice to OpenAI — the company behind ChatGPT — ordering it to stop processing Italians' data immediately. Italy has given OpenAI 20 days to respond to the regulator and explain how the company intends to address these issues.

Failure to comply could result in OpenAI being fined 20 million euros ($21.8 million). Italy is also mulling the possibility of blocking access to ChatGPT if the company doesn't address its concerns. In addition, Italy is seeking answers from OpenAI on how it intends to prevent minors from using the technology.


Did OpenAI process European data unlawfully?

The Italian DPA's central question is whether OpenAI breached the EU's General Data Protection Regulation (GDPR) while training its AI models, and the agency has opened an independent investigation to find out. The regulator believes that OpenAI had no "legal basis" for the mass collection of people's data and its use in training the company's AI models.


Additionally, the agency is concerned that OpenAI has no system in place to prevent minors from accessing the technology. Interestingly, OpenAI appears to have skirted the issue by adding a clause to its terms of service stating that children under 13 are not allowed to sign up for the service.

The concerns raised by the Italian agency are the latest regulatory and legal hurdle to confront OpenAI. This week, several organizations filed complaints against the company calling for a freeze on further development and future ChatGPT releases. These organizations fear that OpenAI, in its current state, lacks sufficient safeguards on its platform to prevent instances of AI misuse.

As we await an official statement from OpenAI on the matter, it will be interesting to see how the company chooses to address the latest regulatory challenge it's up against.
