Europe Is One Step Closer To Rolling Out AI Regulations

It looks like the era of AI regulation is nigh, starting with Europe. Members of the European Parliament voted overwhelmingly in favor of advancing the AI Act, which would oversee the development and deployment of AI. The AI Act is widely regarded as the first comprehensive set of laws from a major regulator governing AI and its proliferation.


The AI Act, in its expanded form, aims to put a lid on abusive AI use cases, such as remote biometric identification (except for law enforcement needs, and even then only after proper legal authorization), discrimination based on markers like race, gender, religion, or political inclination, predictive behavior profiling, emotion recognition and tracking, and the scraping of facial data from both private and public sources.

The recent modifications to the AI Act also add a special categorization for high-risk AI, which raises the stakes in scenarios like influencing voters, directly affecting human health and safety, and powering recommender systems. Broadly, the AI Act classifies AI systems into four harm-based categories. At the top are unacceptable risks, such as creating or supporting a social scoring system akin to China's.


Lower down the ladder are high-risk implementations such as scanning applicant resumes, migration and asylum governance, judicial and democratic processes, and public health. The remaining two categories are limited risk and minimal or no risk. The draft proposal for the AI Act amendment was officially adopted by the Internal Market Committee and the Civil Liberties Committee in May.

The road ahead for AI regulation

The core goals of the AI Act are to mitigate risks and define clear operational boundaries for AI systems. The regulation also specifies clear-cut obligations for both users and developers, aims to create a governance structure at the national and bloc levels, and seeks to establish assessment guidelines. Open-source projects and scenarios where AI innovation supports small and medium enterprises (SMEs) have been added as exemptions from regulatory oversight.


Another core aim of the AI Act is to stop AI systems from generating illegal content. While mainstream generative AI products like OpenAI's DALL-E and ChatGPT, Microsoft's Bing Chat, and Google's Bard have safeguards in place, there are multiple publicly accessible AI tools out there with no such filters.

This allows for the creation of synthetically altered media, such as explicit deepfakes. Earlier this month, the FBI issued a warning about the rise in deepfake crimes. AI systems also have their own set of fundamental problems, such as "hallucinations" that cause them to generate false "facts" out of thin air. Legal enforcement of the AI Act is still months away, and Europe isn't the only region where AI regulation is picking up pace.


In April, the Commerce Department invited public comment to help shape AI policy recommendations, particularly on the federal safeguards that should be put in place. The same month, China's internet regulator released its own detailed proposal for regulating AI products to align with the country's notorious censorship laws.
