(VOVWORLD) - Global efforts to build legal frameworks to oversee and manage the development and use of artificial intelligence (AI) technology have made progress with the EU’s final approval of the Artificial Intelligence Act and technology corporations’ stronger commitments to AI safety.
EU countries on Tuesday gave final approval to the EU AI Act, a landmark law regulating AI, especially generative AI systems.
A milestone in AI management
Mathieu Michel, Secretary of State for Digitalization of Belgium, which holds the rotating Presidency of the Council of the European Union, said the Act is the first of its kind in the world to regulate AI, helping to address global technological challenges and creating opportunities for economies and societies. With the AI Act, the EU emphasizes the importance of trust, transparency, and accountability in facing problems arising from new technology, while ensuring that AI technology can benefit European innovation.
The AI Act establishes criteria to classify AI systems by the level of risk they pose – unacceptable, high, limited, or low. Non-compliance with the ban on AI systems posing unacceptable risk is punishable by fines of up to 35 million euros or 7% of total worldwide annual turnover, whichever is higher.
The European Parliament has proposed forming an EU AI Office to support the enforcement of AI regulations and to guide and coordinate cross-border investigations. Once the AI Act is officially published in the EU's official journal, the provisions on AI used for social scoring, predictive policing, or facial recognition drawing on the Internet or security cameras (CCTV) will take effect within 6 months. Provisions on generative AI models, such as ChatGPT and Google Gemini, will take effect after 12 months. The remaining provisions will take effect from the beginning of 2026.
“The new law categorises different types of artificial intelligence according to risk. AI systems, for example, cognitive behavioural manipulation and social scoring will be banned from the EU because their risk is deemed unacceptable,” European Commissioner for Industry and Internal Market, Thierry Breton, said.
Experts said the EU AI Act is more comprehensive than the voluntary compliance approach of the US or the social stability approach of China, so the Act will have a global impact.
Patrick Van Eecke of the US’s Cooley law firm said companies outside the EU that use EU customer data will have to comply with this Act. Other countries and regions may consider it a model for developing separate regulations governing AI, as they have done with the EU’s General Data Protection Regulation.
Safe AI
Europe also recorded another notable event related to AI management. On May 17 the Council of Europe adopted the first global binding treaty governing the use of artificial intelligence.
The Council of Europe said its AI Treaty sets out the legal framework for all stages of AI development and use, addresses AI's potential risks, and promotes responsible technological innovation. One difference from the EU AI Act is that non-European countries will also participate in the Treaty, which was developed over two years by an intergovernmental body bringing together the 46 member states of the Council of Europe, the European Union, and 11 non-member states, including the US, Japan, Argentina, Israel, and Uruguay.
Last week, the 2nd AI Safety Summit was held in Seoul, the Republic of Korea, where 16 leading technology corporations, including OpenAI, Google DeepMind, Microsoft, Amazon, IBM, Meta, Mistral AI, and Zhipu.ai, agreed on new commitments to AI safety. The AI companies will disclose how they assess unacceptable risks and what they will do to ensure that risks do not exceed the allowable threshold.
“Safety is a priority in the use of artificial intelligence. Safety strengthens public trust. No one can deny that safety is a critical element that determines the competitive edge and sustainability of AI models in the global market,” RoK Prime Minister Han Duck-soo said.
Governments and technology companies are expected to agree on a clear, specific definition of AI safety before the next AI Safety Summit, to be held in France.