(VOVWORLD) - Leading artificial intelligence companies made a fresh pledge at the 2nd AI Safety Summit on Tuesday to develop AI safely, while world leaders agreed to build a network of publicly backed safety institutes to advance research and testing of the technology.
The 2nd AI Safety Summit in Seoul, South Korea (Photo: Reuters)
According to a government statement from the UK, which is co-chairing the Summit with South Korea, the 16 AI companies that signed up to the safety commitments include OpenAI (developer of ChatGPT), Google DeepMind, Anthropic, Microsoft, Amazon, IBM, Meta, Mistral AI (France), and Zhipu.ai (China).
The two-day meeting is a follow-up to last November's AI Safety Summit at Bletchley Park in the UK and comes amid a flurry of efforts by governments and global bodies to create guardrails for AI, driven by fears about the risks the technology poses to humanity.
The safety pledge includes publishing frameworks setting out how the companies will measure the risks of their models. In extreme cases where risks are severe and intolerable, the companies commit to hitting the kill switch, halting development or deployment of their models and systems if the risks cannot be mitigated.
Governments and technology companies plan to define this safety threshold before the next AI Summit, scheduled to take place in France next year.