A flurry of AI governance activity has gripped nations over the last month. It is time nations realised that the risks emanating from AI do not respect national boundaries. Therein lies the need for global co-operation, standards and coordination on AI development, implementation and common governance approaches. Is global co-operation really happening? Here is a summary of what has been going on in the last month.
USA
On 30th October 2023, the Biden administration issued the Executive Order for the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to US Government departments. Based on the order, NIST has invited organizations to provide letters of interest describing technical expertise and products, data, and/or models to enable the development and deployment of safe and trustworthy AI systems through the AI Risk Management Framework (AI RMF).
United Kingdom
The UK held a global AI Safety Summit on 1-2 November 2023, built around Capabilities and risks from frontier AI, a discussion paper on the need for further research into AI risk. Many world leaders were absent, with their countries represented by other officials, leaving the impression that the absent leaders had other, more pressing concerns to deal with. The paper itself presents a mature and well-researched perspective on the risks posed by AI development, along with a summary of the current state of AI technologies and the challenges they face. Read about it here.
India
India co-chaired and hosted the Global Partnership on AI (GPAI) 'Future of Work' working group online convention on 8th November 2023. The GPAI has a number of ongoing projects aimed at providing AI-related use cases and curated information to AI regulators, practitioners and investors. India is hosting a GPAI Summit from 12th to 14th December 2023, including a Global AI Expo, conducted by the Ministry of Electronics and Information Technology. Global AI companies are expected to showcase their products during the Expo.
European Union
After two years of negotiations, the draft AI Act was approved by the European Parliament in 2023. The draft rules now need to be agreed in meetings between the Parliament and EU member states to thrash out the final versions of the laws, a process known as the trilogue. Confidential discussions, the most recent round held on 25th October 2023, are focusing on Article 6, which deals with "high risk AI systems", and on exemptions for "accessory AI systems". The earliest decision is expected in December 2023. Read more here.
UNITED NATIONS – Source Reuters Report
The UN is planning regulations for AI.
The U.N. Security Council held its first formal discussion on AI in July, addressing military and non-military applications of AI that “could have very serious consequences for global peace and security”, Secretary-General Antonio Guterres said. Guterres has backed a proposal by some AI executives for the creation of an AI watchdog, and announced plans to start work on a high-level AI advisory body by the end of the year.
Australia – Source Reuters Report
Australia will make search engines draft new codes to prevent the sharing of child sexual abuse material created by AI and the production of deepfake versions of the same material, its internet regulator said in September.
China – Source Reuters Report
China published proposed security requirements for firms offering services powered by generative AI on Oct. 12, including a blacklist of sources that cannot be used to train AI models. The country issued a set of temporary measures in August, requiring service providers to submit security assessments and receive clearance before releasing mass-market AI products.
Japan – Source Reuters Report
Japan is investigating possible AI-related breaches within the country. Japan expects to introduce regulations by the end of 2023 that are likely to be closer to the U.S. approach than to the stringent rules planned in the EU, an official close to the deliberations said in July. The country's privacy watchdog has warned OpenAI not to collect sensitive data without people's permission.