AI Regulations – top of the agenda?

Many nations have finally emerged from their AI regulatory confusion and hesitation. Much-awaited initial actions to regulate the exponential growth of the AI industry have begun on a global scale. So what are nations doing? Here is a summary of the latest steps taken by various nations and the G7.

BRITAIN

As of 2020, nearly two-thirds of UK firms used the same few Big Tech cloud service providers to deliver their services. With so many financial services relying on Critical Third Parties, there is much concern about where responsibility lies when things go wrong; principally, it will rest with the outsourcing firm.

The Financial Conduct Authority (FCA) will regulate firms designated as Critical Third Parties where they underpin financial services and can affect stability and confidence in the UK markets.

According to Nikhil Rathi, Chief Executive of the FCA, the FCA, the Bank of England and the Prudential Regulation Authority will regulate these Critical Third Parties by setting standards for the services, including AI services, that they provide to the UK financial sector. That also means making sure they meet those standards and ensuring their resilience.

The UK is trying to understand what it means for competition if Big Tech firms have access to unique and comprehensive data sets such as browsing data, biometrics and social media data.

The UK has invested in technology horizon scanning and synthetic data capabilities, and in 2023 established its Digital Sandbox, the first of its kind used by any global regulator, which uses synthetic transaction, social media and other data to help fintech and other innovations develop safely. The UK is also consulting the Alan Turing Institute and other legal and academic institutions to determine the best way to regulate AI.

CHINA

CNN reports that China has published new rules for generative artificial intelligence (AI), becoming one of the first countries in the world to regulate the technology that powers popular services like ChatGPT.

Among the key provisions is a requirement for generative AI service providers to conduct security reviews and register their algorithms with the government if their services are capable of influencing public opinion or can “mobilize” the public. In the latest version of the provisions, China has dropped the fines it had proposed for AI companies in cases of non-compliance. Read more here.

G7 NATIONS

As per Forbes, the G7, composed of the world’s seven most advanced economies, recognized the urgency of addressing the impact of AI at their May 2023 meeting in Hiroshima. They agreed to tackle this challenge with a “risk-based” approach to navigating this uncertain territory. To guide their efforts, they identified key areas: acknowledging the importance of AI, balancing its risks and benefits, educating the public about AI, and calling for “guardrails” on its use.

The G7 digital ministers released an early statement on artificial intelligence, as reported by the Financial Times, reaffirming that AI policies and regulations should be “human centric” and preserve human rights, privacy and personal data. Read more here.

EUROPEAN UNION

The World Economic Forum has reported that the European Union (EU) is working on a new legal framework that aims to significantly bolster regulations on the development and use of artificial intelligence. 

The proposed legislation, the Artificial Intelligence (AI) Act, focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy.

In June, changes to the draft Artificial Intelligence Act were agreed, including a ban on the use of AI technology for biometric surveillance and a requirement for generative AI systems like ChatGPT to disclose AI-generated content.

But in an open letter signed by more than 150 executives, European companies from Renault to Heineken warned of the impact the draft legislation could have on business.

The cornerstone of the AI Act is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The framework includes four risk tiers: unacceptable, high, limited and minimal.

AI systems with limited and minimal risk, like spam filters or video games, are allowed to be used with few requirements other than transparency obligations. Systems deemed to pose an unacceptable risk, like government social scoring and real-time biometric identification systems in public spaces, are prohibited with few exceptions.
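To make the tiering concrete, here is a minimal, purely illustrative Python sketch of how the four risk tiers and their obligations might be modeled. The example use cases and one-line obligation summaries are simplifying assumptions for illustration, not text from the Act.

```python
# Illustrative sketch only: the four tiers come from the draft EU AI Act,
# but the example use cases and obligation summaries below are simplified
# assumptions for demonstration, not legal text.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited, e.g. social scoring
    HIGH = "high"                  # strict conformity requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated


# Hypothetical mapping of example systems to tiers.
EXAMPLE_CLASSIFICATION = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "real-time public biometric identification": RiskTier.UNACCEPTABLE,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "video game AI": RiskTier.MINIMAL,
}


def obligations(tier: RiskTier) -> str:
    """Return a one-line (simplified, assumed) summary per tier."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited, with few exceptions",
        RiskTier.HIGH: "data quality, oversight and accountability rules",
        RiskTier.LIMITED: "transparency obligations (disclose AI use)",
        RiskTier.MINIMAL: "no additional requirements",
    }[tier]


if __name__ == "__main__":
    for system, tier in EXAMPLE_CLASSIFICATION.items():
        print(f"{system}: {tier.value} -> {obligations(tier)}")
```

Running the sketch prints each example system alongside its tier and a summary obligation, mirroring how the Act scales requirements with risk.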

US 

In 2021, the EU and US established the EU-US Trade and Technology Council (TTC) to build trust and foster cooperation in tech governance and trade. The TTC serves as a forum for driving digital transformation and collaborating on new technologies.

EU and US lawmakers are working together to create a voluntary code of conduct for artificial intelligence (AI). The aim of this code is to set forth standards for the use of AI technology, bridging the gap until formal laws like the EU AI Act are passed.

While the code will be voluntary and not legally binding, large AI companies are expected to adhere to it, given that calls to regulate AI are growing louder.

As per the Guardian, the White House and the federal government have announced various measures to address the AI fervor, hoping to make the most of it while avoiding the free-for-all that led to the social media reckoning of the last decade. The administration has issued executive orders asking agencies to implement artificial intelligence in their systems “in a manner that advances equity”, invested $140m in AI research institutes, released a blueprint for an AI bill of rights, and is seeking public comment on how best to regulate the ways in which AI is used. Read more here.
