(Sourced from OECD Artificial Intelligence Papers, No. 27, November 2024)

AI RISKS

  1. Facilitation of increasingly sophisticated malicious cyber activity: AI systems can lower the effort needed to mount cyberattacks, including attacks that target critical infrastructure.
  2. Manipulation, disinformation, fraud, and resulting harms to democracy and social cohesion: AI can amplify disinformation and online manipulation, affecting information ecosystems and democratic processes.
  3. Races to develop and deploy AI systems causing harms due to a lack of sufficient investment in AI safety and trustworthiness: Competitive pressures may lead to rapid AI deployment without adequate safety measures.
  4. Unexpected harms resulting from inadequate methods to align AI system objectives with human stakeholders’ preferences and values: Misalignment of AI objectives with human values can lead to unanticipated and potentially harmful consequences.
  5. Power is concentrated in a small number of companies or countries: Dominance in AI resources and capabilities can lead to significant concentrations of market and political power.
  6. Minor to serious AI incidents and disasters occur in critical systems: Failures in AI-integrated critical systems can cause cascading effects and major harms.
  7. Invasive surveillance and privacy infringement: AI-enabled surveillance can erode privacy, fuel discrimination, and suppress political opposition.
  8. Governance mechanisms and institutions unable to keep up with rapid AI evolutions: The fast pace of AI development presents challenges for effective governance and regulation.
  9. AI systems lacking sufficient explainability and interpretability erode accountability: Black-box AI systems make it difficult to understand how decisions are reached, undermining accountability.
  10. Exacerbated inequality or poverty within or between countries: AI can widen social, economic, and digital divides, potentially worsening inequality and poverty.

PRIORITY AI POLICY ACTIONS

  1. Establish clearer rules, including on liability, for AI harms:
    • Clear rules on liability for AI-caused harm can promote accountability and adoption by removing uncertainty. This involves updating or clarifying safety and liability frameworks to address AI incidents effectively.
  2. Consider approaches to restrict or prevent certain “red line” AI uses:
    • Implementing “red lines” can help demarcate and enforce limits on unacceptable AI uses, such as mass surveillance, autonomous weapons, and AI systems that exacerbate discrimination or manipulate human behavior.
  3. Require or promote the disclosure of key information about some types of AI systems:
    • Transparency about AI systems’ nature and use is crucial. Disclosure requirements can include model cards, datasheets, and safety practices to reduce information asymmetries and help users make informed decisions; a minimal model-card sketch follows this list.
  4. Ensure risk management procedures are followed throughout the lifecycle of AI systems that may pose a high risk:
    • Risk management procedures should be in place before and after AI deployment, including impact assessments, protective actions, and continuous monitoring to mitigate high-risk AI systems’ potential harms; a monitoring sketch follows this list.
  5. Mitigate competitive race dynamics in AI development and deployment that could limit fair competition and result in harms:
    • International collaboration and governance efforts are needed to address competitive pressures that may lead to rapid AI deployment without sufficient safety measures, and to ensure fair competition and trustworthiness.
  6. Invest in research on AI safety and trustworthiness approaches, including AI alignment, capability evaluations, interpretability, explainability, and transparency:
    • Funding and incentives for research on AI safety, alignment, and transparency can help develop methods to ensure AI systems’ behavior aligns with human values and preferences, reducing potential harms.
  7. Facilitate educational, retraining, and reskilling opportunities to help address labor market disruptions and the growing need for AI skills:
    • Educational and training programs are essential to equip workers with AI skills, address potential job displacement, and promote equity in AI adoption and use.
  8. Empower stakeholders and society to help build trust and reinforce democracy:
    • Engaging diverse stakeholders in AI development and governance can build trust, align AI innovation with societal needs, and reinforce democratic processes through transparency and public participation.
  9. Mitigate excessive power concentration:
    • Policies should address the centralization of market, economic, and political power in AI, promoting fair competition, distributed ownership, and access to AI resources as digital public goods.
  10. Take targeted actions to advance specific future AI benefits:
    • Governments should take direct actions to capture AI’s potential benefits, such as accelerating scientific progress, improving healthcare and education, and addressing urgent global challenges like climate change.
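
To make the disclosure idea in action 3 concrete, here is a minimal sketch of a machine-readable model card in Python. The fields are illustrative assumptions drawn from common model-card practice, not a schema mandated by any of the frameworks discussed here.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    """Illustrative model card; field names are hypothetical, not a mandated schema."""
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str]
    training_data_summary: str             # datasheet-style description of the data
    evaluation_results: dict[str, float]   # metric name -> score
    known_limitations: list[str]
    safety_practices: list[str]            # e.g. red-teaming, incident contact point

card = ModelCard(
    model_name="example-classifier",
    version="1.0.0",
    intended_use="Routing customer-support tickets by topic.",
    out_of_scope_uses=["medical or legal decision-making"],
    training_data_summary="500k anonymised support tickets, 2019-2023.",
    evaluation_results={"accuracy": 0.91, "macro_f1": 0.88},
    known_limitations=["accuracy degrades on non-English tickets"],
    safety_practices=["quarterly bias audit", "user-facing AI disclosure notice"],
)

# Publishing the card as JSON reduces information asymmetries for deployers and users.
print(json.dumps(asdict(card), indent=2))
```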
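
Likewise, for the continuous monitoring in action 4, one common building block is a statistical check that live model outputs still resemble those seen at validation time. The sketch below uses a two-sample Kolmogorov–Smirnov test; the window sizes and alert threshold are illustrative operating choices, not values prescribed by any regulation.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(reference_scores, live_scores, p_threshold=0.01):
    """Flag distribution drift between a reference window of model scores and a
    recent live window, using a two-sample Kolmogorov-Smirnov test."""
    stat, p_value = ks_2samp(reference_scores, live_scores)
    return p_value < p_threshold, stat, p_value

rng = np.random.default_rng(0)
reference = rng.normal(0.6, 0.10, size=5_000)  # scores captured at validation time
live = rng.normal(0.5, 0.15, size=5_000)       # simulated shifted production scores

alert, stat, p = drift_alert(reference, live)
print(f"drift={alert} KS={stat:.3f} p={p:.2e}")  # an alert would trigger human review
```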

POLICY EFFORTS IN PROGRESS

  1. Facilitation of increasingly sophisticated malicious cyber activity:
    • Efforts include the Frontier AI Safety Commitments and the EU AI Act, which impose robustness and cybersecurity requirements for high-risk AI systems (UK DSIT, 2024; EU AI Act).
  2. Manipulation, disinformation, fraud, and harms to democracy and social cohesion:
    • The Frontier AI Safety Commitments and the EU AI Act include mechanisms to detect and disclose AI-generated content, and the US AI EO establishes guidance for content authentication (UK DSIT, 2024; EU AI Act; US AI EO); a toy authentication sketch follows this list.
  3. Races to develop and deploy advanced AI systems:
    • The EU AI Act and US AI EO place controls on key AI inputs and impose reporting requirements for AI systems trained above certain compute thresholds (EU AI Act; US AI EO); a worked threshold estimate follows this list.
  4. Inadequate AI alignment methods:
    • The EU AI Act imposes stringent obligations on general-purpose AI models that could pose systemic risk (EU AI Act).
  5. Power concentration:
    • Efforts focus on market power, with the US AI EO addressing risks from dominant firms and providing opportunities for SMEs (US AI EO; EU support tools for SMEs).
  6. AI incidents and disasters in critical systems:
    • The EU AI Act classifies AI use in critical infrastructure as high-risk, imposing regulatory obligations and incident reporting (EU AI Act).
  7. Invasive surveillance and privacy infringement:
    • The EU GDPR and US AI EO address privacy risks from AI-enabled data collection, with the EU AI Act imposing strict requirements for biometric identification systems (EU GDPR; US AI EO; EU AI Act).
  8. Governance mechanisms and institutions unable to keep up with rapid AI evolutions:
  9. AI systems lacking sufficient explainability and interpretability:
    • Policies include a right to an explanation for AI outputs, with the EU AI Act and US DARPA’s Explainable AI (XAI) program providing frameworks (EU AI Act; DARPA XAI); an explainability sketch follows this list.
  10. Exacerbated inequality or poverty:
    • The Framework Convention on AI includes requirements related to equality and non-discrimination, with voluntary commitments from leading AI companies to ensure AI does not promote harmful bias (Framework Convention on AI; Voluntary AI Commitments).
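
On item 2, content-authentication guidance generally pairs content with verifiable provenance metadata (production standards such as C2PA use public-key signatures and richer manifests). The following toy sketch conveys only the core idea with an HMAC tag; the key handling and record fields are simplifying assumptions.

```python
import hashlib, hmac, json

SECRET_KEY = b"demo-key"  # stand-in; real provenance systems use asymmetric signing keys

def attach_provenance(content: bytes, generator: str) -> dict:
    """Bind a provenance record to content with an HMAC tag (toy illustration)."""
    record = {"sha256": hashlib.sha256(content).hexdigest(), "generator": generator}
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check the tag over the record, then check the content hash it asserts."""
    claimed = dict(record)
    tag = claimed.pop("tag")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(tag, expected)
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

img = b"...generated image bytes..."
rec = attach_provenance(img, generator="example-image-model")
print(verify_provenance(img, rec))         # True
print(verify_provenance(img + b"x", rec))  # False: content was altered after signing
```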
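
On item 3, the US AI EO sets its reporting threshold at 10^26 integer or floating-point operations, and the EU AI Act presumes systemic risk for general-purpose models trained above 10^25 FLOPs. Training compute is commonly estimated with the approximation FLOPs ≈ 6 × parameters × training tokens; the model sizes below are hypothetical.

```python
# Back-of-the-envelope training-compute estimate against the US AI EO
# reporting threshold of 1e26 operations, using the common approximation
# FLOPs ~= 6 * N * D (N = parameters, D = training tokens).
US_EO_THRESHOLD_OPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

for name, params, tokens in [
    ("70B params, 2T tokens", 70e9, 2e12),   # hypothetical model
    ("1T params, 30T tokens", 1e12, 30e12),  # hypothetical model
]:
    flops = training_flops(params, tokens)
    flag = "reportable" if flops >= US_EO_THRESHOLD_OPS else "below threshold"
    print(f"{name}: {flops:.1e} FLOPs -> {flag}")
```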
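
On item 9, one widely used post-hoc technique of the kind pursued under programs like DARPA XAI is permutation feature importance: shuffle one input feature at a time and measure the resulting drop in model score. Below is a minimal sketch with scikit-learn; the dataset and model are illustrative stand-ins for a "black box" system.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative opaque model on a standard dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: permute each feature and record the score drop,
# yielding a simple post-hoc account of which inputs drive the outputs.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for feature, importance in ranked[:5]:
    print(f"{feature}: {importance:.4f}")
```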