{"id":1038,"date":"2025-01-29T12:42:02","date_gmt":"2025-01-29T12:42:02","guid":{"rendered":"https:\/\/aipathfinder.org\/?page_id=1038"},"modified":"2025-01-29T12:42:02","modified_gmt":"2025-01-29T12:42:02","slug":"ai-risks-policy","status":"publish","type":"page","link":"https:\/\/aipathfinder.org\/index.php\/ai-risks-policy\/","title":{"rendered":"AI Risks &amp; Policy"},"content":{"rendered":"\n<p>(Sourced from OECD ARTIFICIAL INTELLIGENCE PAPERS November 2024 No. 27)<\/p>\n\n\n\n<p><strong>AI RISKS<\/strong><\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li><strong>Facilitation of increasingly sophisticated malicious cyber activity<\/strong>: AI systems can lower the effort needed for cyberattacks, potentially targeting critical infrastructure.<\/li>\n\n\n\n<li><strong>Manipulation, disinformation, fraud, and resulting harms to democracy and social cohesion<\/strong>: AI can amplify disinformation and online manipulation, affecting information ecosystems and democratic processes.<\/li>\n\n\n\n<li><strong>Races to develop and deploy AI systems causing harms due to a lack of sufficient investment in AI safety and trustworthiness<\/strong>: Competitive pressures may lead to rapid AI deployment without adequate safety measures.<\/li>\n\n\n\n<li><strong>Unexpected harms resulting from inadequate methods to align AI system objectives with human stakeholders\u2019 preferences and values<\/strong>: Misalignment of AI objectives with human values can lead to unanticipated and potentially harmful consequences.<\/li>\n\n\n\n<li><strong>Power is concentrated in a small number of companies or countries<\/strong>: Dominance in AI resources and capabilities can lead to significant market and political power concentration.<\/li>\n\n\n\n<li><strong>Minor to serious AI incidents and disasters occur in critical systems<\/strong>: Failures in AI-integrated critical systems can cause cascading effects and major harms. 
<\/li>\n\n\n\n<li><strong>Invasive surveillance and privacy infringement<\/strong>: AI-enabled surveillance can erode privacy, fuel discrimination, and suppress political opposition.<\/li>\n\n\n\n<li><strong>Governance mechanisms and institutions unable to keep up with rapid AI evolutions<\/strong>: The fast pace of AI development presents challenges for effective governance and regulation.<\/li>\n\n\n\n<li><strong>AI systems lacking sufficient explainability and interpretability erode accountability<\/strong>: Black box AI systems make it difficult to understand decision processes, leading to challenges in accountability.<\/li>\n\n\n\n<li><strong>Exacerbated inequality or poverty within or between countries<\/strong>: AI can increase social, economic, and digital divides, potentially worsening inequality and poverty.<\/li>\n<\/ol>\n\n\n\n<p><strong>PRIORITY AI POLICY ACTIONS<\/strong><\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li><strong>Establish clearer rules, including on liability, for AI harms<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Clear rules on liability for AI-caused harm can promote accountability and adoption by removing uncertainties. This involves updating or clarifying safety and liability frameworks to address AI incidents effectively.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Consider approaches to restrict or prevent certain \u201cred line\u201d AI uses<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Implementing &#8220;red lines&#8221; can help demarcate and enforce limits on unacceptable AI uses, such as mass surveillance, autonomous weapons, and AI systems that exacerbate discrimination or manipulate human behavior.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Require or promote the disclosure of key information about some types of AI systems<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Transparency about AI systems&#8217; nature and use is crucial. 
Disclosure requirements can include model cards, datasheets, and safety practices to reduce information asymmetries and help users make informed decisions.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Ensure risk management procedures are followed throughout the lifecycle of AI systems that may pose a high risk<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Risk management procedures should be in place before and after AI deployment, including impact assessments, protective actions, and continuous monitoring to mitigate high-risk AI systems&#8217; potential harms.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Mitigate competitive race dynamics in AI development and deployment that could limit fair competition and result in harms<\/strong>:\n<ul class=\"wp-block-list\">\n<li>International collaboration and governance efforts are needed to address competitive pressures that may lead to rapid AI deployment without sufficient safety measures, ensuring fair competition and trustworthiness.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Invest in research on AI safety and trustworthiness approaches, including AI alignment, capability evaluations, interpretability, explainability, and transparency<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Funding and incentives for research on AI safety, alignment, and transparency can help develop methods to ensure AI systems&#8217; behavior aligns with human values and preferences, reducing potential harms.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Facilitate educational, retraining, and reskilling opportunities to help address labor market disruptions and the growing need for AI skills<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Educational and training programs are essential to equip workers with AI skills, address potential job displacement, and promote equity in AI adoption and use. 
<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Empower stakeholders and society to help build trust and reinforce democracy<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Engaging diverse stakeholders in AI development and governance can build trust, align AI innovation with societal needs, and reinforce democratic processes through transparency and public participation.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Mitigate excessive power concentration<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Policies should address the centralization of market, economic, and political power in AI, promoting fair competition, distributed ownership, and access to AI resources as digital public goods.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Take targeted actions to advance specific future AI benefits<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Governments should take direct actions to capture AI&#8217;s potential benefits, such as accelerating scientific progress, improving healthcare and education, and addressing urgent global challenges like climate change.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n\n\n\n<p><strong>POLICY EFFORTS IN PROGRESS<\/strong><\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li><strong>Facilitation of increasingly sophisticated malicious cyber activity<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Efforts include the Frontier AI Safety Commitments and the EU AI Act, which impose robustness and cybersecurity requirements for high-risk AI systems (<a href=\"https:\/\/www.gov.uk\/government\/publications\/frontier-ai-safety-commitments-ai-seoul-summit-2024\/frontier-ai-safety-commitments-ai-seoul-summit-2024\">UK DSIT, 2024<\/a>, <a href=\"https:\/\/eur-lex.europa.eu\/eli\/reg\/2024\/1689\/oj\">EU AI Act<\/a>). 
<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Manipulation, disinformation, fraud, and harms to democracy and social cohesion<\/strong>:\n<ul class=\"wp-block-list\">\n<li>The Frontier AI Safety Commitments and the EU AI Act include mechanisms to detect and disclose AI-generated content, and the US AI EO establishes guidance for content authentication (<a href=\"https:\/\/www.gov.uk\/government\/publications\/frontier-ai-safety-commitments-ai-seoul-summit-2024\/frontier-ai-safety-commitments-ai-seoul-summit-2024\">UK DSIT, 2024<\/a>, <a href=\"https:\/\/eur-lex.europa.eu\/eli\/reg\/2024\/1689\/oj\">EU AI Act<\/a>, <a href=\"https:\/\/www.whitehouse.gov\/briefing-room\/presidential-actions\/2023\/10\/30\/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence\/\">US AI EO<\/a>).<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Races to develop and deploy advanced AI systems<\/strong>:\n<ul class=\"wp-block-list\">\n<li>The EU AI Act and US AI EO place controls on key AI inputs and reporting requirements for AI systems trained above certain compute thresholds (<a href=\"https:\/\/eur-lex.europa.eu\/eli\/reg\/2024\/1689\/oj\">EU AI Act<\/a>, <a href=\"https:\/\/www.whitehouse.gov\/briefing-room\/presidential-actions\/2023\/10\/30\/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence\/\">US AI EO<\/a>).<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Inadequate AI alignment methods<\/strong>:\n<ul class=\"wp-block-list\">\n<li>The EU AI Act imposes stringent obligations for general-purpose AI systems that could pose a systemic risk (<a href=\"https:\/\/eur-lex.europa.eu\/eli\/reg\/2024\/1689\/oj\">EU AI Act<\/a>). 
<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Power concentration<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Efforts focus on market power, with the US AI EO addressing risks from dominant firms and providing opportunities for SMEs (<a href=\"https:\/\/www.whitehouse.gov\/briefing-room\/presidential-actions\/2023\/10\/30\/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence\/\">US AI EO<\/a>, <a href=\"https:\/\/europa.eu\/youreurope\/business\/running-business\/eu-support-tools-sme\/index_en.htm\">EU support tools for SMEs<\/a>).<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>AI incidents and disasters in critical systems<\/strong>:\n<ul class=\"wp-block-list\">\n<li>The EU AI Act classifies AI use in critical infrastructure as high-risk, imposing regulatory obligations and incident reporting (<a href=\"https:\/\/eur-lex.europa.eu\/eli\/reg\/2024\/1689\/oj\">EU AI Act<\/a>).<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Invasive surveillance and privacy infringement<\/strong>:\n<ul class=\"wp-block-list\">\n<li>The EU GDPR and US AI EO address privacy risks from AI-enabled data collection, with the EU AI Act imposing strict requirements for biometric identification systems (<a href=\"https:\/\/www.europarl.europa.eu\/RegData\/etudes\/STUD\/2020\/641530\/EPRS_STU(2020)641530_EN.pdf\">EU GDPR<\/a>, <a href=\"https:\/\/www.whitehouse.gov\/briefing-room\/presidential-actions\/2023\/10\/30\/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence\/\">US AI EO<\/a>, <a href=\"https:\/\/eur-lex.europa.eu\/eli\/reg\/2024\/1689\/oj\">EU AI Act<\/a>).<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Governance mechanisms and institutions unable to keep up with rapid AI evolutions<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Instruments like regulatory sandboxes and agile governance processes are increasing, with initiatives in countries like Colombia, Estonia, and France (<a 
href=\"https:\/\/legalinstruments.oecd.org\/en\/instruments\/OECD-LEGAL-0464\">OECD Recommendation<\/a>, <a href=\"https:\/\/doi.org\/10.1787\/0248ead5-en\">Framework for Anticipatory Governance<\/a>, <a href=\"https:\/\/www.coe.int\/en\/web\/artificial-intelligence\/the-framework-convention-on-artificial-intelligence\">Framework Convention on AI<\/a>).<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>AI systems lacking sufficient explainability and interpretability<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Policies include a right to an explanation for AI outputs, with the EU AI Act and US DARPA&#8217;s Explainable AI project providing frameworks (<a href=\"https:\/\/eur-lex.europa.eu\/eli\/reg\/2024\/1689\/oj\">EU AI Act<\/a>, <a href=\"https:\/\/www.darpa.mil\/program\/explainable-artificial-intelligence\">DARPA XAI<\/a>).<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Exacerbated inequality or poverty<\/strong>:\n<ul class=\"wp-block-list\">\n<li>The Framework Convention on AI includes requirements related to equality and non-discrimination, with voluntary commitments from leading AI companies to ensure AI does not promote harmful bias (<a href=\"https:\/\/www.coe.int\/en\/web\/artificial-intelligence\/the-framework-convention-on-artificial-intelligence\">Framework Convention on AI<\/a>, <a href=\"https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2023\/09\/Voluntary-AI-Commitments-September-2023.pdf\">Voluntary AI Commitments<\/a>).<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>(Sourced from OECD ARTIFICIAL INTELLIGENCE PAPERS November 2024 No. 
27) AI RISKS PRIORITY AI POLICY [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-1038","page","type-page","status-publish","hentry"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/pages\/1038","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/comments?post=1038"}],"version-history":[{"count":1,"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/pages\/1038\/revisions"}],"predecessor-version":[{"id":1040,"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/pages\/1038\/revisions\/1040"}],"wp:attachment":[{"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/media?parent=1038"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}