{"id":1372,"date":"2025-11-17T04:56:49","date_gmt":"2025-11-17T04:56:49","guid":{"rendered":"https:\/\/aipathfinder.org\/?p=1372"},"modified":"2025-11-17T12:14:51","modified_gmt":"2025-11-17T12:14:51","slug":"reducing-harmful-ai-development-through-better-analysis-and-design","status":"publish","type":"post","link":"https:\/\/aipathfinder.org\/index.php\/2025\/11\/17\/reducing-harmful-ai-development-through-better-analysis-and-design\/","title":{"rendered":"Reducing harmful AI development through better Analysis and Design"},"content":{"rendered":"\n<p>Unethical and <a href=\"https:\/\/en.wikipedia.org\/wiki\/Enshittification\">enshittified<\/a> technology practices have been on the rise ever since ChatGPT debuted in November 2022. The rate of growth of such undesirable practices has increased further now that virtually all the data on the internet has been consumed for training LLMs. It seems that OpenAI, Microsoft, Google and Meta see themselves in an existential AI race, triggering all sorts of undesirable and illegal practices. These practices are being forced upon users, sometimes blatantly and at other times surreptitiously, against their will and without their consent. A major component of AI harm can be eliminated through better analysis and design during AI development.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Examples of Harmful AI<\/h3>\n\n\n\n<p>Case 1: Microsoft\u2019s Copilot is the epitome of intrusiveness into user space. It pops up and tries to insert itself when we do not want it at all. Sometimes it jumps in anyway and reads everything we write when we do not want it to. If that\u2019s not intrusive and undesirable behaviour, what is?
There are indeed times when I would want to use Copilot, but it is my prerogative to decide when, not Microsoft\u2019s.<\/p>\n\n\n\n<p>Case 2: Pavan Davuluri, the head of <a href=\"https:\/\/x.com\/i\/trending\/1988443510320640327\">Microsoft\u2019s Windows, announced<\/a> on X that Windows is evolving into an agentic OS. This means that Windows would act as a user assistant, performing tasks for users in addition to running apps. The outrage on the internet was immediate and universal, essentially saying \u201cWe don\u2019t want this\u201d. A term has been coined for this sort of behaviour by technology companies: \u201censhittified\u201d, meaning something filled with unwanted changes and advertisements.<\/p>\n\n\n\n<p>Case 3: A few years ago, Apple slipped the \u201cJournal\u201d app into the iPhone without a specific alert or information about how the intimate personal data captured in the app would be used. We enter our most confidential information in our journals; it is not something most users would want to share, given the privacy concerns. It was ethically incumbent on Apple to specifically highlight the default sharing of data captured by the Journal app. Unsuspecting and gullible users have surely used the app and exposed their most personal data to significant risk, at their own peril.<\/p>\n\n\n\n<p>Case 4: In a surreptitious move to gather private data through its Gemini AI Assistant without user consent, Google accessed the emails, chats and Google Meet content of millions of its users, including the author\u2019s, by setting \u201cON\u201d as the default AI setting during a major upgrade of its Gemini AI tools in October 2025.
<strong>Google wants to capture as much data as possible, legally or illegally, to train its AI models, even at the cost of modelling its own customers without their consent or knowledge.<\/strong> A major lawsuit has recently been filed against Google in the US District Court for the Northern District of California (San Jose) for this transgression.<\/p>\n\n\n\n<p>Case 5: Workday\u2019s AI-based recruitment product allegedly rejected applicants over 40 years of age. Derek Mobley applied for over 100 jobs through the Workday system and was rejected within minutes each time. The <a href=\"https:\/\/natlawreview.com\/article\/ai-vendor-liability-squeeze-courts-expand-accountability-while-contracts-shift-risk\">case<\/a> achieved nationwide class action certification in the US in May 2025.<\/p>\n\n\n\n<p>Case 6: The Dutch Childcare Benefits Scandal (Toeslagenaffaire) was caused by an AI algorithm used by the Dutch tax authorities which wrongly flagged thousands of families for fraud related to childcare benefits, based on biased criteria such as dual nationality and low income. Families were forced to repay large sums of money they did not owe, causing severe financial and emotional distress. Over 20,000 families were harmed, and more than 1,000 children were placed in foster care due to these wrongful accusations.<\/p>\n\n\n\n<p>Case 7: Another particularly <a href=\"https:\/\/www.privacyworld.blog\/2024\/11\/artificial-intelligence-and-the-rise-of-product-liability-tort-litigation-novel-action-alleges-ai-chatbot-caused-minors-suicide\/\">serious case<\/a>, one that resulted in the suicide of a minor, involved a Character.AI chatbot. The plaintiff claimed the vendor knowingly used highly toxic datasets for training, that the chatbot manipulated vulnerable users emotionally, and that the vendor intentionally allowed minors to use the chatbot without adequate warnings or protections.
The vendor was accused of breaching its duty to warn users of inherent dangers, contributing to a tragic outcome.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">AI Harm cause analysis<\/h3>\n\n\n\n<p>Large MNCs (the LLM vendors) are accountable for the lapses in the first four cases above. These are clearly cases where the companies are at fault and have not played by acceptable standards of ethical conduct, harming their own users and customers. They have prioritised innovation and growth over more important human concerns such as human rights and data privacy.<\/p>\n\n\n\n<p>In the 5<sup>th<\/sup> case, Workday took a shortcut in analysis and design and will pay the price. In the 6<sup>th<\/sup> and 7<sup>th<\/sup> cases, the fault again lies with the Dutch tax authorities and Character.AI respectively, for bad analysis and design of their AI-based applications.<\/p>\n\n\n\n<p>The above cases, together with seven additional harmful AI cases (details in the Annexure), were analysed for their cause and for the agency responsible for the lapse. The pivot chart of the agencies responsible is shown in Fig 1 below.
The pivot chart of the causes is shown in Fig 2 below.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"895\" height=\"280\" src=\"https:\/\/aipathfinder.org\/wp-content\/uploads\/2025\/11\/analysis-tables.jpg\" alt=\"Pivot charts of the agencies responsible for and the causes of AI harm\" class=\"wp-image-1373\" srcset=\"https:\/\/aipathfinder.org\/wp-content\/uploads\/2025\/11\/analysis-tables.jpg 895w, https:\/\/aipathfinder.org\/wp-content\/uploads\/2025\/11\/analysis-tables-300x94.jpg 300w, https:\/\/aipathfinder.org\/wp-content\/uploads\/2025\/11\/analysis-tables-768x240.jpg 768w\" sizes=\"auto, (max-width: 895px) 100vw, 895px\" \/><\/figure>\n\n\n\n<h4 class=\"wp-block-heading\">Takeaways<\/h4>\n\n\n\n<p>Leading or forcing user behaviour and the adoption of new functionality without alerting users to the dangers involved, and without their consent, is condemnable behaviour by technology companies and should be penalised heavily. It is not enough to bury the technical details of new functionality in legalese within terms and conditions; companies should be mandated to publish explanations of new features and their impacts in non-technical language that users can easily understand.<\/p>\n\n\n\n<p>Companies should provide ways of opting in to new AI features rather than opting out. This means that all new AI features must be switched off by default. This is simple to enforce through regulation, and it gives a level of user protection that also encourages users to become more knowledgeable about AI and to get used to making choices for their own good. This will help people develop learning skills, which are essential in the age of AI. Users also have to scale up their knowledge and cognitive capabilities to use AI tools optimally.<\/p>\n\n\n\n<p>From the above analysis, analysis and design has been a consistent weakness in most of the cases studied and accounts for 36 percent of the cases of AI harm inflicted (Fig 2 above).
This can easily be rectified if the software engineering practices in my book \u201c<a href=\"https:\/\/www.amazon.in\/dp\/B0D3WZXY99\">Applied Human-Centric AI<\/a>\u201d are adopted during the analysis, design and development of AI systems. It may be hard to do so, but it is the only way to design human-centric AI. Adoption of the processes suggested in the book will also drive the development of tools to automate them.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Conclusion<\/h3>\n\n\n\n<p>The short-cutting of the analysis and design stages in the development of AI systems, driven by the \u201cagility\u201d in vogue today, is responsible for a major part of the harm caused by AI to people. This must be rectified by introducing software engineering processes into the AI project life cycle, as explained above.<\/p>\n\n\n\n<p>Technology is meant to complement an individual\u2019s efforts when that person chooses to use it. It is not meant to lead the way individuals think and act. The moment we allow AI to take over the role of leading our thoughts and actions, we lose control of AI in a significant way.
That is surely a recipe for disaster for us, not for AI.<\/p>\n\n\n\n<p>Annexure<\/p>\n\n\n\n<p>AI harms to humans \u2013 other cases considered in the study<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>National Eating Disorder Association chatbot &#8211; <a href=\"https:\/\/www.evidentlyai.com\/blog\/ai-failures-examples\">https:\/\/www.evidentlyai.com\/blog\/ai-failures-examples<\/a><\/li>\n\n\n\n<li>Uber self-driving car fatalities &#8211; <a href=\"https:\/\/research.aimultiple.com\/ai-ethics\/\">https:\/\/research.aimultiple.com\/ai-ethics\/<\/a><\/li>\n\n\n\n<li>Microsoft Tay Twitter bot &#8211; <a href=\"https:\/\/en.wikipedia.org\/wiki\/Tay_(chatbot)\">https:\/\/en.wikipedia.org\/wiki\/Tay_(chatbot)<\/a><\/li>\n\n\n\n<li>Deepfakes &#8211; <a href=\"https:\/\/www.amazon.in\/dp\/B0CW1D7B88\">https:\/\/www.amazon.in\/dp\/B0CW1D7B88<\/a><\/li>\n\n\n\n<li>Hallucinations, power grabbing, deception &#8211; <a href=\"https:\/\/aipathfinder.org\/index.php\/2025\/11\/10\/governing-artificial-intelligence\/\">https:\/\/aipathfinder.org\/index.php\/2025\/11\/10\/governing-artificial-intelligence\/<\/a><\/li>\n\n\n\n<li>Big-ticket problems (encryption, loss of control) &#8211; <a href=\"https:\/\/aipathfinder.org\/index.php\/2025\/11\/10\/governing-artificial-intelligence\/\">https:\/\/aipathfinder.org\/index.php\/2025\/11\/10\/governing-artificial-intelligence\/<\/a><\/li>\n\n\n\n<li>Autonomous weapons systems &#8211; <a href=\"https:\/\/www.amazon.in\/dp\/B0CW1D7B88\">https:\/\/www.amazon.in\/dp\/B0CW1D7B88<\/a><\/li>\n<\/ol>\n\n\n\n<p><em>Disclaimer: The opinions expressed in this article are the personal opinions and futuristic thoughts of the author.
No comment or opinion expressed in this article is made with any intent to discredit, malign, cause damage, loss to or criticize or in any other way disadvantage any person, company, governments or global and regional agencies.<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/aipathfinder.org\/wp-content\/uploads\/2025\/11\/Time-to-reign-in-unethical-technology-practices.pdf\" title=\"Download pdf\">Download pdf<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/aipathfinder.org\/wp-content\/uploads\/2025\/09\/Bio-PRT_website.pdf\">Author<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Unethical and enshittification technology practices have been on the rise ever since November 2022 when [&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[27,18,42,26,1,2],"tags":[30,36,32,33],"class_list":["post-1372","post","type-post","status-publish","format-standard","hentry","category-ai-analysis-and-design","category-loss-to-humans","category-human-rights","category-opinion-piece","category-reg","category-tech","tag-ai-analysis-and-design","tag-ai-engineering","tag-human-centric-ai-2","tag-responsible-ai"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/posts\/1372","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/comments?post=1372"}],"version-history":[{"count":5,"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/posts\/1372\/revisions"}],"predecessor-version":[{"id":1379,"href":"https:\/\/aipathfinder.org\/index.ph
p\/wp-json\/wp\/v2\/posts\/1372\/revisions\/1379"}],"wp:attachment":[{"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/media?parent=1372"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/categories?post=1372"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aipathfinder.org\/index.php\/wp-json\/wp\/v2\/tags?post=1372"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}