Turning a Nelson’s Eye to the Engineering Foundation of Explainability in AI Systems

Abstract: Explainability in AI systems is being side-stepped by big tech companies. What appear to be minor AI issues caused by a lack of explainability today can snowball into life-threatening scenarios. Applied engineering has no license to take such risks. Creators must realise this themselves or be prepared to be reined in.

Hallucinations

There are some aspects of AI that need to be set right. One of them is the inability to explain how things work in the learning process within neural networks. This can manifest in many forms, one of which is hallucinations. This is not a ‘little problem’ to be brushed aside, as technology buffs would have us believe. Having been a technologist for all of my working life of over four decades, I disagree with that approach. It is like the proverbial ostrich burying its head in the sand. AI problems caused by this approach have been exposed regularly, yet big tech companies, with their closed AI analysis and design approaches, continue to push the frontiers of AI. Under the GDPR, the EU can impose penalties of up to 4% of annual global turnover for failure to comply with data protection regulations.

Current State

noyb, a privacy rights non-profit, has filed the latest GDPR complaint against ChatGPT with the Austrian data protection authority on behalf of an unnamed complainant (described as a “public figure”) who found that the AI chatbot produced an incorrect birth date for them [1]. ChatGPT is not in a position to correct the date of birth simply because its makers do not know how to do it. The company offered every remedy except the one the affected person wanted: simply correcting his date of birth. That is what happens when one does not know how one’s own product works internally.

Rationale

A case in point demonstrating laxity in good engineering practice is the Boeing 737 MAX, which suffered a recurring failure in the Manoeuvring Characteristics Augmentation System (MCAS), causing two fatal crashes, Lion Air Flight 610 and Ethiopian Airlines Flight 302, in which a total of 346 people died [2]. Major lapses in the flight control software lay at Boeing’s doorstep. Boeing had to pay over $2.5bn in criminal monetary fines and compensation.

Engineers are expected not to design black-box and unsafe products. They must know how their products work internally. Is engineering not an exact science? A synthetic branch of science rooted in logic and explainability? A step-by-step applied science in which we put together explainable subcomponent designs to build safe end products, with the ability to measure and apply safety margins to those products?
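To make the idea of a measurable safety margin concrete, here is a minimal sketch in Python, with hypothetical load values and an assumed required factor of safety of 1.5, of the kind of explainable, verifiable check that classical engineering applies to a subcomponent. No comparable statement can currently be made about the internals of an unexplained neural network.

```python
# A minimal sketch of a classical engineering safety-margin check.
# The load values and the required factor of safety are hypothetical,
# chosen only to illustrate that the calculation is fully explainable.

def factor_of_safety(failure_load_kn: float, design_load_kn: float) -> float:
    """Ratio of the load at which a component fails to the load it must carry."""
    return failure_load_kn / design_load_kn

def is_safe(failure_load_kn: float, design_load_kn: float,
            required_factor: float = 1.5) -> bool:
    """Accept a component only if its safety margin meets the requirement."""
    return factor_of_safety(failure_load_kn, design_load_kn) >= required_factor

if __name__ == "__main__":
    fos = factor_of_safety(failure_load_kn=120.0, design_load_kn=60.0)
    print(f"Factor of safety: {fos:.2f}")                        # 2.00
    print("Accepted" if is_safe(120.0, 60.0) else "Rejected")    # Accepted
```

Every number in such a check can be measured, traced and defended; that is the standard the rest of engineering is held to.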

Accountability

Applied engineering has no license to take such risks. Creators must be held accountable for closing their eyes to defects in AI products caused by a lack of understanding of how neural networks work. Accountability can take many forms, and it should go beyond purely monetary penalties, which cash-rich corporations can bear easily. As in other areas of the law, the human behind the decision must be held accountable for the risk posed to fellow human beings.

Such systems must not be released into production when we are not clear how they work. Commercial interest must bow to quality and best practices in AI design. My book “Applied Human-Centric AI”, Chapter 8, milestone deliverable A5.7, states that confirmation from the Project Manager is required that there are no unexplained features in the AI system design. It recommends that where there is a lack of explainability or understanding of the workings of an AI system’s analytical model and data, the system should not be released into production. I have also proposed a human-centric risk classification scheme for AI use-cases, which is the need of the hour.
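As an illustration only, and not a reproduction of the book’s deliverable, the sketch below shows how such a release gate might be encoded in a deployment pipeline. The `AISystemReview` fields, the risk classes and the `may_release` check are hypothetical.

```python
# A hypothetical release-gate check inspired by the principle that an AI system
# should not reach production while unexplained features remain.
# Field names and risk classes are illustrative, not taken from the book.

from dataclasses import dataclass

@dataclass
class AISystemReview:
    name: str
    risk_class: str                  # e.g. "low", "medium", "high" (hypothetical scheme)
    unexplained_features: list[str]  # features whose behaviour cannot be explained
    pm_confirmation: bool            # Project Manager confirms the explainability review

def may_release(review: AISystemReview) -> bool:
    """Block release if anything in the design remains unexplained or unconfirmed."""
    if review.unexplained_features:
        return False
    if not review.pm_confirmation:
        return False
    return True

if __name__ == "__main__":
    review = AISystemReview(
        name="chatbot-v2",
        risk_class="high",
        unexplained_features=["date-of-birth hallucination"],
        pm_confirmation=False,
    )
    print("Release approved" if may_release(review) else "Release blocked")  # Release blocked
```

The design choice is deliberate: the gate fails closed, so a single unexplained feature or a missing confirmation is enough to block the release.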

Conclusion

No right-thinking person, whether a creator, user or regulator of AI systems, should support this dangerous practice. Accepting such short cuts will lead to far more dangerous AI features being released in the future without explainability or understanding. Neural networks are already used for facial recognition. Imagine the deployment of this technology in a targeted autonomous weapon system that kills an innocent bystander.

AI is percolating into all aspects of our lives, and we can be sure to be unpleasantly surprised, or seriously threatened, if big tech companies do not embrace an open and explainable approach to AI analysis and design. An AI analysis, design and development model that encourages human-centricity, traceability, explainability, verifiability and validation is presented in detail in my book [2].

(Kindle and paperback available at https://www.amazon.com/dp/B0CW1D7B88 and https://www.amazon.in/s?k=APPLIED+HUMAN+CENTRIC+AI&i=stripbooks&crid=7BC4AJG0ZJOM&sprefix=applied+human+centric+ai%2Cstripbooks%2C79&ref=nb_sb_noss )

References:

1. TechCrunch, 28 April 2024: https://techcrunch.com/2024/04/28/chatgpt-gdpr-complaint-noyb/

2. Rajagopal Tampi, Applied Human-Centric AI, Amazon Kindle Store and paperback.
