KEY AI RESPONSIBILITIES OF FOUNDERS AND CORPORATE LEADERS

What are the key responsibilities of Founders and Corporate Leaders where AI development and adoption are concerned? Who is affected, and in what way? These questions will be addressed in a series of three articles, of which this article is Part 1. The intent is to give companies the background information they need for AI policy formulation.

Introduction

This series analyses the key AI responsibilities of Founders and Corporate Leaders. It is divided into three parts, as explained below:

In Part 1, this article, we identify the stakeholders impacted by ill-designed AI and examine the impact on the company itself as one of those stakeholders.

In Part 2, we will cover the impact on employees, society and investors as stakeholders.

In Part 3, we will cover the impact on customers, planet and government as stakeholders.

Many people probably know that AI systems need to be selected, designed and built in line with Human-Centric AI (HCAI) principles. Commercial enterprises, however, lack natural incentives to do so: those incentives are overshadowed by the race for innovation, go-to-market speed, valuation and profit. Yet voluntary compliance in building Human-Centric AI systems benefits companies more than waiting for a regulated regime. Particularly in today’s environment, where AI regulation lags AI growth, it makes sense to prepare by adopting voluntary compliance with HCAI, for several reasons. Firstly, knowingly flouting design principles as a shortcut to sales will backfire when regulations catch up, as they certainly will one day; redesigning and rebuilding products for compliance at a later stage, and the possibility of legal cases being filed against the company, are risks with significant costs. Secondly, the company’s brand will be tarnished and its ability to influence regulatory policy making diminished. Thirdly, there will be hiccups in customers’ acceptance of the company’s products. Lastly, employees understand the path chosen by the company, and this will influence where they prefer to work.

Affected Stakeholders

It is necessary to understand who the stakeholders of AI are and how they can be adversely affected by slipshod design of AI systems. Identifying and understanding these adverse effects is the starting point for AI policy formulation. The parties affected by AI systems are the company itself, its customers, employees, investors, vendors, society, government and the planet.

Business Effects

The company grows because customers prefer its products over competing products. There may be differentiating factors, product positioning, price-determined markets and so on, but in each of these spaces customer preference is the dominant force driving market success. This means that the company’s AI products must have a competitive edge in the market. Where AI is concerned, the basic requirement is that the model predicts outputs correctly; without this, the company’s AI products will fail. The next step is ‘human-centricity’ of the company’s AI products. Explaining human-centricity is complex and it will be taken up in subsequent articles; for a basic understanding, it means supporting and augmenting human capabilities, preferences and lives. What we need to recognise at this stage of AI development is that human-centricity is the major competitive edge and differentiator companies should strive towards.

Human centricity in its products will assist the company’s rapid business growth, because customers will prefer products with these characteristics over others. When a company’s AI products undermine human centricity, there are repercussions.

Undermining HCAI practices can take several forms, for example unethical or discriminatory product outputs. The result will range from customer dissatisfaction at a minimum to lawsuits at the maximum, damaging the brand image of the company. As news of such incidents spreads, the company’s market share will drop and its cash flows will shrink.

Employees

After a period of de-growth, employees will sense the reduction in the company’s business and profitability. Some employees will deprecate the lack of human centricity in the company’s products. News of the lack of quality will spread by word of mouth from employees to their colleagues and to the market. The effects of AI products on human beings and society are far more pervasive than those of general software products such as ERP or CRM; this is a clear warning for companies to heed. A quick and indelible adverse impression will form in the minds of the people and groups that the AI products affect. In the end, the quality of employees the company retains will no doubt deteriorate.

Cybersecurity

Lack of adequate quality and security practices in AI design and development will reduce the level of cybersecurity protection available to the company, its customers, employees and vendors through the use of its AI products. The risks and costs posed by a lack of cybersecurity (DoS and ransomware attacks, for example) are generally well understood. Many AI systems use third-party products in their solutions, and the overall quality and security of the deployed AI solution is determined by the lowest common denominator among the third-party products and the company’s own components. The APIs, hosting, monitoring, scaling and management software provided by cloud service providers also contribute to the cybersecurity vulnerabilities of the AI system as a whole.
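To make this weakest-link reasoning concrete, here is a minimal sketch: assume each component of a deployed AI solution has been given a security rating, and the overall posture is then capped by the weakest component. The component names and the 0-5 scoring scale below are hypothetical illustrations, not drawn from any particular product.

# Python sketch: overall security posture of a composed AI solution is
# capped by its weakest component ("lowest common denominator").
# Component names and the 0-5 scale are hypothetical examples.

component_security_levels = {
    "in-house model API": 4,       # the company's own product
    "third-party vector DB": 2,    # external dependency
    "cloud hosting and scaling": 5,  # cloud provider services
    "monitoring agent": 3,
}

def effective_security_level(components: dict) -> tuple:
    """Return the weakest component and its score, which caps the whole solution."""
    weakest = min(components, key=components.get)
    return weakest, components[weakest]

if __name__ == "__main__":
    name, score = effective_security_level(component_security_levels)
    print(f"Overall posture capped by '{name}' at level {score}/5")

In this illustration the third-party vector database, not the company's own API, sets the effective security level of the whole deployment, which is exactly why due diligence on third-party and cloud components belongs in an AI policy.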

Legal Challenges and Liabilities

Serious lapses in HCAI can lead to legal challenges and huge penalties resulting from class-action lawsuits filed by affected customers or groups. Failure to adhere to human rights and data privacy laws can result in crippling penalties levied by government agencies and may also lead to debarment from qualified vendor lists. A case in point is the fine of Euro 1.2 billion imposed on Meta in May 2023 under the GDPR for transferring the personal data of European users to the US without adequate protection mechanisms. Such episodes also damage the reputation and brand image of the company.

Cost of Funds

As a result of the damage described above, the cost of obtaining funds will rise substantially. This feeds a debilitating cycle of long-term de-growth and market share loss that is hard to recover from.

The next article in the series will focus on the effects on employees, society and investors.

