Introduction
The Government of India's Ministry of Electronics and Information Technology recently issued an advisory stating that generative AI systems under any form of testing should not be deployed without government approval. Following on the heels of that stricture, which applies mostly to large technology companies, the Ministry has again cautioned that deepfakes misrepresent the persons they depict and should be taken down by the platforms concerned without delay. Even setting aside the substantial AI governance regulations being implemented in other parts of the world, these two steps by the Indian government alone clearly signal the strong regulatory treatment that creators of AI products and services can expect in the future.
What does it mean for IT professionals and companies developing AI systems?
These recent adverse impacts of AI on individuals, society and nations force us, as Information Technology professionals, to rethink the way we analyse, design and develop AI systems. One lesson is that it is not enough for ML professionals to be technically competent; it is equally important for them to understand, on their own, the broad and specific impacts of the ML systems they develop. These impacts go well beyond technology into the realm of ethics, morals, rights, and the nuances of human nature. Explanations and tooling for such deep analysis and design have been either unavailable to IT professionals or sparse and confined to academic papers. The focus has always been on the technology, ignoring these equally important considerations.
Why is a rethink necessary?
The reasons for rethinking the way in which we analyse, design and develop AI systems are fourfold:
- To protect jobs. This protection will come when programmers are able to analyse and design AI systems on their own, compensating for the inadequacies of the AI analysis and design tools used today. It is no longer acceptable to say, "I used the tool approved by the company to design the product; it is the tools that are bad, not my analysis and design capabilities." Such an answer may have been acceptable yesterday, but in my humble opinion it is not acceptable today or going forward.
- To ensure that companies are compliant with AI regulations. Since the topic is complex and the regulations are still emerging from adverse impacts as they occur, companies are advised to explore the scope and impact of AI systems analysis and design in a more methodical manner. They should customise their tools and procedures to build a better, more comprehensive and thorough understanding of the impacts of AI analysis and design. The objectives would be to support employees in their AI analysis and design tasks while remaining compliant with regulations, thus avoiding regulatory penalties and legal action by AI users.
- For human, societal, national and global safety, and for the preservation of our harmonious ways of living, producing, trading, consuming and sharing, and of our economic heritage.
- For safeguarding the planet and its resources through sustainability initiatives.
The rethinking of methods for deep analysis, early detection and elimination of the adverse effects of AI is long overdue.