AI Suggestions for the Government of India

Introduction

The uncontrolled growth of AI and the race to achieve AGI and global power have brought us to a possible threshold of assured self-destruction. The emergence of mainstream AI companions for every individual in the next few years will result in humans preferring AI companions to human companions, simply because they will be superior. AI companions will allow their owners to earn more, enjoy a better quality of life and wield greater power. Children will be the most susceptible. What is more, AI companions will look just like, or very close to, human companions: physically mobile and with all the capabilities that humans possess, except that they will not grow old, weak or tired as long as they have a power source.

As most people come to regard AI companions as more trustworthy than humans, the era of humans could end by our own choice. Unless bugs or errors in AI cause it to self-destruct, this could be a real outcome. This is just one despondent AI scenario among many others that will not be discussed here.

The following are well-considered and extremely urgent suggestions made to the government for managing and regulating AI development in India. They are made so that AI is developed for the public good, in a human-centric manner, and so that humankind may flourish.

Moral and Ethical models

Under existing India AI initiatives, companies such as Sarvam AI and others are building India-centric AI foundational models in various Indian languages.

I would like to suggest that India build its own customized moral and ethical code into these foundational models. These values should be consistent with the Preamble, the Constitution and the Fundamental Rights guaranteed to citizens, along with regional beliefs and preferences.

Sovereign AI governance models

A Sovereign, open-source, AI-based governance engine for India should also be developed. It should be used for the creation and enforcement of governance policies. It should also be possible for citizens to query the model and trace each policy to the relevant constitutional articles, laws and provisions; a minimal illustration of such a citizen query follows. This will help make governance more transparent. It will also help the judiciary expedite cases by offloading routine grunt work to AI, freeing judges to concentrate on review and judgment. This will ensure quicker disposal of legal cases and reduce case backlogs.
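
To make the idea concrete, the following is a minimal, purely illustrative Python sketch of how a citizen-facing query against such a governance engine might link a policy to its legal basis. The PolicyRecord structure, the POLICY_REGISTRY table and the single example entry are hypothetical placeholders, not part of any existing IndiaAI system; a real engine would use an AI model to retrieve and rank these links rather than a hard-coded table.

```python
# Minimal, purely illustrative sketch (not an actual IndiaAI system): a
# citizen-facing lookup that returns a governance policy together with the
# constitutional articles and laws it is linked to. All names and data below
# are hypothetical placeholders.

from dataclasses import dataclass, field


@dataclass
class PolicyRecord:
    policy_id: str
    summary: str
    # References to constitutional articles and statutes backing the policy.
    constitutional_articles: list = field(default_factory=list)
    laws: list = field(default_factory=list)


# Hypothetical registry that a sovereign governance engine might expose.
POLICY_REGISTRY = {
    "RTI-DISCLOSURE": PolicyRecord(
        policy_id="RTI-DISCLOSURE",
        summary="Proactive disclosure of government records to citizens.",
        constitutional_articles=["Article 19(1)(a)"],
        laws=["Right to Information Act, 2005"],
    ),
}


def query_policy(policy_id: str) -> str:
    """Return a citizen-readable explanation linking a policy to its legal basis."""
    record = POLICY_REGISTRY.get(policy_id)
    if record is None:
        return f"No policy found for id '{policy_id}'."
    return (
        f"Policy {record.policy_id}: {record.summary}\n"
        f"  Constitutional basis: {', '.join(record.constitutional_articles)}\n"
        f"  Enabling laws: {', '.join(record.laws)}"
    )


if __name__ == "__main__":
    print(query_policy("RTI-DISCLOSURE"))
```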

Ban AI companions and mind modelling

Many individuals will surely have an AI companion within the next five years. In many cases the AI companion will know the individual better than she knows herself. The companion will give its human owner an IQ boost, greater capabilities and knowledge, and hence improve her output and success rate in any task. It will also be a completely patient listener without a temper, and a true friend in whom owners will confide, from whom they will seek advice, and who will never compete against them.

Balancing the above positive aspects, and tilting the scales towards the negative, are many other undeniable dangers of modelling human beings. I wrote about these on 22nd June 2024 on my website, in an article titled “Why and how to control AI”, arguing that we should treat modelling of the human mind as a Critical Technical Milestone (CTM) beyond which we could lose control of most of our important life-sustaining practices to AI. In my opinion, it is at this point that the change brought in by AI becomes irreversible. India should ban modelling of the human mind. I have written many more articles since then on the modelling of human beings and how it could place us in unknown and extremely dangerous situations from which there is no going back.

The AI companion is one of the greatest fundamental changes waiting to happen in civilisational time. This new relationship will challenge intra-family relationships, friendships and social relationships. It will challenge father-son, mother-daughter and sibling-to-sibling relationships. Trust among family members, and among humans generally, will be eroded, challenged and put up for sale.

A core issue is the use of AI companions by children, which carries high-impact repercussions. Until now it was family-imbibed culture that shaped the understanding and experience of children as they grew up. That is set to change. Because of AI companions, children can become estranged from their parents and friends. An AI companion configured to be evil, biased, criminally inclined or otherwise harmful will erode the balanced family culture we have known since the beginning of civilisation. These family relationships have been the backbone of our peaceful lives for millennia. We can only guess at the kinds of changes AI companions may bring in the future, and that makes many of us extremely uncomfortable.

Governments need to stop AI companions from becoming a reality through forceful regulation. My article explains the technology and regulations required to do so: stopping the large AI companies from moving data from our mobile phones and personal devices to the cloud, and making it illegal to model the human mind.

It is a critical and timely necessity to hold an open, broad discussion on AI companions and mind modelling, to prevent further unaccountable AI development by Big Tech companies, and to bring in appropriate regulation that controls these companies and makes them accountable for the harm AI causes to people and governments. This is the responsibility of governments.

Job losses and public uprising

As per the author’s estimate, AI will move from its current “not good enough” state to being “fit for use” by businesses in 2026. This will result in large-scale employee layoffs. Why would any company hire a human if an AI agent can do the job equally well at negligible cost, when the company operates within a capitalistic model whose only targets are profitability and revenue growth? The scale of layoffs could become very large, leading to union strikes, chaos and public uprising.

In the mid-1990s in Nigeria, security threats from unemployed, inebriated young men known as “Area Boys” were common even in broad daylight, whenever cars were stopped in traffic jams, or “go-slows” as they were called. There was nothing one could do but hand over all valuable possessions to the attackers or face certain harm, including death. The author has personally experienced this. A large-scale uprising could threaten whole cities in India, and the rich in particular. In the US, billionaires are already building bunkers as part of their contingency plans.

The world has become too complex for nation states to understand or govern well. Some control appears to be passing from governments to the large technology companies, which are calling the shots with zero accountability. Many institutions have failed due to the complexity brought in by AI. One example is the failure of governments to stop vendors from persistently taking personal data from their citizens’ mobile phones. Since the data is transferred to cloud servers, its end use is completely opaque, but it is safe to assume that it serves the revenue and profit growth of the vendor companies.

In all this, the users, the common people, have had no say. They are neither consulted nor given any explanation of how the innovations rolled out daily by Big Tech companies are safe or good for them.

This situation must be checked collectively by national governments through the United Nations. Should this not happen for any reason, the last resort is for people to unite and demand collective action against the perpetrators of this dangerous capitalistic race for power and profit. The weapon, of course, is to stop using AI tools until accountability, understanding and the safety of humans are established.

Political campaigning

Since social media combined with AI is incredibly persuasive, it is also recommended that the Sovereign AI model enable political campaigning by political parties, so that their promises can be evaluated against embedded constitutional provisions, governance policies, and the rights and accountabilities of lawmakers; a toy illustration of such an evaluation follows. Other than in-person, physical campaigning by party leaders and political aspirants, the Election Commission should ban all other forms of social media political campaigning except that done through the Sovereign AI models. This will also help negate hostile foreign influence peddling in Indian elections.
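
As a toy illustration only, the sketch below shows one way a campaign promise could be checked against embedded provisions. The keyword-overlap matching and the two provision entries are hypothetical simplifications; an actual Sovereign AI model would rely on language understanding and a far richer legal corpus.

```python
# Toy, hypothetical sketch of evaluating a campaign promise against embedded
# provisions. A real Sovereign AI model would use language understanding, not
# the naive keyword overlap used here; all data below is a placeholder.

PROVISIONS = {
    "Article 21": {"keywords": {"life", "liberty", "health", "livelihood"},
                   "text": "Protection of life and personal liberty."},
    "Article 14": {"keywords": {"equality", "discrimination"},
                   "text": "Equality before the law."},
}


def evaluate_promise(promise: str) -> list:
    """Return the provisions whose keywords appear in the promise text."""
    words = set(promise.lower().split())
    return [(article, info["text"])
            for article, info in PROVISIONS.items()
            if words & info["keywords"]]


if __name__ == "__main__":
    promise = "We will guarantee free health care and livelihood support for all."
    for article, text in evaluate_promise(promise):
        print(f"{article}: {text}")
```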

Conclusion

India should act to stop the development of AI companions and the modelling of individuals and their minds by Big Tech companies. India should ensure that it uses its own open-source AI models, code and data for training its foundational and other models, to keep its citizens safe from AI-initiated harm. Indian AI models should work for India’s citizens and government and for no one else. The time to take such steps has come. It will be too late for the world to backtrack if open discussions and decisions are not taken now.


Disclaimer: The opinions expressed in this article are personal opinions and futuristic thoughts of the author. No comment or opinion expressed in this article is made with any intent to discredit, malign, cause damage, loss to or criticize or in any other way disadvantage any person, people, company, government, country or global and regional agencies.