The two incidents below illustrate that the developers of AI systems, large technology companies despite all their cash reserves and gigantic revenues, have been unable to analyse, design, and build safe and responsible AI systems.
The Indian government has said that artificial intelligence systems and LLMs using generative AI or algorithms that are in an unreliable form, such as beta testing, must not be deployed for use without the explicit permission of the Government of India. This was in response to Google's Gemini characterising PM Modi as implementing 'Fascist' policies, a characterisation the Government calls 'biased'. Minister Rajeev Chandrashekar said on 2nd March 2024 that this signalled the future regulation that AI platforms would have to comply with. For the present, the instruction comes in the form of an advisory.
India will hold general elections in April or May 2024. The ability to manipulate 94.5 crore voters, out of a total population of 135 crore, is a very high risk posed by AI. Google had to roll back Gemini after the Government's intervention.
Gemini has also been faulted for generating images showing people of colour where the person in the original picture was white, an output that amounts to manipulation of a genuine original image. Perhaps this is the result of the AI trying to mitigate the harmful effect of some other generic parameter. Nevertheless, it shows insufficient understanding in the analysis, design, and testing of released AI systems. Google has now suspended the image-generation service to fix these bugs.