Abstract
A change has recently begun in the approach that corporations, investors and employees take towards responsible AI design and development. What are the competing forces vying to determine the outcome of this change? Will the change be desirable for us?
Introduction
The firing of OpenAI co-founder Sam Altman appears to have triggered significant change in the AI industry.
OpenAI, a 501(c)(3) public charity, has the laudable mission of "a humanity-scale endeavour pursuing broad benefit for humankind" by building safe AGI that is "broadly beneficial while remaining unencumbered by profit incentives".
Unseen limits
As OpenAI gathered speed, it realised that sufficient capital for a cutting-edge AI technology company was not easy to raise using a "donation-based" model. So it pivoted its business model and devised a new structure to preserve the nonprofit's core mission, governance and oversight while enabling it to raise the capital needed for that mission. A new for-profit subsidiary was formed, capable of issuing equity to raise capital and hire world-class talent, but still directed by the nonprofit. Employees working on for-profit initiatives were transferred to the new subsidiary. This was an honest approach to tackling OpenAI's genuinely large and difficult global humanitarian mission.
The reality is that commercial forces lie behind investments made by companies and individuals who expect returns. It would be naïve to dismiss the tension between business profits and the stated objectives of the OpenAI nonprofit as irrelevant. Employees threatening to resign en masse unless their conditions were met added a new and pressing dimension to the case.
Differing perceptions
Complicating the scenario was another imbroglio: widely differing perceptions of the harm that unregulated AI can do. The Future of Life Institute has cautioned the world about the extreme risks of unregulated AI growth. Its open letter, signed by 33,709 eminent persons including the serial entrepreneur Elon Musk, calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". "AI systems with human-competitive intelligence can pose profound risks to society and humanity," the signatories declared. The 'not so alarmed' camp, consisting primarily of commercially interested investors, placed innovation and growth before regulating the safety aspects of AI. Elon Musk quit the board of OpenAI in 2018, anticipating a conflict of interest with his company Tesla, which was also developing AI technologies.
In an interesting development, on 20 November 2023 in the US, about three dozen VC firms, along with some 15 tech sector companies, voluntarily signed up to the Responsible AI Commitments. "As AI permeates across industries, investors have an important role to play in promoting responsible development of the technology across their portfolio companies…," SoftBank Investment Advisers said in a LinkedIn post [1].
What governments think
The current state of AI regulation around the world makes it clear that, with the exception of China, nations including the US, the EU member states and the UK are not contemplating regulations that would direct the course of AI itself. The EU is contemplating regulating only the application of AI. This is largely because the industry's nascent stage and rapid growth have not given regulators the opportunity to appreciate and study the technology, its uses and its risks in depth. Nations seem to prefer letting innovation and growth reign.
Opinion
The removal of Sam Altman and its aftermath are a natural step in an industry attempting to self-correct. This is a good sign in the absence of a better understanding among national legislatures of how to regulate AI. It shows that naturally competing forces in the market are trying to strike a delicate and equitable balance among AI's commercial realities, humanitarian considerations and the risks it poses to humans. The participation of all stakeholders, investors (including Microsoft), employees, Board members and management, in choosing AI's future course should help identify the best path forward.
Reference:
[1] "VCs Hold Themselves Accountable for AI They Fund and Found", The Economic Times.