Who has the Competitive Edge in Generative AI?
The launch of ChatGPT last November was undoubtedly a global sensation. The smart conversational model developed by OpenAI garnered 1 million users within the first week of its launch, a rare feat for a consumer tech application.
Thanks to its array of language models, Generative AI currently occupies centre stage in the Artificial Intelligence (AI) space. Predictably, this has brought in its wake a host of AI-related crimes, including deepfakes, malware, privacy violations, personal data theft, and social disharmony, amongst others. The problems created by hallucinating Generative AI chatbots add to this list of woes.
The launch of WormGPT, the evil ‘other’ of ChatGPT, is the latest shocker. This AI model, powered by the GPT-J language model, is a monster that was fed large volumes of malware data during its training phase. According to experts who have tested the model, the phishing messages generated by the ‘worm’ are convincingly deceptive and can dupe even a ‘not so innocent’ recipient.
It is true that many of these crimes existed in the pre-Generative AI days. However, Generative AI has taken malevolence to unimaginable depths.
The quest for a safe, secure, and trustworthy AI regulatory order, which has gained traction in recent months, has been largely driven by the vicious nature of Generative AI-related crimes.
Safety, security, and trust apart, AI regulations seek to realise the broader goal of Responsible AI. Responsible AI involves the principles of algorithmic fairness, privacy protection, sustainability, and the prevention of economic privation on account of AI-induced automation. The overall aim is to create an AI order that is safe, accurate, predictable, and socially inclusive. In due course, the reach of Responsible AI may extend to Intellectual Property issues as well.
Both the Biden-Harris AI Bill of Rights and the EU’s proposed AI regulations have championed Responsible AI. The principles of algorithmic fairness and privacy protection also find mention in the draft paper on the Digital India Act 2023, recently brought out by the Ministry of Electronics and Information Technology (MeitY).
It is common for regulations to require the agencies concerned to conduct risk assessments of untested technologies or products. The idea is to spot risks arising at various phases of a technology’s functioning. Sections of the Generative AI industry argue that the EU’s proposed AI regulations, by classifying activities and sectors a priori into ‘unacceptable’, ‘high risk’, and ‘low risk’ categories, have restricted investment opportunities for Generative AI companies in critical sectors like education and vocational training.
Early Mover ‘Disadvantage’?
According to a relatively conservative forecast by Statista, the global market for Generative AI products is expected to touch USD 207 billion by 2030. The lion’s share of this market is expected to be cornered by the USA, Europe, and China.
While the need to ensure safe, secure, and socially fair AI regimes is paramount, there are concerns that rigidly applied regulations could adversely affect the economic viability of AI companies. There is an element of truth in these arguments, with Sam Altman himself turning gloomy about the future of ‘giant, giant (language) models’.
It is no secret that even a path-breaking company like OpenAI is financially bleeding. Generative AI companies have of late been experiencing near stagnation in their customer base. Every effort to ramp up their scale leads to mounting computation and data storage expenses. Add to this the cost escalations arising from steeply priced Graphics Processing Units (GPUs), and it is not difficult to see that the Western companies that had a head start in Generative AI systems like Large Language Models (LLMs) will increasingly find it difficult to sustain themselves.
With Elon Musk and Steve Huffman threatening to clamp down on Generative AI companies for scraping data for free from Twitter and Reddit, the era of freely available web data also appears to be ending. Tight data protection laws, coupled with copyright infringement cases filed against these companies by aggrieved authors, would sharply raise data acquisition costs in future. Efforts to substitute ‘synthetic data’ may also not bear fruit, given the multiple safety concerns associated with this data source.
China vs India
China has been the poster boy of AI market forecasters. With its tight data protection regulations (including those governing the use of synthetic data) and ambitious AI advancement plans, China aims to be a global powerhouse in AI technologies by 2030. However, the weakening ability of China’s tech companies to raise capital from overseas markets in recent times, along with deteriorating US-China relations, could create roadblocks in the country’s quest for AI supremacy.
Interestingly, and unnoticed by many, India has worked out its own AI infrastructure strategy. The Semiconductor Mission is the first leg of India’s national AI development strategy. The Digital Personal Data Protection Bill 2023, recently passed by Parliament, does not bar the transfer of data to overseas entities. At the same time, the Bill provides scope for building a large public database under the auspices of the State. India’s ambitious plan to develop and utilise data for research (as spelt out in the 2022 draft consultation paper on the Data Governance Framework) also forms a critical building block for a solid AI ecosystem. Adding to the scene is the emergence of a set of vibrant companies in the country working on LLMs with applications in health, e-commerce, and marketing. The research venture AI4Bharat (at IIT Madras) has gone a step further, seeking to develop its own Generative AI foundation model.
The key challenge for India will be to ensure that these multiple building blocks turn into sustainable supply chains and affordably priced products attractive enough to draw in a large population of customers from across the world.
* ICRIER-Prosus Centre for Internet and Digital Economics, ICRIER, New Delhi. Views expressed are personal.