In November 2022, when India took over as Council Chair of the Global Partnership on Artificial Intelligence (GPAI) for 2022-23, Union Minister Rajeev Chandrasekhar declared: “India along with member states will work hard to build an AI framework that will be good for the citizens, with guardrails to prevent misuse.”
Launched in June 2020 with 15 members, GPAI today has 29 members. Yet, none of them is individually capable of addressing the perils that could follow should the moonshot projects of tech titans involving Artificial Intelligence (AI) and/or Machine Learning (ML) go haywire.
In January 2015, following the publication of an open letter initiated by Stephen Hawking and Elon Musk and signed by many prominent AI researchers, the potential danger of AI was a hot topic of discussion for some time. Musk then called AI research “summoning the demon” and Hawking warned that the development of AI could “spell the end of the human race”.
Recent developments concerning the astounding capabilities of OpenAI’s ChatGPT, coupled with fresh insights on AI’s potential to replicate or even outclass human cognitive abilities, have once again drawn attention to the need for regulating advances in AI and ML at a global level.
AI has been (mis)used to minimize workforce deployment in the backdrop of pandemic-induced social distancing. The training data used for AI systems draws on a great deal of private information, often accessed on the sly or openly in geographies without adequate data protection laws. AI has also been used with malicious intent: deepfakes are already being deployed to propagate misinformation, with serious consequences for society. The vulnerabilities in AI and ML models can be easily exploited by rogue players to launch adversarial attacks with frightening repercussions. Consensually evolved and universally accepted regulatory laws are, therefore, the sine qua non of the orderly growth of AI and ML.
Few countries have enacted laws specifically governing AI systems. China has introduced regulations for Internet Recommender Systems that provide ‘Internet information services’ within its mainland territory. Recommender systems and content decision systems can undermine individual privacy since they rely on the “collection and processing of private personal information of users”. They can also potentially undermine national security.
In 2021, the European Union proposed the Artificial Intelligence Act that seeks to ensure that AI systems are safe and respect the fundamental rights of people under its jurisdiction. Brazil and Canada also have pieces of legislation intended to ensure responsible development of AI.
In India, there are no specific laws regulating AI, ML and Big Data. Minimal obligations are mentioned in the Information Technology Act, 2000 and the rules made under it. Only personal data is protected, under the fundamental right to privacy (read into the right to life) and certain provisions of the IT Act; data other than personal data is not governed by any specific legislation. So, there is a lot of uninformed compromise by citizens when it comes to the anonymized data that tech companies using AI and ML leverage.

India’s competition law prohibits anti-competitive practices like collusive bidding, coordinating prices and production to mimic a monopoly, or restricting market output to increase prices and profits. But no law specifically covers the use of AI as a means of collusion among competitors.

The Union government has constituted four committees to bring in a policy framework for AI. The NITI Aayog has enlisted seven principles for responsible AI: safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and protection and reinforcement of positive human values. Besides, the Department of Telecommunications has formed the AI Standardisation Committee, whose AI Stack paper highlights five major horizontal pillars and one main vertical pillar, covering some of the most crucial aspects of AI deployment today, including security, data storage, privacy, customer experience and computing. Beyond that, nothing significant has been done to date.
At a recent symposium on ‘AI in Defence’, Defence Minister Rajnath Singh sounded a note of caution, saying India must be ready to face the upheaval that AI will soon bring. The Minister’s fears are understandable. Instead of focusing on the looming perils of AI, tech industry leaders are engaged in a game of one-upmanship. While most people are going gaga over ChatGPT, others take its capabilities with a pinch of salt.
Demis Hassabis, CEO of Google’s DeepMind Technologies, has gone on record that DeepMind’s AI chatbot can do things that ChatGPT cannot. DeepMind’s milestones include beating human world champions at the complex board game Go and predicting the structures of over 200 million proteins, nearly all known to science. Its Sparrow chatbot reportedly has features that ChatGPT lacks, including the ability to cite sources through reinforcement learning, and could be released as a private beta in 2023. Most importantly, Hassabis has said that AI is “on the cusp” of reaching a level that could cause significant damage to humanity. “When it comes to very powerful technologies – and obviously AI is going to be one of the most powerful ever – we need to be careful,” he told a magazine. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.”
For that matter, even OpenAI CEO Sam Altman has warned of “scary moments” and “significant disruptions” with human-level systems.
Regulation must involve all the industry leaders so that AI ultimately has only a positive impact on human lives, with not even a remote possibility of erratic behavior or malicious use. Such technical standardization requires the implementation of policies by government authorities who comprehend the nitty-gritty of regulating technology objectively for the universal good. This calls for collaboration among lawmakers, policymakers, academics and engineers, coupled with solid backing from stakeholder groups such as corporations, citizens and human rights activists.
Otherwise, fiercely competitive tech majors will unwittingly advance Armageddon!