Notwithstanding informed concerns expressed by industry pioneers and veterans that Artificial Intelligence tools could go haywire or fall into the wrong hands, both Google and Microsoft have lapped up disruptive advances in generative artificial intelligence and begun pitting AI-driven products against each other in a hopelessly unregulated space.
Google, though circumspect all these years about the AI-driven tools it has had in the works, has realized the potential of ChatGPT to disrupt, if not dislodge, its search engine from the preeminent position it had occupied until recently. Google appears to have been forced to fast-forward the positioning of its AI-driven chatbot Bard as a competitor to ChatGPT —
the astounding Microsoft-backed AI chatbot developed by the San Francisco-based startup OpenAI. Aside from intensifying the already fierce competition in the internet search space, the rivalry has given firms behind copycat AI tools room for a field day, on the premise that whatever they are doing must be right if the industry leaders and pioneers are not wrong!
Google’s announcement regarding Bard came almost coterminously with Microsoft’s announcement regarding the integration of ChatGPT with its Bing search engine. Google does not want the proven abilities of ChatGPT to dwarf Google Search results to such an extent that people begin equating googling with finding a needle in a haystack. It is a different matter that Bard seems to have an edge: the ability to draw information from the Internet.
In contrast, ChatGPT, though capable of responding to complex queries with varying degrees of accuracy, cannot access real-time information from the Internet. ChatGPT’s language model was trained on a vast dataset that includes information only up to 2021. Like ChatGPT, Google’s service will use artificial intelligence to generate text answers when people type in queries. And as OpenAI has done with ChatGPT, Google plans to make the underlying technology available to developers through an API. Herein lies the rub.
With the race to build AI-based generative chatbots heating up, there are genuine and as-yet unaddressed concerns that text-generation software, be it from Google, OpenAI or any of the firms working on similar or copycat tools, can be prone to inaccuracies and biases. For we are dealing with technology that can search the internet in real time, where it can pick up harmful content such as hate speech, racial and gender biases, and stereotyping.
Media reports suggest that in a 2020 draft research paper, AI researchers at Google had flagged the need to proceed carefully with text-generation technology. This had irked some executives at the company to such an extent that two prominent researchers were fired. The same Google must now keep pace in a sunrise field of technology that analysts believe will be as transformational as personal computers, the internet and smartphones have been at various stages over the past four decades.
Artificial intelligence is known to have problems, including those related to gender and racial bias. Joy Buolamwini, a computer scientist who founded the Algorithmic Justice League, has gone on record: “Machines can discriminate in harmful ways. I experienced this first-hand, when I was a graduate student at the Massachusetts Institute of Technology in 2015 and discovered that some facial analysis software could not detect my dark-skinned face until I put on a white mask. These systems are often trained on images of predominantly light-skinned men. And so, I decided to share my experience of the coded gaze, the bias in artificial intelligence that can lead to discriminatory or exclusionary practices”. AI systems from leading companies have failed to correctly classify the faces of even icons like Oprah Winfrey and Serena Williams.
Certain surreal creations that went viral on the internet are products of Dall-E Mini, a web app that creates images on demand. Type in a prompt, and it will quickly produce a handful of cartoon images depicting whatever you have asked for. The tool is basically a copycat version of Dall-E, a much more powerful text-to-image tool created by OpenAI that has not been released for public use, due to concerns that it “could be used to generate a wide range of deceptive and otherwise harmful content”.
The risks of text-to-image tools include the potential to amplify bullying and harassment; to generate images that reproduce racism or gender stereotypes; and to spread misinformation. They could ‘progressively’ reduce public trust in genuine photographs that depict reality. OpenAI and Google have both also developed their own synthetic text generators on which chatbots could be based.
All along, they had shown restraint in releasing these widely to the public, for fear that they could be used to manufacture misinformation or facilitate bullying. Not anymore. When tech titans publicly hype their tools’ exciting capabilities, and share, maybe to a limited extent, how they made them, can copycats with fewer or no ethical hang-ups be far behind?
Margaret Mitchell, former co-lead of Google’s Ethical Artificial Intelligence team, has said: “Platforms are making it easier for people to create and share different types of technology without needing to have any strong background in Computer Science.”
Experts believe that it is inevitable that accessible image- and text-generation tools will open not only vistas of creative opportunity, but also a can of worms in the form of questionable applications. These applications could depict people in compromising situations or let loose armies of hate-speech bots to hunt vulnerable people online.
A case in point is GPT-4chan, a copycat program that raised concerns. It was a text generator, or chatbot, trained on text from 4chan, a forum notorious for being a hotbed of racism, sexism, and homophobia. Every new sentence it generated sounded similarly toxic. Following widespread criticism online, the app was removed from Hugging Face, the website that had hosted it.
Copycats aside, what about industry leaders and tech titans? After all, it is they who know best how to ensure that the emerging AI-induced challenges to online safety are addressed through a universal regulatory framework.