
IN FOCUS: Generative AI tools - Are we monkeying with tech?


Millions of people across the globe are now fascinated with generative AI (Artificial Intelligence) tools, which have captured the public imagination since the phenomenal success of ChatGPT (Chat Generative Pre-trained Transformer), a chatbot developed by OpenAI and launched in November 2022. The application can do virtually everything: from creating content to answering everyday questions to solving complex coding problems.

More than 30 million users have already begun using the application to the hilt. The official launches of other lab-developed generative AI tools and copycat models that followed, in an inevitable (and as-yet uncontrolled) race to grab ‘searching’ eyeballs, have raised questions about the safe, ethical and well-regulated use of whichever tools ultimately survive a shake-out in the market.

The Pioneer’s Amartya Smaran talks to experts and industry professionals handling such tools to give you the lowdown on what is in store, with ChatGPT, Bard and a host of other generative AI tools such as DALL-E, Midjourney and Stable Diffusion aiming to dominate, if not monopolise, the market.

OpenAI was founded in 2015 as a nonprofit research lab by Sam Altman, Peter Thiel, Reid Hoffman and Elon Musk, among others. The company created a for-profit subsidiary in 2019 and sealed a $1 billion deal with industry giant Microsoft. Its AI technology was made available to the public for testing in November 2022. The monumental success of ChatGPT enabled OpenAI to strike another mind-boggling $10 billion deal with Microsoft, which announced on February 8 that it would integrate the new technology with its Bing search engine to deliver improved results.

Google, the industry leader in search engine technology, has been under intense pressure since Microsoft struck gold with its AI integration strategy. The search giant decided to launch its own AI-driven conversational service, Bard, which is powered by LaMDA and draws on real-time information from the web.

Google put out an advertisement on Twitter that included a demonstration of Bard. In the ad, Bard was given the prompt: “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?” Among its answers was the claim that “JWST took the very first pictures of a planet outside our solar system. These distant worlds are called ‘exoplanets’. Exo means ‘from outside’,” which turned out to be inaccurate. The advertisement mishap wiped a staggering $100 billion off the market value of Alphabet Inc, the parent company of Google, while Microsoft’s shares shot up by 3%.

Commenting on the ongoing AI-centric battle between Google and Microsoft, Sridhar Seshadri, Co-founder and CEO at Spotflock, says: “In my opinion, it is better for both companies to opt for a collaborative approach and not necessarily compete against each other in the NLP (Natural Language Processing)/AI space. Also, I think it is important for them to recognise that these new models are still in the early stages, and to be mindful of how they use them as they continue to develop.

I think they should be open to sharing resources and ideas, and focus on how they can improve the overall experience for users. Google and Microsoft should be mindful that the ChatGPT and Google Bard models are still in the early stages. They should ensure that these models are developed as responsible AI and that any potential implications of the models are carefully considered. Additionally, they should ensure data security and data management, i.e., that any data used to train these models is collected and handled responsibly. Finally, they should ensure that these models are regularly tested and monitored so that they are not making any mistakes that could have serious legal consequences.”

The advent of tools based on generative AI has also sparked discussions about their ethical and regulated use by consumers. Many experts have criticised ChatGPT for its ability to propagate misinformation at an incredibly rapid pace. Teachers have also expressed concerns over students using the platform to quickly finish their homework ahead of deadlines. Just two months after its release, OpenAI’s ChatGPT shook up the academic world by acing medical, law and business school examinations.

“The ethicality of using ChatGPT at work is a matter of context and depends on the nature of the work being done,” observes Sridhar. “Generally speaking, it is not usually ethical to use ChatGPT in a work setting as it is an automated system that can produce inaccurate or incorrect results. This can be seen as a form of deception, as it is not transparent to the user that the results are being produced by an automated system rather than a person.

Additionally, ChatGPT can be seen as unethical in a work setting because it may be used to replace a human worker, thereby leading to a decrease in job security and wages, and it might cause data security concerns in edge-case scenarios in the FinTech, insurance and healthcare industries, such as transaction-related issues or OTP/password-related queries.”

ChatGPT can help people become more effective and efficient at work by generating customised code for project requirements and by offering intelligent assistant services that can help with tasks like scheduling, reminders and even research. For example, ChatGPT can be used to automate the process of scheduling meetings, finding the best time for everyone to attend, and sending out reminders.

This can cut the time and effort involved for everyone by as much as 50%, according to Sridhar Seshadri.
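For readers curious what such assistance looks like in practice, here is a minimal sketch in Python that asks OpenAI’s chat API to draft a meeting reminder. The model name, prompt wording and helper function are illustrative assumptions made for this article, not a workflow recommended by the experts quoted above; it also assumes the openai Python package is installed and an API key is configured.

```python
# Minimal sketch: asking a chat model to draft a meeting reminder.
# Assumes the `openai` package (v1+) is installed and the
# OPENAI_API_KEY environment variable is set. The model choice and
# prompt below are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_reminder(meeting_title: str, time_slot: str, attendees: list[str]) -> str:
    """Ask the model to write a short, polite reminder email."""
    prompt = (
        f"Write a brief reminder email for the meeting '{meeting_title}' "
        f"scheduled at {time_slot}, addressed to {', '.join(attendees)}. "
        "Keep it under 80 words."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(draft_reminder("Quarterly review", "Friday 3 pm IST",
                         ["Priya", "Rahul", "Anand"]))
```

A script like this would still need human review of whatever the model drafts, which is in keeping with the experts’ caution that the models can produce inaccurate results.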

In a conversation with The Pioneer, Venkat Kanchanapally, Founder-President & CEO at SuntekCorp Solutions Pvt Ltd, points out the positives and negatives of ChatGPT: “An individual’s ability is not known when they use ChatGPT to perform. That’s the major disadvantage of any AI technology. A mentor guides people, but ChatGPT only spoon-feeds them. However, ChatGPT is a positive for those who already have the ability to perform a certain task; it could help them achieve things in an efficient manner. It is a disadvantage for those who lack the ability and become performers only with the help of the application, and a disaster if people use it as a substitute for their lack of ability. Seeking information that adds to your growth and leads to better outcomes is always ethical.”

According to experts, the sudden explosion of ChatGPT and other generative AI models might pose a threat to human jobs. To cite an example, someone whose work consists of absorbing large volumes of information and producing written output from that material is at high risk of losing their job. Content creation will never be the same again.

DALL-E 2, another product of OpenAI that has made waves, is image-generating software that converts text prompts into digital art. There are also AI-powered writing tools like Compose AI, Wordtune Spices and Rytr that can help writers produce edited copy. BuzzFeed, a media site, announced that it would take the help of AI to write some of its online content. Repetitive tasks in video editing, like painstakingly masking and tracking every frame in the timeline, could be cut out altogether. However, it is important to remember that AI technology is still nascent and needs a great deal of refinement.

So, what do the experts have to say about AI stealing human jobs? “Absolutely, there is a real threat to human jobs from ChatGPT, as it has the potential to automate many tasks that humans currently perform. ChatGPT can be used to generate text, answer customer queries, and even generate entire conversations. This could reduce the need for humans to do the same tasks, leaving them out of work. Mostly, people doing repetitive tasks that new tech can replace might get laid off, while new roles will be created requiring strong technical knowledge and the ability to lead a team by example,” explains Sridhar Seshadri of Spotflock.

In line with Sridhar’s views, Venkat Kanchanapally says, “ChatGPT can generate a piece of code in two minutes that would take a human being around two or three hours to build. Companies may well adopt this kind of utility. There is a threat to human jobs wherever data is involved. The people who manage data, with IT embedded in core engineering, are the most likely to lose their jobs.”

Striking a different note, Dr. K. Madhavi, Professor and HOD, CSE Department, Gokaraju Rangaraju Institute of Engineering, says: “Yes, there is a real threat to some human jobs due to the advancement of AI technology. Automation and AI are changing the job market, and some jobs are at a higher risk of being replaced by machines and algorithms.

However, it is important to note that AI will also create new job opportunities and augment human capabilities, leading to new forms of employment. As for who is more likely to lose jobs, it depends on the type of work being performed. Jobs that involve repetitive tasks and can be easily automated, such as data entry or manual labor, are at a higher risk of being replaced. On the other hand, jobs that require human skills such as creativity, empathy, and critical thinking are less likely to be replaced by machines.”

OpenAI Chief Executive Sam Altman has said that his goal is to create Artificial General Intelligence that matches human intelligence. He told a newspaper: “Its benefits for humankind could be so unbelievably good that it is hard for me to even imagine.”

Sridhar says he couldn’t agree more with Sam’s statement. He, however, bats for a proper regulatory framework to govern the use of AI. “I do concur, but one has to use AI with proper governance for the ultimate benefit; else it will mislead us and situations will not be in our favour or control. This is especially true in edge cases like AI usage in healthcare, where the general public might be unaware of whether a recommendation should be followed as-is or whether a disclaimer applies and clinical correlation with a medical practitioner is needed (especially in remote areas of developing countries, where the demand for continuity of care is unimaginable).

For example, an AI system that recommends solutions without considering the local culture, human values, personal tendencies, genetics and body structure of the patient might be a bane for the user and for any organisation that adopts it, defeating its ultimate goal of serving the purpose. Another example is the defence sector, which relies on AI to strengthen and safeguard national security, where one wrong decision can impact the environment, ecosystems or populated areas.”

The rise of AI technology funded by multi-billion dollar tech companies might eventually pose a threat to society, according to Dr. K. Madhavi. She adds: “The creation of dangerously advanced machine learning algorithms by nations and technology firms in competition for military and civilian advantage has turned the development of AI into a literal arms race.”

Whilst there are numerous drawbacks to ChatGPT and other applications in the pipeline, one positive aspect of AI technology is that it saves time by quickly doing repetitive and onerous tasks. In fact, it might even help people become more creative.

To cite an example, a teacher named Donnie Piercey encouraged his fifth-grade students in Lexington, Kentucky, to try and out-think ChatGPT. One of the students reportedly said that the bot helped with summarising properly, capitalising words correctly and using commas. The teacher even conducted a playwriting contest.

Donnie Piercey dubbed it “Pl-ai writing”. The students were split into groups and asked to write down the characters of a short play in three scenes, with a twist in the plot to be resolved. Donnie then gave these prompts to ChatGPT with instructions to set the scenes within the walls of a fifth-grade classroom. The application successfully created a full script, which the students briefly rehearsed and performed. The point is, we could either complain about the harmful effects of something new or put it to good use, as Donnie Piercey did.

Taking this forward, we asked the experts about the best possible way to use ChatGPT. They all agree that it is a great tool for making us more effective.

Sridhar Seshadri feels there is a dire need to establish guidelines for responsible AI, ensure transparency, monitor bias, develop responsible data practices, foster ethical leadership, encourage public discourse and promote ethical innovation. “The best way to use AI technology to our advantage is to leverage it in areas such as automation, predictive analytics, machine learning and natural language processing. This can help streamline and optimise processes, increase the accuracy and efficiency of operations, and provide insights into customer behaviour.

Additionally, AI can be used to develop innovative products and services, create more personalised customer experiences, and increase safety and security. Making life easy is the motive of advanced technologies like these; they should not be used for destructive purposes. We all know the difference between the ethical and unethical uses of radioactive elements. I am certain human civilisation is smart enough to draw the lines and use this advanced tech for positive living,” says the expert.

On the other hand, Venkat Kanchanapally of SuntekCorp Solutions Pvt Ltd says: “I would definitely say that things like ChatGPT should not be a substitute for one’s ability. It should act as a catalyst and help us cut down on our time and effort in order to have a better outcome. That is how we must use AI. In this process of using AI, we should not get to a point where humans become helpless and technology prevails.”

Dr. Madhavi strongly believes that AI can be used as a tool to improve our productivity and efficiency. “AI can also be our companion that can help us in our work. The best example would be Large Language Models such as ChatGPT. ChatGPT can write code, solve math questions, summarise research papers, and also make conversation.”

Technology has always been at our disposal, but when it falls into the wrong hands, things eventually go haywire. That is why those who are knowledgeable enough should educate people about the pros and cons of any emerging technology.

In the 1940 satirical film The Great Dictator, the late legendary comedian and actor Charlie Chaplin delivered these famous lines: “Our knowledge has made us cynical. Our cleverness, hard and unkind. We think too much and feel too little. More than machinery we need humanity. More than cleverness we need kindness and gentleness. Without these qualities, life will be violent, and all will be lost…”

In conclusion, machines cannot think unless they are fed something, in real time or otherwise. That is where humankind has an edge. Only if humans turn cynical, greedy, hard and unkind does technology stand a chance of outwitting humankind.
