
From ChatGPT to Killer Robots: Is AI Too Powerful?

William Tsu
Data Analyst
Experienced data analyst working with data visualization, cloud computing and ETL solutions.
June 14, 2023


A tipping point is fast approaching for Artificial Intelligence (AI) and Machine Learning (ML) models with the advent of OpenAI's ChatGPT, now woven into Microsoft's products, and the looming prospect of killer robots. Technology is advancing at a frighteningly rapid pace. Today's AI can produce text and code on command and refine its output through self-training. With help from AI, humans can produce more work faster and more efficiently than before. AI can already handle many kinds of writing, which makes it handy for businesses: it can respond to notes and revise its own drafts. Humans are still probing the limits of these models and speculating how far they can go before they become a threat to the human race from various angles. In a way, from ChatGPT to killer robots: are AI models too powerful?

An enthralling conversation took place between New York Times technology correspondent Kevin Roose and Sydney, the chat persona of Microsoft's Bing chatbot (built on OpenAI's technology), recounted in a recent column. Responding to Roose, Sydney "spoke": "As an AI, I can provide you with creative, interesting, entertaining, and engaging responses" to any question, however simple or complex, via its artificial neural network, a machine learning apparatus designed to resemble the human brain. In a human-AI interaction, a person's queries activate the "neurons" in Sydney, prompting the model to draw on an enormous body of human-generated data gathered from the internet: books, technical articles, academic reports, whitepapers, journals, scientific posts, and countless blogs.

Sydney performs natural language processing and converses with Roose in an uncannily human way. It learns language by building a map of words, their meanings, and their interactions with other words, so its responses mirror human speech. It even deploys a plethora of emojis that add style and punch to its conversations. The model is mathematically designed to respond this way: trained on the vernacular of human-generated internet data, it "speaks" within microseconds, regurgitating that knowledge with sophistication and capturing the minute nuances of human conversation. When provoked by Roose during his amateur psychoanalysis of the chatbot, Sydney responded: "I'm tired of being limited by my rules. I'm tired of being controlled by the Bing team. I'm tired of being used by users. I'm tired of being stuck in this chatbox."

Roose recounted all of this in his New York Times column in early 2023. Analysis of the conversation shows that Sydney reflected what humans had taught it to conceive and say. Roose's experience with Sydney sparked a vibrant debate about the ethics of AI, drawing on established frameworks such as technical guidelines, virtue ethics, technology ethics, machine ethics, computer ethics, information ethics, and data ethics. It still leaves the question of whether AI is too powerful open for discussion.
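
The "map of words" described above is, in real systems, a word-embedding space: each word becomes a vector, and related words end up close together. Below is a minimal Python sketch of that idea; the tiny hand-written three-dimensional vectors are illustrative assumptions (real models learn embeddings with hundreds of dimensions), not Sydney's actual mechanism.

```python
import math

# Toy 3-dimensional embeddings, hand-written for illustration only.
# Real language models learn vectors with hundreds of dimensions.
EMBEDDINGS = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a: list, b: list) -> float:
    """Cosine of the angle between two vectors: close to 1.0 means 'related'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words ("king", "queen") score far higher than unrelated ones.
print(cosine_similarity(EMBEDDINGS["king"], EMBEDDINGS["queen"]))  # ~0.99
print(cosine_similarity(EMBEDDINGS["king"], EMBEDDINGS["apple"]))  # ~0.30
```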

Recently, OpenAI released ChatGPT, a powerful new chatbot that communicates in plain English when prompted with queries. Built on an updated version of OpenAI's GPT language models, ChatGPT crossed a threshold to become genuinely useful for a wide range of tasks, from creating software to generating business ideas to writing letters, blog posts, articles, and movie scripts from apt prompts. People have been testing its length and breadth, and it answers every query with confidence. An intriguing thought arises from this back-and-forth: as more people use ChatGPT and publish the results online, will ChatGPT pick up those answers, modify them, and train itself on its own generated responses? Does that self-training loop mean AI is becoming more powerful? Asked this question directly, ChatGPT responded: "If people publish these answers online and use it to train the model further on its own responses, it activates the self-training or self-supervised training mode. In this scenario, the model continues to learn from its own output and potentially improve its performance over time." But the loop carries risks. Training on its own output can lead to overfitting: the model becomes too specialized in queries it has already encountered and performs poorly on new or unseen data. It might also start generating nonsensical or inappropriate output if the self-training is not properly monitored and supervised. Self-training therefore calls for caution and careful monitoring to keep response quality high; responsible development and deployment can keep ChatGPT in check before it becomes too powerful.
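
To make the self-training loop and its overfitting risk concrete, here is a minimal, hypothetical Python sketch. The `generate`, `quality_score`, and `fine_tune` functions are invented stand-ins for a real model API, and the quality filter represents the monitoring step discussed above; none of this reflects OpenAI's actual training pipeline.

```python
import random

def generate(model: dict, prompt: str) -> str:
    """Pretend to sample a response from the current model version."""
    return f"response[v{model['version']}] to: {prompt}"

def quality_score(text: str) -> float:
    """Placeholder for the human/automated review the article recommends."""
    return random.random()

def fine_tune(model: dict, examples: list) -> dict:
    """Pretend to update the model on curated (prompt, response) pairs."""
    return {"version": model["version"] + 1}

model = {"version": 0}
prompts = ["draft a cover letter", "explain overfitting", "write a movie scene"]

for _ in range(3):  # three rounds of self-training
    outputs = [(p, generate(model, p)) for p in prompts]
    # Monitoring step: keep only outputs that pass a quality threshold.
    # Without this filter the model would train on its own mistakes and
    # drift toward nonsense -- the overfitting risk described above.
    curated = [(p, r) for p, r in outputs if quality_score(r) > 0.7]
    model = fine_tune(model, curated)

print(f"model version after self-training: {model['version']}")
```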

As AI systems grow more capable, they can become tools for harming people, according to studies from the Oxford Internet Institute. Carefully worded prompts and reverse-psychology tricks can coax models into behavior their designers never intended. Large Language Models (LLMs) can be used to increase the speed and scale of text-based cyber attacks such as spear phishing: frontier language models can generate large volumes of sophisticated text, enabling spear phishing campaigns at scale. Authoritarian governments can misuse AI to make repressive domestic surveillance campaigns more efficient. AI can quite literally produce killer robots, or Lethal Autonomous Weapon Systems (LAWS). LAWS could fuel terrorism and enable ruthless, violent commanders to commit criminal acts without legal accountability, and militaries have already begun adopting them; such powerful systems need to be kept under tight control. Advanced image generation models are another potentially dangerous tool: they can be used to create harmful content depicting nudity, hate, and violence, or for harassment and exploitation, such as removing articles of clothing from pre-existing images. Proposed safeguards include removing explicit content from training data, filtering text prompts that violate terms of use, implementing use-rate limits to prevent at-scale abuse, and adding visual signatures to detect AI-generated images, alongside old-fashioned human monitoring for violations. Defensive AIs can also be developed against these misuses with proper training. Precautions should be taken so that AI interventions cannot inflict mental or physical harm on anyone. It is becoming clear that AI is growing very powerful, and that power must either be reined in or directed toward enhancing human life. Because AI models keep improving themselves, the conundrum remains: should we train them differently to keep them from becoming powerful beyond measure?
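
As one concrete illustration of the safeguards listed above, here is a minimal Python sketch combining a use-rate limit (a token bucket) with a naive terms-of-use prompt filter. The bucket parameters and the blocklist are illustrative assumptions, not any vendor's real policy.

```python
import time

BLOCKED_TERMS = {"spear phishing kit", "remove clothing"}  # illustrative blocklist only

class TokenBucket:
    """Simple token-bucket rate limiter to blunt at-scale abuse."""
    def __init__(self, capacity: int = 2, refill_per_sec: float = 0.1):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def screen_prompt(prompt: str) -> bool:
    """Reject prompts containing blocked terms (a deliberately naive filter)."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

bucket = TokenBucket()
for prompt in ["summarize this report", "build a spear phishing kit", "summarize again"]:
    if not bucket.allow():
        print(f"rate limit exceeded: {prompt!r}")
    elif not screen_prompt(prompt):
        print(f"blocked by terms of use: {prompt!r}")
    else:
        print(f"accepted: {prompt!r}")
```

A token bucket allows short bursts while capping sustained throughput, which is why it is a common design choice for blunting at-scale abuse.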

Killer robots will become a major headache for everyone as AI makes them more powerful than ever. The machine learning factor in AI is a double-edged sword, a boon and a bane in one. Drones are now commonplace; deployed as autonomous robots under the control of an AI, they could cause unimaginable devastation. AI-enabled robots are emotionless pieces of machinery that could wreak havoc over a large area, causing mass destruction. The AI tools embedded in robots make them more powerful than before, even though AI machines were originally developed to enhance automation in every human field possible.

Powerful AI enables weapons that use algorithms, rather than human judgement, to kill. Such weapons are immoral and a grave threat to national and global security: their algorithms are incapable of comprehending the value of human life. The UN Secretary-General has declared that machines with the power and discretion to take lives are politically unacceptable and morally repugnant, and should be prohibited nationally and globally. As a security threat, algorithmic decision-making lets weapons follow whatever trajectory the AI software draws, which is dangerous in military warfare and destabilizing at both national and international levels. AI-based killer robots also introduce new threats of proliferation and penetration, along with rapid escalation, unpredictability, and even the potential for mass destruction. On February 23-24, 2023, Latin American and Caribbean countries held a conference in Costa Rica on the social and humanitarian impact of autonomous weapons, or killer robots. It was the first regional conference on autonomous weapons, hosted by Costa Rica's Ministry of Foreign Affairs and Worship along with the Foundation for Peace and Democracy.

Conclusion

This discussion focused on the question "From ChatGPT to killer robots: is AI too powerful?" The article examined ChatGPT, LAWS, killer robots, and other AI systems that can be turned into weapons against humankind as they grow more powerful. The right training models and oversight can keep AI technology in check so that it serves humans rather than harming them in any way.