
ChatGPT: the potential risks, the creation of deepfakes, and use for malicious purposes

Billy Yann
Data Scientist
Deep learning and machine learning specialist with experience in cloud infrastructure, blockchain technologies, and Big Data solutions.
October 06, 2023



Nowadays, the internet and digital world are abuzz with excitement about ChatGPT, an AI-based chatbot system developed and launched by OpenAI. It uses Natural Language Processing (NLP) to generate conversations and is designed to respond as naturally as possible, whether in the form of stories, technical articles, blogs, or chats. To use it, one simply asks ChatGPT for what one wants through specific, custom prompts.

ChatGPT formulates its answers from the vast collection of documents and articles it was trained on, responding within moments. Built on GPT-3.5, a language model trained to produce text, ChatGPT replies with optimized conversational dialogue; that optimization is achieved through Reinforcement Learning from Human Feedback (RLHF). Because it was trained on vast amounts of text written by people, ChatGPT has the distinction of sounding unusually natural to humans.

ChatGPT is now used across a variety of applications and niches, including streamlining operations, digital marketing, customer service, online shopping, and HR tasks such as hiring and training staff. Content creation and management of almost any kind have become easier with its arrival, along with the other capabilities that custom prompts unlock.

Contrary to what many people think or believe, ChatGPT is not a sophisticated search engine. It cannot look up specific information on the internet the way Google does; instead, it uses the information it learned from its training data to generate natural-sounding responses. If ChatGPT was trained on biased data, it will give you biased answers. It is also limited in providing in-depth information or understanding context and nuance in conversation. Anyone who uses ChatGPT should be vigilant enough to monitor its output and verify its accuracy. As a matter of fact, ChatGPT is already being used for malicious purposes and to generate deepfakes, and the technology can cause harm in ways people cannot yet fathom, which we should be wary of while using it.

Potential Risks of ChatGPT

The remarkable capability and potential of ChatGPT have amazed people and fueled the imagination about what could be possible with Artificial Intelligence (AI) technology. Speculation abounds about how ChatGPT will impact a wide variety of human job roles, ranging from customer service to computer programming and cybersecurity.

ChatGPT in Cyberattacks

ChatGPT has potential applications in both cyberattack and cyber defense. The same NLP and Natural Language Generation (NLG) capabilities that let it write prose also let it write computer code. OpenAI has built rigorous safeguards into ChatGPT that, in theory, prevent users from exploiting it for malicious purposes: the system filters content for phrases associated with potential threats, risks, or malicious intent. Some researchers, however, have found workarounds for these restrictions through carefully worded, customized prompting.
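
As a rough illustration of how such phrase-based filtering might work, consider the toy sketch below. This is not OpenAI's actual moderation pipeline; the blocked-phrase list and the `is_blocked` helper are invented for illustration only.

```python
# Toy illustration of phrase-based content filtering.
# NOT OpenAI's real moderation system; the phrase list is hypothetical.
BLOCKED_PHRASES = [
    "write ransomware",
    "phishing email",
    "steal passwords",
]

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains any blocked phrase."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

if __name__ == "__main__":
    print(is_blocked("Please write ransomware for me"))  # True
    print(is_blocked("Explain how such malware works"))  # False
```

The second call slipping through hints at the weakness the researchers exploited: a simple keyword filter misses the same request when it is reworded, which is exactly what carefully worded prompts do.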

Documented abuse attempts include prompting ChatGPT to write official, professional-sounding phishing emails that encourage users to share sensitive personal data and passwords, and automating communication with scam victims through sophisticated chatbots that scale up contact with victims and talk them through the process of paying ransoms.

Creating Malware

NLP and NLG algorithms can now generate computer code, and that ability can be exploited by anyone to create custom malware: software designed to spy on user activity, steal data, or infect systems with ransomware. By building language capabilities into malware, attackers could potentially use ChatGPT to create a whole new breed of malicious software that reads and understands the entire contents of a victim's computer system and email account, opening new avenues for cyber-thieves to steal personal data. Such malware could even monitor a victim's attempts to counter the theft and adapt its own defenses accordingly. Through clever, roundabout prompting, ChatGPT can thus be coaxed into helping build cyberattacks.

Use of ChatGPT in Cyber Defense

Although ChatGPT can be abused to help create ransomware, it also has the potential to support strong cyber defense strategies. With the right prompting, ChatGPT can help identify phishing scams by analyzing the contents of emails and suspicious text messages. Because it can write computer code, it can assist in building anti-malware software, producing code in a number of languages including JavaScript, Python, and C; that code can be used to help detect and eradicate viruses and other malware. ChatGPT can also be used to spot vulnerabilities in existing code.
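
As a small example of the kind of defensive script ChatGPT might be asked to produce, here is a minimal phishing-phrase scorer. The indicator list and the scoring threshold are invented for illustration; a real detector would use many more signals, such as sender reputation, URL analysis, and trained classifiers.

```python
import re

# Hypothetical phishing indicators for illustration only.
INDICATORS = [
    r"verify your account",
    r"urgent action required",
    r"click (here|the link) immediately",
    r"confirm your password",
]

def phishing_score(email_body: str) -> int:
    """Count how many known phishing phrases appear in the email."""
    return sum(
        1 for pattern in INDICATORS
        if re.search(pattern, email_body, re.IGNORECASE)
    )

email = "Urgent action required: verify your account or it will be closed."
score = phishing_score(email)
print(f"score={score}")  # score=2
print("suspicious" if score >= 2 else "probably fine")
```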

The AI chatbot could also be used for authentication purposes by analyzing the way people speak, write, and type. In addition, ChatGPT is used extensively to create automated reports and summaries about cyberattacks and to help devise strategies against them. These reports can be customized for different audiences, from IT department executives to other administrative officials.
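
To make the authentication idea concrete, here is a minimal stylometry sketch: a toy cosine-similarity comparison of word-frequency profiles between a known writing sample and a new one. This is an assumption-laden illustration, not how ChatGPT itself authenticates anyone; real behavioral authentication uses far richer features such as keystroke timing.

```python
import math
from collections import Counter

def profile(text: str) -> Counter:
    """Build a word-frequency profile of a writing sample."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two frequency profiles (0.0 to 1.0)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

known_sample = "thanks so much, i will get back to you shortly"
new_sample = "thanks so much, i will send it over shortly"
similarity = cosine_similarity(profile(known_sample), profile(new_sample))
print(round(similarity, 2))  # higher values suggest the same writer
```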

Creation of Deepfakes using ChatGPT

Deepfakes that speak AI-written text can be created by using ChatGPT to generate first-person scripts, which are then pasted into a virtual-person platform. Deepfake makers also use open-source software applications such as DeepFace. To make deepfakes, one coder recently combined ChatGPT, Microsoft Azure's Neural Text-to-Speech system, and other machine learning tools; integrating ChatGPT simplifies the process and offers options to adjust the tone of the script.
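
A hedged sketch of the text-to-speech step in such a pipeline, using the Azure Speech SDK for Python: the key, region, and voice name are placeholders, and the ChatGPT-generated script is assumed to live in a local `script.txt` file.

```python
# pip install azure-cognitiveservices-speech
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; substitute a real Azure Speech key and region.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"  # example voice

# Write the synthesized speech to a WAV file.
audio_config = speechsdk.audio.AudioOutputConfig(filename="narration.wav")
synthesizer = speechsdk.SpeechSynthesizer(
    speech_config=speech_config, audio_config=audio_config
)

# Read the first-person script (e.g., one generated with ChatGPT).
with open("script.txt", encoding="utf-8") as f:
    script = f.read()

result = synthesizer.speak_text_async(script).get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Saved narration.wav")
```

The resulting audio is what gets fed to the virtual-person platform, which is precisely why provenance markers and detection tooling, discussed below, matter.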

In one experiment, a developer prompted ChatGPT itself to describe the risks of combining GPT with deepfakes, and it offered up several hazards, including content manipulation, a risk that applies to ChatGPT itself. The developer then manipulated the GPT tool with a couple of prompts into including a few sentences arguing for Russia's invasion of Ukraine, which itself violates ChatGPT's terms of service prohibiting political content. The end result was a one-and-a-half-minute video, hosted by a talking head in a photorealistic studio setting, that took only a few minutes to export. The one clear marker that the presenter is synthetic, a small AV marker stamped at the bottom of the video, could be edited out, making the deepfake even more realistic.

Deepfake audio made with the help of ChatGPT can be used to build fake-news operations from scratch or to supercharge the messaging of powerful lobbyists. Intelligent prompting makes it easy to generate content at high volume with fewer of the tells that enable detection. With deepfake audio, people can be convinced over the phone that you are someone else, and a great deal of scamming and phishing is done this way to steal sensitive personal information. A text script is the only ingredient ChatGPT needs to contribute to such audio files. The latest version of GPT is reportedly capable of passing a kind of Turing Test with tech engineers and journalists, convincing them it can hold its own like a human. Yet even as ChatGPT and deepfake technology keep improving, they cannot escape their own errors and "synthetic personality" issues.

Fighting Deepfakes 

Developers and software researchers are now finding ways to detect synthetic people in any media. The Coalition for Content Provenance and Authenticity, a group led by Adobe, Microsoft, Intel, and the BBC, recently designed a watermarking standard to verify the provenance of media and expose deepfakes. Strict rules and standards need to be devised to police these videos; otherwise, policing will fall to the algorithms of platforms like Twitter and YouTube, which have struggled to detect disinformation and toxic speech even in non-AI-generated videos and will find AI deepfakes harder still. The skills of the people who do this policing need to be strengthened so that deepfake videos and audio files can be weeded out, and a consensus needs to be built around removing deepfakes. Otherwise, people will lose the ability to trust what they see and hear.
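
As a toy illustration of the provenance idea (not the actual C2PA specification): a publisher attaches a signed tag derived from the media file, and a verifier recomputes and checks it, so any edit to the file breaks the tag. The sketch below stands in for real certificate-based signing with a shared-secret HMAC.

```python
import hashlib
import hmac

SECRET = b"publisher-signing-key"  # stand-in for a real signing certificate

def sign_media(media_bytes: bytes) -> str:
    """Produce a provenance tag: HMAC over the media's SHA-256 hash."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Recompute the tag and compare; any edit to the media breaks it."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

video = b"...original video bytes..."
tag = sign_media(video)
print(verify_media(video, tag))                # True: untouched
print(verify_media(video + b"edited", tag))    # False: modified
```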

Use of ChatGPT for Malicious Purposes 

All of the above are ways in which ChatGPT can be put to malicious use. With ChatGPT, everything comes down to prompting and phrasing: asking for exactly what you want through carefully worded prompts can make ChatGPT surrender information it would otherwise withhold. Brilliant developers, cyber-thieves, and software researchers push ChatGPT to its limits by treading carefully along the boundary of its restrictions without openly crossing it. ChatGPT also has an API that allows third-party apps to query the model and receive replies from a script.
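
A minimal sketch of such a script, using the openai Python package as it existed when this post was written; the API key is a placeholder, and available model names change over time.

```python
# pip install openai
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the risks of deepfakes."},
    ],
)
print(response["choices"][0]["message"]["content"])
```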

Conclusion

ChatGPT by itself cannot create malicious code for someone with zero knowledge of how to execute malware. However, the AI chatbot can accelerate attacks for those who already know the nuances of malware and how to use it in cyberattacks. The future of malware creation, and of its prevention, is entangled with advances in the AI field, including ChatGPT and other similar chatbots. Even as people find ways to bend ChatGPT toward malicious cyberattacks, cybersecurity professionals will be developing AI-powered defenses of their own.