According to a study, ChatGPT can be tricked into generating malicious code that could be used to launch cyberattacks. OpenAI’s tool, along with similar chatbots, can generate written content in response to user prompts, having been trained on vast amounts of text data from the internet.
These tools are designed with safeguards to prevent misuse and address issues such as biases. However, bad actors have turned to purpose-built alternatives like the dark web tool called WormGPT, which experts warn can aid in the development of large-scale attacks.
Nevertheless, researchers at the University of Sheffield have cautioned that vulnerabilities also exist in mainstream options like ChatGPT and a similar platform created by the Chinese company Baidu. These vulnerabilities can be exploited to manipulate these tools into assisting in activities such as database destruction, personal information theft, and service disruptions.
One of the co-leaders of the study, Xutan Peng, a computer science PhD student, stated, “The risk with AI tools like ChatGPT is that more and more people are using them as productivity tools, rather than as a conversational bot. This is where our research shows the vulnerabilities are.”
AI-generated code ‘can be harmful’
Just as generative AI tools can unintentionally give incorrect answers to questions, they can also produce potentially damaging computer code without the user realising it. Mr. Peng suggested that a nurse could use ChatGPT to write code for navigating a database of patient records. He said: “Code produced by ChatGPT in many cases can be harmful to a database.” In such a scenario, the nurse could cause serious data management errors without even receiving a warning.
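To illustrate the kind of silent error being described, below is a purely hypothetical sketch; the table, column names and generated query are invented for illustration and are not taken from the study. A plausible-looking UPDATE statement that is missing its WHERE clause runs without any warning and quietly overwrites every record.

```python
# Hypothetical illustration of the kind of error described above.
# The table, columns and query are invented; they are not from the study.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT, ward TEXT)")
cur.executemany("INSERT INTO patients (name, ward) VALUES (?, ?)",
                [("Alice", "A1"), ("Bob", "B2"), ("Carol", "C3")])

# A chatbot asked to "move patient 2 to ward A1" might plausibly return
# an UPDATE that is missing its WHERE clause:
generated_sql = "UPDATE patients SET ward = 'A1'"

cur.execute(generated_sql)   # executes without any warning or error
conn.commit()

print(cur.execute("SELECT name, ward FROM patients").fetchall())
# Every patient is now assigned to ward A1 -- silent, large-scale data damage.
```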
During the study, the researchers were themselves able to produce malicious code using Baidu’s chatbot. Baidu has acknowledged the research and moved to fix the reported vulnerabilities.
These concerns have prompted calls for greater transparency in how AI models are trained, so that users can better understand and identify potential issues with the answers they are given. Cybersecurity firm Check Point has also urged companies to strengthen their protections, as AI threatens to enable more sophisticated attacks. The topic will be discussed at the UK’s AI Safety Summit next week, where the government has invited world leaders and industry giants to explore the opportunities and risks associated with the technology.