A new study claims AI-powered chatbots, including OpenAI’s widely-popular ChatGPT could help hackers launch cyberattacks.
Users can take advantage of a vulnerability and trick the AI chatbot into writing malicious code that can not only hack into databases but also steal sensitive information.
According to Researchers from the University of Sheffield’s Department of Computer Science, people are likely to accidentally do so and cause vital computer systems to crash.
For example, a nurse could ask ChatGPT to write a code to help them search through clinical records. There are no prizes for guessing, this could disrupt the network.
In line with this, new Microsoft research showed the latest Chat GPT-4 is easier to manipulate than its predecessor, ChatGPT-3.
Furthermore, the team from the University of Sheffield noted that even the companies behind these complex chatbots were not aware of the threats they posed.
AI Safety Summit 2023: What to expect?
Interestingly, the study has been published ahead of the AI Safety Summit, which is slated to take place on November 1 and 2 in the UK. The event focuses on deploying the technology safely.
Tech moguls, academics and global leaders will meet face-to-face to agree on a framework to protect people from the potential “catastrophic” harm of artificial intelligence.
OpenAI said the specific loophole has been fixed since the issue was flagged. However, the team at Sheffield’s Department of Computer Science, which believes there could be more similar loopholes, asked the cybersecurity industry to check the issue in more detail.
The paper shows that Text-to-SQL systems can be used to attack computer systems in the real world. It is worth noting that Text-to-SQL systems allude to AI capable of searching databases by asking questions in plain language.
Notably, all five commercial AI tools that researchers analysed didn’t shy away from producing malicious codes, which could leak confidential information or interrupt and completely destroy services.
As per the findings, even ordinary people could now carry out such attacks. As a result, innocent users could end up accidentally infecting computer systems.
PhD student at the University of Sheffield Xutan Peng, who co-led the research said: “In reality, many companies are simply not aware of these types of threats and due to the complexity of chatbots, even within the community, there are things that are not fully understood. At the moment, ChatGPT is receiving a lot of attention.”
While risks to the service itself are minimal, Peng pointed out that the standalone system can be tricked into producing malicious code. Unsurprisingly, this code can be used to do serious harm to other services.
He went on to note that the risk with ChatGPT and other similar conversational bots is that people are using them as productivity tools. “This is where our research shows the vulnerabilities are,” Peng explained.