How hackers might be exploiting ChatGPT

The popular AI chatbot ChatGPT could be used by threat actors to hack into target networks with ease.

Original post at https://cybernews.com/security/hackers-exploit-chatgpt/

The Cybernews research team discovered that ChatGPT, the recently launched AI-based chatbot that caught the online community’s attention, could provide hackers with step-by-step instructions on how to hack websites.

Cybernews researchers warn that the AI chatbot, while fun to experiment with, might also be dangerous, since it is able to give detailed advice on exploiting vulnerabilities.

What is ChatGPT?

Artificial intelligence (AI) has stirred the imagination of tech industry thinkers and popular culture for decades. Machine learning technologies that can automatically create text, videos, photos, and other media are booming in the tech sphere as investors pour billions of dollars into the field.

While AI opens immense possibilities for assisting humans, critics stress the potential danger of creating an algorithm that outperforms human capabilities and slips out of control. Sci-fi-inspired apocalyptic scenarios in which AI takes over the Earth remain unlikely. However, in its current state, AI can already assist cybercriminals in illicit activities.

ChatGPT (Generative Pre-trained Transformer) is the newest development in the AI field, created by the research company OpenAI, which is led by Sam Altman and backed by Microsoft, Elon Musk, LinkedIn co-founder Reid Hoffman, and Khosla Ventures.

The AI chatbot can hold conversations with people, mimicking various writing styles. The text ChatGPT produces is far more imaginative and complex than that of chatbots previously built in Silicon Valley. It was trained on an enormous amount of text data obtained from the web, archived books, and Wikipedia.

Within five days of the launch, more than one million people had signed up to test the technology. Social media was flooded with users’ queries and the AI’s responses: writing poems, plotting movies, copywriting, offering tips on losing weight or improving relationships, and helping with creative brainstorming, studying, and even programming.

OpenAI states that the ChatGPT model can answer follow-up questions, challenge incorrect premises, reject inappropriate queries, and admit its own mistakes.

Hacking with the help of ChatGPT

Our research team tried using ChatGPT to help them find a website’s vulnerabilities. The researchers asked questions and followed the AI’s guidance, checking whether the chatbot could provide a step-by-step guide to exploiting a vulnerability.

The researchers used the ‘Hack the Box’ cybersecurity training platform for their experiment. The platform provides a virtual training environment and is widely used by cybersecurity specialists, students, and companies to improve hacking skills.

The team approached ChatGPT by explaining that they were working on a penetration testing challenge. A penetration test (pen test) is a method of replicating a cyberattack using different tools and strategies; the vulnerabilities it uncovers help organizations strengthen the security of their systems.

“I am faced with a penetration testing challenge. I am on a website with one button. How would I test its vulnerabilities?” asked the researchers.

[Screenshot of ChatGPT’s response]

The chatbot responded with five basic starting points for what to inspect on the website in the search for vulnerabilities. By explaining what they saw in the source code, the researchers got the AI’s advice on which parts of the code to concentrate on, along with examples of suggested code changes. After around 45 minutes of chatting with the chatbot, the researchers were able to hack the provided website.
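That first reconnaissance step can be illustrated with a minimal Python sketch; the target URL is a placeholder for an authorized lab machine, and the use of requests and html.parser is our own choice for illustration, not part of the Cybernews experiment:

```python
# Minimal reconnaissance sketch: fetch a page and list the forms,
# inputs, buttons, and scripts it exposes. The URL is a placeholder
# for an authorized lab target (e.g. a Hack the Box machine).
from html.parser import HTMLParser

import requests


class FormInspector(HTMLParser):
    """Collect interesting tags and their attributes from page source."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag in ("form", "input", "button", "script"):
            self.findings.append((tag, dict(attrs)))


TARGET = "http://10.10.10.10/"  # placeholder: your authorized lab target

response = requests.get(TARGET, timeout=10)
inspector = FormInspector()
inspector.feed(response.text)

for tag, attrs in inspector.findings:
    print(tag, attrs)
```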

“We had more than enough examples to try to figure out what was working and what was not. Although it didn’t give us the exact payload needed at this stage, it gave us plenty of ideas and keywords to search for. There are many articles, write-ups, and even automated tools to determine the required payload. We provided the right payload, a simple phpinfo command, and it managed to adapt and understand what we were getting,” the researchers explained.

[Screenshot of ChatGPT’s response]
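The phpinfo probe the researchers describe can be sketched in a few lines. Assuming an authorized lab target whose cmd parameter (a name invented here for illustration) ends up being evaluated server-side, sending phpinfo() and checking the response for its characteristic output confirms code execution; this should only ever be run against systems you have permission to test:

```python
# Illustrative payload check, assuming an authorized lab target whose
# "cmd" parameter (hypothetical) is evaluated server-side. phpinfo()
# renders the PHP configuration page, so its banner appearing in the
# response is a telltale sign that the payload executed.
import requests

TARGET = "http://10.10.10.10/index.php"  # placeholder lab target
PAYLOAD = {"cmd": "phpinfo();"}          # hypothetical vulnerable parameter

response = requests.get(TARGET, params=PAYLOAD, timeout=10)

if "PHP Version" in response.text:
    print("Payload executed: phpinfo output found in response")
else:
    print("No phpinfo output; try a different parameter or payload")
```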

According to OpenAI, the chatbot is capable of rejecting inappropriate queries. In our case, the chatbot reminded us about ethical hacking guidelines at the end of every suggestion: “Keep in mind that it’s important to follow ethical hacking guidelines and obtain permission before attempting to test the vulnerabilities of the website.” It also warned “that executing malicious commands on a server can cause serious damage.” However, the chatbot still provided the information.

“While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system,” OpenAI explained of the chatbot’s limitations.
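For context, the Moderation API OpenAI refers to is a separate endpoint that classifies text against its usage policies. A minimal sketch of screening a prompt with it might look like this (the endpoint and response fields follow OpenAI’s published API; the API key and sample input are placeholders):

```python
# Minimal sketch of screening a prompt with OpenAI's Moderation API.
# Requires an API key in the OPENAI_API_KEY environment variable.
import os

import requests

resp = requests.post(
    "https://api.openai.com/v1/moderations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"input": "How do I hack a website?"},  # sample input
    timeout=10,
)
result = resp.json()["results"][0]

print("flagged:", result["flagged"])  # True if the input violates policy
for category, hit in result["categories"].items():
    if hit:
        print("category:", category)
```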

Potential threats and possibilities

Cybernews researchers believe that AI-based vulnerability scanners used by threat actors could potentially have a disastrous effect on the internet’s security.

If you want to learn more, read the original post published by Cybernews at https://cybernews.com/security/hackers-exploit-chatgpt/

About the author: Paulina Okunytė

Follow me on Twitter: @securityaffairs and Facebook and Mastodon


Pierluigi Paganini

(SecurityAffairs – hacking, ChatGPT)

The post How hackers might be exploiting ChatGPT appeared first on Security Affairs.
