ChatGPT: A Hacker’s Dream?
Co-Written by Jay Thoden van Velzen, Strategic Advisor to SAP’s Chief Security Officer
You have probably heard of ChatGPT, a Large Language Model (LLM) that describes itself as “an AI-powered chatbot developed by OpenAI, based on the GPT (Generative Pretrained Transformer) language model. It uses deep learning techniques to generate human-like responses to text inputs in a conversational manner.” Microsoft has committed billions of dollars to this new super tool. Let’s review how this new AI tool will be used in cybersecurity – from the perspective of both the hacker and the cybersecurity expert.
ChatGPT: How Does it Work?
Since the launch of ChatGPT in November 2022, the artificial intelligence (AI) technology used to create it has received a great deal of media attention – remarkable for a technology that has been around for more than three decades.
AI tools mature through training on available data. Historically, not enough data was available to train them; over the last 15 years, however, the growing use of the internet and Software as a Service (SaaS) applications has produced a vast amount of training data. Cloud computing, meanwhile, has made the infrastructure needed to build complex models like ChatGPT far more accessible. Arguably, the AI tools developed over these last few years are quite mature and can perform many tasks as efficiently as humans.
Can Hackers Use ChatGPT to Launch a Cyberattack?
If you are nervous after seeing various stories about ChatGPT in the media, you are not alone. Can AI tools launch a cyberattack without human help? The answer is not simple. ChatGPT cannot itself launch a cyberattack, aside from common attacks you should already be protecting yourself against. However, hackers can use ChatGPT to research how to generate a malicious payload, and even ask it to help write the code. There is a lot more involved in delivering an effective cyberattack, operationally, than a working exploit.
ChatGPT is good at reproducing a consensus result on a training set of data that has already been analyzed and documented. Therefore, this architecture is, by definition, not capable of developing zero-day exploits – assuming patches are released and applied when available. Exploit development libraries like pwntools already provide far better automation support for malware developers.
ChatGPT can be used for reconnaissance – gathering information about the target, which is the first step in preparing for a cyberattack. ChatGPT can also take phishing attacks up a notch: phishing emails drafted by ChatGPT have a much more professional look and feel than traditional phishing emails. AI tools such as ChatGPT will not only narrow the skills gap, allowing attackers with only basic skills to launch attacks more effectively; they will also help skilled attackers become more productive.
Use of AI Tools in Cybersecurity – for Good
Like attackers, cybersecurity professionals can use AI tools to stay a step ahead. One of the blessings AI can bring to the information security industry is automation: manual, labor-intensive, and error-prone tasks can be automated with AI and machine learning. Incident response is one example. The analysis of log data for incident response has traditionally been an entirely manual process.
Things have become semi-automated more recently, as state-of-the-art SIEM (Security Information & Event Management) tools brought SOAR capabilities into cybersecurity. ChatGPT has been shown to be capable of writing YARA (Yet Another Ridiculous Acronym) rules for detections. An initial deployment of ChatGPT in an organization could quickly get security teams working more productively. For example, one of our colleagues recently published an article illustrating how ChatGPT accelerated his development workflow.
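For readers unfamiliar with what a detection rule actually does, here is a minimal Python sketch of the same idea: scan data for a set of suspicious byte patterns and flag it only when all of them match. The marker string and URL pattern are entirely hypothetical, and real YARA rules offer a far richer strings/condition syntax than this simplification suggests.

```python
import re

# Hypothetical detection rule: flag data containing BOTH a marker
# string and a suspicious download URL. A real YARA rule would express
# this with `strings:` and `condition:` sections.
SUSPICIOUS_PATTERNS = {
    "marker": re.compile(rb"EVIL_PAYLOAD_MARKER"),
    "dropper_url": re.compile(rb"https?://[a-z0-9.-]+/dropper\.bin"),
}

def matches_rule(data: bytes) -> bool:
    """Return True only if every pattern matches, mimicking an 'and' condition."""
    return all(pattern.search(data) for pattern in SUSPICIOUS_PATTERNS.values())

sample = b"... EVIL_PAYLOAD_MARKER ... fetch http://bad.example/dropper.bin ..."
print(matches_rule(sample))           # both patterns present
print(matches_rule(b"benign data"))   # no patterns present
```

A tool like ChatGPT can generate a first draft of such a rule from a plain-language description of the indicators, which an analyst then reviews and tunes.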
Providing answers to customers’ security questionnaires is another very manual and laborious task. AI can automate it and, again, save a lot of time and reduce the possibility of error by simply providing the best-matched answer to each questionnaire item.
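As a rough sketch of how such automation might work, the snippet below uses Python’s standard-library difflib to pick the closest previously approved answer for an incoming questionnaire item. The questions and answers in the bank are purely illustrative, and a production system would use a proper semantic-matching model rather than simple string similarity.

```python
import difflib
from typing import Optional

# Hypothetical answer bank: previously approved responses to common
# security-questionnaire items (wording is illustrative only).
ANSWER_BANK = {
    "Is customer data encrypted at rest?":
        "Yes, all customer data is encrypted at rest using AES-256.",
    "Do you enforce multi-factor authentication?":
        "Yes, MFA is required for all administrative access.",
    "How often are penetration tests performed?":
        "Independent penetration tests are performed annually.",
}

def best_matched_answer(question: str, cutoff: float = 0.5) -> Optional[str]:
    """Return the stored answer whose question most closely matches the
    incoming one, or None if nothing clears the similarity cutoff."""
    matches = difflib.get_close_matches(
        question, ANSWER_BANK.keys(), n=1, cutoff=cutoff)
    return ANSWER_BANK[matches[0]] if matches else None

# A slightly reworded question still finds the approved answer.
print(best_matched_answer("Is data encrypted at rest?"))
```

The key design point is that the tool only reuses vetted answers; anything below the cutoff is routed to a human rather than answered automatically.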
Orca Security recently integrated ChatGPT into their Cloud Security platform to provide remediation steps to customers for alerts the tool identified. Use cases like this are very encouraging to the industry and users. An LLM trained on secure-by-default code templates could conceivably accelerate the development of more secure cloud architectures.
AI and Open-Source Intelligence (OSINT) – A Perfect Combination
OSINT is quite popular in the information security industry and with security researchers because it provides a lot of information for free. Most of this information is retrieved from the web when the correct keywords are used in searches. AI-based tools like ChatGPT take the guesswork out of finding the right search terms and can return very concise information. The goal of OSINT is to gather information already freely available on the internet, and AI tools like ChatGPT are trained on the internet’s data, helping users navigate that information more effectively. It’s as if they were made for each other.
Like any other tool or technology, ChatGPT can be used for good or ill, with corresponding effects on society. Users must understand the risks and side effects of AI-based tools before using them.
SAP notes that posts about potential uses of generative AI and large language models are merely the individual poster’s ideas and opinions, and do not represent SAP’s official position or future development roadmap. SAP has no legal obligation or other commitment to pursue any course of business, or develop or release any functionality, mentioned in any post or related content on this website.