ChatGPT, a chatbot developed by OpenAI, is all the rage right now, so popular that the site continually throws up an over-capacity message. Launched in November 2022, ChatGPT is designed to provide detailed, articulate responses across many domains of knowledge.
The ability to ask a question on just about any topic and get an intelligent answer has cybersecurity experts wondering whether the InfoSec community is using it, what for, and how well it works for tasks such as writing scripts and code or imitating phishing emails.
Bad actors have access to this technology, too, so how are they using it? Yikes.
A recent thread on Reddit proposed the topic, "I was hoping any infosec pros would comment on chatGPT and how it might change infosec," and here are some key comments:
- "It's been helpful. The other day I fed davinci 3 a maturity checklist and had it classify some things. The script to do that was written by ChatGPT. However, when I've asked it to write more specific SOAR code, it has often given me non-working code. One time I asked it for a simple Nginx config for a TLS route and it forgot the ssl declaration. So it can't steal my job, but it can help me be more productive."
- "I have used it to help with writing remediation tips for pentest reports. It has some great tips and saves time googling and brainstorming."
- "The script I saw it write is very entry-level, no complex programming logic, and I think anyone who took the first two years of a computer science program could do it. When it can do complex data structures, then it gets interesting."
- "Had it write a basic PowerShell script that saves a copy of the registry before and after. Useful for basic malware analysis. I personally had to mess around with it to get it to run (script execution policy and all, as well as tweaking some of the script), but it served as a cool proof of concept. Looks like others have had better experience with it, though. Was on Twitter and someone managed to trick it into helping with a 'CTF,' and it was able to solve and create a buffer overflow. Very interesting to see the ramifications of this technology and the overabundance of script kiddies in the near future."
- "Writing lists of examples, suggestions, and documentation is easy with ChatGPT. I can type it, but not as fast as ChatGPT; then I make modifications where needed."
- "It can help automate or simplify actions that need generic copy or for basic coding/scripting. I don't think anything profound will happen on the defensive side."
- "We are evaluating it right now for basic content creation (guidance) - something we always have trouble finding time for. I had hoped to make it do basic gap assessments against known best practice or, even better, arbitrary content, but it doesn't seem to be a realistic use case (or at least, I've yet to find a way to do this)."
- "I had it write a phishing email. The grammar and language are correct and natural-sounding. In the majority of cases, phishing emails have atrocious grammar and word choice. This could help scammers write better and much more convincing scam emails than we are used to."
- "I'm not a pro by any means but it's been very helpful in helping me debug programs. I'll give it a compiler error along with my code and it tells me exactly what's wrong. I haven't had to use google or stackoverflow. Also it's really great for writing. I'm not a very good writer but when I write something and say 'make this sound more professional,' chatGPT spits out some fire."
- "Agree here. I am finding Google more and more frustrating in searches. I would go as far as to say its results are successfully being manipulated by SEO and/or contain way too many pointless results due to the focus on advertising. ChatGPT, I find, can be specific and precise, in a form that is a great summary you can build on."
- "I suspect it will enable hackers and scammers to more quickly construct persuasive phishing emails that draw on information from a target's social media or LinkedIn bio. Like any tool, it has the potential to be used for good or evil."
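The Nginx mistake one commenter describes is worth spelling out: a TLS route needs the `ssl` parameter on the `listen` directive, plus certificate paths, or Nginx will serve plain HTTP on the port. A minimal working server block looks roughly like this (the hostname, certificate paths, and upstream address are placeholders, not from the commenter's config):

```nginx
server {
    # The 'ssl' parameter here is the declaration ChatGPT reportedly omitted.
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.pem;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

Omissions like this are easy to miss precisely because the rest of the generated config looks plausible, which is the commenter's broader point about checking the model's output.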
Here's what ChatGPT itself spit out when we asked, "What are the implications of using ChatGPT for InfoSecurity?"
"It is not clear what you mean by 'chatGPT,' but if you are referring to using a language model such as GPT (Generative Pre-trained Transformer) for infosecurity purposes, there are several potential implications to consider.
One potential use of a language model in infosecurity is to generate phishing emails or social engineering attacks that are difficult for humans to distinguish from genuine communication. This could be a serious concern, as such attacks could potentially be used to compromise sensitive information or systems.
Another potential use of a language model in infosecurity is to analyze and classify large volumes of text data, such as logs or social media posts, to identify potential security threats. This could potentially be useful in detecting and preventing cyber attacks or other security breaches.
However, it is also important to note that language models are not perfect, and they may not always be able to accurately identify or classify security threats. It is important to use language models in conjunction with other security measures, and to carefully evaluate their performance and limitations."
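ChatGPT's second suggested use, classifying large volumes of log text to surface potential threats, can be sketched with a deliberately trivial keyword-based classifier. This is a hypothetical illustration only; the pattern list and log lines are made up, and a real deployment would rely on SIEM rules or a trained model rather than substring matching.

```python
# Toy log-line classifier: flags lines containing indicators loosely
# associated with attack activity. The pattern list is illustrative,
# not a real detection ruleset.
SUSPICIOUS_PATTERNS = [
    "failed password",          # brute-force attempts
    "authentication failure",
    "sql syntax",               # possible SQL injection probing
    "../",                      # path traversal attempt
    "<script>",                 # reflected XSS probe
]

def classify_line(line: str) -> str:
    """Return 'suspicious' if any known pattern appears, else 'benign'."""
    lowered = line.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern in lowered:
            return "suspicious"
    return "benign"

# Hypothetical sample log lines.
logs = [
    "Dec 12 10:01:22 sshd[88]: Failed password for root from 203.0.113.7",
    "GET /index.html 200",
    "GET /../../etc/passwd 404",
]
for line in logs:
    print(classify_line(line), "|", line)
```

The point the quoted response makes still applies at this scale: a classifier like this will miss novel threats and mislabel benign lines, so it belongs alongside other security measures, not in place of them.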
This related post on Tech Monitor tackles the topic, as well, examining how this new AI technology can be used to launch cyberattacks.