Sun | Feb 5, 2023 | 8:12 AM PST

As early as 1950, visionary computer scientists like Alan Turing speculated about the possibility of machines that could interpret instructions and learn new skills much like the human mind. Now, a few cheesy sci-fi movies later, functional examples of artificial intelligence are becoming an increasingly common part of our day-to-day lives.

One of the biggest developments in mainstream artificial intelligence (AI) in recent years has been the launch of ChatGPT, a large language model that can understand conversational inputs and produce responses in the same style.

Though the potential of ChatGPT has been lauded by thought leaders across a diverse range of industries, this young technology has also stirred important discussions about the potential security risks it poses.

In this post, we'll take a closer look at the looming security concerns for any organization using ChatGPT, and what leaders should be looking out for to mitigate them.

What ChatGPT does

To put it simply, ChatGPT is a tool that uses machine learning to understand written language and generate responses tailored to its inputs. 

It's a form of AI known as a natural language processing (NLP) model, developed through the analysis of massive banks of text-based data, which gives it an uncanny ability to understand the semantics and context of the words and phrases it encounters.
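ChatGPT itself is used through a chat interface, but the same family of models can also be reached programmatically through OpenAI's API. As a minimal sketch, assuming the pre-1.0 openai Python package and the text-davinci-003 completion model (the API key is a placeholder):

import openai

# Placeholder key; a real one comes from the OpenAI dashboard.
openai.api_key = "YOUR_API_KEY"

# Send a prompt to a completion model and print its reply.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain in two sentences what a natural language processing model does.",
    max_tokens=100,
    temperature=0.7,
)

print(response.choices[0].text.strip())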

With more than a million users and counting, ChatGPT has taken the world by storm with its potential for use in a wide variety of areas, such as essay and article writing, brainstorming based on unique prompts, and writing code in Python, SQL, and other popular languages.

Though ChatGPT is still in its early days, some of the world's most influential tech giants are rushing to be early adopters and investors, with Microsoft investing $1 billion in OpenAI, ChatGPT's creator, to support its mission of "building artificial general intelligence (AGI) with widely distributed economic benefits."

An increasing number of businesses are beginning to adopt it for internal tasks, including organizing and researching information, sketching out business plans, optimizing sales copy, and more.

From straightforward applications like developing quality chatbots to more innovative and "outside-the-box" projects, the coming years will see a wave of ChatGPT adoption that will redefine the way businesses interact with AI.

The security concerns for businesses using ChatGPT

Though the vast majority of ChatGPT use cases have been benign, the technology also carries a number of security risks that businesses must be aware of when working with it.

Here are some of the key security concerns to look out for as ChatGPT adoption becomes more widespread.

Spreading misinformation

ChatGPT and similar language models can be used to draft complex and convincing pieces of content. However, these tools can't fact-check the information they collate, meaning they can easily be used to spread false and misleading information through social media and other platforms.

This is an especially concerning prospect given ChatGPT's ability to create personalized content that can spread quickly among specific audience segments and influence people's behavior.

This security concern is a major talking point in the world of digital marketing, where ChatGPT's capacity to support human copywriters is being explored with fervor.

"Businesses should keep an optimistic mindset when embracing new technology like ChatGPT," says Maxine Bremner, Head of Content & Outreach at Hive19. "Companies will fall short when assuming this disruptive AI tech can instantly replace the need for humans. There's a risk that businesses will publish content that may not be fact-checked, sourced correctly, or in line with providing authoritative information to audiences; after all, ChatGPT relies completely on its own internal knowledge and logic."

New kinds of phishing emails and spam

ChatGPT has caused a stir with its ability to come up with new and creative approaches to routine business processes, but that same creativity can serve malicious purposes just as easily as legitimate marketing initiatives.

Because it can produce original content tailored to a detailed audience profile, relatively inexperienced cybercriminals could leverage ChatGPT to produce falsified messages that appear to come from a trusted source and use them to steal sensitive information.

It's true that OpenAI has taken steps to prevent its tools from being used for unethical activities. If you were to visit ChatGPT and simply enter the command "write code for a virus," it would respond with a message like the following:

"I'm sorry, but as an AI language model created by OpenAI, I do not engage in or support any illegal or malicious activities, including the creation of viruses. The creation and distribution of viruses is illegal and harmful to individuals and organizations. It is important to use technology ethically and for the betterment of society."

However, like many pieces of software, there are ways around these rules. In one article from cybersecurity publication SC Media, a journalist describes how they got the AI to create a phishing email simply by framing the request as part of a piece of fiction.

Writing malware

Just as it can generate fake news and scam content, ChatGPT has the potential to make life easier for people who want to write malicious software.

With coding becoming an increasingly sought-after skill and once-specialized programming knowledge now widely available, the emergence of a new generation of cybercriminals has been a concern for some time. Until recently, however, writing malicious code still required a certain degree of skill and understanding.

Language models are now being used by relatively unskilled hackers to create malware, multiplying the number of people capable of stealing confidential files, hijacking computers, and worse.

One recent report from Israeli cybersecurity firm Check Point specifically named ChatGPT as a tool that novice cybercriminals are recommending to one another for writing malicious Python-based code, with one hacking forum user sharing a stealer that "searches for common file types, copies them to a random folder inside the Temp folder, ZIPs them and uploads them to a hardcoded FTP server."

Preparing for new threats

As ChatGPT and similar machine learning models become more widespread and democratized, cybersecurity professionals must look for emerging patterns and take a proactive approach to harm reduction. 

Though no one knows what the future of AI holds, taking the time to educate yourself on its capabilities and shortfalls will help you prepare to work with it more flexibly and effectively.
