By Cam Sivesind
Thu | Apr 6, 2023 | 2:22 PM PDT

ChatGPT and other AI content-generating platforms are all the rage, raising issues around accuracy and ethics, as well as whether the good guys are outplaying the bad guys when it comes to using these tools.

So it's fitting that a recent SecureWorld webcast, sponsored by Spirion, tackles the topic of "Does ChatGPT Belong on Your Cyber Risk Register? Some Legal Perspectives." Check out the on-demand Remote Sessions webcast and earn 1 CPE credit for attending.

In this presentation, data protection attorneys offer their perspectives on this new threat and how organizations can best reflect it in their registers, as well as answer a few questions.

Key takeaways include:

  • The relationship between cyber risk registers and their enterprise risk counterparts
  • The nature of the threat of ChatGPT and other contemporary AI software
  • Ideas for presenting the risks of AI to your board and obtaining the resources you need

Interestingly enough, Scott Giordano, VP of Corporate Privacy and General Counsel at Spirion, used ChatGPT to ask the question, "Given that ChatGPT has the ability to identify weaknesses in code, generate malware, and craft phishing emails, does it belong on an organization's risk register?" And here's what ChatGPT had to say about itself:

"The risks associated with ChatGPT's capabilities may include:

  • Malicious code generation:
    ChatGPT's ability to identify weaknesses in code and generate code can be misused to create malicious code that can harm an organization's systems or steal sensitive data.
  • Malware generation:
    ChatGPT's ability to generate malware can be misused to create malware that can infect an organization's systems, disrupt operations, or steal sensitive data.
  • Phishing email generation:
ChatGPT's ability to craft convincing phishing emails can be misused to trick employees or customers into revealing sensitive information or clicking on malicious links.

As such, organizations should carefully evaluate the risks associated with using ChatGPT and take appropriate measures to mitigate or manage the risks, including implementing strong controls, monitoring its use, and training employees on how to identify and respond to potential security threats."

[RELATED: Tech Leaders Call for Pause on AI Development]

Panelist Jody R. Westby, Esq., CEO of Global Cyber Risk LLC, offers this advice:

"You need to have a process for evaluating the risks associated with that technology, in this case ChatGPT. You need to break that risk down into at least four categories: technical, operational, legal, and managerial. The reason for that is we do cyber risk assessments all the time of enterprise security programs, and we'll see, for example, where they may have a technology they've deployed, but they have nothing in their policies and procedures for how it should be used or handled, or how it should interact with operations. It may be working fine technically, but it could have a legal risk, it could have an operational risk. The differences in the rankings of those could be significant."
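The four-category ranking Westby describes can be sketched as a simple risk-register entry. This is an illustrative assumption only: the `RiskEntry` structure and the 1-5 scoring scale below are hypothetical, not part of any standard or of Westby's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """Hypothetical register entry scoring one technology across
    Westby's four categories on an assumed 1 (low) to 5 (high) scale."""
    technology: str
    technical: int
    operational: int
    legal: int
    managerial: int

    def rankings(self) -> dict:
        """Return the categories ordered from highest to lowest score."""
        scores = {
            "technical": self.technical,
            "operational": self.operational,
            "legal": self.legal,
            "managerial": self.managerial,
        }
        return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

# Illustrative scores: a tool can work fine technically while its legal
# risk dominates -- exactly the divergence Westby warns about.
entry = RiskEntry("ChatGPT", technical=2, operational=3, legal=5, managerial=4)
print(entry.rankings())
```

Keeping the four scores separate, rather than collapsing them into one number, is what makes the divergence between categories visible to the board.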

Panelist David N. Patariu, Attorney at Venable LLP, said matter-of-factly that there is risk for organizations in not using AI tools:

"AI is really important to think about as a way to address certain problems," Patariu said, citing an example where he was searching mass amounts of data for particular types of transactions while needing to screen out others. "You can actually use AI classifiers and train your AI, if you have a set of things you don't want, to look through these mass amounts of data."
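The approach Patariu describes, training a classifier on a small labeled set and then letting it filter a large pool, can be sketched in a few lines. The labels, example transactions, and the naive Bayes technique below are all illustrative assumptions, not his actual tooling.

```python
import math
from collections import Counter

def train(examples):
    """Build per-label word counts from (text, label) pairs."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(text, counts):
    """Pick the label whose training words best match the text,
    using log-likelihood with add-one smoothing."""
    words = text.lower().split()
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        total, vocab = sum(c.values()), len(c)
        score = sum(math.log((c[w] + 1) / (total + vocab)) for w in words)
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical labeled set: transactions to review vs. ones to ignore.
training = [
    ("wire transfer to offshore account", "review"),
    ("large cash wire offshore", "review"),
    ("monthly payroll deposit", "ignore"),
    ("office supplies purchase", "ignore"),
]
model = train(training)
print(classify("offshore wire transfer flagged", model))  # -> review
```

Once trained, the same `classify` call can be run over the full mass of data, so a person only reviews what the model surfaces.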

Rick Borden, Partner at Frankfurt Kurnit, had this to add to the panel discussion:

"The best thing out there right now, in my opinion, is the NIST AI risk management framework. They have a Version 2 draft that categorizes by harm to people, harm to organizations or enterprises, or harm to systems or ecosystems. They break it down further from there."
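One way to make the harm taxonomy Borden cites concrete is to map the ChatGPT risks discussed above onto those three categories. The assignments below are assumptions for illustration only, not guidance from NIST or from the panel.

```python
# Three harm categories Borden attributes to the NIST AI risk
# management framework: harm to people, to organizations/enterprises,
# and to systems/ecosystems.
risk_to_harms = {
    "phishing email generation": {"people", "organization"},
    "malware generation": {"organization", "ecosystem"},
    "malicious code generation": {"organization"},
}

def risks_affecting(category):
    """List the risks mapped to a given harm category."""
    return sorted(r for r, harms in risk_to_harms.items() if category in harms)

print(risks_affecting("organization"))
```

Breaking each risk down by who is harmed, rather than only how it occurs, mirrors the framework's further decomposition that Borden mentions.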

Roy Zur, Founder & CEO at ThriveDX Enterprise, had this to say about internal risk and insider threats:

"The internal threat can come even before the insider threat is malicious or has malicious intent. You can actually have people by mistake, not knowing (through ChatGPT and other AI tools) that what they are sharing are secrets, creating more damage to the company."

Catch the full presentation whenever you want, as it will be available on-demand for the next few months.

If you are in the Philadelphia region, the opening keynote on Day 1 of SecureWorld Philadelphia—taking place April 19-20 at Valley Forge Casino Resort Event Center—features Donna Ross, EVP & CISO at Radian Group, and Eric Robuck, Certified Ethical Hacker, speaking on "ChatGPT and Other AI Products: What Are the Implications for Cybersecurity?" There's still time to register to attend.
