By Cam Sivesind
Fri | Jun 16, 2023 | 11:15 AM PDT

The European Parliament approved the EU AI Act, setting up the first steps toward formal regulation of artificial intelligence in the West. The law proposes requiring generative AI systems, such as ChatGPT, to be reviewed before commercial release. It also seeks to ban real-time facial recognition.

"The EU is attempting to provide guardrails on a technology that is still not well understood but does present a lot of concerns from a legal perspective," said Jordan Fischer, cyber attorney and partner at Constangy, who recently moderated a panel discussion on "The Future of Privacy and Cyber: AI, Quantum and Mind Readers" at SecureWorld Chicago. "With the passage of this AI regulation in the EU, the EU is again signaling that it intends to lead in attempts to regulate technology and find some balance between innovation and protecting the users of that technology.”

The landmark vote by the European Parliament comes as global regulators are racing to get a handle on AI technology and limit some of the risks to society, including threats to job security and political integrity.

"I think this is actually a good move. Government entities should take a risk-based approach to AI," said Michael Gregg, CISO for the State of North Dakota. "While AI does have the ability to be extremely useful, it is just like any other technology in that it also has down sides. These issues should be addressed before widespread deployment."

Gregg will serve as a keynote speaker at SecureWorld Denver on September 19 and SecureWorld Dallas on October 26 on the topic of "Lessons from a CISO: Increasing Your Cybersecurity Footprint Despite Worn Soles."

According to a CNBC report: "During a critical Wednesday vote, the Parliament adopted the AI Act with 499 votes in favor, 28 against and 93 abstentions. The regulation is far from becoming law, but it is likely to be one of the first formal rules for the technology globally."

Mark Eggleston, CISO at CSC and Advisory Council member for SecureWorld Philadelphia, pointed to this article, calling it detailed and saying he generally agrees with it.

"On the positive, I believe the EU values individual privacy much more than the U.S., and in doing so benefits their individual freedoms," Eggleston said. "On the negative, this does run the risk of squashing innovation, and almost never is it a good idea to have legislators govern technology. I do believe AI should be monitored and ethically used, but that is best left to Chief Data Officers perhaps."

[RELATED: Tech Leaders Call for Pause on AI Development]

Here are additional comments from leaders at cybersecurity vendors.

Bob Janssen, Vice President of Engineering and Head of Innovation at Delinea:

"The draft AI Act represents a significant step in regulating AI technologies. It recognizes the need to address the potential risks and ethical concerns associated with AI systems. It can help protect patients and ensure that healthcare professionals have the necessary support in their decision-making processes and can contribute to safeguarding online platforms and enhancing trust in digital content.

The implications will depend on the implementation and enforcement of AI legislation. Balancing the regulation of AI technologies with fostering innovation and avoiding unnecessary limitations will be a key challenge.

Given the size and influence of the EU market, companies that develop and deploy AI technologies may need to comply with the legislation's provisions to continue operating within the EU. Like GDPR, this could result in a ripple effect, as businesses operating in other regions may also adopt similar practices to ensure consistency in their operations.

As governments worldwide grapple with the ethical and societal implications of AI, they should look to the EU's AI Act as a model or source of inspiration when crafting their own regulations."

Craig Jones, Vice President of Security Operations at Ontinue:

"This is a significant and ambitious step in a rapidly evolving technology landscape. The EU AI Act is pioneering in its scope, attempting to address a vast array of applications of artificial intelligence. It's a remarkable initiative that signals the maturation of AI as a technology of central societal and economic importance. The requirement for pre-release review of generative AI systems, including ChatGPT-like systems, will spark a debate around freedom of innovation and the necessity of oversight.

The EU AI Act, much like its predecessor GDPR, could indeed set global norms, given the transnational nature of technology companies and digital economies. While GDPR became a model for data privacy laws, the AI Act might become a template for AI governance worldwide, thereby elevating global standards for AI ethics and safety.

On the upside, the Act provides a regulatory safety net that seeks to ensure ethical and safe AI applications, which can instill more public trust in these technologies. It also raises the bar for AI transparency and accountability. The downside might be that it could temper the pace of AI innovation, making the EU less attractive for AI startups and entrepreneurs. The balance between transparency and protection of proprietary algorithms also poses a complex challenge."

Chris Vaughan, Vice President, Technical Account Management, at Tanium:

"Overall, this is a positive decision. AI is a powerful tool that needs legislating. Of course, there are great uses for the technology, but we have already seen numerous examples of unethical use—including horrific abuses of deepfake technology. There have also been incidents of dangerous AI related activity regarding privacy, fraud, and the manipulation of information.

AI is not something that should be legislated retroactively. Passing this draft creates a solid foundation for the future development of AI and the law around it. It signposts that one of the most influential governing bodies has recognized the risks in these developments and will not be ignorant to the threats.

This legislation isn't the perfect solution to abuse of AI. The AI Act will only cover AI activities within the European Union, so there's a strong possibility of AI havens developing where nefarious use of the technology isn't prohibited. The legislation focuses on aspects of AI technology that can harm individuals. If AI is being developed to be used in a defensive manner, innovation won't be stifled.

AI innovations may become more difficult. AI algorithms are based on data which must be sourced from somewhere. Previously, there was no enquiry as to the source of the data. With new legislation, innovators will have to declare their source and explain how the data was used to train their AI algorithm. This creates additional red tape for businesses but ultimately protects people. A slight delay in innovation is a worthy sacrifice for safety.

There are more pros than cons to the EU AI Act. It is a risk-based Act, meaning it has maneuverability. It is difficult to legislate technology that hasn't been used yet and is difficult to predict. At the very least, the EU has created a framework for legal progression. Overall, it is a beneficial Act.

I don't see many weaknesses in the legislation. Some groups addressed the limited approach to banning biometric data collection as a potential issue. However, I think that most would agree that, at worst, the Act is a good starting point. It acknowledges the areas of most concern, such as predictive policing and facial recognition, which is encouraging."

On June 22, SecureWorld will host a webcast with Abnormal Security on "ChatGPT Exposed: Protecting Your Organization Against the Dark Side of AI." Register here to watch it live or catch it on-demand if that better fits your schedule.

The Remote Sessions webcast features:

•  FC, Ethical Hacker, Author, Co-Founder of Cygenta
•  Dan Shiebler, Head of Machine Learning, Abnormal Security
•  Dixon Styres, IT SecOps Solution Architect, CrowdStrike
•  Moderator: Tom Bechtold, Digital Events Director, SecureWorld