Put guardrails around AI use to protect your org, but be open to changes

Artificial intelligence (AI) is a topic that’s currently on everyone’s minds. While in some industries there is concern it could replace workers, other industries have embraced it as a game-changer for streamlining processes, automating repetitive tasks, and saving time.

But security professionals should regard AI in the same way as any other significant technology development. In the right hands it has the potential to do immeasurable good, but there will always be someone who wants to use it to the detriment of others.

Generative AI tools such as ChatGPT are already being put to rudimentary nefarious use, such as helping scammers craft convincing phishing emails, but it’s the lesser-known uses that should concern CISOs.

When it comes to the technology their companies are using, CISOs need to balance risks with rewards, and it’s the same with AI. Whether it’s a developer looking for an AI algorithm that can help solve a coding problem, or a marketer who needs assistance creating content, a simple Google search will deliver links to multiple AI-enabled tools that could give them a solution in moments. If we impose a blanket ban on employees using these tools, they will just find ways to access them covertly, and that introduces greater risk.

The issue for CISOs is how they can endorse the use of AI without making the company, its employees, customers, and other stakeholders vulnerable. If we start by assuming AI will be used, we can then construct guardrails to mitigate risk.

LLMs are a new attack surface

One way of approaching this is to think about AI as a new attack surface.

Among the most common and accessible AI tools are large language models (LLMs) such as ChatGPT from OpenAI, LLaMA from Meta and Google’s PaLM 2. In the wrong hands, LLMs can deliver bad advice to users, encourage them to expose sensitive information, generate vulnerable code or leak passwords.

While a seasoned CISO might recognize that the output from ChatGPT in response to a simple security question is malicious, it’s less likely that another member of staff will have the same antenna for risk.

Without guardrails in place, any employee could inadvertently be stealing another company’s or person’s intellectual property (IP), or delivering their own company’s IP into an adversary’s hands. Given that many LLM services retain user input as training data, this could also contravene data privacy regulations, including GDPR.
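
As a rough illustration, here is a minimal Python sketch of an input-side guardrail that scrubs obviously sensitive strings from a prompt before it leaves the organization for an external LLM. The SENSITIVE_PATTERNS table and the redact_prompt() helper are illustrative assumptions, not a complete data loss prevention control.

import re

# Hypothetical patterns for data that should never reach a third-party model.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return the prompt with sensitive matches masked, plus the labels found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Summarise this ticket from jane.doe@example.com, api_key=sk-123abc"
    clean, findings = redact_prompt(raw)
    print(clean)     # masked prompt, safer to send onwards
    print(findings)  # labels to log for the security team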

Developers are also using LLMs to help them write code. If that code is ingested as training data, it can resurface in response to another user’s prompt. The original developer has little control over this, and because the LLM helped create the code, proving ownership of it becomes highly unlikely. This can be partly mitigated by using a GenAI license that helps enterprises guard against their code being used as training input, but even then, imposing a “trust but verify” approach is a good idea.
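
To make the “trust but verify” idea concrete, the following minimal Python sketch flags risky constructs in AI-suggested code before it is committed. The RISKY_PATTERNS list and the review_snippet() helper are hypothetical placeholders and would sit alongside, not replace, existing linters, static analysis and human code review.

import re

# Illustrative red flags to look for in AI-suggested code before it is merged.
RISKY_PATTERNS = [
    (r"\beval\(", "use of eval()"),
    (r"\bexec\(", "use of exec()"),
    (r"subprocess\..*shell=True", "shell=True in subprocess call"),
    (r"(?i)(password|secret|api_key)\s*=\s*['\"]", "hard-coded credential"),
    (r"verify\s*=\s*False", "TLS verification disabled"),
]

def review_snippet(code: str) -> list[str]:
    """Return human-readable findings for a piece of AI-suggested code."""
    return [reason for pattern, reason in RISKY_PATTERNS
            if re.search(pattern, code)]

if __name__ == "__main__":
    # A code suggestion copied from an LLM, held here as a plain string.
    suggestion = "requests.get(url, verify=False)"
    for finding in review_snippet(suggestion):
        print(f"needs human review: {finding}")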

Taking steps to reduce risks

These are just some of the security risks that enterprises face from AI, but they can be mitigated with the right approach, allowing the advantages of AI to be fully realized.

To that end, one of the main priorities must be developing collaborative policies that embrace every department and every level of the organization. While the security team can provide guidance about certain risks – the dangers, for example, of downloading consumer-focused LLMs onto their personal laptops to carry out company business – feedback from employees on how they can benefit from AI tools will help all parties to agree on ground rules.
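
As one hedged illustration of what those agreed ground rules could look like once captured in machine-readable form, the short Python snippet below models a simple AI usage policy. The tool names, data classes and is_permitted() helper are hypothetical placeholders for whatever the security team and business units actually agree.

# Hypothetical, agreed policy captured somewhere auditable and easy to update.
AI_USAGE_POLICY = {
    "approved_tools": {"internal-llm", "vendor-copilot"},
    "blocked_data_classes": {"customer-pii", "source-code", "financials"},
}

def is_permitted(tool: str, data_class: str) -> bool:
    """Check a proposed AI use against the current policy."""
    return (tool in AI_USAGE_POLICY["approved_tools"]
            and data_class not in AI_USAGE_POLICY["blocked_data_classes"])

print(is_permitted("internal-llm", "marketing-copy"))  # True
print(is_permitted("vendor-copilot", "customer-pii"))  # False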

Security teams have much greater depth of knowledge as to the threats these tools pose and can pass this insight on in the form of a training program or workshops, to raise awareness. Providing real-life examples, such as how a failure to validate outputs from AI-generated content led to legal action, will resonate. Where employees utilize these learnings to good effect, their successes should be championed and highlighted internally.

Given that AI applications will proliferate very quickly, creating a policy with built-in flexibility is essential.

A positive security approach focused on assisting rather than blocking employees should already be standard, but when it comes to AI, employees should be able to submit requests to use tools on a case-by-case basis, with the security policy modified as appropriate each time.

The latest technological revolution

The guardrails that CISOs set in agreement with the broader organization will undoubtedly change as AI begins to play a bigger role in enterprise life. We are currently working in relatively unknown territory, but regulations are being considered by governments around the world in consultation with security professionals.

While we can expect a period of flux, it’s useful to remember that this is normal with the introduction of any major technological advancement.

When the internal combustion engine first emerged in the 19th century, people had little idea how to apply it or how to use it safely, but it has since revolutionized society.

Within the digital computing sphere, we have taken major leaps from the days of mainframes and the development of the personal computer to the “smart” world we live in now. With each innovation comes both opportunity and risk, but we are also better positioned than ever to assess the risks and take advantage of the opportunities that AI affords.
