How to avoid shadow AI in your SOC

Samsung’s recent discovery that employees had uploaded sensitive code to ChatGPT should serve as a reminder for security leaders to tread carefully when it comes to integrating new artificial intelligence tools throughout their organizations.

Shadow AI

Employees are using the new family of generative AI tools like ChatGPT whether they're allowed to or not. Research shows employees are going rogue for two reasons: they see how AI can make them more effective, and they're dissatisfied with the weak or nonexistent adoption of AI in their organizations' security operations centers.

Employees often misuse these tools, exposing their organizations to short-term security risks and long-term business complications. Not only can attackers exploit security gaps in AI tools, they can also tamper with the training data and distort the accuracy of the models the business builds on top of it.

To turn AI into a cybersecurity ally, organizations need to update – or perhaps create – action plans to handle the adoption of new, cutting-edge AI tools. Here are a few tactics they should employ.

Update technology use policies to cover new AI tools

A top priority is to revisit all the policies regarding technology use in a cybersecurity context. Treat ChatGPT like all the tools employees adopted in earlier rounds of shadow IT, and update policies to cover the new wave of AI tools. ChatGPT is a SaaS application, so existing controls that cover other SaaS applications should be extended to it, covering everything from egress filtering to content filtering and data governance.
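To make the idea concrete, here is a minimal sketch of how a secure web gateway or forward proxy might enforce such a policy: allow only approved AI domains and block prompts that appear to contain sensitive content. The domain list, patterns and the is_request_allowed helper are hypothetical illustrations, not a reference to any specific product.

```python
import re
from urllib.parse import urlparse

# Hypothetical policy: AI SaaS domains the organization has approved.
APPROVED_AI_DOMAINS = {"chat.openai.com", "api.openai.com"}

# Simple content-filtering patterns for obviously sensitive material.
SENSITIVE_PATTERNS = [
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),      # private keys
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),                     # card-number-like digit runs
    re.compile(r"(?i)\b(api[_-]?key|password|secret)\s*[:=]"),  # credential markers
]

def is_request_allowed(url: str, body: str) -> bool:
    """Return True if an outbound request to an AI service passes policy checks."""
    host = urlparse(url).hostname or ""
    if host not in APPROVED_AI_DOMAINS:
        return False  # egress filtering: unapproved AI SaaS endpoint
    if any(p.search(body) for p in SENSITIVE_PATTERNS):
        return False  # content filtering: prompt appears to contain sensitive data
    return True

if __name__ == "__main__":
    print(is_request_allowed("https://chat.openai.com/api", "Summarize this press release"))  # True
    print(is_request_allowed("https://chat.openai.com/api", "password = hunter2"))            # False
```

In practice these checks would live in an existing secure web gateway or DLP product rather than custom code, but the policy logic is the same.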

Spell out clearly which AI tools are acceptable to use and which ones aren't. You don't necessarily have to ban the use of new AI tools, as many organizations have, or put temporary restrictions on their use, as Samsung did after the code uploads came to light – but develop a policy and enforce it.

Many employees don't understand how valuable the data they're putting into these services is, how it may be used by others, or that certain data is confidential. Once policies are updated, it's important to re-educate staff about data handling and classification rules.

Understand why your security teams are leveraging these tools

Employees are trying out ChatGPT and other unauthorized generative AI tools for security purposes because they’re dissatisfied with the current AI options at their disposal. Finding out why would arm security leaders with the information they need to construct a more effective, strategic AI plan.

Poll employees regularly about their preferences regarding the use of AI and their thoughts about how it could do a better job securing corporate assets inside the SOC. Are they frustrated with the number of mundane, repeatable tasks they must handle on a day-to-day basis? Are they annoyed with the way existing AI tools work? Are they looking for a better user interface, or more extensive capabilities?

Another source of frustration could be the lack of automation that's been woven into security operations. Advanced SOCs deploy security orchestration, automation and response (SOAR) capabilities that automate functions like managing cloud security or detecting compromised user accounts. Find out if employees are trying out new AI tools to streamline functions that are currently handled manually. If they can create a list of use cases that could save time or improve accuracy, it can influence security leaders' decisions about whether to build or buy certain AI tools in the future.
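As an example of the kind of manual work a SOAR playbook replaces, the sketch below flags accounts that sign in from an implausible number of countries within a short window and queues them for disabling. The event format and the disable_account placeholder are assumptions for illustration; a real playbook would call the organization's identity provider API.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def find_suspect_accounts(events, window=timedelta(hours=1), max_countries=2):
    """Flag users who sign in from more than `max_countries` within `window`.

    `events` is an iterable of dicts like:
    {"user": "alice", "country": "US", "time": datetime(...)}
    """
    by_user = defaultdict(list)
    for e in events:
        by_user[e["user"]].append(e)

    suspects = set()
    for user, logins in by_user.items():
        logins.sort(key=lambda e: e["time"])
        for i, first in enumerate(logins):
            countries = {e["country"] for e in logins[i:]
                         if e["time"] - first["time"] <= window}
            if len(countries) > max_countries:
                suspects.add(user)
                break
    return suspects

def disable_account(user: str) -> None:
    """Placeholder: call your identity provider's API here."""
    print(f"[SOAR] disabling account: {user}")

if __name__ == "__main__":
    now = datetime.utcnow()
    events = [
        {"user": "alice", "country": "US", "time": now},
        {"user": "alice", "country": "BR", "time": now + timedelta(minutes=10)},
        {"user": "alice", "country": "CN", "time": now + timedelta(minutes=20)},
        {"user": "bob",   "country": "US", "time": now},
    ]
    for user in find_suspect_accounts(events):
        disable_account(user)
```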

Make sure the staff is prepared to use AI in the SOC

Since generative AI tools are so new, organizations haven’t had time to develop extensive training on each offering that hits the market. Users are essentially learning to operate them on the fly. That puts a premium on making sure that security staff are skilled enough to handle the use of new products that aren’t user friendly and aren’t yet totally integrated into their organization’s policies.

Staff should be able to discern when to use generative AI and when not to. AI-powered assistants can automate basic tasks, but multi-step processes can confuse the technology and produce faulty results. Staff should also have the diligence to hunt for errors and inconsistencies in the output of enterprise generative AI tools.

Create a “sandbox” to encourage tinkering and knowledge sharing

For risk-averse organizations, banning the use of generative AI tools can be a prudent step. But it also may be a short-sighted move for teams that are looking to find, cultivate and retain top talent. The security pros who are willing to try out new AI tools are the natural tinkerers you want working on future solutions.

Give these risk-takers some latitude to experiment and play. The field is evolving very quickly, and organizations are going to need to be nimble to take best advantage of the technologies that are rolling out. There's still a gap between a cool idea and one that generates an ROI. Allowing tinkerers to try out the tools in a "sandbox" that's shielded from important corporate data can prove beneficial. Then set up a knowledge-sharing pipeline – lunch-and-learns, a Teams channel – to ensure that decision makers can apply the right technologies for the right use cases.
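One simple way to keep important corporate data out of that sandbox is to run every prompt through a redaction step before it leaves the environment. The sketch below uses a few regex rules as a stand-in; a real deployment would lean on a proper DLP or tokenization service, and the rules shown are assumptions for illustration only.

```python
import re

# Hypothetical redaction rules for a generative AI sandbox.
REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),                    # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),                        # US SSN-like numbers
    (re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
]

def redact(prompt: str) -> str:
    """Strip obviously sensitive values before a prompt leaves the sandbox."""
    for pattern, replacement in REDACTION_RULES:
        prompt = pattern.sub(replacement, prompt)
    return prompt

if __name__ == "__main__":
    print(redact("Summarize: contact jane.doe@example.com, api_key=abc123"))
    # -> "Summarize: contact <EMAIL>, api_key=<REDACTED>"
```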

By taking these steps, organizations can help security teams generate value from AI while also mitigating risk and keeping data secure.
