CISOs pursuing AI readiness should start by updating the org’s email security policy

Over the past few years, traditional phishing messages — with their pervasive linguistic errors, thinly veiled malicious payloads, and often outlandish pretexts — have been on the decline. Easily detected by most of today’s standard email security tools (and thoroughly unconvincing to most recipients), this prototypical form of phishing may soon be a thing of the past.

However, as this more traditional form of phishing falls by the wayside, a new, much more troubling trend has emerged. Empowered by the rapidly growing ecosystem of GenAI tools, threat actors are turning to more advanced, social engineering-driven attacks, such as spear-phishing, VIP impersonation, and business email compromise (BEC).

To combat this trend, CISOs must fundamentally rethink their approach to cybersecurity — and that starts with enacting the right policies.

New policy provides foundational layer of defense

Here are some specific ideas and practical steps every CISO should consider in adapting their BEC policies to be fully prepared for today’s cyber challenges:

Adopt segregation of duties (SoD) — It’s imperative that organizations adopt SoD processes, which ensure no single employee can complete a sensitive transaction on their own, for handling sensitive data and assets. For example, any change to the bank account information used to pay invoices or payroll should require authorization from at least two individuals (ideally three). This way, even if one employee falls for a social engineering attack asking them to redirect a payment, additional stakeholders in a clear chain of approval can act as a backstop and block the transfer.
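
To make the control concrete, here is a minimal sketch in Python of what such a dual-authorization check might look like. The function, field names, and threshold are hypothetical illustrations rather than any specific product’s API; the point is simply that a payment-detail change is refused unless enough approvers independent of the requester have signed off.

# Minimal illustrative sketch of a dual-authorization (SoD) check for bank-detail changes.
# All names and the threshold are hypothetical; adapt to your own payment or ERP workflow.

REQUIRED_APPROVALS = 2  # at least two approvers, ideally three

def can_apply_bank_change(requested_by: str, approvers: set[str]) -> bool:
    """Allow a change to payee bank details only if enough distinct approvers,
    none of whom is the requester, have signed off."""
    independent = {a for a in approvers if a != requested_by}
    return len(independent) >= REQUIRED_APPROVALS

# A single employee tricked by a BEC email cannot redirect a payment alone:
assert not can_apply_bank_change("alice@corp.example", {"alice@corp.example"})
assert can_apply_bank_change("alice@corp.example", {"bob@corp.example", "carol@corp.example"})

In practice this control typically lives in the ERP, banking, or workflow platform rather than in custom code, but the policy it enforces is the same: no single employee can redirect funds on their own.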

Conduct regular security training — An ounce of prevention is worth a pound of cure. Conduct regular security training, especially with staff members who handle sensitive data and with executives, who are frequent BEC targets. This should include live instruction (with optional testing), security awareness training (SAT) videos and testing, and phishing simulation testing (PST) that uses current, real-world attacks as examples.

This new breed of social engineering-driven attacks is markedly better at evading traditional security solutions. As a result, far more malicious emails will find their way into employees’ inboxes. By investing in training and testing, you can turn your employees from liabilities into assets: the last line of defense against these threats. And make sure employees are truly challenged.

While low click-through rates might look good on paper, the purpose of training and testing isn’t simply to satisfy regulators or insurance companies. The goal is to ensure your employees — and especially the most vulnerable in HR, finance, and leadership — are aware of and capable of thwarting the latest, most advanced threats. Finally, gamify the cyber-aware culture by rewarding the employee with the most reported emails or the fastest report; this encourages everyone to contribute to the organization’s overall security posture while keeping reporting engaging and fun.

Always report — It’s extremely important that organizations adopt policies in which all employees are encouraged to report all potentially malicious emails. When employees simply delete or ignore a suspected attack, they deny their SOC team and colleagues knowledge of what might be a wide-reaching campaign — or, at the very least, a campaign with multiple targets. By always reporting, you can ensure every BEC attempt becomes a learning opportunity for the entire organization (see the correlation sketch below).

Finally, it is essential that reporting policies clearly express a preference for “erring on the side of caution.” Employees can be sensitive to the idea of “wasting their co-workers’ time” and may be embarrassed to report a false positive. Make it abundantly clear that any degree of suspicion or doubt warrants a report.
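
As a rough illustration of why universal reporting matters, the sketch below (Python, with hypothetical report fields) shows how a SOC might correlate user reports: once two or more employees flag messages from the same lookalike sender with the same subject, individual “nuisance” reports become evidence of a campaign.

# Illustrative sketch only: correlating user-reported emails to surface multi-target campaigns.
# The report fields ("from", "subject", "reported_by") are hypothetical placeholders.
from collections import defaultdict

def find_campaigns(reports: list[dict], min_targets: int = 2) -> dict:
    """Group reported emails by sender domain and normalized subject; any group
    reported by multiple employees likely indicates a coordinated BEC campaign."""
    groups = defaultdict(set)
    for r in reports:
        sender_domain = r["from"].split("@")[-1].lower()
        key = (sender_domain, r["subject"].strip().lower())
        groups[key].add(r["reported_by"])
    return {key: targets for key, targets in groups.items() if len(targets) >= min_targets}

reports = [
    {"from": "ceo@lookalike-corp.example", "subject": "Urgent wire transfer", "reported_by": "alice"},
    {"from": "ceo@lookalike-corp.example", "subject": "Urgent wire transfer", "reported_by": "bob"},
]
print(find_campaigns(reports))  # flags the lookalike domain hitting two employees

Many reporting add-ins and SOC tools perform this kind of correlation automatically; the takeaway is that each individual report gains value as more employees report.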

Limit the dissemination of organizational details — At the heart of every successful social engineering campaign is knowledge of the target individual and organization. Knowing which employees work in which departments, who has the authority to do certain things (such as issue payments or make changes to payroll), and what the company’s overall organizational structure looks like is what allows threat actors to make their social engineering attempts effective and convincing.

Unfortunately, making detailed information about companies’ organizational structures publicly available is becoming increasingly commonplace, with sites like OrgChart and LinkedIn giving bad actors access to all manner of information that can be used to perfect their attacks. While not all organizational and operational information can be kept private, it’s important that organizations adopt policies that grant access to such information on a “need-to-know” basis.

Furthermore, simple policy changes — such as keeping the organization’s name out of public job listings — can go a long way toward preventing compromise through social engineering.

Reevaluate legacy security systems — In the brief period that generative AI has been available to the public, it has already reshaped the cybersecurity landscape. It’s therefore imperative that organizations adopt policies ensuring their defensive security capabilities are up to date and “AI ready”.

One of the most effective ways to ensure continued AI readiness is to shift from static to adaptive security systems. Just as AI-enabled offensive technologies are quick to evolve, so too are AI-enabled defensive tools. This “fight fire with fire” approach will soon become a prerequisite for keeping pace as we enter the era of AI-based cybersecurity.

Today’s dynamic threat landscape demands updated policies

Generative AI has forever changed the field of cybersecurity. In less than two years, we’ve seen the threat landscape reshaped by these technologies. As these tools continue to expand in both reach and capabilities, we may soon find ourselves in a landscape in which the only real constant is change.

Effective, up-to-date policies are foundational to an organization’s cybersecurity strategy, and powerful determinants of overall security posture. While the above list is by no means exhaustive, it should at least give every CISO a good starting point from which to update and refine their own policy documents — and a much-needed head start in the race for AI readiness.
