Rakuten Viber CISO/CTO on balancing encryption, abuse prevention, and platform resilience

In this Help Net Security interview, Liad Shnell, CISO and CTO at Rakuten Viber, discusses how messaging platforms have become critical infrastructure during crises and conflicts. He explains how this reality shapes cybersecurity priorities, from encryption and abuse prevention to incident response and user protection.

Shnell also outlines how Viber assesses and mitigates risks that blend technical threats with human behavior.


Messaging platforms increasingly function as de facto critical infrastructure during crises and conflicts. How does that reality influence the way you prioritize security investments and risk scenarios at Viber?

When wars break out, disasters strike, or governments shut systems down, messaging apps don’t just support critical infrastructure. They become it. And with hundreds of millions of users relying on Rakuten Viber, security decisions move from theoretical risk to real-world responsibility. People use Viber to check if loved ones are alive, receive alerts, coordinate aid, report abuse, and sometimes to survive the next hour. That reality forces us to treat availability, integrity, and abuse resilience as life-impacting metrics, not technical abstractions.

This is why we prioritise availability, integrity, and abuse resilience alongside confidentiality as core security objectives. We treat account takeover, impersonation, and large-scale social engineering as high-impact security incidents with human consequences. We therefore embed security controls directly into product flows and infrastructure, rather than adding them after the fact.

Automation and real-time detection are essential here, because manual processes simply do not scale during crises. When users rely on your platform, security stops being a cost centre; it becomes part of your social contract.

End-to-end encryption is often discussed as a binary “on or off” capability. In practice, where are the hardest trade-offs between strong encryption, abuse prevention, and platform resilience?

At Rakuten Viber, end-to-end encryption has been enabled by default since version 6.0, with no user action required. This immediately raises a hard question: How do you combat child exploitation, terrorism coordination, or large-scale fraud when message content is intentionally inaccessible? Strong encryption does not eliminate risk. Account takeover, impersonation, and coordinated spam remain high-impact threats even when the content is protected. Another tension is platform resilience: encrypted systems must still support recovery, device migration, and updates without breaking user trust.

Key management and backup flows are especially sensitive points where usability and security collide. In crisis scenarios, attackers exploit friction and confusion, while users expect instant reliability and clarity. Addressing this requires multiple layers of protection built on behavioral signals, metadata patterns, user reports, and platform-level context. We invest heavily in prevention: rate limits, identity signals, and rapid response capabilities that operate entirely outside the message payload.

AI increasingly fuels these protective layers, allowing us to identify malicious behaviour faster and at greater scale without compromising encryption or privacy. End-to-end encryption is not a destination. It is a foundation that demands constant risk balancing. And our responsibility is to ensure the platform remains private, safe, and resilient at global scale.
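To make that concrete, here is a minimal sketch of how payload-blind controls of this kind can fit together: a per-sender rate limiter plus a risk score built only from metadata. This is an illustration under assumptions, not Viber's implementation; the Sender fields, thresholds, and decision labels are all hypothetical.

```python
import time
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical sketch (not Viber's actual system): abuse controls that
# operate entirely on metadata, never on message content. All names and
# thresholds here are invented for illustration.

@dataclass
class Sender:
    id: str
    account_age_days: int
    unique_recipients_last_hour: int
    user_reports: int

class TokenBucket:
    """Per-sender rate limiter driven purely by send events."""
    def __init__(self, capacity=20, refill_per_sec=0.5):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = defaultdict(TokenBucket)

def risk_score(s: Sender) -> float:
    """Identity and behaviour signals only; the payload stays opaque."""
    score = 0.0
    if s.account_age_days < 1:
        score += 0.4                      # brand-new accounts are riskier
    if s.unique_recipients_last_hour > 100:
        score += 0.4                      # fan-out typical of spam campaigns
    score += min(s.user_reports * 0.1, 0.2)
    return score

def admit_message(s: Sender) -> str:
    """Throttle, challenge, or deliver, without reading the message."""
    if not buckets[s.id].allow():
        return "throttle"
    if risk_score(s) >= 0.6:
        return "challenge"                # e.g., step-up verification
    return "deliver"
```

The design point is that every input to the decision is metadata the platform already observes, so abuse prevention never requires weakening the encryption of the payload itself.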

Messaging apps are increasingly the front line for scams, deepfake-enabled fraud, and social engineering. How do you distinguish between a cybersecurity problem and a human-behavior problem when designing defenses?

At Rakuten Viber, we don’t treat scams and social engineering as purely technical or purely human problems. They sit at the intersection of both: technical attacks usually exploit systems, while social engineering exploits people. But modern attacks deliberately blend the two. Deepfake-enabled fraud, impersonation, and scam campaigns often succeed not because of broken cyber defences, but because they manipulate user trust and urgency. That means the challenge is not choosing between technology and user behaviour, but designing defences that address both simultaneously.

To do this efficiently, we focus on building systems that guide users towards safer behaviour without making security a burden. This includes clear indicators when someone you don’t know contacts you, and added context when an unfamiliar account invites you to a group. Users can also limit who is allowed to add them to groups, reducing exposure by default rather than through constant vigilance.

These controls are designed to introduce the right friction at the right moment, not to overwhelm users with warnings. AI helps us adapt these protections dynamically as attacker tactics evolve, without inspecting message content. Good security design acknowledges human behaviour rather than expecting perfect judgement. The goal is a platform where technology absorbs complexity so users don’t have to.
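As a hedged illustration of how such invite-time context could be decided, consider the sketch below. The policy names, fields, and thresholds are invented for the example and do not describe Viber's actual rules.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical sketch of "the right friction at the right moment".
# Policy names, fields, and thresholds are invented, not Viber's rules.

class GroupAddPolicy(Enum):
    ANYONE = auto()
    CONTACTS_ONLY = auto()
    NOBODY = auto()

@dataclass
class Inviter:
    is_contact: bool
    mutual_contacts: int
    account_age_days: int

def group_invite_treatment(policy: GroupAddPolicy, inviter: Inviter) -> str:
    """Decide how the client should present a group invite."""
    if policy is GroupAddPolicy.NOBODY:
        return "block"                    # exposure reduced by default
    if policy is GroupAddPolicy.CONTACTS_ONLY and not inviter.is_contact:
        return "block"
    if inviter.is_contact:
        return "allow"                    # no friction for known contacts
    # Unknown inviter: add context rather than a hard block.
    if inviter.mutual_contacts == 0 and inviter.account_age_days < 7:
        return "warn_and_confirm"         # strongest contextual warning
    return "show_context"                 # e.g., "not in your contacts"

# Example: a days-old stranger account inviting a user with the default policy
print(group_invite_treatment(GroupAddPolicy.ANYONE,
                             Inviter(False, 0, 3)))   # -> "warn_and_confirm"
```

The known-contact path stays frictionless, while the pattern most associated with scam campaigns (no mutual contacts, a days-old account) draws the strongest contextual warning.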

How do you stress-test incident response plans for scenarios involving disinformation, impersonation, or coordinated influence operations rather than data theft?

We approach incident response for influence operations very differently from traditional data breaches. Classic incident response models focus on ‘contain, investigate, and remediate’. But influence operations require a fundamentally different playbook.

Relevant scenarios include account compromises spreading coordinated narratives, deepfake media going viral in group chats, and impersonation of official channels during emergencies. In these cases, the primary risk is not data loss, but cascading trust failures and real-world harm. Effective stress-testing needs to focus on response velocity and decision-making under uncertainty, not just technical containment.

Also, encrypted platforms like ours must assume limited visibility by design and test detection without inspecting message content. Behavioral signals, network patterns, and velocity metrics then become central inputs. AI plays a key role in identifying coordinated behaviour at scale and distinguishing organic viral spread from artificial amplification. Automation should act as the first responder, with humans focused on judgement, escalation, and communication. This model is critical for limiting blast radius and preserving a safe, trustworthy user experience at scale.
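A rough sketch of what such content-blind coordination detection might look like follows. Share events carry only a content fingerprint, a timestamp, and the sharing account's age, and the heuristic thresholds are invented for illustration.

```python
from collections import defaultdict

# Hypothetical sketch: separating organic virality from coordinated
# amplification using metadata alone. Each share event carries only
# fingerprint, timestamp, and account_age_days; no message content.

def coordination_flags(events, min_velocity=1000.0, min_young_share=0.5):
    """Return content fingerprints whose spread looks coordinated:
    a sudden burst of shares dominated by very new accounts."""
    by_fp = defaultdict(list)
    for e in events:
        by_fp[e["fingerprint"]].append(e)

    flagged = []
    for fp, evs in by_fp.items():
        evs.sort(key=lambda e: e["ts"])
        span = max(evs[-1]["ts"] - evs[0]["ts"], 1.0)     # seconds
        velocity = len(evs) / span * 300                  # shares per 5 min
        young = sum(e["account_age_days"] < 7 for e in evs) / len(evs)
        # Organic spread: steadier velocity, mostly established accounts.
        # Artificial amplification: burst velocity plus new-account clustering.
        if velocity >= min_velocity and young >= min_young_share:
            flagged.append({"fingerprint": fp,
                            "velocity_per_5min": round(velocity, 1),
                            "new_account_share": round(young, 2)})
    return flagged
```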

What metrics actually matter when assessing the security of a global messaging platform, and which commonly cited metrics do you consider misleading?

For us, the most important security metrics are the ones that reflect user harm, not internal activity. We therefore look closely at blast radius: How many users were exposed before mitigation kicked in? Account takeover rates, successful impersonation attempts, and repeat offender behaviour are far more telling than raw attack volume. And abuse resilience metrics, such as how quickly scams, spam, or coordinated campaigns lose their effectiveness, are critical at global scale.

False positive rates also matter, because overblocking erodes trust just as much as underblocking enables harm. Another key signal is recovery friction: how safely and quickly users can regain access without opening new attack paths.
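As a rough illustration, outcome metrics like these could be computed from incident records along the following lines; every field name is hypothetical.

```python
from statistics import median

# Hypothetical sketch of outcome-focused metrics computed from incident
# records. Every field name below is invented for illustration.

def blast_radius(incident) -> int:
    """Users exposed before mitigation took effect."""
    return sum(1 for u in incident["exposed_users"]
               if u["exposed_at"] < incident["mitigated_at"])

def false_positive_rate(decisions) -> float:
    """Share of blocking decisions later overturned on user appeal:
    overblocking erodes trust as surely as underblocking enables harm."""
    blocked = [d for d in decisions if d["action"] == "block"]
    if not blocked:
        return 0.0
    return sum(d["appeal_upheld"] for d in blocked) / len(blocked)

def median_time_to_ineffective(campaigns) -> float:
    """Median seconds from detecting a scam campaign until it was
    neutralized, i.e., how quickly abuse loses its effectiveness."""
    return median(c["neutralized_at"] - c["detected_at"] for c in campaigns)
```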

On the misleading side, “total blocked messages” or “total attacks stopped” are often vanity metrics. High numbers there may simply mean attackers are probing more, not that users are safer. Compliance checklists and audit pass rates are necessary, but they say little about real-world resilience.

Good security metrics measure outcomes for users, not activity inside dashboards. At scale, security success is defined by reduced harm, preserved trust, and speed under pressure, not by perfect-looking reports.
