The messy data trails of telehealth are becoming a security nightmare
In this Help Net Security interview, Scott Bachand, CIO/CISO at Ro, discusses how telehealth reshapes the flow of patient data and what that means for security. He explains why organizations must strengthen data classification and visibility as systems and vendors multiply. He also outlines how regulations and new technologies are driving a more adaptive approach to protecting patient information.

Telehealth creates a constant flow of sensitive data across cloud, mobile, and third-party platforms. What are the weak links in that data chain that organizations still underestimate?
The biggest issue centers on the lack of universal, exacting data classification. Without it, information moves through cloud, mobile, and partner systems without consistent protection. After all, you cannot protect what you cannot identify. Not all data is the same, and with limited resources, leaders must prioritize protection of the most important data.
We must also dispel the notion that a single data loss prevention (DLP) solution can protect the entirety of our data landscape. There will never be a one-size-fits-all DLP tool that works across every platform, so the focus has to be on accurate classification and on validating that the controls for each first-party application and SaaS partner are effective.
Even mature organizations can underestimate how complex the data ecosystem has become. They must actively, and continuously, hunt for and locate data within their ecosystem. Each application, partner, repository, and connection point introduces risk, especially when data classification is inconsistent. A modern approach requires visibility and validation at every step of the data chain. The goal is not just compliance, but continual assurance that patient data is secure, no matter where it travels.
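To make the "classify first, then validate controls" idea concrete, here is a minimal Python sketch of sensitivity tagging; the patterns, labels, and field names are illustrative, not a production classifier or any specific vendor's approach.

```python
import re

# Hypothetical PHI indicators; a real program would use vetted detectors per data type.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[-: ]?\d{6,10}\b", re.IGNORECASE),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "date_of_birth": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
}

def classify_record(record: dict) -> dict:
    """Tag a record with the PHI indicators found in its values."""
    hits = set()
    for value in record.values():
        text = str(value)
        for label, pattern in PHI_PATTERNS.items():
            if pattern.search(text):
                hits.add(label)
    sensitivity = "restricted" if hits else "internal"
    return {"sensitivity": sensitivity, "indicators": sorted(hits)}

if __name__ == "__main__":
    sample = {"name": "Jane Doe", "contact": "jane@example.com", "note": "DOB 1987-03-14"}
    print(classify_record(sample))
    # -> {'sensitivity': 'restricted', 'indicators': ['date_of_birth', 'email']}
```

Downstream DLP checks in each platform can then key off the label rather than guessing at the content.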
HIPAA remains a cornerstone, but telehealth often involves non-traditional communication channels. Where do you see regulatory frameworks lagging behind technological realities?
Telehealth depends on modern tools that HIPAA never anticipated, like mobile apps, cloud services, and AI. This creates a growing gray area where data might not meet the traditionally understood definition of protected health information (PHI) but is still sensitive.
The goal should always be to protect privacy while still improving patient access and outcomes. As telehealth becomes more mainstream, regulators should explore a more flexible, principle-driven model that encourages innovation while maintaining trust.
How are organizations preparing for potential new U.S. or global privacy mandates that may specifically address remote healthcare delivery?
Forward-leaning organizations are working to gain a deep understanding of their data. To operate in a privacy-forward world, organizations need to know what data they have, where it resides, where it moves, and who touches it. From there, they can adjust workflows as new mandates emerge. Preparation starts with visibility and rigorous data classification. Leading telehealth organizations map their data flows in detail, identifying every system, vendor, and endpoint that touches patient information. This enables proactive governance instead of reactive compliance.
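As a rough illustration of what such a data-flow map can look like once it is captured as data, the following Python sketch (with hypothetical systems and vendors) flags PHI flows that lack an accountable owner or a validated control.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataFlow:
    source: str
    destination: str
    data_category: str        # e.g. "phi", "billing", "telemetry"
    owner: Optional[str]      # accountable team, if assigned
    control_validated: bool   # has the protecting control been tested?

# Hypothetical inventory; in practice this would be generated from discovery tooling.
FLOWS = [
    DataFlow("patient-app", "scheduling-saas", "phi", "platform-eng", True),
    DataFlow("scheduling-saas", "analytics-vendor", "phi", None, False),
    DataFlow("patient-app", "crash-reporting", "telemetry", "mobile-eng", True),
]

def governance_gaps(flows):
    """Return PHI flows that lack an owner or a validated control."""
    return [f for f in flows
            if f.data_category == "phi" and (f.owner is None or not f.control_validated)]

for gap in governance_gaps(FLOWS):
    print(f"Review needed: {gap.source} -> {gap.destination}")
```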
Equally important is cross-functional collaboration: teams must work together to ensure privacy expectations are embedded into daily workflows. Organizations that take this approach can meet new mandates with confidence and agility.
Do you think there is a realistic way to standardize security expectations across telehealth vendors, or is fragmentation inevitable?
Innovation drives better patient care, but it naturally creates a fragmented landscape where technology moves faster than regulation. Fortunately, standards like HL7 and FHIR already give health data a common structure, providing the foundation needed to identify and track sensitive assets. Vendors should complement this by adopting the Open Cybersecurity Schema Framework (OCSF) to unify threat telemetry and NIST SP 800-53 to enforce rigorous control baselines and superior hygiene.
This combination ensures that as data moves between different platforms, the security team retains the visibility and governance capabilities required to protect it. A standards-based approach transforms security from a series of isolated patches into a cohesive, interoperable defense that can actually be validated.
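For a concrete sense of how structured standards help, the sketch below takes a minimal FHIR Patient resource and normalizes an access to it into a single flat, OCSF-style record; the event field names are illustrative, not an official OCSF class mapping.

```python
from datetime import datetime, timezone

# A minimal FHIR Patient resource (structure per the FHIR specification).
patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1987-03-14",
    "telecom": [{"system": "email", "value": "jane@example.com"}],
}

# Illustrative set of FHIR resource types treated as PHI-bearing.
PHI_RESOURCE_TYPES = {"Patient", "Observation", "Condition", "MedicationRequest"}

def to_security_event(resource: dict, actor: str, action: str) -> dict:
    """Normalize an access to a FHIR resource into one flat, OCSF-style record.
    Field names here are illustrative, not an official OCSF schema."""
    return {
        "time": datetime.now(timezone.utc).isoformat(),
        "activity": action,                        # e.g. "read", "export"
        "actor": actor,                            # user or service identity
        "resource_type": resource.get("resourceType"),
        "resource_id": resource.get("id"),
        "contains_phi": resource.get("resourceType") in PHI_RESOURCE_TYPES,
    }

print(to_security_event(patient, actor="svc-analytics", action="read"))
```

Because the resource type is standardized, the event can assert whether PHI was touched without parsing free text.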
From a CISO perspective, what are the “must-have” capabilities in a telehealth security stack going into 2026?
Zero trust access should be standard. Every user and device must be verified continuously. This should be paired with intelligent data classifiers that tag and track health information across systems and trigger context-aware policies based on sensitivity and location. Moving forward, security must be data-centric. The foundation of a telehealth security stack is adaptability: continuous verification and context-aware access decisions should operate seamlessly in the background to protect the patient experience.
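A minimal sketch of such a context-aware decision, assuming hypothetical signals for verification state, device posture, location, and data sensitivity (not any specific vendor's policy engine):

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user_verified: bool      # recent strong authentication
    device_managed: bool     # device enrolled and healthy
    location: str            # e.g. "us", "unknown"
    data_sensitivity: str    # "restricted", "internal", "public"

def access_decision(ctx: AccessContext) -> str:
    """Re-evaluated on every request: combine identity, device, and data context."""
    if not ctx.user_verified:
        return "deny"
    if ctx.data_sensitivity == "restricted":
        if not ctx.device_managed:
            return "deny"
        if ctx.location == "unknown":
            return "step_up"   # require re-verification before granting access
    return "allow"

print(access_decision(AccessContext(True, True, "unknown", "restricted")))  # step_up
```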
Another must-have is shadow AI controls. These must go beyond blocking AI websites by category to aggressively auditing the evolving feature sets of existing and new software and SaaS. A major hidden risk involves legacy vendors bolting generative AI capabilities onto their products to remain competitive, often enabling these features by default. This creates a blind spot where previously vetted tools suddenly process sensitive data without security review. Consequently, governance requires continuous monitoring to detect and, as appropriate, opt out of these AI features, ensuring that authorized software does not inadvertently become a data leak vector.
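One way to approximate that monitoring is to diff a vendor's currently exposed features against the set reviewed at procurement and flag anything that looks like a newly added AI capability; the sketch below uses hypothetical vendors and simple keyword heuristics.

```python
# Feature sets captured at the last security review (hypothetical data).
reviewed_features = {
    "scheduling-saas": {"calendar_sync", "sms_reminders"},
    "notes-vendor": {"templates", "export_pdf"},
}

# Features the vendors currently expose, e.g. gathered from admin consoles or release notes.
current_features = {
    "scheduling-saas": {"calendar_sync", "sms_reminders", "ai_summaries"},
    "notes-vendor": {"templates", "export_pdf"},
}

AI_KEYWORDS = ("ai_", "copilot", "assistant", "genai")

def unreviewed_ai_features(reviewed: dict, current: dict) -> dict:
    """Flag features added since the last review that look like generative AI."""
    findings = {}
    for vendor, features in current.items():
        new = features - reviewed.get(vendor, set())
        ai_new = {f for f in new if any(k in f.lower() for k in AI_KEYWORDS)}
        if ai_new:
            findings[vendor] = sorted(ai_new)
    return findings

print(unreviewed_ai_features(reviewed_features, current_features))
# -> {'scheduling-saas': ['ai_summaries']}
```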
Lastly, CISOs shouldn’t sleep on existing technologies like email. New threats demand new controls, particularly as the Verizon DBIR continues to identify the inbox as the primary driver of healthcare breaches.
The 2026 stack must reject the opaque nature of legacy gateways in favor of an architecture where AI functions as a detection engineer that learns the organization's unique communication patterns. By analyzing sentiment and relationship strength, such a system can generate bespoke detection logic to surface net-new attacks that technically appear safe but are behaviorally anomalous. This tailored intelligence must then integrate directly with existing identity and access management and response playbooks to address the issue via quarantine, blocking, identity verification, and similar actions. It's a new world, even for old tech.
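As a toy illustration of behavior-based email triage feeding a response playbook, the following sketch scores an inbound message on relationship strength and request type; the features, weights, and thresholds are invented for the example, not a description of any product.

```python
from dataclasses import dataclass

@dataclass
class InboundEmail:
    sender: str
    recipient: str
    prior_messages: int      # historical volume between this sender/recipient pair
    urgency_cues: int        # count of pressure phrases found by an upstream model
    requests_credentials_or_payment: bool

def risk_score(msg: InboundEmail) -> float:
    """Toy behavioral score: weak relationship + pressure + sensitive request = risky."""
    score = 0.0
    if msg.prior_messages == 0:
        score += 0.4          # net-new sender relationship
    if msg.urgency_cues >= 2:
        score += 0.3
    if msg.requests_credentials_or_payment:
        score += 0.3
    return score

def respond(msg: InboundEmail) -> str:
    """Route into existing response playbooks based on the score."""
    score = risk_score(msg)
    if score >= 0.7:
        return "quarantine_and_verify_identity"   # e.g. trigger an IAM re-verification flow
    if score >= 0.4:
        return "flag_for_review"
    return "deliver"

print(respond(InboundEmail("new@vendor.tld", "ap@clinic.tld", 0, 3, True)))
# -> quarantine_and_verify_identity
```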