Phil Neray is the VP of Data Security Strategy, InfoSphere Guardium & Optim at IBM. In this interview, Phil discusses the complex issues surrounding cloud computing security, offers insight into what companies migrating to the cloud can expect, and shares tips for organizations considering a cloud computing solution for mission-critical IT services.
Although cloud computing has been widely accepted in the enterprise and its usage is growing exponentially, many are still worried about the security risks. IT managers have nightmares about insecure APIs and interfaces as well as possible account hijackings. How can a company be sure they are getting a secure cloud computing solution whose implementation is secure from every possible aspect?
While cloud and virtualization technologies provide many important benefits, including increased agility, on-demand scalability and lower costs, they also present unique security challenges for enterprises.
According to the National Institute of Standards and Technology there are three types of service models for cloud computing, with each providing enterprises with varying levels of control over security capabilities:
1. Software as a Service (SaaS) – In this deployment model, the cloud provider delivers the entire stack, including the application itself, as a hosted service. Examples of SaaS offerings include Salesforce.com, Gmail and Lotus Live collaboration services. With SaaS, enterprises have little direct control over critical security capabilities such as data encryption or compliance auditing and activity monitoring. However, enterprises are still legally responsible for the confidentiality and integrity of their customer data and other sensitive information. The recommended approach is to ensure – via RFPs and contractual commitments – that your SaaS provider delivers the critical security capabilities you need.
2. Platform as a Service (PaaS) – In the PaaS deployment model, the cloud provider delivers an application development platform – including programming languages and APIs – for creating applications on their cloud infrastructure. The enterprise doesn’t manage or control the underlying cloud infrastructure (network, servers, operating systems, or storage) but has control over the deployed applications and possibly application hosting environment configurations. Examples of PaaS offerings include Microsoft Windows Azure, Force.com and Google App Engine. There is more control over security with PaaS than with the SaaS model, but as the provider is still responsible for most security capabilities, enterprises should rely primarily on contractual commitments to meet their security needs.
3. Infrastructure as a Service (IaaS) – With IaaS, the enterprise controls most of the stack including operating systems, storage, and deployed applications. Examples include Amazon EC2, RackSpace Cloud Servers and IBM SmartCloud Enterprise. Enterprises typically have direct control over security components such as data encryption, host firewalls and activity monitoring.
In addition, enterprises can deploy Private Clouds (whether hosted onsite or off-premises) in order to have more complete control over security capabilities. With private clouds, your enterprise controls the entire software stack as well as the underlying virtualization platform, self-service provisioning and metering tools and hardware infrastructure – plus the staff required to administer the entire environment. In fact, many enterprises have already started down this path by deploying virtualized infrastructures in their data centers. Virtualization forms the basis of most cloud offerings and delivers some of the key benefits of cloud computing, such as more efficient utilization of compute and storage resources, while also providing more control over how security controls are implemented in the infrastructure.
One of the most talked about problems surrounding cloud computing has been trust. The malicious insider threat is something that no software can protect against, and when a company puts its confidential data in the cloud, and that data is to some degree accessible by a third party, countless problems come to mind. What controls can be put in place to positively eliminate this type of threat?
Insider threats pose an interesting and growing challenge for organizations. According to a recent U.S. Chamber of Commerce study, disgruntled employees or employees transitioning out of a position have cost companies tens of millions. The definition of insiders is also evolving to include outsourced personnel, contractors and partners. In addition, a recent Verizon Business data breach study reports that the number of insider threat incidents investigated nearly doubled compared with the previous year.
As with most IT challenges, the key to effectively managing insider threats includes a combination of people, process and technology. Here are some examples of best practices in this area:
Trust but verify: Monitor activities of privileged insiders such as DBAs, developers, system administrators and outsourced personnel. Continuous, real-time monitoring is crucial for rapidly detecting suspicious or unauthorized activity – such as a customer service rep downloading hundreds of sensitive data records in a single day – and limiting exposure to attacks and misuse. Monitoring of privileged users is also a requirement for compliance regulations such as SOX, PCI, HITECH, and FISMA. Database activity monitoring (DAM) and database auditing technologies allow organizations to generate a secure, non-repudiable audit trail of all database activities that affect security posture (such as creating new accounts), data integrity (such as changing sensitive data values or schemas), or data privacy (such as viewing sensitive data). In addition to being a key compliance requirement, granular audit trails are also important for forensic investigations.
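The "customer service rep downloading hundreds of records" scenario can be illustrated with a minimal sketch of the kind of threshold rule a DAM tool might apply. The log format, user names, table name and policy limit below are all hypothetical, chosen only for illustration:

```python
from collections import Counter

# Hypothetical audit-log entries: (user, action, table) tuples, as a
# DAM tool might emit them for a single day. Names are illustrative.
audit_log = [
    ("dba_smith", "ALTER", "customers"),
    ("csr_jones", "SELECT", "customers"),
] + [("csr_jones", "SELECT", "customers")] * 300

DAILY_SELECT_THRESHOLD = 200  # assumed per-user, per-day policy limit

def flag_bulk_readers(log, threshold=DAILY_SELECT_THRESHOLD):
    """Return users whose SELECTs on the sensitive table exceed the threshold."""
    counts = Counter(user for user, action, table in log
                     if action == "SELECT" and table == "customers")
    return sorted(u for u, n in counts.items() if n > threshold)

print(flag_bulk_readers(audit_log))  # ['csr_jones']
```

A real DAM product would evaluate such rules continuously against live traffic rather than a batch log, but the underlying idea – compare observed activity volume against a policy baseline – is the same.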
Monitor the application layer: Monitor the activities of end-users who access sensitive data via multi-tier enterprise applications such as SAP, PeopleSoft and Cognos. Well-designed DAM solutions can associate specific transactions at the database tier with specific end-user accounts, in order to deterministically identify individuals who are violating corporate policies. In addition, combining database auditing information with access logs from applications and host systems, via a Security Information and Event Management (SIEM) system, shows everything a user has done and provides critical information and analytics for forensic investigations.
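Because multi-tier applications typically connect to the database through a shared pooled account, attributing a database transaction to a named end-user requires correlating the database audit trail with the application's own access log. The sketch below shows this correlation in its simplest form; the record fields, session tags and user names are assumptions, not any real product's schema:

```python
# Hypothetical database audit rows: the pooled application account plus
# a per-session tag the application writes into the connection.
db_audit = [
    {"db_user": "app_pool", "session": "sess-42",
     "sql": "SELECT ssn FROM customers"},
    {"db_user": "app_pool", "session": "sess-99",
     "sql": "UPDATE customers SET limit = 50000"},
]

# Hypothetical application access log mapping sessions to named users.
app_access = [
    {"session": "sess-42", "end_user": "alice@example.com"},
    {"session": "sess-99", "end_user": "bob@example.com"},
]

def attribute_end_users(db_audit, app_access):
    """Join DB audit rows to named end-users via the shared session tag."""
    sessions = {rec["session"]: rec["end_user"] for rec in app_access}
    return [{**row, "end_user": sessions.get(row["session"], "unknown")}
            for row in db_audit]

for row in attribute_end_users(db_audit, app_access):
    print(row["end_user"], "->", row["sql"])
```

A SIEM performs essentially this join at scale, across many log sources, so that a single policy violation at the database tier can be traced back to a person.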
Review entitlements: Periodically review entitlement reports (also called User Rights Attestation reports) as part of a formal audit process. Make sure you follow the principle of “least privilege,” but remember that DBAs typically need privileged access to sensitive databases to do their jobs (hence the need to monitor and audit their activities). Enforce corporate policies that forbid sharing of privileged credentials, since sharing eliminates accountability.
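The core of an entitlement review is a diff between the privileges users actually hold and the privileges they are approved to hold. A minimal sketch, with invented user names and a made-up approved baseline:

```python
# Hypothetical approved baseline: privileges each user should have.
approved = {
    "dba_smith": {"SELECT", "INSERT", "UPDATE", "DELETE", "ALTER"},
    "csr_jones": {"SELECT"},
}

# Hypothetical actual grants, as an entitlement report might list them.
actual = {
    "dba_smith": {"SELECT", "INSERT", "UPDATE", "DELETE", "ALTER"},
    "csr_jones": {"SELECT", "DELETE"},  # excess grant to investigate
}

def excess_privileges(actual, approved):
    """Return, per user, any grants not covered by the approved baseline."""
    return {user: sorted(privs - approved.get(user, set()))
            for user, privs in actual.items()
            if privs - approved.get(user, set())}

print(excess_privileges(actual, approved))  # {'csr_jones': ['DELETE']}
```

Note that the DBA's broad grants produce no findings here: least privilege does not mean no privilege, it means privileges that match the approved role.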
Don’t forget terminated employees: There are numerous examples of former employees who stole data from ex-employers or sold their administrative credentials on the black market. Make sure IT is involved in the employee termination process, so that all of the former employee’s accounts – including remote access – are automatically de-provisioned immediately.
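One way to make the de-provisioning step verifiable is to periodically cross-reference the HR termination list against every system that holds active accounts, including remote access. The systems and user names below are hypothetical:

```python
# Hypothetical termination list from HR and per-system active accounts.
terminated = {"bob"}
active_accounts = {
    "ldap": {"alice", "bob"},
    "vpn":  {"bob"},          # remote access is easy to overlook
    "db":   {"alice"},
}

def orphaned_accounts(active_accounts, terminated):
    """Return accounts still active for terminated employees, per system."""
    return {system: sorted(users & terminated)
            for system, users in active_accounts.items()
            if users & terminated}

print(orphaned_accounts(active_accounts, terminated))
# {'ldap': ['bob'], 'vpn': ['bob']}
```

In practice this check belongs in an identity-management workflow triggered by the HR termination event, so accounts are removed automatically rather than found later by audit.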
What tips would you give to an organization considering migrating to a cloud computing solution for mission-critical IT services? Furthermore, to what degree will they have to reorganize their overall approach to risk management?
Treat the use of public clouds like any other outsourced activity: check cloud provider references, review their policies, and make sure that all of your key requirements (such as SLAs, backup and disaster recovery, encryption, change and configuration processes, privileged user monitoring and auditing, etc.) are specified in your contract with them.
You should also audit your cloud provider periodically. This may be more difficult with larger, well-established cloud providers, in which case you’ll need to rely on third-party audits such as SAS 70. A SAS 70 audit verifies the functionality of a service organization’s control activities and processes, but keep in mind that SAS 70 itself does not specify a pre-determined set of control objectives that service organizations must achieve – so make sure the provider's documented controls meet all of your organization’s specific requirements.
Based on your experience, how long and potentially burdensome is the adaptation period for a company that switched its entire IT services to the cloud?
Most organizations are taking a phased approach. For example, they might migrate to SaaS offerings for CRM, email or blogging while retaining other business-critical applications in-house, such as ERP and financials. Many organizations are also using public clouds for less-critical test and development functions.
Migrating to an in-house virtualized infrastructure is also a good initial step, for both large and small organizations, because virtualization is now a relatively mature technology that delivers many of the flexibility and resource-utilization benefits of cloud computing while still allowing a large degree of control over security. Many larger enterprises are also moving beyond virtualization to offer private cloud services to their business users, layering automated self-service and metering technologies over their virtualized infrastructures for added agility and convenience.
For start-up organizations with limited capital and staff to build their own IT infrastructures, especially those in high-tech domains such as social media or mobility, it may make more sense to use public clouds exclusively from Day 1. If you choose this path, you may still want to adopt a hybrid approach in which your most sensitive data is retained in-house – high-value assets such as intellectual property, proprietary plans or product designs that would significantly impact the firm if compromised by cyber-criminals or by insiders at the cloud provider.