
Preventing Data Breaches with App-Centric Security

By Mike Wronski

Ready for a sobering stat? Over 6 million data records are lost every day.1

The volume and sophistication of cyberattacks, along with the media coverage given to the extensive losses from successful exploits, have made security a top priority for IT leadership. No business wants to make headlines because of a security breach.

Looking at the details surrounding many of the recent public breaches, some common themes emerge.

One of the more prevalent is that the attacks were not aimed directly at the data repositories. In most accounts, attackers found a single small weakness, such as an unpatched server or an exposed remote IT service, that allowed them to establish a base of operations inside the target environment. This points to the first challenge: organizations have used network segmentation and firewalls for many years, and while these techniques are very effective at the macro level, once an attacker is inside those perimeters the safeguards are rendered ineffective.

A Modern Approach is Required

Perimeter defenses are ineffective here because they have little ability to restrict or police network traffic between applications or virtual machines (VMs). Once inside the perimeter, an attacker can set up a base of operations and look for other, higher-value targets. This method of attack propagation is typically described as lateral movement.

The data economy renders today's network, perimeter-based security useless. As businesses monetize information and insights across a complex business ecosystem, the idea of a corporate perimeter becomes quaint—even dangerous.

Forrester2

The obvious question is: how do we restrict malicious lateral movement in the datacenter? One way is to keep making smaller and smaller perimeters, using more virtual networks (VLANs) or additional hardware firewalls. The problem with this approach is a combination of cost and complexity: it requires more, or larger, security devices with fairly complex configurations. It would provide “better” security, but the cost would likely prevent most organizations from considering the option.

Microsegmentation and Zero Trust Model

For some time now, security experts have advocated microsegmentation and a philosophy called the Zero Trust Model. Microsegmentation essentially reduces security perimeters down to individual VMs. Zero Trust adds policy that allows only the traffic required between applications and users. IT operators have had the base technology to implement microsegmentation for quite some time: most server operating systems have shipped for many years with built-in local firewalls that can block traffic.
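To make the idea concrete, here is a minimal sketch of what microsegmentation looks like at a single VM's built-in firewall: a default-deny ruleset that whitelists only the flows an application needs. The tier names, ports, and source ranges are hypothetical examples, not a real policy, and real deployments would use whatever policy tooling the environment provides.

# Illustrative sketch only: renders a default-deny (whitelist) iptables
# ruleset for one VM's local firewall. Tiers, ports, and addresses are
# hypothetical.

ALLOWED_INBOUND = [
    # (description, protocol, port, source CIDR)
    ("web tier -> app tier", "tcp", 8443, "10.0.1.0/24"),
    ("monitoring agent",     "tcp", 9100, "10.0.9.10/32"),
]

def render_rules(allowed):
    """Emit iptables commands: allow listed flows, drop everything else."""
    rules = [
        "iptables -P INPUT DROP",  # default deny
        "iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT",
    ]
    for desc, proto, port, src in allowed:
        rules.append(
            f"iptables -A INPUT -p {proto} --dport {port} -s {src} -j ACCEPT"
            f"  # {desc}"
        )
    return rules

if __name__ == "__main__":
    print("\n".join(render_rules(ALLOWED_INBOUND)))

Simple enough for one VM. The trouble, as the next section describes, is doing this consistently for thousands of VMs.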

Which brings me to my next question: if the security model is well known and the technology has existed for many years, why have most large enterprises not implemented microsegmentation or adopted a Zero Trust security model?

The answer is quite simple: complexity in two major areas, policy management and policy creation. I’ll start with management. Anyone who has tried to manage Windows firewalls with Microsoft Group Policy, or to manage iptables in Linux with any tool, will tell you that it’s a daunting task. Success here requires that rules be pushed consistently and with a guarantee of application. Variations in functionality across OS versions and configurations make the task that much more complex.

Static vs. Dynamic

The proliferation of virtualization, combined with the rise of software-defined networking, provides the tools required to minimize this burden. Network security policy is typically defined in terms of network endpoints and identifiers: details like a hardware (MAC) address, a network (IP) address, or a VLAN ID are combined with application protocol information to describe the network traffic the policy applies to. The challenge with policy written this way is that as applications become more distributed (on-prem + cloud + SaaS) or more dynamic (easy scale-up or scale-out), static identifiers become a liability.

Virtualization can help resolve this conflict between statically defined security and the desire for more automation in application management. The hypervisor is aware of all the virtual endpoint identification elements: it knows how many interfaces a VM has, their MAC and IP addresses, and the virtual networks they connect to. It therefore makes sense to remove the need for manual enumeration and allow a more dynamic security policy that gets this information from the hypervisor and adapts automatically when something changes. Policy can thus be simplified to basic details about the endpoint (e.g., which VMs make up the application) and the far less dynamic application protocol details (e.g., TCP port 443 for SSL-based web traffic).
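A rough sketch of the difference, assuming a hypothetical get_vms_in_group() inventory call that stands in for the hypervisor's knowledge of its endpoints: the policy names VM groups and protocols, and the concrete addresses are resolved at enforcement time rather than hard-coded.

# Sketch of a dynamic, hypervisor-aware policy. get_vms_in_group() is a
# hypothetical stand-in for an inventory API; any real hypervisor or SDN
# platform exposes this differently.

POLICY = [
    # (source group, destination group, protocol, port)
    ("web-tier", "app-tier", "tcp", 8443),
    ("app-tier", "db-tier",  "tcp", 5432),
]

def get_vms_in_group(group):
    """Hypothetical inventory lookup: group name -> current VM IPs."""
    inventory = {
        "web-tier": ["10.0.1.11", "10.0.1.12"],
        "app-tier": ["10.0.2.21"],
        "db-tier":  ["10.0.3.31"],
    }
    return inventory[group]

def expand_policy(policy):
    """Resolve group names to addresses at enforcement time, so the same
    policy stays correct as VMs are added, removed, or re-addressed."""
    for src_group, dst_group, proto, port in policy:
        for src in get_vms_in_group(src_group):
            for dst in get_vms_in_group(dst_group):
                yield (src, dst, proto, port)

for rule in expand_policy(POLICY):
    print("ALLOW %s -> %s %s/%d" % rule)

Because the addresses come from inventory rather than from the rules themselves, scaling out the web tier requires no policy change at all.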

Visibility and Understanding are Key

The larger issue with policy is knowing how applications communicate. In older security models, firewall administrators used a model called blacklisting, in which known “bad things” are blocked from the network and the list is curated and updated based on security vulnerability reports or general IT best practice. The Zero Trust model reverses this concept into what is commonly called “whitelisting”: the policy should allow only required network traffic. That is the root of the problem, because most operators don’t have a solid idea of what that list of “good” traffic is. Though much more secure, this approach carries a much higher risk of impacting application operation by improperly blocking necessary communication.
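The inversion is easy to state in code. A toy sketch, with made-up flow tuples: a blacklist permits anything not explicitly denied, while a whitelist denies anything not explicitly permitted.

# Toy illustration of the two models; flows and lists are invented.
BLACKLIST = {("any", 23)}            # e.g., block telnet everywhere
WHITELIST = {("10.0.1.0/24", 8443)}  # e.g., allow only web -> app traffic

def blacklist_allows(flow):
    # Default allow: traffic passes unless it is a known "bad thing".
    return flow not in BLACKLIST

def whitelist_allows(flow):
    # Default deny: traffic passes only if it is known "good" traffic.
    return flow in WHITELIST

unknown_flow = ("10.0.5.99", 6667)     # traffic nobody enumerated
print(blacklist_allows(unknown_flow))  # True  -> the gap attackers exploit
print(whitelist_allows(unknown_flow))  # False -> the Zero Trust default

The unknown flow is exactly where the two models diverge: a blacklist waves it through, a whitelist stops it, and a whitelist built from an incomplete picture of the application stops legitimate traffic too.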

With the complex interactions between homegrown, third-party, and SaaS applications, understanding how each component communicates is a fairly large undertaking, and that understanding must be constantly reviewed and updated. Again, this is an area where virtualization and SDN enable software that can discover the VMs and services that comprise an application.

The modern approach must give operators the tools to discover and visualize applications along with their respective traffic patterns. With this level of detail, admins and operators have a solid foundation for understanding “good” traffic and creating a whitelist-based policy.
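A rough sketch of that discovery step, using invented flow records in place of whatever telemetry the platform actually exposes: observed traffic is aggregated into candidate whitelist rules that an operator can review before enforcement.

# Sketch only: aggregate observed flows into candidate whitelist rules.
# The flow records are invented; a real tool would read them from the
# hypervisor or SDN layer.
from collections import Counter

OBSERVED_FLOWS = [
    ("10.0.1.11", "10.0.2.21", "tcp", 8443),
    ("10.0.1.12", "10.0.2.21", "tcp", 8443),
    ("10.0.2.21", "10.0.3.31", "tcp", 5432),
    ("10.0.1.11", "10.0.2.21", "tcp", 8443),
]

def candidate_rules(flows):
    """Collapse repeated observations into (dst, proto, port) -> count,
    a starting point for an operator-reviewed whitelist."""
    return Counter((dst, proto, port) for _, dst, proto, port in flows)

for (dst, proto, port), seen in candidate_rules(OBSERVED_FLOWS).items():
    print(f"ALLOW {proto}/{port} to {dst}  (observed {seen} times)")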

Tying It All Together

Attacks are on the rise, and traditional datacenter security methods are no longer sufficient to prevent or limit the impact of a data breach. Nutanix Flow is the modern approach to application-centric network security. Natively part of our AHV virtualization solution, Flow starts with deep visualization and a unique policy model that removes the frustration and risk from application-level policy, and combines it with ubiquitous enforcement through microsegmentation.

Read more about how taking an application-centric approach can improve your security posture and help protect your company from data breaches in our new eBook, Application Centric Security.

1 https://www.breachlevelindex.com/
2 Forrester, Future-Proof Your Digital Business With Zero Trust Security, March 28, 2018, https://www.forrester.com/report/FutureProof+Your+Digital+Business+With+Zero+Trust+Security/-/E-RES137483
