How zero adds value


Opportunities and challenges in adopting a Zero Trust security strategy


As the nature of business continues to change, we must change how we secure our information assets. “In the past ten years, we have transformed our office environment – and the IT network complexity has increased tenfold,” noted Grant Thornton Cybersecurity and Privacy Principal Derek Han.

Many factors have driven this increased complexity: a distributed workforce; the incorporation of contractors and vendors into internal business processes; the shift from on-premises data centers to a blend of on-premises and cloud-based data centers and software as a service; a drastic increase in external access points; and the introduction of personal devices into business networks. More IT systems are handling more functions and being replaced more rapidly.

This growing complexity has reduced visibility. Ultimately, poor visibility can lead to data breaches that expose organizations to legal and regulatory liability, as well as reputational damage. Such breaches could take months to fully identify and effectively remedy.



The Zero Trust model


The Zero Trust model emerged about 10 years ago as a response to growing IT complexity, offering an alternative to the prevailing but outdated approach of inherently trusting any system or person with access to the network while focusing security efforts on protecting its perimeter. Still, “only in the past two years have my clients started taking this seriously,” Han said.

The skepticism at the heart of the Zero Trust model takes the form of five questions. These questions are not one-time gateways; rather, they represent an ongoing effort to ensure security.

  1. Has the device attempting to connect been compromised?
  2. Is the user who they claim to be?
  3. Is a third party listening?
  4. If a system has been compromised, how are we minimizing what it can do and see?
  5. Is the environment continually monitoring for possible suspicious behavior and taking action if it is found?

Given the breadth of implementation – it will touch every process, transaction, asset, and user in an organization – it’s important to start with some strategic questions. “You need to define your priorities, what services you need to protect, what assets you need to protect and the threats you’re most concerned with,” said Han.



Focus on 3 key areas


Once an organization determines its strategy, prioritizing the assets it wants to protect and the threats it faces, it should look to adapt its cybersecurity capabilities in these three areas:

  1. Identity and access management
    This entails rigorous initial verification of users upon setup; application of the “least privilege” principle, limiting a user’s rights to the data absolutely necessary to do their job; ongoing adaptive, risk-aware authentication; timely deprovisioning as users’ roles change; the scouring of obsolete access credentials; and, finally, continual monitoring that feeds intelligence into risk-aware components.

    How does adaptive and risk-aware authentication work? Grant Thornton Cybersecurity and Privacy Managing Director Dave Hicks provided an example: “If a user has accessed something from New York, but a half hour later they are trying to access it from California, clearly it’s not the same person. Having more risk-aware intelligence built into the authentication mechanism means challenging people for more substantiated or elevated credentials beyond just the user ID and password, where it’s warranted.”

    Don’t neglect nonhuman users. “When systems are engaged in system-to-system communication or running batch jobs, the machines in the background of the applications have identities, too. It’s important, from a Zero Trust perspective, that we validate those,” Hicks warned. “Some of those machine identities have elevated privileges and can be exploited for breaches and data exfiltration.”
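How might the “impossible travel” scenario Hicks describes be detected in code? The sketch below is a minimal, illustrative example only — the function names, speed threshold and coordinates are assumptions, not part of any specific product or of Grant Thornton’s methodology.

```python
import math
from dataclasses import dataclass

@dataclass
class LoginEvent:
    lat: float     # latitude in degrees
    lon: float     # longitude in degrees
    time_s: float  # Unix timestamp, seconds

def distance_km(a: LoginEvent, b: LoginEvent) -> float:
    """Great-circle distance between two login locations (haversine formula)."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(a.lat), math.radians(b.lat)
    dp = p2 - p1
    dl = math.radians(b.lon - a.lon)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

# Hypothetical threshold: faster than a commercial flight implies "impossible travel".
MAX_SPEED_KMH = 900.0

def requires_step_up(prev: LoginEvent, curr: LoginEvent) -> bool:
    """True if the implied travel speed between two logins is implausible,
    so the user should be challenged for elevated credentials (e.g. MFA)."""
    hours = max((curr.time_s - prev.time_s) / 3600.0, 1e-6)
    return distance_km(prev, curr) / hours > MAX_SPEED_KMH
```

In the New York-to-California scenario above, roughly 4,000 km in half an hour implies a speed far beyond the threshold, so a step-up challenge would be triggered; a second login across town an hour later would pass quietly.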

  2. Network security
    Encryption, while not new, is invaluable and is increasingly low impact. Next-generation firewalls can account for applications, users and content to intelligently restrict the movement of users within the network. Along with appropriate firewall usage, micro-segmentation is another important tool to help control network traffic flow. By dividing a network into logical and secure subnetworks, micro-segmentation reduces the “attack surfaces” and allows organizations to more intelligently route data.
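As a sketch of the default-deny idea behind micro-segmentation, consider a policy that permits traffic only within a segment or along explicitly allowed flows. The segment names, address ranges and allowed flows below are invented for illustration; real micro-segmentation is enforced by network or hypervisor tooling, not application code.

```python
import ipaddress
from typing import Optional

# Hypothetical subnetworks; names and CIDR ranges are illustrative only.
SEGMENTS = {
    "web":  ipaddress.ip_network("10.0.1.0/24"),
    "app":  ipaddress.ip_network("10.0.2.0/24"),
    "data": ipaddress.ip_network("10.0.3.0/24"),
}

# Default-deny: only flows listed here are permitted between segments.
ALLOWED_FLOWS = {
    ("web", "app"),   # web tier may call the app tier
    ("app", "data"),  # app tier may reach the database segment
}

def segment_of(ip: str) -> Optional[str]:
    """Return the segment an address belongs to, or None if unknown."""
    addr = ipaddress.ip_address(ip)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return None

def is_allowed(src_ip: str, dst_ip: str) -> bool:
    """Permit intra-segment traffic and allowed flows; deny everything else."""
    src, dst = segment_of(src_ip), segment_of(dst_ip)
    if src is None or dst is None:
        return False  # unknown endpoints are denied by default
    return src == dst or (src, dst) in ALLOWED_FLOWS
```

Note that the web tier cannot reach the data segment directly — a compromised web server’s “attack surface” stops at the app tier.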

  3. Data protection
    Protecting an organization’s sensitive data is a primary goal of any cybersecurity program. This starts with identifying the sensitive data itself: defining what the organization considers sensitive, understanding regulatory obligations around different types of data, and identifying where structured and unstructured data exists and how it flows throughout the organization. Once the data is identified, it should be encrypted both at rest and in transit, and tokenized when it moves to less-secure environments. Organizations should consider using a digital rights management system to help enforce the least privilege principle, while also implementing systems to protect against data leakage.

    In a recent Grant Thornton webinar, attendees indicated that identifying unstructured data will be their greatest challenge in implementing Zero Trust approaches.

    “I am not surprised at that result,” Hicks said. “I think both Derek and I, as well as other colleagues, have seen that identifying unstructured data in the organization is a very difficult problem for people to address. It’s not something that any of the existing tools out there, or any existing processes out there, can do with 100% infallibility.”
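To illustrate the tokenization step mentioned above, here is a deliberately simplified sketch: sensitive values stay inside a vault in the secure zone, while downstream systems see only opaque tokens. The class name, token format and vault design are invented for illustration; production tokenization services add encryption, access control and auditing.

```python
import secrets

class TokenVault:
    """Minimal tokenization sketch: sensitive values never leave the vault;
    less-secure environments handle only opaque tokens."""

    def __init__(self):
        self._by_token = {}  # token -> sensitive value (kept in the secure zone)
        self._by_value = {}  # value -> token (so repeat values reuse one token)

    def tokenize(self, value: str) -> str:
        """Replace a sensitive value with a random, meaningless token."""
        if value in self._by_value:
            return self._by_value[value]
        token = "tok_" + secrets.token_hex(8)
        self._by_token[token] = value
        self._by_value[value] = token
        return token

    def detokenize(self, token: str) -> str:
        """Recover the original value -- callable only from the secure
        environment, behind access control."""
        return self._by_token[token]
```

Because the token is random rather than derived from the value, a breach of a less-secure environment yields nothing exploitable.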


Implementation and cultural change


Zero Trust strategies work. But how do organizations make them work? First, they must understand the enterprise-wide nature of the implementation – and, given the expanding sense of what an enterprise is, it’s going to take time. Think years, not months.

Even before addressing the technical challenges, organizations must make the case for change to sometimes-skeptical executives and users. Some may view a Zero Trust model as burdensome and unnecessary, so it’s important to build consensus on the dangers the organization currently faces and to reassure people that any impact on their day-to-day jobs will be minimized where possible, while keeping the focus on better securing the organization’s IT assets. Having reached internal agreement on the importance of security, the organization can turn its attention to evolving its technical capabilities and processes to support a Zero Trust model.

The evolution to a Zero Trust model is a journey that will take many steps for most organizations. It should start with a clear picture of the users and devices accessing an organization’s IT resources and of the underlying infrastructure on which its systems and data reside. Once there is a good understanding of who needs to access systems and where those systems are located, an organization can assess where its highest risks lie and prioritize the implementation of cyber capabilities that address those risks. Whether an organization chooses to start with the identity, network or data aspects of Zero Trust will depend on the specific risks it faces and on how well the different technologies can reduce and mitigate those risks.

It should be clear why Zero Trust matters. In addition to protecting against data breaches, it promises much greater visibility and adaptability. “The opportunity for better monitoring leads to a better understanding of what actually is happening in the IT environment,” Hicks said. “It also gives you a much better perspective on the different risks and the threats that applications and data face. By having increased visibility early on it will help to identify potential breaches and stop them before they become too pervasive.”



