How does ZTNA prevent lateral movement?


I'm just trying to understand how ZTNA (SDP) can prevent lateral movement.
According to NIST's ZTA publication, there are two ZTA models: microsegmentation and SDP. For microsegmentation, it is obviously easy to prevent lateral movement, since every resource sits behind its own security gateway.
But for SDP, once a resource is compromised, it seems easy for an attacker to move between different resources, since there is no gateway in front of each one.

Your view?

Great question. I’ll try to answer, at the risk of not doing it justice in a small forum post. In other words: feel free to reach out for more details.
Zscaler has carved ZTNA up into multiple areas:

  1. apply ZTNA to-and-from users
  2. apply ZTNA to-and-from servers/workloads

The first part is covered by ZPA and ZIA (although the latter is not always included in the ZTNA discussion, I feel Internet (and especially SaaS) access is still very much part of it). Both block all direct IP-to-IP communication and instead broker every connection by terminating (proxying) all sessions, which provides complete control. ZPA extends this by also discarding all TCP/UDP/IP headers and forwarding only application data between client and application server. Not only does this provide a framework for global user-to-application policies (independent of location-based enforcement points), it also automatically prevents traditional TCP/IP-based attacks such as spoofing, network discovery and some DoS. And since ZPA and ZIA use the same Zscaler Client Connector, users can only access internal resources when they are also protected towards the Internet. In other words: this guarantees complete application segmentation as seen by users.
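To make the brokering idea concrete, here is a deliberately simplified sketch (my own illustration, not Zscaler's implementation; the policy table, `Broker` class and app handlers are all hypothetical): the broker terminates both sides and relays only application payload, so the client never gets IP-level reachability to the server, and unauthorized applications are simply invisible.

```python
# Toy sketch of ZTNA-style connection brokering (illustrative only).
# The client talks exclusively to the broker; the broker enforces a
# user-to-application policy and forwards application data only.

# Hypothetical identity-based policy: user -> set of allowed applications.
POLICY = {
    "alice": {"crm", "wiki"},
    "bob": {"wiki"},
}

class Broker:
    def __init__(self, policy, apps):
        self.policy = policy
        self.apps = apps  # app name -> handler (stands in for the real server)

    def request(self, user, app, payload):
        """Broker one request: enforce policy first, then relay payload.

        No IP/TCP details ever reach the client; a denied app returns
        nothing at all, so it cannot even be discovered."""
        if app not in self.policy.get(user, set()):
            return None  # no route and no error detail: app stays dark
        return self.apps[app](payload)

apps = {"crm": lambda p: f"crm:{p}", "wiki": lambda p: f"wiki:{p}"}
broker = Broker(POLICY, apps)

print(broker.request("alice", "crm", "GET /accounts"))  # allowed, relayed
print(broker.request("bob", "crm", "GET /accounts"))    # denied: None
```

The point of the sketch is the shape of the control: access is decided per user and per application at the broker, not by network reachability.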

But to your point: should an application server get infected after all, ZPA won’t control subsequent traffic flows. This is where Zscaler Workload Segmentation (ZWS) comes into play. This service uses a server-installed agent (which interacts with the OS’ security framework to reduce latency and operational impact while still having sufficient access) to analyze and control traffic flows. The service starts by modelling server traffic patterns at a per-process level. This behavior is analyzed by AI/ML algorithms, after which ZWS builds a template with policy recommendations that can be applied to the server and to other servers with the same characteristics. This makes it especially useful in cloud environments where new servers/workloads are added dynamically based on capacity requirements; they are automatically included in the policy. These policies contain both inbound and outbound rules, and they can include both ZWS and non-ZWS sources/destinations.

As mentioned, these policies have process-level granularity. This means that if a server were to get infected, and either new processes started sending/receiving traffic or existing processes began behaving differently, this would be detected and (depending on policy) blocked, stopping lateral movement and reducing the impact of the attack.
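Conceptually, a process-level baseline can be sketched like this (again my own toy illustration, not ZWS's actual engine; the process names and peer labels are made up): learned (process, direction, peer) tuples are allowed, and any flow outside the baseline is blocked.

```python
# Toy sketch of process-level segmentation (illustrative only).
# A learned baseline of (process, direction, peer) tuples is "allow";
# anything else is treated as potential lateral movement and blocked.

BASELINE = {
    ("app-server-proc", "outbound", "postgres"),
    ("postgres", "inbound", "app-server"),
}

def check_flow(process, direction, peer, baseline=BASELINE):
    """Return 'allow' if the flow matches the learned per-process
    baseline, otherwise 'block'."""
    return "allow" if (process, direction, peer) in baseline else "block"

# The modeled process talking to its usual peer is allowed:
print(check_flow("app-server-proc", "outbound", "postgres"))  # allow
# A new process on a compromised host reaching another server is blocked,
# even though the host itself is a legitimate source:
print(check_flow("malware.exe", "outbound", "postgres"))      # block
```

The key design point is that the decision keys on the process, not just the host: a compromised machine can no longer reuse its host-level network permissions to pivot.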