How best to authenticate servers

We would like to authenticate servers in a location so that we can allow outbound internet access to only select hosts. Users will not log into these servers (and I want the access rules to be tied to the server identity).

What is the best way to accomplish this? Can Kerberos be used? Must these computers be licensed as “users”?

Furthermore, I would like to have computer identities logged (along with IP) for any traffic that is blocked. This will allow us to identify a server that starts beaconing out, even if the ZIA rule set stops it.

Hi Hugh, it’s typically a challenge to authenticate servers without a user present, and there’s a strong likelihood that applications on the servers won’t support the required authentication mechanisms (transparent authentication and cookies).

Here are the challenges that would need to be overcome:

  • PAC file / GRE & IPsec tunnel methods use cookie-based authentication to remember the identity. IP Surrogacy can assist with mapping a user to an internal IP for a configurable amount of time, but the authentication has to occur first for surrogacy to kick in.
  • Client Connector doesn’t currently support server operating systems
  • Without a user present, the authentication method implemented with Zscaler would need to support transparent authentication between the server and Zscaler, and the applications on the server would need to both support the transparent authentication method (e.g. Kerberos) and store the authentication cookie, which is highly unlikely for anything except web browsers (see the sketch after this list)
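
To make that last point concrete, here’s a minimal sketch of what explicit Kerberos support looks like in a client, using the third-party requests-kerberos package (the URL and the assumption that the host already holds a valid ticket, e.g. via its machine account or kinit, are for illustration only):

```python
# Sketch: explicit Kerberos (SPNEGO) authentication from a Python client.
# Assumes the host already holds a Kerberos ticket and that the endpoint
# actually offers Negotiate; neither is a given on a headless server.
import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL

# OPTIONAL: don't fail hard if the server skips mutual authentication.
kerberos_auth = HTTPKerberosAuth(mutual_authentication=OPTIONAL)

response = requests.get("https://example.internal/api/health",
                        auth=kerberos_auth)
print(response.status_code)
```

Every other process on the server (CRL/OCSP lookups, agent check-ins, OS telemetry) would need equivalent support built in, which is why this approach rarely works outside browsers.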

If these challenges were overcome and servers were authenticated, they would be indistinguishable to Zscaler from a normal user. A conversation with your account team, along with a list of the server usernames, should help ensure they’re only billed under any server billing setup.

Our recommended best practice for servers is to forward their traffic over IPsec or GRE tunnels, to not implement NAT within the tunnel so Zscaler can still see the internal IPs, and then to build sub-locations around the internal server IP addresses and/or IP subnets. That allows granular policies and reporting on those IPs and subnets as needed.
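
If you want to script that rather than click through the admin portal, a rough sketch against the ZIA cloud service API could look like the following. The endpoint and field names are assumptions based on the public ZIA API reference, so verify them against your tenant’s docs:

```python
# Sketch: creating a /32 sub-location under an existing tunnel location via
# the ZIA cloud service API. Endpoint and field names are assumptions; the
# authentication step (POST /authenticatedSession with an obfuscated API
# key) is omitted here for brevity.
import requests

BASE = "https://zsapi.zscaler.net/api/v1"  # cloud name varies per tenant

session = requests.Session()  # assumed to already hold a session cookie

sub_location = {
    "name": "allowed-outbound-servers",  # hypothetical name
    "parentId": 12345,                   # ID of the parent (tunnel) location
    "ipAddresses": ["10.0.5.10"],        # the server's internal IP (/32)
}

resp = session.post(f"{BASE}/locations", json=sub_location)
resp.raise_for_status()
print(resp.json())
```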


@phayes, thank you for the blow-by-blow description. You are correct that the cookies will not work for the traffic I am envisioning (API calls, CRL/OCSP lookups by the OS, etc.).

I had not thought about sublocations as a solution. So that I am clear, are you suggesting that we create /32 sublocations for servers that are authorized for this outbound access?

If so, I think that would work (because I’m aiming for a full deny for many/most).

I would also recommend looking at the (Advanced) Cloud Firewall for this. Instead of creating sub-locations of one address each, you can make IP & FQDN groups to collect your server source IP addresses or ranges. That allows you to create much more granular policies for servers, covering both web and non-web protocols.
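
As a rough illustration of the grouping idea, a sketch against the ZIA API might look like this; the /ipSourceGroups endpoint and its payload fields are assumptions based on the public API reference, so double-check them for your cloud:

```python
# Sketch: grouping server source IPs so firewall rules can reference them.
# Endpoint and payload fields are assumptions; confirm in the ZIA API docs.
import requests

BASE = "https://zsapi.zscaler.net/api/v1"
session = requests.Session()  # assumed to hold an authenticated session cookie

ip_group = {
    "name": "outbound-allowed-servers",           # hypothetical group name
    "ipAddresses": ["10.0.5.10", "10.0.5.0/28"],  # server IPs and ranges
}

resp = session.post(f"{BASE}/ipSourceGroups", json=ip_group)
resp.raise_for_status()
print(resp.json()["id"])  # use this ID as the source in firewall rules
```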

The sub-location can then cover the full address space of all servers, while the firewall policies allow or block traffic as your policy requires.

If you have Advanced Cloud Firewall, you can even use Cloud Applications as the destination in the FW policy, for example to block servers from accessing social media sites and other non-essential or risky destinations, or to specifically allow Windows Updates for them.
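
Tying it together, a default-deny rule keyed to the IP group above could be sketched like this. Again, the /firewallFilteringRules endpoint, action values, and field names are assumptions to be verified against the ZIA API reference; full logging on the rule also covers the earlier requirement of recording the source IP of anything blocked:

```python
# Sketch: a Cloud Firewall rule that drops traffic from the server group,
# to sit below more specific allow rules. All endpoint/field names here
# are assumptions; check the ZIA API reference for the exact schema.
import requests

BASE = "https://zsapi.zscaler.net/api/v1"
session = requests.Session()  # assumed authenticated session

block_rule = {
    "name": "servers-default-deny",  # hypothetical rule name
    "order": 100,                    # evaluated after more specific allows
    "action": "BLOCK_DROP",          # assumed action value
    "srcIpGroups": [{"id": 67890}],  # ID returned when the group was created
    "enableFullLogging": True,       # log blocked hits with the source IP
}

resp = session.post(f"{BASE}/firewallFilteringRules", json=block_rule)
resp.raise_for_status()
```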