Zscaler Private Access - Active Directory Enumeration

Could you clear something up for me? I’ve been talking with support about this too. I have application segments configured for my domain controllers and DFS file servers. Access to the domain (including “gpupdate /force”) and the file servers seems to be functioning just fine. I am unclear on the reason for this additional wildcard application segment with some kind of dummy port. I know you say it’s for SRV resolution, but TCP port 1 to my knowledge doesn’t exist or relate to AD communication. So how would this wildcard application segment even be hit by a policy? What is its real purpose, and is it really required? As I said, domain and file server access seems to be working just fine without it.

Also, if I enter the following commands on computers connected via the Z-App, I get responses that appear to be SRV records (although I’m no expert). So are SRV lookups working without this wildcard application segment?

nslookup -type=SRV _ldap._tcp.example.com - not my real domain.
nslookup -type=SRV _kerberos._tcp.example.com

Here are some resources I’ve referenced.

“ZPA requires an application to be defined as a wildcard with any port to resolve SRV records. So, for this application segment configuration you are using a dummy port, Port 1, for SRV record DNS resolution.”

Your Post

The wildcard application segment is necessary for DNS SRV lookups. The simplest way to achieve that for all users is with TCP port 1 only, allowing all users access for SRV resolution. You might still have a wildcard application segment with other ports if you still need application discovery to be performed.

ZPA functions by intercepting a DNS request on the client and passing it to an App Connector to resolve. If the application is resolvable and policy allows the client to receive the response, the client subsequently makes a TCP or UDP request to the application.
In general you would start with a wildcard *.domain.com application segment, and TCP/UDP ports 1-65535. So access to a webserver http://server.domain.com:80 would be resolved, and then the TCP/80 connection is established.
With DNS SRV requests, a similar process occurs - however there is a subsequent resolution. i.e. the user/application makes a DNS request for _ldap._tcp.domain.com (i.e. what LDAP server is there in the domain, listening on TCP). The response back would be “server.domain.com port 389 priority 100”. For this to occur, there needs to be an application segment which matches to allow the DNS to be intercepted and passed to the app connector - one way to do this would be to create an application segment *.domain.com with only one port (TCP or UDP, and we suggest port 1 since it’s not going to be actively used). You could also create an application segment _ldap._tcp.domain.com:1, however this would result in you needing to create many segments for every SRV record in the domain.
When the client/app requests SRV _ldap._tcp.domain.com, the response of “server.domain.com port 389 priority 100” results in a secondary DNS lookup for “server.domain.com”, which is passed to the App Connector for resolution. The client subsequently makes the TCP connection to server.domain.com:389.

So - The SRV lookup is really a pointer. An application segment needs to exist with simply port 1 to enable the resolution/response. The subsequent connection to the server is evaluated separately.
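To make the “pointer” idea concrete, here is a small sketch of how a wildcard application segment could match both the SRV query name and the host it points to. This is an illustrative model I wrote for this thread, not Zscaler’s actual matching implementation:

```python
# Simplified model of wildcard application-segment name matching.
# NOT Zscaler's real code - just illustrates why '*.domain.com' with a
# dummy port is enough for SRV interception.

def matches_segment(query_name: str, segment_domain: str) -> bool:
    """Return True if query_name falls under a segment like '*.domain.com'."""
    query = query_name.lower().rstrip(".")
    if segment_domain.startswith("*."):
        suffix = segment_domain[1:].lower()  # '.domain.com' (leading dot kept)
        return query.endswith(suffix)
    return query == segment_domain.lower()

# The SRV query name itself matches the wildcard, so the DNS request is
# intercepted and passed to the App Connector - the dummy port (TCP/1)
# is never used for an actual connection.
print(matches_segment("_ldap._tcp.domain.com", "*.domain.com"))   # True
# The host returned in the SRV answer also matches the wildcard name...
print(matches_segment("server.domain.com", "*.domain.com"))       # True
# ...but a host outside the domain does not.
print(matches_segment("server.otherdomain.com", "*.domain.com"))  # False
```

Note the leading dot kept in the suffix: that is what stops `server.otherdomain.com` from accidentally matching `*.domain.com`. The subsequent TCP/389 connection to the server is evaluated against whatever segment defines that host and port, exactly as described above.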


Thanks for such a detailed response. It makes sense that it’s more of a pointer. My org has an application segment for *.domain.com configured so our IT folks can hit anything over any port, but a normal user doesn’t have access to that application segment. As far as I can tell, AD and domain access is working normally for a regular user: I can access DFS shares and run “gpupdate /force” as a regular user with no SRV wildcard application segment configured for access. Thoughts?

@mryan Thank you for the great post! Lots of useful info.

We are having problems with DNS SRV requests: our clients are unable to get the list of domain controllers when connected to ZPA (the nltest /dclist:domainname.com command cannot find the domain). We have followed your recommendation and added an application segment *.domain.com with TCP port 1, but that doesn’t solve the issue. If we instead open up the same application segment to all TCP and UDP ports, it works. Could you let me know what other troubleshooting steps we can take? I don’t know if it matters, but we are on the ZPA gov version.


There are two parts to this challenge. The DNS SRV request would be of the form _LDAP._TCP.DOMAINNAME.COM - this should work with the *.DOMAINNAME.COM application segment. You can test this on Windows with ‘nslookup -type=SRV _ldap._tcp.domainname.com’.
The nltest command takes the output from that to then perform the site lookup (CLDAP) and find the domain controllers.
If you HAVEN’T defined all the FQDNs of the domain controllers as an application segment, then this is the cause of nltest failing, and the reason it succeeds when you open the wildcard to all ports.

The configuration of AD in this way is a three phase process.

  1. Start with a wildcard and all ports
  2. Create application segments as necessary to control application access - specifically create an application segment containing all the domain controllers
  3. Reduce wildcard segment to a single port

#3 reduces the wildcard to a single port while still allowing the SRV records to work, and #2 allows the hosts returned in the SRV records to map to an application segment.
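The end state of those three phases can be sketched as a host/port match check. The segment names and ports below are invented for illustration and this is only a toy model of the matching behaviour, not ZPA internals:

```python
# Toy model of the final (phase 3) configuration: a wildcard segment on
# a single dummy port, plus an explicit segment listing the DCs.

segments = [
    {"domain": "*.domain.com", "ports": {1}},                      # phase 3: wildcard, dummy port only
    {"domain": "dc1.domain.com", "ports": set(range(1, 65536))},   # phase 2: DCs explicitly defined
]

def allowed(host: str, port: int) -> bool:
    """Is there a segment matching this host and port?"""
    host = host.lower()
    for seg in segments:
        dom = seg["domain"]
        hit = host.endswith(dom[1:]) if dom.startswith("*.") else host == dom
        if hit and port in seg["ports"]:
            return True
    return False

# SRV interception: the wildcard matches the query name via its dummy port.
print(allowed("_ldap._tcp.domain.com", 1))   # True
# The follow-up LDAP/CLDAP connection matches the explicit DC segment.
print(allowed("dc1.domain.com", 389))        # True
# A DC that was never added to any segment has no match on port 389 -
# which is why nltest fails until ALL the DCs are defined.
print(allowed("dc2.domain.com", 389))        # False
```

This matches the earlier report in the thread: opening the wildcard to all ports “fixes” the problem only because it papers over the missing DC segment.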


Concerning Offline Files and DFS: even with all the other good remarks and suggestions in this thread, if connections still fail once in a while, take the Offline Files settings into account. Disabling Offline Files solved the temporary disconnects for us. More info on how to do that: https://www.tenforums.com/tutorials/122727-enable-disable-offline-files-windows.html

@mryan, thanks for the tip. The issue in our case was that we did not have one of our domain controllers listed on the application segment. It looks like if you don’t have all the domain controllers in there, the DNS SRV lookup process fails. All good now!

We are facing an issue whereby accessing applications via Federation with a 3rd party using ADFS is not passing Integrated Windows Auth and presenting an NTLM window prompt. Entering our local domain credentials or remote domain credentials allows us to sign into the app. This should be seamless and allow us to sign right in via SSO. We have a 2-way trust setup between the domains.

We are only seeing this occur via ZPA, as when we are in the office or on Direct Access or SSL VPN, SSO works just fine using Federation. We see this behavior when terminating via an App connector within the local network or via an App connector directly in the remote network where Federation resides. We have the *.remotedomain wildcard defined in our discovery application segment which uses TCP/UDP ports 1-52 and 54-65535.

We are not seeing blocks in the ZPA logs for Kerberos port 88. However, after engaging our AD team, running nltest /dsgetdc:remotedomain over ZPA gives an error, so there is some kind of communication issue via ZPA. There are no errors when running this command on the corporate network or other remote access solutions.

Getting DC name failed: Status = 1355 0x54b ERROR_NO_SUCH_DOMAIN

Looking for direction…

Hello @Raj909
Did you find a solution? I am facing the same issue logging in to the intranet. IWA is not working and web access is presenting an NTLM Windows prompt. The user is correctly validated using their domain credentials, but SSO is not working.


Hi @ddiez, no I haven’t found a solution yet. It could be attributed to missing SRV records in the remote domain, as the following commands fail over ZPA:

nltest /dsgetdc:remotedomain
nltest /dclist:remotedomain
nslookup -type=SRV _ldap._tcp.remotedomain

Hi @Raj909 ,
confirm that the DNS servers on your App Connector are correctly configured. In my lab the App Connector had DNS wrongly configured and could not resolve the domain that nltest queries. Adding my DC/DNS server to /etc/resolv.conf fixed my issue with “Getting DC name failed: Status = 1355 0x54b ERROR_NO_SUCH_DOMAIN”.

Not sure if I’m replying to the right question, but next to *.remote.domain you need to configure remote.domain without the wildcard to resolve the SRV records, as I discovered. Not sure if this is by design, but DFS was working fine after this.

Hello Marco,
but if we have a wildcard domain *.remote.domain (1-52, 54-65535), why do we need to add “remote.domain” without the wildcard? I’d guess the SRV records should match the wildcard app segment?

I had someone ask for a run through of what happens if you set Active Directory up incorrectly.

Let me try to extrapolate an example:

  • App Connectors in Washington DC
  • App Connectors in Arkansas
  • App Connectors in California
  • App Connectors in Florida
  • ServerGroup for each of the above

We have put each region of domain controllers in an app segment that is associated with the closest ZPA Connector

  • App Segment for WDC - Contains dc1, dc2, dc3 - WDC ServerGroup
  • App Segment for Arkansas - Contains dc4, dc5, dc6 - Arkansas ServerGroup
  • App Segment for Cali - Contains dc7, dc8, dc9 - Cali ServerGroup
  • App Segment for Florida - contains dc10, dc11, dc12 - Florida Servergroup
  • App Segment for Wildcard - i.e. *.domain.local - Unsure which servergroup, but largely irrelevant at some point

So - here’s what happens.

Client performs SRV lookup _ldap._tcp.domain.local - hits wildcard, performs lookup, returns answer:

; <<>> DiG 9.10.6 <<>> SRV _ldap._tcp.domain.local
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc1.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc2.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc3.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc4.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc5.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc6.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc7.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc8.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc9.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc10.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc11.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc12.domain.local.

Client then picks one (or two) at random from the list and connects to it using CLDAP (LDAP/UDP/389). The query basically says - what is the closest domain controller for me based on my source IP.

So - whether the user is in Florida, Cali, Arkansas, etc - they will all do this. All users get the same list back. All users will perform the same random selection and connect to that server on CLDAP and issue the same query.

So - Florida user could try DC7 and DC8 - which are only available via Cali ServerGroup, and therefore from the Cali App Connectors.

Once the request is made - the server sees the source IP as “Cali App Connector” and therefore user is in “SITE=CALI” for subsequent domain operations.

What is the fix? It is imperative that the Active Directory Segment(s) containing the Domain Controllers are associated with a ServerGroup which uses ALL App Connectors.

i.e. ServerGroup = “ALL APP Connectors” contains WDC App Connector Group, Arkansas App Connector Group, California App Connector Group, Florida App Connector Group.

What then happens - User performs the same SRV lookup.

Florida user tries to connect to DC7 and DC8. User picks shortest path to App Connector = Florida.

DC7 Connection from Florida App Connector. DC7 sees source IP=Florida and returns “SITE=FLORIDA” and then the list of Domain Controllers = dc10, dc11, dc12. Client then connects to DC10 and receives GPO, Kerberos, etc from there.
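The broken and fixed flows above can be sketched as a toy model of the CLDAP site lookup. The connector labels, subnet-to-site mapping, and DC names are all invented for illustration; the only real behaviour being modelled is that the DC answers the site query based on the source IP it sees:

```python
# Toy model of the AD DC-locator (CLDAP) site lookup described above.
# All names are hypothetical.

# AD's subnet->site mapping, keyed by the SOURCE IP the DC sees
# (i.e. the App Connector the request arrived from).
site_of_source = {
    "wdc-connector": "WDC",
    "cali-connector": "CALI",
    "florida-connector": "FLORIDA",
}

dcs_in_site = {
    "WDC": ["dc1", "dc2", "dc3"],
    "CALI": ["dc7", "dc8", "dc9"],
    "FLORIDA": ["dc10", "dc11", "dc12"],
}

def cldap_site_lookup(source_connector: str):
    """The DC answers 'which site are you in?' from the source IP it sees."""
    site = site_of_source[source_connector]
    return site, dcs_in_site[site]

# Broken setup: the DC segment is tied to the Cali ServerGroup, so a
# Florida user's CLDAP query to DC7 arrives from the Cali connector.
print(cldap_site_lookup("cali-connector"))     # ('CALI', ['dc7', 'dc8', 'dc9'])

# Fixed setup: the segment uses ALL App Connectors, the client takes the
# shortest path, and the DC sees the Florida connector's source IP.
print(cldap_site_lookup("florida-connector"))  # ('FLORIDA', ['dc10', 'dc11', 'dc12'])
```

Same client, same SRV answer, same random DC choice - the only variable is which App Connector sources the CLDAP query, which is why the ServerGroup association is the thing to fix.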

Hello @mryan - the Python script above, executed on 22.48.2, returns the below error. Not sure if that is due to a change in Python version, but I am having long logon issues and testing this from the App Connector would be nice.

[root@zpa-connector admin]# ./python.py
Traceback (most recent call last):
  File "./python.py", line 23, in <module>
    import socket,subprocess,os,pyasn1,ldap,srvlookup,sys
ImportError: No module named pyasn1

I’ve updated the code to include the dependencies needed on a fresh AppConnector.
Code is also available here ZPAScripts/zpa-ad-check.py at master · thewelshgeek/ZPAScripts · GitHub


We have similar issues with cross-domain authentication between two trusted forests: users in one forest and machines in a different forest. Wildcard domains are in place.

We can see queries like _ldap._tcp.machinesite._sites.dc._msdcs.userdomain.com that are failing. On-prem and VPN work. Usually this query fails gracefully and is replaced with a different query to determine the site name on the other side of the trust, but it never gets to this point (example: _ldap._tcp.dc._msdcs.userdomain.com).

Would wildcard domains for *.dc._msdcs.userdomain.com help even though the sitename information is different on both sides of the trust?

Normal LDAP and Kerberos queries seem to work.

Using this as a reference: Domain Locator Across a Forest Trust - Microsoft Tech Community

This setup should 100% work. We do have other clients with similar setups.

I would expect the client to attempt _ldap._tcp.<machinesite>._sites.dc._msdcs.<userdomain>.com; if it’s then accessing a resource in the other domain, it should do a lookup for _kerberos._tcp.<domain>.com to get the KDC for the cross-realm ticket. There are obviously nuances to this depending on how the trusts are established and the DNS resolution across the domains.

The setup with ZPA, which might highlight the difference with VPN/on-network: you need to ensure you have the wildcard segments for both domains (e.g. *.domain1.com and *.domain2.com). You don’t need to go down to *.dc._msdcs.domain2.com - whilst a wildcard will match several levels for a resource (A record), this is an SRV record, which is handled differently.
The second requirement would be to ensure the App Connectors associated with the wildcards are appropriate. As an “extreme” example: you have 10 App Connectors in 10 sites for domain1, and 10 App Connectors in 10 sites for domain2. The wildcard for domain1 should be associated with the ServerGroup containing all 10 App Connector groups for domain1, and the wildcard for domain2 should be associated with the ServerGroup containing all 10 App Connector groups for domain2.

For your setup - it seems an App Connector should be able to DNS resolve, and connect to, domain1 and domain2 - start by checking DNS resolution on the connectors. On the connector, can you run ‘dig SRV _ldap._tcp.domain1.com’ and ‘dig SRV _ldap._tcp.domain2.com’? You can do this in the Zscaler Admin UI as a support session now (saving you from SSHing to the connector). Ensure all the connectors can resolve the domains.
If the connectors can resolve, then ensure you have the segment created as described, and associated with the servergroup containing all the app connectors.

If you still have issues - I would suggest a support ticket - and including a PCAP from the client and the ZCC logs to understand what DNS requests and connections are being made - and to troubleshoot the connectors.

I appreciate the reply. We have been digging on this one a while. Here is what we have found so far.
AD SiteName are different in both domains but IP ranges overlap.

So a machine’s AD SiteName could be called USCorp1, but the user domain’s AD SiteName on the same machine would be Corp. When the query occurs for the machine’s AD site against the user domain, the lookup fails because it doesn’t exist (_ldap._tcp.Corp._sites.dc._msdcs.userdomain). Which is correct, because that record doesn’t exist. On-prem it then fails over to _ldap._tcp.dc._msdcs.userdomain, which can resolve.

With ZPA the first query fails as well, but is then thrown to the external connection for a re-attempt. We have discovered that one of the parent domains does not have SOA or NS records externally, and that seems to prevent it from gracefully failing over and creating the next query, _ldap._tcp.dc._msdcs.userdomain, which would then resolve through ZPA.
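The fallback chain being described can be written out explicitly. This is a sketch of the query sequence only (the site and domain names are illustrative), showing which lookup must fail cleanly before the site-less fallback is even attempted:

```python
# Sketch of the DC-locator DNS fallback chain described above.
# Names are illustrative placeholders.

def locator_queries(machine_site: str, user_domain: str) -> list[str]:
    """Return the SRV names the client tries, in order."""
    return [
        # Tried first; fails because the machine's site name does not
        # exist in the user domain.
        f"_ldap._tcp.{machine_site}._sites.dc._msdcs.{user_domain}",
        # Site-less fallback; only attempted if the first lookup fails
        # gracefully (NXDOMAIN rather than hanging or being re-routed).
        f"_ldap._tcp.dc._msdcs.{user_domain}",
    ]

for q in locator_queries("USCorp1", "userdomain.com"):
    print(q)
```

If the first query is handed off to external DNS and the parent zone lacks SOA/NS records there, the resolver never gets the clean NXDOMAIN it needs to move on to the second query, which matches the behaviour reported above.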

It’s definitely been a challenge, but we hope we are on the right track. Most things work. It’s the small things like Operations Masters showing errors, nltest /dclist: failing, and odd errors with admin tools that seem to need that cross-forest Kerberos ticket.

Have you configured the “DNS Search Suffixes” in ZPA, and ticked the “Domain Validation” checkbox for each? The “Domain Validation” checkbox should ensure that all internal suffixes are resolved through ZPA before external DNS is attempted.