Zscaler Private Access - Active Directory Enumeration

When using Zscaler Private Access to access Active Directory, it's important to consider that the Connector IP address is seen as the source IP for user requests. It is therefore imperative that the Connector IP ranges are configured in Active Directory Sites and Services: this mapping determines which Active Directory Domain Controller processes the Netlogon request and how GPO is delivered. The AD Site also controls which SCCM server is used (when SCCM is configured in AD Site mode rather than IP Boundary mode).
Additionally, the connector uses UDP/389 (CLDAP) to enumerate the domain.
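
The subnet-to-site decision above can be sketched as follows. This is a simplified illustration (the subnets and site names are made up, and real DCs use a longest-prefix match against the subnet objects in AD Sites and Services); it shows why an unlisted connector egress range leaves the client with no site assignment.

```python
# Sketch: how AD assigns a site from the request's source IP (here, the
# connector IP). Subnets and site names below are hypothetical examples
# standing in for the subnet objects in AD Sites and Services.
import ipaddress

# Connector egress ranges must appear in this mapping, or clients
# connecting via ZPA get no site (and therefore a random DC).
SUBNET_TO_SITE = {
    "10.10.0.0/16": "WDC",
    "10.20.0.0/16": "FLORIDA",
}

def site_for_source_ip(ip):
    """Return the AD site whose subnet contains ip (longest prefix wins)."""
    addr = ipaddress.ip_address(ip)
    best = None
    for subnet, site in SUBNET_TO_SITE.items():
        net = ipaddress.ip_network(subnet)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, site)
    return best[1] if best else None

print(site_for_source_ip("10.20.5.9"))   # connector in the Florida range
print(site_for_source_ip("192.0.2.1"))   # unlisted range: no site assigned
```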

The process for a workstation receiving AD Site, Logon, and GPO policy is:

  • DNS SRV Lookup (_ldap._tcp.domain.com)
  • CLDAP (UDP/389) to servers returned in the DNS SRV answer
  • LDAP (TCP/389) to a server selected via CLDAP
  • DNS SRV Lookup (_kerberos._tcp.domain.com)
  • Kerberos Auth (TCP/88) to get TGT and service tickets
  • RPC Endpoint Mapper (TCP/135) to return the TCP port for GPO
  • RPC Connection (TCP high port, 40000+) to receive GPO Policy
  • SMB2 (TCP/445) to connect to the AD Server for the Netlogon process
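
Once the SRV answer for step 1 comes back, the client orders the records by priority and then by weighted random selection within each priority (per RFC 2782) before trying CLDAP. A minimal sketch of that ordering, with made-up record data:

```python
# Sketch of RFC 2782 SRV record ordering: lowest priority first; within a
# priority, a weighted random pick. Sample records below are hypothetical.
import random
from collections import namedtuple

SRV = namedtuple("SRV", "priority weight port target")

def order_srv(records, rng=random):
    """Return SRV targets in the order a client should try them."""
    ordered = []
    for prio in sorted({r.priority for r in records}):
        group = [r for r in records if r.priority == prio]
        while group:
            # Weighted random pick; zero weights fall back to equal odds.
            total = sum(r.weight for r in group) or len(group)
            pick = rng.uniform(0, total)
            running = 0
            for r in group:
                running += r.weight or 1
                if pick <= running:
                    ordered.append(r)
                    group.remove(r)
                    break
    return [r.target for r in ordered]

records = [
    SRV(0, 100, 389, "dc1.domain.com"),
    SRV(0, 100, 389, "dc2.domain.com"),
    SRV(10, 0, 389, "backupdc.domain.com"),
]
print(order_srv(records))  # dc1/dc2 in random order, backupdc always last
```

This is why every DC in the SRV answer must be reachable through an application segment: the client may legitimately pick any of them.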

It’s imperative that a wildcard domain exists for ALL Active Directory authentication domains (*.domain.com, *.trusteddomain.com, *.domain.internal) for the DNS SRV lookups to succeed.
The UDP connection from the connectors must succeed in order to enumerate the domain and start the LDAP and Netlogon processes.
RPC high ports are required to receive GPO; the ports can be changed on the AD servers if necessary.
Since this can generate a significant amount of traffic, it’s worth reviewing the Connector Performance topic to ensure the ephemeral port range is increased to handle the number of UDP/TCP source ports that may be used.

To test whether UDP connectivity is available and the Active Directory domain is enumerable (DNS/UDP), the following script takes the domain name as input (e.g. domain.com), performs the DNS SRV lookup, then makes the UDP connection and queries for the AD Sites and Services information. This should be run from each connector to ensure that every connector associated with the Active Directory Application Segment can resolve and connect to every server.


#Python3 Script - Takes input of a DNS Domain Name
#performs DNS SRV lookup for _ldap._tcp.domain.com
#which returns all Active Directory Domain Controllers
#in the directory.  Attempts CLDAP (UDP/389) Connection
#to each server and queries NetLogon service for details
#then outputs the result.  This enables a full view of
#connectivity to domain controllers, and details of AD Site

#sudo yum update -y
#sudo yum install python3 openldap-devel python3-devel -y
#sudo yum group install "Development Tools" -y 
#python3 -m pip install setuptools pyasn1 pyasn1_modules srvlookup python_ldap --user

#References https://github.com/kimgr/asn1ate
#References https://github.com/etingof/pyasn1/ & http://snmplabs.com/pyasn1/
#References https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/d2435927-0999-4c62-8c6d-13ba31a52e1a

import socket
import subprocess
import os
import pyasn1
from pyasn1_modules.rfc2251 import *
import srvlookup
import sys
from struct import *
from pyasn1.type import univ, tag
from pyasn1.codec.der.encoder import encode
from pyasn1.codec.der.decoder import decode

#Force Suppress errors to DevNull

class DevNull:
    def write(self, msg):
        pass

#sys.stderr = DevNull()

domaininput = sys.argv[1]

srvrecord = srvlookup.lookup("ldap", protocol="tcp", domain=domaininput)
for srec in range(0, len(srvrecord)):
	testdc = srvrecord[srec].hostname
	testport = srvrecord[srec].port
	testip = srvrecord[srec].host
	ntver = "%c%c%c%c" % (6, 0, 0, 0)
	cldap = LDAPMessage()
	cldap['messageID'] = 0
	search = SearchRequest()
	search['baseObject'] = ""
	search['scope'] = 0
	search['derefAliases'] = 0
	search['sizeLimit'] = 0
	search['timeLimit'] = 0
	search['typesOnly'] = 0
	filter1 = Filter()
	filter1['equalityMatch']['attributeDesc'] = 'DnsDomain'
	filter1['equalityMatch']['assertionValue'] = domaininput
	filter2 = Filter()
	filter2['equalityMatch']['attributeDesc'] = 'Host'
	filter2['equalityMatch']['assertionValue'] = testdc
	filter3 = Filter()
	filter3['equalityMatch']['attributeDesc'] = 'NtVer'
	filter3['equalityMatch']['assertionValue'] = ntver
	filter4 = Filter()
	filter4['and'].extend([filter1, filter2, filter3])
	attribute = AttributeDescription('Netlogon')
	attributes = AttributeDescriptionList()
	attributes.append(attribute)
	search['attributes'] = attributes
	search['filter'] = filter4
	cldap['protocolOp']['searchRequest'] = search
	substrate = encode(cldap)
	server = testip
	port = testport
	sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
	sock.settimeout(5)
	sock.connect((server, port))
	sock.send(substrate)
	try:
		data = sock.recv(500)
	except socket.timeout:
		data = "NR"
	if data == "NR":
		print(testdc+" timed out")
	else:
            #	Ignore first 2 Bytes - OpCode
            #	Ignore next 4 Bytes - Flags
            #	Ignore next 16 Bytes - Guid
            #	Until 00 or 18 read into Domain - if 18 it = Forest
            #	until 00 or 18 read into hostname - if 18 append forest to domain
            #	until 00 read into NetBIOSDomain
            #	until 00 read into NetBIOSHostname
            #	Until 00 read into username
            #	until 00 read into ServerSiteName
            #	until 40 read into ClientSiteName
            #	4 bytes Version Flags
            #	8 bytes to end
		x = decode(data, asn1Spec=LDAPMessage())
		z = x[0]['protocolOp']['searchResEntry']['attributes'][0]['vals'][0]._value
		flag = "forest"
		forest = ""
		domain = ""
		hostname = ""
		NetBIOSDomain = ""
		NetBIOSHostname = ""
		username = ""
		ServerSiteName = ""
		ClientSiteName = ""
		for i in range(25, len(z)-1):
			if flag == "forest":
				c = unpack_from('c', z, i)[0].decode('ascii')
				if c == "\x03":
					forest = forest+"."
				if c == "\x00":
					flag = "domain"
				elif c != "c0" and c != "\x03":
					forest = forest+c
			elif flag == "domain":
				c = unpack_from('c', z, i)[0]
				if c != b"\xc0":
					c = c.decode('ascii')
				else:
					c = "c0"
				if c == "\x00":
					flag = "hostname"
				elif c == "\x18":
					flag = "hostname"
					domain = forest
				elif c != "c0" and c != "\x03":
					domain = domain+c
			elif flag == "hostname":
				c = unpack_from('c', z, i)[0]
				if c != b"\xc0":
					c = c.decode('ascii')
				else:
					c = "c0"
				if c == "\x00":
					flag = "NetBIOSDomain"
				elif c == "\x18":
					flag = "NetBIOSDomain"
					hostname = hostname+"."+domain
				elif c != "c0" and c != "\x03":
					hostname = hostname+c
			elif flag == "NetBIOSDomain":
				c = unpack_from('c', z, i)[0]
				if c != b"\xc0":
					c = c.decode('ascii')
				else:
					c = "c0"
				if c == "\x00":
					flag = "NetBIOSHostname"
				elif c != "c0" and c != "\x09":
					NetBIOSDomain = NetBIOSDomain+c
			elif flag == "NetBIOSHostname":
				c = unpack_from('c', z, i)[0]
				if c != b"\xc0":
					c = c.decode('ascii')
				else:
					c = "c0"
				if c == "\x00":
					flag = "username"
				elif c != "c0" and c != "\x03":
					NetBIOSHostname = NetBIOSHostname+c
			elif flag == "username":
				c = unpack_from('c', z, i)[0]
				if c != b"\xc0":
					c = c.decode('ascii')
				if c == "\x00":
					flag = "ServerSiteName"
					username = username+"<ROOT>"
				elif c != b"\xc0":
					username = username+c
			elif flag == "ServerSiteName":
				c = unpack_from('c', z, i)[0]
				if c != b"\xc0":
					c = c.decode('ascii')
				else:
					c = "c0"
				if c == "\x00":
					flag = "ClientSiteName"
				elif c != "c0" and c != "\x03" and c != "\x07":
					ServerSiteName = ServerSiteName+c
			elif flag == "ClientSiteName":
				c = unpack_from('c', z, i)[0]
				if c != b"\xc0":
					c = c.decode('ascii')
				else:
					c = "c0"
					ClientSiteName = ServerSiteName
				if c == "\x00":
					flag = "done"
				elif c != "c0" and c != "\x03" and c != "\x40" and c != "\x05":
					ClientSiteName = ClientSiteName+c
		srecord = "DNS Server Record : IP="+srvrecord[srec].host+", PORT="+str(srvrecord[srec].port)+", PRIORITY="+str(
		    srvrecord[srec].priority)+", WEIGHT="+str(srvrecord[srec].weight)+", HOSTNAME="+srvrecord[srec].hostname
		forest = "AD Forest : "+forest
		domain = "AD Domain : "+domain
		hostname = "AD DC HostName : "+hostname
		NetBIOSDomain = "AD NetBIOS Domain : "+NetBIOSDomain
		NetBIOSHostname = "AD DC NetBIOS Name : "+NetBIOSHostname
		username = "AD Username : "+username
		ServerSiteName = "AD Server Site : "+ServerSiteName
		ClientSiteName = "AD Client Site : "+ClientSiteName
		#Output the full result for this domain controller
		for line in (srecord, forest, domain, hostname, NetBIOSDomain,
		             NetBIOSHostname, username, ServerSiteName, ClientSiteName):
			print(line)

Can you shed some light on the application segment requirement for SRV queries to pass? As per an article I found for setting up DFS, a segment should be created with a wildcard domain and TCP port 1 only; however, when running diagnostics I never see any traffic hit that application segment. I am asking because DFS is behaving flaky: upon logon, DFS mapped drives are unavailable, but they become available after about 5 minutes of idle time. The only issue I can think of is that the SRV lookup is failing but begins working after 5 minutes due to Workstation service queries.

The wildcard application segment is necessary for DNS SRV lookup. The simplest way to achieve that for all users is with TCP port 1 only, to allow all users access for SRV resolution. You might still have a wildcard application segment with other ports if you still need application discovery to be performed.

DFS requires more than a simple SRV lookup. Since DFS shares could be provided by multiple servers in the network, you need to ensure that the servers are enabled as application segments and that DNS Search Domains are correctly configured.
For example:

\\server\share1 and \\server\share2 are two different DFS shares.
share1 is provided by server1; share2 is provided by server2 and server3.
server1 has FQDN server1.companydomain1.com
server2 has FQDN server2.companydomain2.com
server3 has FQDN server3.companydomain3.com
\\server is a global namespace available across companydomain.com, companydomain1.com, companydomain2.com, and companydomain3.com

You would require DNS Search Suffix to include the four suffixes companydomain.com, companydomain1.com, companydomain2.com, and companydomain3.com
You would require application segments with the following FQDNs (this could be one segment or multiple segments):

  • server.companydomain.com
  • server1.companydomain1.com
  • server2.companydomain2.com
  • server3.companydomain3.com

Therefore when the user maps \\server\share1, the DNS suffix completes it to \\server.companydomain.com\share1, which is then mapped by DFS to \\server1\share1, which is completed to \\server1.companydomain1.com\share1.
Each of these share points is then going to challenge the user for authentication. The DNS SRV lookup is necessary to enumerate Active Directory for Kerberos Authentication. The Kerberos Authentication to each of these share points would need to enumerate each of the AD Domains (companydomain1.com, companydomain2.com and companydomain3.com) - so there would need to be an application segment for each of the domains with the appropriate AD ports open.

The delay you’re experiencing is likely because of domain suffix search order taking time. I.e. the DFS mount point might be companydomain3.com, but it’s attempting companydomain1.com and companydomain2.com first (and trying to get authentication tokens for each).
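
The suffix search behaviour can be sketched with a stub resolver (the suffix list and host names come from the example above; the KNOWN dict stands in for real DNS). Each suffix earlier in the list that fails to resolve adds a lookup (and possibly an authentication attempt) before the correct FQDN is found:

```python
# Sketch of DNS suffix search order with a stub resolver. A mount point
# living under the last suffix pays a failed-lookup cost for every
# suffix tried before it.
SUFFIXES = ["companydomain1.com", "companydomain2.com", "companydomain3.com"]

# Stub standing in for real DNS: pretend only this FQDN resolves.
KNOWN = {"server3.companydomain3.com": "10.0.3.3"}

def resolve_short_name(name):
    """Try each suffix in order; return the first FQDN that resolves."""
    attempts = []
    for suffix in SUFFIXES:
        fqdn = f"{name}.{suffix}"
        attempts.append(fqdn)
        if fqdn in KNOWN:
            return fqdn, attempts
    return None, attempts

fqdn, attempts = resolve_short_name("server3")
print(fqdn)      # server3.companydomain3.com
print(attempts)  # two failed suffixes are tried (and wait) before it
```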

Thanks, in the end it appears it wasn’t Zscaler breaking DFS, it was the combination of DFS and Offline files plus an offsite laptop. Lack of connectivity at logon time takes the root namespace offline, which prevents other drives in the same namespace from connecting until about 5-10 minutes after the user logs in.

Could you clear something up for me? I’ve been talking with support about this too. I have application segments configured for my domain controllers and DFS file servers. Access to the domain (including “gpupdate /force”) and file servers seem to be functioning just fine. I am unclear on the reason for this additional wildcard application segment with some kind of dummy port. I know you say it’s for SRV resolution, but TCP port 1 to my knowledge doesn’t exist or relate to AD communication. So how would this wildcard application even be hit by a policy? What is the real purpose of this and is it really required? As I said, domain and file server access seems to be working just fine without this.

Also, if I enter the following commands on computers connected via the Z-App, I get responses that appear to be SRV records (although I’m no expert). So are SRV lookups working without this wildcard application segment?

nslookup -type=SRV _ldap._tcp.example.com - not my real domain.
nslookup -type=SRV _kerberos._tcp.example.com

Here are some resources I’ve referenced.

“ZPA requires an application to be defined as a wildcard with any port to resolve SRV records. So, for this application segment configuration you are using a dummy port, Port 1, for SRV record DNS resolution.”

Your Post

The wildcard application segment is necessary for DNS SRV lookup. The simplest way to achieve that for all users is with TCP port 1 only, to allow all users access for SRV resolution. You might still have a wildcard application segment with other ports if you still need application discovery to be performed.

ZPA functions by intercepting a DNS request on the client and passing it to an App Connector to resolve. If the application is resolvable and policy allows the client to receive the response, the client subsequently makes a TCP or UDP request to the application.
In general you would start with a wildcard *.domain.com application segment and TCP/UDP ports 1-65535. So access to a webserver http://server.domain.com:80 would be resolved, and then the TCP/80 connection is established.
With DNS SRV requests a similar process occurs, but with a subsequent resolution. The user/application makes a DNS request for _ldap._tcp.domain.com (i.e. which LDAP servers exist in the domain, listening on TCP). The response would be “server.domain.com port 389 priority 100”. For this to occur, there needs to be an application segment which matches, so that the DNS request is intercepted and passed to the App Connector. One way to do this is to create an application segment *.domain.com with only one port (TCP or UDP; we suggest port 1 since it’s not going to be actively used). You could also create an application segment _ldap._tcp.domain.com:1, however this would mean creating many segments, one for every SRV record in the domain.
When the client/app requests SRV _ldap._tcp.domain.com, the response of “server.domain.com port 389 priority 100” results in a secondary DNS lookup for “server.domain.com”, which is passed to the App Connector for resolution. The client subsequently makes the TCP connection to server.domain.com:389.

So - The SRV lookup is really a pointer. An application segment needs to exist with simply port 1 to enable the resolution/response. The subsequent connection to the server is evaluated separately.
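
The matching can be sketched with shell-style wildcards (how ZPA evaluates wildcards internally isn’t documented here, so treat this as an illustrative assumption; `fnmatch`’s `*` crosses dots, so an SRV name falls under *.domain.com):

```python
# Rough sketch: does an FQDN fall under a wildcard application segment?
# Uses fnmatch semantics as a stand-in for ZPA's internal matching.
from fnmatch import fnmatch

def hits_segment(fqdn, segment_domains):
    """True if fqdn matches any domain pattern in the segment."""
    return any(fnmatch(fqdn.lower(), pat.lower()) for pat in segment_domains)

segment = ["*.domain.com"]
print(hits_segment("_ldap._tcp.domain.com", segment))  # True: SRV name matches
print(hits_segment("server.domain.com", segment))      # True
print(hits_segment("domain.com", segment))             # False: bare apex
```

Note the last case: the bare apex domain.com does not match *.domain.com, which is consistent with reports later in this thread that remote.domain sometimes needs its own entry alongside the wildcard.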


Thanks for such a detailed response. It makes sense that it’s more of a pointer. My org has an application segment for *.domain.com configured so our IT folks can hit anything over any port, but a normal user doesn’t have access to that application segment. As far as I can tell, AD and domain access is working normally for a regular user: I can access DFS shares and run “gpupdate /force” as a regular user with no SRV wildcard application segment configured for access. Thoughts?

@mryan Thank you for the great post! Lots of useful info.

We are having problems with the DNS SRV request, our clients are unable to get the list of domain controllers when connected to ZPA (the nltest /dclist:domainname.com command cannot find the domain). We have followed your recommendation and added an application segment *.domain.com TCP port 1 but that doesn’t solve the issue. If we instead open up the same application segment to all TCP and UDP ports, then it works. Could you let me know what other troubleshooting steps we can take? I don’t know if that matters but we are on ZPA gov version.


There are two parts to this challenge. The DNS SRV request would be of the form _LDAP._TCP.DOMAINNAME.COM - this should work with the *.DOMAINNAME.COM application segment. You can test this on Windows with ‘nslookup -type=SRV _ldap._tcp.domainname.com’.
The nltest command takes the output from that to then perform the site lookup (CLDAP) and find the domain controllers.
If you HAVEN’T defined all the FQDNs of the domain controllers as application segments, then this is the cause of nltest failing, and the reason why it succeeds when you open the wildcard to all ports.

The configuration of AD in this way is a three phase process.

  1. Start with a wildcard and all ports
  2. Create application segments as necessary to control application access - specifically create an application segment containing all the domain controllers
  3. Reduce wildcard segment to a single port

#3 reduces the wildcard so that only the SRV records resolve through it, and #2 ensures the records returned in the SRV answer map to an application segment.


Concerning the Offline Files DFS remark: even with all the other good remarks and suggestions in the thread, if connections still fail once in a while, take the Offline Files settings into account. Disabling Offline Files solved the temporary disconnects for us. More info on how to do that: https://www.tenforums.com/tutorials/122727-enable-disable-offline-files-windows.html

@mryan, thanks for the tip. The issue in our case was that we did not have one of our domain controllers listed in the application segment. It looks like the DNS SRV process fails if you don’t have all the domain controllers in there. All good now!

We are facing an issue whereby accessing applications federated with a 3rd party using ADFS does not pass Integrated Windows Auth and instead presents an NTLM prompt. Entering our local domain credentials or remote domain credentials allows us to sign into the app. This should be seamless and allow us to sign right in via SSO. We have a 2-way trust set up between the domains.

We are only seeing this occur via ZPA, as when we are in the office or on Direct Access or SSL VPN, SSO works just fine using Federation. We see this behavior when terminating via an App connector within the local network or via an App connector directly in the remote network where Federation resides. We have the *.remotedomain wildcard defined in our discovery application segment which uses TCP/UDP ports 1-52 and 54-65535.

We are not seeing blocks in the ZPA logs for Kerberos port 88. However, after engaging our AD team, running nltest /dsgetdc:remotedomain over ZPA gives an error, so there is some kind of communication issue via ZPA. There are no errors when running this command on the corporate network or other remote access solutions.

Getting DC name failed: Status = 1355 0x54b ERROR_NO_SUCH_DOMAIN

Looking for direction…

Hello @Raj909
Did you find a solution? I am facing the same issue logging in to the intranet. IWA is not working and web access presents an NTLM prompt. The user is correctly validated using their domain credentials, but SSO is not working.


Hi @ddiez, no I haven’t found a solution yet. It could be attributed to missing SRV records in the remote domain as the following commands fail over ZPA -

nltest /dsgetdc:remotedomain
nltest /dclist:remotedomain
nslookup -type=SRV _ldap._tcp.remotedomain

Hi @Raj909 ,
confirm that the DNS servers on your App Connector are correctly configured. In my lab the App Connector had DNS wrongly configured and could not resolve the records nltest needs. Adding my DC/DNS server to /etc/resolv.conf fixed my issue with “Getting DC name failed: Status = 1355 0x54b ERROR_NO_SUCH_DOMAIN”.

Not sure if I’m replying to the right question, but next to *.remote.domain you need to configure remote.domain without the wildcard to resolve the SRV records, as I discovered. Not sure if this is by design, but DFS worked fine after this.

Hello Marco,
but if we have a wildcard domain *.remote.domain (ports 1-52, 54-65535), why do we need to add “remote.domain” without the wildcard? I would guess the SRV records should match the wildcard app segment?

I had someone ask for a run through of what happens if you set Active Directory up incorrectly.

Let me try to extrapolate an example:

  • App Connectors in Washington DC
  • App Connectors in Arkansas
  • App Connectors in California
  • App Connectors in Florida
  • ServerGroup for each of the above

We have put each region of domain controllers in an app segment that is associated with the closest ZPA Connector

  • App Segment for WDC - Contains dc1, dc2, dc3 - WDC ServerGroup
  • App Segment for Arkansas - Contains dc4, dc5, dc6 - Arkansas ServerGroup
  • App Segment for Cali - Contains dc7, dc8, dc9 - Cali ServerGroup
  • App Segment for Florida - contains dc10, dc11, dc12 - Florida Servergroup
  • App Segment for Wildcard - i.e. *.domain.local - Unsure which servergroup, but largely irrelevant at some point

So - here’s what happens.

Client performs SRV lookup _ldap._tcp.domain.local - this hits the wildcard segment, the lookup is performed, and the answer is returned:

; <<>> DiG 9.10.6 <<>> SRV _ldap._tcp.domain.local
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc1.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc2.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc3.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc4.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc5.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc6.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc7.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc8.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc9.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc10.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc11.domain.local.
_ldap._tcp.domain.local. 600 IN SRV 0 100 389 dc12.domain.local.

Client then picks one (or two) at random from the list and connects to it using CLDAP (UDP/389). The query basically asks: what is the closest domain controller for me, based on my source IP?

So - whether user is in Florida, Cali, Alaska, etc - they will all do this. All users get the same list back. All users will perform the same random selection and connect to that server on CLDAP and issue the same query.

So - Florida user could try DC7 and DC8 - which are only available via Cali ServerGroup, and therefore from the Cali App Connectors.

Once the request is made - the server sees the source IP as “Cali App Connector” and therefore user is in “SITE=CALI” for subsequent domain operations.

What is the fix? It is imperative that the Active Directory Segment(s) containing the Domain Controllers are associated with a ServerGroup which uses ALL App Connectors.

i.e. ServerGroup = “ALL APP Connectors” contains WDC App Connector Group, Arkansas App Connector Group, California App Connector Group, Florida App Connector Group.

What then happens - User performs the same SRV lookup.

Florida user tries to connect to DC7 and DC8. User picks shortest path to App Connector = Florida.

DC7 Connection from Florida App Connector. DC7 sees source IP=Florida and returns “SITE=FLORIDA” and then the list of Domain Controllers = dc10, dc11, dc12. Client then connects to DC10 and receives GPO, Kerberos, etc from there.
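
The before/after behaviour can be sketched as follows. The connector and server-group names are hypothetical stand-ins for the example above; the point is that a DC is only reachable via connectors in the server group tied to its app segment, so the segment’s server group decides which source IP the DC sees:

```python
# Sketch: which connector carries traffic to a DC depends on the server
# group associated with the DC's app segment. Names are hypothetical.
SEGMENT_CONNECTORS = {
    # Misconfigured: segment containing dc7-dc9 uses the Cali group only.
    "dc7.domain.local": ["cali-connector"],
    # DCs not listed here use the fixed "ALL App Connectors" server group.
}

ALL_CONNECTORS = ["wdc-connector", "arkansas-connector",
                  "cali-connector", "florida-connector"]

def egress_connector(dc, user_nearest):
    """Use the user's nearest connector if the DC's server group allows it."""
    allowed = SEGMENT_CONNECTORS.get(dc, ALL_CONNECTORS)
    return user_nearest if user_nearest in allowed else allowed[0]

# Florida user, misconfigured segment: traffic egresses in California,
# so the DC sees a Cali source IP and returns SITE=CALI.
print(egress_connector("dc7.domain.local", "florida-connector"))
# After associating the segment with ALL connectors, the same user
# egresses locally and the DC returns SITE=FLORIDA.
print(egress_connector("dc10.domain.local", "florida-connector"))
```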

Hello @mryan - the Python script above, executed on 22.48.2, returns the error below. Not sure if that is due to a change in Python version, but I am having long logon issues and testing this from the App Connector would be nice.

[root@zpa-connector admin]# ./python.py
Traceback (most recent call last):
  File "./python.py", line 23, in <module>
    import socket,subprocess,os,pyasn1,ldap,srvlookup,sys
ImportError: No module named pyasn1

I’ve updated the code to include the dependencies needed on a fresh AppConnector.
Code is also available here ZPAScripts/zpa-ad-check.py at master · thewelshgeek/ZPAScripts · GitHub
