Scripted downloads and tests suffer from Zscaler's poor usability

It’s surprising that the forum posts do not appear to mention a developer’s point of view. Scripted downloads in testing or continuous-integration pipelines fail because of two levels of interference by the enterprise-wide routing through Zscaler:

  • Zscaler rewriting HTTPS. This can be worked around by adding the Zscaler Certificate Authority to each tool’s trust store, but modifying and maintaining the scripts costs many hours.
  • Zscaler answering contentious decrypted requests with a 307 redirect to the “banned” message or to the “are you sure” dialog. Avoiding or working around this requires far more effort, at a level of scripting often unavailable to mere mortal developers.

Zscaler as a product appears to be aimed at organizations full of personal machines or VPN users whose work involves browsing. It appears extremely hostile to modern organizations that turn their eye to quick prototyping and development. Zscaler’s meddling with automated downloads and other HTTP requests could be avoided if Zscaler’s product development considered the following usability fixes.

  • Avoid HTTPS decryption and MitM rewrites for destination IP addresses and Server Name Indications found in an enterprise-wide white list. The white list can, by default, include a number of modern package repositories such as Microsoft Gallery, Maven Central and other Maven repositories, NodeJS Package Manager Registry, Docker Hub, Linux distributions etc. (The scenario of unintended downloads of malicious packages needs to be handled at a higher level involving setting up trusted build environments and human code reviews at each of the package repositories).
  • The most developer-friendly approach would enable traffic inspection only through enterprise-enforced opt-in of the desktop browsers. For example, each enterprise-approved desktop browser could receive a policy update installing an open-source plugin (authored by Zscaler, I hope) that would accompany HTTPS requests with an unencrypted tag or a GUID indicating a request for Zscaler decryption and inspection. The plugin should tag only those HTTP requests that are initiated by the browser as document (frame) loads.
  • The following option is the least effort for Zscaler product development, but it is not as good for the developers at whom the product is thrown by their upper management: decrypt HTTPS (still requiring developers to carry the Zscaler CA), but inspect the decrypted contents and bring up the “are you sure” redirects or bans only when the decrypted requests have User-Agent headers resembling known browsers.
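
The third option could be as simple as a User-Agent heuristic applied to the decrypted request. A minimal sketch (the token list is illustrative, not Zscaler's actual logic):

```python
import re

# Heuristic: treat the request as an interactive browser page load only if
# the User-Agent carries typical browser tokens. Illustrative, not exhaustive.
BROWSER_UA = re.compile(r"Mozilla/.*(Chrome|Firefox|Safari|Edg)/", re.I)

def should_redirect_to_prompt(user_agent: str) -> bool:
    """True if the "are you sure" redirect can safely be shown to this client."""
    return bool(BROWSER_UA.search(user_agent or ""))
```

Tools like curl, Maven or npm would then fall through to a plain allow/deny decision instead of receiving an HTML dialog they cannot render.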

I forgot to mention another concern: API requests and embedded downloads of scripts, images, fonts and style sheets from a browser page also suffer from Zscaler’s bans and redirects to dialogs. Instead of being shown in the window, the rewritten contents break the logic of modern Single Page Application frameworks.
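
To illustrate the breakage: the SPA’s API client receives an HTML interstitial where it expects JSON. A defensive client can at least fail with a clear error instead of a confusing parse failure (a hypothetical sketch of the client side, not a fix for the interception itself):

```python
import json

def parse_api_response(status: int, content_type: str, body: str):
    """Parse an API reply, failing loudly when a proxy interstitial
    (redirect or HTML "are you sure" page) arrives instead of JSON."""
    if status in (302, 307) or "text/html" in (content_type or ""):
        raise RuntimeError("proxy interstitial instead of API response")
    return json.loads(body)
```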

  • A browser plugin or an analysis of request headers could tell whether the request is a direct document load that can safely be answered with a redirect to the “are you sure” prompt.
  • Barring smart detection, a simpler browser plugin could maintain a feedback channel with Zscaler that lets it redraw the page when any suspicious material is loaded, through a document load or otherwise.

Hi @ilgiz, thanks for your post.

First up, your organisation’s Zscaler administrators can bypass destinations from SSL inspection; this is documented here:

A similar process can be used for bypassing Authentication, thus eliminating the 307 redirect (which usually leads to a SAML auth page, easy to handle in-browser or on Zscaler Client Connector).

Also, you may find this post useful: it documents the procedures for many common dev tools which may need the Zscaler certificate installed. We’re always keen to get more references on how to configure CLI- and API-based apps to work nicely with inspected flows.

Hope this helps,

  • It’s great to see documentation on bypassing Zscaler at the admin and user level. But the user-level bypass needs to apply to many other scenarios, such as the NodeJS “npm install” command, Maven builds and so on. Eventually and ideally, automatic scripting can take care of every CA bundle used by a build process, but writing that scripting takes hours away from many developers on many projects (assuming they want to automate the builds on their own laptops before letting the enterprise build servers replicate that).
  • The redirect bypass documentation did not appear in the answer.
  • The Zscaler admin documentation betrays the profile of consumers the product is targeted at. Even the “system development” category does not mention the NuGet “Microsoft Gallery”, the Sonatype “Maven Central”, other Maven repositories (Adobe), the NodeJS “package manager” registry, Python, Docker Hub and, I am not afraid to say, GitHub. The admin option documentation does not specify whether Zscaler applies the bypass after its man-in-the-middle, which would still require carrying the Zscaler CA through every build. I hope the admin bypass resolves each allowed URL’s hostname into IPv4 and IPv6 addresses and matches both the destination addresses and the Server Name Indication strings in the incoming packets against the white list, exempting those URLs from the man-in-the-middle.
  • My product development ideas aim at keeping the product within the enterprise while narrowing its use to casual browsing. Blocking access to malicious downloads can only be done when the URLs or fingerprints of malicious code are known, and only assuming the download was not zipped or encrypted. Malicious code’s attacks become known only after their success, at which point blocking by URL and fingerprint has a smaller effect. That is why I assume that abandoning the scanning of downloads, by making Zscaler an opt-in browser plugin (possibly enforced by enterprise update policies), would be a sensible trade-off compared to the hours of reverse-engineering download failures in a developer’s life. Instead of this weak detection mechanism, one may hope that a big prevention effort on the part of package repository owners (setting up automatic builds from source code only, enforcing human code reviews) could address the risk.
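
The SNI/IP matching I hope for could, in sketch form, look like this (the host names and structure are my assumptions for illustration, not Zscaler’s implementation or an official list):

```python
# Illustrative allow list of package-repository hosts the post proposes
# exempting from TLS interception; the names are assumptions.
BYPASS_SNI = {
    "registry.npmjs.org", "repo.maven.apache.org",
    "registry-1.docker.io", "pypi.org", "github.com",
}

def should_bypass(sni: str, dest_ip: str, allowed_ips: set) -> bool:
    """Exempt a flow from the man-in-the-middle when either the Server Name
    Indication or the resolved destination IP matches the white list."""
    return sni in BYPASS_SNI or dest_ip in allowed_ips
```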

For “npm install” you need to point npm at the root CA; npm uses its own certificate store rather than the system one, which causes the issue. Java-based tools such as Maven fail for the same reason and need the root CA imported into the Java cacerts file.
On the other hand, your org can use PAC files that will only pick up browser traffic and not CLI traffic.
We had the same issue in our organisation, where we had to bypass certain websites: they didn’t use cert pinning but were still having issues with Zscaler SSL inspection, and we had to add the Zscaler root cert to the cacerts file.
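
For the Java cacerts part, the import is typically done with keytool. A hedged sketch that builds the command (the alias and the Java 9+ keystore layout below are assumptions; adjust for your JDK):

```python
import os

def keytool_import_cmd(ca_path, java_home=None, storepass="changeit"):
    """Build the keytool invocation that adds a CA to Java's cacerts store.

    "changeit" is the well-known default store password; the alias is
    arbitrary and just needs to be unique within the keystore.
    """
    java_home = java_home or os.environ.get("JAVA_HOME", "/usr/lib/jvm/default")
    keystore = os.path.join(java_home, "lib", "security", "cacerts")
    return [
        "keytool", "-importcert", "-trustcacerts", "-noprompt",
        "-alias", "zscaler-root",
        "-file", ca_path,
        "-keystore", keystore,
        "-storepass", storepass,
    ]
```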

Thanks Sudeep, great to hear support for the concern. (I think manipulating client proxy settings does not remove the enterprise-level routing via Zscaler once the employee connects to the VPN, or even before that, due to enterprise-enforced routing on the laptop or due to ZPA.)

By the way, someone needs to address a minor vulnerability in ZPA configuring the proxy via plain-text links. In a pre-VPN state, when a victim opens their laptop in a coffee shop and connects to a malicious or breached Wi-Fi hotspot, the PAC file can be intercepted and rewritten, causing, for example, the browser to not use Zscaler pre-VPN. To remediate, the plain-text links need changing to HTTPS ones, unless I am missing other safeguards. The domain does not have “preload” standing in the “strict transport security” (HSTS) context either.
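
A client-side sanity check for this concern could be as small as refusing non-HTTPS PAC URLs (a sketch of the idea; real enforcement would belong in ZPA itself):

```python
from urllib.parse import urlparse

def pac_url_is_tamper_resistant(url: str) -> bool:
    """A PAC file fetched over plain HTTP can be rewritten by a hostile
    hotspot; require HTTPS so the fetch is authenticated and encrypted."""
    return urlparse(url).scheme == "https"
```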