Zscaler Splunk App - Design and Installation documentation

The search query for the Tunnel Status panel of the app seems to be incorrect. I’ve changed the record types based on what Splunk returns in the search results, and now the panel is properly populated. The modified query looks like this:

index=zscaler sourcetype="zscalernss-tunnel" (Recordtype="Tunnel Event" OR Recordtype="IPSec Phase2" OR Recordtype="IPSec Phase1") | eval event=if(Recordtype=="Tunnel Event", event, Recordtype) | table _time,event,location

Thanks, I thought I had patched this! I’ll work it into the next update.

At this point I’ve got Web, Tunnel, and Audit working great, but I have yet to get Sandbox API reporting to work. The process that gathers the MD5s and saves them into a CSV file doesn’t seem to be working. It is creating a file called zscaler-md5-lookup.csv. I noticed there is a file called md5.csv in the app’s install directory, but it doesn’t get updated. Since the Audit API is working, I would think all my API credentials are OK. This is Case 02371860 from 3/25.

Is the TA running on a search head? It needs to be running on the search head to find pending detonations and populate the MD5 list.
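For illustration, that population step is essentially a scheduled search that writes matching hashes out to the lookup, roughly like this (a hypothetical sketch; the saved search shipped with the TA and its field names may differ):

index=zscaler sourcetype=zscalernss-web md5=* | dedup md5 | table _time, md5 | outputlookup zscaler-md5-lookup.csv

Because outputlookup writes to the search head’s lookup directory, this only populates the MD5 list if the search runs there.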

We are on Splunk Cloud, so it’s running on the Heavy Forwarder. I’ll work with Splunk to get this onto the search head.

On the TA there is a problem with the zscalernss-web sourcetype.

[zscalernss-web]
EVAL-app = Zscaler

You need double quotes for the field to be created with a literal string value. Without them, the EVAL copies the value of a field named Zscaler, which isn’t what’s intended.

[zscalernss-web]
EVAL-app = "Zscaler"
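A quick way to verify the literal is being applied (assuming the zscaler index used elsewhere in this thread):

index=zscaler sourcetype=zscalernss-web | stats count by app

Every event should now report app=Zscaler rather than an empty field.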

On the other hand, this field isn’t really compliant with the Web data model: https://docs.splunk.com/Documentation/CIM/4.15.0/User/Web

That documentation defines app as: “The application detected or hosted by the server/site such as WordPress, Splunk, or Facebook.”

In our environment I have commented out EVAL-app = Zscaler and created this field alias:

[zscalernss-web]
FIELDALIAS-appname_as_app = appname AS app
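To check how that alias lands in the data model, something like this should show the Zscaler application names in the CIM app field (a hedged example; it assumes the events are tagged into the Web data model):

| datamodel Web Web search | stats count by Web.app

The alias approach also leaves the original appname field untouched for any Zscaler-specific dashboards.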

Hi Zscaler Splunk Support team,

We have installed the Zscaler Add-on on a Splunk Heavy Forwarder. The Zscaler LSS log receiver is on the same subnet as the Splunk HF.

However, we have seen that messages are missing when we run search queries in Splunk. A tcpdump on the LSS log receiver shows the messages have been sent to the HF.

We are using the Data Input TCP option.

Have you experienced any similar issue in a production environment? We have several thousand active users going over ZPA. Are there any knobs we need to tune to ensure Splunk can keep up with the ingest volume?
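For reference, a raw TCP input for an LSS feed generally looks like this in inputs.conf (a sketch; the port, index, and sourcetype here are placeholders). A persistent queue is one of the few knobs on the input side that helps when indexing briefly falls behind:

[tcp://5514]
sourcetype = zscalerlss-zpa-app
index = zscaler
connection_host = ip
persistentQueueSize = 100MB

persistentQueueSize buffers bursts to disk instead of dropping them, but sustained throughput still depends on overall indexer sizing.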

Hi Jane, welcome to Zscaler Community!


If you’re using our add-on, all the settings we could tweak should already be set up (the sourcetypes have most of these fields set).

That said, we’re not experts when it comes to Splunk sizing and other elements of Splunk configuration. Per your note, LSS is transporting the logs, which is expected. I’d suggest you ask the same question on the Splunk Community (or support); they may have more thoughts on what to look for at the broader Splunk infrastructure level.

Cheers,
@skottieb

@skottieb Thanks for the information. It turned out to be a time formatting issue.

We just installed version 3.0.2 of the Add-on. Our Splunk environment is behind a (Zscaler) proxy. The release notes for version 3 from 10th Aug 2020 say that proxy settings are enabled via configuration (“Enabled Proxy Settings in TA configuration”). This did not work in our environment, and I don’t see any obvious code in the TA that reads the proxy configuration. Is this still a work in progress? We also noticed that the default audit log input is activated by default and produces Python errors.
I deleted the default input, applied the feature request joe0815 posted on January 8th, and the input started working.

@skottieb Is there something broken with the proxy configuration in the TA, or could you explain how to apply it properly?

Regards
Chris
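For what it’s worth, add-ons generated with Splunk Add-on Builder normally read proxy settings from the add-on’s settings conf file, along these lines (an assumption based on Add-on Builder conventions; the exact file and keys this TA uses are unverified):

[proxy]
proxy_enabled = 1
proxy_type = http
proxy_url = proxy.example.com
proxy_port = 8080

That stanza would typically live in local/ta_zscaler_cim_settings.conf and is usually written by the add-on’s Configuration > Proxy page rather than edited by hand.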

This is a sample error caused by the default input:
2020-10-07 13:29:43,881 ERROR pid=22334 tid=MainThread file=base_modinput.py:log_error:309 | Traceback (most recent call last):
File "/opt/splunk/etc/apps/TA-Zscaler_CIM/bin/ta_zscaler_cim/aob_py3/modinput_wrapper/base_modinput.py", line 113, in stream_events
self.parse_input_args(input_definition)
File "/opt/splunk/etc/apps/TA-Zscaler_CIM/bin/ta_zscaler_cim/aob_py3/modinput_wrapper/base_modinput.py", line 154, in parse_input_args
self._parse_input_args_from_global_config(inputs)
File "/opt/splunk/etc/apps/TA-Zscaler_CIM/bin/ta_zscaler_cim/aob_py3/modinput_wrapper/base_modinput.py", line 173, in _parse_input_args_from_global_config
ucc_inputs = global_config.inputs.load(input_type=self.input_type)
File "/opt/splunk/etc/apps/TA-Zscaler_CIM/bin/ta_zscaler_cim/aob_py3/splunktaucclib/global_config/configuration.py", line 281, in load
self._references,
File "/opt/splunk/etc/apps/TA-Zscaler_CIM/bin/ta_zscaler_cim/aob_py3/splunktaucclib/global_config/configuration.py", line 302, in _reference
configs
File "/opt/splunk/etc/apps/TA-Zscaler_CIM/bin/ta_zscaler_cim/aob_py3/splunktaucclib/global_config/configuration.py", line 334, in _input_reference
config_name=config_name
splunktaucclib.global_config.configuration.GlobalConfigError: Config Not Found for Input, input_type=zscaler_audit_logs, input_name=ZDEMO_Audit_Beta, config_type=account, config_name=ZDEMO_Beta

Dan, did you get this resolved?

No. Nine months later this is still broken. Our MSSP, Proficio, has reached out to Scott to see what is needed now that we have an IDM running in Splunk Cloud, yet we still can’t get Sandbox reporting working. There seems to be an impasse.

My customer appears to be populating the CSV, but not making the API calls to fetch the MD5 reports. I will pass along anything I learn.

We have the opposite issue, I think. We have the API running on the IDM, but we can’t get the stored query to run on the IDM to produce the CSV. Splunk support says that has to run on the search head.

Is there a trick to getting the parsing working? I’m sending logs as JSON and have the TA installed on my indexer, explicitly specifying sourcetype = zscalerlss-zpa-app in inputs.conf, but the fields aren’t being parsed out.
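One thing to check: if the TA’s extractions for these sourcetypes are search-time (which is typical), the TA also needs to be installed on the search head; having it only on the indexer won’t produce search-time fields. A minimal search-time props.conf for a JSON feed would look like this (a sketch, assuming the LSS feed delivers one JSON object per event):

[zscalerlss-zpa-app]
KV_MODE = json

If you need fields extracted at index time instead, INDEXED_EXTRACTIONS = json on the forwarder/indexer is the alternative, at the cost of index size.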

The Deployment Guide says to set up four LSS log source types in Splunk: “Auth”, “Access”, “Browser Access”, and “Connector”. However, when setting up the LSS Log Streams in Zscaler, there are five log types: “User Activity”, “User Status”, “App Connector Status”, “Browser Access”, and “Audit Logs”. How do those line up?

A while back we renamed the log feeds, but the sourcetype names are embedded in some SPL (they should be macros, but that isn’t true for any custom work done outside of what’s released in the TA/App).

Long story short:
User Activity = zscalerlss-zpa-app
User Status = zscalerlss-zpa-auth
Connector Status = zscalerlss-zpa-connector

Further descriptors can be found here:
https://help.zscaler.com/zpa/configuring-log-receiver

This leaves zscalerlss-casb as the ‘Audit Logs’.
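If you want your own SPL to survive future renames, the macro approach mentioned above is simple to set up (a hypothetical sketch; these macro names aren’t shipped with the TA or App):

[zpa_user_activity]
definition = sourcetype=zscalerlss-zpa-app

[zpa_user_status]
definition = sourcetype=zscalerlss-zpa-auth

Searches then reference `zpa_user_activity` instead of the raw sourcetype, so a future rename only requires updating macros.conf.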

Is ZWS going to be integrated into this app, or will it have its own app?