Splunk Integration with Active Directory/LDAP

Most companies want to integrate their Splunk installation with a centralised authentication system. The main article in the Splunk docs describes the integration concisely, but this article works through it in a practical manner, including the configuration code.

LDAP/Active Directory: Purpose of Integration

  • To authenticate users via Active Directory (AD)
  • To map users to roles
  • To centralise management of users and roles
  • To collect identity lists from Active Directory

Modular App(s)

I always tend to create a dedicated app for each piece of functionality. For this integration, the app I would create is something like:
  • A_prod_ldap_auth  (the naming convention indicates an app for the PROD environment, providing LDAP authentication)

Contents of the app

Authentication and authorisation are configured mainly through two conf files:
  • authentication.conf  - configures authentication against LDAP
  • authorize.conf  - configures roles and granular access controls
Both files can be configured, customised, and placed in the A_prod_ldap_auth app to keep the integration isolated. You can then deploy this app to your Splunk deployment (standalone or clustered) and it will work like a charm.
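As a sketch, the packaged app could look like the layout below (the directory names follow standard Splunk app conventions; the app name is the one assumed in this article):

```
A_prod_ldap_auth/
├── default/
│   ├── app.conf              # app metadata (label, version)
│   ├── authentication.conf   # LDAP strategy and role mapping
│   └── authorize.conf        # custom roles and capabilities
└── metadata/
    └── default.meta          # export/permission settings for the app
```

Keeping the conf files under default/ (rather than editing $SPLUNK_HOME/etc/system/local/) is what makes the integration portable across standalone and clustered deployments.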

Working Code Example
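A minimal sketch of the two conf files is below. All hostnames, DNs, passwords, and group names are placeholders for an assumed example.com AD domain; substitute your own values. The setting names themselves are standard Splunk authentication.conf/authorize.conf settings.

```ini
# --- authentication.conf (in A_prod_ldap_auth/default/) ---
# All values below are placeholders; adjust for your AD environment.

[authentication]
authType = LDAP
authSettings = prod_ad

[prod_ad]
host = dc01.example.com
port = 636
SSLEnabled = 1
bindDN = CN=splunk-bind,OU=Service Accounts,DC=example,DC=com
bindDNpassword = changeme
userBaseDN = OU=Users,DC=example,DC=com
userBaseFilter = (objectclass=user)
userNameAttribute = sAMAccountName
realNameAttribute = displayName
emailAttribute = mail
groupBaseDN = OU=Groups,DC=example,DC=com
groupBaseFilter = (objectclass=group)
groupMemberAttribute = member
groupNameAttribute = cn
nestedGroups = 1

# Map AD groups to Splunk roles (strategy name must match the stanza above)
[roleMap_prod_ad]
admin = Splunk-Admins
user = Splunk-Users

# --- authorize.conf (in A_prod_ldap_auth/default/) ---
# Example custom role, inheriting from the built-in user role.

[role_network_analyst]
importRoles = user
srchIndexesAllowed = network
srchIndexesDefault = network
srchJobsQuota = 5
```

After deploying the app, restart Splunk (or reload authentication from Settings » Access controls) and verify the mapping under Settings » Access controls » Authentication method.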
