Naming Standards for Splunk Objects

Every software architect or developer knows the importance of naming conventions, but this may not be the case for business users, admins, or project managers. Hence, as a Splunk administrator, my first job is to define the naming conventions for Splunk objects.

Splunk Objects

What do we mean by Splunk objects? A Splunk object is any configuration item that is under the control of your team: for example, a dashboard you created, a custom Splunk app, or a report. For Splunk objects over which you do not have direct control, the naming conventions may not apply. So the conventions apply only to objects that are created by you or your team and over which you have FULL control. Ensure you are well familiar with Splunk configuration file precedence, because the naming convention should be built upon it.
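As a quick refresher on that precedence, here is a minimal sketch of how it plays out within a single app (the app name, sourcetype and setting are hypothetical): a setting in the app's local/ directory always overrides the same setting in its default/ directory.

    # $SPLUNK_HOME/etc/apps/A_custom_ports/default/props.conf
    [my:custom:sourcetype]
    TRUNCATE = 10000

    # $SPLUNK_HOME/etc/apps/A_custom_ports/local/props.conf
    # local/ overrides default/, so the effective value is 20000
    [my:custom:sourcetype]
    TRUNCATE = 20000

You can verify the effective value of any setting with btool, e.g. "splunk btool props list my:custom:sourcetype --debug".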

Naming Conventions and Methodology

  • Check whether your organisation already has a naming convention methodology, and align with it if so.
  • Use camelCase (not underscores) for variables within code (e.g. myFirstVariable).
  • Use camelCase plus underscores instead of spaces for filenames (e.g. this_is_a_fileName.xml).
  • Use your company's stock ticker symbol as an abbreviation for your company name (e.g. IBM, AAPL). This is helpful for prefixing your custom objects.
  • The apps you create SHOULD start with a capital letter (e.g. A_custom_ports). This is an exception to using camelCase everywhere. The reason: in Search Head Clustering the deployer pushes your app's contents into the app's default directory, and within default directories precedence is decided by the lexicographic (ASCII) order of app names, where uppercase sorts before lowercase. A camelCase app name would therefore lose to apps such as "Splunk_TA_*" that start with a capital "S". I always start my app names with A_, which ensures they always get priority (see the props.conf sketch below).
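To see why the A_ prefix matters, here is a minimal props.conf sketch (the second app name, the sourcetype and the setting are hypothetical). When the same stanza exists in the default directory of two apps, the app whose name sorts first in ASCII order wins:

    # $SPLUNK_HOME/etc/apps/Splunk_TA_example/default/props.conf
    [my:custom:sourcetype]
    SHOULD_LINEMERGE = true

    # $SPLUNK_HOME/etc/apps/A_custom_ports/default/props.conf
    # "A_custom_ports" sorts before "Splunk_TA_example", so this value wins
    [my:custom:sourcetype]
    SHOULD_LINEMERGE = false

Had the custom app been given a lowercase or camelCase name (e.g. myCustomPorts), it would have sorted after "Splunk_TA_example" and lost.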

Key objects & configuration items (examples in brackets)

  • Coding variables: (myDashboardRefreshInterval)
  • App names: (A_custom_ports)
  • Reports: <yourcompany>_rp_<Platform/Device/Service>_<Category>_<TimeInterval>_<Description> (mc_rp_ops_scheduled_24h_batchUpload.xml)
  • Dashboards: <yourcompany>_db_<Platform/Device/Service>_<Category>_<Description> (mc_db_ops_management_serverFailures.xml)
  • Alerts: <yourcompany>_al_<Platform/Device/Service>_<Category>_<TimeInterval>_<Description> (mc_al_ops_tivoli_10m_userFailure.xml)
  • SavedSearches: <yourcompany>_ss_<Platform/Device/Service>_<Category>_<TimeInterval>_<Description> (mc_ss_ops_postProcess_10m_cmdbItems.xml) (see the savedsearches.conf sketch below)
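As a concrete illustration, here is a minimal savedsearches.conf sketch showing how the report and alert examples above would appear as stanza names (the index, sourcetype, search and email address are hypothetical, and the stanza names drop the .xml suffix shown in the list):

    # $SPLUNK_HOME/etc/apps/A_custom_ports/local/savedsearches.conf

    # Report: scheduled once a day, covering a 24h window
    [mc_rp_ops_scheduled_24h_batchUpload]
    search = index=ops sourcetype=batch:upload | stats count by status
    dispatch.earliest_time = -24h
    dispatch.latest_time = now
    cron_schedule = 0 6 * * *
    enableSched = 1

    # Alert: evaluated every 10 minutes, emails the team when events are found
    [mc_al_ops_tivoli_10m_userFailure]
    search = index=ops sourcetype=tivoli "user failure" | stats count by user
    dispatch.earliest_time = -10m
    cron_schedule = */10 * * * *
    enableSched = 1
    alert_type = number of events
    alert_comparator = greater than
    alert_threshold = 0
    action.email = 1
    action.email.to = ops-team@example.com

The dashboard example (mc_db_ops_management_serverFailures.xml) would instead live as a Simple XML file under the app's local/data/ui/views/ directory.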



