Which of the following security options must be explicitly configured (i.e. which options are not enabled by default)?
Data encryption between Splunk Web and splunkd.
Certificate authentication between forwarders and indexers.
Certificate authentication between Splunk Web and search head.
Data encryption for distributed search between search heads and indexers.
The following security option must be explicitly configured, as it is not enabled by default:
Certificate authentication between forwarders and indexers. This option allows the forwarders and indexers to verify each other’s identity using SSL certificates, which prevents unauthorized data transmission and spoofing attacks. It is not enabled by default because the administrator must generate and distribute the certificates to the forwarders and indexers. For more information, see [Secure the communication between forwarders and indexers] in the Splunk documentation.
The following security options are enabled by default:
Data encryption between Splunk Web and splunkd. This option encrypts the communication between the Splunk Web interface and the splunkd daemon using SSL, which prevents data interception or tampering. This option is enabled by default, as Splunk provides a self-signed certificate for this purpose. For more information, see [About securing Splunk Enterprise with SSL] in the Splunk documentation.
Certificate authentication between Splunk Web and search head. This option allows the Splunk Web interface and the search head to verify each other’s identity using SSL certificates, which prevents unauthorized access or spoofing attacks. This option is enabled by default, as Splunk provides a self-signed certificate for this purpose. For more information, see [About securing Splunk Enterprise with SSL] in the Splunk documentation.
Data encryption for distributed search between search heads and indexers. This option encrypts the communication between the search heads and the indexers using SSL, which prevents data interception or tampering. This option is enabled by default, as Splunk provides a self-signed certificate for this purpose. For more information, see [Secure your distributed search environment] in the Splunk documentation.
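For illustration, a minimal sketch of what explicitly configuring certificate authentication between a forwarder and an indexer can look like; the hostnames, certificate paths, password placeholder, and port 9997 are assumptions, not values taken from the documentation:
inputs.conf on the indexer:
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer.pem
sslPassword = <certificate password>
requireClientCert = true
outputs.conf on the forwarder:
[tcpout:primary_indexers]
server = idx1.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/forwarder.pem
sslPassword = <certificate password>
sslVerifyServerCert = true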
When designing the number and size of indexes, which of the following considerations should be applied?
Expected daily ingest volume, access controls, number of concurrent users
Number of installed apps, expected daily ingest volume, data retention time policies
Data retention time policies, number of installed apps, access controls
Expected daily ingest volumes, data retention time policies, access controls
When designing the number and size of indexes, the following considerations should be applied:
Expected daily ingest volumes: This is the amount of data that will be ingested and indexed by the Splunk platform per day. This affects the storage capacity, the indexing performance, and the license usage of the Splunk deployment. The number and size of indexes should be planned according to the expected daily ingest volumes, as well as the peak ingest volumes, to ensure that the Splunk deployment can handle the data load and meet the business requirements.
Data retention time policies: This is the duration for which the data will be stored and searchable by the Splunk platform. This affects the storage capacity, the data availability, and the data compliance of the Splunk deployment. The number and size of indexes should be planned according to the data retention time policies, as well as the data lifecycle, to ensure that the Splunk deployment can retain the data for the desired period and meet the legal or regulatory obligations.
Access controls: This is the mechanism for granting or restricting access to the data by the Splunk users or roles. This affects the data security, the data privacy, and the data governance of the Splunk deployment. The number and size of indexes should be planned according to the access controls, as well as the data sensitivity, to ensure that the Splunk deployment can protect the data from unauthorized or inappropriate access and meet the ethical or organizational standards.
Option D is the correct answer because it reflects the most relevant and important considerations for designing the number and size of indexes. Option A is incorrect because the number of concurrent users is not a direct factor for designing the number and size of indexes, but rather a factor for designing the search head capacity and the search head clustering configuration. Option B is incorrect because the number of installed apps is not a direct factor for designing the number and size of indexes, but rather a factor for designing the app compatibility and the app performance. Option C is incorrect because it omits the expected daily ingest volumes, which is a crucial factor for designing the number and size of indexes.
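As a hedged illustration of how these considerations translate into indexes.conf, the index name and values below are placeholders chosen to show the retention and size controls, not recommended settings:
[firewall]
homePath = $SPLUNK_DB/firewall/db
coldPath = $SPLUNK_DB/firewall/colddb
thawedPath = $SPLUNK_DB/firewall/thaweddb
# roll data to frozen after roughly 90 days
frozenTimePeriodInSecs = 7776000
# cap the total size of the index on disk (MB)
maxTotalDataSizeMB = 500000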
(What is the best way to configure and manage receiving ports for clustered indexers?)
Use Splunk Web to create the receiving port on each peer node.
Define the receiving port in /etc/deployment-apps/cluster-app/local/inputs.conf and deploy it to the peer nodes.
Run the splunk enable listen command on each peer node.
Define the receiving port in /etc/manager-apps/_cluster/local/inputs.conf and push it to the peer nodes.
According to the Indexer Clustering Administration Guide, the most efficient and Splunk-recommended way to configure and manage receiving ports for all clustered indexers (peer nodes) is through the Cluster Manager (previously known as the Master Node).
In a clustered environment, configuration changes that affect all peer nodes—such as receiving port definitions—should be managed centrally. The correct procedure is to define the inputs configuration file (inputs.conf) within the Cluster Manager’s manager-apps directory. Specifically, the configuration is placed in:
$SPLUNK_HOME/etc/manager-apps/_cluster/local/inputs.conf
and then deployed to all peers using the configuration bundle push mechanism.
This centralized approach ensures consistency across all peer nodes, prevents manual configuration drift, and allows Splunk to maintain uniform ingestion behavior across the cluster.
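As a brief sketch (port 9997 is simply the conventional receiving port, not a required value), the manager-side file often contains nothing more than:
[splunktcp://9997]
disabled = 0
After editing the file on the Cluster Manager, the bundle is validated and distributed with splunk validate cluster-bundle followed by splunk apply cluster-bundle, which pushes the configuration to every peer node.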
Running splunk enable listen on each peer (Option C) or manually configuring inputs via Splunk Web (Option A) introduces inconsistencies and is not recommended in clustered deployments. Using the deployment-apps path (Option B) is meant for deployment servers, not for cluster management.
References (Splunk Enterprise Documentation):
• Indexer Clustering: Configure Peer Nodes via Cluster Manager
• Deploy Configuration Bundles from the Cluster Manager
• inputs.conf Reference – Receiving Data Configuration
• Splunk Enterprise Admin Manual – Managing Clustered Indexers
When Splunk is installed, where are the internal indexes stored by default?
SPLUNK_HOME/bin
SPLUNK_HOME/var/lib
SPLUNK_HOME/var/run
SPLUNK_HOME/etc/system/default
Splunk internal indexes are the indexes that store Splunk’s own data, such as internal logs, metrics, audit events, and configuration snapshots. By default, Splunk internal indexes are stored in the SPLUNK_HOME/var/lib/splunk directory, along with other user-defined indexes. The SPLUNK_HOME/bin directory contains the Splunk executable files and scripts. The SPLUNK_HOME/var/run directory contains the Splunk process ID files and lock files. The SPLUNK_HOME/etc/system/default directory contains the default Splunk configuration files.
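For reference, the internal indexes are defined in the default indexes.conf relative to $SPLUNK_DB, which resolves to SPLUNK_HOME/var/lib/splunk unless overridden. The _internal index, for example, is defined roughly as follows (shown as an illustration, not a copy of the shipped file):
[_internal]
homePath = $SPLUNK_DB/_internaldb/db
coldPath = $SPLUNK_DB/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb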
As a best practice, where should the internal licensing logs be stored?
Indexing layer.
License server.
Deployment layer.
Search head layer.
As a best practice, the internal licensing logs should be stored on the license server. The license server is a Splunk instance that manages the distribution and enforcement of licenses in a Splunk deployment. The license server generates internal licensing logs that contain information about the license usage, violations, warnings, and pools. The internal licensing logs should be stored on the license server itself, because they are relevant to the license server’s role and function. Storing the internal licensing logs on the license server also simplifies the license monitoring and troubleshooting process. The internal licensing logs should not be stored on the indexing layer, the deployment layer, or the search head layer, because they are not related to the roles and functions of these layers. Storing the internal licensing logs on these layers would also increase the network traffic and disk space consumption.
In a distributed environment, knowledge object bundles are replicated from the search head to which location on the search peer(s)?
SPLUNK_HOME/var/lib/searchpeers
SPLUNK_HOME/var/log/searchpeers
SPLUNK_HOME/var/run/searchpeers
SPLUNK_HOME/var/spool/searchpeers
In a distributed environment, knowledge object bundles are replicated from the search head to the SPLUNK_HOME/var/run/searchpeers directory on the search peer(s). A knowledge object bundle is a compressed file that contains the knowledge objects, such as fields, lookups, macros, and tags, that are required for a search. A search peer is a Splunk instance that provides data to a search head in a distributed search. A search head is a Splunk instance that coordinates and executes a search across multiple search peers. When a search head initiates a search, it creates a knowledge object bundle and replicates it to the search peers that are involved in the search. The search peers store the knowledge object bundle in the SPLUNK_HOME/var/run/searchpeers directory, which is a temporary directory that is cleared when the Splunk service restarts. The search peers use the knowledge object bundle to apply the knowledge objects to the data and return the results to the search head. The SPLUNK_HOME/var/lib/searchpeers, SPLUNK_HOME/var/log/searchpeers, and SPLUNK_HOME/var/spool/searchpeers directories are not the locations where the knowledge object bundles are replicated, because they do not exist in the Splunk file system
(A customer has converted a CSV lookup to a KV Store lookup. What must be done to make it available for an automatic lookup?)
Add the repFactor=true attribute in collections.conf.
Add the replicate=true attribute in lookups.conf.
Add the replicate=true attribute in collections.conf.
Add the repFactor=true attribute in lookups.conf.
Splunk’s KV Store management documentation specifies that when converting a static CSV lookup to a KV Store lookup, the lookup data is stored in a MongoDB-based collection defined in collections.conf. To ensure that the KV Store lookup is replicated and available across all search head cluster members, administrators must include the attribute replicate=true within the collections.conf file.
This configuration instructs Splunk to replicate the KV Store collection’s data to all members in the Search Head Cluster (SHC), enabling consistent access and reliability across the cluster. Without this attribute, the KV Store collection would remain local to a single search head, making it unavailable for automatic lookups performed by other members.
Here’s an example configuration snippet from collections.conf:
[customer_lookup]
replicate = true
field.name = string
field.age = number
The attribute repFactor (mentioned in Options A and D) is an indexes.conf setting that controls index replication in an indexer cluster (its valid values are 0 or auto, not true) and has no effect on KV Store behavior. Similarly, replicate=true in lookups.conf (Option B) has no effect, as KV Store replication is controlled exclusively via collections.conf.
Once properly configured, the lookup can be defined in transforms.conf and referenced in props.conf for automatic lookup functionality.
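To make that flow concrete, here is a hedged sketch of the transforms.conf and props.conf side of the automatic lookup, reusing the customer_lookup collection from the snippet above; the sourcetype name and the matched fields are placeholders:
transforms.conf:
[customer_lookup]
external_type = kvstore
collection = customer_lookup
fields_list = _key, name, age
props.conf:
[my_sourcetype]
LOOKUP-customer_info = customer_lookup name OUTPUT age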
References (Splunk Enterprise Documentation):
• KV Store Collections and Configuration – collections.conf Reference
• Managing KV Store Data in Search Head Clusters
• Automatic Lookup Configuration Using KV Store
• Splunk Enterprise Admin Manual – Distributed KV Store Replication Settings
(If the maxDataSize attribute is set to auto_high_volume in indexes.conf on a 64-bit operating system, what is the maximum hot bucket size?)
4 GB
750 MB
10 GB
1 GB
According to the indexes.conf reference in Splunk Enterprise, the parameter maxDataSize controls the maximum size (in GB or MB) of a single hot bucket before Splunk rolls it to a warm bucket. When the value is set to auto_high_volume on a 64-bit system, Splunk automatically sets the maximum hot bucket size to 10 GB.
The “auto” settings allow Splunk to choose optimized values based on the system architecture:
auto: Default hot bucket size of 750 MB.
auto_high_volume: Specifically tuned for high-ingest indexes; on 64-bit systems this equals 10 GB per hot bucket (1 GB on 32-bit systems).
Any other value is interpreted as an explicit maximum bucket size in megabytes.
The purpose of larger hot bucket sizes on 64-bit systems is to improve indexing performance and reduce the overhead of frequent bucket rolling during heavy data ingestion. The documentation explicitly warns that these sizes differ on 32-bit systems due to memory addressing limitations.
Thus, for high-throughput environments running 64-bit operating systems, auto_high_volume = 10 GB is the correct and Splunk-documented configuration.
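A minimal indexes.conf sketch (the index name is a placeholder):
[firewall_logs]
maxDataSize = auto_high_volume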
References (Splunk Enterprise Documentation):
• indexes.conf – maxDataSize Attribute Reference
• Managing Index Buckets and Data Retention
• Splunk Enterprise Admin Manual – Indexer Storage Configuration
• Splunk Performance Tuning: Bucket Management and Hot/Warm Transitions
At which default interval does metrics.log generate a periodic report regarding license utilization?
10 seconds
30 seconds
60 seconds
300 seconds
The default interval at which metrics.log generates a periodic report regarding license utilization is 60 seconds. This report contains information about the license usage and quota for each Splunk instance, as well as the license pool and stack. The report is generated every 60 seconds by default, but this interval can be changed by modifying the license_usage stanza in the metrics.conf file. The other intervals (10 seconds, 30 seconds, and 300 seconds) are not the default values, but they can be set by the administrator if needed. For more information, see About metrics.log and Configure metrics.log in the Splunk documentation.
How does IT Service Intelligence (ITSI) impact the planning of a Splunk deployment?
ITSI requires a dedicated deployment server.
The amount of users using ITSI will not impact performance.
ITSI in a Splunk deployment does not require additional hardware resources.
Depending on the Key Performance Indicators that are being tracked, additional infrastructure may be needed.
ITSI can impact the planning of a Splunk deployment depending on the Key Performance Indicators (KPIs) that are being tracked. KPIs are metrics that measure the health and performance of IT services and business processes. ITSI collects, analyzes, and displays KPI data from various data sources in Splunk. Depending on the number, frequency, and complexity of the KPIs, additional infrastructure may be needed to support the data ingestion, processing, and visualization. ITSI does not require a dedicated deployment server, and the number of ITSI users does affect performance. ITSI also requires additional hardware resources, such as CPU, memory, and disk space, to run the ITSI components and apps.
To activate replication for an index in an indexer cluster, what attribute must be configured in indexes.conf on all peer nodes?
repFactor = 0
replicate = 0
repFactor = auto
replicate = auto
To activate replication for an index in an indexer cluster, the repFactor attribute must be configured in indexes.conf on all peer nodes. This attribute determines whether the cluster replicates the index: setting repFactor = auto enables replication, while the default value of 0 excludes the index from replication. The replicate attribute is not a valid indexes.conf setting. For more information, see Configure indexes for indexer clusters in the Splunk documentation.
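As a short illustration (the index name is a placeholder), enabling replication for an index on every peer looks like the following; omitting the setting leaves the default of repFactor = 0, which keeps the index out of replication:
[my_index]
repFactor = auto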
Why should intermediate forwarders be avoided when possible?
To minimize license usage and cost.
To decrease mean time between failures.
Because intermediate forwarders cannot be managed by a deployment server.
To eliminate potential performance bottlenecks.
Intermediate forwarders are forwarders that receive data from other forwarders and then send that data to indexers. They can be useful in some scenarios, such as when network bandwidth or security constraints prevent direct forwarding to indexers, or when data needs to be routed, cloned, or modified in transit. However, intermediate forwarders also introduce additional complexity and overhead to the data pipeline, which can affect the performance and reliability of data ingestion. Therefore, intermediate forwarders should be avoided when possible, and used only when there is a clear benefit or requirement for them. Some of the drawbacks of intermediate forwarders are:
They increase the number of hops and connections in the data flow, which can introduce latency and increase the risk of data loss or corruption.
They consume more resources on the hosts where they run, such as CPU, memory, disk, and network bandwidth, which can affect the performance of other applications or processes on those hosts.
They require additional configuration and maintenance, such as setting up inputs, outputs, load balancing, security, monitoring, and troubleshooting.
They can create data duplication or inconsistency if they are not configured properly, such as when using cloning or routing rules.
Some of the references that support this answer are:
Configure an intermediate forwarder, which states: “Intermediate forwarding is where a forwarder receives data from one or more forwarders and then sends that data on to another indexer. This kind of setup is useful when, for example, you have many hosts in different geographical regions and you want to send data from those forwarders to a central host in that region before forwarding the data to an indexer. All forwarder types can act as an intermediate forwarder. However, this adds complexity to your deployment and can affect performance, so use it only when necessary.”
Intermediate data routing using universal and heavy forwarders, which states: “This document outlines a variety of Splunk options for routing data that address both technical and business requirements. Overall benefits Using splunkd intermediate data routing offers the following overall benefits: … The routing strategies described in this document enable flexibility for reliably processing data at scale. Intermediate routing enables better security in event-level data as well as in transit. The following is a list of use cases and enablers for splunkd intermediate data routing: … Limitations splunkd intermediate data routing has the following limitations: … Increased complexity and resource consumption. splunkd intermediate data routing adds complexity to the data pipeline and consumes resources on the hosts where it runs. This can affect the performance and reliability of data ingestion and other applications or processes on those hosts. Therefore, intermediate routing should be avoided when possible, and used only when there is a clear benefit or requirement for it.”
Use forwarders to get data into Splunk Enterprise, which states: “The forwarders take the Apache data and send it to your Splunk Enterprise deployment for indexing, which consolidates, stores, and makes the data available for searching. Because of their reduced resource footprint, forwarders have a minimal performance impact on the Apache servers. … Note: You can also configure a forwarder to send data to another forwarder, which then sends the data to the indexer. This is called intermediate forwarding. However, this adds complexity to your deployment and can affect performance, so use it only when necessary.”
Of the following types of files within an index bucket, which file type may consume the most disk?
Rawdata
Bloom filter
Metadata (.data)
Inverted index (.tsidx)
Of the following types of files within an index bucket, the rawdata file type may consume the most disk. The rawdata file contains the compressed raw data that Splunk has ingested. It is usually the largest file type in a bucket, because it stores the original data without any filtering or extraction. The bloom filter file contains a probabilistic data structure that is used to determine if a bucket contains events that match a given search; it is usually very small, because it only stores a bit array of hashes. The metadata (.data) files contain information about the hosts, sources, and source types of the events in the bucket; they are also usually very small. The inverted index (.tsidx) files contain the time-series index that maps indexed terms to events in the raw data. Their size varies with the number and variety of events, but they are usually smaller than the rawdata.
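As a rough illustration of what a hot or warm bucket directory typically contains (the bucket name is a placeholder and exact file names vary by version and configuration, so treat this as a sketch):
db_1693526400_1693440000_12/
rawdata/journal.gz (the compressed raw events, usually the largest item in the bucket)
*.tsidx (one or more time-series index files)
bloomfilter
Hosts.data, Sources.data, SourceTypes.data (small metadata files)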
A customer currently has many deployment clients being managed by a single, dedicated deployment server. The customer plans to double the number of clients.
What could be done to minimize performance issues?
Modify deploymentclient.conf to change from a Pull to Push mechanism.
Reduce the number of apps in the Manager Node repository.
Increase the current deployment client phone home interval.
Decrease the current deployment client phone home interval.
According to the Splunk documentation, increasing the current deployment client phone home interval can minimize performance issues by reducing the frequency of communication between the clients and the deployment server. This can also reduce the network traffic and the load on the deployment server. The other options are false because:
Modifying deploymentclient.conf to change from a Pull to Push mechanism is not possible, as Splunk does not support a Push mechanism for the deployment server.
Reducing the number of apps in the Manager Node repository will not affect the performance of the deployment server, as the apps are only downloaded when there is a change in the configuration or a new app is added.
Decreasing the current deployment client phone home interval will increase the performance issues, as it will increase the frequency of communication between the clients and the deployment server, resulting in more network traffic and load on the deployment server.
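A hedged deploymentclient.conf sketch on the clients (the host name is a placeholder; 60 seconds is the default, so raising the interval to, say, 600 cuts the phone home frequency by a factor of ten):
[target-broker:deploymentServer]
targetUri = ds.example.com:8089
phoneHomeIntervalInSecs = 600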
When planning a search head cluster, which of the following is true?
All search heads must use the same operating system.
All search heads must be members of the cluster (no standalone search heads).
The search head captain must be assigned to the largest search head in the cluster.
All indexers must belong to the underlying indexer cluster (no standalone indexers).
When planning a search head cluster, the following statement is true: All indexers must belong to the underlying indexer cluster (no standalone indexers). A search head cluster is a group of search heads that share configurations, apps, and search jobs. A search head cluster requires an indexer cluster as its data source, meaning that all indexers that provide data to the search head cluster must be members of the same indexer cluster. Standalone indexers, or indexers that are not part of an indexer cluster, cannot be used as data sources for a search head cluster. All search heads do not have to use the same operating system, as long as they are compatible with the Splunk version and the indexer cluster. All search heads do not have to be members of the cluster, as standalone search heads can also search the indexer cluster, but they will not have the benefits of configuration replication and load balancing. The search head captain does not have to be assigned to the largest search head in the cluster, as the captain is dynamically elected from among the cluster members based on various criteria, such as CPU load, network latency, and search load.
Which command will permanently decommission a peer node operating in an indexer cluster?
splunk stop -f
splunk offline -f
splunk offline --enforce-counts
splunk decommission --enforce counts
The splunk offline --enforce-counts command will permanently decommission a peer node operating in an indexer cluster. With the --enforce-counts flag, the manager node first reallocates the peer’s bucket copies so that the cluster continues to meet its replication factor and search factor, and only then does the peer shut down and leave the cluster permanently. This command should be used when the peer node is no longer needed or is being replaced by another node. The splunk stop -f command will stop the Splunk service on the peer node, but it will not decommission it from the cluster. The splunk offline -f command will take the peer node offline temporarily, without enforcing the replication and search factors. The splunk decommission --enforce counts command is not a valid Splunk command. For more information, see Remove a peer node from an indexer cluster in the Splunk documentation.
Which Splunk Enterprise offering has its own license?
Splunk Cloud Forwarder
Splunk Heavy Forwarder
Splunk Universal Forwarder
Splunk Forwarder Management
The Splunk Universal Forwarder is the only Splunk Enterprise offering that has its own license. The Splunk Universal Forwarder license allows the forwarder to send data to any Splunk Enterprise or Splunk Cloud instance without consuming any license quota. The Splunk Heavy Forwarder does not have its own license; the data it sends is counted against the license quota of the Splunk Enterprise or Splunk Cloud instance that indexes it. The Splunk Cloud Forwarder is not a separate Splunk offering, and Forwarder Management is a feature of the Splunk platform (the deployment server interface), not a separately licensed product. For more information, see [About forwarder licensing] in the Splunk documentation.
Several critical searches that were functioning correctly yesterday are not finding a lookup table today. Which log file would be the best place to start troubleshooting?
btool.log
web_access.log
health.log
configuration_change.log
A lookup table is a file that contains a list of values that can be used to enrich or modify the data during search time. Lookup tables can be stored in CSV files or in the KV Store. Troubleshooting lookup tables involves identifying and resolving issues that prevent the lookup tables from being accessed, updated, or applied correctly by the Splunk searches. Some of the tools and methods that can help with troubleshooting lookup tables are:
web_access.log: This is a file that contains information about the HTTP requests and responses that occur between the Splunk web server and the clients. This file can help troubleshoot issues related to lookup table permissions, availability, and errors, such as 404 Not Found, 403 Forbidden, or 500 Internal Server Error.
btool output: This is a command-line tool that displays the effective configuration settings for a given Splunk component, such as inputs, outputs, indexes, props, and so on. This tool can help troubleshoot issues related to lookup table definitions, locations, and precedence, as well as identify the source of a configuration setting.
search.log: This is a file that contains detailed information about the execution of a search, such as the search pipeline, the search commands, the search results, the search errors, and the search performance. This file can help troubleshoot issues related to lookup table commands, arguments, fields, and outputs, such as lookup, inputlookup, outputlookup, lookup_editor, and so on.
Option B is the correct answer because web_access.log is the best place to start troubleshooting lookup table issues, as it can provide the most relevant and immediate information about lookup table access and status. Option A is incorrect because btool is a command-line tool for inspecting configuration, not a log of lookup activity. Option C is incorrect because health.log contains information about the health of the Splunk components, such as the indexer cluster, the search head cluster, the license master, and the deployment server; it can help troubleshoot deployment health issues, but not lookup tables specifically. Option D is incorrect because configuration_change.log contains information about the changes made to the Splunk configuration files, such as the user, the time, the file, and the action; it can help troubleshoot configuration changes, but not lookup tables specifically.
Which props.conf setting has the least impact on indexing performance?
SHOULD_LINEMERGE
TRUNCATE
CHARSET
TIME_PREFIX
According to the Splunk documentation, the CHARSET setting in props.conf specifies the character set encoding of the source data. This setting has the least impact on indexing performance, as it only affects how Splunk interprets the bytes of the data, not how it processes or transforms the data. The other options are false because:
The SHOULD_LINEMERGE setting in props.conf determines whether Splunk merges multiple lines of raw data into a single event. This setting has a significant impact on indexing performance, as it affects how Splunk parses the data and identifies the boundaries of the events.
The TRUNCATE setting in props.conf specifies the maximum number of characters that Splunk indexes from a single line of a file. This setting has a moderate impact on indexing performance, as it affects how much data Splunk reads and writes to the index.
The TIME_PREFIX setting in props.conf specifies the prefix that directly precedes the timestamp in the event data. This setting has a moderate impact on indexing performance, as it affects how Splunk extracts the timestamp and assigns it to the event.
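For context, a hedged props.conf sketch showing where each of these attributes sits (the sourcetype name and values are placeholders, not recommendations):
[my_sourcetype]
CHARSET = UTF-8
SHOULD_LINEMERGE = false
TRUNCATE = 10000
TIME_PREFIX = ^\[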
Configurations from the deployer are merged into which location on the search head cluster member?
SPLUNK_HOME/etc/system/local
SPLUNK_HOME/etc/apps/APP_HOME/local
SPLUNK_HOME/etc/apps/search/default
SPLUNK_HOME/etc/apps/APP_HOME/default
Configurations from the deployer are merged into the SPLUNK_HOME/etc/apps/APP_HOME/local directory on the search head cluster member. The deployer distributes apps and other configurations to the search head cluster members in the form of a configuration bundle. The configuration bundle contains the contents of the SPLUNK_HOME/etc/shcluster/apps directory on the deployer. When a search head cluster member receives the configuration bundle, it merges the contents of the bundle into its own SPLUNK_HOME/etc/apps directory. The configurations in the local directory take precedence over the configurations in the default directory. The SPLUNK_HOME/etc/system/local directory is used for system-level configurations, not app-level configurations. The SPLUNK_HOME/etc/apps/search/default directory is used for the default configurations of the search app, not the configurations from the deployer.
Which tool(s) can be leveraged to diagnose connection problems between an indexer and forwarder? (Select all that apply.)
telnet
tcpdump
splunk btool
splunk btprobe
The telnet and tcpdump tools can be leveraged to diagnose connection problems between an indexer and forwarder. The telnet tool can be used to test the connectivity and port availability between the indexer and forwarder. The tcpdump tool can be used to capture and analyze the network traffic between the indexer and forwarder. The splunk btool command can be used to check the effective configuration of the indexer and forwarder, but it does not diagnose connection problems. The btprobe utility (run as splunk cmd btprobe) queries the fishbucket to inspect file-input checkpoint records and is likewise not a network diagnostic tool.
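For example (the hostname, interface, and port are placeholders; 9997 is simply the conventional receiving port), a quick reachability check from the forwarder and a packet capture on either side might look like:
telnet idx1.example.com 9997
tcpdump -i eth0 host idx1.example.com and port 9997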
Other than high availability, which of the following is a benefit of search head clustering?
Allows indexers to maintain multiple searchable copies of all data.
Input settings are synchronized between search heads.
Fewer network ports are required to be opened between search heads.
Automatic replication of user knowledge objects.
According to the Splunk documentation, one of the benefits of search head clustering is the automatic replication of user knowledge objects, such as dashboards, reports, alerts, and tags. This ensures that all cluster members have the same set of knowledge objects and can serve the same search results to the users. The other options are false because:
Allowing indexers to maintain multiple searchable copies of all data is a benefit of indexer clustering, not search head clustering.
Input settings are not synchronized between search heads, as search head clusters do not collect data from inputs. Data collection is done by forwarders or independent search heads.
Fewer network ports are not required to be opened between search heads, as search head clusters use several ports for communication and replication among the members.
(What are the possible values for the mode attribute in server.conf for a Splunk server in the [clustering] stanza?)
[clustering] mode = peer
[clustering] mode = searchhead
[clustering] mode = deployer
[clustering] mode = manager
Within the [clustering] stanza of the server.conf file, the mode attribute defines the functional role of a Splunk instance within an indexer cluster. Splunk documentation identifies three valid modes:
mode = manager
Defines the node as the Cluster Manager (formerly called the Master Node).
Responsible for coordinating peer replication, managing configurations, and ensuring data integrity across indexers.
mode = peer
Defines the node as an Indexer (Peer Node) within the cluster.
Handles data ingestion, replication, and search operations under the control of the manager node.
mode = searchhead
Defines a Search Head that connects to the cluster for distributed searching and data retrieval.
The value “deployer” (Option C) is not valid within the [clustering] stanza; it applies to Search Head Clustering (SHC) configurations, where it is defined separately in server.conf under [shclustering].
Each mode must be accompanied by other critical attributes such as manager_uri, replication_port, and pass4SymmKey to enable proper communication and security between cluster members.
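A hedged server.conf sketch for a peer node (the hostname, port, and key are placeholders) showing how these attributes fit together:
[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = <shared cluster key>

[replication_port://9887]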
References (Splunk Enterprise Documentation):
• Indexer Clustering: Configure Manager, Peer, and Search Head Modes
• server.conf Reference – [clustering] Stanza Attributes
• Distributed Search and Cluster Node Role Configuration
• Splunk Enterprise Admin Manual – Cluster Deployment Architecture
Consider a use case involving firewall data. There is no Splunk-supported Technical Add-On, but the vendor has built one. What are the items that must be evaluated before installing the add-on? (Select all that apply.)
Identify number of scheduled or real-time searches.
Validate if this Technical Add-On enables event data for a data model.
Identify the maximum number of forwarders Technical Add-On can support.
Verify if Technical Add-On needs to be installed onto both a search head or indexer.
A Technical Add-On (TA) is a Splunk app that contains configurations for data collection, parsing, and enrichment. It can also enable event data for a data model, which is useful for creating dashboards and reports. Therefore, before installing a TA, it is important to identify the number of scheduled or real-time searches that will use the data model, and to validate if the TA enables event data for a data model. The number of forwarders that the TA can support is not relevant, as the TA is installed on the indexer or search head, not on the forwarder. The installation location of the TA depends on the type of data and the use case, so it is not a fixed requirement.
Which of the following most improves KV Store resiliency?
Decrease latency between search heads.
Add faster storage to the search heads to improve artifact replication.
Add indexer CPU and memory to decrease search latency.
Increase the size of the Operations Log.
KV Store is a feature of Splunk Enterprise that allows apps to store and retrieve data within the context of an app.
KV Store resides on search heads and replicates data across the members of a search head cluster.
KV Store resiliency refers to the ability of KV Store to maintain data availability and consistency in the event of failures or disruptions.
One of the factors that affects KV Store resiliency is the network latency between search heads, which can impact the speed and reliability of data replication.
Decreasing latency between search heads can improve KV Store resiliency by reducing the chances of data loss, inconsistency, or corruption.
The other options are not directly related to KV Store resiliency. Faster storage, indexer CPU and memory, and Operations Log size may affect other aspects of Splunk performance, but not KV Store.
When using the props.conf LINE_BREAKER attribute to delimit multi-line events, the SHOULD_LINEMERGE attribute should be set to what?
Auto
None
True
False
When using the props.conf LINE_BREAKER attribute to delimit multi-line events, the SHOULD_LINEMERGE attribute should be set to false. This tells Splunk not to re-merge the lines that LINE_BREAKER has already separated into events. Setting SHOULD_LINEMERGE to true causes Splunk to run its line-merging rules after the line-breaking step, which defeats the purpose of using LINE_BREAKER to delimit whole events; auto and none are not valid values for this boolean attribute. For more information, see Configure event line breaking in the Splunk documentation.
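As an illustrative sketch (the sourcetype name and the date-based pattern are assumptions about the data, not a universal rule), delimiting events that each begin with a date could look like:
[my_multiline_sourcetype]
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
SHOULD_LINEMERGE = false
The first capture group is consumed as the event boundary, so each new date starts a new event and no line merging runs afterward.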
A Splunk environment collecting 10 TB of data per day has 50 indexers and 5 search heads. A single-site indexer cluster will be implemented. Which of the following is a best practice for added data resiliency?
Set the Replication Factor to 49.
Set the Replication Factor based on allowed indexer failure.
Always use the default Replication Factor of 3.
Set the Replication Factor based on allowed search head failure.
The correct answer is B. Set the Replication Factor based on allowed indexer failure. This is a best practice for adding data resiliency to a single-site indexer cluster, as it ensures that there are enough copies of each bucket to survive the loss of one or more indexers without affecting the searchability of the data1. The Replication Factor is the number of copies of each bucket that the cluster maintains across the set of peer nodes2. The Replication Factor should be set according to the number of indexers that can fail without compromising the cluster’s ability to serve data1. For example, if the cluster can tolerate the loss of two indexers, the Replication Factor should be set to three1.
The other options are not best practices for adding data resiliency. Option A, setting the Replication Factor to 49, is not recommended, as it would create too many copies of each bucket and consume excessive disk space and network bandwidth1. Option C, always using the default Replication Factor of 3, is not optimal, as it may not match the customer’s requirements and expectations for data availability and performance1. Option D, setting the Replication Factor based on allowed search head failure, is not relevant, as the Replication Factor does not affect the search head availability, but the searchability of the data on the indexers1. Therefore, option B is the correct answer, and options A, C, and D are incorrect.
1: Configure the replication factor 2: About indexer clusters and index replication
New data has been added to a monitor input file. However, searches only show older data.
Which splunkd.log channel would help troubleshoot this issue?
ModularInputs
TailingProcessor
ChunkedLBProcessor
ArchiveProcessor
The TailingProcessor channel in the splunkd.log file would help troubleshoot this issue, because it contains information about the files that Splunk monitors and indexes, such as the file path, size, modification time, and CRC checksum. It also logs any errors or warnings that occur during the file monitoring process, such as permission issues, file rotation, or file truncation. The TailingProcessor channel can help identify if Splunk is reading the new data from the monitor input file or not, and what might be causing the problem. Option B is the correct answer. Option A is incorrect because the ModularInputs channel logs information about the modular inputs that Splunk uses to collect data from external sources, such as scripts, APIs, or custom applications. It does not log information about the monitor input file. Option C is incorrect because the ChunkedLBProcessor channel logs information about the load balancing process that Splunk uses to distribute data among multiple indexers. It does not log information about the monitor input file. Option D is incorrect because the ArchiveProcessor channel logs information about the archive process that Splunk uses to move data from the hot/warm buckets to the cold/frozen buckets. It does not log information about the monitor input file12
1: https://docs.splunk.com/Documentation/Splunk/9.1.2/Troubleshooting/WhatSplunklogsaboutitself#splunkd.log 2: https://docs.splunk.com/Documentation/Splunk/9.1.2/Troubleshooting/Didyouloseyourfishbucket#Check_the_splunkd.log_file
(Which of the following is a minimum search head specification for a distributed Splunk environment?)
A 1Gb Ethernet NIC, optional 2nd NIC for a management network.
An x86 32-bit chip architecture.
128 GB RAM.
Two physical CPU cores, or four vCPU at 2GHz or greater speed per core.
According to the Splunk Enterprise Capacity Planning and Hardware Sizing Guidelines, a distributed Splunk environment’s minimum search head specification must ensure that the system can efficiently manage search parsing, ad-hoc query execution, and knowledge object replication. Splunk officially recommends using a 64-bit x86 architecture system with a minimum of two physical CPU cores (or four vCPUs) running at 2 GHz or higher per core for acceptable performance.
Search heads are CPU-intensive components, primarily constrained by processor speed and the number of concurrent searches they must handle. Memory and disk space should scale with user concurrency and search load, but CPU capability remains the baseline requirement. While 128 GB RAM (Option C) is suitable for high-demand or Enterprise Security (ES) deployments, it exceeds the minimum hardware specification for general distributed search environments.
Splunk no longer supports 32-bit architectures (Option B). While a 1Gb Ethernet NIC (Option A) is common, it is not part of the minimum computational specification required by Splunk for search heads. The critical specification is processor capability — two physical cores or equivalent.
References (Splunk Enterprise Documentation):
• Splunk Enterprise Capacity Planning Manual – Hardware and Performance Guidelines
• Search Head Sizing and System Requirements
• Distributed Deployment Manual – Recommended System Specifications
• Splunk Hardware and Performance Tuning Guide
(What is the expected performance reduction when architecting Splunk in a virtualized environment instead of a physical environment?)
Up to 15%
Between 20% and 45%
0%
50%
The Splunk Enterprise Capacity Planning Manual states that running Splunk in a virtualized environment typically results in a performance reduction of approximately 20% to 45% compared to equivalent deployments on physical hardware.
This degradation is primarily due to the virtualization overhead inherent in hypervisor environments (such as VMware, Hyper-V, or KVM), which can affect:
Disk I/O throughput and latency — the most critical factor for indexers.
CPU scheduling efficiency, particularly for multi-threaded indexing processes.
Network latency between clustered components.
Splunk’s documentation strongly emphasizes that while virtualized environments offer operational flexibility, they cannot match bare-metal performance, especially under heavy indexing loads.
To mitigate performance loss, Splunk recommends:
Reserving dedicated CPU and I/O resources for Splunk VMs.
Avoiding over-commitment of hardware resources.
Using high-performance SSD storage or paravirtualized disk controllers.
These optimizations can narrow the performance gap, but a 20–45% reduction remains a realistic expectation under typical conditions.
References (Splunk Enterprise Documentation):
• Splunk Enterprise Capacity Planning Manual – Virtualization Performance Considerations
• Splunk on Virtual Infrastructure – Best Practices and Performance Tuning
• Indexer and Search Head Hardware Recommendations
• Performance Testing Guidelines for Splunk Deployments
Data for which of the following indexes will count against an ingest-based license?
summary
main
_metrics
_introspection
Splunk Enterprise licensing is based on the amount of data that is ingested and indexed by the Splunk platform per day. The data that counts against the license is the data that is stored in the indexes that are visible to the users and searchable by the Splunk software. The indexes that are visible and searchable by default are the main index and any custom indexes that are created by the users or the apps. The main index is the default index where Splunk Enterprise stores all data, unless otherwise specified.
Option B is the correct answer because the data for the main index will count against the ingest-based license, as it is a visible and searchable index by default. Option A is incorrect because the summary index is a special type of index that stores the results of scheduled reports or accelerated data models, which do not count against the license. Option C is incorrect because the _metrics index is an internal index that stores metrics data about the Splunk platform performance, which does not count against the license. Option D is incorrect because the _introspection index is another internal index that stores data about the impact of the Splunk software on the host system, such as CPU, memory, disk, and network usage, which does not count against the license.
(A customer has an environment with a Search Head Cluster and an indexer cluster. They are troubleshooting license usage data, including indexed volume in bytes per pool, index, host, sourcetype, and source. Where should the license_usage.log file be retrieved from in this environment?)
Cluster Manager and Search Head Cluster Deployer
License Manager
Search Head Cluster Deployer only
All indexers
The license_usage.log file is generated and maintained on the License Manager node in a Splunk deployment. This log provides detailed statistics about daily license consumption, including data volume indexed per pool, index, sourcetype, source, and host.
In a distributed or clustered environment (with both search head and indexer clusters), the License Manager acts as the central authority that collects license usage information from all indexers and consolidates it into this log. The License Manager receives periodic reports from each license peer (indexer) and records them in:
$SPLUNK_HOME/var/log/splunk/license_usage.log
The log is automatically indexed into the _internal index with sourcetype=splunkd and can be queried using searches such as:
index=_internal source=*license_usage.log* type="RolloverSummary"
Other components like the Cluster Manager, SHC Deployer, or individual indexers do not store the full consolidated license usage data — they only send summarized reports to the License Manager.
Therefore, the License Manager is the definitive and Splunk-documented location for retrieving and analyzing license_usage.log data across a distributed deployment.
References (Splunk Enterprise Documentation):
• Managing Licenses in a Distributed Environment
• license_usage.log Reference and Structure
• Monitoring License Consumption Using the License Manager
• Splunk Enterprise Admin Manual – License Reporting and Troubleshooting
Which of the following server.conf stanzas indicates the Indexer Discovery feature has not been fully configured (restart pending) on the Master Node?
A)
B)
C)
D)
Option A
Option B
Option C
Option D
The Indexer Discovery feature enables forwarders to dynamically connect to the available peer nodes in an indexer cluster. To use this feature, the manager node must be configured with the [indexer_discovery] stanza and a pass4SymmKey value. The forwarders must also be configured with the same pass4SymmKey value and the master_uri of the manager node. The pass4SymmKey value must be encrypted using the splunk _encrypt command. Therefore, option A indicates that the Indexer Discovery feature has not been fully configured on the manager node, because the pass4SymmKey value is not encrypted. The other options are not related to the Indexer Discovery feature. Option B shows the configuration of a forwarder that is part of an indexer cluster. Option C shows the configuration of a manager node that is part of an indexer cluster. Option D shows an invalid configuration of the [indexer_discovery] stanza, because the pass4SymmKey value is not encrypted and does not match the forwarders’ pass4SymmKey value12
1: https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/indexerdiscovery 2: https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/Secureyourconfigurationfiles#Encrypt_the_pass4SymmKey_setting_in_server.conf
A three-node search head cluster is skipping a large number of searches across time. What should be done to increase scheduled search capacity on the search head cluster?
Create a job server on the cluster.
Add another search head to the cluster.
server.conf captain_is_adhoc_searchhead = true.
Change limits.conf value for max_searches_per_cpu to a higher value.
Changing the limits.conf value for max_searches_per_cpu to a higher value is the best option to increase scheduled search capacity on the search head cluster when a large number of searches are skipped across time. This value determines how many concurrent searches can run on each CPU core of the search head, so raising it allows more scheduled searches to run at the same time and reduces the number of skipped searches. Creating a job server on the cluster, setting captain_is_adhoc_searchhead = true in server.conf, or adding another search head to the cluster are not the best options to increase scheduled search capacity. For more information, see [Configure limits.conf] in the Splunk documentation.
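A hedged limits.conf sketch is below; the value 2 is only an example (the default is 1). The scheduler's concurrency ceiling derives from max_searches_per_cpu multiplied by the number of CPU cores, plus base_max_searches, so raising the per-CPU value increases how many searches each member can run at once:
[search]
max_searches_per_cpu = 2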
Which of the following commands is used to clear the KV store?
splunk clean kvstore
splunk clear kvstore
splunk delete kvstore
splunk reinitialize kvstore
The splunk clean kvstore command is used to clear the KV store. This command will delete all the collections and documents in the KV store and reset it to an empty state. This command can be useful for troubleshooting KV store issues or resetting the KV store data. The splunk clear kvstore, splunk delete kvstore, and splunk reinitialize kvstore commands are not valid Splunk commands. For more information, see Use the CLI to manage the KV store in the Splunk documentation.
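Typical usage is along these lines; the -local flag limits the operation to the local KV store instance and is the form shown in the resync procedure, but confirm the available flags with splunk help clean before running, because this permanently deletes the local KV store data:
splunk stop
splunk clean kvstore -local
splunk start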
Which of the following is true regarding the migration of an index cluster from single-site to multi-site?
Multi-site policies will apply to all data in the indexer cluster.
All peer nodes must be running the same version of Splunk.
Existing single-site attributes must be removed.
Single-site buckets cannot be converted to multi-site buckets.
According to the Splunk documentation, when migrating an indexer cluster from single-site to multi-site, you must remove the existing single-site attributes from the server.conf file of each peer node. These attributes include replication_factor, search_factor, and cluster_label. You must also restart each peer node after removing the attributes. The other options are false because:
Multi-site policies will apply only to the data created after migration, unless you configure the manager node to convert legacy buckets to multi-site.
All peer nodes do not need to run the same version of Splunk, as long as they are compatible with the manager node.
Single-site buckets can be converted to multi-site buckets by changing the constrain_singlesite_buckets setting in the manager node’s server.conf file to "false".
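As a hedged sketch of the multisite attributes involved (the site names and factor values are illustrative, not recommendations), the manager node's server.conf after migration might include:
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2
constrain_singlesite_buckets = false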
Which server.conf attribute should be added to the master node's server.conf file when decommissioning a site in an indexer cluster?
site_mappings
available_sites
site_search_factor
site_replication_factor
The site_mappings attribute should be added to the master node’s server.conf file when decommissioning a site in an indexer cluster. The site_mappings attribute is used to specify how the master node should reassign the buckets from the decommissioned site to the remaining sites. The site_mappings attribute is a comma-separated list of site pairs, where the first site is the decommissioned site and the second site is the destination site. For example, site_mappings = site1:site2,site3:site4 means that the buckets from site1 will be moved to site2, and the buckets from site3 will be moved to site4. The available_sites attribute is used to specify which sites are currently available in the cluster, and it is automatically updated by the master node. The site_search_factor and site_replication_factor attributes are used to specify the number of searchable and replicated copies of each bucket for each site, and they are not affected by the decommissioning process
Splunk Enterprise performs a cyclic redundancy check (CRC) against the first and last bytes to prevent the same file from being re-indexed if it is rotated or renamed. What is the number of bytes sampled by default?
128
512
256
64
Splunk Enterprise performs a CRC check against the first and last 256 bytes of a file by default, as stated in the inputs.conf specification. This is controlled by the initCrcLength parameter, which can be changed if needed. The CRC check helps Splunk Enterprise to avoid re-indexing the same file twice, even if it is renamed or rotated, as long as the content does not change. However, this also means that Splunk Enterprise might miss some files that have the same CRC but different content, especially if they have identical headers. To avoid this, the crcSalt parameter can be used to add some extra information to the CRC calculation, such as the full file path or a custom string. This ensures that each file has a unique CRC and is indexed by Splunk Enterprise. You can read more about crcSalt and initCrcLength in the How log file rotation is handled documentation.
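A hedged inputs.conf sketch (the monitored path is a placeholder): initCrcLength widens the sampled range beyond the 256-byte default, while crcSalt = <SOURCE> adds the full file path to the CRC calculation so that renamed copies with identical headers are still treated as distinct files:
[monitor:///var/log/myapp/access.log]
initCrcLength = 1024
crcSalt = <SOURCE>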
The master node distributes configuration bundles to peer nodes. Which directory peer nodes receive the bundles?
apps
deployment-apps
slave-apps
master-apps
The master node distributes configuration bundles to peer nodes in the slave-apps directory under $SPLUNK_HOME/etc. The configuration bundle method is the only supported method for managing common configurations and app deployment across the set of peers. It ensures that all peers use the same versions of these files. Bundles typically contain a subset of files (configuration files and assets) from $SPLUNK_HOME/etc/system, $SPLUNK_HOME/etc/apps, and $SPLUNK_HOME/etc/users. The process of distributing knowledge bundles means that peers by default receive nearly the entire contents of the search head’s apps.
Which of the following statements about integrating with third-party systems is true? (Select all that apply.)
A Hadoop application can search data in Splunk.
Splunk can search data in the Hadoop File System (HDFS).
You can use Splunk alerts to provision actions on a third-party system.
You can forward data from Splunk forwarder to a third-party system without indexing it first.
The following statements about integrating with third-party systems are true: You can use Splunk alerts to provision actions on a third-party system, and you can forward data from Splunk forwarder to a third-party system without indexing it first. Splunk alerts are triggered events that can execute custom actions, such as sending an email, running a script, or calling a webhook. Splunk alerts can be used to integrate with third-party systems, such as ticketing systems, notification services, or automation platforms. For example, you can use Splunk alerts to create a ticket in ServiceNow, send a message to Slack, or trigger a workflow in Ansible. Splunk forwarders are Splunk instances that collect and forward data to other Splunk instances, such as indexers or heavy forwarders. Splunk forwarders can also forward data to third-party systems, such as Hadoop, Kafka, or AWS Kinesis, without indexing it first. This can be useful for sending data to other data processing or storage systems, or for integrating with other analytics or monitoring tools. A Hadoop application cannot search data in Splunk, because Splunk does not provide a native interface for Hadoop applications to access Splunk data. Splunk can search data in the Hadoop File System (HDFS), but only by using the Hadoop Connect app, which is a Splunk app that enables Splunk to index and search data stored in HDFS
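For instance, forwarding events to a third-party syslog receiver without indexing them first can be sketched in outputs.conf on a heavy forwarder (the group name, host, and port are placeholders):
[syslog:corp_siem]
server = siem.example.com:514
The events still need to be directed to that group, typically with a routing rule in props.conf and transforms.conf that sets _SYSLOG_ROUTING; treat this as an outline of the approach rather than a complete configuration.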
A customer is migrating 500 Universal Forwarders from an old deployment server to a new deployment server, with a different DNS name. The new deployment server is configured and running.
The old deployment server deployed an app containing an updated deploymentclient.conf file to all forwarders, pointing them to the new deployment server. The app was successfully deployed to all 500 forwarders.
Why would all of the forwarders still be phoning home to the old deployment server?
There is a version mismatch between the forwarders and the new deployment server.
The new deployment server is not accepting connections from the forwarders.
The forwarders are configured to use the old deployment server in $SPLUNK_HOME/etc/system/local.
The pass4SymmKey is the same on the new deployment server and the forwarders.
All of the forwarders would still be phoning home to the old deployment server, because the forwarders are configured to use the old deployment server in $SPLUNK_HOME/etc/system/local. This is the local configuration directory that contains the settings that override the default settings in $SPLUNK_HOME/etc/system/default. The deploymentclient.conf file in the local directory specifies the targetUri of the deployment server that the forwarder contacts for configuration updates and apps. If the forwarders have the old deployment server’s targetUri in the local directory, they will ignore the updated deploymentclient.conf file that was deployed by the old deployment server, because the local settings have higher precedence than the deployed settings. To fix this issue, the forwarders should either remove the deploymentclient.conf file from the local directory, or update it with the new deployment server’s targetUri. Option C is the correct answer. Option A is incorrect because a version mismatch between the forwarders and the new deployment server would not prevent the forwarders from phoning home to the new deployment server, as long as they are compatible versions. Option B is incorrect because the new deployment server is configured and running, and there is no indication that it is not accepting connections from the forwarders. Option D is incorrect because the pass4SymmKey is the shared secret key that the deployment server and the forwarders use to authenticate each other. It does not affect the forwarders’ ability to phone home to the new deployment server, as long as it is the same on both sides12
1: https://docs.splunk.com/Documentation/Splunk/9.1.2/Updating/Configuredeploymentclients 2: https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Wheretofindtheconfigurationfiles
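A hedged illustration of the precedence conflict described above (hostnames are placeholders): a deploymentclient.conf left in system/local always wins over the copy delivered inside an app, so the forwarder keeps phoning home to the old server.
    # $SPLUNK_HOME/etc/system/local/deploymentclient.conf   (highest precedence, stale)
    [target-broker:deploymentServer]
    targetUri = old-ds.example.com:8089
    # $SPLUNK_HOME/etc/apps/new_ds_client/local/deploymentclient.conf   (deployed app, ignored)
    [target-broker:deploymentServer]
    targetUri = new-ds.example.com:8089
Removing or correcting the system/local copy and restarting the forwarder lets the app-delivered setting take effect.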
A customer has a four site indexer cluster. The customer has requirements to store five copies of searchable data, with one searchable copy of data at the origin site, and one searchable copy at the disaster recovery site (site4).
Which configuration meets these requirements?
site_replication_factor = origin:2, site4:1, total:3
site_replication_factor = origin:1, site4:1, total:5
site_search_factor = origin:2, site4:1, total:3
site_search_factor = origin:1, site4:1, total:5
The configuration that meets these requirements is site_search_factor = origin:1, site4:1, total:5. The requirement concerns searchable copies, and the site_search_factor is the setting that determines how many searchable copies of each bucket exist and where they reside across the sites in a multisite indexer cluster2. With origin:1, site4:1, total:5, each bucket has one searchable copy at its origin site, one searchable copy at the disaster recovery site (site4), and the remaining searchable copies distributed across the other sites, for a total of five. The site_replication_factor, by contrast, only controls how many copies (searchable or not) are stored across the sites1, so options A and B do not guarantee five searchable copies, and option C yields only three searchable copies in total. Therefore, option D is the correct answer.
1: Configure the site replication factor 2: Configure the site search factor
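A hedged sketch of how the settings might appear in server.conf on the master node of the four-site cluster (values chosen only to match this scenario; the replication totals must be at least as large as the search totals because every searchable copy is also a replicated copy):
    [clustering]
    mode = master
    multisite = true
    available_sites = site1,site2,site3,site4
    site_replication_factor = origin:1, site4:1, total:5
    site_search_factor = origin:1, site4:1, total:5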
Which of the following describe migration from single-site to multisite index replication?
A master node is required at each site.
Multisite policies apply to new data only.
Single-site buckets instantly receive the multisite policies.
Multisite total values should not exceed any single-site factors.
Migration from single-site to multisite index replication only affects new data. Multisite policies apply to new data only, meaning that data ingested after the migration follows the multisite replication and search factors, while buckets created before the migration retain the single-site policies unless they are explicitly converted to multisite buckets. Single-site buckets therefore do not instantly receive the multisite policies, nor are they converted automatically. Multisite total values can exceed the single-site factors, as long as they do not exceed the number of peer nodes in the cluster. Finally, a master node is not required at each site; a single master node manages the entire multisite cluster.
In splunkd.log events written to the _internal index, which field identifies the specific log channel?
component
source
sourcetype
channel
In splunkd.log events written to the _internal index, the field that identifies the specific log channel is component. Each splunkd.log line consists of a timestamp, a log level, a component name, and a message, and Splunk extracts the component name into the component field (for example TcpOutputProc or IndexProcessor). These component names correspond to the log channels whose verbosity can be tuned in log.cfg or through Settings > Server settings > Server logging. There is no separate channel field on these events, while source and sourcetype only identify the originating file ($SPLUNK_HOME/var/log/splunk/splunkd.log) and its sourcetype (splunkd).
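An illustrative search over the _internal index (field names as Splunk extracts them) that breaks recent errors down by log channel:
    index=_internal sourcetype=splunkd log_level=ERROR | stats count by component
Here component carries values such as TcpOutputProc or IndexProcessor, identifying the log channel that wrote each splunkd.log line.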
An indexer cluster is being designed with the following characteristics:
• 10 search peers
• Replication Factor (RF): 4
• Search Factor (SF): 3
• No SmartStore usage
How many search peers can fail before data becomes unsearchable?
Zero peers can fail.
One peer can fail.
Three peers can fail.
Four peers can fail.
Three peers can fail. With a Replication Factor of 4, the cluster maintains four copies of each bucket across the 10 search peers, and with a Search Factor of 3, three of those copies are searchable1. The cluster can lose up to RF - 1 = 3 peers and still retain at least one copy of every bucket; if the surviving copies of a bucket are not searchable, the cluster manager directs a remaining peer to build the index files and make one searchable, so the data remains (or quickly becomes) searchable again. If four or more peers fail, all four copies of some buckets may be lost, and that data becomes unsearchable. Therefore, option C is the correct answer, and options A, B, and D are incorrect.
1: Configure the search factor
Which Splunk component is mandatory when implementing a search head cluster?
Captain Server
Deployer
Cluster Manager
RAFT Server
The deployer is the mandatory component when implementing a search head cluster, as it is responsible for distributing configuration updates and app bundles to the cluster members1. The deployer is a separate Splunk instance, outside the cluster itself, that pushes the configuration bundle directly to the search heads1. The other options are not mandatory components for a search head cluster. Option A, Captain Server, is not a component but a role that is dynamically assigned to one of the search heads in the cluster; the captain coordinates replication and scheduling activities among the cluster members2. Option C, Cluster Manager, is a component of an indexer cluster, not a search head cluster; it manages the replication and search factors for the peer nodes3. Option D, RAFT Server, is not a component but the consensus protocol that the search head cluster uses to elect the captain and maintain cluster state4. Therefore, option B is the correct answer, and options A, C, and D are incorrect.
1: Use the deployer to distribute apps and configuration updates 2: About the captain 3: About the cluster manager 4: How a search head cluster works
(What command will decommission a search peer from an indexer cluster?)
splunk disablepeer --enforce-counts
splunk decommission --enforce-counts
splunk offline --enforce-counts
splunk remove cluster-peers --enforce-counts
The splunk offline --enforce-counts command is the official and documented method used to gracefully decommission a search peer (indexer) from an indexer cluster in Splunk Enterprise. This command ensures that all replication and search factors are maintained before the peer is removed.
When executed, Splunk initiates a controlled shutdown process for the peer node. The Cluster Manager verifies that sufficient replicated copies of all bucket data exist across the remaining peers according to the configured replication_factor (RF) and search_factor (SF). The --enforce-counts flag specifically enforces that replication and search counts remain intact before the peer fully detaches from the cluster, ensuring no data loss or availability gap.
The sequence typically includes:
Validating cluster state and replication health.
Rolling off the peer’s data responsibilities to other peers.
Removing the peer from the active cluster membership list once replication is complete.
Commands such as splunk disablepeer and splunk decommission are not valid Splunk CLI commands, and splunk remove cluster-peers is used on the manager only to drop peers that are already down from the cluster's peer list, not to gracefully decommission an active peer. Therefore, the correct documented method is to use:
splunk offline --enforce-counts
References (Splunk Enterprise Documentation):
• Indexer Clustering: Decommissioning a Peer Node
• Managing Peer Nodes and Maintaining Data Availability
• Splunk CLI Command Reference – splunk offline
• Cluster Manager and Peer Maintenance Procedures
Which Splunk server role regulates the functioning of indexer cluster?
Indexer
Deployer
Master Node
Monitoring Console
The master node is the Splunk server role that regulates the functioning of the indexer cluster. The master node coordinates the activities of the peer nodes, such as data replication, data searchability, and data recovery. The master node also manages the cluster configuration bundle and distributes it to the peer nodes. The indexer is the Splunk server role that indexes the incoming data and makes it searchable. The deployer is the Splunk server role that distributes apps and configuration updates to the search head cluster members. The monitoring console is the Splunk server role that monitors the health and performance of the Splunk deployment. For more information, see About indexer clusters and index replication in the Splunk documentation.
Indexing is slow and real-time search results are delayed in a Splunk environment with two indexers and one search head. There is ample CPU and memory available on the indexers. Which of the following is most likely to improve indexing performance?
Increase the maximum number of hot buckets in indexes.conf
Increase the number of parallel ingestion pipelines in server.conf
Decrease the maximum size of the search pipelines in limits.conf
Decrease the maximum concurrent scheduled searches in limits.conf
Increasing the number of parallel ingestion pipelines in server.conf is most likely to improve indexing performance when indexing is slow and real-time search results are delayed in a Splunk environment with two indexers and one search head. The parallel ingestion pipelines allow Splunk to process multiple data streams simultaneously, which increases the indexing throughput and reduces the indexing latency. Increasing the maximum number of hot buckets in indexes.conf will not improve indexing performance, but rather increase the disk space consumption and the bucket rolling time. Decreasing the maximum size of the search pipelines in limits.conf will not improve indexing performance, but rather reduce the search performance and the search concurrency. Decreasing the maximum concurrent scheduled searches in limits.conf will not improve indexing performance, but rather reduce the search capacity and the search availability. For more information, see Configure parallel ingestion pipelines in the Splunk documentation.
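A minimal sketch of the change (the value 2 is only an example; the default is 1, and each additional pipeline consumes additional CPU and memory, so the indexer needs spare capacity, as it has in this scenario):
    # server.conf on each indexer
    [general]
    parallelIngestionPipelines = 2
A restart of the indexer is required for the new pipeline count to take effect.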
(Which index does Splunk use to record user activities?)
_internal
_audit
_kvstore
_telemetry
Splunk Enterprise uses the _audit index to log and store all user activity and audit-related information. This includes details such as user logins, searches executed, configuration changes, role modifications, and app management actions.
The _audit index is populated by data collected from the Splunkd audit logger and records actions performed through both Splunk Web and the CLI. Each event in this index typically includes fields like user, action, info, search_id, and timestamp, allowing administrators to track activity across all Splunk users and components for security, compliance, and accountability purposes.
The _internal index, by contrast, contains operational logs such as metrics.log and scheduler.log used for system performance and health monitoring. _kvstore stores internal KV Store metadata, and _telemetry is used for optional usage data reporting to Splunk.
The _audit index is thus the authoritative source for user behavior monitoring within Splunk environments and is a key component of compliance and security auditing.
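An illustrative way to review recent user search activity from this index (field names are those normally present in audit events):
    index=_audit action=search info=granted | stats count by user | sort - count
This counts granted search actions per user; similar searches against other action values cover logins, configuration edits, and other administrative activity.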
References (Splunk Enterprise Documentation):
• Audit Logs and the _audit Index – Monitoring User Activity
• Splunk Enterprise Security and Compliance: Tracking User Actions
• Splunk Admin Manual – Overview of Internal Indexes (_internal, _audit, _introspection)
• Splunk Audit Logging and User Access Monitoring
Which of the following should be done when installing Enterprise Security on a Search Head Cluster? (Select all that apply.)
Install Enterprise Security on the deployer.
Install Enterprise Security on a staging instance.
Copy the Enterprise Security configurations to the deployer.
Use the deployer to deploy Enterprise Security to the cluster members.
When installing Enterprise Security on a Search Head Cluster (SHC), the following steps should be done: Install Enterprise Security on the deployer, and use the deployer to deploy Enterprise Security to the cluster members. Enterprise Security is a premium app that provides security analytics and monitoring capabilities for Splunk. Enterprise Security can be installed on a SHC by using the deployer, which is a standalone instance that distributes apps and other configurations to the SHC members. Enterprise Security should be installed on the deployer first, and then deployed to the cluster members using the splunk apply shcluster-bundle command. Enterprise Security should not be installed on a staging instance, because a staging instance is not part of the SHC deployment process. Enterprise Security configurations should not be copied to the deployer, because they are already included in the Enterprise Security app package.
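A hedged example of the final deployment step, run from the deployer after Enterprise Security has been installed there (hostname and credentials are placeholders):
    splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme --answer-yes
The -target argument points at any one cluster member, and that member coordinates distribution of the bundle, including the Enterprise Security app, to the rest of the cluster.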
Which of the following is a problem that could be investigated using the Search Job Inspector?
Error messages are appearing underneath the search bar in Splunk Web.
Dashboard panels are showing "Waiting for queued job to start" on page load.
Different users are seeing different extracted fields from the same search.
Events are not being sorted in reverse chronological order.
According to the Splunk documentation1, the Search Job Inspector is a tool that you can use to troubleshoot search performance and understand the behavior of knowledge objects, such as event types, tags, lookups, and so on, within the search. You can inspect search jobs that are currently running or that have finished recently. The Search Job Inspector can help you investigate error messages that appear underneath the search bar in Splunk Web, as it can show you the details of the search job, such as the search string, the search mode, the search timeline, the search log, the search profile, and the search properties. You can use this information to identify the cause of the error and fix it2. The other options are false because:
Dashboard panels showing “Waiting for queued job to start” on page load is not a problem that can be investigated using the Search Job Inspector, as it indicates that the search job has not started yet. This could be due to the search scheduler being busy or the search priority being low. You can use the Jobs page or the Monitoring Console to monitor the status of the search jobs and adjust the priority or concurrency settings if needed3.
Different users seeing different extracted fields from the same search is not a problem that can be investigated using the Search Job Inspector, as it is related to the user permissions and the knowledge object sharing settings. You can use the Access Controls page or the Knowledge Manager to manage the user roles and the knowledge object visibility4.
Events not being sorted in reverse chronological order is not a problem that can be investigated using the Search Job Inspector, as it is related to the search syntax and the sort command. You can use the Search Manual or the Search Reference to learn how to use the sort command and its options to sort the events by any field or criteria.
Which of the following strongly impacts storage sizing requirements for Enterprise Security?
The number of scheduled (correlation) searches.
The number of Splunk users configured.
The number of source types used in the environment.
The number of Data Models accelerated.
Data Model acceleration is a feature that enables faster searches over large data sets by summarizing the raw data into a more efficient format. Data Model acceleration consumes additional disk space, because the summary data is stored on the indexers in addition to the raw indexed data. The amount of disk space required depends on the size and complexity of the Data Model, the retention period of the summarized data, and the compression ratio of the data. According to the Splunk Enterprise Security Planning and Installation Manual, Data Model acceleration is one of the factors that strongly impacts storage sizing requirements for Enterprise Security; the other factors are the volume and type of data sources, the retention policy of the data, and the replication factor and search factor of the indexer cluster. The number of scheduled (correlation) searches, the number of Splunk users configured, and the number of source types used in the environment do not directly drive storage sizing for Enterprise Security1
1: https://docs.splunk.com/Documentation/ES/6.6.0/Install/Plan#Storage_sizing_requirements
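For context, acceleration is what creates the extra summary files on the indexers; a hedged sketch of an acceleration stanza (the data model name is just an example of a CIM model that Enterprise Security typically accelerates):
    # datamodels.conf
    [Network_Traffic]
    acceleration = true
    acceleration.earliest_time = -30d
Each accelerated model stores its summaries alongside the buckets of the indexes it summarizes, so longer summary ranges and more accelerated models translate directly into more disk usage.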
What does the deployer do in a Search Head Cluster (SHC)? (Select all that apply.)
Distributes apps to SHC members.
Bootstraps a clean Splunk install for a SHC.
Distributes non-search-related and manual configuration file changes.
Distributes runtime knowledge object changes made by users across the SHC.
The deployer distributes apps and non-search-related, manual configuration file changes to the search head cluster members. The deployer does not bootstrap a clean Splunk install for a search head cluster; members are initialized with the splunk init shcluster-config command and then join the cluster. The deployer also does not distribute runtime knowledge object changes made by users, because those are replicated automatically among the members by the cluster's own configuration replication, which the captain coordinates. For more information, see Use the deployer to distribute apps and configuration updates in the Splunk documentation.
What types of files exist in a bucket within a clustered index? (select all that apply)
Inside a replicated bucket, there is only rawdata.
Inside a searchable bucket, there is only tsidx.
Inside a searchable bucket, there is tsidx and rawdata.
Inside a replicated bucket, there is both tsidx and rawdata.
According to the Splunk documentation1, a bucket within a clustered index contains two key types of files: the raw data in compressed form (rawdata) and the indexes that point to the raw data (tsidx files). A bucket can be either replicated or searchable, depending on whether it has both types of files or only the rawdata file. A replicated bucket is a bucket that has been copied from one peer node to another for the purpose of data replication. A searchable bucket is a bucket that has both the rawdata and the tsidx files, and can be searched by the search heads. The types of files that exist in a bucket within a clustered index are:
Inside a searchable bucket, there is tsidx and rawdata. This is true because a searchable bucket contains both the data and the index files, and can be searched by the search heads1.
Inside a replicated bucket, there is both tsidx and rawdata. This is true because a replicated bucket can also be a searchable bucket, if it has both the data and the index files. However, not all replicated buckets are searchable, as some of them might only have the rawdata file, depending on the replication factor and the search factor settings1.
The other options are false because:
Inside a replicated bucket, there is only rawdata. This is false because a replicated bucket can also have the tsidx file, if it is a searchable bucket. A replicated bucket only has the rawdata file if it is a non-searchable bucket, which means that it cannot be searched by the search heads until it gets the tsidx file from another peer node1.
Inside a searchable bucket, there is only tsidx. This is false because a searchable bucket always has both the tsidx and the rawdata files, as they are both required for searching the data. A searchable bucket cannot exist without the rawdata file, as it contains the actual data that the tsidx file points to1.
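As a schematic illustration (directory and file names are simplified), a searchable origin copy and a non-searchable replicated copy of the same bucket might look like this on two different peers:
    db_1704067200_1703980800_12/        (searchable copy: rawdata plus index files)
        rawdata/journal.gz
        1704067200-1703980800-12.tsidx
        Hosts.data  Sources.data  SourceTypes.data
    rb_1704067200_1703980800_12/        (replicated, non-searchable copy: rawdata only)
        rawdata/journal.gz
If the search factor later requires it, the peer holding the rb_ copy rebuilds the tsidx and metadata files from the journal to make that copy searchable.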
A customer plans to ingest 600 GB of data per day into Splunk. They will have six concurrent users, and they also want high data availability and high search performance. The customer is concerned about cost and wants to spend the minimum amount on the hardware for Splunk. How many indexers are recommended for this deployment?
Two indexers not in a cluster, assuming users run many long searches.
Three indexers not in a cluster, assuming a long data retention period.
Two indexers clustered, assuming high availability is the greatest priority.
Two indexers clustered, assuming a high volume of saved/scheduled searches.
Two indexers clustered is the recommended deployment for a customer who plans to ingest 600 GB of data per day into Splunk, has six concurrent users, and wants high data availability and high search performance. This deployment provides enough indexing capacity and search concurrency for the customer's needs while ensuring data replication and searchability across the cluster, and it keeps hardware costs to a minimum by using only two indexers. Two indexers not in a cluster would not provide high data availability, as there is no data replication or failover. Three indexers not in a cluster would provide more indexing capacity, but at greater hardware cost and still without data availability. The customer's data retention period, number of long searches, and volume of saved/scheduled searches are not the deciding factors here, so option C (two indexers clustered, with high availability as the greatest priority) is the correct answer. For more information, see [Reference hardware] and [About indexer clusters and index replication] in the Splunk documentation.
What information is needed about the current environment before deploying Splunk? (select all that apply)
List of vendors for network devices.
Overall goals for the deployment.
Key users.
Data sources.
Before deploying Splunk, it is important to gather some information about the current environment, such as:
Overall goals for the deployment: This includes the business objectives, the use cases, the expected outcomes, and the success criteria for the Splunk deployment. This information helps to define the scope, the requirements, the design, and the validation of the Splunk solution1.
Key users: This includes the roles, the responsibilities, the expectations, and the needs of the different types of users who will interact with the Splunk deployment, such as administrators, analysts, developers, and end users. This information helps to determine the user access, the user experience, the user training, and the user feedback for the Splunk solution1.
Data sources: This includes the types, the formats, the volumes, the locations, and the characteristics of the data that will be ingested, indexed, and searched by the Splunk deployment. This information helps to estimate the data throughput, the data retention, the data quality, and the data analysis for the Splunk solution1.
Options B, C, and D are the correct answers because they reflect the essential information that is needed before deploying Splunk. Option A is incorrect because a list of vendors for network devices is not relevant information for the Splunk deployment; the network devices may be among the data sources, but their vendors do not affect the Splunk solution.
(On which Splunk components does the Splunk App for Enterprise Security place the most load?)
Indexers
Cluster Managers
Search Heads
Heavy Forwarders
According to Splunk’s Enterprise Security (ES) Installation and Sizing Guide, the majority of processing and computational load generated by the Splunk App for Enterprise Security is concentrated on the Search Head(s).
This is because Splunk ES is built around a search-driven correlation model — it continuously runs scheduled correlation searches, data model accelerations, and notables generation jobs. These operations rely on the search head tier’s CPU, memory, and I/O resources rather than on indexers. ES also performs extensive data model summarization, CIM normalization, and real-time alerting, all of which are search-intensive operations.
While indexers handle data ingestion and indexing, they are not heavily affected by ES beyond normal search request processing. The Cluster Manager only coordinates replication and plays no role in search execution, and Heavy Forwarders serve as data collection or parsing points with minimal analytical load.
Splunk officially recommends deploying ES on a dedicated Search Head Cluster (SHC) to isolate its high CPU and memory demands from other workloads. For large-scale environments, horizontal scaling via SHC ensures consistent performance and stability.
References (Splunk Enterprise Documentation):
• Splunk Enterprise Security Installation and Configuration Guide
• Search Head Sizing for Splunk Enterprise Security
• Enterprise Security Overview – Workload Distribution and Performance Impact
• Splunk Architecture and Capacity Planning for ES Deployments
A Splunk instance has crashed, but no crash log was generated. There is an attempt to determine what user activity caused the crash by running the following search:
What does searching for closed_txn=0 do in this search?
Filters results to situations where Splunk was started and stopped multiple times.
Filters results to situations where Splunk was started and stopped once.
Filters results to situations where Splunk was stopped and then immediately restarted.
Filters results to situations where Splunk was started, but not stopped.
Searching for closed_txn=0 in this search filters the results to situations where Splunk was started but not stopped, meaning the transaction was never completed because no matching end event was logged before the crash. The closed_txn field is added by the transaction command and indicates whether the transaction was closed by an event matching the endswith condition1: a value of 0 means the transaction was left open, and a value of 1 means it was closed. Therefore, option D is the correct answer, and options A, B, and C are incorrect.
1: transaction command overview
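A hedged reconstruction of the kind of search being described (the exact search from the question is not reproduced here; the log strings are illustrative):
    index=_internal sourcetype=splunkd "Splunkd starting" OR "Shutting down splunkd"
    | transaction startswith="Splunkd starting" endswith="Shutting down" keepevicted=true
    | search closed_txn=0
The keepevicted=true option keeps transactions that never saw a matching endswith event, and those open transactions are exactly the ones returned by closed_txn=0, that is, startups with no corresponding clean shutdown.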
A customer has a multisite cluster with site1 and site2 configured. They want to configure search heads in these sites to get search results only from data stored on their local sites. Which step prevents this behavior?
Set site=site0 in the [general] stanza of server.conf on the search head.
Configure site_search_factor = site1:1, total:2.
Implement only two indexers per site.
Configure site_search_factor = site1:2, total:3.
Splunk's multisite clustering documentation explains that search affinity is controlled by the site attribute in server.conf on the search head. Assigning site=site0 to a search head removes site affinity, causing it to treat all sites as equal and to search remote copies as needed; site0 is the special value that disables local-site preference and makes the search head behave as if the cluster were single-site.
The customer wants each site's search head to pull results only from its local site. That behavior works only if the search head's site value matches the local site name (for example, site1 or site2). Setting it to site0 removes all locality restrictions, which prevents the desired local-only searching.
The site search factor options (B and D) affect replication and searchable copy placement on indexers, not search head behavior. The number of indexers per site (C) also does not disable search affinity. Therefore only option A disables local-only searching.
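A minimal sketch of the relevant setting on the site1 search head (only the site attribute is shown; the clustering stanza that joins the search head to the cluster is omitted):
    # server.conf on the search head in site1
    [general]
    site = site1
With site = site1 the search head prefers the searchable bucket copies stored in its own site, which is the behavior the customer wants; setting site = site0 instead removes that affinity and lets the search head pull results from any site.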