parent 9a339bcf94
commit 68498b208a
@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright {yyyy} {name of copyright owner}

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
@@ -0,0 +1,13 @@
Copyright 2017 Gareth Anderson

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
File diff suppressed because it is too large
@@ -0,0 +1,22 @@
#
# Splunk app configuration file
#

[install]
is_configured = 0

[ui]
is_visible = 1
label = SplunkAdmins
# allow 9.1 and above to use themes
supported_themes = light,dark

[launcher]
author = Gareth Anderson
description = Alerts and dashboards as described in the Splunk 2017 conf presentation How did you get so big?
version = 4.0.1

[package]
id = SplunkAdmins
check_for_updates = true
@ -0,0 +1,583 @@
|
||||
<nav search_view="search" color="#65A637">
|
||||
<view name="search" default="true" />
|
||||
<view name="reports" />
|
||||
<view name="alerts" />
|
||||
<view name="dashboards" />
|
||||
<collection label="AllSplunk*Level">
|
||||
<collection label="OS Level Issues">
|
||||
<collection label="OS Config">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%2520-%2520Core%2520Dumps%2520Disabled">Core Dumps Disabled</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Transparent%20Huge%20Pages%20is%20enabled%20and%20should%20not%20be">Transparent Huge Pages is enabled and should not be</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20ulimit%20on%20Splunk%20enterprise%20servers%20is%20below%208192">ulimit on Splunk enterprise servers is below 8192</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FMonitoringConsole%20-%20Check%20OS%20ulimits%20via%20REST">MonitoringConsole - Check OS ulimits via REST</a>
|
||||
</collection>
|
||||
<collection label="Failures">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%2520-%2520KVStore%2520Process%2520Terminated">KVStore Process Terminated</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Unable%20to%20dispatch%20searches%20due%20to%20disk%20space">Unable to dispatch searches due to disk space</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Low%20disk%20space">Low disk space</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%2520-%2520Splunkd%2520Crash%2520Logs%2520Have%2520Appeared%2520in%2520Production">Splunkd Crash Logs Have Appeared in Production</a>
|
||||
<a href="/app/SplunkAdmins/search?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%2520-%2520Unexpected%2520termination%2520of%2520a%2520Splunk%2520process%2520unix">AllSplunkLevel - Unexpected termination of a Splunk process unix</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%2520-%2520Unexpected%2520termination%2520of%2520a%2520Splunk%2520process%2520windows">AllSplunkLevel - Unexpected termination of a Splunk process windows</a>
|
||||
</collection>
|
||||
<collection label="Performance">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Splunk%20Servers%20with%20resource%20starvation">Splunk Servers with resource starvation</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%20-%20Time%20skew%20on%20Splunk%20Servers">Time skew on Splunk Servers</a>
|
||||
</collection>
|
||||
</collection>
|
||||
<collection label="Splunk Config Issues">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Detect%20LDAP%20groups%20that%20no%20longer%20exist">Detect LDAP groups that no longer exist</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20File%20integrity%20check%20failure">File integrity check failure</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Non-existent%20roles%20are%20assigned%20to%20users">Non-existent roles are assigned to users</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20TCP%20or%20SSL%20Config%20Issue">TCP or SSL Config Issue</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20WARN%20iniFile%20Configuration%20Issues">WARN iniFile Configuration Issues</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20error%20in%20stdout">error in stdout.log</a>
|
||||
</collection>
|
||||
<collection label="Splunk Level Failures">
|
||||
<collection label="Deployment Server Related">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%20-%20Application%20Installation%20Failures%20From%20Deployment%20Manager">Application Installation Failures From Deployment Manager</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%20-%20DeploymentServer%20Application%20Installation%20Error">DeploymentServer Application Installation Error</a>
|
||||
</collection>
|
||||
<collection label="Input or Alert Failures">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Email%20Sending%20Failures">Email Sending Failures</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20sendmodalert%20errors">sendmodalert errors</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Splunk%20Servers%20throwing%20runScript%20errors">Splunk Servers throwing runScript errors</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%20execprocessor%20errors">execprocessor errors</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%2520-%2520Data%2520Loss%2520on%2520shutdown">Data Loss on shutdown</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%20-%20TailReader%20Ignoring%20Path">AllSplunkLevel - TailReader Ignoring Path</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%20-%20No%20recent%20metrics.log%20data">AllSplunkLevel - No recent metrics.log data</a>
|
||||
</collection>
|
||||
<collection label="Scheduler">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Splunk%20Scheduler%20excessive%20delays%20in%20executing%20search">Splunk Scheduler excessive delays in executing search</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Splunk%20Scheduler%20skipped%20searches%20and%20the%20reason">Splunk Scheduler skipped searches and the reason</a>
|
||||
</collection>
|
||||
<collection label="Splunk to Splunk failures">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Replication%20Failures">Replication Failures</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%20-%20Unable%20To%20Distribute%20to%20Peer">Unable To Distribute to Peer</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%2520-%2520Data%2520Loss%2520on%2520shutdown">Data Loss on shutdown</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Losing%20Contact%20With%20Master%20Node">Losing Contact With Master Node</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%20-%20No%20recent%20metrics.log%20data">AllSplunkLevel - No recent metrics.log data</a>
|
||||
</collection>
|
||||
<collection label="Generic">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Splunkd%20Log%20Messages%20Admins%20Only">Splunkd Log Messages Admins Only</a>
|
||||
</collection>
|
||||
</collection>
|
||||
</collection>
|
||||
<collection label="ClusterMasterLevel">
|
||||
<collection label="ClusterMaster Endpoint">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FClusterMasterLevel%20-%20Per%20index%20status">Per index status</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FClusterMasterLevel%20-%20excess%20buckets%20on%20master">Excess buckets on master</a>
|
||||
<saved name="ClusterMasterLevel - Primary bucket count per peer" />
|
||||
</collection>
|
||||
<collection label="Run Anywhere">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20ClusterMaster%20Advising%20SearchOrRep%20Factor%20Not%20Met">ClusterMaster Advising SearchOrRep Factor Not Met</a>
|
||||
<view name="ClusterMasterJobs" />
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Splunkd%20Log%20Messages%20Admins%20Only">Splunkd Log Messages Admins Only</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Losing%20Contact%20With%20Master%20Node">Losing Contact With Master Node</a>
|
||||
</collection>
|
||||
</collection>
|
||||
<collection label="Deployment Server">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FDeploymentServer%20-%20Application%20Not%20Found%20On%20Deployment%20Server">Application Not Found On Deployment Server</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FDeploymentServer%20-%20btool%20validation%20failures%20occurring%20on%20deployment%20server">btool validation failures occurring on deployment server</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FDeploymentServer%20-%20Forwarder%20has%20changed%20properties%20on%20phone%20home">Forwarder has changed properties on phone home</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FDeploymentServer%20-%20Unsupported%20attribute%20within%20DS%20config">Unsupported attribute within DS config</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%20-%20Application%20Installation%20Failures%20From%20Deployment%20Manager">Application Installation Failures From Deployment Manager</a>
|
||||
<collection label="Generic">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Splunkd%20Log%20Messages%20Admins%20Only">Splunkd Log Messages Admins Only</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FDeploymentServer%20-%20Error%20Found%20On%20Deployment%20Server">Error Found On Deployment Server</a>
|
||||
</collection>
|
||||
<saved name="DeploymentServer - Count by application" />
|
||||
</collection>
|
||||
<collection label="ForwarderLevel">
|
||||
<collection label="OS Level Issues">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Forwarders%20in%20restart%20loop">Forwarders in restart loop</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Splunk%20Forwarder%20Down">Splunk Forwarder Down</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Splunk%20forwarders%20failing%20due%20to%20disk%20space%20issues">Splunk forwarders failing due to disk space issues</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Splunk%20Universal%20Forwarders%20that%20are%20time%20shifting">Splunk Universal Forwarders that are time shifting</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Splunk%20universal%20forwarders%20with%20ulimit%20issues">Splunk universal forwarders with ulimit issues</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Splunk%20Universal%20Forwarders%20Exceeding%20the%20File%20Descriptor%20Cache">Splunk Universal Forwarders Exceeding the File Descriptor Cache</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FMonitoringConsole%20-%20Check%20OS%20ulimits%20via%20REST">MonitoringConsole - Check OS ulimits via REST (useful for HF's only)</a>
|
||||
</collection>
|
||||
<collection label="File Monitoring issues">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20crcSalt%20or%20initCrcLength%20change%20may%20be%20required">crcSalt or initCrcLength change may be required</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20File%20Too%20Small%20to%20checkCRC%20occurring%20multiple%20times">File Too Small to checkCRC occurring multiple times</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Splunk%20Insufficient%20Permissions%20to%20Read%20Files">Splunk Insufficient Permissions to Read Files</a>
|
||||
</collection>
|
||||
<collection label="Deployment Server">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%20-%20Splunk%20forwarders%20that%20are%20not%20talking%20to%20the%20deployment%20server">Splunk forwarders that are not talking to the deployment server</a>
|
||||
<saved name="DeploymentServer - Count by application" />
|
||||
</collection>
|
||||
<collection label="Splunk Level Issues">
|
||||
<collection label="Performance">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Bandwidth%20Throttling%20Occurring">Bandwidth Throttling Occurring</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Read%20operation%20timed%20out%20expecting%20ACK">Read operation timed out expecting ACK</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Splunk%20forwarders%20are%20having%20issues%20with%20sending%20data%20to%20indexers">Splunk forwarders are having issues with sending data to indexers</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Splunk%20Heavy%20logging%20sources">Splunk Heavy logging sources</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%20-%20TCP%20Output%20Processor%20has%20paused%20the%20data%20flow">TCP Output Processor has paused the data flow</a>
|
||||
</collection>
|
||||
<collection label="Data Balance">
|
||||
<saved name="ForwarderLevel - Forwarders connecting to a single endpoint for extended periods UF level" />
|
||||
<saved name="ForwarderLevel - Forwarders connecting to a single endpoint for extended periods" />
|
||||
</collection>
|
||||
<collection label="Failures">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Splunk%20HTTP%20Listener%20Overwhelmed">Splunk HTTP Listener Overwhelmed</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20SplunkStream%20Errors">SplunkStream Errors</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%2520-%2520SSL%2520Errors%2520In%2520Logs%2520%2528Potential%2520Universal%2520Forwarder%2520and%2520License%2520Issue%2529">SSL Errors In Logs (Potential Universal Forwarder and LicenseIssue)</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Unusual%20number%20of%20duplication%20alerts">Unusual number of duplication alerts</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%2520-%2520Splunk%2520HEC%2520issues">Splunk HEC issues</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%20-%20No%20recent%20metrics.log%20data">AllSplunkLevel - No recent metrics.log data</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Stopping%20all%20listening%20ports">Stopping all listening ports</a>
|
||||
<saved name="ForwarderLevel - Data dropping duration" />
|
||||
<collection lable="Generic">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Splunkd%20Log%20Messages%20Admins%20Only">Splunkd Log Messages Admins Only</a>
|
||||
</collection>
|
||||
</collection>
|
||||
</collection>
|
||||
<collection label="Performance">
|
||||
<view name="heavyforwarders_max_data_queue_sizes_by_name" />
|
||||
<view name="heavyforwarders_max_data_queue_sizes_by_name_v8" />
|
||||
<view name="indexer_max_data_queue_sizes_by_name" />
|
||||
<view name="indexer_max_data_queue_sizes_by_name_v8" />
|
||||
<view name="hec_performance" />
|
||||
<view name="splunk_forwarder_output_tuning" />
|
||||
<view name="splunk_forwarder_data_balance_tuning" />
|
||||
<view name="splunk_introspection_io_stats" />
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Channel%20churn%20issues">Channel churn issues</a>
|
||||
</collection>
|
||||
<collection label="syslog-ng">
|
||||
<saved name="syslog-ng - cache statistics summary" />
|
||||
</collection>
|
||||
</collection>
|
||||
<collection label="IndexerLevel">
<collection label="Bucket Related">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Buckets%20have%20being%20frozen%20due%20to%20index%20sizing">Buckets have being frozen due to index sizing</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Buckets%20have%20being%20frozen%20due%20to%20index%20sizing%20SmartStore">Buckets have being frozen due to index sizing SmartStore</a>
<saved name="IndexerLevel - Buckets changes per day" />
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Buckets%20rolling%20more%20frequently%20than%20expected">Buckets rolling more frequently than expected</a>
<saved name="IndexerLevel - Report on bucket corruption" />
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20These%20Indexes%20Are%20Approaching%20The%20warmDBCount%20limit">These Indexes Are Approaching The warmDBCount limit</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20strings_metadata%20triggering%20bucket%20rolling">strings_metadata triggering bucket rolling</a>
<saved name="IndexerLevel - Corrupt buckets via DBInspect" />
<view name="rolled_buckets_by_index" />
<saved name="IndexerLevel - IndexWriter pause duration" />
</collection>
<collection label="Data Ingestion">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Data%20parsing%20error">Data parsing error</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20IndexConfig%20Warnings%20from%20Splunk%20indexers">IndexConfig Warnings from Splunk indexers</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Index%20not%20defined">Index not defined</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Stopping%20all%20listening%20ports">ForwarderLevel - Stopping all listening ports</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20replicationdatareceiverthread%20close%20to%20100%25%20utilisation">IndexerLevel - replicationdatareceiverthread close to 100% utilisation</a>
<saved name="SearchHeadLevel - license usage per sourcetype per index" />
</collection>
<collection label="Data Parsing">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Failures%20To%20Parse%20Timestamp%20Correctly%20%28excluding%20breaking%20issues%29">Failures To Parse Timestamp Correctly (excluding breaking issues)</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Future%20Dated%20Events%20that%20appeared%20in%20the%20last%20week">Future Dated Events that appeared in the last week</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Large%20multiline%20events%20using%20SHOULD_LINEMERGE%20setting">Large multiline events using SHOULD_LINEMERGE setting</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Old%20data%20appearing%20in%20Splunk%20indexes">Old data appearing in Splunk indexes</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Time%20format%20has%20changed%20multiple%20log%20types%20in%20one%20sourcetype">Time format has changed multiple log types in one sourcetype</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Timestamp%20parsing%20issues%20combined%20alert">Timestamp parsing issues combined alert</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Too%20many%20events%20with%20the%20same%20timestamp">Too many events with the same timestamp</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Valid%20Timestamp%20Invalid%20Parsed%20Time">Valid Timestamp Invalid Parsed Time</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Weekly%20Broken%20Events%20Report">Weekly Broken Events Report</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Weekly%20Truncated%20Logs%20Report">Weekly Truncated Logs Report</a>
<view name="issues_per_sourcetype" />
<saved name="IndexerLevel - IndexWriter pause duration" />
</collection>
<collection label="Failures">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20S2SFileReceiver%20Error">S2SFileReceiver Error</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Unclean%20Shutdown%20-%20Fsck">Unclean Shutdown - Fsck</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Losing%20Contact%20With%20Master%20Node">AllSplunkEnterpriseLevel - Losing Contact With Master Node</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20SmartStore%20-%20Bucket%20cache%20errors%20audit%20logs">IndexerLevel - SmartStore - Bucket cache errors audit logs</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%20-%20No%20recent%20metrics.log%20data">AllSplunkLevel - No recent metrics.log data</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Connection%20errors%20to%20SmartStore">Connection errors to SmartStore</a>
<collection label="Generic">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Splunkd%20Log%20Messages%20Admins%20Only">Splunkd Log Messages Admins Only</a>
</collection>
</collection>
<collection label="Performance">
<collection label="Queues">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Indexer%20Queues%20May%20Have%20Issues">Indexer Queues May Have Issues</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Indexer%20replication%20queue%20issues%20to%20some%20peers">Indexer replication queue issues to some peers</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Slow%20peer%20from%20remote%20searches">Slow peer from remote searches</a>
<view name="heavyforwarders_max_data_queue_sizes_by_name" />
<view name="heavyforwarders_max_data_queue_sizes_by_name_v8" />
<view name="indexer_max_data_queue_sizes_by_name" />
<view name="indexer_max_data_queue_sizes_by_name_v8" />
<view name="hec_performance" />
<view name="splunk_forwarder_output_tuning" />
<view name="splunk_forwarder_data_balance_tuning" />
<view name="splunk_introspection_io_stats" />
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Channel%20churn%20issues">ForwarderLevel - Channel churn issues</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20replicationdatareceiverthread%20close%20to%20100%25%20utilisation">IndexerLevel - replicationdatareceiverthread close to 100% utilisation</a>
</collection>
<collection label="Other">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Indexer%20not%20accepting%20TCP%20Connections">Indexer not accepting TCP Connections</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Uneven%20Indexed%20Data%20Across%20The%20Indexers">Uneven Indexed Data Across The Indexers</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FForwarderLevel%20-%20Stopping%20all%20listening%20ports">ForwarderLevel - Stopping all listening ports</a>
<view name="troubleshooting_indexer_cpu" />
<view name="indexer_data_spread" />
<view name="troubleshooting_resource_usage_per_user" />
<view name="detect_excessive_search_use" />
<view name="hec_performance" />
<view name="splunk_introspection_io_stats" />
<saved name="IndexerLevel - Knowledge bundle upload stats" />
<saved name="SearchHeadLevel - Knowledge bundle replication times metrics.log" />
<saved name="SearchHeadLevel - Search Messages field extractor slow" />
<saved name="IndexerLevel - IndexWriter pause duration" />
<saved name="IndexerLevel - events per second benchmark" />
<saved name="IndexerLevel - savedsearches by indexer execution time" />
<saved name="SearchHeadLevel - Indexes for savedsearch without subsearches" />
</collection>
<collection label="SmartStore">
<saved name="SearchHeadLevel - SmartStore cache misses - savedsearches" />
<saved name="SearchHeadLevel - SmartStore cache misses - dashboards" />
<saved name="SearchHeadLevel - SmartStore cache misses - combined" />
<saved name="IndexerLevel - SmartStore cache misses - remote_searches" />
<saved name="IndexerLevel - Buckets in cache" />
<view name="smartstore_stats" />
</collection>
</collection>
<collection label="Search Related">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Peer%20will%20not%20return%20results%20due%20to%20outdated%20generation">Peer will not return results due to outdated generation</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Search%20Failures">Search Failures</a>
<saved name="IndexerLevel - Maximum memory utilisation per search" />
<saved name="IndexerLevel - RemoteSearches find all time searches" />
<saved name="IndexerLevel - RemoteSearches find datamodel acceleration with wildcards" />
<saved name="IndexerLevel - RemoteSearches - lookup usage" />
<collection label="SmartStore">
<saved name="SearchHeadLevel - SmartStore cache misses - savedsearches" />
<saved name="SearchHeadLevel - SmartStore cache misses - dashboards" />
<saved name="SearchHeadLevel - SmartStore cache misses - combined" />
<saved name="IndexerLevel - SmartStore cache misses - remote_searches" />
<saved name="IndexerLevel - Buckets in cache" />
<view name="smartstore_stats" />
<view name="splunk_introspection_io_stats" />
</collection>
</collection>
<collection label="Sizing Related">
<collection label="Volumes">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Cold%20data%20location%20approaching%20size%20limits">Cold data location approaching size limits</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Volume%20%28Cold%29%20Has%20Been%20Exceeded">Volume (Cold) Has Been Exceeded</a>
</collection>
<collection label="Other">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Indexer%20Out%20Of%20Disk%20Space">Indexer Out Of Disk Space</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Rolling%20Hot%20Bucket%20Failure">Rolling Hot Bucket Failure</a>
</collection>
</collection>
<collection label="Summary_Reports">
<saved name="SearchHeadLevel - platform_stats.audit metrics searches" />
<saved name="SearchHeadLevel - platform_stats.audit metrics users" />
<saved name="SearchHeadLevel - platform_stats.audit metrics api" />
<saved name="SearchHeadLevel - platform_stats.audit metrics users 24hour" />
<saved name="SearchHeadLevel - platform_stats.users dashboards" />
<saved name="SearchHeadLevel - platform_stats.users savedsearches" />
<saved name="SearchHeadLevel - platform_stats.user_stats.introspection metrics populating search" />
<saved name="SearchHeadLevel - platform_stats access summary" />
<saved name="SearchHeadLevel - platform_stats.remote_searches metrics populating search" />
<saved name="SearchHeadLevel - platform_stats.remote_searches metrics populating search 24 hour" />
<saved name="SearchHeadLevel - audit.log - lookup usage" />
<saved name="SearchHeadLevel - Lookup Editor lookup updates" />
<saved name="IndexerLevel - platform_stats.counters hosts" />
<saved name="IndexerLevel - platform_stats.counters hosts 24hour" />
<saved name="IndexerLevel - platform_stats.indexers totalgb measurement" />
<saved name="IndexerLevel - platform_stats.indexers totalgb_thruput measurement" />
<saved name="IndexerLevel - platform_stats.indexers stddev measurement" />
<saved name="IndexerLevel - platform_stats.indexers stddev incoming measurement" />
<saved name="IndexerLevel - RemoteSearches Indexes Stats" />
<saved name="IndexerLevel - RemoteSearches Indexes Stats Wilcard" />
<saved name="IndexerLevel - RemoteSearches - lookup usage" />
</collection>
</collection>
<collection label="LicenseMaster">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FLicenseMaster%20-%20Duplicated%20License%20Situation">Duplicated License Situation</a>
</collection>
<collection label="SearchHeadLevel">
<collection label="Analytics">
<saved name="SearchHeadLevel - audit.log - lookup usage" />
<saved name="SearchHeadLevel - Detect lookups that have not being accessed for a period of time" />
<saved name="SearchHeadLevel - Lookup Editor lookup updates" />
<saved name="SearchHeadLevel - indexes per savedsearch" />
<saved name="SearchHeadLevel - macros in use" />
<saved name="SearchHeadLevel - Search Queries Per Day Audit Logs" />
<saved name="SearchHeadLevel - Search Queries By Type Audit Logs" />
<saved name="SearchHeadLevel - Search Queries By Type Audit Logs macro version" />
<saved name="SearchHeadLevel - Search Queries By Type Audit Logs macro version other" />
<saved name="SearchHeadLevel - Search Queries summary exact match" />
<saved name="SearchHeadLevel - Search Queries summary non-exact match" />
<saved name="SearchHeadLevel - Search Queries summary exact match by user" />
<saved name="SearchHeadLevel - Search Queries summary exact match by index" />
<saved name="SearchHeadLevel - Search Queries summary loadjob and savedsearch usage in audit logs" />
<saved name="SearchHeadLevel - Sourcetypes usage from search telemetry data" />
<saved name="SearchHeadLevel - Searches by search type" />
<saved name="SearchHeadLevel - IndexesPerUser Report" />
<saved name="SearchHeadLevel - license usage per sourcetype per index" />
<saved name="SearchHeadLevel - Lookup file owners" />
<saved name="SearchHeadLevel - REST API usage via audit.log" />
<saved name="SearchHeadLevel - Lookups within a dashboard" />
<saved name="SearchHeadLevel - Lookups within savedsearches" />
<saved name="SearchHeadLevel - Job performance data per indexer" />
<saved name="SearchHeadLevel - Job performance data per indexer handoff time" />
<saved name="SearchHeadLevel - Jobs endpoint example" />
<saved name="SearchHeadLevel - configtracker index example" />
<saved name="SearchHeadLevel - configtracker index example2" />
<saved name="IndexerLevel - RemoteSearches Indexes Stats" />
<saved name="IndexerLevel - RemoteSearches Indexes Stats Wilcard" />
<saved name="IndexerLevel - RemoteSearches - lookup usage" />
<saved name="IndexerLevel - events per second benchmark" />
</collection>
<collection label="Data Models">
<saved name="SearchHeadLevel - Data Model Acceleration Completion Status" />
<saved name="SearchHeadLevel - DataModel Fields" />
<saved name="SearchHeadLevel - Accelerated DataModels Access Info" />
<saved name="SearchHeadLevel - Datamodel REST endpoint indexes in use" />
<saved name="IndexerLevel - DataModel Acceleration - Indexes in use" />
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20datamodel%20errors%20in%20splunkd">datamodel errors in splunkd</a>
<view name="data_model_rebuild_monitor" />
<view name="data_model_status" />
</collection>
<collection label="Failures">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Detect%20MongoDB%20errors">Detect MongoDB errors</a>
<saved name="SearchHeadLevel - Detect searches hitting corrupt buckets" />
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Indexer%20Peer%20Connection%20Failures">Indexer Peer Connection Failures</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20KVStore%20Or%20Conf%20Replication%20Issues%20Are%20Occurring">KVStore Or Conf Replication Issues Are Occurring</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Long%20filenames%20may%20be%20causing%20issues">Long filenames may be causing issues</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Script%20failures%20in%20the%20last%20day">Script failures in the last day</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20SHCluster%20Artifact%20Replication%20Issues">SHCluster Artifact Replication Issues</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20SHC%20Captain%20unable%20to%20establish%20common%20bundle">SHC Captain unable to establish common bundle</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20splunk_search_messages%20dispatch">splunk_search_messages dispatch</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20dispatch%20metadata%20files%20may%20need%20removal">dispatch metadata files may need removal</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Dashboards%20invalid%20character%20in%20splunkd">Dashboards invalid character in splunkd</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20savedsearches%20invalid%20character%20in%20splunkd">savedsearches invalid character in splunkd</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20datamodel%20errors%20in%20splunkd">datamodel errors in splunkd</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20SmartStore%20-%20Bucket%20cache%20errors%20audit%20logs">IndexerLevel - SmartStore - Bucket cache errors audit logs</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkLevel%20-%20No%20recent%20metrics.log%20data">AllSplunkLevel - No recent metrics.log data</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Detect%20bundle%20pushes%20no%20longer%20occurring">Detect bundle pushes no longer occurring</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Peer%20timeouts%20or%20authentication%20issues">Peer timeouts or authentication issues</a>
<collection label="Generic">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FAllSplunkEnterpriseLevel%20-%20Splunkd%20Log%20Messages%20Admins%20Only">Splunkd Log Messages Admins Only</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Search%20Messages%20user%20level">Search Messages user level</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Search%20Messages%20admins%20only">Search Messages admins only</a>
</collection>
<saved name="SearchHeadLevel - Knowledge Bundle contents" />
</collection>
<collection label="Non best-practice">
<collection label="Realtime searches">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Realtime%20Scheduled%20Searches%20are%20in%20use">Realtime Scheduled Searches are in use</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Realtime%20Search%20Queries%20in%20dashboards">Realtime Search Queries in dashboards</a>
</collection>
<collection label="Data Models">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Accelerated%20DataModels%20with%20All%20Time%20Searching%20Enabled">Accelerated DataModels with All Time Searching Enabled</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Accelerated%20DataModels%20with%20wildcard%20or%20no%20index%20specified">Accelerated DataModels with wildcard or no index specified</a>
</collection>
<collection label="Dashboards">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20User%20-%20Dashboards%20searching%20all%20indexes%20macro%20version">User - Dashboards searching all indexes macro version</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20User%20-%20Dashboards%20searching%20all%20indexes">User - Dashboards searching all indexes</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Dashboards%20with%20all%20time%20searches%20set">SearchHeadLevel - Dashboards with all time searches set</a>
<saved name="SearchHeadLevel - Dashboard refresh intervals" />
<saved name="SearchHeadLevel - Dashboards using depends and running searches in the background" />
<saved name="SearchHeadLevel - Dashboards using special characters" />
<saved name="SearchHeadLevel - Dashboards resulting in concurrency issues" />
<saved name="SearchHeadLevel - Dashboards that may benefit from base or post-process searches" />
</collection>
<collection label="Scheduled Searches">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Scheduled%20searches%20not%20specifying%20an%20index%20macro%20version">Scheduled searches not specifying an index macro version</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Scheduled%20searches%20not%20specifying%20an%20index">Scheduled searches not specifying an index</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Scheduled%20Searches%20without%20a%20configured%20earliest%20and%20latest%20time">Scheduled Searches without a configured earliest and latest time</a>
<saved name="SearchHeadLevel - Summary searches using realtime search scheduling" />
<saved name="SearchHeadLevel - SavedSearches using special characters" />
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Splunk%20alert%20actions%20exceeding%20the%20max_action_results%20limit">Splunk alert actions exceeding the max_action_results limit</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Splunk%20Scheduler%20logs%20have%20not%20appeared%20in%20the%20last">Splunk Scheduler logs have not appeared in the last</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20summary%20indexing%20searches%20not%20using%20durable%20search">SearchHeadLevel - summary indexing searches not using durable search</a>
<saved name="SearchHeadLevel - Savedsearches with schedules and no next_scheduled_time" />
</collection>
<collection label="Other">
<saved name="SearchHeadLevel - Knowledge bundle replication times metrics.log" />
<saved name="SearchHeadLevel - audit logs showing all time searches" />
<saved name="IndexerLevel - RemoteSearches find all time searches" />
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Excessive%20REST%20API%20usage">SearchHeadLevel - Excessive REST API usage</a>
<saved name="SearchHeadLevel - Knowledge Bundle contents" />
</collection>
</collection>
<collection label="Performance Issues">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Captain%20Switchover%20Occurring">Captain Switchover Occurring</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Disabled%20modular%20inputs%20are%20running">Disabled modular inputs are running</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Long%20Running%20Searches%20Found">Long Running Searches Found</a>
<view name="search_head_scheduledsearches_distribution" />
<view name="detect_excessive_search_use" />
<view name="splunk_introspection_io_stats" />
<saved name="SearchHeadLevel - Maximum memory utilisation per search" />
<saved name="SearchHeadLevel - Detect Excessive Search Use - Dashboard - Automated" />
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20SHC%20Captain%20unable%20to%20establish%20common%20bundle">SearchHeadLevel - SHC Captain unable to establish common bundle</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%20-%20Slow%20peer%20from%20remote%20searches">Slow peer from remote searches</a>
<saved name="SearchHeadLevel - Search Messages field extractor slow" />
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Excessive%20REST%20API%20usage">SearchHeadLevel - Excessive REST API usage</a>
<saved name="SearchHeadLevel - Knowledge bundle replication times metrics.log" />
</collection>
<collection label="Proactive">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20LDAP%20users%20have%20been%20disabled%20or%20left%20the%20company%20cleanup%20required">LDAP users have been disabled or left the company cleanup required</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Saved%20Searches%20with%20privileged%20owners%20and%20excessive%20write%20perms">Saved Searches with privileged owners and excessive write perms</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Scheduled%20Searches%20Configured%20with%20incorrect%20sharing">Scheduled Searches Configured with incorrect sharing</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Splunk%20login%20attempts%20from%20users%20that%20do%20not%20have%20any%20LDAP%20roles">Splunk login attempts from users that do not have any LDAP roles</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20authorize.conf%20settings%20will%20prevent%20some%20users%20from%20appearing%20in%20the%20UI">SearchHeadLevel - authorize.conf settings will prevent some users from appearing in the UI</a>
<saved name="SearchHeadLevel - Knowledge Bundle contents" />
<saved name="SearchHeadLevel - Lookup definitions with no lookup file or kvstore collection" />
<saved name="SearchHeadLevel - User created kvstore collections" />
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20summary%20indexing%20searches%20not%20using%20durable%20search">SearchHeadLevel - summary indexing searches not using durable search</a>
</collection>
<collection label="Quotas">
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Splunk%20Max%20Historic%20Search%20Limits%20Reached">Splunk Max Historic Search Limits Reached</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Splunk%20Users%20Violating%20the%20Search%20Quota">Splunk Users Violating the Search Quota</a>
<saved name="SearchHeadLevel - Users exceeding the disk quota introspection" />
<saved name="SearchHeadLevel - Users with auto-finalized searches" />
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Users%20exceeding%20the%20disk%20quota">Users exceeding the disk quota</a>
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20WLM%20aborted%20searches">WLM aborted searches</a>
</collection>
<collection label="SmartStore">
<saved name="SearchHeadLevel - SmartStore cache misses - savedsearches" />
<saved name="SearchHeadLevel - SmartStore cache misses - dashboards" />
<saved name="SearchHeadLevel - SmartStore cache misses - combined" />
<saved name="IndexerLevel - SmartStore cache misses - remote_searches" />
<saved name="IndexerLevel - Buckets in cache" />
<view name="smartstore_stats" />
</collection>
<collection label="Reports">
<saved name="SearchHeadLevel - Alerts that have not fired an action in X days" />
<saved name="SearchHeadLevel - Audit log search example only" />
<saved name="SearchHeadLevel - Determine query scan density" />
<saved name="SearchHeadLevel - Role access list by user" />
<saved name="Scheduled Search Efficiency" />
<saved name="SearchHeadLevel - Dashboard load times" />
<saved name="SearchHeadLevel - Scheduled searches status" />
<saved name="SearchHeadLevel - Detect changes to knowledge objects" />
<saved name="SearchHeadLevel - Detect changes to knowledge objects directory" />
<saved name="SearchHeadLevel - Detect changes to knowledge objects non-directory" />
<saved name="SearchHeadLevel - Lookup updates within SHC" />
<saved name="SearchHeadLevel - Lookup definitions with no lookup file or kvstore collection" />
<saved name="SearchHeadLevel - indexes per savedsearch" />
<saved name="SearchHeadLevel - macros in use" />
<saved name="SearchHeadLevel - SHC conf log summary" />
<saved name="SearchHeadLevel - Searches dispatched as owner by other users" />
|
||||
<saved name="SearchHeadLevel - Lookup CSV size" />
|
||||
<saved name="SearchHeadLevel - KVStore collection size" />
|
||||
<saved name="SearchHeadLevel - audit logs showing all time searches" />
|
||||
<saved name="SearchHeadLevel - audit.log - lookup usage" />
|
||||
<saved name="SearchHeadLevel - Detect lookups that have not being accessed for a period of time" />
|
||||
<saved name="SearchHeadLevel - Lookup Editor lookup updates" />
|
||||
<saved name="SearchHeadLevel - REST API usage via audit.log" />
|
||||
<saved name="SearchHeadLevel - User created kvstore collections" />
|
||||
<saved name="IndexerLevel - RemoteSearches find all time searches" />
|
||||
<saved name="IndexerLevel - RemoteSearches find datamodel acceleration with wildcards" />
|
||||
<saved name="IndexerLevel - RemoteSearches - lookup usage" />
|
||||
<saved name="SearchHeadLevel - Search Messages field extractor slow" />
|
||||
<saved name="SearchHeadLevel - SmartStore cache misses - savedsearches" />
|
||||
<saved name="SearchHeadLevel - SmartStore cache misses - dashboards" />
|
||||
<saved name="SearchHeadLevel - SmartStore cache misses - combined" />
|
||||
<saved name="IndexerLevel - SmartStore cache misses - remote_searches" />
|
||||
<saved name="IndexerLevel - Buckets in cache" />
|
||||
<view name="knowledge_objects_by_app" />
|
||||
<view name="lookups_in_use_finder" />
|
||||
<view name="lookup_audit" />
|
||||
<saved name="SearchHeadLevel - Lookup file owners" />
|
||||
<saved name="SearchHeadLevel - Lookups within a dashboard" />
|
||||
<saved name="SearchHeadLevel - Lookups within savedsearches" />
|
||||
<saved name="SearchHeadLevel - Knowledge bundle status on indexers" />
|
||||
<saved name="SearchHeadLevel - Knowledge bundle replication times metrics.log" />
|
||||
<saved name="SearchHeadLevel - Knowledge Bundle contents" />
|
||||
<saved name="SearchHeadLevel - license usage per sourcetype per index" />
|
||||
<saved name="syslog-ng - cache statistics summary" />
|
||||
<saved name="IndexerLevel - events per second benchmark" />
|
||||
<saved name="IndexerLevel - savedsearches by indexer execution time" />
|
||||
<saved name="SearchHeadLevel - Indexes for savedsearch without subsearches" />
|
||||
</collection>
|
||||
<collection label="Summary_Reports">
|
||||
<saved name="SearchHeadLevel - audit.log - lookup usage" />
|
||||
<saved name="SearchHeadLevel - Lookup Editor lookup updates" />
|
||||
<saved name="SearchHeadLevel - license usage per sourcetype per index" />
|
||||
<saved name="SearchHeadLevel - indexes per savedsearch" />
|
||||
<saved name="SearchHeadLevel - macros in use" />
|
||||
<saved name="SearchHeadLevel - platform_stats.audit metrics searches" />
|
||||
<saved name="SearchHeadLevel - platform_stats.audit metrics users" />
|
||||
<saved name="SearchHeadLevel - platform_stats.audit metrics users 24hour" />
|
||||
<saved name="SearchHeadLevel - platform_stats.users dashboards" />
|
||||
<saved name="SearchHeadLevel - platform_stats.users savedsearches" />
|
||||
<saved name="SearchHeadLevel - platform_stats.audit metrics api" />
|
||||
<saved name="SearchHeadLevel - platform_stats.user_stats.introspection metrics populating search" />
|
||||
<saved name="SearchHeadLevel - platform_stats access summary" />
|
||||
<saved name="SearchHeadLevel - platform_stats.remote_searches metrics populating search" />
|
||||
<saved name="SearchHeadLevel - platform_stats.remote_searches metrics populating search 24 hour" />
|
||||
<saved name="IndexerLevel - platform_stats.counters hosts" />
|
||||
<saved name="IndexerLevel - platform_stats.counters hosts 24hour" />
|
||||
<saved name="IndexerLevel - platform_stats.indexers totalgb measurement" />
|
||||
<saved name="IndexerLevel - platform_stats.indexers totalgb_thruput measurement" />
|
||||
<saved name="IndexerLevel - platform_stats.indexers stddev measurement" />
|
||||
<saved name="IndexerLevel - platform_stats.indexers stddev incoming measurement" />
|
||||
<saved name="IndexerLevel - RemoteSearches Indexes Stats" />
|
||||
<saved name="IndexerLevel - RemoteSearches Indexes Stats Wilcard" />
|
||||
<saved name="IndexerLevel - RemoteSearches - lookup usage" />
|
||||
</collection>
|
||||
<collection label="Scheduled Search Failures">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Scheduled%20searches%20failing%20in%20cluster%20with%20404%20error">Scheduled searches failing in cluster with 404 error</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20Scheduled%20Searches%20That%20Cannot%20Run">Scheduled Searches That Cannot Run</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%20-%20savedsearches%20invalid%20character%20in%20splunkd">savedsearches invalid character in splunkd</a>
|
||||
</collection>
|
||||
<collection label="SupportingReports">
|
||||
<saved name="SearchHeadLevel - Index access list by user" />
|
||||
<saved name="SearchHeadLevel - Index list report" />
|
||||
<saved name="SearchHeadLevel - Index list by cluster report" />
|
||||
<saved name="SearchHeadLevel - IndexesPerRole Remote Report" />
|
||||
<saved name="SearchHeadLevel - IndexesPerRole Report" />
|
||||
<saved name="SearchHeadLevel - Macro report" />
|
||||
<saved name="SearchHeadLevel - DataModels report" />
|
||||
<saved name="SearchHeadLevel - Tags report" />
|
||||
<saved name="SearchHeadLevel - EventTypes report" />
|
||||
<saved name="SearchHeadLevel - Users exceeding the disk quota introspection cleanup" />
|
||||
<saved name="SearchHeadLevel - RMD5 to savedsearch_name lookupgen report" />
|
||||
<saved name="SearchHeadLevel - Lookup file owners" />
|
||||
</collection>
|
||||
<collection label="Recommended (externally hosted)">
|
||||
<a href="https://github.com/silkyrich/cluster_health_tools/">The cluster_health_tools git repository contains very useful dashboards for various indexer related performance stats</a>
|
||||
<a href="https://github.com/dpaper-splunk/public/tree/master/dashboards" target="_blank">Extended Search Reporting (and others)</a>
|
||||
<a href="https://github.com/nicovdw/splunk_concurrency_helper" target="_blank">Search Scheduler Tuning searches</a>
|
||||
<a href="https://splunkbase.splunk.com/app/6449/" target="_blank">Sideview UI (User Activity details)</a>
|
||||
<a href="https://splunkbase.splunk.com/app/6368/" target="_blank">Admins Little Helper for Splunk (btool, bundle utils and similar)</a>
|
||||
<a href="https://splunkbase.splunk.com/app/4621/" target="_blank">TrackMe (Data Ingestion)</a>
|
||||
<a href="https://github.com/redvelociraptor/gettingsmarter/tree/main">Getting Smarter about Splunk SmartStore (including HEC dashboards)</a>
|
||||
<a href="https://github.com/TheWoodRanger/presentation-conf_24_audittrail_native_telemetry">Maximizing Splunk Core: Analyzing Splunk Searches Using Audittrail and Native Splunk Telemetry</a>
|
||||
</collection>
|
||||
</collection>
|
||||
<collection label="Summary_Reports">
|
||||
<saved name="SearchHeadLevel - audit.log - lookup usage" />
|
||||
<saved name="SearchHeadLevel - Lookup Editor lookup updates" />
|
||||
<saved name="SearchHeadLevel - license usage per sourcetype per index" />
|
||||
<saved name="SearchHeadLevel - indexes per savedsearch" />
|
||||
<saved name="SearchHeadLevel - macros in use" />
|
||||
<saved name="SearchHeadLevel - platform_stats.audit metrics searches" />
|
||||
<saved name="SearchHeadLevel - platform_stats.audit metrics users" />
|
||||
<saved name="SearchHeadLevel - platform_stats.audit metrics api" />
|
||||
<saved name="SearchHeadLevel - platform_stats.audit metrics users 24hour" />
|
||||
<saved name="SearchHeadLevel - platform_stats.users dashboards" />
|
||||
<saved name="SearchHeadLevel - platform_stats.users savedsearches" />
|
||||
<saved name="SearchHeadLevel - platform_stats.user_stats.introspection metrics populating search" />
|
||||
<saved name="SearchHeadLevel - platform_stats access summary" />
|
||||
<saved name="SearchHeadLevel - platform_stats.remote_searches metrics populating search" />
|
||||
<saved name="SearchHeadLevel - platform_stats.remote_searches metrics populating search 24 hour" />
|
||||
<saved name="IndexerLevel - platform_stats.counters hosts" />
|
||||
<saved name="IndexerLevel - platform_stats.counters hosts 24hour" />
|
||||
<saved name="IndexerLevel - platform_stats.indexers totalgb measurement" />
|
||||
<saved name="IndexerLevel - platform_stats.indexers totalgb_thruput measurement" />
|
||||
<saved name="IndexerLevel - platform_stats.indexers stddev measurement" />
|
||||
<saved name="IndexerLevel - platform_stats.indexers stddev incoming measurement" />
|
||||
<saved name="IndexerLevel - RemoteSearches Indexes Stats" />
|
||||
<saved name="IndexerLevel - RemoteSearches Indexes Stats Wilcard" />
|
||||
<saved name="IndexerLevel - RemoteSearches - lookup usage" />
|
||||
</collection>
|
||||
<collection label="Users">
|
||||
<saved name="What Access Do I Have Without REST?" />
|
||||
</collection>
|
||||
<collection label="MonitoringConsole">
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FMonitoringConsole%20-%20Core%20dumps%20have%20appeared%20on%20the%20filesystem">Core dumps have appeared on the filesystem</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FMonitoringConsole%20-%20Crash%20logs%20have%20appeared%20on%20the%20filesystem">Crash logs have appeared on the filesystem</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FMonitoringConsole%20-%20one%20or%20more%20servers%20require%20configuration">one or more servers require configuration</a>
|
||||
<a href="/app/SplunkAdmins/alert?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FMonitoringConsole%20-%20one%20or%20more%20servers%20require%20configuration%20automated">one or more servers require configuration automated</a>
|
||||
</collection>
|
||||
</nav>
|
||||
@ -0,0 +1,107 @@
<form version="1.1">
<label>ClusterMasterJobs</label>
<fieldset submitButton="false">
<input type="time" token="time">
<label></label>
<default>
<earliest>-15m</earliest>
<latest>now</latest>
</default>
</input>
<input type="text" token="span">
<label>span</label>
<default>2m</default>
</input>
</fieldset>
<row>
<panel>
<title>Job Count</title>
<chart>
<search>
<query>index=_internal `splunkadmins_clustermaster_oshost` sourcetype=splunkd `splunkadmins_splunkd_source` *CMRepJob running job | timechart span=$span$ count by job</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">none</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">line</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">right</option>
<option name="charting.lineWidth">2</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">0</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Fixup Jobs</title>
<chart>
<search>
<query>index=_internal `splunkadmins_metrics_source` sourcetype=splunkd name=cmmaster_service `splunkadmins_clustermaster_oshost` group=subtask_counts
| timechart max(to_fix_gen), max(to_fix_rep_factor), max(to_fix_search_factor) span=$span$</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">none</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">column</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">right</option>
<option name="charting.lineWidth">2</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">0</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
</chart>
</panel>
</row>
</form>
@ -0,0 +1,276 @@
<form version="1.1">
<label>Dashboard - Data Model Rebuild Monitor</label>
<description>Originally based on the work at https://conf.splunk.com/files/2017/slides/running-enterprise-security-at-capacity-tuning-es-with-data-model-acceleration.pdf, modified to work without the macros and with corrected datamodel sizing (and misc tweaks).</description>
<fieldset submitButton="false">
<input type="dropdown" token="dm">
<label>Data model (on this search head or cluster)</label>
<search>
<query>| rest /services/admin/summarization by_tstats=t splunk_server=local count=0
| eval datamodel=replace('summary.id',"DM_".'eai:acl.app'."_","")
| fields datamodel
| sort 100 + datamodel</query>
</search>
<fieldForLabel>datamodel</fieldForLabel>
<fieldForValue>datamodel</fieldForValue>
</input>
<input type="dropdown" token="earliest_token" depends="$value_never_set$">
<label>field1</label>
<fieldForLabel>acceleration.earliest_time</fieldForLabel>
<fieldForValue>acceleration.earliest_time</fieldForValue>
<search>
<query>| rest /services/configs/conf-datamodels | search title=$dm$ | fields acceleration.earliest_time</query>
<earliest>0</earliest>
<latest></latest>
</search>
<selectFirstChoice>true</selectFirstChoice>
</input>
</fieldset>
<row>
<panel>
<html>
<h1>$dm$ data model config</h1>
</html>
</panel>
</row>
<row>
<panel>
<single>
<search>
<query>| rest /services/configs/conf-datamodels
| search title=$dm$
| fields acceleration.earliest_time</query>
<earliest>@d</earliest>
<latest>now</latest>
</search>
<option name="colorBy">value</option>
<option name="colorMode">none</option>
<option name="drilldown">all</option>
<option name="numberPrecision">0</option>
<option name="refresh.display">progressbar</option>
<option name="showSparkline">1</option>
<option name="showTrendIndicator">1</option>
<option name="trendColorInterpretation">standard</option>
<option name="trendDisplayMode">absolute</option>
<option name="underLabel">Retention (earliest)</option>
<option name="unitPosition">after</option>
<option name="useColors">0</option>
<option name="useThousandSeparators">1</option>
</single>
<single>
<search>
<query>| rest /services/configs/conf-datamodels
| search title=$dm$
| fields acceleration.backfill_time</query>
<earliest>@d</earliest>
<latest>now</latest>
</search>
<option name="colorBy">value</option>
<option name="colorMode">none</option>
<option name="drilldown">all</option>
<option name="numberPrecision">0</option>
<option name="refresh.display">progressbar</option>
<option name="showSparkline">1</option>
<option name="showTrendIndicator">1</option>
<option name="trendColorInterpretation">standard</option>
<option name="trendDisplayMode">absolute</option>
<option name="underLabel">Backfill target</option>
<option name="unitPosition">before</option>
<option name="useColors">0</option>
<option name="useThousandSeparators">1</option>
</single>
<single>
<search>
<query>| rest /services/admin/summarization by_tstats=t splunk_server=local count=0
| eval datamodel=replace('summary.id',"DM_".'eai:acl.app'."_","")
| fields summary.complete, datamodel
| rename summary.complete AS complete
| search datamodel=$dm$
| eval "complete(%)"=round(complete*100,1)."%"
| fields "complete(%)"</query>
<earliest>0.000</earliest>
<latest></latest>
</search>
<option name="colorBy">value</option>
<option name="colorMode">none</option>
<option name="drilldown">all</option>
<option name="numberPrecision">0</option>
<option name="refresh.display">progressbar</option>
<option name="showSparkline">1</option>
<option name="showTrendIndicator">1</option>
<option name="trendColorInterpretation">standard</option>
<option name="trendDisplayMode">absolute</option>
<option name="underLabel">Backfill complete</option>
<option name="unitPosition">after</option>
<option name="useColors">0</option>
<option name="useThousandSeparators">1</option>
</single>
<single>
<search>
<query>| rest /services/configs/conf-datamodels
| search title=$dm$
| fields acceleration.max_concurrent</query>
<earliest>@d</earliest>
<latest>now</latest>
</search>
<option name="colorBy">value</option>
<option name="colorMode">none</option>
<option name="drilldown">all</option>
<option name="numberPrecision">0</option>
<option name="refresh.display">progressbar</option>
<option name="showSparkline">1</option>
<option name="showTrendIndicator">1</option>
<option name="trendColorInterpretation">standard</option>
<option name="trendDisplayMode">absolute</option>
<option name="underLabel">max concurrent summarisation jobs</option>
<option name="unitPosition">after</option>
<option name="useColors">0</option>
<option name="useThousandSeparators">1</option>
</single>
<single>
<search>
<query>| rest /services/configs/conf-datamodels
| search title=$dm$
| fields acceleration.max_time</query>
<earliest>@d</earliest>
<latest>now</latest>
</search>
<option name="colorBy">value</option>
<option name="colorMode">none</option>
<option name="drilldown">all</option>
<option name="numberPrecision">0</option>
<option name="refresh.display">progressbar</option>
<option name="showSparkline">1</option>
<option name="showTrendIndicator">1</option>
<option name="trendColorInterpretation">standard</option>
<option name="trendDisplayMode">absolute</option>
<option name="underLabel">max acceleration runtime in seconds</option>
<option name="unitPosition">after</option>
<option name="useColors">0</option>
<option name="useThousandSeparators">1</option>
</single>
<single>
<search>
<query>```The author's original attempt of | `datamodel("Splunk_Audit", "Datamodel_Acceleration | `drop_dm_object_name("Datamodel_Acceleration")` just did not appear to show accurate numbers when compared to the filesystem of the indexers
The previous attempt at this number via | rest "/services/admin/introspection--disk-objects--summaries?count=-1" ... worked fine *unless* there were multiple search head GUIDs in the introspection data, in which case it seems to return 1 set only (resulting in highly inaccurate numbers in some cases)
Now querying the introspection data instead as that provides consistently accurate numbers```
index=_introspection `indexerhosts` component=summaries "data.name"=*$dm$
| stats latest(data.total_size) AS size by data.search_head_guid, data.related_indexes_count, data.related_indexes, host
| stats sum(size) AS size</query>
<earliest>@d</earliest>
<latest>now</latest>
</search>
<option name="colorBy">value</option>
<option name="colorMode">none</option>
<option name="drilldown">all</option>
<option name="numberPrecision">0</option>
<option name="rangeColors">["0x65a637","0x6db7c6","0xf7bc38","0xf58f39","0xd93f3c"]</option>
<option name="rangeValues">[0,30,70,100]</option>
<option name="refresh.display">progressbar</option>
<option name="showSparkline">1</option>
<option name="showTrendIndicator">1</option>
<option name="trendColorInterpretation">standard</option>
<option name="trendDisplayMode">absolute</option>
<option name="underLabel">data size in MB</option>
<option name="unitPosition">after</option>
<option name="useColors">0</option>
<option name="useThousandSeparators">1</option>
</single>
</panel>
</row>
<row>
<panel>
<html>
<h1>$dm$ data model acceleration state</h1>
</html>
</panel>
</row>
<row>
<panel>
<title>$dm$ event counts - Monitor lag and backfill</title>
<chart>
<title>Backfill view over the last 2 hours</title>
<search>
<query>| tstats prestats=t summariesonly=t allow_old_summaries=t count from datamodel=$dm$ by _time span=10s
| timechart count span=10s</query>
<earliest>-2h</earliest>
<latest>now</latest>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">collapsed</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">line</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">none</option>
<option name="height">275</option>
<option name="refresh.display">progressbar</option>
</chart>
<chart>
<title>Backfill view over time range of DM acceleration (and -1w)</title>
<search>
<query>|tstats prestats=t allow_old_summaries=t summariesonly=t count from datamodel=$dm$ by _time span=4h| timechart count span=4h</query>
<earliest>$earliest_token$-1w</earliest>
<latest>now</latest>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">collapsed</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">line</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">none</option>
<option name="height">275</option>
</chart>
</panel>
<panel>
<table>
<title>$dm$ recent acceleration jobs</title>
<search>
<query>index=_internal source=*scheduler.log _ACCELERATE_DM_*$dm$_ACCELERATE_ | eval scheduled=strftime(scheduled_time,"%c")
| stats values(scheduled) as scheduled, values(scheduled_time) as scheduled_time, list(status) as statuses, values(run_time) as run_time by savedsearch_name sid | sort - scheduled_time
| eval done=if(isnull(run_time),"running","done")
| eval run_time=tostring(if(isnull(run_time),now()-scheduled_time,run_time),"duration") | fields - scheduled_time savedsearch_name sid </query>
<earliest>@d</earliest>
<latest>now</latest>
</search>
<option name="count">10</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">cell</option>
<option name="rowNumbers">false</option>
<option name="wrap">true</option>
</table>
</panel>
</row>
</form>
@ -0,0 +1,168 @@
|
||||
<form version="1.1">
|
||||
<label>Dashboard - Data Model Status</label>
|
||||
<description>Originally based on the work on URL https://conf.splunk.com/files/2017/slides/running-enterprise-security-at-capacity-tuning-es-with-data-model-acceleration.pdf modified to work without the macros (and misc tweaks)</description>
|
||||
<fieldset submitButton="false">
|
||||
<input type="time" token="timepicker1">
|
||||
<label></label>
|
||||
<default>
|
||||
<earliest>-4h@m</earliest>
|
||||
<latest>now</latest>
|
||||
</default>
|
||||
</input>
|
||||
</fieldset>
|
||||
<row>
|
||||
<panel>
|
||||
<chart>
|
||||
<title>Skipped searches ($timepicker1.earliest$ to $timepicker1.latest$)</title>
|
||||
<search>
|
||||
<query>index=_internal `searchheadhosts` sourcetype=scheduler status="skipped"
|
||||
| eval type=if(match(savedsearch_name,"^_ACCELERATE_"),"DM","non-DM")
|
||||
| eval reason = if(isnull(reason) OR reason == "", "none", reason)
|
||||
| eval combo=type . " - " . reason
|
||||
| timechart span=5m count by combo</query>
|
||||
<earliest>$timepicker1.earliest$</earliest>
|
||||
<latest>$timepicker1.latest$</latest>
|
||||
</search>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">collapsed</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">column</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">bottom</option>
<option name="height">200</option>
<option name="refresh.display">progressbar</option>
</chart>
<chart>
<title>Deferred &amp; Skipped searches ($timepicker1.earliest$ to $timepicker1.latest$)</title>
<search>
<query>index=_internal `searchheadhosts` sourcetype=scheduler (status=continued OR status=skipped)
| eval type=if(match(savedsearch_name,"^_ACCELERATE_"),"DM","non-DM")
| eval status=replace(status,"continued","deferred")
| eval combo=type . "-" . status
| timechart span=5m count by combo</query>
<earliest>$timepicker1.earliest$</earliest>
<latest>$timepicker1.latest$</latest>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">collapsed</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">column</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">bottom</option>
<option name="height">200</option>
<option name="refresh.display">progressbar</option>
</chart>
</panel>
<panel>
<chart>
<title>Top Accelerations by Run Duration (on this search head / cluster)</title>
<search>
<query>| rest /services/admin/summarization by_tstats=t splunk_server=local count=0
| eval datamodel=replace('summary.id',(("DM_" . 'eai:acl.app') . "_"),"")
| join max=1 overwrite=1 type=left usetime=0 datamodel
[| rest /services/data/models splunk_server=local count=0
| table title acceleration.cron_schedule eai:digest
| rename title as datamodel
| rename "acceleration.cron_schedule" as cron]
| table datamodel eai:acl.app summary.access_time summary.is_inprogress summary.size summary.latest_time summary.complete summary.buckets_size summary.buckets cron summary.last_error summary.time_range summary.id summary.mod_time eai:digest summary.earliest_time summary.last_sid summary.access_count
| rename "eai:digest" as digest, "summary.earliest_time" as earliest, "summary.id" as summary_id, "summary.latest_time" as latest, "summary.time_range" as retention
| rename "eai:acl.app" as app, "summary.access_count" as access_count, "summary.access_time" as access_time, "summary.buckets" as buckets, "summary.buckets_size" as buckets_size, "summary.complete" as complete, "summary.is_inprogress" as is_inprogress, "summary.last_error" as last_error, "summary.last_sid" as last_sid, "summary.mod_time" as mod_time, "summary.size" as size, "summary.*" as "*", "eai:acl.*" as "*"
| sort datamodel
| rename access_count as "Datamodel_Acceleration.access_count", access_time as "Datamodel_Acceleration.access_time", app as "Datamodel_Acceleration.app", buckets as "Datamodel_Acceleration.buckets", buckets_size as "Datamodel_Acceleration.buckets_size", complete as "Datamodel_Acceleration.complete", cron as "Datamodel_Acceleration.cron", datamodel as "Datamodel_Acceleration.datamodel", digest as "Datamodel_Acceleration.digest", earliest as "Datamodel_Acceleration.earliest", is_inprogress as "Datamodel_Acceleration.is_inprogress", last_error as "Datamodel_Acceleration.last_error", last_sid as "Datamodel_Acceleration.last_sid", latest as "Datamodel_Acceleration.latest", mod_time as "Datamodel_Acceleration.mod_time", retention as "Datamodel_Acceleration.retention", size as "Datamodel_Acceleration.size", summary_id as "Datamodel_Acceleration.summary_id"
| rename "Datamodel_Acceleration.access_count" as access_count, "Datamodel_Acceleration.access_time" as access_time, "Datamodel_Acceleration.app" as app, "Datamodel_Acceleration.buckets" as buckets, "Datamodel_Acceleration.buckets_size" as buckets_size, "Datamodel_Acceleration.complete" as complete, "Datamodel_Acceleration.cron" as cron, "Datamodel_Acceleration.datamodel" as datamodel, "Datamodel_Acceleration.digest" as digest, "Datamodel_Acceleration.earliest" as earliest, "Datamodel_Acceleration.is_inprogress" as is_inprogress, "Datamodel_Acceleration.last_error" as last_error, "Datamodel_Acceleration.last_sid" as last_sid, "Datamodel_Acceleration.latest" as latest, "Datamodel_Acceleration.mod_time" as mod_time, "Datamodel_Acceleration.retention" as retention, "Datamodel_Acceleration.size" as size, "Datamodel_Acceleration.summary_id" as summary_id, "Datamodel_Acceleration.*" as "*"
| join max=1 overwrite=1 type=outer usetime=0 last_sid
[| rest splunk_server=* count=0 /services/search/jobs reportSearch=summarize*
| rename sid as last_sid
| fields last_sid,runDuration]
| eval "size(MB)"=round((size / 1048576),1)
| eval "retention(days)"=if((retention == 0),"unlimited",(retention / 86400))
| eval "complete(%)"=round((complete * 100),1)
| eval "runDuration(s)"=round(runDuration,1)
| sort 18 - runDuration
| table datamodel,runDuration
| eval concurrent_threshold=300
| eval deferred_threshold=600
| eval skipped_threshold=900</query>
<earliest>0.000</earliest>
<latest></latest>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">bar</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.overlayFields">concurrent_threshold,deferred_threshold,skipped_threshold</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">bottom</option>
<option name="height">400</option>
<option name="refresh.display">progressbar</option>
</chart>
</panel>
</row>
<row>
<panel>
<table>
<title>All skipped scheduled searches ($timepicker1.earliest$ to $timepicker1.latest$)</title>
<search>
<query>index=_internal `searchheadhosts` sourcetype=scheduler status="skipped"
| table _time status savedsearch_name
| sort - _time</query>
<earliest>$timepicker1.earliest$</earliest>
<latest>$timepicker1.latest$</latest>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
</row>
</form>
@ -0,0 +1,120 @@
<form version="1.1">
<label>Dashboard - Detect Excessive Search Use</label>
<description>Detect repeated use of the same search query by a particular user during a period of time</description>
<fieldset submitButton="false">
<input type="time" token="time">
<label>Time Period</label>
<default>
<earliest>-4h@m</earliest>
<latest>now</latest>
</default>
</input>
<input type="text" token="span">
<label>Span</label>
<default>10m</default>
</input>
</fieldset>
<row>
<panel>
<title>Searches occurring more often than expected in the audit logs</title>
<table>
<title>Click any line for drilldown per-username</title>
<search>
<query>index=_audit info=granted "search='" NOT "savedsearch_name=\"Threat - Correlation Searches - Lookup Gen\"" NOT "savedsearch_name=\"Bucket Copy Trigger\"" NOT "search='| copybuckets" NOT "search='search index=_telemetry sourcetype=splunk_telemetry | spath" NOT "savedsearch_name=\"_ACCELERATE_*"
| rex ", search='(?P&lt;search&gt;[\S+\s+]+?)', "
| regex search!="\|\s+(rest|inputlookup|makeresults|tstats count AS \"Count of [^\"]+\"\s+ from sid=)"
| rex "apiEndTime='[^,]+, savedsearch_name=\"(?P&lt;savedsearch_name&gt;[^\"]+)"
| eval apiEndTime=strptime(apiEndTime, "'%a %B %d %H:%M:%S %Y'"), apiStartTime=strptime(apiStartTime, "'%a %B %d %H:%M:%S %Y'")
| eval timePeriod=apiEndTime-apiStartTime
| bin _time span=$span$
| stats count, values(host) AS hostList, values(savedsearch_name) AS savedSearchName, values(ttl) AS ttl by search, user, _time, timePeriod
| eval frequency = ceil((10*60)/timePeriod)
| fillnull frequency
| where count>4 AND count>frequency
| eval timePeriod=tostring(timePeriod,"duration")
| stats sum(count) AS count, max(count) AS "maxCountPerSpan", values(user) AS userList, values(hostList) AS hostList, values(savedSearchName) AS savedSearchName, values(ttl) AS ttl, earliest(_time) AS firstSeen, latest(_time) AS mostRecent, values(timePeriod) AS timePeriods by search
| eval firstSeen=strftime(firstSeen, "%+"), mostRecent=strftime(mostRecent, "%+")
| eval search=substr(search,0,60)
| sort - count</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">50</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">cell</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">false</option>
<drilldown>
<set token="username">$row.userList$</set>
</drilldown>
</table>
</panel>
</row>
<row>
<panel>
<title>Results from access logs for $username$</title>
<table>
<title>Note: cluster command in use; introspection data may give a more complete list of dashboards in use</title>
<search>
<query>index=_internal (sourcetype=splunkd_access (method="GET" AND "/services/search/jobs/export") OR method="POST") OR (sourcetype=splunkd_ui_access method=POST "/report?" OR "/search?" OR "/search/jobs" OR "/servicesNS/*/*/search/jobs" OR "/saved/searches" NOT "/search/parser HTTP" NOT "/user-prefs/data/user-prefs/") OR (sourcetype=splunkd_ui_access method=GET "/app/" NOT "/search HTTP" NOT "/dashboards HTTP" NOT "/alerts HTTP" NOT "/reports HTTP") user IN ($username$)
| cluster t=0.95 showcount=true
| rex field=uri "/servicesNS/[^/]+/(?P&lt;app&gt;[^/]+)"
| rex field=uri "/[^/]+/app/(?P&lt;app&gt;[^/]+)/(?P&lt;dashboard_name&gt;[^/\?]+)"
| sort - cluster_count
| table cluster_count, app, uri_path, user, dashboard_name, clientip, sourcetype</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="count">10</option>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
</row>
<row>
<panel>
<title>Introspection data for $username$</title>
<table>
<title>Click for drilldown</title>
<search>
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* data.search_props.user IN ($username$)
| eval mem_used = 'data.mem_used'
| eval app = 'data.search_props.app'
| eval elapsed = 'data.elapsed'
| eval label = 'data.search_props.label'
| eval type = 'data.search_props.type'
| eval mode = 'data.search_props.mode'
| eval user = 'data.search_props.user'
| eval cpuperc = 'data.pct_cpu'
| eval search_head = 'data.search_props.search_head'
| eval read_mb = 'data.read_mb'
| eval provenance='data.search_props.provenance'
| eval label=coalesce(label, provenance)
| eval sid='data.search_props.sid'
| rex field=sid "^remote_[^_]+_(?P&lt;sid&gt;.*)"
| eval sid = "'" . sid . "'"
| fillnull search_head value="*"
| stats max(elapsed) as runtime max(mem_used) as mem_used earliest(_time) as searchStartTime, sum(cpuperc) AS totalCPU, avg(cpuperc) AS avgCPU, max(read_mb) AS read_mb, values(sid) AS sids by type, mode, app, user, label, host, search_head, data.pid
| bin searchStartTime span=1m
| stats dc(sids) AS count, sum(totalCPU) AS totalCPU, sum(mem_used) AS totalMemUsed, max(runtime) AS maxRunTime, avg(runtime) AS avgRuntime, avg(avgCPU) AS avgCPUPerIndexer, sum(read_mb) AS totalReadMB, values(sids) AS sids by searchStartTime, type, mode, app, user, search_head, label
| eval maxduration = tostring(maxRunTime, "duration"), averageduration = tostring(avgRuntime, "duration")
| eval Started = strftime(searchStartTime,"%+")
| table Started, count, user, app, label, averageduration, maxduration, search_head, sids, mode, type</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="count">10</option>
<option name="drilldown">cell</option>
<option name="refresh.display">progressbar</option>
<fields>["Started","count","user","app","label","averageduration","maxduration","mode","type"]</fields>
<drilldown>
<link target="_blank">/app/SplunkAdmins/troubleshooting_resource_usage_per_user_drilldown?form.username=$username$&amp;form.sid=$row.sids$&amp;form.app=$row.app$&amp;form.host=*&amp;form.label=*&amp;form.time.earliest=$time.earliest$&amp;form.time.latest=$time.latest$</link>
</drilldown>
</table>
</panel>
</row>
</form>
@ -0,0 +1,634 @@
<form version="1.1">
<label>Dashboard - Heavy Forwarder analysis</label>
<description>As found on https://drive.google.com/file/d/1zvMKrFkk6wzmeXS1r69-GYfEbIdT_TVX/view from https://conf.splunk.com/files/2024/slides/PLA1509B.pdf / https://conf.splunk.com/files/2024/recordings/PLA1509B.mp4</description>
<fieldset submitButton="false">
<input type="time" token="time1">
<label>Select Time</label>
<default>
<earliest>-15m</earliest>
<latest>now</latest>
</default>
</input>
<input type="dropdown" token="host1">
<label>Select Forwarder - HF</label>
<fieldForLabel>hostname</fieldForLabel>
<fieldForValue>hostname</fieldForValue>
<search>
<query>index=_internal group=tcp*_connections sourcetype=splunkd
fwdType=full
| stats count by hostname fwdType</query>
<earliest>-24h@h</earliest>
<latest>now</latest>
</search>
</input>
</fieldset>
<row>
<panel>
<title>$host1$ Queues/Pipelines 90th-percentile fill % - if high, check that thruput is not throttled</title>
<chart>
<search>
<query>index=_internal sourcetype=splunkd group=queue host=$host1$ (name=tcpin_queue OR name=splunktcpin OR name=parsingqueue OR name=aggqueue OR name=typingqueue OR name=indexqueue OR name=tcpout*)
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "0")
| search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval name=host."-".name."-".ingest_pipe
| timechart span=1m perc90(fill_perc) by name useother=false limit=0 usenull=f</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<search type="annotation">
<query>index=_internal sourcetype=splunkd host=$host1$ (shutdownhandler complete) OR (loader Splunkd starting build) OR (request state change from=RUN to=SHUTDOWN_SIGNALED) OR (request state change from=SHUTDOWN_IN_PROGRESS to=SHUTDOWN_COMPLETE) OR (loader Splunkd starting build) OR (my GUID is) OR (All pipelines finished) NOT(Queued job)
| transaction startswith=finished endswith=starting maxspan=15min host keepevicted=true
| eval annotation_label=case(searchmatch("new generated"), "first startup",
(searchmatch("complete") OR searchmatch("signalled")) AND searchmatch("NOT starting"), "graceful shutdown",
(searchmatch("complete") OR searchmatch("signalled")) AND searchmatch("starting"), "graceful restart",1=1, "ungraceful restart")." ".host, annotation_category="restart", annotation_color="#FBB117"
| table _time ann*</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">none</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">line</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">connect</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">bottom</option>
<option name="charting.lineWidth">2</option>
<option name="height">385</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">0</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
</chart>
</panel>
<panel>
<title>$host1$ Max Thruput (kbps) by ingest_pipe - if &lt;= 256kbps, check whether the forwarder is limited to 256kbps</title>
<single>
<search>
<query>index=_internal host=$host1$ group=thruput name=cooked_output
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "0")
| timechart span=1m max(instantaneous_kbps) as max_instantaneous_kbps by ingest_pipe</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="colorBy">value</option>
<option name="colorMode">none</option>
<option name="drilldown">none</option>
<option name="numberPrecision">0</option>
<option name="rangeColors">["0x53a051","0xf8be34","0xdc4e41","0xdc4e41"]</option>
<option name="rangeValues">[200,250,260]</option>
<option name="refresh.display">progressbar</option>
<option name="showSparkline">1</option>
<option name="showTrendIndicator">1</option>
<option name="trellis.enabled">1</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
<option name="trendColorInterpretation">standard</option>
<option name="trendDisplayMode">absolute</option>
<option name="unitPosition">after</option>
<option name="useColors">1</option>
<option name="useThousandSeparators">1</option>
</single>
</panel>
</row>
<row>
<panel>
<title>Connected to IDXs based on tcp connections</title>
<chart>
<search>
<query>index=_internal group=tcp*_connections sourcetype=splunkd (host=$host1$ OR hostname=$host1$) NOT lastIndexer=None
| timechart span=1sec count by lastIndexer usenull=f</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.chart">column</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.drilldown">none</option>
<option name="refresh.display">progressbar</option>
</chart>
</panel>
<panel>
<title>component="AutoLoadBalancedConnectionStrategy"</title>
<chart>
<search>
<query>index=_internal sourcetype=splunkd (host=$host1$) component="AutoLoadBalancedConnectionStrategy"
| timechart minspan=1sec count by idx usenull=f useother=false</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">none</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">column</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">right</option>
<option name="charting.lineWidth">2</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">0</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>$host1$ Max KB/sec by index-pipe</title>
<chart>
<search>
<query>index=_internal host=$host1$ source="*metrics.log*" group=per_index_thruput NOT series=_*
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "0")
| eval index-pipe=series."-".ingest_pipe
| timechart minspan=30sec max(kbps) as "Max KB/sec" by index-pipe useother=f usenull=f limit=0</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<search type="annotation">
<query>index=_internal sourcetype=splunkd host=$host1$ (shutdownhandler complete) OR (loader Splunkd starting build) OR (request state change from=RUN to=SHUTDOWN_SIGNALED) OR (request state change from=SHUTDOWN_IN_PROGRESS to=SHUTDOWN_COMPLETE) OR (loader Splunkd starting build) OR (my GUID is) OR (All pipelines finished) NOT(Queued job)
| transaction startswith=finished endswith=starting maxspan=15min host keepevicted=true
| eval annotation_label=case(searchmatch("new generated"), "first startup",
(searchmatch("complete") OR searchmatch("signalled")) AND searchmatch("NOT starting"), "graceful shutdown",
(searchmatch("complete") OR searchmatch("signalled")) AND searchmatch("starting"), "graceful restart",1=1, "ungraceful restart")." ".host, annotation_category="restart", annotation_color="#FBB117"
| table _time ann*</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.text">kbps</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">none</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">line</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">right</option>
<option name="charting.lineWidth">2</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">0</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
</chart>
</panel>
<panel>
<title>$host1$ Max KB/sec by sourcetype-pipe</title>
<chart>
<search>
<query>index=_internal host=$host1$ source="*metrics.log*" group=per_sourcetype_thruput NOT series=_*
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "0")
| eval sourcetype-pipe=series."-".ingest_pipe
| timechart minspan=30sec max(kbps) as "Max KB/sec" by sourcetype-pipe useother=f usenull=f limit=0</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.chart">line</option>
<option name="charting.drilldown">none</option>
<option name="refresh.display">progressbar</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>$host1$ Average kbps by ingest_pipe</title>
<chart>
<search>
<query>index=_internal host=$host1$ group=thruput name=cooked_output
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "0")
| timechart span=1m max(average_kbps) by ingest_pipe</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<search type="annotation">
<query>index=_internal sourcetype=splunkd host=$host1$ (shutdownhandler complete) OR (loader Splunkd starting build) OR (request state change from=RUN to=SHUTDOWN_SIGNALED) OR (request state change from=SHUTDOWN_IN_PROGRESS to=SHUTDOWN_COMPLETE) OR (loader Splunkd starting build) OR (my GUID is) OR (All pipelines finished) NOT(Queued job)
| transaction startswith=finished endswith=starting maxspan=15min host keepevicted=true
| eval annotation_label=case(searchmatch("new generated"), "first startup",
(searchmatch("complete") OR searchmatch("signalled")) AND searchmatch("NOT starting"), "graceful shutdown",
(searchmatch("complete") OR searchmatch("signalled")) AND searchmatch("starting"), "graceful restart",1=1, "ungraceful restart")." ".host, annotation_category="restart", annotation_color="#FBB117"
| table _time ann*</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.text">kbps</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.text">max_instantaneous_kbps</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">none</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">1</option>
<option name="charting.axisY2.scale">linear</option>
<option name="charting.chart">column</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.overlayFields">"max_instantaneous_kbps: 0","max_instantaneous_kbps: 1","max_instantaneous_kbps: 2","max_instantaneous_kbps: 3","max_instantaneous_kbps: 4","max_instantaneous_kbps: 5","max_instantaneous_kbps: 6","max_instantaneous_kbps: 7","max_instantaneous_kbps: 8","max_instantaneous_kbps: 9","max_instantaneous_kbps: 10"</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">right</option>
<option name="charting.lineWidth">2</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">0</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
</chart>
</panel>
<panel>
<title>$host1$ Max kbps by ingest_pipe (If &lt; 256kbps, Check Whether FWD Is Limited to 256kbps)</title>
<chart>
<search>
<query>index=_internal host=$host1$ group=thruput name=cooked_output
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "0")
| eval UFdefaultkbps=256
| timechart span=1m max(UFdefaultkbps) as UFdefaultkbps max(instantaneous_kbps) as max_instantaneous_kbps by ingest_pipe</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<search type="annotation">
<query>index=_internal sourcetype=splunkd host=$host1$ (shutdownhandler complete) OR (loader Splunkd starting build) OR (request state change from=RUN to=SHUTDOWN_SIGNALED) OR (request state change from=SHUTDOWN_IN_PROGRESS to=SHUTDOWN_COMPLETE) OR (my GUID is) OR (All pipelines finished) NOT(Queued job)
| transaction startswith=finished endswith=starting maxspan=15min host keepevicted=true
| eval annotation_label=case(searchmatch("new generated"), "first startup",
(searchmatch("complete") OR searchmatch("signalled")) AND searchmatch("NOT starting"), "graceful shutdown",
(searchmatch("complete") OR searchmatch("signalled")) AND searchmatch("starting"), "graceful restart", 1=1, "ungraceful restart")." ".host, annotation_category="restart", annotation_color="#FBB117"
| table _time ann*</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.text">kbps</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.text">max_instantaneous_kbps</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">none</option>
<option name="charting.axisY.scale">log</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.minimumNumber">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">line</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.overlayFields">"UFdefaultkbps: 0","UFdefaultkbps: 1","UFdefaultkbps: 2","UFdefaultkbps: 3","UFdefaultkbps: 4","UFdefaultkbps: 5","UFdefaultkbps: 6","UFdefaultkbps: 7","UFdefaultkbps: 8","UFdefaultkbps: 9","UFdefaultkbps: 10","UFdefaultkbps: 11","UFdefaultkbps: 12","UFdefaultkbps: 13","UFdefaultkbps: 14","UFdefaultkbps: 15","UFdefaultkbps: 16","UFdefaultkbps: 17","UFdefaultkbps: 18","UFdefaultkbps: 19","UFdefaultkbps: 20"</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">right</option>
<option name="charting.lineWidth">2</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">0</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>$host1$ % CPU by pipe_name_processor</title>
<chart>
<search>
<query>index=_internal host=$host1$ source="*metrics.log*" sourcetype=splunkd group=pipeline NOT processor=sendout
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "0")
| eval pipe_name_processor=ingest_pipe."-".name."-".processor
| timechart minspan=30s per_second(eval(cpu_seconds*100)) AS pctCPU by pipe_name_processor useother=false limit=0</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">none</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">line</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">right</option>
<option name="charting.lineWidth">2</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">0</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
</chart>
</panel>
<panel>
<title>$host1$ executes by pipe_name_processor</title>
<chart>
<search>
<query>index=_internal host=$host1$ source="*metrics.log*" sourcetype=splunkd group=pipeline NOT processor=sendout
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "0")
| eval pipe_name_processor=ingest_pipe."-".name."-".processor
| timechart minspan=30s per_second(executes) AS executes by pipe_name_processor useother=false limit=0</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">none</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">line</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">right</option>
<option name="charting.lineWidth">2</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">0</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>$host1$ cpu_seconds and executes by name_processor</title>
<chart>
<search>
<query>index=_internal sourcetype=splunkd host=$host1$ Metrics TERM(group=pipeline) NOT TERM(processor=sendout) NOT TERM(processor=readerin)
| bucket _time span=1m
| fields cpu_seconds, executes, name, processor
| eval name_processor=name."-".processor
| timechart sum(cpu_seconds) as cpu_seconds sum(executes) as executes by name_processor useother=f</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">none</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">line</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">right</option>
<option name="charting.lineWidth">2</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">0</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Check Whether TCPout Groups and Queues Are Dropping Events</title>
<table>
<search>
<query>index=_internal host=$host1$ component=TcpOutputProc sourcetype=splunkd "TcpOutputProc - Queue for group * has"
[| tstats min(_time) as earliest where (index=_internal sourcetype=splunkd)]
[| tstats max(_time) as latest where (index=_internal sourcetype=splunkd)]
| rex field=event_message "Queue for group (?<tcpout_group>.*) has (?<queue_action>.*) events"
| eval group_action=tcpout_group."-".queue_action
| stats sparkline count by tcpout_group queue_action</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">20</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">true</option>
</table>
</panel>
<panel>
<title>The TCP output processor has paused the data flow</title>
<table>
<search>
<query>index=_internal host=$host1$ component=TcpOutputProc sourcetype=splunkd event_message="The TCP output processor has paused the data flow*"
| rex field=event_message "Forwarding to output group (?<tcpout_group>.*) has been blocked for (?<blocked_for_seconds>.*) seconds"
| stats sparkline(max(blocked_for_seconds),5m) as blocked_for_seconds last(_time) as _time min(blocked_for_seconds) as min_blocked_seconds max(blocked_for_seconds) as max_blocked_seconds by tcpout_group host</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">20</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">true</option>
</table>
</panel>
<panel>
<title>$host1$ Blocking</title>
<table>
<search>
<query>index=_internal sourcetype=splunkd host=$host1$ (log_level=ERROR AND ("TcpInputProc - Error encountered for connection from" AND " Local side shutting down")) OR (log_level=INFO AND blocked=true)
| eval combined=max_size_kb."-".current_size_kb."-".current_size."-".largest_size
| stats dc(_time) as count values(max_size_kb) as max_size_kb values(current_size_kb) as current_size_kb values(current_size) as current_size values(largest_size) as largest_size earliest(_time) as firsttime latest(_time) as lasttime by host name combined
| convert ctime(lasttime) as LastTime, ctime(firsttime) as FirstTime
| addcoltotals labelfield=host label=Total
| fields - firsttime lasttime combined | where count>0 | sort - count</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
</row>
<row>
<panel>
<title>$host1$ Queues : Current Size v Max Size (kb)</title>
<chart>
<search>
<query>index=_internal source="*metrics.log*" group=queue host=$host1$
| timechart values(current_size_kb) AS current_size_kb values(max_size_kb) as max_size_kb by name</query>
<earliest>$time1.earliest$</earliest>
<latest>$time1.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">collapsed</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">collapsed</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">none</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">line</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">connect</option>
<option name="charting.chart.overlayFields">max_size_kb</option>
<option name="charting.chart.showDataLabels">minmax</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">1</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">none</option>
<option name="charting.lineWidth">2</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">1</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
<option name="trellis.splitBy">name</option>
</chart>
</panel>
</row>
</form>
<form version="1.1">
<label>Dashboard - HeavyForwarders Max Data Queue Sizes By Name</label>
<fieldset submitButton="false">
<input type="time" token="time">
<label></label>
<default>
<earliest>-4h@m</earliest>
<latest>now</latest>
</default>
</input>
<input type="text" token="span">
<label>span</label>
<default>1m</default>
</input>
<input type="text" token="hosts">
<label>hosts</label>
<default>`heavyforwarderhosts`</default>
</input>
</fieldset>
<row>
<panel>
<title>Parsing Queue Fill Size</title>
<chart>
<search>
<query>index=_internal $hosts$ `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=parsingqueue)
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=20 useother=false span=$span$ max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.chart">area</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">441</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Aggregation Queue Fill Size</title>
<chart>
<search>
<query>index=_internal $hosts$ `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=aggqueue)
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=20 useother=false span=$span$ max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.chart">area</option>
<option name="charting.layout.splitSeries">1</option>
<option name="height">446</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Typing Queue Fill Size</title>
<chart>
<search>
<query>index=_internal $hosts$ `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=typingqueue)
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=20 useother=false span=$span$ max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.chart">area</option>
<option name="charting.layout.splitSeries">1</option>
<option name="height">440</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Index Queue Size</title>
<chart>
<search>
<query>index=_internal $hosts$ `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=indexqueue)
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
name=="indexqueue", "4 - Indexing Queue",
name=="parsingqueue", "1 - Parsing Queue",
name=="typingqueue", "3 - Typing Queue")
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=20 useother=false span=$span$ max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">524</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>TCPOut Queue Sizes</title>
<chart>
<search>
<query>index=_internal $hosts$ `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=tcpout_*)
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=20 useother=false span=$span$ max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.text">% Max</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">429</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Blocked Forwarder Queues</title>
<chart>
<search>
<query>index=_internal $hosts$ `splunkadmins_metrics_source` sourcetype=splunkd group=queue max_size_kb>0
| stats count(eval(isnotnull(blocked))) AS blockedCount, count by name, host, _time
| eval percBlocked=(100/count)*blockedCount
| eval hostQueue = host . "_" . name
| where percBlocked>0
| timechart limit=50 useother=false span=$span$ avg(percBlocked) by hostQueue</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">750</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>TcpOut KB per second per forwarder</title>
<chart>
<search>
<query>index=_internal $hosts$ `splunkadmins_metrics_source` sourcetype=splunkd group=thruput name=cooked_output OR name=uncooked_output
| timechart useother=false span=$span$ limit=20 per_second(kb) by host
</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">404</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Forced closures on restart</title>
<chart>
<title>A potential indicator of data loss</title>
<search>
<query>| tstats count where index=_internal sourcetype=splunkd $hosts$ `splunkadmins_splunkd_source` TERM("Forcing") groupby _time, host span=1s | timechart sum(count) by host
</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">404</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Forwarders that have stopped listening on all ports</title>
<chart>
<search>
<query>index=_internal $hosts$ sourcetype=splunkd `splunkadmins_splunkd_source` TERM(WARN) TERM(Stopping)
| timechart count by host span=1m limit=99</query>
<earliest>-24h@h</earliest>
<latest>now</latest>
</search>
<option name="charting.chart">line</option>
<option name="charting.drilldown">none</option>
<option name="refresh.display">progressbar</option>
</chart>
</panel>
</row>
</form>
@ -0,0 +1,318 @@
<form version="1.1">
<label>Dashboard - HeavyForwarders Max Data Queue Sizes By Name (works in Splunk 8.0+)</label>
<fieldset submitButton="false">
<input type="time" token="time">
<label></label>
<default>
<earliest>-4h@m</earliest>
<latest>now</latest>
</default>
</input>
<input type="text" token="span">
<label>span</label>
<default>1m</default>
</input>
<input type="text" token="hosts">
<label>hosts</label>
<default>`heavyforwarderhosts`</default>
</input>
</fieldset>
<row>
<panel>
<title>Parsing Queue Fill Size</title>
<chart>
<search>
<query>index=_internal $hosts$ `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=parsingqueue)
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=20 useother=false span=$span$ max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.chart">area</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">441</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Aggregation Queue Fill Size</title>
<chart>
<search>
<query>index=_internal $hosts$ `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=aggqueue)
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=20 useother=false span=$span$ max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.chart">area</option>
<option name="charting.layout.splitSeries">1</option>
<option name="height">446</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Typing Queue Fill Size</title>
<chart>
<search>
<query>index=_internal $hosts$ `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=typingqueue)
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=20 useother=false span=$span$ max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.chart">area</option>
<option name="charting.layout.splitSeries">1</option>
<option name="height">440</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Index Queue Size</title>
<chart>
<search>
<query>index=_internal $hosts$ `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=indexqueue)
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
name=="indexqueue", "4 - Indexing Queue",
name=="parsingqueue", "1 - Parsing Queue",
name=="typingqueue", "3 - Typing Queue")
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=20 useother=false span=$span$ max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">524</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>TCPOut Queue Sizes</title>
<chart>
<search>
<query>index=_internal $hosts$ `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=tcpout_*)
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=20 useother=false span=$span$ max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.text">% Max</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">429</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Blocked Forwarder Queues</title>
<chart>
<search>
<query>index=_internal $hosts$ `splunkadmins_metrics_source` sourcetype=splunkd group=queue max_size_kb>0 | stats count(eval(isnotnull(blocked))) AS blockedCount, count by name, host, _time | eval percBlocked=(100/count)*blockedCount | eval hostQueue = host . "_" . name | where percBlocked>0 | timechart limit=50 useOther=false span=$span$ avg(percBlocked) by hostQueue</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">750</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>TcpOut KB per second per forwarder</title>
<chart>
<search>
<query>| tstats prestats=true sum(PREFIX(kb=)) where index=_internal $hosts$ TERM(group=thruput) TERM(name=cooked_output) OR TERM(name=uncooked_output) sourcetype=splunkd `splunkadmins_metrics_source` groupby host, _time span=1s
| timechart aligntime=latest useother=false span=$span$ limit=20 per_second(kb=) by host
</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">404</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Forced closures on restart</title>
<chart>
<title>A potential indicator of data loss</title>
<search>
<query>| tstats prestats=true count where index=_internal sourcetype=splunkd $hosts$ `splunkadmins_splunkd_source` TERM("Forcing") groupby _time, host span=1s | timechart count by host
</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">404</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Forwarders that have stopped listening on all ports</title>
<chart>
<search>
<query>index=_internal $hosts$ sourcetype=splunkd `splunkadmins_splunkd_source` TERM(WARN) TERM(Stopping)
| timechart count by host span=1m limit=99</query>
<earliest>-24h@h</earliest>
<latest>now</latest>
</search>
<option name="charting.chart">line</option>
<option name="charting.drilldown">none</option>
<option name="refresh.display">progressbar</option>
</chart>
</panel>
</row>
</form>
@ -0,0 +1,217 @@
<form version="1.1">
<label>Dashboard - HEC Performance</label>
<description>Based on the original version from https://github.com/camrunr/hec_perf_report/blob/master/hec_perf_report.xml</description>
<search id="by_token">
<query>index=_introspection (`indexerhosts`) OR (`heavyforwarderhosts`) `splunkadmins_hec_metrics_source` http_event_collector_token
| bucket _time span=$dd_span$
| stats sum(data.num_of_events) as Events sum(data.total_bytes_received) as Bytes by _time data.token_name</query>
<earliest>$timepicker.earliest$</earliest>
<latest>$timepicker.latest$</latest>
<sampleRatio>1</sampleRatio>
<refresh>$refreshinterval$</refresh>
</search>
<search id="by_host">
<query>index=_introspection (`indexerhosts`) OR (`heavyforwarderhosts`) `splunkadmins_hec_metrics_source` http_event_collector_token
| bucket _time span=$dd_span$
| stats sum(data.num_of_events) as Events sum(data.total_bytes_received) as Bytes by _time host
| eval host=replace(host,"\..*","")</query>
<earliest>$timepicker.earliest$</earliest>
<latest>$timepicker.latest$</latest>
<sampleRatio>1</sampleRatio>
<refresh>$refreshinterval$</refresh>
</search>
<fieldset submitButton="false">
<input type="time" token="timepicker">
<label>Time</label>
<default>
<earliest>-4h@m</earliest>
<latest>now</latest>
</default>
</input>
<input type="dropdown" token="dd_span">
<label>span</label>
<choice value="1min">1 minute</choice>
<choice value="5min">5 minutes</choice>
<choice value="30m">30 minutes</choice>
<choice value="1h">1 hour</choice>
<choice value="1d">1 day</choice>
<default>1min</default>
</input>
<input type="text" token="hostcount">
<label>Timechart host limit</label>
<default>15</default>
</input>
<input type="text" token="refreshinterval" searchWhenChanged="true">
<label>Refresh Interval</label>
<default>300</default>
</input>
</fieldset>
<row>
<panel>
<title>Events/sec by host</title>
<chart>
<search base="by_host">
<query>timechart limit=$hostcount$ span=$dd_span$ per_second(Events) as "Events/sec" by host</query>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">collapsed</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">auto</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">column</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">right</option>
<option name="charting.lineWidth">2</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">0</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
</chart>
</panel>
<panel>
<title>Bytes/sec by host</title>
<chart>
<search base="by_host">
<query>timechart limit=$hostcount$ span=$dd_span$ per_second(Bytes) as "Bytes/sec" by host</query>
</search>
<option name="charting.axisTitleX.visibility">collapsed</option>
<option name="charting.axisY.abbreviation">auto</option>
<option name="charting.chart">column</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.drilldown">none</option>
<option name="refresh.display">progressbar</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Events/sec by input/group</title>
<chart>
<search base="by_token">
<query>timechart span=$dd_span$ per_second(Events) as "Events/sec" by data.token_name</query>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">collapsed</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">auto</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">column</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">right</option>
<option name="charting.lineWidth">2</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">0</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
</chart>
</panel>
<panel>
<title>Bytes/sec by input/group</title>
<chart>
<search base="by_token">
<query>timechart span=$dd_span$ per_second(Bytes) as "Bytes/sec" by data.token_name</query>
</search>
<option name="charting.axisTitleX.visibility">collapsed</option>
<option name="charting.axisY.abbreviation">auto</option>
<option name="charting.chart">column</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.drilldown">none</option>
<option name="refresh.display">progressbar</option>
</chart>
</panel>
</row>
<row>
|
||||
<panel>
|
||||
<title>HEC Batching Efficiency</title>
|
||||
<table>
|
||||
<search>
|
||||
<refresh>$refreshinterval$</refresh>
|
||||
<query>index=_introspection (`indexerhosts`) OR (`heavyforwarderhosts`) `splunkadmins_hec_metrics_source` http_event_collector_token
|
||||
| eval EpR='data.num_of_events'/'data.num_of_requests'
|
||||
| bucket _time span=5m
|
||||
| stats sum(data.num_of_events) as events avg(EpR) as events_per_POST sum(data.num_of_requests) as reqs sum(data.total_bytes_received) as Bytes by _time data.token_name
|
||||
| eval reqs_per_sec=reqs/300, bytes_per_post=Bytes/reqs
|
||||
| rename data.token_name as token_name
|
||||
| stats sum(eval(Bytes/1024/1024)) as MBytes sum(events) as Events p50(events_per_POST) as events_per_post p50(bytes_per_post) as bytes_per_post p90(reqs_per_sec) as posts_per_sec by token_name
|
||||
| eval MBytes = round(MBytes, 2), events_per_post=round(events_per_post,2), bytes_per_post=round(bytes_per_post,2), posts_per_sec=round(posts_per_sec,2)
|
||||
| sort - posts_per_sec</query>
|
||||
<earliest>$timepicker.earliest$</earliest>
|
||||
<latest>$timepicker.latest$</latest>
|
||||
</search>
|
||||
<option name="count">10</option>
|
||||
<option name="drilldown">none</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
<format type="color" field="events_per_post">
<colorPalette type="list">[#DC4E41,#DC4E41,#F8BE34,#53A051]</colorPalette>
<scale type="threshold">0,5,10</scale>
</format>
<format type="color" field="posts_per_sec">
<colorPalette type="list">[#53A051,#F8BE34,#DC4E41]</colorPalette>
<scale type="threshold">10,50</scale>
</format>
<format type="number" field="MBytes"></format>
<format type="number" field="events_per_post"></format>
<format type="number" field="bytes_per_post"></format>
<format type="number" field="posts_per_sec"></format>
<format type="number" field="Events">
<option name="precision">0</option>
</format>
</table>
</panel>
</row>
<row>
<panel>
<title>If useACK is in use and num_of_requests_waiting_ack is high, this can be an issue (HEC tokens with useACK will stop allowing data through)</title>
<chart>
<search>
<refresh>$refreshinterval$</refresh>
<query>index=_introspection (`indexerhosts`) OR (`heavyforwarderhosts`) data.series=http_event_collector data.num_of_requests_waiting_ack=* sourcetype=http_event_collector_metrics
| timechart minspan=2m max(data.num_of_requests_waiting_ack) AS num_of_requests_waiting_ack</query>
<earliest>$timepicker.earliest$</earliest>
<latest>$timepicker.latest$</latest>
</search>
<option name="charting.chart">line</option>
<option name="charting.drilldown">none</option>
<option name="refresh.display">progressbar</option>
</chart>
</panel>
</row>
</form>
@ -0,0 +1,170 @@
<form version="1.1">
<label>Dashboard - Indexer Data Spread</label>
<description>Indexer Data Spread</description>
<fieldset submitButton="false">
<input type="time" token="thetime">
<label></label>
<default>
<earliest>-24h@h</earliest>
<latest>now</latest>
</default>
</input>
<input type="time" token="time_tok">
<label>Time for the indexed data per KB per second</label>
<default>
<earliest>-24h@h</earliest>
<latest>now</latest>
</default>
</input>
</fieldset>
<row>
<panel>
<title>Spread of data across the indexers</title>
<chart>
<search>
<query>| tstats count WHERE index="*" by splunk_server _time span=10m | timechart span=10m sum(count) by splunk_server</query>
<earliest>$thetime.earliest$</earliest>
<latest>$thetime.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">stacked100</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Indexed data in KB per second per indexer</title>
<chart>
<search>
<query>(index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=thruput name=index_thruput) | eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=* | timechart minspan=30s per_second(kb) by host</query>
<earliest>$time_tok.earliest$</earliest>
<latest>$time_tok.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">minmax</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Forwarders and Throughput (from monitoring console)</title>
<chart>
<search>
<query>index=_internal sourcetype=splunkd group=tcpin_connections (connectionType=cooked OR connectionType=cookedSSL) fwdType=* guid=* `indexerhosts` | timechart minspan=30s dc(guid) as forwarder_count, per_second(kb) as tcp_KBps | rename forwarder_count as "Forwarder Count", tcp_KBps as "Throughput (KB/s)"</query>
<earliest>$time_tok.earliest$</earliest>
<latest>$time_tok.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">1</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">column</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.overlayFields">"Throughput (KB/s)"</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Incoming TCP Queues</title>
<chart>
<search>
<query>index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=queue name=splunktcpin OR name=tcpin_cooked_pqueue
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| timechart minspan=30s Median(fill_perc) AS "fill_percentage" by host</query>
<earliest>$time_tok.earliest$</earliest>
<latest>$time_tok.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
</chart>
</panel>
</row>
</form>
@ -0,0 +1,336 @@
<form version="1.1">
<label>Dashboard - Indexer Max Data Queue Sizes By Name</label>
<fieldset submitButton="false">
<input type="time" token="time">
<label></label>
<default>
<earliest>-4h@m</earliest>
<latest>now</latest>
</default>
</input>
</fieldset>
<row>
<panel>
<title>Parsing Queue Fill Size</title>
<chart>
<search>
<query>index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=parsingqueue)
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
name=="indexqueue", "4 - Indexing Queue",
name=="parsingqueue", "1 - Parsing Queue",
name=="typingqueue", "3 - Typing Queue")
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=14 useother=false span=1m Max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.chart">area</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">441</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Aggregation Queue Fill Size</title>
<chart>
<search>
<query>index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=aggqueue)
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
name=="indexqueue", "4 - Indexing Queue",
name=="parsingqueue", "1 - Parsing Queue",
name=="typingqueue", "3 - Typing Queue")
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=14 useother=false span=1m Max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.chart">area</option>
<option name="charting.layout.splitSeries">1</option>
<option name="height">446</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Typing Queue Fill Size</title>
<chart>
<search>
<query>index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=typingqueue)
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
name=="indexqueue", "4 - Indexing Queue",
name=="parsingqueue", "1 - Parsing Queue",
name=="typingqueue", "3 - Typing Queue")
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=14 useother=false span=1m Max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.chart">area</option>
<option name="charting.layout.splitSeries">1</option>
<option name="height">440</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Indexing Queue Fill Size</title>
<chart>
<search>
<query>index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=indexqueue)
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
name=="indexqueue", "4 - Indexing Queue",
name=="parsingqueue", "1 - Parsing Queue",
name=="typingqueue", "3 - Typing Queue")
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=14 useother=false span=1m Max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.chart">area</option>
<option name="charting.layout.splitSeries">1</option>
<option name="height">424</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Shows any replication queue issues that may slow down or prevent the queues from clearing at the indexer level</title>
<chart>
<title>The replication queue appears to directly relate to the indexing queue; any blockage of the indexing queue will then block the replication queue and temporarily slow data ingestion. The replication queue also appears to be extremely sensitive to the indexing queues of the other indexers, so it can be a useful measure of an issue...</title>
<search>
<query>index=_internal `indexerhosts` "replication queue for " "full" OR "has room now" sourcetype=splunkd `splunkadmins_splunkd_source`
| rename peer AS guid
| join guid
[| rest /services/search/distributed/peers
| table guid peerName]
| transaction bid guid endswith="has room now" keeporphans=true
| timechart span=1m count, max(duration) AS duration by peerName</query>
<earliest>-60m@m</earliest>
<latest>now</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">540</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">1</option>
<option name="trellis.scales.shared">0</option>
<option name="trellis.size">large</option>
<option name="trellis.splitBy">_aggregation</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Blocked Indexing Queues</title>
<chart>
<search>
<query>index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=queue | stats count(eval(isnotnull(blocked))) AS blockedCount, count by name, host, _time | eval percBlocked=(100/count)*blockedCount | eval hostQueue = host . "_" . name | timechart useOther=false span=10m avg(percBlocked) by hostQueue</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">750</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>TCPIn Queue Sizes (Max)</title>
<chart>
<search>
<query>index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=splunktcpin OR name=tcpin_cooked_pqueue)
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=14 useother=false span=1m max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.text">% Max</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">562</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Thruput Per Indexer</title>
<chart>
<search>
<query>index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=thruput name=index_thruput
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval combined = host . "_pipe_" . ingest_pipe
| timechart useother=false span=1m limit=14 per_second(kb) by host
</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">404</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Forced closures on restart</title>
<chart>
<title>A potential indicator of data loss</title>
<search>
<query>| tstats count where index=_internal sourcetype=splunkd `indexerhosts` `splunkadmins_splunkd_source` TERM("Forcing") groupby _time, host span=1s | timechart sum(count) by host
</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">404</option>
</chart>
</panel>
</row>
</form>
@ -0,0 +1,334 @@
<form version="1.1">
<label>Dashboard - Indexer Max Data Queue Sizes By Name (works in Splunk 8.0+)</label>
<fieldset submitButton="false">
<input type="time" token="time">
<label></label>
<default>
<earliest>-4h@m</earliest>
<latest>now</latest>
</default>
</input>
</fieldset>
<row>
<panel>
<title>Parsing Queue Fill Size</title>
<chart>
<search>
<query>index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=parsingqueue)
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
name=="indexqueue", "4 - Indexing Queue",
name=="parsingqueue", "1 - Parsing Queue",
name=="typingqueue", "3 - Typing Queue")
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=14 useother=false span=1m Max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.chart">area</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">441</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Aggregation Queue Fill Size</title>
<chart>
<search>
<query>index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=aggqueue)
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
name=="indexqueue", "4 - Indexing Queue",
name=="parsingqueue", "1 - Parsing Queue",
name=="typingqueue", "3 - Typing Queue")
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=14 useother=false span=1m Max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.chart">area</option>
<option name="charting.layout.splitSeries">1</option>
<option name="height">446</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Typing Queue Fill Size</title>
<chart>
<search>
<query>index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=typingqueue)
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
name=="indexqueue", "4 - Indexing Queue",
name=="parsingqueue", "1 - Parsing Queue",
name=="typingqueue", "3 - Typing Queue")
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=14 useother=false span=1m Max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.chart">area</option>
<option name="charting.layout.splitSeries">1</option>
<option name="height">440</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Indexing Queue Fill Size</title>
<chart>
<search>
<query>index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=indexqueue)
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
name=="indexqueue", "4 - Indexing Queue",
name=="parsingqueue", "1 - Parsing Queue",
name=="typingqueue", "3 - Typing Queue")
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=14 useother=false span=1m Max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.chart">area</option>
<option name="charting.layout.splitSeries">1</option>
<option name="height">424</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Shows any replication queue issues that may slow down or prevent the queues from clearing at the indexer level</title>
<chart>
<title>The replication queue appears to directly relate to the indexing queue; any blockage of the indexing queue will then block the replication queue and temporarily slow data ingestion. The replication queue appears to be extremely sensitive to the other indexers' indexing queues, so it can be a useful measure of an issue...</title>
<search>
<query>index=_internal `indexerhosts` "replication queue for " "full" OR "has room now" sourcetype=splunkd `splunkadmins_splunkd_source`
| rename peer AS guid
| join guid
[| rest /services/search/distributed/peers
| table guid peerName]
| transaction bid guid endswith="has room now" keeporphans=true
| timechart span=1m count, max(duration) AS duration by peerName</query>
<earliest>-60m@m</earliest>
<latest>now</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">540</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">1</option>
<option name="trellis.scales.shared">0</option>
<option name="trellis.size">large</option>
<option name="trellis.splitBy">_aggregation</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Blocked Indexing Queues</title>
<chart>
<search>
<query>index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=queue | stats count(eval(isnotnull(blocked))) AS blockedCount, count by name, host, _time | eval percBlocked=(100/count)*blockedCount | eval hostQueue = host . "_" . name | timechart useOther=false span=10m avg(percBlocked) by hostQueue</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">750</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>TCPIn Queue Sizes (Max)</title>
<chart>
<search>
<query>index=_internal `indexerhosts` `splunkadmins_metrics_source` sourcetype=splunkd group=queue (name=splunktcpin OR name=tcpin_cooked_pqueue)
| eval ingest_pipe = if(isnotnull(ingest_pipe), ingest_pipe, "none") | search ingest_pipe=*
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| eval combined = host . "_pipe_" . ingest_pipe
| timechart limit=14 useother=false span=1m max(fill_perc) by combined</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.text">% Max</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">562</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Thruput Per Indexer</title>
<chart>
<search>
<query>| tstats prestats=true sum(PREFIX(kb=)) where index=_internal `indexerhosts` TERM(group=thruput) TERM(name=index_thruput) `splunkadmins_metrics_source` sourcetype=splunkd `indexerhosts` groupby PREFIX(name=), host, _time span=1s
| timechart aligntime=latest useother=false span=10m limit=14 per_second(kb=) AS tcp_KBps by host
</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">404</option>
</chart>
</panel>
</row>
<row>
<panel>
<title>Forced closures on restart</title>
<chart>
<title>A potential indicator of data loss</title>
<search>
<query>| tstats prestats=true count where index=_internal sourcetype=splunkd `indexerhosts` `splunkadmins_splunkd_source` TERM("Forcing") groupby _time, host span=1s | timechart count by host
</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">area</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">404</option>
</chart>
</panel>
</row>
</form>
@ -0,0 +1,276 @@
<form version="1.1">
<label>Dashboard - Issues Per Sourcetype</label>
<description>Detect time-parsing, event breaking or truncation issues for a particular sourcetype. Please note that the investigation query is something you can copy and paste into a new search window in Splunk to find example events; it does not work 100% of the time...</description>
<fieldset submitButton="false">
<input type="time" token="time">
<label></label>
<default>
<earliest>-60m</earliest>
<latest>now</latest>
</default>
</input>
<input type="text" token="sourcetype">
<label>sourcetype to investigate</label>
</input>
</fieldset>
<row>
<panel>
<title>Failure to parse timestamp correctly</title>
<table>
<title>Timestamp parsing has failed; note that if the null queue is in use this can give false alarms</title>
<search>
<query>```Timestamp parsing has failed, and it doesn't appear to be related to the event being broken due to having too many lines; that is a separate alert that may trigger a timestamp parsing issue (excluded from this alert as that issue needs to be resolved first)
Please note that you may see this particular warning on data that is sent to the nullQueue using a transforms.conf. Obviously you won't see this in the index, but you will see the warning because the time parsing occurs before the transforms.conf occurs
This alert now checks for at least 2 failures, and header entries can often trigger 2 entries in the log files about timestamp parsing failures...
Finally, one strange edge case: a newline inserted into the log file (by itself, with no content before/afterward) can trigger the warning but nothing will get indexed; multiline_event_extra_waittime, time_before_close and EVENT_BREAKER can resolve this edge case```
index=_internal sourcetype=splunkd ("Failed to parse timestamp" "Defaulting to timestamp of previous event") OR "Breaking event because limit of " OR "outside of the acceptable time window" (`indexerhosts`) OR (`heavyforwarderhosts`) $sourcetype$
| bin _time span=1m
| eval host=data_host, source=data_source, sourcetype=data_sourcetype
| rex "source::(?P<source>[^|]+)\|host::(?P<host>[^|]+)\|(?P<sourcetype>[^|]+)"
| eventstats count(eval(isnotnull(data_host))) AS hasBrokenEventOrTruncatedLine, count(eval(searchmatch("outside of the acceptable time window"))) AS outsideTimewindow by _time, host, source, sourcetype
| where hasBrokenEventOrTruncatedLine=0 AND isnull(data_host) AND NOT searchmatch("outside of the acceptable time window")
```To investigate further we want the previous timestamp that Splunk used for the event in question; that way we can see what it looks like in raw format...```
| rex "Defaulting to timestamp of previous event \((?P<previousTimeStamp>[^)]+)"
| eval previousTimeStamp=strptime(previousTimeStamp, "%a %b %d %H:%M:%S %Y")
| stats count, min(_time) AS firstSeen, max(_time) AS mostRecent, first(previousTimeStamp) AS recentExample, sum(outsideTimewindow) AS outsideTimewindow by host, sourcetype, source
| where count>0
| stats sum(count) AS count, min(firstSeen) AS firstSeen, max(mostRecent) AS mostRecent, first(recentExample) AS recentExample, values(source) AS sourceList, sum(outsideTimewindow) AS outsideTimewindow by host, sourcetype
| eval invesEnd=recentExample+1
| eval invesDataSource=sourceList
| eval invesDataSource=if(mvcount(invesDataSource)>1,mvjoin(invesDataSource,"\" OR source=\""),invesDataSource)
| eval invesDataSource = "source=\"" + invesDataSource + "\""
| eval invesDataSource = replace(invesDataSource, "\\\\", "\\\\\\\\")
| eval investigationQuery="```The investigation query may find zero data if the data was sent to the null queue by a transforms.conf, as the time parsing occurs before the transforms occur. If this source/sourcetype has a null queue you may need to exclude it from this alert. Note that the host= can be inaccurate if host overrides are in use in transforms.conf; if this query finds no results remove host=...``` index=* host=" . host . " sourcetype=\"" . sourcetype . "\" " . invesDataSource . " earliest=" . recentExample . " latest=" . invesEnd . " | eval indextime=strftime(_indextime, \"%+\")"
| eval mostRecent=strftime(mostRecent, "%+"), firstSeen=strftime(firstSeen, "%+")
| eval outsideAcceptableTimeWindow=if(outsideTimewindow!=0,"Timestamp parsing failed due to being outside the acceptable time window","No")
| fields - recentExample, invesEnd, invesDataSource, outsideTimewindow
| sort - count</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">20</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">false</option>
</table>
</panel>
</row>
<row>
<panel>
<title>Invalid parsed time</title>
<table>
<title>The timestamp parsing ran, but the timestamp found did not match previous events, so the time parsing may need a review</title>
<search>
<query>```The timestamp parsing ran, but the timestamp found did not match previous events, so the time parsing may need a review```
index=_internal sourcetype=splunkd (`indexerhosts`) OR (`heavyforwarderhosts`)
"outside of the acceptable time window. If this timestamp is correct, consider adjusting"
OR "is too far away from the previous event's time"
OR "is suspiciously far away from the previous event's time" $sourcetype$
| rex "source::(?P<source>[^|]+)\|host::(?P<host>[^|]+)\|(?P<sourcetype>[^|]+)"
| rex "Context: source=(?P<source>[^|]+)\|host=(?P<host>[^|]+)\|(?P<sourcetype>[^|]+)"
```The goal of this part of the search is to obtain the messages relating to this particular host/source/sourcetype; however, since the message includes a time we cannot use values(message) without getting a huge number of values, therefore we use cluster to obtain the unique values. Since we want the original start/end times we use labelonly=true```
| cluster labelonly=true
| eval message=coalesce(message,event_message)
| stats count, min(_time) AS firstSeen, max(_time) AS lastSeen, first(message) AS message by host, source, sourcetype, cluster_label
```While 'A possible timestamp match (...) is outside of the acceptable time window' and 'Time parsed (...) is too far away from the previous event's time' result in the current indexing time being used, the 'Accepted time (...) is suspiciously far away from the previous event's time' is accepted, and therefore we need to expand the investigation query time to include this time range as well!```
| rex field=message "Accepted time \((?P<acceptedTime>[^\)]+)"
| eval acceptedTime=strptime(acceptedTime, "%a %b %d %H:%M:%S %Y")
| eval firstSeen=if(acceptedTime<firstSeen,acceptedTime,firstSeen)
```Now that we have the first message for each labelled cluster, we take all relevant messages per host/source/sourcetype```
| stats values(acceptedTime) AS acceptedTime, sum(count) AS count, min(firstSeen) AS firstSeen, max(lastSeen) AS lastSeen, values(message) AS message by host, source, sourcetype
| eval invesEnd=if(lastSeen=firstSeen,round(lastSeen+1),round(lastSeen)), invesStart=floor(firstSeen)
| eval invesDataSource = replace(source, "\\\\", "\\\\\\\\")
| eval investigationQuery="```Please note that this query may need to be narrowed down further before running it, this is an example only...``` index=* host=" . host . " sourcetype=\"" . sourcetype . "\" source=\"" . invesDataSource . "\" earliest=" . invesStart . " latest=" . invesEnd . " | eval indextime=strftime(_indextime, \"%+\")"
| eval firstSeen=strftime(firstSeen, "%+"), lastSeen=strftime(lastSeen, "%+")
| table host, source, sourcetype, count, firstSeen, lastSeen, message, investigationQuery
| sort - count</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">20</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">false</option>
</table>
</panel>
</row>
<row>
<panel>
<title>Multiple time formats one sourcetype</title>
<table>
<title>Normally this alert advises that a sourcetype is being used by multiple unique types of data (i.e. it should be more than one sourcetype); one way to fix this is at the universal forwarder via the inputs.conf sourcetype= setting</title>
<search>
<query>```This search detects when the time format has changed within the files one or more times; the time format per sourcetype should be consistent```
index=_internal "DateParserVerbose - Accepted time format has changed" sourcetype=splunkd (`indexerhosts`) OR (`heavyforwarderhosts`) $sourcetype$
| rex "source(?:=|::)(?P<source>[^|]+)\|host(?:=|::)(?P<host>[^|]+)\|(?P<sourcetype>[^|]+)"
| eval message=coalesce(message,event_message)
| stats count, min(_time) AS firstSeen, max(_time) AS lastSeen by host, source, sourcetype, message
| eval invesMaxTime=if(firstSeen=lastSeen,lastSeen+1,lastSeen)
| eval invesDataSource = replace(source, "\\\\", "\\\\\\\\")
| eval potentialInvestigationQuery="```If no results are found, prepend the earliest=/latest= with _index_ (eg _index_earliest=...) and expand the timeframe searched over, as the parsed timestamps from the data does not have to exactly match the time the warnings appeared...``` index=* sourcetype=\"" . sourcetype . "\" source=\"" . invesDataSource . "\" host=" . host . " earliest=" . firstSeen . " latest=" . invesMaxTime . " | eval start=substr(_raw, 0, 30) | cluster field=start"
| eval firstSeen=strftime(firstSeen, "%+"), lastSeen=strftime(lastSeen, "%+")
| fields - invesMaxTime, invesDataSource
| sort - count</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">20</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">false</option>
</table>
</panel>
</row>
<row>
<panel>
<title>Events Broken due to size limits</title>
<table>
<title>The event that came in was greater than the maximum number of lines that were configured, so it was broken into multiple events... (use a LINE_BREAKER or adjust MAX_EVENTS)</title>
<search>
<query>```The event that came in was greater than the maximum number of lines that were configured, therefore it was broken into multiple events...
Also refer to the monitoring console Indexing -> Inputs -> Data Quality```
index=_internal "AggregatorMiningProcessor - Breaking event because limit of" sourcetype=splunkd data_sourcetype=$sourcetype$
| rex "Breaking event because limit of (?P<curlimit>\d+)"
| stats max(_time) AS mostRecent, min(_time) AS firstSeen, count by data_sourcetype, data_host, curlimit
| eval longerThan=curlimit-1
| eval invesLatest = if(mostRecent==firstSeen,mostRecent+1,mostRecent)
| rename data_sourcetype AS sourcetype, data_host AS host
| eval investigationQuery="```If no results are found prepend the earliest=/latest= with _index_ (eg _index_earliest=...) and expand the timeframe searched over, as the parsed timestamps from the data does not have to exactly match the time the warnings appeared...``` index=* host=" . host . " sourcetype=\"" . sourcetype . "\" linecount>" . longerThan . " earliest=" . firstSeen . " latest=" . invesLatest
| fields - firstSeen, longerThan, invesLatest
| eval mostRecent=strftime(mostRecent, "%+")
| sort - count</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">20</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">false</option>
</table>
</panel>
</row>
<row>
<panel>
<title>Data found to be from the future</title>
<table>
<title>Note: hardcoded to look for data indexed in the past week. Finds events which have future-based dates on them; any results are a problem</title>
<search>
<query>index=* earliest=+5m latest=+5y sourcetype=$sourcetype$
| eval ahead=abs(now() - _time)
| eval indextime=_indextime
| bin span=1d indextime
| eval timeToLookBack=now()-(60*60*24*7)
| stats avg(ahead) as averageahead, max(_time) AS maxTime, min(_time) as minTime, count, first(timeToLookBack) AS timeToLookBack by sourcetype, index, indextime
| where indextime>timeToLookBack AND averageahead > 1000
| eval averageahead =tostring(averageahead, "duration")
| eval invesMaxTime=if(minTime=maxTime,maxTime+1,maxTime)
| eval investigationQuery="index=" . index . " sourcetype=\"" . sourcetype . "\" earliest=" . minTime . " latest=" . invesMaxTime . " _index_earliest=" . timeToLookBack . "
| eval indextime=strftime(_indextime, \"%+\")"
| eval indextime=strftime(indextime, "%+"), maxTime = strftime(maxTime, "%+"), minTime = strftime(minTime, "%+")
| table sourcetype, index, averageahead, indextime, minTime, maxTime, count, investigationQuery
| sort - count</query>
<earliest>-24h@h</earliest>
<latest>now</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">20</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">false</option>
</table>
</panel>
</row>
<row>
<panel>
<title>Old data ingested recently</title>
<table>
<title>Hardcoded to find data which was sent in during the past week</title>
<search>
<query>| tstats max(_time) AS mostRecentlySeen, max(_indextime) AS mostRecentlyIndexed, min(_time) AS earliestSeen, min(_indextime) AS earliestIndexTime , count
where _index_earliest=-7d, earliest=-300d, latest=-7d, sourcetype=$sourcetype$
groupby source, sourcetype, index, host
| eval invesDataSource = replace(source, "\\\\", "\\\\\\\\"), invesLatestTime=mostRecentlySeen+1, invesLatestIndexTime=mostRecentlyIndexed+1
| eval investigationQuery="```Narrow down to the older part of the timeline after this query runs to see the potential issue...``` index=" . index . " source=\"" . invesDataSource . "\" sourcetype=\"" . sourcetype . "\" host=" . host . " earliest=" . earliestSeen . " latest=" . invesLatestTime . " _index_earliest=" . earliestIndexTime . " _index_latest=" . invesLatestIndexTime . " | eval indextime=strftime(_indextime, \"%+\")"
| eval mostRecentlySeen=strftime(mostRecentlySeen, "%+"), mostRecentlyIndexed=strftime(mostRecentlyIndexed, "%+")
| sort index, host, sourcetype
| table index, source, sourcetype, host, mostRecentlySeen, mostRecentlyIndexed, count, investigationQuery</query>
<earliest>-12d@d</earliest>
<latest>now</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">100</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">false</option>
</table>
</panel>
</row>
<row>
<panel>
<title>Truncated data</title>
<table>
<title>The line was truncated due to length, the TRUNCATE setting may need tweaking (or it may be just bad data coming in)</title>
<search>
<query>```The line was truncated due to length, the TRUNCATE setting may need tweaking (or it may be just bad data coming in)
Also refer to the Monitoring Console, Indexing -> Inputs -> Data Quality
If you are in a (very) performance sensitive environment you might want to remove the rex/eval lines for the data_host field and let the admin update the investigation query manually```
|
||||
index=_internal "Truncating line because limit of" sourcetype=splunkd data_sourcetype=$sourcetype$ (`heavyforwarderhosts`) OR (`indexerhosts`)
|
||||
| rex "Truncating line because limit of (?P<curlimit>\d+) bytes.*with a line length >= (?P<approxlinelength>\S+)"
|
||||
| rex field=data_host "(?P<data_host>[^\.]+)"
|
||||
| eval data_host=data_host . "*"
|
||||
| stats min(_time) AS firstSeen, max(_time) AS lastSeen, count, avg(approxlinelength) AS avgApproxLineLength, max(approxlinelength) AS maxApproxLineLength, values(data_host) AS hosts by data_sourcetype, curlimit
|
||||
| rename data_sourcetype AS sourcetype
|
||||
| eval hostList=if(mvcount(hosts)>1,mvjoin(hosts," OR host="),hosts)
|
||||
| eval hostList="host=" . hostList
|
||||
| eval avgApproxLineLength = round(avgApproxLineLength)
|
||||
| eval invesLastSeen=if(firstSeen==lastSeen,lastSeen+1,lastSeen)
|
||||
| eval firstSeen=firstSeen-10
|
||||
| eval invesLastSeen=invesLastSeen+10
|
||||
| eval investigationQuery="```Find examples where the truncation limit has been reached. The earliest/latest time is based on the warning messages in the Splunk logs, they may need customisation!``` index=* sourcetype=" . sourcetype . " " . hostList . " earliest=" . firstSeen . " latest=" . invesLastSeen . " | where len(_raw)=" . curlimit
|
||||
| sort - count
|
||||
| eval lastSeen=strftime(lastSeen, "%+")
|
||||
| table sourcetype, curlimit, count, avgApproxLineLength, maxApproxLineLength, lastSeen, investigationQuery
|
||||
| where count>0</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="count">100</option>
|
||||
<option name="dataOverlayMode">none</option>
|
||||
<option name="drilldown">none</option>
|
||||
<option name="percentagesRow">false</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
<option name="rowNumbers">false</option>
|
||||
<option name="totalsRow">false</option>
|
||||
<option name="wrap">false</option>
|
||||
</table>
|
||||
</panel>
|
||||
</row>
|
||||
</form>
|
||||
@ -0,0 +1,103 @@
<form version="1.1">
<label>Dashboard - Knowledge Objects By App</label>
<description>List of knowledge objects per app</description>
<fieldset submitButton="false">
<input type="dropdown" token="app">
<label>Application Name</label>
<fieldForLabel>app</fieldForLabel>
<fieldForValue>app</fieldForValue>
<search>
<query>| rest /services/apps/local search="disabled=0" count=0 f=title splunk_server=local
| rename title as app
| table app</query>
<earliest>-24h@h</earliest>
<latest>now</latest>
</search>
</input>
<input type="dropdown" token="type">
<label>Knowledge Object Type (based on app)</label>
<choice value="*">all</choice>
<choice value="datamodel">datamodel</choice>
<choice value="calcfields">calcfields</choice>
<choice value="macros">macros</choice>
<fieldForLabel>type</fieldForLabel>
<fieldForValue>type</fieldForValue>
<search>
<query>| rest "/servicesNS/-/$app$/directory" count=0 splunk_server=local
| search eai:acl.app=$app$
| rename eai:type AS type
| search type!="macros" ```macros only appears in very recent versions of Splunk via the directory endpoint, so assume it doesn't exist in this query```
| stats count by type
| fields - count</query>
<earliest>-24h@h</earliest>
<latest>now</latest>
</search>
<default>all</default>
<initialValue>*</initialValue>
</input>
</fieldset>
<row>
<panel>
<title>Knowledge object summary</title>
<table>
<search>
<query>| rest "/servicesNS/-/$app$/directory" count=0 splunk_server=local
| search eai:acl.app=$app$
| eval updatedEpoch=strptime(updated,"%Y-%m-%dT%H:%M:%S%:z")
| rename eai:type AS type, eai:acl.app AS app, eai:location AS location
| append [ rest splunk_server=local /servicesNS/-/$app$/datamodel/model count=0 f=updated f=eai:appName | rename eai:appName AS app | eval type="datamodel" ]
| append [ | rest splunk_server=local /servicesNS/-/$app$/data/props/calcfields count=0 | eval type="calcfields" | rename eai:acl.app AS app]
| append [ | rest splunk_server=local /servicesNS/-/$app$/configs/conf-macros count=0 | rename eai:appName AS app | eval type="macros"]
| fillnull location value="N/A"
| search app=$app$
| stats count by type, app, location</query>
<earliest>-4h@m</earliest>
<latest>now</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">100</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">false</option>
</table>
</panel>
</row>
<row>
<panel>
<title>Knowledge Objects by app semi-detailed</title>
<table>
<title>Click any row for the drilldown...</title>
<search>
<query>| rest "/servicesNS/-/$app$/directory" count=0 splunk_server=local
| search eai:acl.app=$app$
| eval updatedEpoch=strptime(updated,"%Y-%m-%dT%H:%M:%S%:z")
| rename eai:type AS type, eai:acl.app AS app, eai:location AS location
| append [ rest splunk_server=local /servicesNS/-/$app$/datamodel/model count=0 f=updated f=eai:appName | rename eai:appName AS app | eval type="datamodel" ]
| append [ | rest splunk_server=local /servicesNS/-/$app$/data/props/calcfields count=0 | eval type="calcfields" | rename eai:acl.app AS app]
| append [ | rest splunk_server=local /servicesNS/-/$app$/configs/conf-macros count=0 | rename eai:appName AS app | eval type="macros"]
| fillnull location value="N/A"
| search app=$app$ type=$type$
| stats values(title) AS names, values(updated) AS updated by eai:acl.owner, eai:acl.sharing, type
| rename eai:acl.sharing AS sharing, eai:acl.owner AS owner</query>
<earliest>-4h@m</earliest>
<latest>now</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">100</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">cell</option>
<option name="percentagesRow">false</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">false</option>
<drilldown>
<link target="_blank">/app/SplunkAdmins/knowledge_objects_by_app_drilldown?form.app=$app$&amp;form.type=$row.type$&amp;form.sharing=$row.sharing$&amp;form.owner=$row.owner$</link>
</drilldown>
</table>
</panel>
</row>
</form>
@ -0,0 +1,94 @@
<form version="1.1">
<label>Knowledge Objects By App Drilldown</label>
<description>List of knowledge objects per app by user/sharing level</description>
<fieldset submitButton="false">
<input type="dropdown" token="app">
<label>Application Name</label>
<fieldForLabel>app</fieldForLabel>
<fieldForValue>app</fieldForValue>
<search>
<query>| rest /services/apps/local search="disabled=0" count=0 f=title splunk_server=local
| rename title as app
| table app</query>
<earliest>-24h@h</earliest>
<latest>now</latest>
</search>
</input>
<input type="dropdown" token="type">
<label>Knowledge Object Type (based on app)</label>
<choice value="*">all</choice>
<choice value="datamodel">datamodel</choice>
<choice value="calcfields">calcfields</choice>
<choice value="macros">macros</choice>
<fieldForLabel>type</fieldForLabel>
<fieldForValue>type</fieldForValue>
<search>
<query>| rest "/servicesNS/-/$app$/directory" count=0 splunk_server=local
| search eai:acl.app=$app$
| rename eai:type AS type
| stats count by type
| fields - count</query>
<earliest>-24h@h</earliest>
<latest>now</latest>
</search>
<default>all</default>
<initialValue>*</initialValue>
</input>
<input type="text" token="owner">
<label>User/Owner</label>
<default>*</default>
</input>
<input type="dropdown" token="sharing">
<label>Sharing Level</label>
<choice value="*">All</choice>
<choice value="app">app</choice>
<choice value="user">user (private)</choice>
<choice value="global">global</choice>
<default>*</default>
<initialValue>*</initialValue>
</input>
<input type="text" token="name">
<label>Knowledge Object Name</label>
<default>*</default>
</input>
<input type="radio" token="disabled">
<label>Exclude disabled?</label>
<choice value="0">Yes</choice>
<choice value="*">No</choice>
<default>*</default>
</input>
</fieldset>
<row>
<panel>
<title>Knowledge object summary</title>
<table>
<search>
<query>| rest "/servicesNS/-/$app$/directory" count=0 splunk_server=local
| search eai:acl.app=$app$
| eval updatedEpoch=strptime(updated,"%Y-%m-%dT%H:%M:%S%:z")
| rename eai:type AS type, eai:acl.app AS app, eai:location AS location
| append [ rest splunk_server=local /servicesNS/-/$app$/datamodel/model count=0 f=updated f=eai:appName | rename eai:appName AS app | eval type="datamodel" ]
| append [ | rest splunk_server=local /servicesNS/-/$app$/data/props/calcfields count=0 | eval type="calcfields" | rename eai:acl.app AS app]
| append [ | rest splunk_server=local /servicesNS/-/$app$/configs/conf-macros count=0 | rename eai:appName AS app | eval type="macros"]
| fillnull disabled
| search app=$app$ type=$type$ title=$name$ eai:acl.sharing=$sharing$ disabled=$disabled$ eai:acl.owner=$owner$
| fillnull location value="N/A"
| rename title AS name, eai:acl.owner AS owner, eai:acl.sharing AS sharing
| eval disabled=case(disabled==0,"false",disabled==1,"true",1==1,"Unknown")
| table name, description, disabled, owner, sharing, type, updated</query>
<earliest>-4h@m</earliest>
<latest>now</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">100</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">false</option>
</table>
</panel>
</row>
</form>
@ -0,0 +1,178 @@
<form version="1.1">
<label>Dashboard - Lookup Audit</label>
<description>Dashboard for displaying lookup table files on a Search Head. Created to easily identify large tables which might disrupt Splunk uptime. Created by Discovered Intelligence -- https://discoveredintelligence.ca, modifications by Gareth Anderson</description>
<search id="base">
<query>| rest /servicesNS/nobody/$appselection_rest$/data/lookup-table-files splunk_server=local
| rename eai:acl.app as appname
| regex appname=^$appselection$$
| dedup appname
| map maxsearches=5000 search=" | rest /servicesNS/-/$appselection_rest$/admin/file-explorer/$splunk_dir|u$%2Fapps%2F$$appname$$%2Flookups splunk_server=local | eval appname=\"$$appname$$\""</query>
</search>
<fieldset submitButton="false" autoRun="true">
<input type="dropdown" token="filter" searchWhenChanged="true">
<label>Select Lookup Filter</label>
<choice value="*">Show All Lookups</choice>
<choice value="NonBlackList">Exclude Blacklisted Lookups</choice>
<choice value="Blacklist">Show Only Blacklisted Lookups</choice>
<change>
<condition value="Blacklist">
<set token="blacklist">
<![CDATA[(
[| rest /servicesNS/-/-/configs/conf-distsearch splunk_server=local
| where title="replicationBlacklist"
| transpose 0 header_field=title
| where like(replicationBlacklist,"apps%") OR like(replicationBlacklist,"%csv")
| eval replicationBlacklist=replace(replicationBlacklist,"\.\.\.","*")
| eval replicationBlacklist=replace(replicationBlacklist,"\[|\]|\\\\","")
| rename replicationBlacklist AS title
| fields title])]]>
</set>
</condition>
<condition value="NonBlackList">
<set token="blacklist">
<![CDATA[NOT (
[| rest /servicesNS/-/-/configs/conf-distsearch splunk_server=local
| where title="replicationBlacklist"
| transpose 0 header_field=title
| where like(replicationBlacklist,"apps%") OR like(replicationBlacklist,"%csv")
| eval replicationBlacklist=replace(replicationBlacklist,"\.\.\.","*")
| eval replicationBlacklist=replace(replicationBlacklist,"\[|\]|\\\\","")
| rename replicationBlacklist AS title
| fields title]) ]]>
</set>
</condition>
<condition value="*">
<set token="blacklist">*</set>
</condition>
</change>
<default>NonBlackList</default>
<initialValue>NonBlackList</initialValue>
</input>
<input type="dropdown" token="appselection" searchWhenChanged="true">
<label>Select App</label>
<choice value=".*">All</choice>
<fieldForLabel>appname</fieldForLabel>
<fieldForValue>appname</fieldForValue>
<search>
<query>| rest /servicesNS/-/-/data/lookup-table-files splunk_server=local
| where like(title,"%csv")
| rename eai:acl.app as appname
| dedup appname
| sort appname</query>
<earliest>-15m</earliest>
<latest>now</latest>
</search>
<change>
<condition value=".*">
<set token="appselection_rest">-</set>
</condition>
<condition>
<set token="appselection_rest">$value$</set>
</condition>
</change>
</input>
<input type="dropdown" token="priv_lookup" searchWhenChanged="true">
<label>Private User Lookup</label>
<choice value="*">All</choice>
<choice value="Yes">Yes</choice>
<choice value="No">No</choice>
<default>*</default>
<initialValue>*</initialValue>
</input>
<input type="text" token="splunk_dir">
<label>Splunk Etc Dir (use \\\\ for Windows paths or / for Unix). C:\\\\program files\\\\splunk\\\\etc (for example)</label>
<initialValue>/opt/splunk/etc</initialValue>
</input>
</fieldset>
<row>
<panel>
<title>Lookup Files by App</title>
<table>
<search base="base">
<query>| rex field=title "[\\\\/]apps[\\\\/](?P<App>.+)[\\\\/]lookups"
| sort - lastModifiedTime
| eval "Last Modified"=strftime(lastModifiedTime,"%b %d, %Y %H:%M"), fileSize_MB=round((fileSize/1024),3)
| fillnull value=0.000 fileSize_MB
| fields App name fileSize_MB "Last Modified" title
| rex field=title "(?<title>apps.*)$"
| search $blacklist$
| join type=left name
[| rest /servicesNS/nobody/$appselection_rest$/data/lookup-table-files splunk_server=local
| rename title AS name
| fields + name author]
| eval private_lookup="No"
| append
[| rest /servicesNS/-/$appselection_rest$/data/lookup-table-files splunk_server=local
| regex eai:data="[\\\\/]users[\\\\/]$appselection$[\\\\/][^\\\\/]+[\\\\/]lookups[/\\\\]"
| rename eai:acl.app as appname, eai:userName AS user
| search appname=*
| dedup appname
| map maxsearches=5000 search=" | rest /servicesNS/-/$appselection_rest$/admin/file-explorer/$splunk_dir|u$%2Fusers%2F$$user$$%2F$$appname$$%2Flookups splunk_server=local"
| rex field=title "[\\\\/]users[\\\\/]$appselection$[\\\\/](?<App>.+)[\\\\/]lookups[\\\\/]"
| sort - lastModifiedTime
| eval "Last Modified"=strftime(lastModifiedTime,"%b %d, %Y %H:%M"), fileSize_MB=round((fileSize/1024),3)
| fillnull value=0.000 fileSize_MB
| fields App name fileSize_MB "Last Modified" title
| rex field=title "(?<title>users.*)$"
| search $blacklist$
| join type=left name
[| rest /servicesNS/-/$appselection_rest$/data/lookup-table-files splunk_server=local
| regex eai:data="$splunk_dir$[\\\\/]users[\\\\/]$appselection$[\\\\/]"
| rename title AS name
| fields + name author]
| eval private_lookup="Yes"
]
| rename title AS path
| search private_lookup="$priv_lookup$"
| sort - fileSize_MB</query>
</search>
<option name="count">100</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">cell</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">true</option>
</table>
</panel>
</row>
<row>
<panel>
<title>Lookup Subdirectories by App</title>
<table>
<title>Note: the blacklist does not work for this panel, and the last modified value is the directory modification date. If the author is blank then no matching lookup definition of type geo was found. Finally, as per the open ideas, the sub-directories under the lookups directory are never reaped by Splunk as of 8.0.3; it is up to the administrator to remove them as required. Also note they are not blacklisted from the knowledge bundle sent to the search peers, and they are created when the geom command is used, so they can differ per search head!</title>
<search base="base">
<query>| eval last_modified=strftime(lastModifiedTime,"%b %d, %Y %H:%M")
| search hasSubNodes=1
| map maxsearches=5000 search=" | rest /servicesNS/-/$appselection_rest$/admin/file-explorer/$splunk_dir|u$%2Fapps%2F$$appname$$%2Flookups%2F$$name$$ splunk_server=local | eval last_modified=\"$$last_modified$$\""
| rex field=title "(?P<path>[^/\\\\]+[/\\\\](?P<App>[^/\\\\]+)[/\\\\][^/\\\\]+[/\\\\](?P<dirname>[^/\\\\]+))[/\\\\][^/\\\\]+$"
| stats sum(fileSize) AS fileSize, values(last_modified) AS "Last Modified" by dirname, App, path
| append
[| rest /servicesNS/-/$appselection_rest$/data/lookup-table-files splunk_server=local
| regex eai:data="$splunk_dir$[\\\\/]users[/\\\\][^/\\\\]+[/\\\\]$appselection$[\\\\/]"
| rename eai:acl.app as appname, eai:userName AS user
| search appname=*
| dedup appname
| map maxsearches=5000 search=" | rest /servicesNS/-/$appselection_rest$/admin/file-explorer/$splunk_dir|u$%2Fusers%2F$$user$$%2F$$appname$$%2Flookups splunk_server=local | eval appname=\"$$appname$$\", user=\"$$user$$\""
| search NOT ignoreme="true"
| search hasSubNodes=1
| eval last_modified=strftime(lastModifiedTime,"%b %d, %Y %H:%M")
| fillnull last_modified
| map maxsearches=5000 search=" | rest /servicesNS/-/$appselection_rest$/admin/file-explorer/$splunk_dir|u$%2Fusers%2F$$user$$%2F$$appname$$%2Flookups%2F$$name$$ splunk_server=local | eval last_modified=\"$$last_modified$$\""
| rex field=title "(?P<path>([^/\\\\]+[/\\\\]){2}(?P<App>[^/\\\\]+)[/\\\\][^/\\\\]+[/\\\\](?P<dirname>[^/\\\\]+))[/\\\\][^/\\\\]+$"
| stats sum(fileSize) AS fileSize, values(last_modified) AS "Last Modified" by dirname, App, path ]
| eval fileSize_MB=round((fileSize/1024),3)
| table App, dirname, fileSize_MB, "Last Modified", path
| join type=left dirname
[| rest /servicesNS/-/$appselection_rest$/data/transforms/lookups splunk_server=local search="type=geo" f=title
| fields + dirname author]
| sort - fileSize_MB</query>
</search>
<option name="count">10</option>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
</row>
</form>
@ -0,0 +1,117 @@
<form version="1.1">
<label>Dashboard - Lookup in use finder</label>
<description>Attempt to detect if a given lookup file is in use within Splunk</description>
<fieldset submitButton="false">
<input type="text" token="lookup_name">
<label>Lookup Name (CSV file name or similar)</label>
</input>
<input type="dropdown" token="app">
<label>App</label>
<choice value="-">All</choice>
<default>-</default>
<initialValue>-</initialValue>
<fieldForLabel>app</fieldForLabel>
<fieldForValue>app</fieldForValue>
<search>
<query>| rest /services/apps/local search="disabled=0" count=0 f=title splunk_server=local
| rename title as app
| table app</query>
<earliest>-15m</earliest>
<latest>now</latest>
</search>
</input>
<input type="time" token="time">
<label>Audit Logs Time Period</label>
<default>
<earliest>-60m@m</earliest>
<latest>now</latest>
</default>
</input>
</fieldset>
<row>
<panel>
<title>Dashboard or Scheduled Search lookups</title>
<table>
<search>
<query>| makeresults
| eval filename="$lookup_name$", lookupDefName=null()
| fields - _time
| append
[| rest splunk_server=local "/servicesNS/-/$app$/data/transforms/lookups" f=eai:* f=filename f=title f=updated
| search filename="$lookup_name$"
| fields title
| rename title AS lookupDefName ]
| tail 1
| fillnull lookupDefName value="youwontfindthisone"
| appendpipe
[ | map
[| rest /servicesNS/-/$app$/data/ui/views splunk_server=local f=eai:* f=label f=title
| fields eai:acl.app, label, title, updated, eai:acl.owner, eai:data
| regex eai:data="(input|output)?lookup\s+($lookup_name$|$$lookupDefName$$)"
| eval type="dashboard"
| fields - eai:data ] ]
| appendpipe [ | map
[| rest /servicesNS/-/$app$/saved/searches splunk_server=local f=eai:* f=title f=search f=updated
| fields eai:acl.owner, title, search, updated, eai:acl.app
| regex search="(input|output)?lookup\s+($lookup_name$|$$lookupDefName$$)"
| eval type="report"
| fields - search ]]
| where isnotnull('eai:acl.app')
| eval searchedApp="$app$"
| where 'eai:acl.app'==searchedApp OR "$app$"=="-"
| fields - filename, lookupDefName
| rename eai:acl.app AS app, eai:acl.owner AS owner</query>
<earliest>-5m</earliest>
<latest>now</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">100</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">false</option>
</table>
</panel>
</row>
<row>
<panel>
<title>Audit Logs Check (note no app context available)</title>
<table>
<search>
<query>| makeresults
| eval filename="$lookup_name$", lookupDefName=null()
| fields - _time
| append
[| rest splunk_server=local "/servicesNS/-/$app$/data/transforms/lookups" f=eai:* f=filename f=title f=updated
| search filename="$lookup_name$"
| fields title
| rename title AS lookupDefName ]
| tail 1
| fillnull lookupDefName value="youwontfindthisone"
| appendpipe
[ | map
[ search index=_audit "info=granted" "search='search " $lookup_name$ search_id!="'ta_*"
| rex ", search='(?P<search>[\S+\s+]+?)', "
| regex search="(input|output)?lookup\s+($lookup_name$|$$lookupDefName$$)"
| fields user, search, search_id, savedsearch_name] ]
| where isnotnull(user)
| table user, search, search_id, savedsearch_name</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">100</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">false</option>
</table>
</panel>
</row>
</form>
@ -0,0 +1,230 @@
|
||||
<form version="1.1">
|
||||
<label>Dashboard - Rolled Buckets By Index</label>
|
||||
<description>A very simple dashboard to determine which index is rolling the largest number of buckets and therefore may require some level of tuning</description>
|
||||
<fieldset submitButton="false">
|
||||
<input type="time" token="time">
|
||||
<label>Time period for rolled bucket graphs</label>
|
||||
<default>
|
||||
<earliest>-3d</earliest>
|
||||
<latest>@d</latest>
|
||||
</default>
|
||||
</input>
|
||||
<input type="dropdown" token="days">
|
||||
<label>Days of data to look over</label>
|
||||
<choice value="3">3</choice>
|
||||
<choice value="7">7</choice>
|
||||
<choice value="14">14</choice>
|
||||
<choice value="30">30</choice>
|
||||
<choice value="60">60</choice>
|
||||
<default>7</default>
|
||||
<prefix>-</prefix>
|
||||
<suffix>d</suffix>
|
||||
</input>
|
||||
</fieldset>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Number of buckets rolled from hot to warm</title>
|
||||
<chart>
|
||||
<title>Buckets rolled per day per index, top 15 indexes</title>
|
||||
<search>
|
||||
<query>index=_internal "HotBucketRoller" sourcetype=splunkd `splunkadmins_splunkd_source` `indexerhosts` "finished moving"
|
||||
| bin _time span=24h
|
||||
| chart limit=15 useother=false count by _time, idx</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
<drilldown>
|
||||
<set token="indexname">$click.name2$</set>
|
||||
</drilldown>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Buckets with largest timespan</title>
|
||||
<table>
|
||||
<title>Buckets sorted by longest average time period (often indicates a timestamp parsing issue as large time periods trigger the buckets to roll early)</title>
|
||||
<search>
|
||||
<query>| dbinspect index=*
|
||||
| eval timePeriod=(endEpoch-startEpoch)/60/60/24
|
||||
| stats avg(timePeriod) AS avgTimePeriod, max(timePeriod) AS maxTimePeriod by index
|
||||
| where avgTimePeriod>5
|
||||
| sort - avgTimePeriod</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="count">10</option>
|
||||
<option name="dataOverlayMode">none</option>
|
||||
<option name="drilldown">cell</option>
|
||||
<option name="percentagesRow">false</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
<option name="rowNumbers">false</option>
|
||||
<option name="totalsRow">false</option>
|
||||
<option name="wrap">false</option>
|
||||
<drilldown>
|
||||
<set token="indexname">$click.value$</set>
|
||||
</drilldown>
|
||||
</table>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>License usage for index $indexname$</title>
|
||||
<chart>
|
||||
<title>Click on an index above for this drilldown to show which the license usage by a particular index</title>
|
||||
<search>
|
||||
<query>index=_internal `licensemasterhost` `splunkadmins_license_usage_source` idx=$indexname$
|
||||
| bin _time span=24h
|
||||
| stats sum(b) AS totalB by idx, _time
|
||||
| eval totalB=totalB/1024/1024/1024
|
||||
| chart avg(totalB) AS totalGB by _time, idx</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
|
||||
<option name="charting.axisTitleX.visibility">visible</option>
|
||||
<option name="charting.axisTitleY.visibility">visible</option>
|
||||
<option name="charting.axisTitleY2.visibility">visible</option>
|
||||
<option name="charting.axisX.abbreviation">none</option>
|
||||
<option name="charting.axisX.scale">linear</option>
|
||||
<option name="charting.axisY.abbreviation">none</option>
|
||||
<option name="charting.axisY.scale">linear</option>
|
||||
<option name="charting.axisY2.abbreviation">none</option>
|
||||
<option name="charting.axisY2.enabled">0</option>
|
||||
<option name="charting.axisY2.scale">inherit</option>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.chart.bubbleMaximumSize">50</option>
|
||||
<option name="charting.chart.bubbleMinimumSize">10</option>
|
||||
<option name="charting.chart.bubbleSizeBy">area</option>
|
||||
<option name="charting.chart.nullValueMode">gaps</option>
|
||||
<option name="charting.chart.showDataLabels">none</option>
|
||||
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
|
||||
<option name="charting.chart.stackMode">default</option>
|
||||
<option name="charting.chart.style">shiny</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="charting.layout.splitSeries">0</option>
|
||||
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
|
||||
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
|
||||
<option name="charting.legend.mode">standard</option>
|
||||
<option name="charting.legend.placement">right</option>
|
||||
<option name="charting.lineWidth">2</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
<option name="trellis.enabled">0</option>
|
||||
<option name="trellis.scales.shared">1</option>
|
||||
<option name="trellis.size">medium</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Bucket Info From DBInspect</title>
|
||||
<table>
|
||||
<title>Show the time span covered by the average bucket in this particular index</title>
|
||||
<search>
|
||||
<query>| dbinspect index=$indexname$
|
||||
| eval timePeriod=(endEpoch-startEpoch)/60/60/24
|
||||
| stats avg(timePeriod) AS avgTimePeriod, min(timePeriod) AS minTimePeriod, max(timePeriod) AS maxTimePeriod, max(sizeOnDiskMB) AS maxSizeMB, avg(sizeOnDiskMB) AS avgSizeMB by index
|
||||
| append
|
||||
[| rest `splunkindexerhostsvalue` /services/data/indexes
|
||||
| search title=$indexname$
|
||||
| head 1
|
||||
| table maxDataSize ]</query>
|
||||
<earliest>$days$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="count">100</option>
|
||||
<option name="dataOverlayMode">none</option>
|
||||
<option name="drilldown">none</option>
|
||||
<option name="percentagesRow">false</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
<option name="rowNumbers">false</option>
|
||||
<option name="totalsRow">false</option>
|
||||
<option name="wrap">false</option>
|
||||
</table>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Sourcetype info for $indexname$</title>
|
||||
<table>
|
||||
<title>Click on any sourcetype to drill down into the historic data in the past week for that sourcetype...</title>
|
||||
<search>
|
||||
<query>| tstats count where index=$indexname$ groupby sourcetype
|
||||
| sort - count</query>
|
||||
<earliest>$days$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="count">20</option>
|
||||
<option name="dataOverlayMode">none</option>
|
||||
<option name="drilldown">cell</option>
|
||||
<option name="percentagesRow">false</option>
|
||||
<option name="rowNumbers">false</option>
|
||||
<option name="totalsRow">false</option>
|
||||
<option name="wrap">false</option>
|
||||
<drilldown>
|
||||
<set token="sourcetype">$click.value2$</set>
|
||||
</drilldown>
|
||||
</table>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Historic data for index $indexname$ indexed in the past $days$</title>
|
||||
<event>
|
||||
<title>Find data indexed in the past $days$ days that is at least 30 days old for sourcetype $sourcetype$ in index $indexname$</title>
|
||||
<search>
|
||||
<query>index=$indexname$ sourcetype=$sourcetype$ _index_earliest=-7d earliest=-300d latest=-30d
|
||||
| eval indextime=strftime(_indextime, "%+")</query>
|
||||
<earliest>$days$</earliest>
|
||||
<latest>now</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="count">20</option>
|
||||
<option name="list.drilldown">none</option>
|
||||
<option name="list.wrap">0</option>
|
||||
<option name="maxLines">100</option>
|
||||
<option name="raw.drilldown">none</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
<option name="rowNumbers">1</option>
|
||||
<option name="table.drilldown">none</option>
|
||||
<option name="table.sortDirection">asc</option>
|
||||
<option name="table.wrap">1</option>
|
||||
<option name="type">list</option>
|
||||
</event>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Future based data</title>
|
||||
<event>
|
||||
<title>Future based data for sourcetype $sourcetype$ in index $indexname$ indexed in the past $days$ days</title>
|
||||
<search>
|
||||
<query>index=$indexname$ sourcetype=$sourcetype$ earliest=+5m latest=+5y _index_earliest=$days$
|
||||
| eval indextime=strftime(_indextime, "%+")</query>
|
||||
<earliest>-5m</earliest>
|
||||
<latest>now</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="count">20</option>
|
||||
<option name="list.drilldown">none</option>
|
||||
<option name="list.wrap">0</option>
|
||||
<option name="maxLines">100</option>
|
||||
<option name="raw.drilldown">none</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
<option name="rowNumbers">1</option>
|
||||
<option name="table.drilldown">none</option>
|
||||
<option name="table.sortDirection">asc</option>
|
||||
<option name="table.wrap">1</option>
|
||||
<option name="type">list</option>
|
||||
</event>
|
||||
</panel>
|
||||
</row>
|
||||
</form>
|
||||
@ -0,0 +1,86 @@
|
||||
<form version="1.1">
|
||||
<label>Dashboard - Search Head ScheduledSearches Distribution</label>
|
||||
<description>Number of scheduler searches per search head</description>
|
||||
<fieldset submitButton="false">
|
||||
<input type="time" token="time">
|
||||
<label></label>
|
||||
<default>
|
||||
<earliest>-24h@h</earliest>
|
||||
<latest>now</latest>
|
||||
</default>
|
||||
</input>
|
||||
</fieldset>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Searches per search head</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>index=_internal `searchheadhosts` sourcetype=scheduler status=delegated_remote_completion | timechart count by member_label</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
|
||||
<option name="charting.axisTitleX.visibility">visible</option>
|
||||
<option name="charting.axisTitleY.visibility">visible</option>
|
||||
<option name="charting.axisTitleY2.visibility">visible</option>
|
||||
<option name="charting.axisX.scale">linear</option>
|
||||
<option name="charting.axisY.scale">linear</option>
|
||||
<option name="charting.axisY2.enabled">0</option>
|
||||
<option name="charting.axisY2.scale">inherit</option>
|
||||
<option name="charting.chart">area</option>
|
||||
<option name="charting.chart.bubbleMaximumSize">50</option>
|
||||
<option name="charting.chart.bubbleMinimumSize">10</option>
|
||||
<option name="charting.chart.bubbleSizeBy">area</option>
|
||||
<option name="charting.chart.nullValueMode">gaps</option>
|
||||
<option name="charting.chart.showDataLabels">none</option>
|
||||
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
|
||||
<option name="charting.chart.stackMode">stacked100</option>
|
||||
<option name="charting.chart.style">shiny</option>
|
||||
<option name="charting.drilldown">all</option>
|
||||
<option name="charting.layout.splitSeries">0</option>
|
||||
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
|
||||
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
|
||||
<option name="charting.legend.placement">right</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Scheduled searches starting later than 100 seconds after the scheduled time (mostly harmless as the now time period relates to the original scheduled time)</title>
|
||||
<input type="dropdown" token="exclude" searchWhenChanged="true">
|
||||
<label>Exclude</label>
|
||||
<choice value="__NOEXCLUSION__">None</choice>
|
||||
<choice value="_ACCELERATE*">_ACCELERATE</choice>
|
||||
<default>__NOEXCLUSION__</default>
|
||||
<initialValue>__NOEXCLUSION__</initialValue>
|
||||
</input>
|
||||
<input type="text" token="userequals">
|
||||
<label>user equals</label>
|
||||
<default>*</default>
|
||||
<initialValue>*</initialValue>
|
||||
</input>
|
||||
<input type="text" token="usernotequalto">
|
||||
<label>Exclude Username</label>
|
||||
<default>noexclusion</default>
|
||||
<initialValue>noexclusion</initialValue>
|
||||
</input>
|
||||
<table>
|
||||
<search>
|
||||
<query>index=_internal `searchheadhosts` sourcetype=scheduler app=* scheduled_time=* savedsearch_name!=$exclude$ user=$userequals$ user!=$usernotequalto$ | eval time=strftime(_time,"%Y-%m-%d %H:%M:%S") | eval delay_in_start = (dispatch_time - scheduled_time) | where delay_in_start>100 | eval scheduled_time=strftime(scheduled_time,"%Y-%m-%d %H:%M:%S") | eval dispatch_time=strftime(dispatch_time,"%Y-%m-%d %H:%M:%S") | rename time AS endTime | table host,savedsearch_name,delay_in_start, scheduled_time, dispatch_time, endTime, run_time, status, user, app | sort -delay_in_start | dedup host,savedsearch_name,delay_in_start</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="count">100</option>
|
||||
<option name="dataOverlayMode">none</option>
|
||||
<option name="drilldown">cell</option>
|
||||
<option name="percentagesRow">false</option>
|
||||
<option name="rowNumbers">false</option>
|
||||
<option name="totalsRow">false</option>
|
||||
<option name="wrap">true</option>
|
||||
</table>
|
||||
</panel>
|
||||
</row>
|
||||
</form>
|
||||
@ -0,0 +1,178 @@
|
||||
<form version="1.1">
|
||||
<label>Dashboard - SmartStore Stats</label>
|
||||
<description>Also refer to https://github.com/camrunr/s2_traffic_report/blob/master/s2_traffic_report.xml for an alternative view of SmartStore downloads/uploads. To determine which searches are causing cache misses refer to the SearchHeadLevel - SmartStore cache misses reports in this app. Note that the combined cache misses report requires the search to complete, while the indexing tier version can catch an in-progress search</description>
|
||||
<fieldset submitButton="false">
|
||||
<input type="time" token="time">
|
||||
<label></label>
|
||||
<default>
|
||||
<earliest>-60m@m</earliest>
|
||||
<latest>now</latest>
|
||||
</default>
|
||||
</input>
|
||||
<input type="dropdown" token="action">
|
||||
<label>Action</label>
|
||||
<choice value="*">All</choice>
|
||||
<choice value="download">download</choice>
|
||||
<choice value="upload">upload</choice>
|
||||
<default>*</default>
|
||||
</input>
|
||||
<input type="text" token="host">
|
||||
<label>host</label>
|
||||
<default></default>
|
||||
</input>
|
||||
<input type="dropdown" token="host">
|
||||
<label>host</label>
|
||||
<choice value="`indexerhosts`">All Indexers</choice>
|
||||
<default>`indexerhosts`</default>
|
||||
</input>
|
||||
</fieldset>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Also refer to</title>
|
||||
<html><a href="https://github.com/camrunr/s2_traffic_report/blob/master/s2_traffic_report.xml">SmartStore S2S Traffic report</a> for an alternative dashboard view or <a href="/app/SplunkAdmins/report?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FSearchHeadLevel%2520-%2520SmartStore%2520cache%2520misses%2520-%2520combined">SearchHeadLevel - SmartStore cache misses combined</a> or <a href="/app/SplunkAdmins/report?s=%2FservicesNS%2Fnobody%2FSplunkAdmins%2Fsaved%2Fsearches%2FIndexerLevel%2520-%2520SmartStore%2520cache%2520misses%2520-%2520remote_searches">SmartStore cache misses - remote_searches</a> to find the searches that are triggering the cache misses</html>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Upload/download latency</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>index=_internal $host$ TERM(status=succeeded) OR TERM(status=failed) sourcetype=splunkd `splunkadmins_splunkd_source` TERM(action=$action$)
|
||||
| rangemap field=kb under_300=0-307200 300_700=307201-716800 700_1000=716801-1024000 default=over1000
|
||||
| eval combined = action . "_" . range
|
||||
| timechart avg(elapsed_ms) AS avg_elapsed_ms, max(elapsed_ms) AS max_elapsed_ms by combined</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
|
||||
<option name="charting.axisTitleX.visibility">visible</option>
|
||||
<option name="charting.axisTitleY.visibility">visible</option>
|
||||
<option name="charting.axisTitleY2.visibility">visible</option>
|
||||
<option name="charting.axisX.abbreviation">none</option>
|
||||
<option name="charting.axisX.scale">linear</option>
|
||||
<option name="charting.axisY.abbreviation">none</option>
|
||||
<option name="charting.axisY.scale">linear</option>
|
||||
<option name="charting.axisY2.abbreviation">none</option>
|
||||
<option name="charting.axisY2.enabled">0</option>
|
||||
<option name="charting.axisY2.scale">inherit</option>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.chart.bubbleMaximumSize">50</option>
|
||||
<option name="charting.chart.bubbleMinimumSize">10</option>
|
||||
<option name="charting.chart.bubbleSizeBy">area</option>
|
||||
<option name="charting.chart.nullValueMode">gaps</option>
|
||||
<option name="charting.chart.showDataLabels">none</option>
|
||||
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
|
||||
<option name="charting.chart.stackMode">default</option>
|
||||
<option name="charting.chart.style">shiny</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="charting.layout.splitSeries">0</option>
|
||||
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
|
||||
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
|
||||
<option name="charting.legend.mode">standard</option>
|
||||
<option name="charting.legend.placement">right</option>
|
||||
<option name="charting.lineWidth">2</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
<option name="trellis.enabled">0</option>
|
||||
<option name="trellis.scales.shared">1</option>
|
||||
<option name="trellis.size">medium</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Upload/download throughput</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>index=_internal sourcetype=splunkd `splunkadmins_splunkd_source` $host$ TERM(status=succeeded) OR TERM(status=failed) TERM(action=$action$)
|
||||
| timechart sum(eval(kb/1024)) AS MB by action</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
</search>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>CacheManager Queued download count</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>```Relates to [cachemanager] max_concurrent_downloads in server.conf. Thanks to Splunk support for the original version of this search``` index=_internal $host$ `splunkadmins_metrics_source` TERM(group=cachemgr_download) sourcetype=splunkd queued
|
||||
| timechart partial=f limit=50 avg(queued) AS avg_queued by host
|
||||
| eval ceiling=20</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
</search>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>CacheManager hits/misses</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>
|
||||
index=_internal $host$ `splunkadmins_metrics_source` sourcetype=splunkd group=cachemgr_bucket TERM(cache_hit=*) OR TERM(cache_miss=*)
|
||||
| timechart sum(cache_hit) as Hits sum(cache_miss) as Misses
|
||||
</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
</search>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Excessive cachemanager downloads</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>```Thanks to Splunk support for the original version of this search, similar version available in the monitoring console...``` index=_internal $host$ `splunkadmins_splunkd_source` sourcetype=splunkd CacheManager TERM(action=download) TERM(status=succeeded) TERM(download_set=*)
|
||||
| rex field=cache_id ">*\|(?<index_name>.*)~.*~.*\|"
|
||||
| eval identifier=(cache_id + host)
|
||||
| stats count by identifier, index_name
|
||||
| stats count(eval(count>1)) as duplicate_downloads, sum(count) as all_downloads
|
||||
count(eval(count>8)) as excessive_duplicate_downloads by index_name
|
||||
| eval duplicate_percent=if(all_downloads=0,0,round((duplicate_downloads/all_downloads)*100,2))
|
||||
| fields index_name, duplicate_percent all_downloads duplicate_downloads excessive_duplicate_downloads
|
||||
| rename index_name as Index, duplicate_percent as "Repeat Download %", all_downloads as "All Downloads", duplicate_downloads as "Repeated"</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
</search>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>CacheManager downloads by age/index</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>```Thanks to Splunk support for the original version of this search``` index=_audit $host$ TERM(action=remote_bucket_download) TERM(info=completed)
|
||||
| eval gbps=kb/1024/1024
|
||||
| eval age=round((now()-earliest_time)/60/60/24)
|
||||
| bucket span=30 age
|
||||
| rex field=cache_id "^[^\|]+\|(?P<index_name>[^~]+)~[^~]+~[^~]+"
|
||||
| eval age_index = age. " - ".index_name
|
||||
|timechart span=60s sum(gbps) by age_index limit=10 useother=f usenull=f</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
</search>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
</form>
|
||||
@ -0,0 +1,150 @@
|
||||
<form version="1.1">
|
||||
<label>Dashboard - Forwarder Data Balance</label>
|
||||
<description>Attempt to measure data balance between heavy forwarders (HFs); original version by Brett Adams, similar to splunk_forwarder_output_tuning</description>
|
||||
<fieldset submitButton="false">
|
||||
<input type="time" token="time">
|
||||
<label></label>
|
||||
<default>
|
||||
<earliest>-60m@m</earliest>
|
||||
<latest>now</latest>
|
||||
</default>
|
||||
</input>
|
||||
<input type="text" token="host">
|
||||
<label>host</label>
|
||||
<default>`heavyforwarderhosts`</default>
|
||||
</input>
|
||||
<input type="dropdown" token="output_group">
|
||||
<label>Output Group</label>
|
||||
<fieldForLabel>output_name</fieldForLabel>
|
||||
<fieldForValue>output_name</fieldForValue>
|
||||
<search>
|
||||
<query>index=_internal $host$ sourcetype=splunkd `splunkadmins_metrics_source` TERM(group=tcpout_connections)
|
||||
| rex field=name "(?P<output_name>[^:]+)"
|
||||
| stats count by output_name
|
||||
| fields output_name</query>
|
||||
<earliest>-60m@m</earliest>
|
||||
<latest>now</latest>
|
||||
</search>
|
||||
</input>
|
||||
</fieldset>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Scatter Line Chart of sum by destination IP</title>
|
||||
<viz type="Splunk_ML_Toolkit.ScatterLineViz">
|
||||
<search>
|
||||
<query>index=_internal $host$ sourcetype=splunkd `splunkadmins_metrics_source` component=Metrics TERM(group=tcpout_connections) name=$output_group$*
|
||||
| timechart span=1m sum(kb) by destIp limit=50
|
||||
| fillnull value=0
|
||||
| untable _time server kb
|
||||
| eval t=_time-now()
|
||||
| table t kb</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="Splunk_ML_Toolkit.ScatterLineViz.identityLine">false</option>
|
||||
<option name="Splunk_ML_Toolkit.ScatterLineViz.legendAlign">bottom</option>
|
||||
<option name="Splunk_ML_Toolkit.ScatterLineViz.legendOrder">numeric</option>
|
||||
<option name="Splunk_ML_Toolkit.ScatterLineViz.showAxisLabels">true</option>
|
||||
<option name="Splunk_ML_Toolkit.ScatterLineViz.showLegend">false</option>
|
||||
<option name="drilldown">none</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
<option name="trellis.enabled">0</option>
|
||||
<option name="trellis.scales.shared">1</option>
|
||||
<option name="trellis.size">medium</option>
|
||||
</viz>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Total KB by destination IP</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>index=_internal $host$ sourcetype=splunkd `splunkadmins_metrics_source` component=Metrics TERM(group=tcpout_connections) name=$output_group$*
|
||||
| timechart span=1m sum(kb) by destIp limit=100
|
||||
| fillnull value=0</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
|
||||
<option name="charting.axisTitleX.visibility">collapsed</option>
|
||||
<option name="charting.axisTitleY.visibility">collapsed</option>
|
||||
<option name="charting.axisTitleY2.visibility">visible</option>
|
||||
<option name="charting.axisX.abbreviation">none</option>
|
||||
<option name="charting.axisX.scale">linear</option>
|
||||
<option name="charting.axisY.abbreviation">none</option>
|
||||
<option name="charting.axisY.maximumNumber">1000000</option>
|
||||
<option name="charting.axisY.scale">linear</option>
|
||||
<option name="charting.axisY2.abbreviation">none</option>
|
||||
<option name="charting.axisY2.enabled">0</option>
|
||||
<option name="charting.axisY2.scale">inherit</option>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.chart.bubbleMaximumSize">50</option>
|
||||
<option name="charting.chart.bubbleMinimumSize">10</option>
|
||||
<option name="charting.chart.bubbleSizeBy">area</option>
|
||||
<option name="charting.chart.nullValueMode">zero</option>
|
||||
<option name="charting.chart.showDataLabels">none</option>
|
||||
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
|
||||
<option name="charting.chart.stackMode">stacked</option>
|
||||
<option name="charting.chart.style">shiny</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="charting.layout.splitSeries">0</option>
|
||||
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
|
||||
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
|
||||
<option name="charting.legend.mode">standard</option>
|
||||
<option name="charting.legend.placement">none</option>
|
||||
<option name="charting.lineWidth">2</option>
|
||||
<option name="height">365</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
<option name="trellis.enabled">0</option>
|
||||
<option name="trellis.scales.shared">1</option>
|
||||
<option name="trellis.size">medium</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Standard Deviation</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>index=_internal $host$ sourcetype=splunkd `splunkadmins_metrics_source` component=Metrics TERM(group=tcpout_connections) name=$output_group$*
|
||||
| timechart span=1m sum(kb) by destIp limit=50
|
||||
| fillnull value=0
|
||||
| untable _time destIp kb
|
||||
| stats avg(kb) as avg stdev(kb) as stdev by _time
|
||||
| eval devperc = stdev/avg*100
|
||||
| table _time devperc</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
</search>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.chart.nullValueMode">connect</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Data Sum</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>index=_internal $host$ sourcetype=splunkd `splunkadmins_metrics_source` component=Metrics TERM(group=tcpout_connections) kb>0 name=$output_group$*
|
||||
| bin span=1m _time
|
||||
| stats sum(kb) as kb by destIp _time
|
||||
| sort _time
|
||||
| streamstats sum(kb) as sumkb by destIp
|
||||
| timechart span=1m max(sumkb) by destIp useother=false limit=50</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
</search>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.chart.nullValueMode">connect</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
</form>
|
||||
@ -0,0 +1,136 @@
|
||||
<form version="1.1">
|
||||
<label>Dashboard - Splunk forwarder output tuning</label>
|
||||
<description>Splunk forwarder to indexer output tuning</description>
|
||||
<fieldset submitButton="false">
|
||||
<input type="time" token="time">
|
||||
<label>time</label>
|
||||
<default>
|
||||
<earliest>-60m@m</earliest>
|
||||
<latest>now</latest>
|
||||
</default>
|
||||
</input>
|
||||
<input type="text" token="host">
|
||||
<label>host</label>
|
||||
<default>`heavyforwarderhosts`</default>
|
||||
</input>
|
||||
<input type="dropdown" token="output_group">
|
||||
<label>Output Group</label>
|
||||
<fieldForLabel>output_name</fieldForLabel>
|
||||
<fieldForValue>output_name</fieldForValue>
|
||||
<search>
|
||||
<query>index=_internal $host$ sourcetype=splunkd `splunkadmins_metrics_source` TERM(group=tcpout_connections)
|
||||
| rex field=name "(?P<output_name>[^:]+)"
|
||||
| stats count by output_name
|
||||
| fields output_name</query>
|
||||
<earliest>-60m@m</earliest>
|
||||
<latest>now</latest>
|
||||
</search>
|
||||
</input>
|
||||
<input type="dropdown" token="split_by">
|
||||
<label>Split by host?</label>
|
||||
<choice value="host">Yes</choice>
|
||||
<choice value="""">No</choice>
|
||||
</input>
|
||||
</fieldset>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Data output per-second</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>index=_internal $host$ sourcetype=splunkd `splunkadmins_metrics_source` TERM(group=tcpout_connections) name=$output_group$*
|
||||
| rex field=name "(?<output_name>[^:]+)"
|
||||
| search output_name=$output_group$
|
||||
| fillnull ingest_pipe
|
||||
| eval combined = output_name . "_" . ingest_pipe
|
||||
| bin _time span=1m
|
||||
| stats sum(kb) AS totalkb by combined, host, _time
|
||||
| eval totalkb=totalkb/60
|
||||
| eval combined = $split_by$ . combined
|
||||
| timechart limit=99 avg(totalkb) AS avgkb, perc95(totalkb) AS perc95kb, min(totalkb) AS minkb by combined</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
</search>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Destination count</title>
|
||||
<table>
|
||||
<search>
|
||||
<query>index=_internal $host$ sourcetype=splunkd `splunkadmins_metrics_source` group=tcpout_connections name=$output_group$*
|
||||
| rex field=name "(?<output_name>[^:]+)"
|
||||
| search output_name=$output_group$
|
||||
| bin _time span=5m
|
||||
| stats dc(destIp) AS destination_count by output_name, host, _time
|
||||
| stats min(destination_count) AS min_destination_count, avg(destination_count) AS avg_destination_count by output_name</query>
|
||||
<earliest>-24h@h</earliest>
|
||||
<latest>now</latest>
|
||||
</search>
|
||||
<option name="drilldown">none</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
</table>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Data output std deviation</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>```Credit to Brett Adams``` index=_internal $host$ sourcetype=splunkd `splunkadmins_metrics_source` component=Metrics TERM(group=tcpout_connections) name=$output_group$*
|
||||
| rex field=name "(?P<destination>[^:]+)"
|
||||
| search destination=$output_group$*
|
||||
| timechart span=1m sum(kb) by destIp limit=50
|
||||
| fillnull value=0
|
||||
| untable _time destIp kb
|
||||
| stats avg(kb) as avg stdev(kb) as stdev by _time
|
||||
| eval dev_perc = stdev/avg*100
|
||||
| table _time dev_perc</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
</search>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Dashboard info</title>
|
||||
<html>
|
||||
<body>
|
||||
<p>Purpose of the destination count table? metrics.log only records the tcpout data *if* the connection is open at the time metrics.log is written, so the count is a sanity check that the number of connections matches the number of forwarders on the backend (this will happen with the below outputs.conf settings combined with regular data flow)</p>
|
||||
<br/>
|
||||
<p><a href="https://docs.splunk.com/Documentation/SVA/current/Architectures/Intermediaterouting#Asynchronous_load_balancing"> Asynchronous load balancing (docs.splunk.com) </a></p>
|
||||
<p><a href="https://www.linkedin.com/pulse/splunk-asynchronous-forwarding-lightning-fast-data-ingestor-rawat"> Splunk Asynchronous Forwarding (Lightning-fast data ingestor)</a></p>
|
||||
<p>Purpose of the data output per-second timechart? The current goal is to get close to switching indexers every second for an output group (per pipeline). Note that this will result in more open connections to indexers, so it only really works if deployed to a moderate number of intermediate forwarders (HFs or similar). You want to do this with autoLBVolume; if you instead lower autoLBFrequency to a very short time period you may end up with uneven data balance, due to switching frequently when forwarding smaller volumes of data. In my testing so far, aiming above the average kb/s for the autoLBVolume appears to work well, while going too low does not.</p>
|
||||
<p>Please read the linked article for information on these settings. Note that when using async forwarding the open file descriptor usage is higher than without it, as the connections are held open by forwarders. This works great on an intermediate forwarding tier, but may not work so well with a very large number of forwarders</p>
|
||||
<p>Also note that the maxQueueSize should not be below 10MB (10MB is the minimum size)</p>
|
||||
<p>If you are using an AWS NLB, you may wish to refer to this newer post <a href="https://www.linkedin.com/posts/harendra-rawat-b10b41_asynchronous-forwarding-with-nlb-activity-7112204069363933185-SYRv"> Asynchronous forwarding with NLB</a></p>
|
||||
<p>Also note that while this works on UFs, there are some reasons why you may want to consider HFs if you are running an intermediate tier; see this answers post: <a href="https://community.splunk.com/t5/Getting-Data-In/Wrongly-merged-Events-permanently-blocked-tcpout-queue-with/m-p/508743">Wrongly merged Events/permanently blocked tcpout queue with Intermediate Universal Forwarder</a></p>
|
||||
<br/><p>Finally, you may want to refer to <a href="https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/683768">Slow indexer/receiver detection capability</a></p>
|
||||
<p>What config is used to achieve the above?</p>
|
||||
<p>outputs.conf settings, based on approximately 1MB/s of throughput:
|
||||
</p><p><code>maxQueueSize = 10MB</code>
|
||||
</p>
|
||||
<p>
|
||||
<code>#autoLBVolume is set to below 1/5 of the maxQueueSize due to changes post-7.3.6 (which will hopefully be documented in the near future); minimum 10MB queue</code>
|
||||
</p>
|
||||
<p>
|
||||
<code>autoLBVolume = 1024000</code>
|
||||
</p>
|
||||
<p>
|
||||
<code>autoLBFrequency = 10</code>
|
||||
</p>
|
||||
<p>
|
||||
<code>connectionTTL = 300</code>
|
||||
</p>
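<p>Putting the settings above together, a complete outputs.conf stanza might look like the following sketch. The output group name and server list are placeholder values for illustration only; tune autoLBVolume to sit above your measured average KB/s per pipeline as discussed above:</p>
<pre><code>[tcpout:primary_indexers]
# hypothetical receiver list - replace with your indexers
server = indexer1.example.com:9997, indexer2.example.com:9997
# 10MB is the minimum queue size for this approach
maxQueueSize = 10MB
# switch indexers after ~1MB forwarded (below 1/5 of maxQueueSize)
autoLBVolume = 1024000
autoLBFrequency = 10
# close idle connections after 300 seconds
connectionTTL = 300
</code></pre>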
|
||||
</body>
|
||||
</html>
|
||||
</panel>
|
||||
</row>
|
||||
</form>
|
||||
|
||||
<form version="1.1">
|
||||
<label>Dashboard - Splunk Introspection IO Comparison</label>
|
||||
<fieldset submitButton="false">
|
||||
<input type="time" token="time">
|
||||
<label>time</label>
|
||||
<default>
|
||||
<earliest>-24h@h</earliest>
|
||||
<latest>now</latest>
|
||||
</default>
|
||||
</input>
|
||||
<input type="text" token="hosts">
|
||||
<label>hosts</label>
|
||||
<default>`indexerhosts`</default>
|
||||
</input>
|
||||
<input type="text" token="span">
|
||||
<label>span</label>
|
||||
<default>1m</default>
|
||||
</input>
|
||||
</fieldset>
|
||||
<row>
|
||||
<panel>
|
||||
<title>data.avg_total_ms (average wait time)</title>
|
||||
<chart>
|
||||
<title>perc95 total io service time per host (sum of all disks avg_total_ms)</title>
|
||||
<search>
|
||||
<query>index=_introspection sourcetype=splunk_resource_usage component=IOStats $hosts$ data.device=nvme*
|
||||
| eval avg_total_ms = 'data.avg_total_ms', comment="You may wish to change sum(avg_total_ms) for perc95 or similar depending on your setup..."
|
||||
| bin _time span=$span$
|
||||
| stats sum(avg_total_ms) AS avg_total_ms by host, _time
|
||||
| timechart span=$span$ partial=f limit=99 perc95(avg_total_ms) AS avg_total_ms by host</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
|
||||
<option name="charting.axisTitleX.visibility">visible</option>
|
||||
<option name="charting.axisTitleY.visibility">visible</option>
|
||||
<option name="charting.axisTitleY2.visibility">visible</option>
|
||||
<option name="charting.axisX.abbreviation">none</option>
|
||||
<option name="charting.axisX.scale">linear</option>
|
||||
<option name="charting.axisY.abbreviation">none</option>
|
||||
<option name="charting.axisY.scale">linear</option>
|
||||
<option name="charting.axisY2.abbreviation">none</option>
|
||||
<option name="charting.axisY2.enabled">0</option>
|
||||
<option name="charting.axisY2.scale">inherit</option>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.chart.bubbleMaximumSize">50</option>
|
||||
<option name="charting.chart.bubbleMinimumSize">10</option>
|
||||
<option name="charting.chart.bubbleSizeBy">area</option>
|
||||
<option name="charting.chart.nullValueMode">gaps</option>
|
||||
<option name="charting.chart.showDataLabels">none</option>
|
||||
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
|
||||
<option name="charting.chart.stackMode">default</option>
|
||||
<option name="charting.chart.style">shiny</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="charting.layout.splitSeries">0</option>
|
||||
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
|
||||
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
|
||||
<option name="charting.legend.mode">standard</option>
|
||||
<option name="charting.legend.placement">right</option>
|
||||
<option name="charting.lineWidth">2</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
<option name="trellis.enabled">0</option>
|
||||
<option name="trellis.scales.shared">1</option>
|
||||
<option name="trellis.size">medium</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>data.reads_ps/data.writes_ps</title>
|
||||
<chart>
|
||||
<title>perc95 reads/writes per second (IOPS)</title>
|
||||
<search>
|
||||
<query>index=_introspection sourcetype=splunk_resource_usage component=IOStats $hosts$ data.device=nvme*
|
||||
| eval reads_ps = 'data.reads_ps', writes_ps = 'data.writes_ps'
|
||||
| bin _time span=$span$
|
||||
| stats sum(reads_ps) AS reads_ps, sum(writes_ps) AS writes_ps by host, _time
|
||||
| timechart span=$span$ partial=f limit=99 perc95(reads_ps) AS reads_ps, perc95(writes_ps) AS writes_ps by host</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
|
||||
<option name="charting.axisTitleX.visibility">visible</option>
|
||||
<option name="charting.axisTitleY.visibility">visible</option>
|
||||
<option name="charting.axisTitleY2.visibility">visible</option>
|
||||
<option name="charting.axisX.abbreviation">none</option>
|
||||
<option name="charting.axisX.scale">linear</option>
|
||||
<option name="charting.axisY.abbreviation">none</option>
|
||||
<option name="charting.axisY.scale">linear</option>
|
||||
<option name="charting.axisY2.abbreviation">none</option>
|
||||
<option name="charting.axisY2.enabled">0</option>
|
||||
<option name="charting.axisY2.scale">inherit</option>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.chart.bubbleMaximumSize">50</option>
|
||||
<option name="charting.chart.bubbleMinimumSize">10</option>
|
||||
<option name="charting.chart.bubbleSizeBy">area</option>
|
||||
<option name="charting.chart.nullValueMode">gaps</option>
|
||||
<option name="charting.chart.showDataLabels">none</option>
|
||||
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
|
||||
<option name="charting.chart.stackMode">default</option>
|
||||
<option name="charting.chart.style">shiny</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="charting.layout.splitSeries">0</option>
|
||||
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
|
||||
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
|
||||
<option name="charting.legend.mode">standard</option>
|
||||
<option name="charting.legend.placement">right</option>
|
||||
<option name="charting.lineWidth">2</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
<option name="trellis.enabled">0</option>
|
||||
<option name="trellis.scales.shared">1</option>
|
||||
<option name="trellis.size">medium</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>data.reads_kb_ps/data.writes_kb_ps</title>
|
||||
<chart>
|
||||
<title>perc95 read KB/write KB per second</title>
|
||||
<search>
|
||||
<query>index=_introspection sourcetype=splunk_resource_usage component=IOStats $hosts$ data.device=nvme*
|
||||
| eval reads_kb_ps = 'data.reads_kb_ps', writes_kb_ps = 'data.writes_kb_ps'
|
||||
| bin _time span=$span$
|
||||
| stats sum(reads_kb_ps) AS reads_kb_ps, sum(writes_kb_ps) AS writes_kb_ps by host, _time
|
||||
| timechart span=$span$ partial=f limit=99 perc95(reads_kb_ps) AS reads_kb_ps, perc95(writes_kb_ps) AS writes_kb_ps by host</query>
|
||||
<earliest>$time.earliest$</earliest>
|
||||
<latest>$time.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
|
||||
<option name="charting.axisTitleX.visibility">visible</option>
|
||||
<option name="charting.axisTitleY.visibility">visible</option>
|
||||
<option name="charting.axisTitleY2.visibility">visible</option>
|
||||
<option name="charting.axisX.abbreviation">none</option>
|
||||
<option name="charting.axisX.scale">linear</option>
|
||||
<option name="charting.axisY.abbreviation">none</option>
|
||||
<option name="charting.axisY.scale">linear</option>
|
||||
<option name="charting.axisY2.abbreviation">none</option>
|
||||
<option name="charting.axisY2.enabled">0</option>
|
||||
<option name="charting.axisY2.scale">inherit</option>
|
||||
<option name="charting.chart">line</option>
|
||||
<option name="charting.chart.bubbleMaximumSize">50</option>
|
||||
<option name="charting.chart.bubbleMinimumSize">10</option>
|
||||
<option name="charting.chart.bubbleSizeBy">area</option>
|
||||
<option name="charting.chart.nullValueMode">gaps</option>
|
||||
<option name="charting.chart.showDataLabels">none</option>
|
||||
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
|
||||
<option name="charting.chart.stackMode">default</option>
|
||||
<option name="charting.chart.style">shiny</option>
|
||||
<option name="charting.drilldown">none</option>
|
||||
<option name="charting.layout.splitSeries">0</option>
|
||||
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
|
||||
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
|
||||
<option name="charting.legend.mode">standard</option>
|
||||
<option name="charting.legend.placement">right</option>
|
||||
<option name="charting.lineWidth">2</option>
|
||||
<option name="refresh.display">progressbar</option>
|
||||
<option name="trellis.enabled">0</option>
|
||||
<option name="trellis.scales.shared">1</option>
|
||||
<option name="trellis.size">medium</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
</form>
|
||||
|
||||
<form version="1.1">
|
||||
<label>Dashboard - Troubleshooting Indexer CPU</label>
|
||||
<fieldset submitButton="false">
|
||||
<input type="time" token="time_tok">
|
||||
<label>General Time Picker</label>
|
||||
<default>
|
||||
<earliest>-4h@h</earliest>
|
||||
<latest>@h</latest>
|
||||
</default>
|
||||
</input>
|
||||
<input type="text" token="user">
|
||||
<label>User</label>
|
||||
</input>
|
||||
<input type="dropdown" searchWhenChanged="true" token="interval">
|
||||
<label>Interval</label>
|
||||
<choice value="10m">10m</choice>
|
||||
<choice value="30m">30m</choice>
|
||||
<choice value="60m">60m</choice>
|
||||
<choice value="120m">120m</choice>
|
||||
<choice value="240m">4h</choice>
|
||||
<default>60m</default>
|
||||
</input>
|
||||
<input type="time" token="CPUtimetoken">
|
||||
<label>CPU Based Time Picker (Pie charts near top)</label>
|
||||
<default>
|
||||
<earliest>-1h@h</earliest>
|
||||
<latest>@h</latest>
|
||||
</default>
|
||||
</input>
|
||||
</fieldset>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Search Count Per Application</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* | eval app = 'data.search_props.app'
|
||||
| chart count by app</query>
|
||||
<earliest>$CPUtimetoken.earliest$</earliest>
|
||||
<latest>$CPUtimetoken.latest$</latest>
|
||||
</search>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
|
||||
<option name="charting.axisTitleX.visibility">visible</option>
|
||||
<option name="charting.axisTitleY.visibility">visible</option>
|
||||
<option name="charting.axisTitleY2.visibility">visible</option>
|
||||
<option name="charting.axisX.scale">linear</option>
|
||||
<option name="charting.axisY.scale">linear</option>
|
||||
<option name="charting.axisY2.enabled">0</option>
|
||||
<option name="charting.axisY2.scale">inherit</option>
|
||||
<option name="charting.chart">pie</option>
|
||||
<option name="charting.chart.bubbleMaximumSize">50</option>
|
||||
<option name="charting.chart.bubbleMinimumSize">10</option>
|
||||
<option name="charting.chart.bubbleSizeBy">area</option>
|
||||
<option name="charting.chart.nullValueMode">gaps</option>
|
||||
<option name="charting.chart.showDataLabels">none</option>
|
||||
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
|
||||
<option name="charting.chart.stackMode">default</option>
|
||||
<option name="charting.chart.style">shiny</option>
|
||||
<option name="charting.drilldown">all</option>
|
||||
<option name="charting.layout.splitSeries">0</option>
|
||||
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
|
||||
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
|
||||
<option name="charting.legend.placement">right</option>
|
||||
</chart>
|
||||
</panel>
|
||||
<panel>
|
||||
<title>CPU Usage By Application (point in time across all indexers)</title>
|
||||
<chart>
|
||||
<title>CPU is approx CPU% at any point in time</title>
|
||||
<search>
|
||||
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* | eval app = 'data.search_props.app' | eval cpuperc = 'data.pct_cpu' | bin _time span=1m | stats sum(cpuperc) AS totalCPU, avg(cpuperc) AS avgCPU by data.pid, host, _time, app | stats sum(totalCPU) AS totalCPU, sum(avgCPU) AS avgCPU by app | addinfo | eval overThisManyMinutes = round((info_max_time-info_min_time)/60) | eval CPUPercUsed = round(avgCPU/overThisManyMinutes) | fields - totalCPU, info* overThisManyMinutes, avgCPU</query>
|
||||
<earliest>$CPUtimetoken.earliest$</earliest>
|
||||
<latest>$CPUtimetoken.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
|
||||
<option name="charting.axisTitleX.visibility">visible</option>
|
||||
<option name="charting.axisTitleY.visibility">visible</option>
|
||||
<option name="charting.axisTitleY2.visibility">visible</option>
|
||||
<option name="charting.axisX.scale">linear</option>
|
||||
<option name="charting.axisY.scale">linear</option>
|
||||
<option name="charting.axisY2.enabled">0</option>
|
||||
<option name="charting.axisY2.scale">inherit</option>
|
||||
<option name="charting.chart">pie</option>
|
||||
<option name="charting.chart.bubbleMaximumSize">50</option>
|
||||
<option name="charting.chart.bubbleMinimumSize">10</option>
|
||||
<option name="charting.chart.bubbleSizeBy">area</option>
|
||||
<option name="charting.chart.nullValueMode">gaps</option>
|
||||
<option name="charting.chart.showDataLabels">none</option>
|
||||
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
|
||||
<option name="charting.chart.stackMode">default</option>
|
||||
<option name="charting.chart.style">shiny</option>
|
||||
<option name="charting.drilldown">all</option>
|
||||
<option name="charting.layout.splitSeries">0</option>
|
||||
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
|
||||
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
|
||||
<option name="charting.legend.placement">right</option>
|
||||
</chart>
|
||||
</panel>
|
||||
<panel>
|
||||
<title>Searches Running Per Indexer</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* | chart count by host</query>
|
||||
<earliest>$CPUtimetoken.earliest$</earliest>
|
||||
<latest>$CPUtimetoken.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
|
||||
<option name="charting.axisTitleX.visibility">visible</option>
|
||||
<option name="charting.axisTitleY.visibility">visible</option>
|
||||
<option name="charting.axisTitleY2.visibility">visible</option>
|
||||
<option name="charting.axisX.scale">linear</option>
|
||||
<option name="charting.axisY.scale">linear</option>
|
||||
<option name="charting.axisY2.enabled">0</option>
|
||||
<option name="charting.axisY2.scale">inherit</option>
|
||||
<option name="charting.chart">pie</option>
|
||||
<option name="charting.chart.bubbleMaximumSize">50</option>
|
||||
<option name="charting.chart.bubbleMinimumSize">10</option>
|
||||
<option name="charting.chart.bubbleSizeBy">area</option>
|
||||
<option name="charting.chart.nullValueMode">gaps</option>
|
||||
<option name="charting.chart.showDataLabels">none</option>
|
||||
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
|
||||
<option name="charting.chart.stackMode">default</option>
|
||||
<option name="charting.chart.style">shiny</option>
|
||||
<option name="charting.drilldown">all</option>
|
||||
<option name="charting.layout.splitSeries">0</option>
|
||||
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
|
||||
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
|
||||
<option name="charting.legend.placement">right</option>
|
||||
</chart>
|
||||
</panel>
|
||||
<panel>
|
||||
<title>Search Related CPU By Indexer</title>
|
||||
<chart>
|
||||
<title>CPU is approx CPU% at any point in time</title>
|
||||
<search>
|
||||
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* | eval cpuperc = 'data.pct_cpu' | bin _time span=1m | stats sum(cpuperc) AS totalCPU, avg(cpuperc) AS avgCPU by data.pid, host, _time| stats sum(totalCPU) AS totalCPU, sum(avgCPU) AS avgCPUTotal by host | addinfo | eval overThisManyMinutes = round((info_max_time-info_min_time)/60) | eval CPUPercUsed = round(avgCPUTotal/overThisManyMinutes) | fields - info* overThisManyMinutes, totalCPU, avgCPUTotal</query>
|
||||
<earliest>$CPUtimetoken.earliest$</earliest>
|
||||
<latest>$CPUtimetoken.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
|
||||
<option name="charting.axisTitleX.visibility">visible</option>
|
||||
<option name="charting.axisTitleY.visibility">visible</option>
|
||||
<option name="charting.axisTitleY2.visibility">visible</option>
|
||||
<option name="charting.axisX.scale">linear</option>
|
||||
<option name="charting.axisY.scale">linear</option>
|
||||
<option name="charting.axisY2.enabled">0</option>
|
||||
<option name="charting.axisY2.scale">inherit</option>
|
||||
<option name="charting.chart">pie</option>
|
||||
<option name="charting.chart.bubbleMaximumSize">50</option>
|
||||
<option name="charting.chart.bubbleMinimumSize">10</option>
|
||||
<option name="charting.chart.bubbleSizeBy">area</option>
|
||||
<option name="charting.chart.nullValueMode">gaps</option>
|
||||
<option name="charting.chart.showDataLabels">none</option>
|
||||
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
|
||||
<option name="charting.chart.stackMode">default</option>
|
||||
<option name="charting.chart.style">shiny</option>
|
||||
<option name="charting.drilldown">all</option>
|
||||
<option name="charting.layout.splitSeries">0</option>
|
||||
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
|
||||
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
|
||||
<option name="charting.legend.placement">right</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>TotalCPU By Indexer And Application</title>
|
||||
<chart>
|
||||
<title>This is not % CPU, a rough guide only</title>
|
||||
<search>
|
||||
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* | eval app = 'data.search_props.app' | eval cpuperc = 'data.pct_cpu' | chart sum(cpuperc) AS totalCPU by host, app</query>
|
||||
<earliest>$time_tok.earliest$</earliest>
|
||||
<latest>$time_tok.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
|
||||
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
|
||||
<option name="charting.axisTitleX.visibility">visible</option>
|
||||
<option name="charting.axisTitleY.visibility">visible</option>
|
||||
<option name="charting.axisTitleY2.visibility">visible</option>
|
||||
<option name="charting.axisX.scale">linear</option>
|
||||
<option name="charting.axisY.scale">linear</option>
|
||||
<option name="charting.axisY2.enabled">0</option>
|
||||
<option name="charting.axisY2.scale">inherit</option>
|
||||
<option name="charting.chart">area</option>
|
||||
<option name="charting.chart.bubbleMaximumSize">50</option>
|
||||
<option name="charting.chart.bubbleMinimumSize">10</option>
|
||||
<option name="charting.chart.bubbleSizeBy">area</option>
|
||||
<option name="charting.chart.nullValueMode">gaps</option>
|
||||
<option name="charting.chart.showDataLabels">none</option>
|
||||
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
|
||||
<option name="charting.chart.stackMode">stacked</option>
|
||||
<option name="charting.chart.style">shiny</option>
|
||||
<option name="charting.drilldown">all</option>
|
||||
<option name="charting.layout.splitSeries">0</option>
|
||||
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
|
||||
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
|
||||
<option name="charting.legend.placement">right</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Search count by app, indexer</title>
|
||||
<chart>
|
||||
<search>
|
||||
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* | eval app = 'data.search_props.app'
|
||||
| chart count by app, host</query>
|
||||
<earliest>$time_tok.earliest$</earliest>
|
||||
<latest>$time_tok.latest$</latest>
|
||||
</search>
|
||||
<option name="charting.chart">line</option>
|
||||
</chart>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Usage by non-system users - per $interval$ block of time</title>
|
||||
<table>
|
||||
<title>CPU is total measured amount, memory is maximum memory usage by process, 100 is 1 CPU core</title>
|
||||
<search>
|
||||
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* "data.search_props.user"!=admin "data.search_props.user"!=splunk-system-user
|
||||
| eval mem_used = 'data.mem_used' | eval app = 'data.search_props.app' | eval elapsed = 'data.elapsed' | eval label = 'data.search_props.label'
|
||||
| eval type = 'data.search_props.type' | eval mode = 'data.search_props.mode' | eval user = 'data.search_props.user' | eval cpuperc = 'data.pct_cpu'
|
||||
| eval search_head = if(isnull('data.search_props.search_head'),"N/A",'data.search_props.search_head') | eval read_mb = 'data.read_mb'
|
||||
| eval provenance='data.search_props.provenance' | eval label=coalesce(label, provenance)
|
||||
| stats max(elapsed) as runtime max(mem_used) as mem_used earliest(_time) as searchStartTime, sum(cpuperc) AS totalCPU, avg(cpuperc) AS avgCPU, max(read_mb) AS read_mb by type, mode, app, user, label, host, search_head, data.pid
|
||||
| bin searchStartTime span=$interval$
|
||||
| stats sum(totalCPU) AS totalCPU, sum(mem_used) AS totalMemUsed, sum(runtime) AS totalRuntime, avg(runtime) AS avgRuntime, sum(avgCPU) AS avgCPUAcrossAllIndexers, sum(read_mb) AS totalReadMB by searchStartTime, type, mode, app, user
|
||||
| eval totalduration = tostring(totalRuntime, "duration"), averageduration = tostring(avgRuntime, "duration")
|
||||
| eval Started = strftime(searchStartTime,"%+")
|
||||
| eval avgCPUAcrossAllIndexers = round(avgCPUAcrossAllIndexers)
|
||||
| sort - totalCPU, totalMemUsed
|
||||
| eval totalCPU=tostring(totalCPU,"commas"), avgCPUAcrossAllIndexers=tostring(avgCPUAcrossAllIndexers,"commas")
|
||||
| fields Started, totalMemUsed, user, app, mode, type, averageduration, totalduration, totalCPU, avgCPUAcrossAllIndexers, totalReadMB</query>
|
||||
<earliest>$time_tok.earliest$</earliest>
|
||||
<latest>$time_tok.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="count">20</option>
|
||||
<option name="dataOverlayMode">none</option>
|
||||
<option name="drilldown">cell</option>
|
||||
<option name="percentagesRow">false</option>
|
||||
<option name="rowNumbers">false</option>
|
||||
<option name="totalsRow">false</option>
|
||||
<option name="wrap">true</option>
|
||||
<drilldown>
|
||||
<eval token="app">if($click.name2$="app", $click.value2$, "*")</eval>
|
||||
<eval token="user">if($click.name2$="user", $click.value2$, "")</eval>
|
||||
<link target="_blank">/app/SplunkAdmins/troubleshooting_indexer_cpu_drilldown?form.app=$app$&amp;form.user=$user$</link>
|
||||
</drilldown>
|
||||
</table>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>Usage by system users per $interval$ block of time</title>
|
||||
<table>
|
||||
<title>CPU is total measured amount, memory is maximum memory usage by process, 100 is 1 CPU core</title>
|
||||
<search>
|
||||
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* "data.search_props.user"=admin OR "data.search_props.user"=splunk-system-user
|
||||
| eval mem_used = 'data.mem_used' | eval app = 'data.search_props.app' | eval elapsed = 'data.elapsed' | eval label = 'data.search_props.label'
|
||||
| eval type = 'data.search_props.type' | eval mode = 'data.search_props.mode' | eval user = 'data.search_props.user' | eval cpuperc = 'data.pct_cpu'
|
||||
| eval search_head = if(isnull('data.search_props.search_head'),"N/A",'data.search_props.search_head') | eval read_mb = 'data.read_mb'
|
||||
| eval provenance='data.search_props.provenance' | eval label=coalesce(label, provenance)
|
||||
| stats max(elapsed) as runtime max(mem_used) as mem_used earliest(_time) as searchStartTime, sum(cpuperc) AS totalCPU, avg(cpuperc) AS avgCPU, max(read_mb) AS read_mb by type, mode, app, user, label, host, search_head, data.pid
|
||||
| bin searchStartTime span=$interval$
|
||||
| stats sum(totalCPU) AS totalCPU, sum(mem_used) AS totalMemUsed, sum(runtime) AS totalRuntime, avg(runtime) AS avgRuntime, sum(avgCPU) AS avgCPUAcrossAllIndexers, sum(read_mb) AS totalReadMB by searchStartTime, type, mode, app, user
|
||||
| eval totalduration = tostring(totalRuntime, "duration"), averageduration = tostring(avgRuntime, "duration")
|
||||
| eval Started = strftime(searchStartTime,"%+")
|
||||
| eval avgCPUAcrossAllIndexers = round(avgCPUAcrossAllIndexers)
|
||||
| sort - totalCPU, totalMemUsed
|
||||
| eval totalCPU=tostring(totalCPU,"commas"), avgCPUAcrossAllIndexers=tostring(avgCPUAcrossAllIndexers,"commas")
|
||||
| fields Started, totalMemUsed, user, app, mode, type, averageduration, totalduration, totalCPU, avgCPUAcrossAllIndexers, totalReadMB</query>
|
||||
<earliest>$time_tok.earliest$</earliest>
|
||||
<latest>$time_tok.latest$</latest>
|
||||
<sampleRatio>1</sampleRatio>
|
||||
</search>
|
||||
<option name="count">20</option>
|
||||
<option name="dataOverlayMode">none</option>
|
||||
<option name="drilldown">cell</option>
|
||||
<option name="percentagesRow">false</option>
|
||||
<option name="rowNumbers">false</option>
|
||||
<option name="totalsRow">false</option>
|
||||
<option name="wrap">true</option>
|
||||
<drilldown>
|
||||
<eval token="app">if($click.name2$="app", $click.value2$, "*")</eval>
|
||||
<eval token="user">if($click.name2$="user", $click.value2$, "")</eval>
|
||||
<link target="_blank">/app/SplunkAdmins/troubleshooting_indexer_cpu_drilldown?form.app=$app$&amp;form.user=$user$</link>
|
||||
</drilldown>
|
||||
</table>
|
||||
</panel>
|
||||
</row>
|
||||
<row>
|
||||
<panel>
|
||||
<title>CPU used per indexer per search label, CPU measured at point in time</title>
|
||||
<input type="dropdown" token="labelExclusion" searchWhenChanged="false">
|
||||
<label>Exclude</label>
|
||||
<choice value="_ACCELERATE*">_ACCELERATE*</choice>
|
||||
<choice value="__DONTEXCLUDE__">No Exclusion</choice>
|
||||
<default>__DONTEXCLUDE__</default>
|
||||
</input>
|
||||
<input type="dropdown" token="sort">
|
||||
<label>Sort By</label>
|
||||
<choice value="totalAVGCPU, totalMemUsed">avgCPU, memory</choice>
|
||||
<choice value="totalCPU, totalMemUsed">totalCPU, memory</choice>
|
||||
<choice value="totalRuntime, totalCPU">duration, totalCPU</choice>
|
||||
<choice value="totalRuntime, totalAVGCPU">duration, avgCPU</choice>
|
||||
<default>totalAVGCPU, totalMemUsed</default>
|
||||
<initialValue>totalAVGCPU, totalMemUsed</initialValue>
|
||||
</input>
|
||||
<table>
|
||||
<title>CPU is approx CPU% at any point in time, memory is maximum memory usage by process, 100 is 1 CPU core</title>
<search>
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* NOT ("data.search_props.label"=$labelExclusion$)
| eval mem_used = 'data.mem_used' | eval app = 'data.search_props.app' | eval elapsed = 'data.elapsed' | eval label = 'data.search_props.label'
| eval type = 'data.search_props.type' | eval mode = 'data.search_props.mode' | eval user = 'data.search_props.user' | eval cpuperc = 'data.pct_cpu'
| eval read_mb = 'data.read_mb'
| eval provenance='data.search_props.provenance'
| eval label=coalesce(label, provenance)
| eval search_head = if(isnull('data.search_props.search_head'),"N/A",'data.search_props.search_head')
| stats max(elapsed) as runtime max(mem_used) as mem_used earliest(_time) as Started, sum(cpuperc) AS totalCPU, max(read_mb) AS read_mb, avg(cpuperc) AS avgCPU by type, mode, app, user, label, host, data.pid
| stats sum(avgCPU) AS totalAVGCPU, sum(mem_used) AS totalMemUsed, sum(runtime) AS totalRuntime, sum(read_mb) AS totalReadMB, sum(totalCPU) AS totalCPU by Started, type, "mode", app, user, label, host
| eval totalMemUsed = round(totalMemUsed, 2)
| eval Started=strftime(Started,"%+")
| eval duration = tostring(totalRuntime, "duration")
| eval avgCPU = round(totalAVGCPU)
| sort - $sort$
| eval totalCPU=tostring(totalCPU,"commas"), avgCPU=tostring(avgCPU,"commas")
| fields - totalRuntime, totalAVGCPU</query>
<earliest>$time_tok.earliest$</earliest>
<latest>$time_tok.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">20</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">cell</option>
<option name="percentagesRow">false</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">true</option>
<drilldown>
<eval token="app">if($click.name2$="app", $click.value2$, "*")</eval>
<eval token="user">if($click.name2$="user", $click.value2$, "")</eval>
<link target="_blank">/app/SplunkAdmins/troubleshooting_indexer_cpu_drilldown?form.app=$app$&amp;form.user=$user$</link>
</drilldown>
</table>
</panel>
</row>
<row>
<panel>
<title>Most Expensive Non System Queries with CPU measured at point in time</title>
<input type="dropdown" token="sort2">
<label>Sort By</label>
<choice value="totalAVGCPU, totalMemUsed">avgCPU, memory</choice>
<choice value="totalCPU, totalMemUsed">totalCPU, memory</choice>
<choice value="totalRuntime, totalCPU">duration, totalCPU</choice>
<choice value="totalRuntime, totalAVGCPU">duration, avgCPU</choice>
<default>totalAVGCPU, totalMemUsed</default>
<initialValue>totalAVGCPU, totalMemUsed</initialValue>
</input>
<table>
<title>CPU is approx CPU% at any point in time, memory is maximum memory usage by process, 100 is 1 CPU core</title>
<search>
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* "data.search_props.user"!=admin "data.search_props.user"!=splunk-system-user
| eval mem_used = 'data.mem_used' | eval app = 'data.search_props.app' | eval elapsed = 'data.elapsed' | eval label = 'data.search_props.label'
| eval type = 'data.search_props.type' | eval mode = 'data.search_props.mode' | eval user = 'data.search_props.user' | eval cpuperc = 'data.pct_cpu'
| eval read_mb = 'data.read_mb'
| eval provenance='data.search_props.provenance'
| eval label=coalesce(label, provenance)
| eval search_head = if(isnull('data.search_props.search_head'),"N/A",'data.search_props.search_head')
| stats max(elapsed) as runtime max(mem_used) as mem_used earliest(_time) as Started, sum(cpuperc) AS totalCPU, max(read_mb) AS read_mb, avg(cpuperc) AS avgCPU by type, mode, app, user, label, host, data.pid
| stats sum(avgCPU) AS totalAVGCPU, sum(mem_used) AS totalMemUsed, sum(runtime) AS totalRuntime, sum(read_mb) AS totalReadMB, sum(totalCPU) AS totalCPU by Started, type, "mode", app, user, label, host
| eval totalMemUsed = round(totalMemUsed, 2)
| eval Started=strftime(Started,"%+")
| eval duration = tostring(totalRuntime, "duration")
| eval avgCPU = round(totalAVGCPU)
| sort - $sort2$
| eval totalCPU=tostring(totalCPU,"commas"), avgCPU=tostring(avgCPU,"commas")
| fields - totalRuntime, totalAVGCPU</query>
<earliest>$time_tok.earliest$</earliest>
<latest>$time_tok.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">20</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">cell</option>
<option name="percentagesRow">false</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">true</option>
<drilldown>
<eval token="app">if($click.name2$="app", $click.value2$, "*")</eval>
<eval token="user">if($click.name2$="user", $click.value2$, "")</eval>
<link target="_blank">/app/SplunkAdmins/troubleshooting_indexer_cpu_drilldown?form.app=$app$&amp;form.user=$user$</link>
</drilldown>
</table>
</panel>
</row>
<row>
<panel>
<title>CPU used on a per SID basis</title>
<input type="dropdown" token="sort3">
<label>Sort By</label>
<choice value="totalAVGCPUPerMinute, totalMemUsed">avgCPU, memory</choice>
<choice value="totalCPU, totalMemUsed">totalCPU, memory</choice>
<choice value="totalRuntime, totalCPU">duration, totalCPU</choice>
<choice value="totalRuntime, totalAVGCPUPerMinute">duration, avgCPU</choice>
<initialValue>totalRuntime, totalCPU</initialValue>
</input>
<input type="dropdown" token="labelExclusion2">
<label>Exclude</label>
<choice value="_ACCELERATE*">_ACCELERATE*</choice>
<choice value="__DONTEXCLUDE__">No Exclusion</choice>
<default>__DONTEXCLUDE__</default>
</input>
<table>
<search>
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* NOT ("data.search_props.label"=$labelExclusion2$)
| eval mem_used = 'data.mem_used' | eval app = 'data.search_props.app' | eval elapsed = 'data.elapsed' | eval label = 'data.search_props.label'
| eval type = 'data.search_props.type' | eval mode = 'data.search_props.mode' | eval user = 'data.search_props.user' | eval cpuperc = 'data.pct_cpu'
| eval read_mb = 'data.read_mb'
| eval sid='data.search_props.sid'
| eval provenance='data.search_props.provenance'
| eval label=coalesce(label, provenance)
| eval search_head = if(isnull('data.search_props.search_head'),"N/A",'data.search_props.search_head')
| stats max(elapsed) as runtime max(mem_used) as mem_used earliest(_time) as Started, sum(cpuperc) AS totalCPU, max(read_mb) AS read_mb, avg(cpuperc) AS avgCPUPerMinute by type, mode, app, user, label, host, data.pid, sid
| stats sum(avgCPUPerMinute) AS totalAVGCPUPerMinute, sum(mem_used) AS totalMemUsed, sum(runtime) AS totalRuntime, sum(read_mb) AS totalReadMB, sum(totalCPU) AS totalCPU by Started, type, "mode", app, user, label, host, sid, data.pid
| eval totalMemUsed = round(totalMemUsed, 2)
| eval Started=strftime(Started,"%+")
| eval duration = tostring(totalRuntime, "duration")
| eval avgCPU = round(totalAVGCPUPerMinute)
| sort - $sort3$
| eval totalCPU=tostring(totalCPU,"commas"), avgCPU=tostring(avgCPU,"commas")
| fields - totalRuntime, totalAVGCPUPerMinute, sid</query>
<earliest>$time_tok.earliest$</earliest>
<latest>$time_tok.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">20</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">cell</option>
<option name="percentagesRow">false</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">true</option>
<drilldown>
<eval token="app">if($click.name2$="app", $click.value2$, "*")</eval>
<eval token="user">if($click.name2$="user", $click.value2$, "")</eval>
<link target="_blank">/app/SplunkAdmins/troubleshooting_indexer_cpu_drilldown?form.app=$app$&amp;form.user=$user$</link>
</drilldown>
</table>
</panel>
</row>
</form>
@ -0,0 +1,118 @@
<form version="1.1">
<label>Troubleshooting Indexer CPU Drilldown</label>
<fieldset submitButton="false">
<input type="time" token="time_tok" searchWhenChanged="true">
<label>Time</label>
<default>
<earliest>-4h@m</earliest>
<latest>now</latest>
</default>
</input>
<input type="dropdown" token="sort" searchWhenChanged="true">
<label>Sort By</label>
<choice value="totalAVGCPU, totalMemUsed">avgCPU, memory</choice>
<choice value="totalCPU, totalMemUsed">totalCPU, memory</choice>
<choice value="totalRuntime, totalCPU">duration, totalCPU</choice>
<choice value="totalRuntime, totalAVGCPU">duration, avgCPU</choice>
<default>totalAVGCPU, totalMemUsed</default>
<initialValue>totalAVGCPU, totalMemUsed</initialValue>
</input>
<input type="text" token="user" searchWhenChanged="true">
<label>user</label>
<prefix>data.search_props.user=</prefix>
<change>
<condition value="">
<unset token="display_user_panel"></unset>
</condition>
<condition value="*">
<set token="display_user_panel">true</set>
<set token="uservalue">$value$</set>
</condition>
</change>
</input>
<input type="text" token="app" searchWhenChanged="true">
<label>application</label>
<default>*</default>
</input>
<input type="checkbox" token="per_pid_breakdown">
<label>Breakdown Per PID?</label>
<choice value="true">Yes</choice>
<delimiter> </delimiter>
</input>
</fieldset>
<row>
<panel depends="$per_pid_breakdown$">
<title>Usage Drilldown Per PID</title>
<table>
<search>
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* $user$ data.search_props.app=$app$
| eval mem_used = 'data.mem_used' | eval app = 'data.search_props.app' | eval elapsed = 'data.elapsed' | eval label = 'data.search_props.label'
| eval type = 'data.search_props.type' | eval mode = 'data.search_props.mode' | eval user = 'data.search_props.user' | eval cpuperc = 'data.pct_cpu'
| eval read_mb = 'data.read_mb'
| eval sid='data.search_props.sid'
| eval provenance='data.search_props.provenance' | eval label=coalesce(label, provenance)
| eval search_head = if(isnull('data.search_props.search_head'),"N/A",'data.search_props.search_head')
| stats max(elapsed) as runtime max(mem_used) as mem_used earliest(_time) as Started, sum(cpuperc) AS totalCPU, max(read_mb) AS read_mb, avg(cpuperc) AS avgCPUPerMinute by type, mode, app, user, label, host, data.pid, sid
| stats sum(avgCPUPerMinute) AS totalAVGCPUPerMinute, sum(mem_used) AS totalMemUsed, sum(runtime) AS totalRuntime, sum(read_mb) AS totalReadMB, sum(totalCPU) AS totalCPU by Started, type, "mode", app, user, label, host, sid, data.pid
| eval totalMemUsed = round(totalMemUsed, 2)
| eval Started=strftime(Started,"%+")
| eval duration = tostring(totalRuntime, "duration")
| eval avgCPU = round(totalAVGCPUPerMinute)
| eval totalCPU=tostring(totalCPU,"commas"), avgCPU=tostring(avgCPU,"commas")
| sort - totalRuntime, totalCPU
| fields - totalRuntime, totalAVGCPUPerMinute, sid</query>
<earliest>$time_tok.earliest$</earliest>
<latest>$time_tok.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">20</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">cell</option>
<option name="percentagesRow">false</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">true</option>
</table>
</panel>
</row>
<row>
<panel rejects="$per_pid_breakdown$">
<table>
<title>Usage Drilldown Per Search Label</title>
<search>
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* $user$ data.search_props.app=$app$
| eval mem_used = 'data.mem_used' | eval app = 'data.search_props.app' | eval elapsed = 'data.elapsed' | eval label = 'data.search_props.label'
| eval type = 'data.search_props.type' | eval mode = 'data.search_props.mode' | eval user = 'data.search_props.user' | eval cpuperc = 'data.pct_cpu'
| eval read_mb = 'data.read_mb'
| eval provenance='data.search_props.provenance' | eval label=coalesce(label, provenance)
| eval search_head = if(isnull('data.search_props.search_head'),"N/A",'data.search_props.search_head')
| bin _time span=1m
| stats max(elapsed) as runtime max(mem_used) as mem_used earliest(_time) as Started, sum(cpuperc) AS totalCPU, max(read_mb) AS read_mb, avg(cpuperc) AS avgCPU by type, mode, app, user, label, data.pid, host
| stats sum(avgCPU) AS totalAVGCPU, sum(mem_used) AS totalMemUsed, sum(runtime) AS totalRuntime, sum(read_mb) AS totalReadMB, sum(totalCPU) AS totalCPU by Started, type, "mode", app, user, label
| eval totalMemUsed = round(totalMemUsed, 2)
| eval Started=strftime(Started,"%+")
| eval duration = tostring(totalRuntime, "duration")
| eval avgCPU = round(totalAVGCPU)
| eval totalCPU=tostring(totalCPU,"commas"), avgCPU=tostring(avgCPU,"commas")
| sort - $sort$
| fields - totalRuntime, totalAVGCPU</query>
<earliest>$time_tok.earliest$</earliest>
<latest>$time_tok.latest$</latest>
</search>
<option name="count">20</option>
</table>
</panel>
</row>
<row>
<panel depends="$display_user_panel$">
<table>
<title>Recently Used URL By User</title>
<search>
<query>index=_internal sourcetype=splunkd_ui_access user=$uservalue$ `searchheadhosts` | top referer</query>
<earliest>$time_tok.earliest$</earliest>
<latest>$time_tok.latest$</latest>
</search>
</table>
</panel>
</row>
</form>
@ -0,0 +1,91 @@
<form version="1.1">
<label>Dashboard - Troubleshooting Resource Usage Per User</label>
<description>This dashboard attempts to assist with finding which queries are using excessive amounts of CPU, memory, disk IOPS at the indexing tier and the queries behind them</description>
<fieldset submitButton="true" autoRun="false">
<input type="time" token="time">
<label></label>
<default>
<earliest>-4h@m</earliest>
<latest>now</latest>
</default>
</input>
<input type="radio" token="exclusion">
<label>Exclude system users?</label>
<choice value="&quot;data.search_props.user&quot;!=admin &quot;data.search_props.user&quot;!=splunk-system-user">Yes</choice>
<choice value="&quot;&quot;">No</choice>
<default>"data.search_props.user"!=admin "data.search_props.user"!=splunk-system-user</default>
<initialValue>"data.search_props.user"!=admin "data.search_props.user"!=splunk-system-user</initialValue>
</input>
<input type="dropdown" token="sort">
<label>sort</label>
<choice value="totalCPU">totalCPU</choice>
<choice value="avgCPUPerIndexer">avgCPUPerIndexer</choice>
<choice value="totalduration">totalduration</choice>
<choice value="averageduration">averageduration</choice>
<choice value="totalMemUsed">totalMemUsed</choice>
<choice value="totalReadMB">totalReadMB</choice>
<choice value="count">count</choice>
<initialValue>totalCPU</initialValue>
</input>
<input type="text" token="timespan">
<label>timespan</label>
<default>60m</default>
</input>
<input type="text" token="filter">
<label>Free Text Filter</label>
<default></default>
<initialValue>""</initialValue>
</input>
</fieldset>
<row>
<panel>
<title>Resource Usage Per User</title>
<table>
<title>count is the number of searches triggered during that time period (dashboards may have multiple searches), introspection is measured in 10 second blocks (so sometimes no stats are available)</title>
<search>
<query>index=_introspection `indexerhosts` sourcetype=splunk_resource_usage data.search_props.sid::* $exclusion$
| eval mem_used = 'data.mem_used'
| eval app = 'data.search_props.app'
| eval elapsed = 'data.elapsed'
| eval label = 'data.search_props.label'
| eval type = 'data.search_props.type'
| eval mode = 'data.search_props.mode'
| eval user = 'data.search_props.user'
| eval cpuperc = 'data.pct_cpu'
| eval search_head = 'data.search_props.search_head'
| eval read_mb = 'data.read_mb'
| eval provenance='data.search_props.provenance'
| eval label=coalesce(label, provenance)
| eval sid='data.search_props.sid'
| search $filter$
| rex field=sid "^remote_[^_]+_(?P<sid>.*)"
| eval sid = "'" . sid . "'"
| fillnull search_head value="*"
| stats max(elapsed) as runtime max(mem_used) as mem_used earliest(_time) as searchStartTime, sum(cpuperc) AS totalCPU, avg(cpuperc) AS avgCPU, max(read_mb) AS read_mb, values(sid) AS sids by type, mode, app, user, label, host, search_head, data.pid
| bin searchStartTime span=$timespan$
| stats dc(sids) AS count, sum(totalCPU) AS totalCPU, sum(mem_used) AS totalMemUsed, max(runtime) AS maxRunTime, avg(runtime) AS avgRuntime, avg(avgCPU) AS avgCPUPerIndexer, sum(read_mb) AS totalReadMB, values(sids) AS sids by searchStartTime, type, mode, app, user, search_head, label
| eval maxduration = tostring(maxRunTime, "duration"), averageduration = tostring(avgRuntime, "duration")
| eval Started = strftime(searchStartTime,"%+")
| eval avgCPUPerIndexer = round(avgCPUPerIndexer)
| sort - $sort$
| eval totalCPU=tostring(totalCPU,"commas"), avgCPUPerIndexer=tostring(avgCPUPerIndexer,"commas"), totalReadMB=tostring(totalReadMB, "commas"), totalMemUsed=tostring(totalMemUsed, "commas")
| table Started, count, user, app, label, averageduration, maxduration, totalCPU, avgCPUPerIndexer, totalReadMB, totalMemUsed, search_head, sids, mode, type</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">100</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">cell</option>
<option name="percentagesRow">false</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">false</option>
<fields>["Started","count","user","app","label","averageduration","maxduration","totalCPU","avgCPUPerIndexer","totalReadMB","totalMemUsed","mode","type"]</fields>
<drilldown>
<link target="_blank">/app/SplunkAdmins/troubleshooting_resource_usage_per_user_drilldown?form.sid=$row.sids$&amp;form.host=$row.search_head$&amp;form.app=$row.app$&amp;form.label=$row.label$&amp;form.time.earliest=$time.earliest$&amp;form.time.latest=$time.latest$</link>
</drilldown>
</table>
</panel>
</row>
</form>
@ -0,0 +1,55 @@
<form version="1.1">
<label>Troubleshooting Resource Usage Per User Drilldown</label>
<description>Drilldown for Troubleshooting Resource Usage Per User (Splunk 6.6+ only due to the use of the IN keyword)</description>
<fieldset submitButton="true" autoRun="false">
<input type="text" token="sid">
<label>sid</label>
</input>
<input type="text" token="host">
<label>host</label>
</input>
<input type="text" token="app">
<label>app</label>
</input>
<input type="text" token="label">
<label>label</label>
</input>
<input type="time" token="time" searchWhenChanged="false">
<label>Time</label>
<default>
<earliest>-15m</earliest>
<latest>now</latest>
</default>
</input>
</fieldset>
<row>
<panel>
<title>Query information from audit logs</title>
<table>
<search>
<query>index=_audit host=$host$ "info=granted" OR "info=completed" OR "info=canceled" search_id IN ($sid$)
| rex ", search='(?P<search>[\S+\s+]+?)', "
| stats min(_time) AS time, max(_time) AS max_timestamp, values(user) AS user, values(total_run_time) AS total_run_time, values(result_count) AS result_count, values(search) AS search, values(host) AS host, values(search_et) AS startTime, values(search_lt) AS endTime, values(info) AS info, values(savedsearch_name) AS savedsearch_name by search_id
| eval app="$app$", label="$label$"
| eval endTime=if((info=="completed" OR info=="canceled") AND endTime=="N/A",max_timestamp,endTime)
| eval period=tostring(round(endTime-startTime), "duration")
| eval startTime=strftime(startTime, "%Y-%m-%d %H:%M:%S"), endTime=strftime(endTime, "%Y-%m-%d %H:%M:%S")
| fillnull value="All Time" startTime endTime period
| table time, app, user, total_run_time, result_count, period, search, label, host, startTime, endTime, info, savedsearch_name, search_id
| sort - time</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">100</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">false</option>
<option name="totalsRow">false</option>
<option name="wrap">false</option>
</table>
</panel>
</row>
</form>
@ -0,0 +1,926 @@
##############
#
# Customise these macros to ensure the SplunkAdmins / Alerts for Splunk Admins
# application works as expected
#
##############
[indexerhosts]
definition = host=*
iseval = 0

[heavyforwarderhosts]
definition = host=*
iseval = 0

[searchheadhosts]
definition = host=*
iseval = 0

#Designed for searches where returning data from other search heads
#would not provide valid results...
[localsearchheadhosts]
definition = host=*
iseval = 0

[splunkenterprisehosts]
definition = host=*
iseval = 0

[deploymentserverhosts]
definition = host=*
iseval = 0

[licensemasterhost]
definition = host=*
iseval = 0

[cluster_masters]
definition = host=*
iseval = 0

[sysloghosts]
definition = host=*
iseval = 0

[searchheadsplunkservers]
definition = splunk_server=*
iseval = 0

[splunkindexerhostsvalue]
definition = splunk_server=*
iseval = 0

[splunkadmins_splunkd_source]
definition = source=*splunkd.log*
iseval = 0

[splunkadmins_splunkuf_source]
definition = source=*splunkd.log*
iseval = 0

[splunkadmins_mongo_source]
definition = source=*mongod.log*
iseval = 0

[splunkadmins_license_usage_source]
definition = source=*license_usage.log*
iseval = 0

[splunkadmins_clustermaster_oshost]
definition = host=changeme
iseval = 0

#Only used in a few searches, customise this if you have the cluster master as a search
#peer, if not you may wish to leave this to local and run the ClusterMasterLevel searches on
#the cluster master server...
[splunkadmins_clustermaster_host]
definition = splunk_server=local

##############
#
# Utility functions
#
##############
[comment(1)]
args = text
definition = ""
iseval = 0

#
#Dynamically generate a Splunk SPL statement to filter out a list of hosts / time periods where the particular hosts
#were restarting, for example if search heads were restarting we probably don't care about delayed scheduled searches at this point in time
#Allowing a macro name to be passed in allows this function to be used for search heads or indexers or anything else
#Furthermore allowing contingency time allows some time for the server to recover from the restart if required...
#This macro returns in the form of ((host=X _time>start _time<end) OR (host=Y _time>start _time<end))
#
#The query is checking various shutdown messages as different types of server have different messages signalling the start of the shutdown process
#simplifying this in the past has resulted in missing at least 1 type of shutdown or the start of the shutdown process...
[splunkadmins_shutdown_list(3)]
args = macroName, minTimeContingency, maxTimeContingency
definition = search ```Send an exclusion list in terms of a search result for when this particular Splunk server was shutdown, plus any contingency time as requested```\
index=_internal (`$macroName$`) sourcetype=splunkd `splunkadmins_splunkd_source` (CASE("Shutting down")) OR "Shutdown complete in" OR "Received shutdown signal." OR "master has instructed peer to restart" OR "Performing early shutdown tasks"\
| eval message=coalesce(message,event_message)\
| stats min(_time) AS logTime by message, host\
| stats min(logTime) AS minTime, max(logTime) AS maxTime by host\
| eval minTime=minTime - $minTimeContingency$, maxTime=maxTime + $maxTimeContingency$\
| eval search="host=" . host . " _time>" . minTime . " _time<" .maxTime\
| fields search\
| format\
| rex mode=sed field=search "s/\"//g"
iseval = 0
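#Hypothetical usage sketch (comment only, not part of the shipped app): the macro
#is designed to be expanded as a subsearch, so the host/time filter it generates
#excludes restart windows from an alert, e.g. with 120 seconds of contingency
#either side of each restart:
#  index=_internal sourcetype=scheduler status=delayed
#      NOT [`splunkadmins_shutdown_list(searchheadhosts, 120, 120)`]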

#
#Dynamically generate a Splunk SPL statement to filter out a list of keywords (hostnames without the host=) / time periods where the particular hosts
#were restarting, for example if search heads were restarting we probably don't care about delayed scheduled searches at this point in time
#Allowing a macro name to be passed in allows this function to be used for search heads or indexers or anything else
#Furthermore allowing contingency time allows some time for the server to recover from the restart if required...
#This macro returns in the form of ((X _time>start _time<end) OR (Y _time>start _time<end))
#
#The query is checking various shutdown messages as different types of server have different messages signalling the start of the shutdown process
#simplifying this in the past has resulted in missing at least 1 type of shutdown or the start of the shutdown process...
[splunkadmins_shutdown_keyword(3)]
args = macroName, minTimeContingency, maxTimeContingency
definition = search ```Send an exclusion list in terms of a search result for when this particular Splunk server was shutdown, plus any contingency time as requested```\
index=_internal (`$macroName$`) sourcetype=splunkd `splunkadmins_splunkd_source` (CASE("Shutting down")) OR "Shutdown complete in" OR "Received shutdown signal." OR "master has instructed peer to restart" OR "Performing early shutdown tasks"\
| eval message=coalesce(message,event_message)\
| stats min(_time) AS logTime by message, host\
| stats min(logTime) AS minTime, max(logTime) AS maxTime by host\
| eval minTime=minTime - $minTimeContingency$, maxTime=maxTime + $maxTimeContingency$\
| eval search=host . " _time>" . minTime . " _time<" .maxTime\
| fields search\
| format\
| rex mode=sed field=search "s/\"//g"
iseval = 0

#
#Dynamically generate a Splunk SPL statement to filter out a list of hosts / time periods where the particular hosts
#were restarting, for example if search heads were restarting we probably don't care about delayed scheduled searches at this point in time
#Allowing a macro name to be passed in allows this function to be used for search heads or indexers or anything else
#Furthermore allowing contingency time allows some time for the server to recover from the restart if required...
#This macro returns in the form of (_time>start _time<end); this allows entire indexer cluster restarts to be filtered out.
#
#The query is checking various shutdown messages as different types of server have different messages signalling the start of the shutdown process
#simplifying this in the past has resulted in missing at least 1 type of shutdown or the start of the shutdown process...
[splunkadmins_shutdown_time(3)]
args = macroName, minTimeContingency, maxTimeContingency
definition = search ```Send an exclusion list in terms of a search result for the time when any indexer was shutdown```\
index=_internal (`$macroName$`) sourcetype=splunkd `splunkadmins_splunkd_source` (CASE("Shutting down")) OR "Shutdown complete in" OR "Received shutdown signal." OR "master has instructed peer to restart" OR "Performing early shutdown tasks"\
| eval message=coalesce(message,event_message)\
| stats min(_time) AS logTime by message, host\
| stats min(logTime) AS minTime, max(logTime) AS maxTime\
| eval minTime=minTime - $minTimeContingency$, maxTime=maxTime + $maxTimeContingency$\
| eval search=" _time>" . minTime . " _time<" .maxTime\
| fields search\
| format\
| rex mode=sed field=search "s/\"//g"
iseval = 0

# variation of the above to utilise smaller blocks of time during the search
[splunkadmins_shutdown_time_by_period(4)]
args = macroName, minTimeContingency, maxTimeContingency, period
definition = search ```Send an exclusion list in terms of a search result for the time when any indexer was shutdown```\
index=_internal (`$macroName$`) sourcetype=splunkd `splunkadmins_splunkd_source` (CASE("Shutting down")) OR "Shutdown complete in" OR "Received shutdown signal." OR "master has instructed peer to restart" OR "Performing early shutdown tasks"\
| eval message=coalesce(message,event_message)\
| bin _time span=$period$\
| stats min(_time) AS logTime by message, host, _time\
| stats min(logTime) AS minTime, max(logTime) AS maxTime by _time\
| eval minTime=minTime - $minTimeContingency$, maxTime=maxTime + $maxTimeContingency$\
| eval search=" _time>" . minTime . " _time<" .maxTime\
| fields search\
| format\
| rex mode=sed field=search "s/\"//g"
iseval = 0

##############
|
||||
#
|
||||
# Per-alert macros that can be customised for
|
||||
# filtering unncessary data from alerts where required
|
||||
#
|
||||
##############
|
||||
|
||||
[splunkadmins_acc_datamodels]
|
||||
definition = ""
|
||||
iseval = 0
|
||||
|
||||
[splunkadmins_runscript]
|
||||
definition = ""
|
||||
iseval = 0
|
||||
|
||||
[splunkadmins_timeskew]
|
||||
definition = ""
|
||||
iseval = 0
|
||||
|
||||
[splunkadmins_changedprops]
|
||||
definition = ""
|
||||
iseval = 0
|
||||
|
||||
[splunkadmins_changedprops_count]
|
||||
definition = 3
|
||||
iseval = 0
|
||||
|
||||
[splunkadmins_btoolvalidation_ds]
|
||||
definition = ```Splunk for stream doesn't include a config file which causes errors, however it appears to work without it...``` NOT "/opt/splunk/etc/deployment-apps/Splunk_TA_stream*"
|
||||
iseval = 0
|
||||
|
||||
[splunkadmins_bandwidth]
|
||||
definition = ""
|
||||
iseval = 0
|
||||
|
||||
[splunkadmins_toosmall_checkcrc]
|
||||
definition = ""
|
||||
iseval = 0
|
||||
|
||||
[splunkadmins_forwarderdown]
|
||||
definition = ""
|
||||
iseval = 0
|
||||
|
||||
[splunkadmins_heavylogging]
definition = ""
iseval = 0

[splunkadmins_exceeding_filedescriptor]
definition = ""
iseval = 0

[splunkadmins_sending_data]
definition = ""
iseval = 0

[splunkadmins_sending_data_nonhf_count]
definition = 0
iseval = 0

[splunkadmins_sending_data_hf_count]
definition = 5
iseval = 0

[splunkadmins_unusual_duplication]
definition = ""
iseval = 0

[splunkadmins_unusual_duplication_count]
definition = 10
iseval = 0

[splunkadmins_crcsalt_initcrc]
definition = ""
iseval = 0

[splunkadmins_uf_timeshifting]
definition = ""
iseval = 0

[splunkadmins_future_dated]
definition = ""
iseval = 0

[splunkadmins_failuretoparse_timestamp]
definition = ""
iseval = 0

[splunkadmins_failuretoparse_timestamp_count]
definition = 0
iseval = 0

[splunkadmins_failuretoparse_timestamp_binperiod]
definition = 1m
iseval = 0

[splunkadmins_failuretoparse_timestamp2]
definition = ""
iseval = 0

[splunkadmins_indexconfig_warn]
definition = ""
iseval = 0

[splunkadmins_indexerqueue_fillperc_nonindexqueue]
definition = 50
iseval = 0

[splunkadmins_indexerqueue_fillperc_indexqueue]
definition = 90
iseval = 0

[splunkadmins_indexer_replication_queue_count]
definition = 15
iseval = 0

[splunkadmins_uneven_indexed_perc]
definition = 25
iseval = 0

[splunkadmins_weekly_brokenevents]
definition = ""
iseval = 0

[splunkadmins_weekly_truncated]
definition = ""
iseval = 0

[splunkadmins_weekly_truncated_count]
definition = 0
iseval = 0


[splunkadmins_valid_timestamp_invalidparsed]
definition = ""
iseval = 0

[splunkadmins_longrunning_searches]
definition = ```Exclude various standard/expected searches``` savedsearch_name!="Generate Meta Woot! every 15 mins" savedsearch_name!="Generate NMON*"
iseval = 0

[splunkadmins_realtime_scheduledsearches]
definition = ""
iseval = 0

[splunkadmins_scheduledsearches_cannot_run]
definition = ""
iseval = 0

[splunkadmins_scheduledsearches_without_earliestlatest]
definition = NOT (eai:acl.app=splunk_app_aws author=nobody)
iseval = 0

#Ignore Splunk apps which will trigger this
[splunkadmins_scheduledsearches_without_index]
definition = eai:acl.app!="splunk_archiver" eai:acl.app!="splunk_app_windows_infrastructure" eai:acl.app!="splunk_app_aws" eai:acl.app!="nmon"
iseval = 0

[splunkadmins_scriptfailures]
definition = ""
iseval = 0

[splunkadmins_users_violating_searchquota]
definition = ""
iseval = 0

[splunkadmins_users_exceeding_diskquota]
definition = ""
iseval = 0

[splunkadmins_execprocessor]
definition = ""
iseval = 0

[splunkadmins_timeformat_change]
definition = ""
iseval = 0

[splunkadmins_loginattempts]
definition = ""
iseval = 0

[splunkadmins_insufficient_permissions]
definition = ""
iseval = 0

[splunkadmins_tcpoutput_paused]
definition = ""
iseval = 0

[splunkadmins_streamerrors]
definition = ""
iseval = 0

[splunkadmins_unable_distribute_to_peer]
definition = ""
iseval = 0

[splunkadmins_dashboards_allindexes]
definition = NOT (eai:appName=simple_xml_examples eai:acl.sharing=app) NOT (eai:appName=nmon eai:acl.sharing=app)
iseval = 0

[splunkadmins_scheduled_incorrectsharing]
definition = ""
iseval = 0

[splunkadmins_realtime_dashboard]
definition = NOT (eai:appName=simple_xml_examples eai:acl.sharing=app) NOT (eai:appName=nmon eai:acl.sharing=app)
iseval = 0

[splunkadmins_olddata]
definition = ""
iseval = 0

[splunkadmins_olddata_lookback]
definition = -7d

[splunkadmins_olddata_earliest]
definition = -2600d

[splunkadmins_olddata_latest]
definition = -60d

[splunkadmins_forwarders_nottalking_ds]
definition = ""
iseval = 0

#Ignore enterprise security related sendalert errors, they are often false alarms here, also filter the data a bit further...
[splunkadmins_sendmodalert_errors]
definition = ```We look for the sendalert commands to provide context around the errors where possible. Since notable/risks fail more often they are removed from this particular alert``` NOT action=notable NOT action=risk NOT (" - INFO]" OR "Results Link" OR "Alert Name:")
iseval = 0

[splunkadmins_bucketrolling_count]
definition = 20
iseval = 0

[splunkadmins_readop_expectingack]
definition = ""
iseval = 0

[splunkadmins_repfailures]
definition = ""
iseval = 0

[splunkadmins_lowdisk]
definition = ""
iseval = 0

[splunkadmins_lowdisk_perc]
definition = 10
iseval = 0

[splunkadmins_lowdisk_mb]
definition = 90000
iseval = 0

[splunkadmins_kvstore_terminated]
definition = ""
iseval = 0

[splunkadmins_fileintegritycheck]
definition = ""
iseval = 0

[splunkadmins_multiline_linemerge]
definition = ""
iseval = 0

[splunkadmins_warninifile]
definition = ""
iseval = 0

[splunkadmins_toomany_sametimestamp]
definition = ""
iseval = 0

[splunkadmins_colddata_percused]
definition = 80
iseval = 0

#Ignore internal indexes introspection & main
[splunkadmins_colddata]
definition = ```Some internal indexes roll based on size by default such as introspection``` index!=_introspection index!=defaultdb
iseval = 0

#Ignore internal indexes introspection & main
[splunkadmins_bucketfrozen]
definition = ```Some internal indexes roll based on size by default such as introspection``` bkt!="*_introspection*" bkt!="*defaultdb*"
iseval = 0

[splunkadmins_permissions]
definition = ""
iseval = 0

#Ignore internal indexes introspection & main
[splunkadmins_warmdbcount]
definition = ```We probably don't care about the warm limits for the internal indexes...``` index!=_introspection index!=defaultdb
iseval = 0

[splunkadmins_warmdbcount_perc]
definition = 80
iseval = 0

[splunkadmins_clustermaster_failurecount]
definition = 1
iseval = 0

#My environment appears to have random SSL interconnectivity issues with mongo which are harmless/never cause an issue
[splunkadmins_mongodb_errors]
definition = NOT "SSL: error"
iseval = 0

[splunkadmins_mongodb_errors2]
definition = ""
iseval = 0

#Many of these applications contain macros which have embedded macros, attempting to expand them proved to be ... complicated so ignoring them!
[splunkadmins_scheduledsearches_without_index_macro]
definition = NOT ((eai:acl.app="splunk_app_windows_infrastructure" OR eai:acl.app="splunk_app_aws" OR eai:acl.app="splunk_app_for_nix" OR eai:acl.app="app-docker" OR eai:acl.app="nmon") AND (eai:acl.sharing=app OR eai:acl.sharing=global))
iseval = 0

[splunkadmins_privilegedowners]
definition = ""
iseval = 0

#Not sure why but this "Success" message appears in my instance...
[splunkadmins_searchfailures]
definition = message!="Success"
iseval = 0

[splunkadmins_captain_switchover]
definition = ""
iseval = 0

[splunkadmins_resource_starvation]
definition = ""
iseval = 0

[splunkadmins_s2sfilereceiver]
definition = ""
iseval = 0

#
#Dynamically generate a Splunk SPL statement to filter out a list of hosts / time periods where the particular hosts
#were having a transfer of captain
#Allowing a macro name to be passed in allows this function to be used for different search head clusters
#This macro returns results in the form of (_time>start _time<end), which allows the entire captaincy transfer period to be filtered out.
[splunkadmins_transfer_captain_times(3)]
args = macroName, minTimeContingency, maxTimeContingency
definition = search ```Send an exclusion list in terms of a search result for the time when a search head captain transfer occurred``` index=_internal (`$macroName$`) sourcetype=splunkd `splunkadmins_splunkd_source` "Got Transfer captaincy" | eval message=coalesce(message,event_message) | stats min(_time) AS logTime by message, host | stats min(logTime) AS minTime, max(logTime) AS maxTime | eval minTime=minTime - $minTimeContingency$, maxTime=maxTime + $maxTimeContingency$ | eval search=" _time>" . minTime . " _time<" .maxTime | fields search | format | rex mode=sed field=search "s/\"//g"
iseval = 0

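#Example (hypothetical) usage of the transfer-captain macro above: because it emits an SPL time-filter
#fragment via | format, it is typically applied as an excluding subsearch. The macro name passed in
#(`splunkadmins_shc_hosts` here) is an assumed host-list macro, not part of this app:
#... NOT [`splunkadmins_transfer_captain_times(splunkadmins_shc_hosts,60,60)`]
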
[splunkadmins_replicationfactor]
definition = 2
iseval = 0

[whataccessdoihave]
definition = rest /services/authentication/users splunk_server=local\
| search ```REST query is limited to the current search head this is running on so we see the index access from this instance's point of view```\
[| rest /services/authentication/current-context/context splunk_server=local\
| head 1 \
| fields username \
| rename username AS title] \
| table title roles | rename title as user | mvexpand roles\
| join type=left roles \
[rest /services/authorization/roles splunk_server=local\
| table title srchIndexesAllowed srchIndexesDefault srchIndexesDisallowed imported_srchIndexesAllowed imported_srchIndexesDefault imported_srchIndexesDisallowed | rename title as roles]\
| fillnull value="" srchIndexesAllowed, srchIndexesDefault, srchIndexesDisallowed, imported_srchIndexesAllowed, imported_srchIndexesDefault imported_srchIndexesDisallowed\
| eval srchIndexesAllowed = srchIndexesAllowed . " " . imported_srchIndexesAllowed, srchIndexesDefault = srchIndexesDefault . " " . imported_srchIndexesDefault, srchIndexesDisallowed = srchIndexesDisallowed . " " . imported_srchIndexesDisallowed \
| makemv srchIndexesAllowed tokenizer=(\S+) | makemv srchIndexesDefault tokenizer=(\S+) | makemv srchIndexesDisallowed tokenizer=(\S+) \
| eval indexes= [ | eventcount summarize=false index=* index=_* | stats values(index) AS indexes | eval theindexes="\"" . mvjoin(indexes, " ") . "\"" | return $theindexes ]\
| makemv indexes\
| stats values(roles) AS roles, values(indexes) AS indexes, values(srchIndexesAllowed) AS srchIndexesAllowed, values(srchIndexesDefault) AS srchIndexesDefault, values(srchIndexesDisallowed) AS srchIndexesDisallowed by user

[diskusage]
definition = rest /services/authentication/current-context/context splunk_server=local \
| head 1 \
| fields username \
| map \
[| rest /services/search/jobs splunk_server=local search="eai:acl.owner=$username$" ] \
| eval run_time=tostring(round(runDuration),"duration"), time_to_live_remaining=tostring(ttl,"duration"), disk_usage=round(diskUsage/1024/1024) \
| eventstats sum(disk_usage) AS total_disk_usage \
| eval disk_usage=disk_usage . "MB", total_disk_usage=total_disk_usage . "MB" \
| stats list(disk_usage) AS disk_usage, list(eai:acl.app) AS apps, list(provenance) AS provenance, list(resultCount) AS result_count, list(run_time) AS run_time, list(time_to_live_remaining) AS time_to_live_remaining, list(updated) AS updated, list(title) AS title, values(total_disk_usage) AS total_disk_usage by dispatchState \
| table total_disk_usage, disk_usage, apps, provenance, time_to_live_remaining, run_time, dispatchState, result_count, updated, title \
| eval total_disk_usage=if(dispatchState!="DONE",null(),total_disk_usage)
iseval = 0

[splunkadmins_restmacro]
definition = splunk_server=local
iseval = 0

#Number of metrics.log entries printed per minute; the metrics interval defaults to 30 seconds but can be changed by the user...
#if you log metrics every minute change this to 1
[splunkadmins_metrics_permin]
definition = 2
iseval = 0

#Substitute `<macro name>` within the audit.log files with the audit definition based on a lookup file
#note this version only substitutes the first macro seen...the Splunk 8 version can handle multiple macros at once
[splunkadmins_audit_logs_macro_sub]
definition = ```Set all values to null() in case this macro is called again within the same search. Substitute a macro used inside a search with the definition found in the lookup file```\
| eval definition=null(), commas=null(), commas2=null(), argCount2=null(), argCount=null(), match=null()\
| rex field=search max_match=1 "\`(?!\")(?!')(?P<macro>[^\`]+)\`" \
```You can have multiple macro definitions with either 0 or more arguments so we have to count them...``` \
| rex max_match=10 field=macro "([^\"]+\")|([^']+')\s*(?P<commas>,)" \
| rex max_match=10 field=macro "(?P<commas2>,)" \
| rex max_match=1 field=macro "(?P<match>[^\(]+\()" \
```Two count methods are used as if we have macro(arg1) that has no commas, but macro(arg1,arg2) will work as expected...``` \
| eval argCount2=if(match(macro,"([^\"]+\")|([^']+')") AND isnull(commas),-1,if(isnotnull(commas2),mvcount(commas2),null())) \
| eval argCount=if(isnull(argCount2),0,argCount2+1) \
| eval argCount=if(argCount==0,if(isnotnull(match),1,0),argCount) \
| rex field=macro "(?P<macro>^[^\( ]+)" \
| eval macroName=if(argCount==0,macro,macro . "(" . argCount . ")") \
| lookup splunkadmins_macros title AS macroName, app AS app_name, splunk_server \
| eval app_name2="global"\
| lookup splunkadmins_macros title AS macroName, app AS app_name2, splunk_server OUTPUTNEW definition\
| lookup splunkadmins_macros title AS macroName, splunk_server OUTPUTNEW definition\
| eval macroReplace=if((argCount == 0),(("`" . macro) . "`"),(("`" . macro) . "\\(.*?\\)`")), search=if(isnotnull(definition),replace(search,macroReplace,mvindex(definition,0)),search)
iseval = 0

#Substitute `<macro name>` within the audit.log files with the audit definition based on a lookup file
#note this version only works on Splunk 8 due to the use of mvmap
[splunkadmins_audit_logs_macro_sub_v8]
definition = ```Set all values to null() in case this macro is called again within the same search. Substitute a macro used inside a search with the definition found in the lookup file``` \
eval definition=null(), definition2=null(), definition3=null(), commas=null(), commas2=null(), argCount2=null(), argCount=null(), match=null() \
| rex field=search "\\`(?!\")(?!')(?P<macro>[^\\`]+)\\`" max_match=20 \
```remove any commas inside double quotes or single quotes inside a macro, they are probably not arguments to the macro itself``` \
| eval remove_commas_inside_macros=mvmap(macro,replace(macro,"(\"[^\"]+\"|'[^']+')","")) \
```Originally a regex, the replace+len works in mvmap and determines number of commas so we can find a macro name``` \
| eval commas2=mvmap(remove_commas_inside_macros,if(match(remove_commas_inside_macros,"^[^\(]+$"),"-1",len(replace(remove_commas_inside_macros,"[^,]+",""))+1)) \
| rex field=macro "(?P<macro_name>^[^\( ]+)" max_match=20 \
| eval macro_commas=mvzip(macro_name,commas2,"!!!!!!!") \
```A macro with zero arguments is -1 from the previous mvmap, if it has non-zero arguments the definition changes to macro(number)...``` \
| eval macroName=mvmap(macro_commas,if(mvindex(split(macro_commas,"!!!!!!!"),1)=="-1",mvindex(split(macro_commas,"!!!!!!!"),0),mvindex(split(macro_commas,"!!!!!!!"),0) . "(" . mvindex(split(macro_commas,"!!!!!!!"),1) . ")")) \
| lookup splunkadmins_macros title AS macroName, app AS app_name, splunk_server \
| eval app_name2="global" \
```The original version just did an OUTPUTNEW definition, however this has the limitation that if 1 of the 5 macros found resolves, output stops. And this can result in missing macros. So this version over-matches but that appears to be the tradeoff...without making this even more complicated``` \
| lookup splunkadmins_macros title AS macroName, app AS app_name2, splunk_server OUTPUT definition AS definition2 \
| lookup splunkadmins_macros title AS macroName, splunk_server OUTPUT definition AS definition3 \
| eval definition=mvdedup(mvappend(definition,definition2,definition3)) \
| fillnull definition value="macronotfound" \
| nomv definition \
| eval definition=" " . definition . " " \
```While an mvmap could replace per-macro that results in a multivalue output. Also replace doesn't handle a multivalued replacement argument so just replace the first macro if it exists with the definitions of all the macros, close enough for what we want``` \
| eval search=if(isnotnull(macro_name),replace(search,mvindex(macro_name,0),definition),search)
iseval = 0

#Substitute `<macro name>` within any field
[splunkadmins_macro_sub(1)]
args = fieldname
definition = ```Set all values to null() in case this macro is called again within the same search. Substitute a macro used inside a search with the definition found in the lookup file``` \
eval definition=null(), definition2=null(), definition3=null(), commas=null(), commas2=null(), argCount2=null(), argCount=null(), match=null() \
| rex field=$fieldname$ "\\`(?!\")(?!')(?P<macro>[^\\`]+)\\`" max_match=20 \
```remove any commas inside double quotes or single quotes inside a macro, they are probably not arguments to the macro itself``` \
| eval remove_commas_inside_macros=mvmap(macro,replace(macro,"(\"[^\"]+\"|'[^']+')","")) \
```Originally a regex, the replace+len works in mvmap and determines number of commas so we can find a macro name``` \
| eval commas2=mvmap(remove_commas_inside_macros,if(match(remove_commas_inside_macros,"^[^\(]+$"),"-1",len(replace(remove_commas_inside_macros,"[^,]+",""))+1)) \
| rex field=macro "(?P<macro_name>^[^\( ]+)" max_match=20 \
| eval macro_commas=mvzip(macro_name,commas2,"!!!!!!!") \
```A macro with zero arguments is -1 from the previous mvmap, if it has non-zero arguments the definition changes to macro(number)...``` \
| eval macroName=mvmap(macro_commas,if(mvindex(split(macro_commas,"!!!!!!!"),1)=="-1",mvindex(split(macro_commas,"!!!!!!!"),0),mvindex(split(macro_commas,"!!!!!!!"),0) . "(" . mvindex(split(macro_commas,"!!!!!!!"),1) . ")")) \
| lookup splunkadmins_macros title AS macroName, app AS app_name, splunk_server \
| eval app_name2="global" \
```The original version just did an OUTPUTNEW definition, however this has the limitation that if 1 of the 5 macros found resolves, output stops. And this can result in missing macros. So this version over-matches but that appears to be the tradeoff...without making this even more complicated``` \
| lookup splunkadmins_macros title AS macroName, app AS app_name2, splunk_server OUTPUT definition AS definition2 \
| lookup splunkadmins_macros title AS macroName, splunk_server OUTPUT definition AS definition3 \
| eval definition=mvdedup(mvappend(definition,definition2,definition3)) \
| fillnull definition value="macronotfound" \
| nomv definition \
| eval definition=" " . definition . " " \
```While an mvmap could replace per-macro that results in a multivalue output. Also replace doesn't handle a multivalued replacement argument so just replace the first macro if it exists with the definitions of all the macros, close enough for what we want``` \
| eval search=if(isnotnull(macro_name),replace($fieldname$,mvindex(macro_name,0),definition),$fieldname$)
iseval = 0

#Note this macro requires TA-webtools
#Alternatively the "Mothership app" on SplunkBase can be used for this purpose...
[splunkadmins_remote_macros(3)]
args = url,user,pass
definition = | curl method=get uri="$url$/servicesNS/-/-/configs/conf-macros?count=-1&output_mode=json" user=$user$ pass=$pass$\
| spath input=curl_message path="entry{}.name" output=title\
| spath input=curl_message path="entry{}.acl.app" output=app\
| spath input=curl_message path="entry{}.content.definition" output=definition\
| spath input=curl_message path="entry{}.acl.sharing" output=sharing\
| fields - curl_* \
| fields title, app, definition, sharing \
| eval data=mvzip(mvzip(mvzip(title, 'app', "%%%%"),definition,"%%%%"),sharing,"%%%%")\
| fields data \
| mvexpand data \
| makemv data delim="%%%%" \
| eval title=mvindex(data,0),app=mvindex(data,1), definition=mvindex(data,2), sharing=mvindex(data,3)\
| search sharing!=user\
| fields - data
iseval = 0

#Not currently in use by searches but attempts to pull the roles from a remote Splunk server
#Alternatively the "Mothership app" on SplunkBase can be used for this purpose...
[splunkadmins_remote_roles(3)]
args = url,user,pass
definition = | curl method=get uri="$url$/services/authentication/users?output_mode=json&count=0&f=roles" user="$user$" pass="$pass$"\
| rex field=curl_message max_match=10000 "{\"name\":\"(?P<user>[^\"]+)\".*?\"roles\":\[(?P<roles>[^\]]+)" \
| fields - curl_* \
| eval data=mvzip(user,roles,"%%%%") \
| mvexpand data \
| table data \
| makemv data delim="%%%%" \
| eval user=mvindex(data,0), roles=mvindex(data,1)\
| fields - data\
| eval roles=replace(roles,"\"","")\
| makemv roles delim=","
iseval = 0

#Macro to determine the search head cluster name, potentially using a case statement or similar
[search_head_cluster]
definition = "default"
iseval = 0

#Macro to determine the indexer cluster name, potentially using a case statement or similar
[indexer_cluster_name(1)]
args = indexer
definition = "default"
iseval = 0

#Macro to define indexer cluster name
[indexer_cluster_name]
definition = "default"
iseval = 0

[forwarder_name(1)]
args = hostname
definition = "default"
iseval = 0

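#Example (hypothetical) case-based customisation of the naming macros above for a multi-cluster
#environment; the hostname prefixes are assumptions and would need to match your estate:
#definition = case(match("$indexer$","^prd-idx"),"production",match("$indexer$","^dr-idx"),"dr",true(),"default")
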
[search_type_from_sid(1)]
args = search_id
definition = eval from=null(), username=null(), searchname2=null(), searchname=null()\
| rex field=$search_id$ "'?(_rt)?(_?subsearch)*_?(?P<from>[^_]+)((_(?P<base64username>[^_]+))|(__(?P<username>[^_]+)))((__(?P<app>[^_]+)__(?P<searchname2>[^_]+))|(_(?P<base64appname>[^_]+)__(?P<searchname>[^_]+)))"\
| rex field=$search_id$ "^_?(?P<from>SummaryDirector)"\
```Pattern appears to vary but remote_<hostname>_ is consistent along with the optional _subsearch, the _from can be <username>__ownername__appname__RMD for dashboards as one pattern, it can also be unixepoch (ad-hoc), or scheduler__username__appname (scheduled search), or username__owner__(something)__dashboardview, among others. RMD values can be translated via audit.log, scheduler.log or remote_searches.log (if savedsearch_name is there)!```\
| fillnull from value="adhoc"\
| eval searchname=coalesce(searchname,searchname2)\
| eval type=case(from=="scheduler","scheduled",from=="SummaryDirector","acceleration",match(search_id,"^'?alertsmanager_"),"scheduled",isnotnull(searchname),"dashboard",1=1,"ad-hoc")
iseval = 0

[base64decode(1)]
args = afield
definition = eval $afield$=null() ```As per https://docs.splunk.com/Documentation/Splunk/latest/Report/Createandeditreports usernames/apps can be base64 encoded, remove the eval when ready to use this...decrypt2 (splunkbase) can be used to decode with (remove the backslashes): eval $afield$=$afield$ . "===" | decrypt field=$afield$ atob emit('$afield$')```
iseval = 0

[dashboard_depends_filter1]
definition = ""
iseval = 0

[dashboard_depends_filter2]
definition = ```potentially a where clause to only filter when a certain number of tokens exist...``` ""
iseval = 0

[dashboard_depends_filter3]
definition = ```potentially a where clause to only filter when a certain number of tokens were matched or similar...``` ""
iseval = 0

[splunkadmins_wineventlog_index]
definition = wineventlog
iseval = 0

[splunkadmins_unexpected_term_count]
definition = 5
iseval = 0

#Note getsize=true appears to be added in 7.3.3 and above so this will only work on newer versions and only for lookup definitions
#the /admin/file-explorer/ endpoint will work for all CSV files but is admin only so using this option as a macro...
[mylookups]
definition = rest splunk_server=local /servicesNS/-/-/admin/transforms-lookup getsize=true \
| search [| rest /services/authentication/current-context/context splunk_server=local | head 1 | fields username | rename username AS eai:acl.owner] \
| eval name = 'eai:acl.app' + "." + title \
| rename "eai:acl.sharing" AS sharing \
| table name type size sharing \
| sort - size

[splunkadmins_tailreader_ignorepath]
definition = ""
iseval = 0

[splunkadmins_splunk_server_name]
definition = "default"

[splunkadmins_audit_alltime]
definition = ""

[splunkadmins_dashboards_alltime]
definition = ""

#Just a nicer way to format the returned data from the conf-props or similar conf- REST endpoints (borrowed from slack)
[conf_rest_endpoint(1)]
args = endpoint
definition = rest /services/configs/conf-$endpoint$ splunk_server=local \
| eval _raw="", acl="" \
| foreach "*" \
[| eval field=if(match("<<FIELD>>","^(title|eai:|splunk_server|author|id|updated|published)"),"","<<FIELD>> = ".'<<FIELD>>') \
| eval acl_field=if(match("<<FIELD>>","^(eai:|author|updated|published)"),"<<FIELD>> = ".'<<FIELD>>',"") \
| eval _raw=mvappend(_raw,field) \
| eval acl=mvappend(acl,acl_field)] \
| fields splunk_server title _raw acl \
| eval _raw=mvappend("[".title."]",_raw)
iseval = 0

[splunkadmins_excessive_rest_api_httplib]
definition = "Python-httplib2/0.13.1 (gzip)"

[splunkadmins_excessive_rest_api_threshold]
definition = 100

#Convert a time string into epoch time
[splunkadmins_epoch(1)]
args = time
definition = strptime("$time$","%Y-%m-%d %T")
iseval = 1

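#Example (hypothetical) usage of the eval-based epoch macro above at search time; the field names
#are illustrative only:
#... | eval cutoff=`splunkadmins_epoch(2020-01-01 00:00:00)` | where _time>cutoff
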
[splunkadmins_audit_logs_datamodel_sub]
definition = eval definition=null(), datamodel3=null(), datamodel1=null(), datamodel2=null()\
| rex field=search "^\s*\|\s*((from\s+datamodel\s*:?\s*\"?(?P<datamodel1>[^\"\.\s]+))|(datamodel\s+\"?(?P<datamodel2>[^\s\"\.]+)\"?\s+[^\|]*search))" \
| rex field=search "datamodel\s*=\s*\"?(?P<datamodel3>[^\s\"\.]+)" \
| eval datamodel_res=case(isnotnull(datamodel3) AND match(search,"\s*\|\s*(tstats)"),datamodel3,isnotnull(datamodel1),datamodel1,isnotnull(datamodel2),datamodel2,true(),null()) \
| lookup splunkadmins_datamodels datamodel AS datamodel_res, app AS app_name, splunk_server OUTPUT definition\
| eval app_name2="global"\
| lookup splunkadmins_datamodels datamodel AS datamodel_res, app AS app_name2, splunk_server OUTPUTNEW definition\
| lookup splunkadmins_datamodels datamodel AS datamodel_res, splunk_server OUTPUTNEW definition\
| nomv definition \
| eval definition=" " . definition . " "\
```While an mvmap could replace per-datamodel that results in a multivalue output. Also replace doesn't handle a multivalued replacement argument so just replace the first macro if it exists with the definitions of all the datamodels``` \
| eval search=if(isnotnull(datamodel_res),replace(search,mvindex(datamodel_res,0),definition),search)
iseval = 0

[splunkadmins_audit_logs_tags_sub]
definition = eval pretag=null(), tag=null(), definition=null(), definition2=null(), definition3=null() \
| rex field=search max_match=50 "(?P<pre_tag>tag\s*=\s*)(?P<tag>[^\s\)\"]+)" \
| lookup splunkadmins_tags tag, app AS app_name, splunk_server OUTPUT definition \
| eval app_name2="global" \
| lookup splunkadmins_tags tag, app AS app_name2, splunk_server OUTPUT definition AS definition2 \
| lookup splunkadmins_tags tag, splunk_server OUTPUT definition AS definition3 \
| eval definition=mvdedup(mvappend(definition, definition2, definition3)) \
| nomv definition \
| eval search=if(isnotnull(definition),replace(search,pre_tag . tag," " . definition . " "),search)
iseval = 0

[splunkadmins_audit_logs_eventtypes_sub]
definition = eval pre_eventtype=null(), eventtype=null(), eventtype2=null(), definition=null(), definition2=null(), definition3=null() \
| rex field=search max_match=20 "(?P<pre_eventtype>eventtype\s*=\s*)((\"(?P<eventtype>[^\"]+))|((?P<eventtype2>[^\s\)]+)))" \
| eval eventtype=coalesce(eventtype,eventtype2) \
| lookup splunkadmins_eventtypes eventtype, app AS app_name, splunk_server OUTPUT definition \
| eval app_name2="global" \
| lookup splunkadmins_eventtypes eventtype, app AS app_name2, splunk_server OUTPUT definition AS definition2 \
| lookup splunkadmins_eventtypes eventtype, splunk_server OUTPUT definition AS definition3 \
| eval definition=mvdedup(mvappend(definition, definition2, definition3)) \
| nomv definition \
| eval search=if(isnotnull(definition),replace(search,pre_eventtype . "\"?" . eventtype," " . definition . " "),search)
iseval = 0

[splunkadmins_slowpeer_time]
definition = 60
iseval = 0

[splunkadmins_slowpeer_threshold]
definition = 10
iseval = 0

[splunkadmins_searchmessages_user_1]
definition = ""
iseval = 0

[splunkadmins_searchmessages_user_2]
definition = ""
iseval = 0

[splunkadmins_searchmessages_admin_1]
definition = ""
iseval = 0

[splunkadmins_searchmessages_admin_2]
definition = ""
iseval = 0

[splunkadmins_splunkd_log_messages]
definition = ""
iseval = 0

[splunkadmins_alertactions_max_action_results]
definition = ""
iseval = 0

[splunkadmins_authorize_conf_prevent_users]
definition = role!="can_delete"
iseval = 0

[splunkadmins_indexer_remotesearches_alltime]
definition = host=localhost
iseval = 0

[splunkadmins_dataparsing_error]
definition = ""
iseval = 0

[splunkadmins_shutdown_time_by_shc(3)]
|
||||
args = macroName, minTimeContingency, maxTimeContingency
|
||||
definition = search ```Send an exclusion list in terms of a search result for the time when any SH was shutdown```\
|
||||
index=_internal (`$macroName$`) sourcetype=splunkd `splunkadmins_splunkd_source` (CASE("Shutting down")) OR "Shutdown complete in" OR "Received shutdown signal." OR "Shutdown signal received" OR "master has instructed peer to restart" OR "Performing early shutdown tasks"\
|
||||
| eval message=coalesce(message,event_message)\
|
||||
| stats min(_time) AS logTime by message, host\
|
||||
| eval search_head=host\
|
||||
| eval search_head_cluster=`search_head_cluster`\
|
||||
| stats min(logTime) AS minTime, max(logTime) AS maxTime by search_head_cluster\
|
||||
| eval minTime=minTime - $minTimeContingency$, maxTime=maxTime + $maxTimeContingency$\
|
||||
| eval search=" _time>" . minTime . " _time<" .maxTime . " search_head_cluster=" . search_head_cluster\
|
||||
| fields search\
|
||||
| format\
|
||||
| rex mode=sed field=search "s/\"//g"
|
||||
iseval = 0
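#Hypothetical usage sketch (an assumption, not a search shipped in this stanza): the macro
#above emits a formatted _time/search_head_cluster exclusion filter via | format, so it
#would typically be consumed from a subsearch, for example:
#index=_internal sourcetype=splunkd NOT [`splunkadmins_shutdown_time_by_shc("splunkadmins_restart_audit", 60, 120)`]
#Here "splunkadmins_restart_audit" is a hypothetical macro name; the first argument names
#a macro that scopes the shutdown events, and 60/120 widen the excluded window (in
#seconds) before and after the observed shutdown times.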

[splunkadmins_indexerqueue_count]
definition = 1
iseval = 0

[splunkadmins_deploymentserver_splunkserver]
definition = splunk_server=local
iseval = 0

[splunkadmins_sh_knowledgebundle_metrics_filter]
definition = where replication_time_msec>200000
iseval = 0

[splunkadmins_sh_knowledgebundle_metrics_timespan]
definition = 60m
iseval = 0

[splunkadmins_bundlepush_span]
definition = 10m
iseval = 0

[splunkadmins_metrics_source]
definition = source=*metrics.log*
iseval = 0
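#Hypothetical usage sketch (an assumption, not part of this stanza): a simple macro like
#this expands inline inside a search, e.g.
#index=_internal `splunkadmins_metrics_source` group=queue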

[splunkadmins_hec_metrics_source]
definition = source=*http_event_collector_metrics.log*
iseval = 0

[splunkadmins_summaryindex_durablesearch]
definition = NOT title IN ("SearchHeadLevel - summary indexing searches not using durable search") next_scheduled_time!=""
iseval = 0

[splunkadmins_events_per_second]
definition = desc.savedsearch_name IN ("Example")
iseval = 0
@ -0,0 +1,39 @@
#Splunk does not index the search.log files from the dispatch directory by default,
#so create a stanza to take only the parts we care about...
#Example lines to look for include:
#05-24-2018 08:31:03.881 ERROR SearchResultTransaction - Got status 502 from https://x.x.x.x:8089/services/streams/search?sh_sid=1527150641.164891_315974D3-2FA6-4A16-839A-A95A0376BA14
#05-24-2018 08:31:03.881 ERROR SearchResultTransaction - HTTP error status message from https://x.x.x.x:8089/services/streams/search?sh_sid=1527150641.164891_315974D3-2FA6-4A16-839A-A95A0376BA14: Error connecting: Connect Timeout
#05-24-2018 08:31:03.881 ERROR DispatchThread - sid:1527150641.164891_315974D3-2FA6-4A16-839A-A95A0376BA14 Unknown error for peer indexername. Search Results might be incomplete. If this occurs frequently, please check on the peer.
#05-28-2018 00:52:17.245 INFO DispatchThread - sid:1527468707.34320_315974D3-DFFC-48EC-86C8-33BD6744EE4F Search auto-finalized after time limit (30 seconds) reached.
#However, a better alternative may be to set [search]
#log_search_messages = true
#in the limits.conf file and then use the search_messages.log file...
[splunk:searchlog]
TIME_PREFIX = ^
TIME_FORMAT = %m-%d-%Y %H:%M:%S.%3N
SHOULD_LINEMERGE = false
TRANSFORMS-set = setNull,setError,setAutoFinalize

#Example inputs.conf if you want to use the above in Linux
#[monitor:///opt/splunk/var/run/splunk/dispatch/*/search.log]
#sourcetype = splunk:searchlog
#index = _internal

#Splunk records failures from search heads to indexers for corrupt buckets in the info.csv log only at the search head level.
#The search.log on the indexer peers *will* record this, so if you're ingesting the search.log from the peers you probably don't need this one...
#The info.csv does show you what the end user will see in terms of errors such as this...
#Examples include:
#,,,,,,,,,,,,,,,,,ERROR,"[hostname] Failed to read size=1 event(s) from rawdata in bucket='_internal~43~E21ADB4E-02B7-4877-8A42-A15CE7F422BD' path='.../db_1515304396_1515080916_.... Rawdata may be corrupt, see search.log. Results may be incomplete!","{}",,,,,,,
#Note that a better alternative may be to set [search]
#log_search_messages = true
#in the limits.conf file and then use the search_messages.log file...
[splunk:search:info]
SHOULD_LINEMERGE = false
DATETIME_CONFIG = NONE
TRANSFORMS-set = setNull,setWARNorERROR,setAutoFinalize

#Example inputs.conf if you want to use the above in Linux
#[monitor:///opt/splunk/var/run/splunk/dispatch/*/info.csv]
#sourcetype = splunk:search:info
#index = _internal
#crcSalt = <SOURCE>
File diff suppressed because it is too large
@ -0,0 +1,85 @@
[setNull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setError]
REGEX = ^[01]\d-[0-3]\d-20\d\d \d{2}:\d{2}:\d{2}.\d{3}\s+ERROR\s+
DEST_KEY = queue
FORMAT = indexQueue

[setAutoFinalize]
REGEX = Search auto-finalized after
DEST_KEY = queue
FORMAT = indexQueue

#Only include warning or error entries
[setWARNorERROR]
REGEX = ,(?:ERROR|WARN),
DEST_KEY = queue
FORMAT = indexQueue
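#Note on how these combine (standard Splunk selective-indexing pattern): transforms
#named in a props.conf TRANSFORMS-set line run in the order listed, so setNull first
#routes every event to the nullQueue, and the later transforms re-route only the
#matching events back to the indexQueue. A hypothetical props.conf pairing (sourcetype
#name is made up for illustration):
#[my:custom:sourcetype]
#TRANSFORMS-set = setNull,setWARNorERROR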

[splunkadmins_macros]
#The commented KV Store config below failed with: ERROR KVStoreLookup - KV Store output failed with err: The provided query was invalid. (Document may not contain '$' or '.' in keys.)
#Switching back to csv files for now
#collection = splunkadmins_macros
#external_type = kvstore
#fields_list = definition, eai:acl.app, title
batch_index_query = 0
case_sensitive_match = 1
collection =
external_type =
fields_list =
filename = splunkadmins_macros.csv
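#Hypothetical usage sketch (an assumption; field names taken from the commented
#fields_list above, not from a shipped search):
#| lookup splunkadmins_macros title OUTPUT definition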

[splunkadmins_userlist_indexinfo]
collection = splunkadmins_userlist_indexinfo
#external_type = kvstore
#fields_list = srchIndexesAllowed, srchIndexesDefault, user
filename = splunkadmins_userlist_indexinfo.csv

[splunkadmins_indexlist]
batch_index_query = 0
case_sensitive_match = 1
filename = splunkadmins_indexlist.csv

[splunkadmins_indexes_per_role]
batch_index_query = 0
case_sensitive_match = 1
filename = splunkadmins_indexes_per_role.csv

[splunkadmins_datamodels]
batch_index_query = 0
case_sensitive_match = 0
filename = splunkadmins_datamodels.csv

[splunkadmins_tags]
batch_index_query = 0
case_sensitive_match = 0
filename = splunkadmins_tags.csv

[splunkadmins_eventtypes]
batch_index_query = 0
case_sensitive_match = 0
filename = splunkadmins_eventtypes.csv

[splunkadmins_rmd5_to_savedsearchname]
batch_index_query = 0
case_sensitive_match = 0
filename = splunkadmins_rmd5_to_savedsearchname.csv

[splunkadmins_indexlist_by_cluster]
batch_index_query = 0
case_sensitive_match = 1
filename = splunkadmins_indexlist_by_cluster.csv

#Note that the lookup splunkadmins_hec_reply_code_lookup is based on https://github.com/redvelociraptor/gettingsmarter/blob/main/dashboards/hec_reply_codes.csv (previously https://docs.splunk.com/Documentation/Splunk/latest/Data/TroubleshootHTTPEventCollector) and this may change over time
[splunkadmins_hec_reply_code_lookup]
batch_index_query = 0
case_sensitive_match = 1
filename = splunkadmins_hec_reply_code_lookup.csv

[splunkadmins_lookupfile_owners]
batch_index_query = 0
case_sensitive_match = 1
filename = splunkadmins_lookupfile_owners.csv
@ -0,0 +1,23 @@
# Application-level permissions
[]
access = read : [ admin, sc_admin ], write : [ admin, sc_admin ]

[eventtypes]
export = none

[props]
export = none

[transforms]
export = none

[lookups]
export = none

[tags]
export = none

[viewstates]
access = read : [ * ], write : [ * ]
export = none
@ -0,0 +1,271 @@
{
  "version": "1.0",
  "date": "2024-11-18T21:26:24.560754613Z",
  "hashAlgorithm": "SHA-256",
  "app": {
    "id": 3796,
    "version": "4.0.1",
    "files": [
      {
        "path": "default/app.conf",
        "hash": "b67935fa9e332c7889406e4380fea757f826bb863437e6329493669b9df562a1"
      },
      {
        "path": "default/data/ui/nav/default.xml",
        "hash": "1864d3aeaeac7ee0c49c93e0e41609d67da7783a175638b08ed31f9a8b9f328d"
      },
      {
        "path": "default/data/ui/views/ClusterMasterJobs.xml",
        "hash": "ace418e8530449f73e9d7d91f6e6f57002e234c43c5a04e22866bfaa525f7949"
      },
      {
        "path": "default/data/ui/views/data_model_rebuild_monitor.xml",
        "hash": "10690251de7d55a3da368d0bed0e0acd90e56285118aa3fbd88bc344eabbab5e"
      },
      {
        "path": "default/data/ui/views/data_model_status.xml",
        "hash": "f93feda0cbb8874bc40f7623bc3ae3ef40b68811e59b2d0384f3ae9ba3208e96"
      },
      {
        "path": "default/data/ui/views/detect_excessive_search_use.xml",
        "hash": "fbf207e014b41b7f38c21904d8e6525d493aeda1a62c81dd8a67a9a5c4939d61"
      },
      {
        "path": "default/data/ui/views/heavyforwarders_max_data_queue_sizes_by_name.xml",
        "hash": "d787e4eb2766616fb6a76fe7944c8510b013005556fe75355b878846bc87d227"
      },
      {
        "path": "default/data/ui/views/heavyforwarders_max_data_queue_sizes_by_name_v8.xml",
        "hash": "71c203e028bcc17f9d13f52e1f001a68605304a50a6337bcd2daf9a571bddf0e"
      },
      {
        "path": "default/data/ui/views/heavy_forwarder_analysis.xml",
        "hash": "ca4286507e38d2f08da1e989d6a1708b9c09e48ef18851a1daf60d1cafdb798d"
      },
      {
        "path": "default/data/ui/views/hec_performance.xml",
        "hash": "dc166a30a81c9de437b8d2c922ad581b9ad988389df156a3fef66c8b2f4fa134"
      },
      {
        "path": "default/data/ui/views/indexer_data_spread.xml",
        "hash": "a45ddeebf77d329b45be89be753426404587c896c41b0478b2d4068892d2f071"
      },
      {
        "path": "default/data/ui/views/indexer_max_data_queue_sizes_by_name.xml",
        "hash": "065529820d0e080a9a75699afbd7631df5de42b25e22dbf6e4ed0949b7e77b4c"
      },
      {
        "path": "default/data/ui/views/indexer_max_data_queue_sizes_by_name_v8.xml",
        "hash": "4faaf0efed77f607a154e079706ab5f763fae170dbe5d41da41e397ddeb77cf0"
      },
      {
        "path": "default/data/ui/views/issues_per_sourcetype.xml",
        "hash": "fd1bdf2a18f159e6b2f8ff93c5c130419a5c2a7e1fc032926928199ee5ba237e"
      },
      {
        "path": "default/data/ui/views/knowledge_objects_by_app.xml",
        "hash": "d0a7644a87608ac53508677dd52925a997846a7577eadb1f177281f0f63aa172"
      },
      {
        "path": "default/data/ui/views/knowledge_objects_by_app_drilldown.xml",
        "hash": "cf74079b09ffe61312c4d00b7c51ed8635a025c79b3727814c8a4425876eaa1c"
      },
      {
        "path": "default/data/ui/views/lookups_in_use_finder.xml",
        "hash": "c6e43d1b40b08e665553774e21fdf38f5286f550c2be5d5a2e6989052d800986"
      },
      {
        "path": "default/data/ui/views/lookup_audit.xml",
        "hash": "208681df6d96087207518af6a948834c3211c92bb65de450ded41e0dea6a090a"
      },
      {
        "path": "default/data/ui/views/rolled_buckets_by_index.xml",
        "hash": "f9b0bc4f1655ca5252fcaab05865f38bffdeb2c61fe3a09fa8212b7334ffbf0b"
      },
      {
        "path": "default/data/ui/views/search_head_scheduledsearches_distribution.xml",
        "hash": "86997ae930ad8a7e505e86021efd6bc4b83abbf5675295b53213bc85be13766e"
      },
      {
        "path": "default/data/ui/views/smartstore_stats.xml",
        "hash": "5c4a7f45ee75e2f4d3a219e961fe841a428f5cb5986030929d3e2b4483e8a04d"
      },
      {
        "path": "default/data/ui/views/splunk_forwarder_data_balance_tuning.xml",
        "hash": "7b30fd2f4fd19ae94f6b5b6fa0bf87b1810ec4bf2a2085cd143ad31fa5bc9bac"
      },
      {
        "path": "default/data/ui/views/splunk_forwarder_output_tuning.xml",
        "hash": "0a9233373d6919f6668aabf669c0491519bd3c040de34c2061990985602f4197"
      },
      {
        "path": "default/data/ui/views/splunk_introspection_io_stats.xml",
        "hash": "f101b92f4725bcd91ce05a0c35484d8f496251609abc81e5ad89213625e833de"
      },
      {
        "path": "default/data/ui/views/troubleshooting_indexer_cpu.xml",
        "hash": "5b438d0ec47779a0c9e97b294bda0dd528a52095046c5b16129d87f85a79b4ab"
      },
      {
        "path": "default/data/ui/views/troubleshooting_indexer_cpu_drilldown.xml",
        "hash": "710a2e4a0f6d088b1ef1cab7db9944ad8aab0a6a3a4eef03e2d4d11c038469e5"
      },
      {
        "path": "default/data/ui/views/troubleshooting_resource_usage_per_user.xml",
        "hash": "8ea42775ea292e9fd7801ea35d7fabb4874db2c07b9558790df0a5f41aea3f10"
      },
      {
        "path": "default/data/ui/views/troubleshooting_resource_usage_per_user_drilldown.xml",
        "hash": "3de1321b80e17059ba468dfe076208b28b3d51772537dd037b087f342cca0104"
      },
      {
        "path": "default/macros.conf",
        "hash": "dffbdc2e99dfa520f86eaa7339c75733b34ccce8aa99f88c909be616e699a657"
      },
      {
        "path": "default/props.conf",
        "hash": "b422e5d7410919ac19476180e1383520830592fa3da27787fdeed0d5b7262bb5"
      },
      {
        "path": "default/savedsearches.conf",
        "hash": "4abb46669e6728ac2d171489fb049e8275f815b140a778b2ea2250b25635467c"
      },
      {
        "path": "default/transforms.conf",
        "hash": "6fc76fe50cd62a39018b22279535b7bb475a326ccf2db1b7fe1a2b3a378fa033"
      },
      {
        "path": "LICENSE",
        "hash": "b40930bbcf80744c86c46a12bc9da056641d722716c378f5659b9e555ef833e1"
      },
      {
        "path": "lookups/splunkadmins_datamodels.csv",
        "hash": "1d5f73c2170040fd111d3e64f095ffd808978030d9d3817d422b214eb82be636"
      },
      {
        "path": "lookups/splunkadmins_eventtypes.csv",
        "hash": "4f308f3c824b105eace933f06b5a170fa86dbf98c5ba348e4a05522804b0dbec"
      },
      {
        "path": "lookups/splunkadmins_hec_reply_code_lookup.csv",
        "hash": "9c4be11e9cfa465f5d8a38c3f1ba467d00191c14d0bd767a9fe0ab7a77196b79"
      },
      {
        "path": "lookups/splunkadmins_indexes_per_role.csv",
        "hash": "39d43de1ef29a713ad480f668375d56fba195d57d49b77b190aed712f455fd55"
      },
      {
        "path": "lookups/splunkadmins_indexlist.csv",
        "hash": "f816b480f87144ec4de5862adf028ff66cc6964250325d53fd22bf8922824b6f"
      },
      {
        "path": "lookups/splunkadmins_indexlist_by_cluster.csv",
        "hash": "8d953cac7d4dbd8a1cd5aa3bb488710a6eeb5d49c0c43c27930328acdf9708f0"
      },
      {
        "path": "lookups/splunkadmins_lookupfile_owners.csv",
        "hash": "ebda123bd9d0f791eaa177d8a7c6903e3f4b1df910c57218e3cc48ed99cdbcb4"
      },
      {
        "path": "lookups/splunkadmins_macros.csv",
        "hash": "28ecbbdbe1641776141e78ca483e310565f3023f0a2a6a539e2dc0ee752824e2"
      },
      {
        "path": "lookups/splunkadmins_rmd5_to_savedsearchname.csv",
        "hash": "64d62548bb0741d6f76fad5ab96c168307c68d5c98251139f2c9f6c6c0574024"
      },
      {
        "path": "lookups/splunkadmins_tags.csv",
        "hash": "d7ea98b9397ddbedb9b6471fb6cdc86a76135388e823204fcd84d91826318dfc"
      },
      {
        "path": "lookups/splunkadmins_userlist_indexinfo.csv",
        "hash": "d9e8eabd1d316bc60a6e351339b676bd6d7b53b914f01bf3c7858cf7c92716bc"
      },
      {
        "path": "metadata/default.meta",
        "hash": "0838ba65305ef1ae0367d6bcc5c6fd63d0f59d8e6fdb66b6aed4ff14394c613c"
      },
      {
        "path": "NOTICE",
        "hash": "11494ae88ef9a7d75cd70b4e2c3152bd83751665a3dde0590527857496ba5440"
      },
      {
        "path": "README.md",
        "hash": "3da128fa717ba6929a0528ec9f91057656356de738cd6ff50c989f14f50efcd4"
      },
      {
        "path": "static/appIcon.png",
        "hash": "32f1a6833f3a9db2f6d4dcac27404459f91bef4e2898604aa1ddc168455dbc1b"
      },
      {
        "path": "static/appIconAlt.png",
        "hash": "32f1a6833f3a9db2f6d4dcac27404459f91bef4e2898604aa1ddc168455dbc1b"
      },
      {
        "path": "static/appIconAlt_2x.png",
        "hash": "8caf40b544afaaa087d232c479560a0a3c2e57b27d0f8cb38f90ba48f53256c6"
      },
      {
        "path": "static/appIcon_2x.png",
        "hash": "8caf40b544afaaa087d232c479560a0a3c2e57b27d0f8cb38f90ba48f53256c6"
      },
      {
        "path": "static/appLogo.png",
        "hash": "ee7abc736a4b4cbbd796383f0dce484d4efe4b1be5dc309ff6730a14a92896a0"
      },
      {
        "path": "static/appLogo_2x.png",
        "hash": "0b483b1aec1a6c70a98bd1a58fa31406b7d946ce9cfac3ac3ae296edc7fdce28"
      }
    ]
  },
  "products": [
    {
      "platform": "splunk",
      "product": "enterprise",
      "versions": [
        "8.1",
        "8.2",
        "9.0",
        "9.1",
        "9.2",
        "9.3"
      ],
      "architectures": [
        "x86_64"
      ],
      "operatingSystems": [
        "windows",
        "linux",
        "macos",
        "freebsd",
        "solaris",
        "aix"
      ]
    },
    {
      "platform": "splunk",
      "product": "cloud",
      "versions": [
        "8.1",
        "8.2",
        "9.0",
        "9.1",
        "9.2",
        "9.3"
      ],
      "architectures": [
        "x86_64"
      ],
      "operatingSystems": [
        "windows",
        "linux",
        "macos",
        "freebsd",
        "solaris",
        "aix"
      ]
    }
  ]
}