Also refer to https://github.com/camrunr/s2_traffic_report/blob/master/s2_traffic_report.xml (the SmartStore S2S Traffic report) for an alternative dashboard view of SmartStore downloads/uploads. To determine which searches are causing cache misses, refer to the SearchHeadLevel - SmartStore cache misses combined or SmartStore cache misses - remote_searches reports in this app. Note that the combined report requires the search to complete, while the indexing-tier version can catch an in-progress search.
Upload/download latency
index=_internal $host$ TERM(status=succeeded) OR TERM(status=failed) sourcetype=splunkd `splunkadmins_splunkd_source` TERM(action=$action$)
| rangemap field=kb under_300=0-307200 300_700=307201-716800 700_1000=716801-1024000 default=over1000
| eval combined = action . "_" . range
| timechart avg(elapsed_ms) AS avg_elapsed_ms, max(elapsed_ms) AS max_elapsed_ms by combined

Upload/download thruput
index=_internal sourcetype=splunkd `splunkadmins_splunkd_source` $host$ TERM(status=succeeded) OR TERM(status=failed) TERM(action=$action$)
| timechart sum(eval(kb/1024)) AS MB by action

CacheManager Queued download count
```Relates to [cachemanager] max_concurrent_downloads in server.conf. Thanks to Splunk support for the original version of this search```
index=_internal $host$ `splunkadmins_metrics_source` TERM(group=cachemgr_download) sourcetype=splunkd queued
| timechart partial=f limit=50 avg(queued) AS avg_queued by host
| eval ceiling=20

CacheManager hits/misses
index=_internal $host$ `splunkadmins_metrics_source` sourcetype=splunkd group=cachemgr_bucket TERM(cache_hit=*) OR TERM(cache_miss=*)
| timechart sum(cache_hit) as Hits sum(cache_miss) as Misses
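Outside the dashboard, the hit ratio implied by those two sums can be sketched in a few lines of Python. This is a hypothetical helper, not part of the app; the field names mirror the cachemgr_bucket metrics (cache_hit, cache_miss) and the sample counts are invented.

```python
# Hypothetical helper: the cache hit ratio implied by the Hits/Misses
# timechart above. Sample counts are made up for illustration.

def cache_hit_ratio(hits, misses):
    """Hit ratio as a percentage; 0.0 when there is no cache traffic."""
    total = hits + misses
    return 0.0 if total == 0 else round(hits / total * 100, 2)

samples = [
    {"cache_hit": 950, "cache_miss": 50},   # mostly warm cache
    {"cache_hit": 200, "cache_miss": 800},  # miss-heavy: check the cache-miss reports
]
for s in samples:
    print(cache_hit_ratio(s["cache_hit"], s["cache_miss"]))
```

A persistently low ratio is the cue to run the cache-miss reports mentioned above and find the offending searches.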

Excessive cachemanager downloads
```Thanks to Splunk support for the original version of this search, similar version available in the monitoring console...```
index=_internal $host$ `splunkadmins_splunkd_source` sourcetype=splunkd CacheManager TERM(action=download) TERM(status=succeeded) TERM(download_set=*)
| rex field=cache_id "^[^\|]+\|(?<index_name>[^~]+)~[^~]+~[^~]+"
| eval identifier=(cache_id + host)
| stats count by identifier, index_name
| stats count(eval(count>1)) as duplicate_downloads, sum(count) as all_downloads, count(eval(count>8)) as excessive_duplicate_downloads by index_name
| eval duplicate_percent=if(all_downloads=0,0,round((duplicate_downloads/all_downloads)*100,2))
| fields index_name, duplicate_percent, all_downloads, duplicate_downloads, excessive_duplicate_downloads
| rename index_name as Index, duplicate_percent as "Repeat Download %", all_downloads as "All Downloads", duplicate_downloads as "Repeated"

CacheManager downloads by age/index
```Thanks to Splunk support for the original version of this search```
index=_audit $host$ TERM(action=remote_bucket_download) TERM(info=completed)
| eval gb=kb/1024/1024
| eval age=round((now()-earliest_time)/60/60/24)
| bucket span=30 age
| rex field=cache_id "^[^\|]+\|(?P<index_name>[^~]+)~[^~]+~[^~]+"
| eval age_index = age . " - " . index_name
| timechart span=60s sum(gb) by age_index limit=10 useother=f usenull=f
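The rex and age evals in this last panel can be sketched in Python to show what is being extracted. This is a hypothetical illustration, not part of the app: it pulls the index name out of a SmartStore cache_id of the form bid|&lt;index&gt;~&lt;bucket_id&gt;~&lt;guid&gt;| and floors the data age into 30-day bands; the sample cache_id is invented.

```python
import re
import time

# Hypothetical sketch of this panel's rex/eval logic. The cache_id format
# bid|<index>~<bucket_id>~<guid>| matches the regex used in the search;
# the sample value below is made up.
CACHE_ID_RE = re.compile(r"^[^|]+\|(?P<index_name>[^~]+)~[^~]+~[^~]+")

def index_from_cache_id(cache_id):
    """Return the index name embedded in a cache_id, or None if it doesn't match."""
    m = CACHE_ID_RE.match(cache_id)
    return m.group("index_name") if m else None

def age_band_days(earliest_time, now, span=30):
    """Days since earliest_time, floored to a span-day band (mirrors bucket span=30 age)."""
    age = round((now - earliest_time) / 86400)
    return (age // span) * span

cid = "bid|main~42~0A1B2C3D-1111-2222-3333-444455556666|"
print(index_from_cache_id(cid))                                # main
print(age_band_days(time.time() - 100 * 86400, time.time()))   # 90
```

Old age bands dominating the downloads suggest searches reaching far back in time and repeatedly pulling cold buckets from the remote store.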