Channel: Tips & Tricks – Splunk Blogs

Smart AnSwerS #73


Hey there community and welcome to the 73rd installment of Smart AnSwerS.

It has been almost a year since the very first members of the SplunkTrust Community MVP program were chosen by Rachel Perkins, Sr. Director of Community. SplunkTrustees have accomplished a lot of great work helping users through Splunk Answers, user groups, virtual .conf sessions, blogs, and giving talks at Splunk and industry events. It is now time for the next cohort for 2016-2017 to be selected by Rachel and the current members. Applications are now live as of last Friday, July 29th, and are due for submission by 12:01AM on Saturday, August 20th, 2016. Best of luck to all applicants!

Check out this week’s featured Splunk Answers posts:

How can I determine the lag between when an app’s scheduled search is supposed to run and when it actually runs?

nmiller shares this helpful question and answer with the community to help users identify the lag between the time a search is scheduled to run and the actual run time. She provides a run-anywhere example search that can be used to look at searches for a particular app or all scheduled searches in your environment.
https://answers.splunk.com/answers/436059/how-can-i-determine-the-lag-between-when-an-apps-s.html
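
nmiller's exact search is on the linked post; purely as a hedged illustration of the general approach, the scheduler events in the _internal index carry scheduled_time and dispatch_time fields that can be compared, along these lines (field names assumed from typical scheduler.log events, not necessarily her query):

index=_internal sourcetype=scheduler status=success app=*
| eval lag_seconds = dispatch_time - scheduled_time
| stats count avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag by app, savedsearch_name
| sort - avg_lag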

What are the expected results if a Splunk deployment server goes down for an extended period of time?

RJ_Grayson was concerned about what would happen with universal forwarder communication and deployment apps during a deployment server outage. esix explains that deployment clients will still try to contact the deployment server based on the phone home interval, and reassures RJ_Grayson that apps would not get dumped or stop working. He also describes what happens with universal forwarders when the existing deployment server comes back online, or a new one is built.
https://answers.splunk.com/answers/433297/what-are-the-expected-results-if-a-splunk-deployme.html

How to search information on usage for each search from all the different apps in our Splunk environment?

ECovell needed to find usage data on each search for all apps to see what could be cleaned up or optimized for better performance, but wasn’t sure where to look or start. Ravan provides an answer with two useful searches that identify searches by app_name, which users are running them, disk usage, number of runs, and job run time information.
https://answers.splunk.com/answers/437907/how-to-search-information-on-usage-for-each-search.html

Thanks for reading!

Missed out on the first seventy-two Smart AnSwerS blog posts? Check ‘em out here!
http://blogs.splunk.com/author/ppablo


SplunkTalk – #76 – Buzzword Bingo


We're getting the hang of this now?!? Maybe? In today's episode we chat about some upcoming goodies like Hal's Developer Lounge and Wilde's Yoga Classes and much more at #Splunkconf16 at the Swan/Dolphin Hotel in Orlando. Clint has a new job at Splunk. Wilde celebrates his 10th year at Splunk and shares some funny stories about our bumpy time at 250 Brannan, where we slowly took over that building (#pettingzoo). Splunk is now in a fantastic new building next door; if you're in SF, come for a visit #thereisalegoroom.

Episodes are recorded frequently and streamed live on the internet on YouTube! Email us at splunktalk@splunk.com to ask questions and have them answered on air!

Listen here, right now!

Smart AnSwerS #74


Hey there community and welcome to the 74th installment of Smart AnSwerS.

A Splunk Paper Aircraft Association was started up at HQ a couple weeks ago where each participant creates and launches their own paper aircraft every Friday afternoon. Weekly awards are given for longest distance traveled and duration in flight. There’s also a Splunker’s Choice Award for the most unusual, interesting, creative, or fun design. Last Friday, Director of Documentation ChrisG won top prize for his aircraft, winning in both categories of distance and duration. Congrats to the all-star!

Check out this week’s featured Splunk Answers posts:

Large lookup caused the bundle replication to fail. What are my options?

Support engineer rbal shared this Q&A with the Splunk community because it was a common issue seen in cases she had worked on with customers. Several users have asked about this problem on Splunk Answers throughout the years, so rbal posted this almost a year ago for others to easily search and find her troubleshooting guidelines. She has since added updates on caveats with distributed search and search head clustering environments to cover more ground.
https://answers.splunk.com/answers/436059/how-can-i-determine-the-lag-between-when-an-apps-s.html

How to match an IP address from a lookup table of CIDR ranges?

glenngermiathen was trying to search for events where a destination IP, but not the source IP, is found in a lookup table of CIDR ranges. lguinn from the Splunk Education team points out that the argument for cidrmatch is a string, not a list of subnets. To get something like this to work, she shows how to do it with the lookup command, covering the options to configure in transforms.conf and the required format for the lookup file. lguinn created an example search and explains how it works to get the expected filtered results.
https://answers.splunk.com/answers/305211/how-to-match-an-ip-address-from-a-lookup-table-of.html

Where should I check for python.log error messages about generating pdf of scheduled reports?

Skender27 was getting "An error occurred while generating the PDF" when receiving some scheduled reports, and wanted to know what to look for in python.log to figure out the underlying cause. ronogle had the same problem and found out how to track and pinpoint the issue. He suggested looking in splunkd_access.log for a 400 status code with a corresponding time value, and checking whether this status code is also found in python.log and pdfgen.log. If all of these check out, then splunkdConnectionTimeout in web.conf needs to be increased to a value greater than the time value found in splunkd_access.log to prevent this error from happening again.
https://answers.splunk.com/answers/339920/where-should-i-check-for-pythonlog-error-messages.html

Thanks for reading!

Missed out on the first seventy-three Smart AnSwerS blog posts? Check ‘em out here!
http://blogs.splunk.com/author/ppablo

Secure Splunk Web in Five Minutes Using Let’s Encrypt


Configuring SSL for your public-facing Splunk instance is time-consuming and expensive, but essential in today's digital environment. Whether you go with a cloud provider or self-host, RTFM-ing how to generate the keys correctly and configure Splunk to use them can be quite confusing. Last year, a new certificate authority, Let's Encrypt, was born in an effort to streamline the CA process and make SSL encryption more widely available to users (the service is FREE). In this short tutorial, we will cover how to make use of this new CA to secure your Splunk instance and stop using self-signed certs. Using SSL will help you secure your Splunk instance against MITM attacks, and Let's Encrypt utilizes all of the SSL best practices with none of the frustration.

The only requirements for this five-minute tutorial are:

  • Root/Sudo Access to the server running Splunk Web
  • Ownership of a publicly accessible domain name
  • Internet connectivity for the Splunk server

Configure the domain

One important requirement is for the publicly accessible domain to have an A record associated with the host you are creating a cert for. Additionally the @ record must also route to a publicly accessible server.

Example DNS Settings for AnthonyTellez.com:

[Screenshot: Example DNS settings]
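
In plain terms, the records might look roughly like the following sketch (an illustrative zone snippet with a placeholder IP, not the actual records for this domain):

anthonytellez.com.             A    203.0.113.10
splunk-es.anthonytellez.com.   A    203.0.113.10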

Install Certbot & Generate Certs

Thanks to the EFF, there is an easy way to automate the cert process using Certbot.
You can find the exact instructions for getting it installed on your flavor of Linux here: https://certbot.eff.org/
From the drop down you want to select “none of the above” and the operating system you are using.
For this example, we are going to be using Ubuntu 16.04 (Xenial).

Install Certbot on the Splunk server you wish to secure with SSL using: sudo apt-get install letsencrypt

Once installed, use the following command line options for certbot, substituting your domain & subdomain.

$ letsencrypt certonly --standalone -d anthonytellez.com -d splunk-es.anthonytellez.com

At the prompt, fill out your information for key recovery and agree to the TOS.

[Screenshot: Certbot interface]

On successful completion, you should see the following message:

[Screenshot: Certbot success message]

Take note of the expiration date; you can renew whenever you need to.
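
When renewal time comes around, it is typically a one-liner; a hedged sketch for the same Ubuntu setup (the standalone plugin needs ports 80/443 free while it runs):

sudo letsencrypt renew

After a renewal, re-copy the refreshed fullchain.pem and privkey.pem into the location Splunk reads them from (set up below) and restart Splunk so it picks up the new certificate.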

Configure Splunkweb

Take a quick peek in /etc/letsencrypt/live/


root@splunk-es:~# cd /etc/letsencrypt/live/anthonytellez.com/
root@splunk-es:/etc/letsencrypt/live/anthonytellez.com# ls
cert.pem chain.pem fullchain.pem privkey.pem

You will see four .pem files, but you only need to copy the two that Splunk Web SSL needs: fullchain.pem and privkey.pem. The quickest way to get Splunk configured (and to remember where the certs live) is to create a directory under /opt/splunk/etc/auth/. In my case, I created a directory using the domain name to keep things simple and memorable.


mkdir /opt/splunk/etc/auth/anthonytellez
cp fullchain.pem privkey.pem /opt/splunk/etc/auth/anthonytellez/
chown -R splunk:splunk /opt/splunk/

Configure Splunk web to make use of the certs in $SPLUNK_HOME/etc/system/local/web.conf:


[settings]
enableSplunkWebSSL = 1
privKeyPath = etc/auth/anthonytellez/privkey.pem
caCertPath = /opt/splunk/etc/auth/anthonytellez/fullchain.pem

Restart Splunk using ./splunk restart and direct your browser to the HTTPS version of Splunk Web.

In our example the URL would be: https://splunk-es.anthonytellez.com:8000

[Screenshot: Splunk Web over HTTPS]

If you need additional examples, take a peek at docs.splunk.com: Configure Splunk Web to use the key and certificate files.

Handling HTTP Event Collector (HEC) Content-Length too large errors without pulling your hair out


Once you start using HEC, you want to send it more and more data, and as you do, your payloads are going to increase in size, especially if you start batching. Unfortunately, as soon as you exceed a request payload size of close to 1MB (for example if you use our Akamai app or send events from AWS Lambda), you'll get a 413 status code with a not-so-friendly error message:

“Content-Length of XXXXX too large (maximum is 1000000) “

At this point you might feel tempted to pull your hair out, but you have options. You are hitting this error because HEC has a pre-defined limit on the maximum content length of a request. Fortunately, this limit is configurable via limits.conf.

If you look in $SPLUNK_HOME/etc/system/default/limits.conf, you'll see the following:

# The max request content length.
max_content_length = 1000000

All you need to do is raise that limit in $SPLUNK_HOME/etc/system/local/limits.conf, restart your Splunk instance, and you'll be good to go. If you are hosted in Splunk Cloud, our support folks will be more than happy to take care of it for you.
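
As a sketch of what that local override might look like, assuming the setting lives under the [http_input] stanza (check the limits.conf.spec shipped with your version) and raising the cap to roughly 5 MB:

# $SPLUNK_HOME/etc/system/local/limits.conf
[http_input]
# raise the maximum HEC request payload from the 1000000-byte default
max_content_length = 5000000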

As a side note, we’ll be upping this default in our next release to 800MB, so that you are never bothered by this error again.

Detecting early signs of compromise by splunking windows sysinternal



OVERVIEW

The traditional approach to detecting compromise in a Windows environment, using signature-based anti-virus / anti-malware products, struggles to detect advanced malware and threats. Most signature-based anti-malware solutions rely on a known list of signatures:

  • Endpoint protection products don't have a perfect list of threats, so they cannot detect every signature that exists or is known
  • Signatures don't apply to new types of threats that execute as new binaries on the endpoint, because there is no known signature to compare against

This traditional approach leaves organizations constantly dealing with the security breaches hitting the headlines, from data exfiltration and service interruptions to ransomware, all stemming from an inability to protect endpoints and detect the activity on them.

Fundamentally, the problem is that many organizations are unable to use the very granular Windows system activity events that could be collected from their Windows infrastructure, or to apply analytics to that data to determine what is normal versus abnormal by reviewing every process and session created on a Windows endpoint.


Collecting Sysinternals data from every endpoint requires coordinated effort and the right technology: a lightweight agent on each Windows endpoint that can collect granular Sysinternals events in real time from many systems. Once the details of Windows activity are collected from the endpoint in event log format, they need to be stored in a data platform that can handle the message volume, which can range from tens to hundreds of events per second from a single machine, and that can search and apply analytics against every single system activity event effectively to find anomalies.

SOLUTION

Splunk forwarders, which include the ability to collect Sysmon data from a Windows infrastructure, provide the critical function of collecting Sysinternals data from the endpoint in real time. Splunk then transports the events that are relevant for analyzing anomalies across all process and session creations on the endpoint.

Splunk provides two key functions to solve the challenges of making the best use of Sysinternals events for detecting early signs of advanced malware infections.

  1. Collection of Windows activities: using the Splunk forwarder for Windows to easily collect all Sysinternals data through the event log
    • Provides a simple agent to collect all Windows data (event logs, Sysinternals, perfmon, files)
    • Provides secure, reliable transport for centralizing data into the analytics platform
    • Sysmon-specific formatting and field extraction so analysis can be applied immediately
  2. An analytics base for searching and analyzing anomalies: using simple searches with statistical summation and calculation to highlight rare values in process creation details
    • Ability to pivot on different endpoint criteria to dynamically derive results
    • Ability to apply machine learning

By applying an analytical approach to the data, whether a threat is known or unknown, and without using any additional tools, Splunk can distinguish abnormal endpoint activity by eliminating the normal patterns that emerge from statistical calculation. This technique can be used widely in most organizations, either 1) across any Windows-based server infrastructure or 2) by collecting Sysinternals data from all Windows clients, and it covers the majority of security operations use cases. Regardless of whether the organization already has an endpoint security solution, the wealth of detailed information provides significant value in assessing the security of an endpoint. Sysinternals data can also add context to IT operations and service analysis.

DATA SOURCES

The data source required to detect potential malware activity on a Windows endpoint is Sysinternals data collected through the Windows event log using Sysmon. An organization can collect this detailed information simply by installing Sysmon, provided by Microsoft, then installing a Splunk forwarder and defining what needs to be collected and filtered. This Sysinternals data is where finding indications of odd activity begins, but additional correlation is needed to trace how and what got infected; ingesting proxy, IDS/IPS, DNS, and Stream data is recommended to root-cause the route of a potential infection, determine its scope, and mitigate the incident. Analyzing Sysinternals data in Splunk provides strong indications of compromise for detecting potential malware, whether it is known or unknown.

  • Windows sys internals using sysmon through event log (Required)
  • Proxy, IDS/IPS, DNS, Stream (Recommended for further investigation beyond detection)

With Sysmon installed, the event log provides the following details for collection into Splunk:

  • Process creation including full command line with paths for both current and parent processes
  • Hash of the process image using either MD5, SHA1 or SHA256
  • Process GUIDs that provide static IDs for better correlation, as opposed to PIDs that are re-used by the OS
  • Network connection records from the host to another, includes source process, IP address, Port number, hostnames and port names for TCP/UDP
  • File creation time changes
  • Boot process events that may include kernel-mode malware

[Screenshot: Example of a Windows event log collected through Sysmon]

COLLECTION OF WINDOWS ACTIVITIES EVENTS

Collecting various information from a Windows infrastructure with the Splunk forwarder is easy.

To collect and integrate Sysmon data into Splunk, here are a few simple steps:

  1. Install Sysmon on your Windows-based endpoints. Sysmon can be downloaded from http://technet.microsoft.com/en-us/sysinternals/dn798348
  2. Install the Splunk forwarder on the endpoint; the forwarder will forward Sysinternals messages in real time to your Splunk instance
  3. Install the Splunk Add-on for Microsoft Sysmon, which configures Splunk to extract fields and map them to the CIM: https://splunkbase.splunk.com/app/1914/

Once Sysmon is installed, deciding what you want from the endpoint lies at your fingertips through Splunk's "Data Inputs"; just select the event log channels to transport to the Splunk indexer.
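
If you prefer configuration files over the UI, a rough sketch of the equivalent inputs.conf stanza on the forwarder could look like this (the Sysmon add-on may ship its own inputs; renderXml is assumed here so events arrive in the XML format shown below):

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true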


Now that you have events in Splunk, there is a wealth of information available to you. The basic search to retrieve the Sysinternals events from the Splunk index is:

sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational"

The following is an example of data collected by Splunk. The Windows event log is converted into XML, containing all of the different fields in a single-line event.

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Sysmon' Guid='{5770385F-C22A-43E0-BF4C-06F5698FFBD9}'/><EventID>1</EventID><Version>5</Version><Level>4</Level><Task>1</Task><Opcode>0</Opcode><Keywords>0x8000000000000000</Keywords><TimeCreated SystemTime='2016-02-04T01:58:00.125000000Z'/><EventRecordID>73675</EventRecordID><Correlation/><Execution ProcessID='1664' ThreadID='1856'/><Channel>Microsoft-Windows-Sysmon/Operational</Channel><Computer>FSAMUELS</Computer><Security UserID='S-1-5-18'/></System><EventData><Data Name='UtcTime'>2016-02-04 01:58:00.125</Data><Data Name='ProcessGuid'>{6B166207-B028-56B2-0000-001082512900}</Data><Data Name='ProcessId'>4544</Data><Data Name='Image'>C:\Program Files\apps\Update\Update.exe</Data><Data Name='CommandLine'>"C:\Program Files\apps\Update\Update.exe" /ua /installsource scheduler</Data><Data Name='CurrentDirectory'>C:\Windows\system32\</Data><Data Name='User'>NT AUTHORITY\SYSTEM</Data><Data Name='LogonGuid'>{6B166207-A731-56B2-0000-0020E7030000}</Data><Data Name='LogonId'>0x3e7</Data><Data Name='TerminalSessionId'>0</Data><Data Name='IntegrityLevel'>System</Data><Data Name='Hashes'>SHA1=9D04597F8CFC8841DFA876487DE965C0F05708CC</Data><Data Name='ParentProcessGuid'>{6B166207-B028-56B2-0000-0010AC4F2900}</Data><Data Name='ParentProcessId'>2576</Data><Data Name='ParentImage'>C:\Windows\System32\taskeng.exe</Data><Data Name='ParentCommandLine'>taskeng.exe {A26A5EC9-73E5-4AE9-A492-04500B20692F} S-1-5-18:NT AUTHORITY\System:Service:</Data></EventData></Event>

Collected in XML format, the Sysinternals events are all parsed into fields in Splunk with the help of the Splunk Add-on for Sysmon. Browsing through complex Sysinternals events is now easy: just point and click on the parsed fields.


SEARCHING FOR PROCESS CREATION ANOMALIES 

The challenge is: how do we protect against the unknown? Unknown means there is no list to verify against; things are not simply defined as right or wrong, but what's right or wrong is derived from the data itself. Based on calculated results, with an understanding of what is the majority versus the minority and the other analytical details associated with them, we can clearly distinguish normal from anomalous.

Objective of the analytics approach:

The process of detecting stealthy changes in activity entails finding anomalies by comparing what happened and existed before with what is happening now.

The elements to validate different aspects of determining anomalies are:

  • What is pre-existing and what is new?
  • What are the statistics on pre-existing versus new, to validate what is old (and presumably normal) and what is new (and needs to be validated)?
  • What are the time relationships between existing and new entities?
  • How much association does an existing entity have with other entities, such as the number of assets it is associated with?

With insight into these questions about validating anomalies, we can now eliminate the "normal" to filter out the anomalies that most need to be evaluated and analyzed.

These kinds of distinctions are possible when the statistics of things are compared relative to each other.

Windows Sysinternals provides extensive detail for understanding the status of endpoints in terms of endpoint security and vulnerability. One of the notable powers of analyzing Sysinternals data is the ability to gain visibility into which processes and files are installed and executed. Events related to the execution of processes indicate activity on the system and provide a critical source of information that helps security analysts understand:

  • What process have been executed?
  • What directory did the executable originate from?
  • What is the parent process that executed the executable?
  • What is the fingerprint of the executed process?

All of these insights gained from Sysinternals data are a critical part of the gathered system activity information used when applying analytics to find anomalies in the processes and actions executed on an endpoint. With the data collected from the different Sysmon sources, this is an easy task to do. Using the hash information Sysmon attaches to each process creation event (MD5, SHA1, or SHA256), an analyst can identify the different versions of a given system executable.

For example, why do we care about the full path of a process like "cmd.exe"? Even though "cmd.exe" is a legitimate-looking executable on Windows, an odd path for the binary can flag it as a potential "black sheep". What about an MD5 hash of the "cmd.exe" binary that is different from every other "cmd.exe" in the network? That is a clear indication of file manipulation, potentially malicious code hiding as a legitimate executable.

MALWARE PROCESS HIDING AS EXISTING OS OR APPLICATION PROCESS

Most PC users have had the experience of looking at the Windows process monitor and finding no particular problems; the OS seems to be running all the normal processes. Yet regardless of how it appears, the user knows the PC is infected with some kind of malware, witnessing, for example, the browser being hijacked to an odd site. Malware that runs as if it were a normal process is an example of a "black sheep" disguising itself as a normal OS process. How can this kind of "black sheep" be detected?

What about advanced malware, a type of malware that has never been seen or detected by an anti-malware product? These types of malware execute on an endpoint without giving most anti-malware software any chance to raise a red flag, because the signature of the new executable is not known. Could this kind of problem be tackled using analytics? Yes: analytics that compares criteria across the different executables and fingerprints detected in the data.

To find this, the hashes on the Sysmon events play a key role. The hash attached to a Sysmon process creation event represents a unique fingerprint of an executable. Using analytics, if we compare the existing fingerprints of trusted executables against a new fingerprint for the same executable that started recently, we can find the processes that are anomalous. These detailed Sysmon events about created processes and their associated hashes can be analyzed with a simple Splunk SPL summation; grouping by executable name and hash does the trick of finding which processes are potentially malware.

This lists unique counts of executables regardless of what the executables are disguised as. A hash fingerprint unambiguously identifies a unique file or executable that was executed. On top of that, summing the counts of those unique hashes indicates what needs to be looked at more closely.

Search Syntax Below:

sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" Image=*svchost.exe
| dedup Computer
| eval TIME=strftime(_time,"%Y-%m-%d %H:%M")
| stats first(TIME) count by Image, Hashes

 

This search finds all of the same executable names with different hashes.


Based on the search results, instances of svchost.exe with the exact same path were found, but notice that the hashes are different. This is because there are two variants of the Windows OS: this infrastructure runs a good balance of Windows 7 and Windows 8 hosts. This seems normal because, given the size of the network with 200+ hosts, the hashes for a critical system process like svchost.exe are distributed in proportion to each Windows version. Looking at the sums of the instances, knowing the basic fact that the infrastructure runs two versions of the OS, and seeing a healthy count for both results, we can conclude that things look normal.


Now let's look at the following example. Imagine the same search returns the results above. The result shows a similar distribution for the first two majority hashes, but there is a third entry, found on far fewer hosts, with a new SHA1 hash. The same executable name with a different hash and a significantly lower count of process creations means this is a new executable running under the same name as a system binary. The summed count of 2 indicates a rare frequency, unlikely for a system executable, unless another new OS version with different system binaries has come onto the network. If that's not the case, then this is a suspicious hash that needs to be looked up on Google.

Also, "first(TIME)" indicates the very first time the anomalous executable was created, showing it is definitely a new process compared to the normal svchost.exe executables created much longer ago. The first-time function provides insight into what existed before versus what is new, and correlating that with the summed counts really determines what is abnormal. The executable with the third hash, a newer timestamp, and a minor number of occurrences is most likely malware that the anti-virus program didn't detect.


Make sure to verify which hosts are associated with the hashes for the two different svchost.exe variants, as well as which hosts are involved in potential malware activity. This can be accomplished by listing the unique values of the "Computer" field from the Sysmon data, using the values(Computer) function.

sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" Image=*svchost.exe
| dedup Computer
| eval TIME=strftime(_time,"%Y-%m-%d %H:%M")
| stats first(TIME), count, values(Computer) by Image, Hashes


After this analysis of finding a process with new hashes, we can define a couple of conditions that indicate potential malware sneaking in as a system process:

  • The process may look normal from the path and name of the executable, but the hash of the new executable is different from the existing historical hashes
  • The frequency of process creation, in contrast with the existing executable hashes, is significantly different

Understanding the nature of this manipulation tactic, we can define a query that filters automatically by applying a couple of calculation steps that consider the quantitative contrast in process creation counts between existing and new executable hashes. Continuing the search, eventstats adds the sum of total occurrences, and an eval calculates a percentage to show the relative difference between one executable entity and another.

Search Syntax Below :

sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" Image=*svchost.exe
| dedup Computer
| eval TIME=strftime(_time,"%Y-%m-%d %H:%M")
| stats first(TIME) count by Image, Hashes
| eventstats sum(count) as total_host
| eval majority_percent=round((count/total_host)*100,2)


Now, how do we define a search (rule) to have Splunk look for these kinds of odd executables?

Expanding on the previous relative quantity calculation, applying a filter for "majority_percent<5" will eliminate the normal groups and expose the anomalous executable group based on a relative threshold.

sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" Image=*svchost.exe
| dedup Computer
| eval TIME=strftime(_time,"%Y-%m-%d %H:%M")
| stats first(TIME) count by Image, Hashes
| eventstats sum(count) as total_host
| eval majority_percent=round((count/total_host)*100,2)
| where majority_percent<5


This kind of recipe can be applied as a Splunk Enterprise saved search or an Enterprise Security correlation search to do the analysis job for us and automatically alert the analyst when an anomalous process starts up on any of the Windows workstations running on the network; a sketch of such a saved search follows.
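
As a hedged sketch of what that could look like as a scheduled saved search (stanza name, schedule, and alert settings are assumptions, and the recipient address is a placeholder), savedsearches.conf might contain something like:

[Sysmon - Anomalous svchost hash detected]
search = sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" Image=*svchost.exe \
| dedup Computer | eval TIME=strftime(_time,"%Y-%m-%d %H:%M") \
| stats first(TIME) count by Image, Hashes \
| eventstats sum(count) as total_host \
| eval majority_percent=round((count/total_host)*100,2) \
| where majority_percent<5
enableSched = 1
cron_schedule = 0 * * * *
dispatch.earliest_time = -24h
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = soc-team@example.com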


SUMMARY

By using Splunk Enterprise and Microsoft Sysmon, security analysts gain significant power to understand detailed endpoint activity as well as the ability to detect advanced and unknown malware activity. Statistical analysis over detailed endpoint data expresses risk in quantitative values, letting analysts easily profile the behavior of hosts compromised by adversaries and define rules using those values as thresholds. This empowers security analysts to apply similar techniques to many other problems and use cases that can only be addressed with an analytical approach. An analytical approach that contextually distinguishes differences and anomalies enables security operations to detect advanced threats faster and ultimately minimize business impact.

Android ANR troubleshooting with MINT


Being involved with shippable software for mobile and desktop, I realize that there is a class of problems that are not easy to troubleshoot.

Crashes are probably the easiest to reproduce in QA and Engineering environments, and so they are easier to fix. But one class of problems, which in many cases requires more time and possibly a code redesign, is application sluggishness. This problem usually falls into the gray area of software development that everybody tries to address during the design and implementation stages. Application sluggishness seldom shows up in QA or other controlled environments, but it always happens when the actual user is trying to use the app.

Modern mobile apps are complex creatures. A lot of things are happening as a result of user input or internal processes in the background that are also trying to update the UI. Apps can also issue many backend calls to keep the UI up to date. 

We all like a smooth UI experience with our apps. Android addresses UI issues by implementing an Application Not Responding (ANR) mechanism, which forcefully terminates non-responding apps. The timeout is enforced by the system and the data is available in the LogCat.

In the 5.1 release of the Splunk MINT SDK for Android, we’ve given you a way to monitor and troubleshoot your app’s ANR issues. Just opt-in for ANR monitoring for your app by calling:

Mint.startANRMonitoring(5000/*timeout*/, true/*ignoreDebugger*/);

ANR events will then be available in Splunk Enterprise. Run this search to view them:

sourcetype="mint:error" "extraData.ANR"=true

Example:

Please note that the stacktrace field in the event should be interpreted as a thread dump of your application threads (see the link to the documentation and example below).

Our monitoring feature will help you identify common causes of ANR, such as application deadlocks and unexpectedly long-running or stalled HTTP requests.

Additional Reading: 

 

Tracing Objective-C Methods


You can write very fast programs in Objective-C, but you can also write very slow ones. Performance isn’t a characteristic of a language but of a language implementation, and more importantly, of the programs written in that language. Performance optimization requires that you measure the time to perform a task, then try algorithm and coding changes to make the task faster.

The most important performance issue is the quality of the libraries used in developing applications. Good quality libraries reduce the performance impact. So to help you improve performance in your apps, we've updated the Splunk MINT SDK for iOS to provide an easy way to trace method performance using macros.

To trace an Objective-C method, add the MINT_METHOD_TRACE_START macro to the beginning of your method and the MINT_METHOD_TRACE_STOP macro to the end of it.

For example:

- (void)anyMethod {
    MINT_METHOD_TRACE_START
    ...
    MINT_METHOD_TRACE_STOP
}

If you are not using ARC, use the MINT_NONARC_METHOD_TRACE_STOP macro to avoid a memory leak issue.

The trace method automatically picks up performance metrics for your method and sends them to Splunk. The trace report contains the following fields:

  • method
  • elapsedTime
  • threadID

To view the event information, run the following search in Splunk:

index=mint sourcetype=mint:methodinvocation

Here is an example event:

{
    apiKey: 6d8c9a39
    appEnvironment: Staging
    appRunningState: Background
    appVersionCode: 1
    appVersionName: 3.1
    batteryLevel: -100
    carrier: NA
    connection: WIFI
    currentView: MainViewController
    device: iPad5,3
    elapsedTime: 1105708
    extraData: {
    }
    locale: GB
    method: -[MainViewController mintMeta]
    msFromStart: 1450
    osVersion: 9.2.1
    packageName: WhiteHouse
    platform: iOS
    remoteIP: 185.75.2.2
    screenOrientation: Portrait
    sdkVersion: 5.1.0
    session_id: E9F4BE3D-0CEB-4461-9442-145101E5EE67
    state: CONNECTED
    threadID: 10759
    transactions: [
    ]
    userIdentifier: XXXXXXXX
    uuid: XXXXXXXX
}

iOS Memory Warnings


Memory on mobile devices is a shared resource, and apps that manage memory improperly run out of memory and crash. iOS manages the memory footprint of an application by controlling the lifetime of all objects using object ownership, which is part of the compiler and runtime feature called Automatic Reference Counting (ARC). When you start interacting with an object, you’re said to own that object, which means that it’s guaranteed to exist as long as you’re using it. When you’re done with the object, you relinquish ownership and if the object has no other owners, the OS destroys the object and frees up the memory. Not relinquishing ownership of an object causes memory to leak and the app to crash. ARC takes away much of the pain of memory management, but you still need to be careful with the retain cycle, global data structures and lower-level classes that don’t support ARC.

A memory warning is a signal that is sent to your app when it leaks. If the app terminates because of a memory leak, the app won’t generate a crash report. Because of that, you might not be able to find and fix the leak in your production app unless you already implemented the memory warning delegate to free up memory in the ViewController class.

To help you manage memory, the Splunk MINT SDK for iOS has a memory warning feature that collects the memory footprint and the class that received the memory warning. When an app terminates but doesn’t send a crash report, that means the app received a memory warning and sent the memory footprint to Splunk. So, go check your MINT data in Splunk Enterprise for recent memory warnings, which might help you fix memory issues in your mobile apps.

The Splunk MINT SDK for iOS automatically starts to monitor for memory warnings on initialization. There is no need to do anything extra.

The memory warning information contains the following fields:
• className
• totalMemory
• usedMemory
• wiredMemory
• activeMemory
• inactiveMemory
• freeMemory
• purgableMemory

To view memory information, run a search in Splunk Web for the mint:memorywarning sourcetype, for example:

index=mint sourcetype=mint:memorywarning

Here is an example event:

{
    activeMemory: 9118
    apiKey: 12345
    appEnvironment: Testing
    appRunningState: Foreground
    appVersionCode: 1
    appVersionName: 1.0
    batteryLevel: -100
    carrier: NA
    className: LoginViewController
    connection: WIFI
    currentView: LoginViewController
    device: x86_64
    extraData: {
    }
    freeMemory: 3040
    inactiveMemory: 1511
    locale: US
    message: Received memory warning
    msFromStart: 4334
    osVersion: 9.3
    packageName: SplunkTests
    platform: iOS
    purgableMemory: 210
    remoteIP: 204.107.141.240
    screenOrientation: Portrait
    sdkVersion: 5.0.0
    session_id: 1C048628-A709-44BC-9110-25069C7FC736
    state: CONNECTED
    totalMemory: 16384
    transactions: { [+]
    }
    usedMemory: 454
    userIdentifier: XXXXXXXX
    uuid: XXXXXXXX
    wiredMemory: 2112
}

To monitor memory warnings as they happen, create a real-time alert like this:
1. In Splunk Web, run this search: index=mint sourcetype=mint:memorywarning
2. Select Save As > Alert.
3. For Alert Type, click Real-time.
4. Click Add Actions to select an alert action.
5. Click Save.

Smart AnSwerS #75


Hey there community and welcome to the 75th installment of Smart AnSwerS.

The “Where Will Your Karma Take You” contest officially ended this past Monday, and the winners were announced in a Splunk blog post by piebob earlier this week. BIG congratulations to sundareshr, skoelpin, and jkat54 for accruing the most karma points during the competition period, earning them each a free pass to .conf2016! If any of these guys have helped you solve your issues on Splunk Answers, be sure to thank them for being such awesome community contributors if you happen to cross paths. :)

Check out this week’s featured Splunk Answers posts:

How to encode a URL for a Hipchat notification alert action if there is no urlencode() function?

floriancoulmier wanted to have a link prefilled with elements from the alert to display on a dashboard, but needed the URL to be encoded to handle special characters so the link could be opened by a browser. jkat54 created a custom urlencode command for the job, sharing the Python code he mustered up and how to configure commands.conf to make it ready for use.
https://answers.splunk.com/answers/441031/how-to-encode-a-url-for-a-hipchat-notification-ale.html
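
jkat54's actual code is on the linked answer; purely as a hedged sketch of the general shape of such a command (hypothetical file and field names, using the legacy Intersplunk interface), it could look something like this:

# bin/urlencode.py
import urllib
import splunk.Intersplunk

# read the search results piped in by Splunk
results, dummyresults, settings = splunk.Intersplunk.getOrganizedResults()

# URL-encode a hypothetical "url" field into "url_encoded"
for result in results:
    result["url_encoded"] = urllib.quote_plus(result.get("url", ""))

splunk.Intersplunk.outputResults(results)

# default/commands.conf
[urlencode]
filename = urlencode.py

It would then be invoked at the end of a search, e.g. ... | urlencode.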

How to set a default timezone for an entire multisite Splunk deployment?

wweiland was looking for a way to use a default timezone for all users in a multisite environment, but didn’t know what setting needed to be configured and on what Splunk instances. lguinn explains that the two main locations for timezone are during the data ingestion process and at search time. She notes the configuration for data ingestion must be done on forwarders and indexers with props.conf. For search time, however, the timezone has to be explicitly set for each role or defined in user-prefs.conf on the search heads.
https://answers.splunk.com/answers/439363/how-to-set-a-default-timezone-for-an-entire-multis.html
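
As a rough illustration of the two places lguinn describes (the timezone values are examples only, and the user-prefs stanza name is an assumption): an index-time override goes in props.conf on the forwarders/indexers, and a search-time default for all users can go in user-prefs.conf on the search heads.

# props.conf on forwarders/indexers: index-time timezone for a data source
[my_sourcetype]
TZ = America/Los_Angeles

# user-prefs.conf on the search heads: search-time default for all users
[general_default]
tz = America/Los_Angeles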

Why am I getting error “’newline’ is an invalid keyword argument” using the CLI to run my Python script that writes a CSV file?

jenniferleenyc created a Python script in $SPLUNK_HOME/bin, but was getting an error every time she tried to run it in the command line. Luckily, richgalloway and Masa came in to help her understand how scripted and modular inputs work in Splunk. They provided examples of proper syntax to run them via CLI and supporting Splunk documentation for further education.
https://answers.splunk.com/answers/439146/why-am-i-getting-error-newline-is-an-invalid-keywo.html

Thanks for reading!

Missed out on the first seventy-four Smart AnSwerS blog posts? Check ‘em out here!
http://blogs.splunk.com/author/ppablo

How to Create a Modular Alert


What’s a Modular Alert (and why should I care)?

Modular Alerts is a feature included in Splunk 6.3 and later that allows Splunk to actively respond to events by sending alerts, gathering more data, or performing other actions. Splunk includes an API that makes it easy for people to write their own apps with modular alerts that can be shared on apps.splunk.com. See the official docs for more detailed information.

Modular Alerts can be used for things such as:

In this post, I’ll walk you though how to write a Modular Alert. The entire example app source code is posted on Github here and the installable example app is here.

Alright, I want to make one

Ok, glad you agree partner. Let's do this. We are going to make a modular alert that just logs something inside Splunk. This example won't do much, but you should be able to see how to expand it into a new modular alert of your own.

What are the parts?

Before we get started, let's review the components of a Modular Alert. Below is the list of files that an app should include for a functioning Modular Alert:

  •  README/
    • alert_actions.conf.spec (declares the alert action and defines the supported parameters)
  • bin/
    • <modular_alert_name>.py (the Python class containing the code executed by the modular alert)
  • appserver/
    • static/
      • appIcon.png (the app icon, can be used as the icon for the modular alert)
  • default/
    • setup.xml (optional, a setup page for configuring global default values for the modular alert)
    • alert_actions.conf (defines details about the modular alert and app default values for the parameters)
    • data/ui/alerts/
      • <modular_alert_name>.html (an HTML template defining the UI of the modular alert configuration page)

Step 1: make a basic app

To start, let's make a basic app. This will consist of an app.conf file that describes your app. The app.conf file will go under the default directory. Completing the app.conf is fairly intuitive, but make sure to check out the documentation if you need help. See below for the content of the app.conf for this sample app:

[launcher]
version = 1.0
description = An example of a Splunk modular alert
author = LukeMurphey

[package]
id = splunk_modular_alert_example

[ui]
is_visible = false
label = Splunk Modular Alert Example

The app is set to be invisible (is_visible=false) because it doesn't include views and thus doesn't need to be included in the list of apps on the Splunk home page. I also included some meta-data that gives everyone read access and gives administrators write access. Once this is all done, our app will include the following files:

  • modular_alert_example/
    • metadata/
      • default.meta
    • default/
      • app.conf

See the change here in more detail.
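
For reference, the permissions just described would look roughly like this in metadata/default.meta (a sketch; the file in the example repository may differ):

[]
access = read : [ * ], write : [ admin ]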

Deploy this onto a Splunk box and restart it. Once you do, you should see the app listed in the list of installed apps (look for “Splunk Modular Alert Example”):

Example app successfully installed

Step 2: make the alert action conf files and the icon

Now, let's start the process of making the Modular Alert. To do so, we will need to make two files: alert_actions.conf and alert_actions.conf.spec.

Make alert_actions.conf.spec

The alert_actions.conf.spec file describes our alert action to Splunk and defines the fields that the alert action expects. The file needs to be placed under the README directory. This results in the following files in our app:

  • modular_alert_example/
    • metadata/
      • default.meta
    • default/
      • app.conf
    • README/
      • alert_actions.conf.spec

The file itself will include the list of fields under a stanza which indicates the name of the alert action. This modular alert action is just going to log a message, so I'll call it "make_a_log_message". It takes two parameters, a message to log and a value indicating the importance. This results in the following:

[make_a_log_message]
param.message = <string>
param.importance = <integer>

See the change here in more detail.

Make alert_actions.conf

Next, make the alert_actions.conf file. See the spec for alert_actions.conf for details. The file will go under the default directory. This results in the following:

  • modular_alert_example/
    • metadata/
      • default.meta
    • default/
      • app.conf
      • alert_actions.conf
    • README/
      • alert_actions.conf.spec

The stanza for the alert action needs to match the stanza name in the alert_actions.conf.spec file you previously created. The label field is used by Splunk when it lists the available modular alerts and the description provides more details on what it does. The icon_path parameter tells Splunk where the icon is that should be used to represent the alert action. In this case, we are going to use the app icon (appIcon.png). The param fields are used to provide a default value for the alert action. In this case, I am providing a default importance of 0 (zero). The file looks like this:

[make_a_log_message]
is_custom = 1
label = Make a log message
description = Makes a log message in the _internal index (an example of a modular alert)
icon_path = appIcon.png
payload_format = json

# Default value for importance
param.importance = 0

See the change here in more detail.

Make an icon

Now, let's make an icon so that the list of alert actions looks a little nicer. To do this, I'll include a 36×36 PNG file in appserver/static. This results in the following:

  • modular_alert_example/
    • metadata/
      • default.meta
    • default/
      • app.conf
      • alert_actions.conf
    • README/
      • alert_actions.conf.spec
    • appserver/
      • static/
        • appIcon.png

In this case, I’m going to use this icon both for the app icon as well as the icon for the alert action. Make a different icon (e.g. “appserver/static/alertAction.png”) if you want to use different files.

See the change here in more detail.

Deploy this onto a Splunk box and restart it. Then, navigate to “Alert actions” in the “settings” menu at the top right of the Splunk web UI. You should see your alert action listed (look for “Make a log message”):

[Screenshot: the "Make a log message" alert action listed]

Step 3: make the modular alert configuration view

Splunk gives you the ability to define an HTML stub that provides a UI for editing your alert action entries. This HTML stub will be rendered when someone configures your alert action for a saved search. To make this page, create a file under default/data/ui/alerts/ with the file name “make_a_log_message.html”. The file name needs to match the stanza name of the alert action.

This results in the following:

  • modular_alert_example/
    • metadata/
      • default.meta
    • default/
      • app.conf
      • alert_actions.conf
      • data/
        • ui/
          • alerts/
            • make_a_log_message.html
    • README/
      • alert_actions.conf.spec
    • appserver/
      • static/
        • appIcon.png

See the change here in more detail.
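
The repository linked above has the exact file; as a hedged sketch of what such an HTML stub generally looks like, the inputs are plain form controls whose name attributes follow the action.<alert_name>.param.<param_name> convention:

<form class="form-horizontal form-complex">
  <div class="control-group">
    <label class="control-label" for="make_a_log_message_message">Message</label>
    <div class="controls">
      <input type="text" name="action.make_a_log_message.param.message" id="make_a_log_message_message" />
      <span class="help-block">The message to write to the log</span>
    </div>
  </div>
  <div class="control-group">
    <label class="control-label" for="make_a_log_message_importance">Importance</label>
    <div class="controls">
      <input type="text" name="action.make_a_log_message.param.importance" id="make_a_log_message_importance" />
    </div>
  </div>
</form>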

Deploy this onto a Splunk box and restart it. Then, view the dialog by doing the following:

  1. Open the search view and run some search (e.g. “* | head 1”)
  2. Click “Save As” > “Alert”
  3. Click “Add Actions” and select “Make a log message”

You should see your alert action configuration page:

Alert action config page

Step 4: make the modular alert Python class

Thus far, we have registered an alert action but we haven't enabled it to do anything. In this step, we will fill in the code to make it do something.

Get the modular alert base class

I wrote a class that simplifies the creation of a Modular Alert. You can download it from Github. The license is intentionally permissive so that you can use it in your own apps (even paid ones). Place this file under the directory bin/modular_alert_example_app. Additionally, make an empty file named “__init__.py” in the directory so that Python will treat this as a module.

This results in the following:

  • modular_alert_example/
    • metadata/
      • default.meta
    • default/
      • app.conf
      • alert_actions.conf
      • data/
        • ui/
          • alerts/
            • make_a_log_message.html
    • README/
      • alert_actions.conf.spec
    • appserver/
      • static/
        • appIcon.png
    • bin/
      • modular_alert_example_app/
        • __init__.py
        • modular_alert.py

See the change here in more detail.

Create your modular alert class

Now, let's make the class that will do the work. To do this, make a Python file named make_a_log_message.py under the bin directory.

This results in the following:

  • modular_alert_example/
    • metadata/
      • default.meta
    • default/
      • app.conf
      • alert_actions.conf
      • data/
        • ui/
          • alerts/
            • make_a_log_message.html
    • README/
      • alert_actions.conf.spec
    • appserver/
      • static/
        • appIcon.png
    • bin/
      • modular_alert_example_app/
        • __init__.py
        • modular_alert.py
      • make_a_log_message.py

Start populating your make_a_log_message.py with some imports:

import logging
import sys
from modular_alert_example_app.modular_alert import ModularAlert, Field, IntegerField, FieldValidationException

See the change here.

This will import the base class and some classes we will need to make our alert work. I am importing the Field and IntegerField classes because the modular alert will need to validate the string field "message" and the integer field "importance".

Next, add the class for your modular alert, which sub-classes ModularAlert. Here I have made a class which inherits from ModularAlert and implements a constructor. The constructor includes a list of the parameters that the alert action expects (based on the contents of the alert_actions.conf.spec file). This allows the Python code to make sure that the incoming values are valid and to convert them to the appropriate Python objects. For example, the importance field will be automatically converted to an integer since it uses the IntegerField class. Make sure that your constructor calls the superclass constructor too. Don't worry about the contents of the run function yet; we will fill that out next.

import logging
import sys
from modular_alert_example_app.modular_alert import ModularAlert, Field, IntegerField, FieldValidationException

class MakeLogMessageAlert(ModularAlert):
    """
    This alert just makes a log message (its an example).
    """
    
    def __init__(self, **kwargs):
        params = [
                    IntegerField("importance"),
                    Field("message")
        ]
        
        super(MakeLogMessageAlert, self).__init__(params, logger_name="make_a_log_message_alert", log_level=logging.INFO )
    
    def run(self, cleaned_params, payload):
        pass

See the change here.

Next, populate the run function with the code that makes your modular alert do something. The run function will be called with your parameters in the "cleaned_params" argument. These arguments will already be converted to Python objects (e.g. the importance field will be an integer). I am using the get() function with a default value as the second argument so that I can define a default value in case a parameter wasn't provided. You can use the logger instance in self.logger to post information about what your modular alert is doing; this is important for debugging in case something goes wrong. This results in:

import logging
import sys
from modular_alert_example_app.modular_alert import ModularAlert, Field, IntegerField, FieldValidationException

class MakeLogMessageAlert(ModularAlert):
    """
    This alert just makes a log message (its an example).
    """
    
    def __init__(self, **kwargs):
        params = [
                    IntegerField("importance"),
                    Field("message")
        ]
        
        super(MakeLogMessageAlert, self).__init__(params, logger_name="make_a_log_message_alert", log_level=logging.INFO )

    def make_the_log_message(self, message, importance):
        """
        This is the function that does what this modular alert is supposed to do.
        """
        self.logger.info("message=%s, importance=%i", message, importance)

    def run(self, cleaned_params, payload):
        
        # Get the information we need to execute the alert action
        importance = cleaned_params.get('importance', 0)
        message = cleaned_params.get('message', "(blank)")
        
        self.logger.info("Ok, here we go...")
        self.make_the_log_message(message, importance)
        self.logger.info("Successfully executed the modular alert. You are a total pro.")

See the change here.

Finally, you will need to add some boilerplate code that will get your modular alert to execute in Splunk. This code makes an instance of your class and gets it to communicate with Splunk over standard input:

import logging
import sys
from modular_alert_example_app.modular_alert import ModularAlert, Field, IntegerField, FieldValidationException

class MakeLogMessageAlert(ModularAlert):
    """
    This alert just makes a log message (its an example).
    """
    
    def __init__(self, **kwargs):
        params = [
                    IntegerField("importance"),
                    Field("message")
        ]
        
        super(MakeLogMessageAlert, self).__init__(params, logger_name="make_a_log_message_alert", log_level=logging.INFO )

    def make_the_log_message(self, message, importance):
        """
        This is the function that does what this modular alert is supposed to do.
        """
        self.logger.info("message=%s, importance=%i", message, importance)

    def run(self, cleaned_params, payload):
        
        # Get the information we need to execute the alert action
        importance = cleaned_params.get('importance', 0)
        message = cleaned_params.get('message', "(blank)")
        
        self.logger.info("Ok, here we go...")
        self.make_the_log_message(message, importance)
        self.logger.info("Successfully executed the modular alert. You are a total pro.")
        
        
"""
If the script is being called directly from the command-line, then this is likely being executed by Splunk.
"""
if __name__ == '__main__':
    
    # Make sure this is a call to execute
    if len(sys.argv) > 1 and sys.argv[1] == "--execute":
        
        try:
            modular_alert = MakeLogMessageAlert()
            modular_alert.execute()
            sys.exit(0)
        except Exception as e:
            print >> sys.stderr, "Unhandled exception was caught, this may be due to a defect in the script:" + str(e) # This logs general exceptions that would have been unhandled otherwise (such as coding errors)
            raise
        
    else:
        print >> sys.stderr, "Unsupported execution mode (expected --execute flag)"
        sys.exit(1)

See the change here.

Deploy this onto a Splunk box and restart it. Then, make a search and add your alert action so that you can test it. You should see the output once your search executes. You can then view the results of the alert action's execution by running the following search:

index=_internal sourcetype=splunkd component=sendmodalert action="make_a_log_message"

Alert action output

If you do not see output, run the following search and look for errors:

index=_internal sendmodalert sourcetype=splunkd

Step 5: make a setup page to set global defaults (optional)

Optionally, you can create a setup.xml page that sets default values for the Modular Alert. This is useful for configuration items such as authentication, which you may not want to have repeated for every instance of the alert action. This can also be used to modify the default values presented when configuring a modular alert.

To make a setup page, create a file named “setup.xml” in default/setup.xml. This results in the following file-structure:

  • modular_alert_example/
    • metadata/
      • default.meta
    • default/
      • app.conf
      • alert_actions.conf
      • setup.xml
      • data/
        • ui/
          • alerts/
            • make_a_log_message.html
    • README/
      • alert_actions.conf.spec
    • appserver/
      • static/
        • appIcon.png
    • bin/
      • modular_alert_example_app/
        • __init__.py
        • modular_alert.py
      • make_a_log_message.py

Now populate the setup.xml file with a setup page. See the spec file for details. In this case, I just want a setup page that edits the “message” parameter of the “make_a_log_message” alert action. The setup page ends up looking like this:

<setup>
  
  <block title="Global Settings for the Alert Action" endpoint="admin/alert_actions" entity="make_a_log_message">
	  
      <text>An example of a setup page for the make-a-log-message example modular alert action
      </text>

      <input field="param.message">
        <label>Enter a default message</label>
        <type>text</type>
      </input>
      
  </block>

</setup>

See the change here.

Deploy this onto a Splunk box and restart it. The Alert Action list in Splunk’s Manager will now show a link to your setup page (see the link titled “Setup Splunk Modular Alert Example”):

Modular alert list with setup page link

If you click the link to set up the app, you will be taken to the setup page. I modified the default message to be “This is the message set in setup.xml”:

Modular alert setup page

If you save the change, Splunk will set the default value to whatever you defined. This deploys a local version of alert_actions.conf (in /modular_alert_example/local/alert_actions.conf) which, for the example above, includes the following:

[make_a_log_message]
disabled = 0
param.message = This is the message set in setup.xml

Once the default is changed, any new alert actions will have this value in the message field by default. If I create a new instance of the alert action, I see the message field is already populated with the value I set above:

Modular alert dialog with default value for message

 

Conclusion

That’s basically it. Consider submitting your app to apps.splunk.com so that others can use it too. If you run into problems or get stuck, post a question on Splunk Answers.

Dashboard Digest Series – Episode 1


Welcome to the Dashboard Digest Series! Starting today, you can look forward to a different dashboard (and sometimes a collection of dashboards) in each episode, each created to solve one of the many hundreds of use cases found in just about any industry. The goal is to get your creative juices flowing and show you the art of the possible with Splunk. Some upcoming examples you can expect in this series are depicted in the collage below.

dashboard_collage_luedtke_v1

Each post will contain information about the dashboard such as data sources involved, Splunk version, Apps used, and general purpose. This is a great way to see new features and learn about tips and tricks on how to create these dashboards!

So let’s get started!

The first dashboard(s) in the series is from the Gamer’s Lounge at .conf2015 last year. Players were able to see their Team Fortress 2 stats displayed in real time in Splunk. There was a previous blog post about TF2 last March if you want to read more about it; however, the dashboards have been updated since then.

tf2_realtime_game_stats_v6.3

Purpose: Display meaningful statistics on overall game and player activity in Team Fortress 2, both historically and in real time.
Splunk Version: Splunk 6.3 and above
Data Sources: Team Fortress 2 Server Logs
Apps: Team Fortress 2 App, Dashboard 6.x Examples App

Tips ‘n Tricks:

There were actually several dashboards created for monitoring TF2, but I’m just going to show 2 of them for now. For the first dashboard, I used the following examples from the Dashboard 6.x Examples App:
1. “Table Element with Sparklines” – to create the black and orange rangemapped sparklines
2. “Custom Layout Width” – to create custom panel widths across the dashboard
3. “Table Icon Set (Rangemap)” – to add the images of guns and team colors in the table. See this blog for more details on how to do this.

Other than that I just added some custom heights to the panels to make everything line up nicely. Just a “simple” SimpleXML addition of <option name="height">200px</option> to your charts and single values.
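For reference, here is a minimal Simple XML sketch of where that option lives. The panel contents and search query are placeholders (the “tf2” index and sourcetype are hypothetical); only the <option name="height"> line is the piece being discussed:

<dashboard>
  <label>TF2 Stats (height example)</label>
  <row>
    <panel>
      <chart>
        <title>Example chart with a custom height</title>
        <search>
          <!-- hypothetical query; substitute your own TF2 search -->
          <query>index=tf2 sourcetype=tf2_server_log | timechart count by player</query>
        </search>
        <!-- the custom height discussed above -->
        <option name="height">200px</option>
      </chart>
    </panel>
  </row>
</dashboard>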

tf2_more_game_play_stats_v6.3

For the second dashboard I just used two examples from the Dashboard 6.x Examples App:
1. “Chart Color Options” – to create the orange theme
2. “Single Value With Color” – to customize the single value icons

That’s all for this round! See you next time for another episode of the Dashboard Digest Series, and as always, Happy Splunking!

– Stephen

 

Adding a Deployment Server / Forwarder Management to a new or existing Splunk Cloud (or Splunk Enterprise) Deployment


As part of the Cloud Adoption team, I work with Splunk Cloud (and Splunk Enterprise) customers on a daily basis, and I get asked quite frequently about how to optimize, and effectively reduce, administration overhead. This becomes especially relevant when I am talking with new or relatively new customers that are expanding from a handful of forwarders into hundreds or thousands of forwarders. And I always say: start with a Deployment Server.

For larger customers that have trained and experienced Splunk Administrators, or have engaged with Professional Services, this is a given and typically already exists in their deployments.

On the other end however, new Splunk Cloud and Splunk Enterprise customers may not have this luxury.

This article is for you.

I won’t go into full detail on how and why this works, but I will outline what configurations are needed and how this will scale, based on my field experience and what our best practices outline. The configurations here are based upon Splunk’s Professional Services Base Configurations toolset.

Assumptions..

This outlines how to configure a DS to deploy apps on your local network. From an architecture point of view, the Cloud Forwarder App contains the configs to send your data to your Splunk Cloud instance. This could be interchanged with an App that forwards to on-premise Indexers or an HF/UF Aggregation Tier, but that’s a different discussion…

Let’s get some terminology out of the way…

Deployment Server (DS) – A Splunk Enterprise instance that acts as a centralized configuration manager. It deploys configuration updates to other instances. Also refers to the overall configuration update facility comprising deployment server, clients, and apps.

Deployment Client – A remotely configured Splunk Enterprise instance. It receives updates from the deployment server. Typically these are Splunk Universal Forwarders or Heavy Forwarders.

Server Class – A deployment configuration category shared by a group of deployment clients. A deployment client can belong to multiple server classes.

Deployment App – A unit of content deployed to the members of one or more server classes.

So let’s dig in!

First off, we need a dedicated Splunk Heavy Forwarder (HF/HWF) instance that will be the DS. This instance should be configured and already sending its data to your Splunk Cloud instance, and this document assumes this is installed in /opt/splunk.

Here, a virtual machine is more than sufficient, and preferred. But follow the recommended spec for this: 4 cores, 8 GB of RAM, and sufficient disk space to handle your deployment apps. (Typically 50 GB is more than enough!) Additionally, while not required, a 64-bit Linux host is ideal and you will get the most mileage out of it.

This server also needs to be placed on the network in such a way that all the hosts can communicate with it. This means that firewalls will need to be opened up for the Splunk Management Port to the DS host (TCP:8089 by default) or multiple DS’s deployed.

Additionally, we need our “Apps”.

In this article we will deploy the Splunk_TA_nix, the “100_demostack_splunkcloud” app from our Splunk Cloud stack, and org_deployment_client. (More on this one later!)

Picture1

These Apps need to all be placed in the /opt/splunk/etc/deployment-apps/ directory. Once these are placed here, they will be visible in the Splunk Web Interface, from the Forwarder Management page.

Picture2
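As a rough illustration (assuming the DS is installed in /opt/splunk and the unpacked app directories sit in your current working directory), getting the apps into place can be as simple as:

# copy the deployment apps onto the deployment server
cp -R org_deployment_client 100_demostack_splunkcloud Splunk_TA_nix /opt/splunk/etc/deployment-apps/

# ask the DS to re-read its deployment apps and server classes
/opt/splunk/bin/splunk reload deploy-server

If the Forwarder Management page doesn’t show the new apps right away, the reload (or a restart) picks them up.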

From here, we are able to build our Server Classes. To do this, we want to consider our Deployment Topology. In a nutshell, a DS can filter based on hostname, IP address, or machine type. So we have a few options for deploying to all of our Clients.

Now we will setup our Server Classes..

First we setup a Server Class for All Clients. We are going to call this “All_Hosts”.

Picture3

Once we create this, we can add Apps and Clients to the Server Class.

Picture4

Let’s add our org_deployment_client and 100_demostack_splunkcloud Apps to the All_Hosts serverclass.

Picture5

And next, we need to add Clients. At this point, there are no clients connecting to this DS. However, since this class is for all clients, we add an include whitelist of ‘*’.

Picture6

Next, repeat the creation of a serverclass, but with the Splunk_TA_nix app added. For filtering, you are not able to filter on machine types until a client connects, so you need to filter on machine name or IP address until clients of those machine types phone home. In this example, I created a filter for a host name of “nix-*, ubuntu*”.

Picture7

Once this is done, your DS is ready and awaiting clients to connect!
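If you are curious what the Forwarder Management UI is doing under the hood, it is writing serverclass.conf (typically under /opt/splunk/etc/system/local/ on the DS). A rough, illustrative sketch of the two server classes described above might look something like this (treat the exact stanza contents as an assumption based on the example, not a copy/paste config):

[serverClass:All_Hosts]
whitelist.0 = *

[serverClass:All_Hosts:app:org_deployment_client]
# restarts left off to match the manual-restart approach described later in this post
restartSplunkd = false

[serverClass:All_Hosts:app:100_demostack_splunkcloud]
restartSplunkd = false

[serverClass:Nix_Hosts]
whitelist.0 = nix-*
whitelist.1 = ubuntu*

[serverClass:Nix_Hosts:app:Splunk_TA_nix]
restartSplunkd = false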

Connecting Clients..

Previously I mentioned the “org_deployment_client” app. Let’s revisit this now.

Typically, to configure a client to connect to a DS, we either add it through the CLI (via splunk set deploy-poll servername.mydomain.com:8089) or we edit the deploymentclient.conf file in /opt/splunk/etc/system/local and restart..

That’s fine! It works… BUT.. it is local. Once you put it there, you have to manually change it (or if you’re lucky, automate it..) But I digress.

From the start, let’s make an app that connects to the DS.. Here’s where the “org_deployment_client” comes into play.

Taken from the Splunk PS Base Configs, here is the template..

[deployment-client]
# Set the phoneHome at the end of the PS engagement
# 10 minutes
# phoneHomeIntervalInSecs = 600

[target-broker:deploymentServer]
# Change the targetUri
targetUri = deploymentserver.splunk.mycompany.com:8089

As you can guess, we update the targetUri to point to the address and management port of our DS. It’s highly recommended to use DNS for this, and not an IP address. And as of 6.3, this can also be a load balancer.. <finally…woot!! >

Now, the most difficult part.. The org_deployment_client app needs to be deployed to all our UFs on install, or after deployment.. This gives us the ability in the future to change the targetUri and phoneHomeIntervalInSecs without having to touch every forwarder! There are many ways to accomplish this: some script the delivery with git/mercurial/cvs, some build custom install packages that install it automatically, others manually deploy it after installation.. However you want to do it, do it!
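As one hypothetical example of the scripted approach, a post-install step on a Linux universal forwarder (assuming the default /opt/splunkforwarder install path) could be as simple as:

# drop the deployment-client app into the forwarder's apps directory
cp -R org_deployment_client /opt/splunkforwarder/etc/apps/

# restart so the forwarder reads deploymentclient.conf and phones home to the DS
/opt/splunkforwarder/bin/splunk restart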

Back on track.. once this is deployed, we install our clients (with the org_deployment_client.) In this case, I don’t have the apps configured to restart Splunk once they are downloaded from the DS, so a manual restart is required. Afterwards, we can check the Forwarder Management GUI and confirm our hosts and the apps deployed..

Picture8

From here, we have our hosts sending their data logs to Splunk Cloud. This will include enabled TA’s and modular inputs.
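A quick sanity check once a forwarder has phoned home and received its apps: from your Splunk Cloud search head, run something like the following (the host value here is a hypothetical forwarder name; forwarders send their own internal logs by default, so this generally works even before you enable additional data inputs):

index=_internal host="nix-web01" earliest=-15m | stats count by sourcetype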

There are “Gotchas”… Please Don’t do this!

Here are a few things to take into consideration, and not to do.

1) Search Head Cluster Members (SHC) – These cannot be managed by a DS; the Deployer node handles this functionality

2) Index Cluster Members – These cannot be managed by a DS; the Cluster Master node handles deployment of configurations

3) Using Automation ( Puppet / Chef / Ansible etc) – Be careful when using these in conjunction with DS.. configs can disappear and break…

4) Test your serverclasses.conf changes in a DEV environment!!

5) Standardize on a naming convention for your Server Classes and App names. Here I used org_deployment_client, but for your company it might be something like mycompany_deploymentclient_securelan and mycompany_deploymentclient_dmz1.

There are a lot of features and functionality available in the Deployment Server that I didn’t cover here. Our Education team does a wonderful job of teaching this, and Splunk PS can also spend a wonderful amount of time going over the different features of the DS and how to get it to scale. Please reach out if you want to learn more!

Additional Reading:
Capacity Planning Manual for Splunk Enterprise
Updating Splunk Enterprise Instances – Deployment server architecture
Updating Splunk Enterprise Instances – Plan a deployment
Updating Splunk Enterprise Instances – Configure deployment clients

Thanks,
Eric Six and Dennis Bourg

SplunkZero, delivering value with Splunk at Splunk


LGO-Splunk-Zero-600x330-RGB-2color-101

I want to introduce you to our internal Splunk platform, SplunkZero. I’ll go into some detail on the philosophy of how we chose to deploy Splunk at Splunk, but what I hope to do is kick start the conversation about how we gain value with our own products.

A little bit about myself: in the 5+ years I’ve been here at Splunk, I have worked in both marketing and IT orgs and am excited to now be leading the SplunkZero team. I am passionate about our products and love seeing how excited our customers get when they talk about how they leverage Splunk.

The name SplunkZero came out of a request from our markets group that IT be driving internal product adoption as “Customer Zero,” being the first to adopt and test our emerging features and products. Somewhere along the lines it was referred to as SplunkZero, and the name stuck. This request also helped define the mission for our platform, as well as the program charter.

“The SplunkZero mission is to empower Splunk to be a data driven company. By leveraging our own products and presenting ourselves as the example, we provide a clear vision of how our customers can achieve the same success.”

It’s a fancy way of saying, we want to find the best ways to leverage the platform and we’re going to share that information with you. This breaks down into the four items that make up our program charter; Operational Intelligence, App Development, Product Improvement and Thought Leadership.

  • Operational Intelligence: Use Splunk as a customer to derive value from our data. Ensuring we deploy solutions in the same manner our services teams instruct our customers.
  • Application Development: For cases where an “out of the box” app does not meet the exact requirements of a business use case. This requires us to create an area where we can enable our internal Splunk experts to build custom apps to drive new value and identify future opportunities.
  • Product Improvement: Be a voice for our customers, and fellow Splunk Admins, to help drive new product features. This also includes testing product before it goes to market to ensure we identify issues before anything is shipped to our customers.
  • Thought Leadership: Gain the trust of our current and future customers by showing them exactly how we gain value with Splunk and show them how they can do the same. That includes community outreach, like Splunk Events or posting on this blog!

We will be following up with additional topics covering our technical infrastructure, how the environment is being monitored using ITSI and some of the additional use cases we have enabled.

I hope this has given you some insight into how we are deploying Splunk internally. If there are any specific topics you would like to hear us cover, please leave them in the comments below and they will be considered.

Thanks,
Erik Cambra
Manager, SplunkZero

Introducing the “Welcome Page Creator”


“Hey Ninja! My manager just got me access to this ‘Splunk’ thing and I was able to log in and all but all I see is this screen with a search bar. What the heck is this and where are all the answers? What do I do here?”

After way too many situations teaching newbies about Splunk, I finally took a step back and asked myself: What if, when they logged in to Splunk, they were presented with all the materials needed to get Splunking? Not only would they get answers more rapidly, but I’d get a heck of a lot more work done with fewer distractions.

Attempting to solve this, I created dashboards that “Welcomed” users to the Splunk environment by providing them answers to the questions they were most likely to ask. Each “Welcome” page was the default dashboard within the default app for each role or user group. Since each group’s “Welcome” page was nearly identical, I made my job easier by cloning the first “Welcome” page to all the other apps. From there, the pages were tweaked to be effective for the role or group of users viewing it.

Wanna do the same? Well, you can! That’s because a gang of us (Kevin Meeks, Erick Mechler, Aly Kheraj, and Frank Tisellano) have packaged up the Welcome Page Creator app. This app gives you a collection of over twenty prebuilt panels that you can piece together to rapidly create Welcome pages.

The most successful Welcome pages have built upon the following best practices:

  • KISS: “Keep it Simple, Silly”, “Less is more”, or “A lil’ dab will do ya.” However you want to say it, just remember how Google flipped the table on search screens. Less content means a stronger focus by the user and therefore a more effective use of the platform.
  • Consider the Audience: The focus and communication style of your developers is dramatically different from that of your business users. Keep this in mind when selecting what panels to include on a Welcome page. Use materials and language that are effective for the reader. In fact, that’s exactly why some of the panels in the ‘Welcome Page Creator’ have similar content presented differently!
  • Apps as Workspaces: Create an app for each “team” using Splunk. This gives that team a place to play and save their knowledge objects without getting in the way of other teams. In fact, I’m confident you’ll find your users are more willing to embrace and explore Splunk if their experience is intimately contained within their group’s workspace rather than affecting an entire deployment.
  • Role Segregation: When each team has their own role you have the ability to segregate their workspaces and provide the ‘Welcome’ page that is most effective for them. Take it a step further and change the permissions on other teams’ apps so users only see the workspace (app) meant for them – not other groups’ apps.
  • Set Defaults: Don’t forget to set the respective app as the default app within the role definition as well as the Welcome page as the default dashboard (see the navigation sketch just after this list). This ensures that when users log in to the environment, they’ll automatically head to their workspace and its Welcome page.
  • Listen: Even after implementing Welcome pages, you’re still going to notice some questions come your way. Put the answers to such repeated questions in a new panel on your Welcome page so you can get back to the fun stuff. When necessary, edit any shared prebuilt panel or convert any panel to an inline panel so you can further customize it for that audience.
  • Not a Welcome App but a Welcome Page: Welcome pages within a team’s app is powerful. A Welcome page within the Welcome Page Creator app is confusing. The Welcome Page Creator app is deliberately focused on creating Welcome pages and therefore is not intended as a place for starting a Splunk experience. The Welcome Page Creator’s panels are globally shared so your end users can continue to work within their workspace (app).
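For the “Set Defaults” item above, the default dashboard of an app is controlled by the app’s navigation file, and the default app itself is set on the role (Settings -> Access controls -> Roles). As a minimal sketch, assuming a team app named team_workspace and a Welcome dashboard named welcome_page (both hypothetical names), etc/apps/team_workspace/default/data/ui/nav/default.xml could look like:

<nav>
  <!-- default="true" makes this dashboard the landing page for the app -->
  <view name="welcome_page" default="true" />
  <view name="search" />
  <collection label="Dashboards">
    <view source="unclassified" />
  </collection>
</nav>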

That’s it! Have fun creating Welcome pages and use your newfound free time by letting us know about panels you create or think we should add to the Welcome Page Creator app.


Configuring Okta Security Assertion Markup Language (SAML) Single Sign On (SSO) with Splunk Cloud


post-it
As organizations grow, the number of applications and tools utilized to perform a job and support the business of the organization inevitably grows. It is not unheard of for enterprises to literally have hundreds of on-premise, SaaS and Cloud based tools and applications. Making sure users of those applications are who they say they are means, at the least, one must authenticate themselves into the application. Although it was effective, people frowned on the practice of sticking a mass of Post-it notes on a monitor with user names and passwords. Password vault tools are a nice alternative to the Post-it, but it still means one has to pull up the password vault app to look up a forgotten password to log in to this app, log in to that app, log in again when a session times out, log out, log in again, … ad nauseam.

Enter our glorious days of Single Sign On (SSO)! And that is a bandwagon that has many, many riders these days. The SSO cat has been skinned in many ways by many vendors and niche players. Some provide auto-fill of login/password prompts no matter what type of app or screen is presented. Others provide auto-fill of only web forms that present a user with fields for a username and password. And there are even cracker tools out there that do the same, in an attempt to brute force acquire a valid username/password to mischievously log into apps. You know, to exploit those users out there that like to use passwords like ‘abc123’ or ‘passw0rd’ so they don’t have to have so many stickies on their monitor or reduce the number of times they have to bring up their password vault on their smart phone…

In the spirit of SSO, a while back a bunch of smart folk asked the question “is there a better way to authenticate with SSO into an app without having to present fields for a username and password?” and answered that question with a resounding “yes!”. But first they had to create that better way. They put their heads together, designed a framework and wrote a Request for Comments (RFC) that defined that better way for securely passing authentication requests and responses between applications. They called it the Security Assertion Markup Language (SAML) which is now at version 2.0.

okta
My role at Splunk>, as an Engineer on the Cloud Adoption team as part of our Customer Success organization, means that I exist here to help make our customers happy. Not having to type in a username and password to log into Splunk> to bring up your boss’s ‘TPS Reports’ dashboard is just one small way to bring happiness. So a team in your company must have got together and convinced management to purchase an Identity Provider (IdP) – Okta! Smiles ensued. Another way to ride the highway to bliss is for your organization’s over-worked IT Infrastructure staff to not have to own the hardware, floor space and admin head count to support that awesome instance of Splunk> that you’re getting huge piles of value out of. So that smart team at your company bought Splunk> Cloud and the party keeps on rolling!

By now you’re saying “I’m four paragraphs deep and I’ve yet to learn how to configure Okta!” – so enough of the background and let’s get down to it.

So what do you need? Okta? – check. Splunk> Cloud instance? – check.

Who do you need? 1) An administrator for your Okta instance 2) An administrator for your local Identity Management system (Active Directory, LDAP, etc.) 3) An administrator for your Splunk> Cloud instance. If they’re all the same person (you), you’re in luck. Otherwise you’ll have to run the calendar dice and find time for you all to discuss SAML integration, put in change control, schedule a time to implement, etc.

Here’s what you do:

Pre-requisite:

This step is requested to be performed so that our Splunk> Cloud Support and Operations staff will know that you are integrating your instance with Okta. It provides a mechanism to more effectively support you in your efforts to integrate with Okta in case anything may go amiss, or you may have further questions around Okta configuration that are not addressed in this posting.

  • Log into your Splunk> Customer Portal and create a Splunk> Customer support case.
    • A Priority of P3 or P4 is adequate.
    • Choose ‘Authentication & Security‘ for the Area
    • For the ‘Feature / Component / App‘ choose ‘SAML
    • In the ‘Subject‘ enter in something along the lines of ‘SAML Integration with Okta
    • Add a summary in the ‘Description‘ that you are going to integrate your Splunk> Cloud instance with Okta, and possibly a date/time you will be performing the integration if applicable.
  • Read all of the below Integration steps. There are some pieces that you may need to perform in your Identity Management environment before you integrate with Okta. There are also possible effects on your current locally defined users in Splunk> Cloud. And there may be other topics that require further discussion among your team members or questions for Splunk>.

Okta Integration:

The following steps are specific to versions 6.4.x of Splunk> Cloud. Okta is supported under Splunk> Cloud v6.3.1551.x and the steps below are nearly identical as well for that older version of Splunk (if your instance has not yet been upgraded).

  1. First have your Splunk> Cloud administrator log into your instance as a user with the ‘admin‘ role. Yep, the ole manually entered username/password thing…
    If you have multiple search heads in your Splunk> Cloud environment (e.g. a general search head at ‘https://<acme>.splunkcloud.com‘ and/or possibly an Enterprise Security search head at ‘https://<es-acme>.splunkcloud.com‘) you will need to perform a separate Okta integration for each search head independently. In short, you’ll have multiple Okta apps, one for each search head (or search head cluster).
    Screen Shot 2016-09-01 at 9.36.26 AM
  2. Confirm that your instance is at version 6.3.1551.x or later by going to the top menu option ‘Support & Services‘ -> ‘About‘.
    Screen Shot 2016-09-01 at 9.38.14 AM
  3. Obtain your search head’s metadata.
    This can be obtained by, once logged into a session as an admin role user, entering the URL https://<acme>.splunkcloud.com/saml/spmetadata into your browser’s URL field.
    Something similar will be presented in your browser window:
    Screen Shot 2016-09-01 at 9.44.54 AM

    From the metadata, capture the search head’s certificate (masked out above, between the XML tags ‘<ds:X509Certificate>‘ and ‘</ds:X509Certificate>‘). Save the certificate into a non-formatted text file (Notepad for example) and place a row above the certificate with the text ‘-----BEGIN CERTIFICATE-----‘ and a row below the certificate with the text ‘-----END CERTIFICATE-----‘. It should look something similar to:
    Screen Shot 2016-09-01 at 10.11.12 AM
  4. Have your Okta admin log into your Okta instance as the Admin user.
    Picture1
  5. Enter the Admin functionality within Okta.
    Picture3
  6. Click on the link to ‘Add Applications‘.
    Picture4
  7. Click on the link to ‘Create New App‘.
    Now – there is a ‘pre-canned’ App in Okta for Splunk>. Although some customers have been successful in starting with this App to integrate Okta with their Splunk> Cloud instance, the number of changes to this App is enough that no time is really saved, and there’s been more confusion and mis-steps in the configuration versus starting from scratch with a new Okta app.
    Picture5
  8. Choose ‘SAML 2.0‘ and click on the ‘Create‘ button
    Picture6
  9. Enter in an application name and optionally upload a Splunk> logo for the Okta App widget. Suggested app names might be ‘Splunk> Cloud General’ or ‘Splunk> Cloud ES’ or ‘Splunk> Cloud Operations’. Make it something that helps identify which search head functionality your users will be logging into.
    Picture7
  10. In the SAML General settings section, enter the ‘Single Sign On URL‘ in the format similar to ‘https://<acme>.splunkcloud.com/saml/acs‘ where <acme> is the DNS canonical name of the search head you are integrating Okta with.
    Make sure the checkbox for ‘Use this for Recipient URL and Destination URL’ is checked.
    Picture8
  11. Enter a unique name for the ‘Audience URI (SP Entity ID)‘. A suggested Entity ID name to uniquely identify the Splunk> Cloud search head is ‘splunk-‘ followed by the first field of the canonical name ‘https://acme.splunkcloud.com‘ – so ‘splunk-acme‘ for instance (Splunk-CustomerName example in the graphic below – do note that this field is case sensitive, so when this is used elsewhere in Okta for the Single Logout as well as when it is used in the Splunk> SAML configuration the case must match)
    Picture10
  12. Set the ‘Name ID Format‘ to ‘Transient‘. And choose the ‘Application username‘ in the format you wish to have users identified within your Splunk> Cloud instance as they come across via SAML.
    Here’s one of your first decision points. The ‘Application username’ is the value that is passed via SAML as the ‘nameID‘ attribute. The contents of this attribute will be used as the Splunk> account name once authenticated into Splunk>
    Picture11
    If you have existing locally defined accounts in your Splunk Cloud instance, it would be good to have the users that authenticate through SAML come across into Splunk with the same account name. For instance, if your locally defined user for ‘Joe Schmoe’ has a Splunk> account by the name of ‘jschmoe’, it would be good to have the SAML authenticated user come across with the ‘nameID‘ as the same string ‘jschmoe’. If, however, Joe Schmoe logs into Okta as ‘jschmoe@acme.com’ then the ‘nameID‘ within the ‘Okta username’ will come across into Splunk> as the string ‘jschmoe@acme.com’ and thus be seen by Splunk> as a net new account.
    What does this mean? It means that if Joe Schmoe has a bunch of knowledge objects saved under his old account named ‘jschmoe’, when he logs in via SAML he will no longer have access to those knowledge objects. They are owned by a completely different user under the account ‘jschmoe’ instead of his new account named ‘jschmoe@acme.com’ that was instantiated through the SAML authentication. Not good…
    So – luckily there are other options in the pulldown for the ‘Application username‘ as is shown below.
    Screen Shot 2016-09-01 at 2.32.04 PM
    But what if those pulldown choices do NOT match what your user logs into? Now what?! The Custom option will allow you to use the expression language to create a string specific to what you need, so users come across via SAML just as they were named when they were locally defined accounts. If this is a net new instance and you’re just getting up and running with Splunk>, the topic is moot. But if you’ve had users for a long time and they have a lot of stuff they don’t want to lose – this piece is important. If you can’t find a pre-formatted pulldown option or an expression language option that will work for you, please do reach out to Splunk> Cloud support for further guidance.
  13. Click on the ‘Show Advanced Settings‘ link to expose additional settings for the Okta App you are building.
  14. Leave ‘Response‘ as ‘Signed‘. ‘Assertion Signature‘ as ‘Signed‘. ‘Signature Algorithm‘ as ‘RSA-SHA256‘. ‘Assertion Encryption‘ as ‘Unencrypted
    Picture12
  15. Click the ‘Enable Single Logout‘ checkbox. Then enter into the ‘Single Logout URL‘ field the URL ‘https://<acme>.splunkcloud.com/saml/logout‘ where ‘<acme>’ again is the search head’s canonical name.
    Picture13
  16. In the ‘SP Issuer‘ field, enter the same value you used in the ‘Audience URI (SP Entity ID)‘ in a previous field – ‘splunk-acme‘ for example. Again, do note that this is case sensitive and should match exactly what was typed into the ‘Audience URI (SP Entity ID)‘.
    Picture14
    In this step you will upload the search head’s certificate that you saved into an unformatted text file in step 3 above.
    Click on the ‘Browse‘ button to choose the file that contains the certificate.
    Click on the ‘Upload Certificate‘ button, you should see a ‘Certificate Updated!‘ notification popup window that will indicate successful upload and parse of the certificate file. The ‘Signature Certificate‘ field should also populate with text similar to the example screen capture below.
    Picture15 

    Picture16

  18. Leave the ‘Authentication context class‘, ‘Honor Force Authentication‘ and ‘SAML Issuer ID‘ to their defaults
    Picture17
  19. There are three other attributes that need to be passed to Splunk> Cloud from Okta. These are named ‘mail‘, ‘realName‘ and ‘role‘. Do note that these attributes are case sensitive and must be entered into your Okta App in exactly the case described. The ‘mail‘ attribute is the string you wish to have populated into your Splunk> user’s ‘e-mail’ field for their account. Thus it should be their valid work e-mail address. The ‘realName‘ attribute is the string you wish to have as the ‘Full Name’ of the user in the Splunk> account. This name is used throughout the Splunk> UI. For example it shows as the user’s name in the upper menu bar of the UI where the user can click to enter the menu to change their ‘User Settings‘ or ‘Logout‘ of their Splunk> session. The ‘role‘ is a list of groups that the user is assigned to within Okta and/or your Identity Management system (Active Directory, LDAP, etc.) In the “ATTRIBUTE STATEMENTS” section, enter the following:
    Name: mail
    Name format: Unspecified
    Value: user.email
    Click the “Add Another” button to add another Attribute line/entry and enter as follows:
    Name: realName
    Name format: Unspecified
    Value: user.login
    (see further notes below)
    In the “GROUP ATTRIBUTE STATEMENTS” section, add one entry as follows:
    Name: role
    Name format: Unspecified
    Filter: Regex
    (Filter textbox): .*
    (again see notes below)

    Picture2
    * If you find that your Okta ‘user.login’ does not contain the text that would be nice to have in the ‘Full Name’ field in the Splunk> user account, there are other pre-canned choices in the pull down.
    image2016-5-23 15_28_27
    However, you can also become quite creative by entering your own Okta expression language formula for the string. A common one that some customers have elected is to have the user’s first name and last name appear in the Full Name within their account. That way, in the Splunk UI the user sees their full name – a much more user friendly experience. This can manually be entered with the formula “${user.firstName} ${user.lastName}” (without the quotes) as seen in the below screen shot:
    image2016-5-23 15_27_29
    * The role field is important. This is how we will (later) map the group or groups the user is assigned to within your AD/LDAP environment to the Splunk> role or roles they should acquire once authenticated into your Cloud instance. It is not required that your Identity Management administrator create unique groups specific to your usage of Splunk>; however, it does make things a bit easier to manage if you do set up groups. A suggested example of some AD/LDAP groups might be a set of groups named like:
    splunk-acme-admins (for those users that need access to the Splunk> instance and should have the admin Splunk> role)
    splunk-acme-user (for those users that just need the user role)
    splunk-acme-mycustomrole (for those users that should map to a custom role in Splunk> that you’ve created)
    splunk-es-acme-admins (for those users that need to log into the https://es-acme.splunkcloud.com search head and need the admin role)
    etc.

    Thus there may need to be some change control set up to create these groups and assign the specific users to those groups before you integrate Okta with your Splunk> Cloud instance.

    There is a way to also be more selective of the groups that are passed across in the role attribute from Okta into Splunk>. The regular expression can be more restrictive to only pass over a subset of the groups the user is assigned to. Using the above example named groups, we could enter the regular expression ‘splunk-.*‘ into the filter field of the role attribute and only those groups that match the string starting with ‘splunk-‘ will be passed through.

  20. Click on the ‘Next‘ button to move to the next Okta App configuration panel
  21. Choose the radio button for ‘I’m an Okta customer adding an internal app
    Picture18
  22. Add additional options as you wish or leave blank.
    Picture19
  23. At this point, you will be at the ‘Sign On‘ panel of the Okta application configuration. This screen provides you with a button to ‘View Setup Instructions’ and a link for the ‘Identity Provider metadata‘. Click the ‘Identity Provider metadata‘ and download/save that into a file on your local system. This will be later uploaded to your Splunk> Cloud instance.
    Picture20
  24. The last step in Okta is to assign the new Splunk> Okta App to those people/groups you wish to have access to the Okta App widget you created.
    Picture24
  25. Have the Splunk> Cloud administrator log into the instance if not already still logged in
  26. Go to the Splunk> top menu option ‘Settings‘ -> ‘Access Controls‘.
    Picture21
  27. Click on the ‘Authentication Method‘ link
    Picture22
  28. Choose the ‘External Authentication Method‘ radio button ‘SAML‘, then click on the ‘SAML Settings‘ button
    Picture23
  29. Once in the ‘SAML Settings‘ panel, click on the ‘SAML Configuration‘ in the upper right hand corner
    Picture25
  30. Click on the ‘Select File‘ button to choose and upload the ‘Identity Provider metadata‘ file that was saved from Okta in the steps above. Then click on the ‘Apply‘ button. It will populate several fields in the ‘SAML Configuration‘ panel from values within the metadata file.
    Picture26
  31. Manually enter in the ‘Entity ID‘ field to match the ‘Audience URI (SP Entity ID)‘ that was used in the Okta App – ‘splunk-acme‘ for instance. Again remember that this is case sensitive so it should be typed in exactly as was used in the Okta app.
  32. Scroll down to the ‘Advanced Settings‘ section.
    Manually enter in the ‘Fully Qualified Domain Name (FQDN)‘ field the URL of your instance – ‘https://<acme>.splunkcloud.com‘ for instance
    Manually enter a ‘0‘ (zero) in the ‘Redirect port – load balancer’s port
    Click the ‘Save‘ button to save your configuration, your instance is now set to utilize SAML for authentication! – but the config is not finished yet….
    Picture27
  33. Back at the ‘SAML Settings‘ panel, click on the ‘New Group‘ button in the upper right.
    Picture28
  34. Enter a ‘Group Name‘ that is a group from your AD/LDAP environment. For instance, the example ‘splunk-acme-admins‘ would be the text entered as the group. Then click on one or more roles in the ‘Splunk Roles‘ ‘Available Items‘ selection list. It will copy over to the ‘Selected Item(s)‘ list. Note that it can be a one to many relationship – you can have a group map to one or more Splunk> Roles. Click the ‘Save‘ button to save your mapping – do note that Splunk> will lowercase all text that you enter (as it lowercases everything internally as it comes across in SAML). So if you have a group named ‘SPLUNK-ACME-USERS’ it will be the same as ‘splunk-acme-users’. (A rough sketch of the configuration these SAML settings and group mappings correspond to appears just after these steps.)
    Picture29
  35. With the mapping in place, open up a separate browser or an incognito tab in your browser (and have the SAML tracer plugin running if you have one) and test a login. Either log into Okta as a user that is assigned the new Splunk> Okta app and click on the widget to initiate a SAML login, or simply go directly to your URL ‘https://<acme>.splunkcloud.com‘ and the SAML redirect should occur to authenticate via Okta into Splunk. Use the SAML tracer browser plugin to troubleshoot anything that might be amiss. If you authenticate successfully, be sure to check the account information in Splunk> within the User Settings panel to make sure the account name (nameID), the ‘Full Name’ (realName), and e-mail address (mail) came across as desired. Also check the user’s roles are mapped correctly via the ‘Settings‘ -> ‘Access Controls‘ -> ‘Users‘ list in Splunk>.
    NOTE: Splunk> highly recommends that after SAML integration has been performed, any former locally defined user accounts be removed through the UI by a user with the admin role. This does NOT remove that user’s knowledge objects; all it does is remove the locally defined password. By doing this, a user account can only log in via a successful SAML authentication.
  36. Also test the logout process. Choose the username in the upper menu bar of Splunk> and choose the menu option ‘Logout‘. You should be successfully logged out of the Splunk> instance.
    Picture30
  37. If all is well and you’re rockin and rollin, close the support ticket you opened in the pre-requisite steps.
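For reference, the SAML settings and group mappings entered through the UI above correspond to stanzas in authentication.conf on the search head. On Splunk> Cloud you won’t normally hand-edit this file (the UI manages it), and the sketch below is illustrative only, limited to the settings discussed in this post and using the example values from above:

[authentication]
authType = SAML
authSettings = saml

[saml]
entityId = splunk-acme
fqdn = https://acme.splunkcloud.com
redirectPort = 0
# the IdP SSO/SLO URLs and certificate details are filled in automatically
# when the Okta 'Identity Provider metadata' file is uploaded

[roleMap_SAML]
admin = splunk-acme-admins
user = splunk-acme-users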

 

Smart AnSwerS #76


Hey there community and welcome to the 76th installment of Smart AnSwerS.

SplunkTrust member rich7177 graced us with his presence at HQ earlier this week, and was awarded an awesome trophy from the Splunk documentation team for always providing constructive feedback. Not only has he been helpful with improving the docs, but he’s an all-star on Answers too! Five of his many contributions have been featured in this Smart AnSwers blog series to date, with more to come I’m sure :) Congratulations Rich!

It’s a shame he couldn’t stick around until next week to join us for our monthly San Francisco Bay Area user group meeting next Wednesday, September 7th @ 6:30PM. If you happen to be in the area, come join us at Yahoo HQ! in Sunnyvale to listen in on talks by burwell from Yahoo and jonathon from Groupon. Visit the SFBA user group page for more details and to RSVP.

Check out this week’s featured Splunk Answers posts:

How to troubleshoot why startup.handoff in the Search Job Inspector always seems to take a long time?

gustavomichels noticed search performance issues and looked in the Search Job Inspector to find that startup.handoff was taking up most of the time to execute a search. sjohnson includes the definition of startup.handoff from documentation in his answer, and also shares several factors that contribute to this taking a long time from his own experience and observations. He finishes off his solid response by showing how to troubleshoot which one could be the culprit.
https://answers.splunk.com/answers/247024/how-to-troubleshoot-why-startuphandoff-in-the-sear.html

What is the difference between the srchJobsQuota and cumulativeSrchJobsQuota settings in the authorize.conf role stanzas?

kwasielewski wanted to set the search quota for a role, but didn’t know if one or both of these settings in authorize.conf should be used. Raghav2384 provides explanations for both srchJobsQuota and cumulativeSrchJobsQuota with a link to supporting documentation, and gives examples defining these parameters for a role to demonstrate the differences and how they work.
https://answers.splunk.com/answers/411440/what-is-the-difference-between-the-srchjobsquota-a.html

Splunk Enterprise 8089 Vulnerability Scan Results: How do I resolve these SSL errors?

serwin was required to scan his Splunk Enterprise environment for compliance reasons, and kept getting multiple SSL errors for the management port 8089 on search heads and indexers. Masa knocks it out of the park by addressing how to resolve each error in the list, and adds the appropriate links from documentation and a previous Splunk Answers post.
https://answers.splunk.com/answers/436018/splunk-enterprise-8089-vulnerability-scan-results-1.html

Thanks for reading!

Missed out on the first seventy-five Smart AnSwerS blog posts? Check ‘em out here!
http://blogs.splunk.com/author/ppablo

#splunkconf16 preview: IT Operations Track – Choose your own adventure!


Does anyone else remember the ‘choose your own adventure’ books from the 90s? I do, and this year’s #splunkconf16 has me almost as excited as getting a brand spankin’ new pile of books. Just kidding, the 2016 user conference is going to be much, much better!

2016-05-09-1462761733-5966723-chooseyourown

caveoftime

(No, this is not an ITSI Glass Table)

 

Splunk .conf2016 is coming up fast, and everyone on the Splunk team is excited to head down to the happiest place on earth for this year’s user conference. Check out some key details below about the great sessions that will be featured in the Splunk IT Operations track this year at .conf 2016. This year, we’ve made it easy for you by parsing the sessions into some easy-to-follow tracks. Session speakers will be covering everything from how to drive critical business decisions to maximizing operational efficiencies to Splunk for DevOps to smarter IT analytics with Splunk IT Service Intelligence. Below we sort through ~200 sessions to find a series to attend based on your interests. So go ahead, choose your own adventure!

ITSI Beginner:

For Customers who are new to our premium solution offering for IT professionals, IT Service Intelligence. These sessions will give you an overview of how you can leverage IT Service Intelligence in your organization to make better business decisions.

  • Introduction to Splunk IT Service Intelligence with Alok Bhide, Principal Product Manager, Splunk Inc. and David Millis, Staff Architect, IT Operations Analytics, Splunk Inc
    • Tuesday, September 27, 2016 at 10:30am -11:15am AND Wednesday, September 28, 2016 at 1:10pm- 1:55pm
  • Earn a Seat at the Business Table with Splunk IT Service Intelligence with Erickson Delgado, Architect, Development Operations, Carnival Corporation and Juan Echeverry, Application Automation Engineer, Carnival Corporation, and Marc Franco, Manager, Web Operations, Carnival Corporation
    • Tuesday, September 27, 2016 at 11:35am-12:20pm
  • How Anaplan Used Splunk Cloud and ITSI to Monitor Our Cloud Platform with Martin Hempstock, Monitoring and Metrics Architect, Anaplan
    • Tuesday, September 27, 2016 at 3:15pm-4:00pm
  • Modernizing Enterprise Monitoring at the World Bank Group Using Splunk It Service Intelligence with Michael Makar, Sr Manager, Enterprise Monitoring, World Bank Group
    • Tuesday, September 27, 2016 at 5:25pm-6:10pm
  • Splunk IT Service Intelligence: Keep Your Boss and Their Bosses Informed and Happy (and Still Have Time to Sleep at Night)! With Jonathan LeBaugh, ITOA Architect, Splunk
    • Thursday, September 29, 2016 at 2:35pm-3:20pm

ITSI Advanced:

For customers who are familiar with our premium solution offering for IT Professionals, IT Service Intelligence. These sessions will go into greater detail into the why, what, and how to maximize the productivity of your current or future IT Service Intelligence deployment.

  • Machine learning and Anomaly Detection in Splunk IT Service Intelligence with Alex Cruise, Senior Dev. Manager/Architect, Splunk and Fred Zhang, Senior Data Scientist, Splunk
    • Tuesday September 27, 2016 at 4:20pm- 5:05pm
  • An Ongoing Mission of Service Discovery with Michael Donnelly, ITOA Solutions Architect, Splunk and Ross Lazerowitz, Product Manager, Splunk
    • Thursday, September 29, 2016 at 11:20am-12:05pm
  • Anatomy of a Successful Splunk IT Service Intelligence Deployment with Martin Wiser, ITOA Practitioner, Splunk
    • Tuesday, September 27, 2016 at 12:40pm-1:25pm

IT Troubleshooting (and monitoring!):

For customers looking to learn more about Splunk for application management, Splunk to reduce costs and drive operational efficiencies, and how to get started with Splunk.

  • Splunk gone wild! Innovating a large Splunk solution at the speed of management with Kevin Dalian, Team Lead- Tools and Automation, Ford Motor Company and Glen Upreti, Professional Services Consultant, Sierra-Cedar
    • Thursday, September 29, 2016 at 11:20am-12:05pm
  • How MD Anderson Cancer Center Uses Splunk to Deliver World Class Healthcare When Patients Need it the Most with Ed Gonzalez, Manager- Web Operations, MD Anderson Cancer Center, and Jeffrey Tacy, Senior Systems Analyst, MD Anderson Cancer Center
    • Thursday, September 29, 2016 at 10:15am-11:00am
  • Splunking your Mobile Apps with Bill Emmett, Director, Solutions Marketing, Splunk, and Panagiotis Papadopoulos, Product Management Director, Splunk
    • Thursday, September 29, 2016 at 12:25pm-1:10pm
  • Great, We Have Splunk at Yahoo!… Now What? With Dileep Eduri, Production Engineering, Yahoo and Indumathy Rajagopalan, Service Engineer, Yahoo and Francois Richard, Senior Engineering Director, Yahoo, and Tripati Kumar Subudhi, Senior DevOps, Yahoo
    • Tuesday, September 27, 2016 at 11:35am-12:20pm
  • The Truthiness of Wire Data: Using Splunk App for Stream for Performance Monitoring with David Cavuto, Product Manager, Splunk
    • Thursday, September 29, 2016 at 12:25pm-1:10pm

DevOps and Emerging Trends:

Check out these sessions to learn more about how you can leverage Splunk within your organization to move to continuous delivery and implement a DevOps culture shift.

  • Biz-PMO-Dev-QA-Sec-Build-Stage-Ops-Biz: Shared Metrics as a Forcing Function for End-to-End Enterprise Collaboration with Andi Mann, Chief Technology Advocate, Splunk Inc
    • Wednesday, September 28, 2016 at 4:35pm-5:20pm
  • Splunks of War: Creating a better game development process through data analytics with Phil Cousins, Principal Software Engineer, The Coalition, Microsoft
    • Tuesday, September 27, 2016 at 3:15pm-4:00pm
  • Puppet and Splunk: Better Together with CTO and Chief Architect, Puppet and Stela Udovicic, Senior Product Marketing Manager, Splunk
    • Tuesday September 27, 2016 at 4:20pm-5:05pm
  • Splunking the User Experience: Going Beyond Application Logs with Doug Erkkila, PAS Capacity Management Analyst, CSAA Insurance Group
    • Thursday, September 29, 2016 at 1:30pm-2:15pm
  • Data That Matters, A DevOps Expert Panel featuring Phil Cousins, Microsoft and Doug Erkkila, CSAA Insurance Group, and Deepak Giridharagopal, Puppet and Andi Mann, Splunk, and Sumit Nagal, Intuit, and Hal Rottenberg, Splunk
    • Wednesday, September 28, 2016 at 1:10pm-1:55pm

Untitled copy

Buttercup and pals in the Seattle office are pumped for .conf

On top of these awesome sessions we have lined up, we’ll have 3 days of Splunk University training, 70 technology partners presenting, over 4,000 Splunk enthusiasts, and the Splunk search party. It’s not too late to register for .conf2016 and head down to Disney World!

Follow all the conversations coming out of #splunkconf16!

Configuring Microsoft’s Azure Security Assertion Markup Language (SAML) Single Sign On (SSO) with Splunk Cloud


saml
Recently there was a blog posting that described how to configure a Splunk Cloud (version 6.4.x) instance with Okta SAML 2.0. A bit of background on SAML was provided on why Single Sign On seems to be all the rage these days.

My role at Splunk>, as an Engineer on the Cloud Adoption team as part of our Customer Success organization, means that I exist here to help make our customers happy. Not having to type in a username and password to log into Splunk> to bring up your boss’s ‘TPS Reports’ dashboard is just one small way to bring happiness. A team in your company must have come together and convinced management to purchase the proper features and functionality of an Identity Provider (IdP) – In this case Azure! Smiles ensued. Another way to ride the highway to bliss is for your organization’s over-worked IT Infrastructure staff to not have to own the hardware, floor space and admin head count to support that awesome instance of Splunk> that you’re getting huge piles of value out of. A smart team at your company bought Splunk> Cloud and the party keeps on rolling!

So let’s get right down to it – below is a quick how-to on setting up Azure to provide SAML SSO with your Splunk> Cloud 6.4.x instance.

azure

Who do you need?
1) An administrator for your Azure instance
2) An administrator for your local Identity Management system (Active Directory most likely, if you’re investing into an Azure instance you most likely leverage Microsoft software infrastructure on premise)
3) An administrator for your Splunk> Cloud instance. If they’re all the same person (you), you’re in luck. Otherwise you’ll have to run the calendar dice and find time for you all to discuss SAML integration, put in change control, schedule a time to implement, etc.

Here’s what you do:

Pre-requisite:

This step is requested to be performed so that our Splunk> Cloud Support and Operations staff will know that you are integrating your instance with Azure. It provides a mechanism to more effectively support you in your efforts to integrate with Azure in case anything may go amiss, or you may have further questions around Azure configuration that are not addressed in this posting.

  • Log into your Splunk> Customer Portal and create a Splunk> Customer support case.
    • A Priority of P3 or P4 is adequate.
    • Choose ‘Authentication & Security‘ for the Area
    • For the ‘Feature / Component / App‘ choose ‘SAML
    • In the ‘Subject‘ enter in something along the lines of ‘SAML Integration with Azure
    • Add a summary in the ‘Description‘ that you are going to integrate your Splunk> Cloud instance with Azure, and possibly a date/time you will be performing the integration if applicable.
  • Read all of the below Integration steps. There are some pieces that you may need to perform in your Identity Management environment before you integrate with Azure. There are also possible effects on your current locally defined users in Splunk> Cloud. And there may be other topics that require further discussion among your team members or questions for Splunk>.

Azure Integration:

Initially, configuration of Azure was a bit more ‘manual’. With the awesome work of other Splunkers – Rahul Dimri et al. – there is a pre-canned app in the Microsoft Azure Gallery/Marketplace. This app reduces the number of steps required for an SSO integration.

  1. Login to https://manage.windowsazure.com

  2. Navigate to your directory by selecting the Azure Active Directory on the left hand pane. If you want a separate directory then create one, or else select the directory in which you want users to access splunk.
    selectDir

  3. Click on “APPLICATIONS” tab, then click the “ADD” button on the bottom banner. Select the option “Add an application from the gallery“. This should be the second option.
    add_app

  4. In the “Add an application for my organization to use” dialogue, click on the search box on the top right hand side and type the word “splunk“, then search for the application by clicking on the search icon ( magnifying glass)
    choose_app

  5. There is a lot of text on the right hand column of the dialogue. Place your mouse cursor within that column and press the tab key. Pressing the tab key will show a “DISPLAY NAME” text box on the bottom of the column. Enter a name for this application. In this example we will call it “SplunkSamlAppForAzure“. Click on the check mark in the bottom right to save your app.
    enter_name

  6. Click on the app (“SplunkSamlAppForAzure“) you just created in your applications pane. Then click on “Configure single sign-on“.
    configure

  7. On the next pane, check the first option “Microsoft Azure AD Single Sign-On“. Click on the arrow to proceed to the next configuration page.
    ad_sso

  8. On the second page, enter the following:
    Enter the “SIGN ON URL“; this is the landing page to which users are taken in the case of IdP initiated flow. This would be the base URL for your Splunk> Cloud instance: https://<acme>.splunkcloud.com where <acme> is the canonical DNS name of the instance and/or search head (in case of multiple search heads or clustered search heads – such as a general ad-hoc search head at ‘https://acme.splunkcloud.com’ and a separate search head (or cluster) for Enterprise Security at ‘https://es-acme.splunkcloud.com‘)
    Enter the “IDENTIFIER“; this is essentially the ‘entity ID’ for the SAML configuration. In Azure you must use a URI – so use the URL of your Splunk> Cloud instance: ‘https://acme.splunkcloud.com‘
    Enter the “REPLY URL”. This is the SAML target and is the URL: ‘https://<acme>.splunkcloud.com/saml/acs’ where again ‘<acme>’ is the DNS name of the instance.
    Leave “Configure the certificate used for federated single sign-on (optional).” unchecked. You may want to explore this if you need to select a specific cert, or create a new cert to sign the assertions with. We will let it pick a default cert in this configuration.
    Click on the right arrow to proceed to the next configuration page 3.
    app_settings

  9. On page 3 of the configuration dialogue, click “Download metadata” and save it to a file on your local system. This file will be needed when configuring the Splunk> Cloud instance for SAML in later steps.
    You can skip the “View <app> configuration instructions“ step.
    Check the checkbox to confirm you have configured Splunk.
    Click on the right arrow to complete the configuration dialogue.

    app_settings_3

  10. Once back at the app main page, click on the “ATTRIBUTES” tab at the top.
    attributes
  11. Click on “Add User Attribute” and enter the name “realName” for the “ATTRIBUTE NAME“.
    Select the value “user.displayname” for the “ATTRIBUTE VALUE
    realName

  12. Add a second user attribute
    Enter the name “mail” for the “ATTRIBUTE NAME
    Select the value “user.mail” from the “ATTRIBUTE VALUE” pulldown menu.
    mail
  13. Click on “Apply Changes” on the bottom of the page.
    apply_changes

  14. Click on the blue ‘cloud’ with a thunderbolt icon (to the left of the “DASHBOARD” tab) to go back to the main app screen. Click on “Assign Accounts” and then add the users that you want to have access to the application.

    Splunk> roles are mapped to the groups a user is part of in Azure Active Directory. Typically, users are already assigned to a set of Azure/AD groups based on their role within the organization; however, we recommend setting up a new set of groups that are specific to, and solely for, the users that will need access to the Splunk> Cloud instance search head(s). For example, you could create groups “splunk_admins” and “splunk_users” in your Azure/AD, then assign the users that need the ‘admin‘ role or just the ‘user‘ role in Splunk> to those two groups. Setting up the group to Splunk> roles mapping is covered a little later in these instructions.

  15. When Azure passes information on the groups a user is assigned to within the SAML Assertion, the groups are identified by their unique “Object ID” and not by the Azure/AD group’s name. So, to be able to map Azure/AD groups to Splunk> roles, we will need to collect information about the groups that you are using. The Object ID for each group can be found by going to your Azure Directory page and then navigating to the group whose Object ID is to be retrieved.
    For example, the graphic below shows a group named “splunkUsers”. When this is passed along to Splunk> in the SAML Assertion (XML), it is identified by the “OBJECT ID” of “10ad<blahblahblah>3d“. So, for the group to Splunk> role mapping, we will need to record that Object ID for later use. I suggest a simple document or spreadsheet that contains the “DISPLAY NAME“; use the ‘copy to clipboard’ button to the right of the “OBJECT ID” to copy/paste the ID into the document/spreadsheet, then add the Splunk> role or roles (you can map to more than one Splunk> role if desired) that you will be mapping to. An illustrative snippet of how these group Object IDs appear in the assertion is shown after this list.
    group_id

    NOTE: At this time, the use of the Object ID is the only way to map groups to Splunk> roles for Azure. There is an enhancement request to find a more ‘user friendly’ way to map roles; hopefully future versions of Splunk> and/or Azure will provide a more intuitive mechanism for this mapping.
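To make the mapping concrete, below is a rough, illustrative sketch of the attribute statement portion of a SAML Assertion along the lines of what Azure sends. The claim URIs match the attribute aliases used later in the Splunk> configuration, but the Object ID, display name, and e-mail values are placeholders, and the exact XML your tenant emits may differ.

    <!-- Illustrative only: attribute values below are placeholders, not real Object IDs or accounts -->
    <saml:AttributeStatement xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
      <!-- Group memberships arrive as group Object IDs, not group display names -->
      <saml:Attribute Name="http://schemas.microsoft.com/ws/2008/06/identity/claims/groups">
        <saml:AttributeValue>10ad0000-aaaa-bbbb-cccc-00000000003d</saml:AttributeValue>
      </saml:Attribute>
      <!-- Display name and e-mail address for the authenticated user -->
      <saml:Attribute Name="http://schemas.microsoft.com/identity/claims/displayname">
        <saml:AttributeValue>Jane Example</saml:AttributeValue>
      </saml:Attribute>
      <saml:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name">
        <saml:AttributeValue>jane.example@example.com</saml:AttributeValue>
      </saml:Attribute>
    </saml:AttributeStatement>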

Configure Splunk> SAML:

  1. Log into your Splunk> Cloud instance as a user with the admin role
  2. Go to the Settings -> Access Controls menu option. Click on the ‘Authentication method‘ link. Click on the ‘SAML‘ radio button, then click on the ‘Configure Splunk to use SAML‘ green button.
    Screen Shot 2016-09-08 at 7.46.28 AM
  3. The first thing we will do is set up the AD/Azure Group to Splunk> Role mappings. Remember that list of group Object IDs we had you record earlier when setting up Azure? Get that out, then click on the green ‘New Group’ button in the upper right-hand corner of the SAML Groups configuration screen in Splunk.
    In the ‘Create new SAML Group’ configuration dialogue, paste the first Object ID into the ‘Group Name’ field. Then choose one or more ‘Splunk Roles’ that you wish to map to users assigned to that group from the ‘Available Item(s)’ box; the items you choose will populate over into the ‘Selected Item(s)’ box. Click the green ‘Save’ button once finished. Perform this step for all AD/Azure groups (Object IDs) that you are going to be mapping.
    A note on letter case: I have not come across an Azure Object ID for a group that contains capital letters. That said, Splunk> will lowercase all letters in the ‘Group Name’ field once the mapping is saved. Behind the scenes, when the SAML Assertion comes over from an IdP into Splunk>, all groups within the ‘role’ attribute are set to lower case before the mapping settings are looked up. The Azure Object ID is long enough and random enough that the lowercasing should never cause a conflict between group names (Object IDs), so this note is simply to explain the behavior in case you see a difference in case between what you entered and what is listed in the SAML Groups screen.
    addGroup
  4.  With the mappings in place, it is now time to set up the SAML configuration. Back on the ‘SAML Groups‘ configuration page, click on the green ‘SAML Configuration‘ button in the upper right hand corner of the page.
    Click on the ‘Select File‘ button next to the ‘Metadata XML File‘ entry row.
    Select the metadata XML file that you saved from Azure earlier. Once selected click the ‘Apply’ button.
    Several of the fields will populate from the XML data, such as the ‘Single Sign On (SSO) URL‘ and the ‘Single Log Out (SLO) URL‘.
    Type in the same URL that you used earlier as the ‘IDENTIFIER‘ in step 8 of the Azure app configuration. It should be something like ‘https://<acme>.splunkcloud.com‘.
    NOTE: The ‘Sign AuthnRequest’ and ‘Sign SAML response’ checkboxes need to be un-checked. With Azure, signing of the requests and responses is not performed, so please make sure both checkboxes are blank (not checked); otherwise the SAML Assertion will not be accepted by Splunk/Azure.
    Screen Shot 2016-09-08 at 8.21.04 AM
  5. Scroll down within the configuration dialogue to the “Advanced Settings” section. For Azure, the SAML Assertion sends its data over within a few schema-named attributes. Enter the following value for each attribute alias:
    “Attribute Alias Role” : http://schemas.microsoft.com/ws/2008/06/identity/claims/groups
    “Attribute Alias Real Name” : http://schemas.microsoft.com/identity/claims/displayname
    “Attribute Alias Mail” : http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name
         <OR>
    http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress
     

    NOTE: There have been multiple instances where the ‘*/emailaddress’ attribute is not passed by Azure. If the accounts are Microsoft accounts, then they will have the http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress attribute. If the accounts are sourced from Microsoft Azure Active Directory (most often the case where Azure is connected to an internal directory), then most likely the e-mail address will come across through the “Attribute Alias Mail” of http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name.
    For the ‘Fully qualified domain name or IP of the load balancer‘, enter the FQDN of your Splunk> instance – e.g. ‘https://<acme>.splunkcloud.com‘.
    Enter a ‘0‘ (zero) into the ‘Redirect port – load balancer port‘ field.
    Click the ‘Save‘ button to save the configuration.
    Screen Shot 2016-09-08 at 8.23.58 AM

  6. Now that you have the mappings and the SAML configuration in place, your Splunk> instance is set to redirect authentication to your Azure IdP. Test your setup by starting a new browser session or opening an incognito window. Go to your https://acme.splunkcloud.com URL and it should redirect you to authenticate via your Azure instance.
    azureAutheticate
    NOTE: If things are not working, there is a URL that will get you directly to the login page for your locally defined Splunk> accounts. For your Splunk> Cloud instance, use a URL of the form https://acme.splunkcloud.com/en-US/account/login?loginType=splunk and you will be presented with the ole tried and true local authentication page to log in to a locally defined Splunk> account.
    Also, it is highly recommended to have a SAML tracer plugin for your browser (Chrome, Firefox, etc. all have one). This will allow you to easily capture the data being passed between your IdP and Splunk> so you can determine what values are being passed within the Assertion attributes, making it much easier to troubleshoot anything that might not be working or is not optimal for your preferences.
  7. You should now be authenticated into Splunk. Click on your test user’s account name in the top menu, choose the ‘Edit account‘ menu option, and check the values that are populated in the ‘Full Name‘ and ‘Email address‘ fields.
    landingSplunk
  8. With Azure, the account name (nameID) for the user will most likely come across as that user’s unique Object ID. Some of you will be OK with this; most will not. Having a long, obtuse string of numbers and characters as the Splunk> account name is not all that optimal when researching things within the internal indexes – for example, answering questions like ‘which users are doing what in my Splunk> instance?’.
    In Splunk 6.4.x there is a way to set the account name to the e-mail address value that comes across via the SAML Assertion. However, this requires a Splunk> Support or Cloud Operations technician to manually modify your instance’s authentication.conf file for you. Remember that pre-requisite step of creating a support ticket for Splunk>? Use this ticket to request that Cloud Operations add the nameIdFormat parameter to your search head’s configuration. The parameter that will be added to your configuration file is:
    nameIdFormat = urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
    Once this setting has been put into place for you, your users coming across via Azure SAML will get Splunk> account names populated with the value of the e-mail address. A rough sketch of how these SAML settings might look in authentication.conf follows this list.
    NOTE: This behavior will be enhanced in future versions of Splunk> Cloud. At this time, however, the only two options for the account name for users of Azure as an IdP are the unique Object ID or the nameIdFormat of ‘.*:emailAddress’. Thus, if you wish to have the Splunk> account name reflect any other value (SAM Account Name, for instance), that is not currently possible (to my understanding) with the current versions and capabilities of Azure and Splunk> 6.4.x.
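For reference, here is a minimal, hypothetical sketch of what the resulting SAML settings could look like in authentication.conf on the search head. The stanza names, IdP URL, and group Object IDs are placeholders, exact setting names can vary between Splunk> versions, and on Splunk> Cloud this file is managed for you by Cloud Operations – so treat this purely as an illustration, not something to paste in verbatim.

    # Illustrative sketch only – values below are placeholders, not a working configuration
    [authentication]
    authType = SAML
    authSettings = saml_azure

    [saml_azure]
    entityId = https://acme.splunkcloud.com
    idpSSOUrl = https://login.microsoftonline.com/<tenant-id>/saml2
    signAuthnRequest = false
    signedAssertion = false
    attributeAliasRole = http://schemas.microsoft.com/ws/2008/06/identity/claims/groups
    attributeAliasRealName = http://schemas.microsoft.com/identity/claims/displayname
    attributeAliasMail = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name
    fqdn = https://acme.splunkcloud.com
    redirectPort = 0
    nameIdFormat = urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress

    # Azure/AD group Object IDs (lowercased by Splunk>) mapped to Splunk> roles
    [roleMap_SAML]
    admin = 10ad0000-aaaa-bbbb-cccc-00000000003d
    user = 20bd0000-dddd-eeee-ffff-00000000007e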

Dashboard Digest Series – Episode 2


noaa_website

Welcome to the second episode of the Dashboard Digest Series! So what do we have for Episode 2? Waves!

The use case here was to display real-time and historical parameters and statistics from the National Oceanic and Atmospheric Administration’s National Data Buoy Center, or NOAA NDBC for short. Thanks to an add-on created by Julien Ruaux on Splunkbase, I was able to easily collect data from the NDBC’s data feed and start creating dashboards right away. While the NOAA NDBC site has its own dashboard (pictured right), I figured it might be useful to access and visualize the data in different ways through Splunk – and eventually correlate the buoy data with other data sources.

Purpose: Display meaningful statistics on NDBC buoy information, both historical and real-time. Easily drill down, aggregate, and visualize data from thousands of buoys transmitting information.
Splunk Version: Splunk 6.4 and above
Data Sources: Polling NDBC RSS feed that produces JSON payload
Apps: Add-on for NDBC, Custom Cluster Map Visualization, Clustered Single Value Map Visualization, Splunk 6.x Dashboard Examples

Tips n’ Tricks:

Let’s take a look! Using some new custom visualizations from Splunkbase, I was able to show current/max wave height by station much more easily. Additionally, a simple drilldown allows me to see specific details by station over time quickly and effectively. I can pick the location of the buoy, wave height, water temperature, wind speed, etc. and plot it over a map (a hypothetical example of such a search is sketched below the screenshot).

noaa_wave_bouys
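For anyone curious about the kind of search behind a panel like this, here is a rough, hypothetical sketch. The index, sourcetype, and field names (wave_height, lat, lon, station_id) are placeholders – the real names depend on how the NDBC add-on indexes the feed.

    index=noaa sourcetype=ndbc:buoy earliest=-24h
    | stats latest(wave_height) AS current_wave_height max(wave_height) AS max_wave_height
            latest(lat) AS lat latest(lon) AS lon BY station_id
    | geostats latfield=lat longfield=lon max(max_wave_height)

The first stats pass rolls the readings up per station; geostats then bins the stations geographically so the result can be rendered on a map visualization.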

I began to notice there were also ships reporting information. Wouldn’t it be nice to visually separate what is a ship and what is a buoy? Enter the Clustered Single Value Map Visualization. You can even add HTML and Splunk results within each click box to add more context.

noaa_wave_bouys4

Other than those two custom visualizations, I’m just using some form input examples from the Splunk 6.x Dashboard Examples App. I’m using Dropdown, Multi-select, and Link Switcher, which are all available in SimpleXML and require no JS! The Link Switcher allows switching between different map visualizations, as some map types were better suited to showing wave height, water temperature, and other information as opposed to just location. Finally, I used chart overlay (also available in the Dashboard Examples App) to plot multiple data points over wave height.
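Here is a rough SimpleXML sketch of those two pieces – a link-style input (the Link Switcher) and a chart overlay. The token name, search, and field names are made up for illustration; the standard features being demonstrated are the <input type="link"> element and the charting.chart.overlayFields option.

    <form>
      <fieldset submitButton="false">
        <!-- Link Switcher: a link-style input that sets a token for use by the panels -->
        <input type="link" token="measure">
          <label>Map by</label>
          <choice value="wave_height">Wave Height</choice>
          <choice value="water_temp">Water Temperature</choice>
          <default>wave_height</default>
        </input>
      </fieldset>
      <row>
        <panel>
          <chart>
            <title>Wave height with water temperature overlay</title>
            <search>
              <query>index=noaa sourcetype=ndbc:buoy | timechart avg(wave_height) AS wave_height avg(water_temp) AS water_temp</query>
              <earliest>-7d</earliest>
              <latest>now</latest>
            </search>
            <option name="charting.chart">column</option>
            <!-- Chart overlay: draw water_temp as an overlay series on a second Y axis -->
            <option name="charting.chart.overlayFields">water_temp</option>
            <option name="charting.axisY2.enabled">1</option>
          </chart>
        </panel>
      </row>
    </form>

In a real dashboard the $measure$ token would be referenced in the panel searches (or used to switch between map panels); it is left unused here only to keep the sketch short.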

That’s it for today folks – as always, Happy Splunking! As a side note, for anyone going to .conf2016, I will be conducting a session called “Next Generation Dashboards” where you can learn more about creating dashboards in this series!

– Stephen

 
