ELK + Osquery + Kolide Fleet = Love

Threat hunting on Linux and Mac has probably never been easier. With the combination of these tools, we can query all of our hosts on demand for IOCs, schedule queries to run automatically, and feed all of the results into our SIEM. Osquery is also platform agnostic, so we can deploy it across all endpoints regardless of host OS.

Osquery – A tool that lets us query devices as if they were databases. It was built by Facebook with performance in mind.

Kolide Fleet – A flexible control server for osquery fleets. Fleet allows us to query multiple hosts on demand, create query packs, build schedules, and manage the hosts in our environment.

Elastic Stack – Elasticsearch, Logstash, and Kibana: tools that allow for the collection, normalization, and visualization of logs.

This post will assume the following:

  1. You have an Elastic Stack configured. This has most likely never been easier: simply check out Roberto Rodriguez’s HELK (Hunting ELK) and run the setup script. For this post I am using HELK, so it should be all you need. If you like to do things manually so you understand how everything works, Roberto Rodriguez has you covered; head over to his site and follow his tutorials (they are top-notch).

What you will need:

  1. At least one Linux host to run the osquery daemon. You can also run it on the same box as Kolide or your Elastic Stack.
  2. An Ubuntu 16.04 server to run Kolide Fleet. You can run this on the same box as your Elastic Stack; this tutorial uses a separate host for Kolide Fleet, so I will point out what you might need to change to make it work on a single server.

Before we begin, make sure to run:

apt update && apt upgrade

Kolide Setup:

You can use Kolide’s official documentation for most of this if you’d like. I basically customized their install guide to better fit our purpose.

Official Setup Guide

Install MySQL:

Use the password: ‘kolide’ (Or whatever you want, just adjust accordingly as you go)
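The install commands themselves aren’t shown above; a minimal sketch for Ubuntu 16.04, assuming the root password ‘kolide’ and a database named kolide (to match the fleet commands later), looks like this:

sudo apt install -y mysql-server        # you will be prompted for a root password; use 'kolide'
echo 'CREATE DATABASE kolide;' | mysql -u root -p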

Install and run Redis:
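The original commands aren’t shown; a quick way to get Redis up for this guide (an assumption, not the author’s exact steps) is simply the Ubuntu package, which installs and starts Redis on localhost:6379:

sudo apt install -y redis-server
redis-cli ping        # should answer PONG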

Or, if you have time, follow Redis’s own deployment guidance and tune it properly, although that is not necessary for the purposes of this guide!

Fleet Server Setup:
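The original commands aren’t reproduced here. Roughly, you download the Fleet binary, generate a self-signed certificate, prepare the database, and start the server. A sketch follows; the download URL and flag names track Kolide’s install guide from around the time of this post and may differ between Fleet releases (fleet will complain if a required flag, such as an auth key, is missing), so check fleet --help:

wget https://dl.kolide.co/bin/fleet_latest.zip
unzip fleet_latest.zip 'linux/*' -d fleet
sudo cp fleet/linux/fleet* /usr/bin/     # rename to /usr/bin/fleet if the binary carries an arch suffix

# Self-signed certificate for Fleet's TLS endpoint (CN matches the hostname used later)
openssl genrsa -out /tmp/server.key 4096
openssl req -new -key /tmp/server.key -out /tmp/server.csr -subj "/CN=fleet-controller"
openssl x509 -req -days 365 -in /tmp/server.csr -signkey /tmp/server.key -out /tmp/server.cert

# Initialize the database, then start Fleet on port 8080
fleet prepare db --mysql_address=127.0.0.1:3306 --mysql_database=kolide --mysql_username=root --mysql_password=kolide
fleet serve --mysql_address=127.0.0.1:3306 --mysql_database=kolide --mysql_username=root --mysql_password=kolide \
  --redis_address=127.0.0.1:6379 --server_cert=/tmp/server.cert --server_key=/tmp/server.key --logging_json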

Now, if you go to https://localhost:8080 in your local browser, you should be redirected to https://localhost:8080/setup where you can create your first Fleet user account.

We need to create some queries now. You can do this with the GUI, or you can run the importer tool found here. The importer tool is a bit buggy, so for the purposes of this post we will configure the queries manually.

Go to Packs -> Manage Packs -> Create New Pack

Name the pack ‘linux_collection’ and add a description if you’d like. Under Select Pack Targets, choose All Hosts.

Now, on the right-hand side of the page, you should see Select Query. Click that drop-down and choose ‘crontab’.

Set the interval to 60, the Platform to Linux, the minimum version to All and the Logging to Snapshot. Set the shard to 100.

Setting Logging to Snapshot simply returns the full result set each run; Differential returns only the changes since the last query, which is useful for monitoring for malicious changes.

The shard is the percentage of hosts that this osquery pack will query. In a large environment, it might make sense to query only a portion of the hosts each time instead of all of them.

Repeat these steps for the following queries: etc_hosts, iptables, listening_ports, mounts, open_files, and shell_history.

All of these queries can be run on demand as well.
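If your Fleet instance doesn’t already have these saved queries to pick from the drop-down, you can create them in the GUI first; each one is essentially a simple SELECT against the matching built-in osquery table, for example:

SELECT * FROM crontab;
SELECT * FROM listening_ports;
SELECT * FROM etc_hosts;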

Osquery Install:

Now we need to install osquery on your Linux host. As I mentioned at the beginning of the post, you can install this on the same machine that is running Kolide or your Elastic Stack, or on a standalone box.
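The install itself isn’t shown above; on Ubuntu it is a matter of adding the osquery apt repository and installing the package. A sketch based on osquery’s own install instructions (the signing key and repo URL may have changed since this was written):

export OSQUERY_KEY=1484120AC4E9F8A1A577AEEE97A80C63C9D8B80B
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys $OSQUERY_KEY
sudo add-apt-repository 'deb [arch=amd64] https://pkg.osquery.io/deb deb main'
sudo apt update && sudo apt install -y osquery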

Back on your new Kolide instance, select ‘Add New Hosts’ and copy the enroll secret. Select Fetch Kolide Certificate and move the certificate to your Linux box at /var/osquery/server.pem. Instead of using scp or the like, you can simply open the certificate file with a text editor and copy/paste it into your Linux terminal.

Now, back on the host where you are installing osquery, replace <enroll secret> with the secret provided by Fleet and, on line 11 of the flag file, replace localhost with the IP of the server running Kolide Fleet.
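The flag file itself isn’t reproduced here, so the line numbers won’t match exactly. A typical Fleet enrollment flag file (assuming /etc/osquery/osquery.flags, with the enroll secret dropped into /var/osquery/enroll_secret) looks roughly like this:

--enroll_secret_path=/var/osquery/enroll_secret
--tls_server_certs=/var/osquery/server.pem
--tls_hostname=localhost:8080
--host_identifier=hostname
--enroll_tls_endpoint=/api/v1/osquery/enroll
--config_plugin=tls
--config_tls_endpoint=/api/v1/osquery/config
--config_tls_refresh=10
--disable_distributed=false
--distributed_plugin=tls
--distributed_interval=10
--distributed_tls_max_attempts=3
--distributed_tls_read_endpoint=/api/v1/osquery/distributed/read
--distributed_tls_write_endpoint=/api/v1/osquery/distributed/write
--logger_plugin=tls
--logger_tls_endpoint=/api/v1/osquery/log
--logger_tls_period=10

Then start the daemon pointing at the flag file:

sudo osqueryd --flagfile=/etc/osquery/osquery.flags --verbose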

If you go back to your Kolide app, you should see your new host appear!

Filebeat Installation:

We need to use Filebeat to ship our osquery logs over to our Elastic Stack. If you are running osquery on the same machine as your Elastic Stack, you don’t need Filebeat; you can simply use the Logstash file input plugin to pull the logs from the log file and push them to Elasticsearch.

On the server running Kolide, run:
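The exact commands aren’t shown; installing Filebeat from Elastic’s download site looks like this (6.2.4 is only an example, match the version to your Elastic Stack):

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-amd64.deb
sudo dpkg -i filebeat-6.2.4-amd64.deb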

In the Filebeat config file (/etc/filebeat/filebeat.yml), paste the following:
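The original config isn’t reproduced here (so the “line 31” reference below is to the author’s full file, not this sketch). The important parts are a prospector pointed at the osquery result log that Fleet writes on this server (older Fleet releases defaulted to /tmp/osquery_result; check your fleet serve flags) and a Logstash output:

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /tmp/osquery_result        # Fleet's osquery result log; adjust to your setup

output.logstash:
  hosts: ["localhost:5044"]      # replace localhost with your Logstash server IP
  #ssl.certificate_authorities: ["/path/to/logstash.crt"]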

On line 31, replace localhost with your Logstash server IP.

If you are using the HELK out of the box, you will not need to configure ssl.certificate_authorities. However, if you followed the manual ELK setup and/or would like to use an SSL cert to encrypt the log traffic, you will need to uncomment this line and point it at the cert that Logstash is using.

Now run:

service filebeat start

Logstash:

On your Elastic Stack server (or whatever server Filebeat is forwarding the osquery logs to), you will need to add some Logstash filters.

Add a new output filter:

sudo nano /etc/logstash/conf.d/60-osquery-output.conf

Replace ‘fleet-controller’ with the hostname of your Kolide server and paste this into the conf file you just opened:
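The original config isn’t shown; a sketch of what goes in 60-osquery-output.conf, assuming HELK already provides a beats input on port 5044, Elasticsearch on localhost:9200, and an osquery-result index prefix so it matches the Kibana index pattern in the next section:

filter {
  if [beat][hostname] == "fleet-controller" {
    # Fleet writes each osquery result as a JSON line; parse it out of the message field
    json {
      source => "message"
    }
    mutate {
      add_tag => [ "osquery" ]
    }
  }
}

output {
  if "osquery" in [tags] {
    elasticsearch {
      hosts => [ "localhost:9200" ]
      index => "osquery-result-%{+YYYY.MM.dd}"
    }
  }
}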

Kibana:

Choose Management -> Index Patterns -> Create Index Pattern

In the Index Pattern box, enter osquery-result-* and choose Next Step. From the Time Filter field name drop down, choose @timestamp and then choose Create index pattern.

Now go to Discover and choose osquery-result-* from the Index dropdown. You should see your queries filter in.

Go To Management -> Import and import the two files found in this gist. One is the visualizations and one is the dashboard for osquery.

Recap:

This was all done with free tools. Add Sysmon to the mix and you now have a comprehensive threat hunting platform and a rudimentary SIEM. Osquery supports Windows as well, allowing you to query every. single. host. in your environment in a matter of seconds. If you are on the security team and need support from other teams before you can roll it out, sell it to them: osquery isn’t just a security tool. Operations, helpdesk, compliance teams, etc. can all get value from it, and the best part? It’s free!

Further Reading/Resources:

https://medium.com/@palantir/osquery-across-the-enterprise-3c3c9d13ec55

https://blog.kolide.com/monitoring-macos-hosts-with-osquery-ba5dcc83122d

https://medium.com/@clong/osquery-for-security-b66fffdf2daf

https://medium.com/@clong/osquery-for-security-part-2-2e03de4d3721

The osquery slack channel: https://osquery.slack.com is super active and helpful! Also, if you run into any issues, /var/log/ is your friend 😉

Comments, suggestions, concerns… feel free to reach out to me on Twitter or by email!

The advice and scripts contained and referenced in this post are provided with no warranty. As always, never blindly trust scripts off the internet.

 

 

Automating the detection of Mimikatz with ELK

I’ve been going through CyberWarDog’s Threat Hunting posts as of late and stumbled upon his ‘Hunting for In-Memory Mimikatz’ series. The methods used to build signatures are very straightforward and seem to remove a barrier to entry for figuring out how to profile malicious tools.

The method used to detect Mimikatz is referred to as grouping, which consists of taking a set of unique artifacts and identifying when several of them appear together. For this post, we will use Cyberwardog’s guidance to build an alert for the detection of Mimikatz using Sysmon and the ELK Stack.

This post assumes you already have an ELK Stack stood up with ElastAlert configured. This can be stood up in minutes with the HELK. (ElastAlert will be merged soon)

I want to start out by saying this is definitely not the most elegant solution. The idea is simple enough: alert when 5 DLLs are accessed within 1 second of each other on the same host. Unfortunately, ElastAlert does not have this functionality built in, so Python it is.

The Sysmon config I am using is the Ion-Storm sysmon config. By default, the proper events (Sysmon event ID 7, ImageLoaded) are forwarded; see lines 571-579.

To get started, we need a script to handle some of the logic required to verify a couple things before we fire off an alert. I tried to make the python tool as modular as possible so that we can easily alert on other event ‘groupings’.

The 5 DLLs we will be detecting are:

Cryptdll.dll

Hid.dll

Samlib.dll

Vaultcli.dll

Winscard.dll

We will also only be alerting when these are accessed within one second of each other.

On your server running your ELK stack:

sudo nano /bin/py-alert.py

Paste in:
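The original script isn’t reproduced here; the following is a minimal sketch that implements the same flags described later in this post (-T, -a, -o, -c, -S, -t). Treat it as an assumption about the behaviour, not the author’s exact code; the real script lives alongside the rules in the GitHub repo below.

#!/usr/bin/env python3
# Sketch of a grouping/alert helper for ElastAlert command alerts.
# Flag names follow the behaviour described in this post.
import argparse
import json
import urllib.request


def document(outfile, computer):
    # 'Document' (D) action: append the alerting hostname to the grouping file.
    with open(outfile, 'a') as f:
        f.write(computer.strip() + '\n')


def send(alert, outfile, webhook, threshold):
    # 'Send' (S) action: post to Slack once the grouping file holds enough entries.
    try:
        with open(outfile) as f:
            hosts = [line.strip() for line in f if line.strip()]
    except FileNotFoundError:
        return
    if len(hosts) >= threshold:
        text = '%s detected on: %s' % (alert, ', '.join(sorted(set(hosts))))
        # Standard Slack incoming-webhook payload
        req = urllib.request.Request(
            webhook,
            data=json.dumps({'text': text}).encode(),
            headers={'Content-Type': 'application/json'})
        urllib.request.urlopen(req)
        open(outfile, 'w').close()   # reset the grouping file after alerting


if __name__ == '__main__':
    p = argparse.ArgumentParser(description='Grouping helper for ElastAlert command alerts')
    p.add_argument('-T', choices=['D', 'S'], required=True, help='Action: D(ocument) or S(end)')
    p.add_argument('-a', required=True, help='Alert name, e.g. Mimikatz')
    p.add_argument('-o', help='Grouping/output file (defaults to /tmp/<alert name>)')
    p.add_argument('-c', help='Hostname taken from the alert (Document action)')
    p.add_argument('-S', help='Slack webhook URL (Send action)')
    p.add_argument('-t', type=int, default=5, help='Entry threshold before alerting')
    args = p.parse_args()

    outfile = args.o or '/tmp/' + args.a.lower()
    if args.T == 'D':
        document(outfile, args.c or 'unknown')
    else:
        send(args.a, outfile, args.S, args.t)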

sudo chmod 755 /bin/py-alert.py

This script handles all of our logic and also sends our Slack Notification. Using the options, we can alert on any combination of events.

Add our individual rules to our ElastAlert rules directory.

Grab our rules off GitHub:

git clone https://github.com/jordanpotti/ElastAlertGrouper.git

Copy our rules into our ElastAlert rules directory:

sudo cp ElastAlertGrouper/alert_rules/* /etc/elastalert/alert_rules/

We now have 6 new rules in our rule directory. Each rule with a DLL name alerts when that given DLL is loaded.
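The actual rules are in the repo you just cloned; for orientation, the samlib rule looks roughly like the sketch below. The index name and the %(field)s substitution used to pass the hostname to the command alert are assumptions that depend on your HELK/Winlogbeat index mapping and ElastAlert version, so they may not match the repo exactly.

# samlib.yaml (sketch)
es_host: localhost
es_port: 9200
name: samlib
type: any
index: winlogbeat-*          # whatever index holds your Sysmon events
realert:
  minutes: 0
filter:
- query:
    query_string:
      query: "event_data.ImageLoaded: *samlib.dll"
alert:
- command
command: ["python3", "/bin/py-alert.py", "-T", "D", "-a", "Mimikatz", "-o", "/tmp/mimikatz", "-c", "%(computer_name)s"]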

As you can see, we have the typical alert rule options and we are querying for samlib in event_data.ImageLoaded. When this alert is tripped, it calls our python script with this command:

python3 /bin/py-alert.py -T D -a Mimikatz -o /tmp/mimikatz -c $ComputerName

-T tells the script what action to take; in this case we are just writing the hostname to a file, so we want the ‘Document’ (D) option.

-a is the alert type, in this case Mimikatz

-c is the hostname taken from the alert.

This is reflected across all DLL alerts. So when Mimikatz is run, the output file will have 5 hostnames written to it.

Now let’s take a look at our Mimikatz rule.

This alert uses a frequency rule as well as a different index. ElastAlert has its own writeback index in which it records an entry every time a rule queries and fires, so we can check that index to see whether all five of the DLL alerts fired within one second. It does this by filtering for only the DLL rules, returning only those with the alert_sent flag set to true, and alerting only if it identifies 5 results within 1 second.
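A sketch of that grouping rule follows; the real one is in the repo, and the writeback index name and fields here assume a default ElastAlert install, so adjust as needed.

# Mimikatz.yaml (sketch)
es_host: localhost
es_port: 9200
name: Mimikatz
type: frequency
index: elastalert_status     # ElastAlert's writeback index
num_events: 5
timeframe:
  seconds: 1
filter:
- terms:
    rule_name: ["cryptdll", "hid", "samlib", "vaultcli", "winscard"]   # must match the per-DLL rule names
- term:
    alert_sent: true
realert:
  minutes: 5
alert:
- command
command: ["python3", "/bin/py-alert.py", "-T", "S", "-S", "SLACKWEBHOOK", "-a", "Mimikatz", "-t", "5"]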

You will need to generate a Slack Web Hook and replace ‘SLACKWEBHOOK’ with your web hook. 

The alert is a command calling our Python script again:

python3 /bin/py-alert.py -T S -S SLACKWEBHOOK -a Mimikatz -t 5

-T tells the script that we want to perform a ‘Send’ action.

-S needs to have our Slack Web Hook

-a tells the script what detection type we are alerting on

-t tells the script to alert only if there are 5 or more entries in our output file.

This last part is important. This number should always be the amount of rules that make up your grouping.

Run Mimikatz:

Profit:

One thing to keep in mind is that these have been tested in a lab environment with a small population of end points. Deploying this in production will likely involve major tuning.

Check out these labs: https://cyberwardog.blogspot.com/2017/02/setting-up-pentesting-i-mean-threat_98.html for an in-depth guide on how to set this stuff up manually as well as build the lab around it.

If you run into any issues, feel free to reach out to me on Twitter or by email!

The advice and scripts contained and referenced in this post are provided with no warranty. As always, never blindly trust scripts off the internet.

 

Using Elastic Curator To Clean Up ELK

I recently set up ELK to begin collecting Sysmon logs for security monitoring in my lab. The problem I could foresee running into was disk space: when my ELK server runs out of space, it simply runs out of space. I needed a way to clean up the logs when the server approaches a threshold, which led me to Elastic Curator. Curator lets us manage our indices, including deleting indices older than a given number of days. However, that is just a one-off command, so I wanted to add some logic to the process.

Disclaimer: This is strictly for single-node setups. If you have a large setup with multiple nodes or clusters, you will want a different solution. (You can still use this method, but you might run into some weird issues.)

Install Curator:

sudo pip install elasticsearch-curator

Create Curator Directory:

sudo mkdir /etc/curator

Create config file:

sudo nano /etc/curator/config.yml

Paste in:
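The original isn’t shown; a sketch of a standard Curator client config for a single local node (adjust hosts/port if Elasticsearch isn’t on localhost:9200):

# /etc/curator/config.yml (sketch)
client:
  hosts:
    - 127.0.0.1
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']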

Create delete config action file:

sudo nano /etc/curator/delete-after.yml

Paste in:
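Again, the original isn’t shown; this sketch assumes daily time-based indices and uses DAYS as a placeholder for unit_count, which the cleanup script further down substitutes on each pass (hard-code a number instead if you want to run Curator by hand):

# /etc/curator/delete-after.yml (sketch)
actions:
  1:
    action: delete_indices
    description: "Delete indices older than DAYS days"
    options:
      ignore_empty_list: True
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-          # replace with your index prefix, e.g. osquery-result- or winlogbeat-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: DAYS          # placeholder filled in by cleanup.sh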

Unless you followed this guide, you will most likely have to change some of the details in this action config file, namely the filters -> value field, where you will need to put the name of your index.

Create Script file:

sudo nano /etc/curator/cleanup.sh

Paste in:
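The original script isn’t reproduced here; the sketch below implements the logic described in the warning that follows and assumes the DAYS placeholder from the action file above. Line positions won’t match the “line 8” reference in the warning, so adjust the starting value of the days variable wherever it lands in your copy. Don’t forget to make the script executable (sudo chmod +x /etc/curator/cleanup.sh), and use the full path to curator if cron can’t find it.

#!/bin/bash
# Sketch: delete the oldest indices until "/" usage drops below THRESHOLD,
# never deleting below MIN_DAYS days of indices.

THRESHOLD=80   # start deleting once "/" usage reaches this percentage
MIN_DAYS=2     # never delete the most recent MIN_DAYS days of indices
days=90        # start looking for indices this old and work forward

get_usage() {
  df / | awk 'NR==2 {print $5}' | tr -d '%'
}

while [ "$(get_usage)" -ge "$THRESHOLD" ] && [ "$days" -gt "$MIN_DAYS" ]; do
  # Render the action file for the current age and let Curator delete any matches;
  # ignore_empty_list keeps Curator quiet when nothing is that old yet.
  sed "s/DAYS/$days/" /etc/curator/delete-after.yml > /tmp/delete-after.yml
  curator --config /etc/curator/config.yml /tmp/delete-after.yml
  days=$((days - 1))
done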

Warning: The above script checks whether disk usage is below 80 percent, based on the free space of the root partition (or whatever drive is mounted on “/”). If free space drops below 20 percent, it begins deleting the oldest indices; after each deletion it checks again and deletes the next oldest until disk usage is back below 80 percent. It stops deleting, regardless of disk space, once fewer than 2 days of indices remain. Since I couldn’t find an easy way to simply delete the oldest index, the script starts at 90 days ago and moves forward until it starts finding indices; if you keep more than that, adjust the script (the days variable at line 8). You may also want more than 2 days as a safety net.

Add script to cron:

sudo su
(crontab -l 2>/dev/null; echo "5 0 * * * /etc/curator/cleanup.sh") | crontab -

This cron job will run the script at five minutes past midnight, every day.

Curator should be added to the HELK soon. If you aren’t aware of the HELK and want to get into threat hunting (or just want a super quick way to spin up an ELK stack), you should definitely look into it. The incredible HELK (Hunting, Elasticsearch, Logstash, Kibana) makes it as easy as running a script to set up a reliable ELK stack tailored for threat hunting.

Check out these labs: https://cyberwardog.blogspot.com/2017/02/setting-up-pentesting-i-mean-threat_98.html for an in-depth guide on how to set this stuff up manually as well as build the lab around it.

If you run into any issues, feel free to reach out to me on Twitter or by email!

The advice and scripts contained and referenced in this post are provided with no warranty. As always, never blindly trust scripts off the internet, let alone throw them into a cron job running as root.