Automating the detection of Mimikatz with ELK

I’ve been going through CyberWarDog’s Threat Hunting posts lately and stumbled upon his ‘Hunting for In-Memory Mimikatz’ series. The methods he uses to build signatures are straightforward and go a long way toward removing the barrier to entry for profiling malicious tools.

The method used to detect Mimikatz is referred to as grouping: take a set of unique artifacts and identify when several of them appear together within a short window. For this post, we will use CyberWarDog’s guidance to build an alert for the detection of Mimikatz using Sysmon and the ELK stack.

This post assumes you already have an ELK stack running with ElastAlert configured. One can be stood up in minutes with the HELK. (ElastAlert will be merged into the HELK soon.)

I want to start out by saying this is definitely not the most elegant solution. The idea is simple enough: alert when five DLLs are accessed within one second of each other on the same host. Unfortunately, ElastAlert does not have this functionality built in, so Python it is.

The Sysmon config I am using is the Ion-Storm Sysmon config, which forwards the proper image-load events by default (lines 571-579 of that config).

To get started, we need a script to handle some of the logic required to verify a couple of things before we fire off an alert. I tried to make the Python tool as modular as possible so that we can easily alert on other event ‘groupings’.

The five DLLs we will be detecting are:

Cryptdll.dll

Hid.dll

Samlib.dll

Vaultcli.dll

Winscard.dll

We will also only be detecting these if they happen to be accessed within one second of each other.

On your server running your ELK stack:

sudo nano /bin/py-alert.py

Paste in:
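(The authoritative script lives in the ElastAlertGrouper repo cloned below. If you just want the gist, here is a minimal sketch of the same logic; the option names match this post, but treat the implementation details as an approximation.)

#!/usr/bin/env python3
# Minimal sketch of py-alert.py -- see the ElastAlertGrouper repo for the real script.
import argparse
import json
import os
import urllib.request


def document(output_file, hostname):
    # 'Document' (-T D): append the alerting hostname to the output file.
    with open(output_file, "a") as f:
        f.write(hostname + "\n")


def send(webhook, alert_name, output_file, threshold):
    # 'Send' (-T S): post to Slack once the output file holds at least
    # `threshold` entries (one entry per DLL rule that fired).
    if not os.path.isfile(output_file):
        return
    with open(output_file) as f:
        entries = [line.strip() for line in f if line.strip()]
    if len(entries) < threshold:
        return
    hosts = ", ".join(sorted(set(entries)))
    payload = {"text": "%s detected on: %s" % (alert_name, hosts)}
    req = urllib.request.Request(webhook, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
    os.remove(output_file)  # reset for the next grouping window


def main():
    p = argparse.ArgumentParser()
    p.add_argument("-T", choices=["D", "S"], required=True, help="action: Document or Send")
    p.add_argument("-a", help="alert type, e.g. Mimikatz")
    p.add_argument("-o", default="/tmp/mimikatz", help="output file for hostnames")
    p.add_argument("-c", help="hostname taken from the alert")
    p.add_argument("-S", help="Slack webhook URL")
    p.add_argument("-t", type=int, default=5, help="entry threshold before alerting")
    args = p.parse_args()

    if args.T == "D":
        document(args.o, args.c)
    else:
        send(args.S, args.a, args.o, args.t)


if __name__ == "__main__":
    main()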

sudo chmod 755 /bin/py-alert.py

This script handles all of our logic and also sends our Slack notification. Using its options, we can alert on any combination of events.

Next, add the individual rules to our alert rules directory.

Grab our rules off GitHub:

git clone https://github.com/jordanpotti/ElastAlertGrouper.git

Copy our rules into our ElastAlert rules directory:

sudo cp ElastAlertGrouper/alert_rules/* /etc/elastalert/alert_rules/

We now have 6 new rules in our rule directory. Each rule with a DLL name alerts when that given DLL is loaded.

Each DLL rule has the typical alert rule options and queries for its DLL name in event_data.ImageLoaded. Condensed, the samlib rule looks roughly like this (the authoritative copy is in the repo; the wildcard match and field names here are a sketch):
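es_host: localhost
es_port: 9200
name: "samlib"
index: winlogbeat-*
filter:
- query:
    wildcard:
      event_data.ImageLoaded: "*samlib*"
type: any
alert:
- command
command: ["python3", "/bin/py-alert.py", "-T", "D", "-a", "Mimikatz", "-o", "/tmp/mimikatz", "-c", "%(computer_name)s"]

When this alert is tripped, ElastAlert substitutes the matched event’s hostname into the command and calls our Python script like so: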

python3 /bin/py-alert.py -T D -a Mimikatz -o /tmp/mimikatz -c $ComputerName

-T tells the script what action to take; in this case we just want to write the hostname to a file, so we use the ‘Document’ or D option.

-a is the alert type, in this case Mimikatz.

-o is the file the alerting hostnames get written to, here /tmp/mimikatz.

-c is the hostname taken from the alert.

This is reflected across all of the DLL alerts, so when Mimikatz is run, the output file ends up with five hostname entries.

Now let’s take a look at our Mimikatz rule.
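Condensed, it looks something like this (again a sketch; the rule_name values must match the names of the five DLL rules above):

es_host: localhost
es_port: 9200
name: "Mimikatz"
index: elastalert_status
type: frequency
num_events: 5
timeframe:
  seconds: 1
filter:
- terms:
    rule_name: ["cryptdll", "hid", "samlib", "vaultcli", "winscard"]
- term:
    alert_sent: true
alert:
- command
command: ["python3", "/bin/py-alert.py", "-T", "S", "-S", "SLACKWEBHOOK", "-a", "Mimikatz", "-t", "5"]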

This alert uses the frequency rule type as well as a different index. ElastAlert maintains its own index (elastalert_status) that records every alert it fires. So we can check that index to see whether all five of the DLL alerts fired in less than one second. The rule does this by filtering for only the DLL rules, returning only those with the alert_sent flag set to true, and alerting only if it identifies five results within one second.

You will need to generate a Slack webhook and replace ‘SLACKWEBHOOK’ with your webhook URL.

The alert is a command calling our Python script again:

python3 /bin/py-alert.py -T S -S SLACKWEBHOOK -a Mimikatz -t 5

-T tells the script that we want to perform a ‘Send’ action.

-S is our Slack webhook URL.

-a tells the script what detection type we are alerting on.

-t tells the script to only alert if the output file holds 5 or more hostname entries.

This last part is important: the threshold should always match the number of rules that make up your grouping.

Run Mimikatz:

Profit:

One thing to keep in mind is that these have been tested in a lab environment with a small population of endpoints. Deploying this in production will likely involve major tuning.

Check out these labs: https://cyberwardog.blogspot.com/2017/02/setting-up-pentesting-i-mean-threat_98.html for an in-depth guide on how to set this stuff up manually as well as build the lab around it.

If you run into any issues, feel free to reach out to me on Twitter or by email!

The advice and scripts contained and referenced in this post are provided with no warranty. As always, never blindly trust scripts off the internet.

 

Using Elastic Curator To Clean Up ELK

I recently set up ELK to begin collecting logs from Sysmon for security monitoring in my lab. The problem I could foresee running into was disk space: when my ELK server runs out of space, it simply stops collecting logs. I needed a way to clean up the logs when the server began to reach a threshold, which led me to Elastic Curator. Curator allows us to manage our indices, including deleting indices older than a given number of days. However, that is just a one-off command, so I wanted to add some logic to the process.

Disclaimer: This is strictly for single-node setups. If you have a large setup with multiple clusters, you will want a different solution. (You can still use this method, but you might run into some weird issues.)

Install Curator:

sudo pip install elasticsearch-curator

Create Curator Directory:

sudo mkdir /etc/curator

Create config file:

sudo nano /etc/curator/config.yml

Paste in:
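(A minimal client config for a default single-node install; the host and port below are the stock Elasticsearch defaults, so adjust to taste.)

client:
  hosts:
    - 127.0.0.1
  port: 9200
  use_ssl: False
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']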

Create delete config action file:

sudo nano /etc/curator/delete-after.yml

Paste in:
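(A delete_indices action along these lines does the job. The ‘logstash-’ prefix is an assumption about your index name, and the unit_count value is what the cleanup script below rewrites on each pass.)

actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than the given number of days, based on the
      date in the index name.
    options:
      ignore_empty_list: True
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 90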

Unless you followed this guide, you will most likely have to change some of the details in this action config file, namely the filters -> value field, where you will need to put the name (prefix) of your index.

Create Script file:

sudo nano /etc/curator/cleanup.sh

Paste in:
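(A sketch reconstructing the logic described in the warning below; it assumes pip put curator in /usr/local/bin and uses the config and action files above.)

#!/bin/bash
# A sketch of cleanup.sh: delete the oldest indices until disk usage on /
# falls below THRESHOLD, but always keep at least MIN_DAYS days of indices.

THRESHOLD=80   # start deleting when / is this full (percent)
MIN_DAYS=2     # never delete the newest N days of indices

days=90        # start looking for indices older than this many days

disk_usage() {
  # current usage of the filesystem mounted on "/", as a bare number
  df / | awk 'NR==2 {sub(/%/, "", $5); print $5}'
}

while [ "$(disk_usage)" -ge "$THRESHOLD" ] && [ "$days" -ge "$MIN_DAYS" ]; do
  # point the action file at indices older than $days, then delete them
  sed -i "s/unit_count:.*/unit_count: $days/" /etc/curator/delete-after.yml
  /usr/local/bin/curator --config /etc/curator/config.yml /etc/curator/delete-after.yml
  days=$((days - 1))
done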

Warning: The above script checks disk usage on the root (or whatever drive is mounted on “/”). If free space drops below 20 percent, it begins deleting the oldest indices; after each deletion it checks again and deletes the next oldest until disk usage is back under 80 percent. It stops deleting, regardless of disk space, once there are fewer than 2 days of indices left. Since I couldn’t find an easy way to simply delete the oldest indices, the script starts at 90 days ago and moves forward until it starts finding indices; if you keep more than that, please adjust the script (the days variable at line 8). You may also want more than 2 days as a safety net.

Add script to cron:

sudo su
(crontab -l 2>/dev/null; echo "5 0 * * * /etc/curator/cleanup.sh") | crontab -

This cron job will run the script at five minutes past midnight, every day.

Curator should be added to the HELK soon. If you aren’t aware of the HELK and want to get into threat hunting (or just want a super quick way to spin up an ELK stack), you should definitely look into it. The incredible HELK (Hunting, Elasticsearch, Logstash, Kibana) makes it as easy as running a script to set up a reliable ELK stack tailored for threat hunting.

Check out these labs: https://cyberwardog.blogspot.com/2017/02/setting-up-pentesting-i-mean-threat_98.html for an in-depth guide on how to set this stuff up manually as well as build the lab around it.

If you run into any issues, feel free to reach out to me on Twitter or by email!

The advice and scripts contained and referenced in this post are provided with no warranty. As always, never blindly trust scripts off the internet, let alone throw them into a cron job running as root.

 

Using ElastAlert to Help Automate Threat Hunting

I first want to say thanks to CyberWarDog for his fantastic lab walkthrough for setting up a threat hunting lab. It is hands down the best guide I have read on getting started with threat hunting. I followed his guide and got my lab completely set up. I then decided that ElastAlert would be pretty nice for getting some of the high-likelihood IOCs sent off to a security team for further analysis. This post will guide you through setting up ElastAlert to get notifications when certain actions are logged.

This guide assumes you have gone through all parts of CyberWarDog’s tutorials: https://cyberwardog.blogspot.com/2017/02/setting-up-pentesting-i-mean-threat.html

Not required, but this guide also assumes that you have set up enhanced PowerShell logging so that we can begin to capture useful PowerShell data. https://cyberwardog.blogspot.com/2017/06/enabling-enhanced-ps-logging-shipping.html

Also not required but useful for this guide: A Slack Channel.

Cool, ready to go?

  • SSH or Console into your Ubuntu server running your ELK stack.
  • Download Elastalert from Yelp’s GitHub.
git clone https://github.com/yelp/elastalert
  • Copy Elastalert to ‘/etc/’
sudo cp -r elastalert /etc/
  • Change directory into your new Elastalert directory.
cd /etc/elastalert
  • If not already installed, install pip.
sudo apt install python-pip
  • Install Elastalert
pip install elastalert
  • Install elasticsearch-py:
pip install "elasticsearch>=5.0.0"
  • Install dependencies:
pip install -r requirements.txt
  • Let’s make a directory for our ElastAlert templates:
sudo mkdir templates
  • Change directory into our new templates directory
cd templates
  • Create a new template for monitoring commands executed:
sudo nano cmd_template.yaml

Paste:

es_host: localhost
es_port: 9200
name: "PLACEHOLDER"
index: winlogbeat-*
filter:
- terms:
    event_data.CommandLine: ["PLACEHOLDER"]
type: any
alert:
- slack
slack_webhook_url: "SLACK_WEB_HOOK"

es_host: This is the host your ELK stack is running on.

es_port: This is the port Elastic Search is listening on.

index: This is the index you setup with CyberWarDog’s blog.

filter: This tells ElastAlert to filter its search; in this case we filter with ‘terms’, looking for an ‘event_data.CommandLine’ value that equals whatever we put in place of PLACEHOLDER.

type: This means ElastAlert should alert on any match that our filter hits. We could also set the type to alert on new values, a spike in certain logs, a lack of logs, and a bunch of other cool things.

alert: This tells ElastAlert how to alert you! There are a bunch of ways to receive these alerts, and I chose Slack for its simplicity to set up and use. For more options you can visit: http://elastalert.readthedocs.io/en/latest/ruletypes.html#alerts

  • Create a new template for monitoring PowerShell commands executed:
sudo nano powershell_template.yaml

Paste:

es_host: localhost
es_port: 9200
name: "PLACEHOLDER"
index: winlogbeat-*
filter:
- terms:
    powershell.scriptblock.text: ["PLACEHOLDER"]
type: any
alert:
- slack
slack_webhook_url: "SLACK_WEB_HOOK"
  • Create your main config.yaml file.
cd ..
sudo nano config.yaml

Paste:

rules_folder: alert_rules
run_every:
    seconds: 30
buffer_time:
    seconds: 60
es_host: localhost
es_port: 9200
alert_time_limit:
    days: 1
writeback_index: elastalert_status
alert_text: "Username: {0} \nHost: {1} \nTime: {2} \nLog:{3}"
alert_text_type: alert_text_only
alert_text_args: ["user.name","host", "@timestamp","log_name"]

To change the body of the alert, modify the last three lines; you can add or remove attributes to include in your report. https://elastalert.readthedocs.io/en/latest/ruletypes.html#alert-content
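For example, to also include the command line that tripped the rule (using the Sysmon field from earlier), you could extend them like so:

alert_text: "Username: {0} \nHost: {1} \nTime: {2} \nLog: {3} \nCommand: {4}"
alert_text_type: alert_text_only
alert_text_args: ["user.name","host","@timestamp","log_name","event_data.CommandLine"]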

  • Create our Rules directory:
sudo mkdir alert_rules
cd alert_rules
  • Copy our templates here:
sudo cp ../templates/* .
  • Make copies of our templates.
cp cmd_template.yaml cmd_whoami.yaml
cp powershell_template.yaml powershell_invoke_webrequest.yaml
  • Modify cmd_whoami.yaml to alert when whoami is executed.
sudo nano cmd_whoami.yaml
  • Replace the PLACEHOLDER text in both locations with ‘whoami’. You can also copy this file many times over to alert on multiple commands being run.
es_host: localhost
es_port: 9200
name: "whoami"
index: winlogbeat-*
filter:
- terms:
    event_data.CommandLine: ["whoami"]
type: any
alert:
- slack
slack_webhook_url: "SLACK_WEB_HOOK"
  • Modify powershell_invoke_webrequest.yaml to alert when Invoke-WebRequest is executed:
sudo nano powershell_invoke_webrequest.yaml
es_host: localhost
es_port: 9200
name: "invoke-webrequest"
index: winlogbeat-*
filter:
- terms:
    powershell.scriptblock.text: ["webrequest"]
type: any
alert:
- slack
slack_webhook_url: "SLACK_WEB_HOOK"

Note: indexed terms are lowercased by the default analyzer, so only query lowercase terms (hence ‘webrequest’ rather than ‘Invoke-WebRequest’).

  • Remove the two template files we copied over:
sudo rm *template.yaml
  • Run elastalert-create-index and follow the prompts
elastalert-create-index

Remember: your host is localhost and your port is 9200. If you followed CyberWarDog’s guide, you also do not have authentication set up for Elasticsearch (you used nginx instead), so leave the username and password empty. You also don’t have SSL or TLS set up.

  • Change directory back to /etc/elastalert
cd /etc/elastalert
  • Run elastalert with --verbose:
elastalert --verbose
  • Go to your Windows machine running winlogbeat and open up your command prompt.
  • Enter whoami and monitor your Slack.
whoami
  • Profit

Commands you may want to monitor for:

whoami

netstat

wmic

PowerShell functions you may want to monitor:

Invoke-WebRequest

Invoke-Obfuscation

DownloadString

Invoke-ShellCommand

If you are going to take this threat hunting thing seriously, you will most likely want to add alerts for spikes, frequency, cardinality, and the many other rule types that are good ideas to check for on any production system.
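As a taste, a frequency rule needs only a couple of lines beyond the ‘any’ type used above. This hypothetical example fires when more than 50 process-creation events (Sysmon event ID 1) land within five minutes:

es_host: localhost
es_port: 9200
name: "process-create-spike"
index: winlogbeat-*
type: frequency
num_events: 50
timeframe:
  minutes: 5
filter:
- term:
    event_id: 1
alert:
- slack
slack_webhook_url: "SLACK_WEB_HOOK"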

For comments, questions, concerns you can reach me at Twitter or via Email

[UPDATE: Several issues fixed 12/26]