ELK + Osquery + Kolide Fleet = Love

Threat hunting on Linux and Mac has probably never been easier. With the combination of these tools, we can query all of our hosts on demand for IOCs, schedule queries to run on an automated basis, and feed all of the results into our SIEM. Osquery is even platform agnostic, so we can deploy it across all endpoints regardless of host OS.

Osquery – A tool that lets us query devices as if they were databases. It was built by Facebook with performance in mind.

Kolide Fleet – A flexible control server for osquery fleets. Fleet allows us to query multiple hosts on demand, create query packs, build schedules, and manage the hosts in our environment.

Elastic Stack – Elasticsearch, Logstash, and Kibana are tools that allow for the collection, normalization, and visualization of logs.

This post will assume the following:

  1. You have an Elastic Stack configured. This has most likely never been easier: simply check out Roberto Rodriguez’s HELK (Hunting ELK) and run the setup script. For this post I am using HELK, so it should be all you need. If you like to do things manually so you understand how everything works, Roberto Rodriguez has you covered; head over to his site and follow his tutorials (they are top-notch).

What you will need:

  1. At least one Linux host to run your osquery daemon. You can also run it on the same box that is running Kolide or your Elastic Stack.
  2. An Ubuntu 16.04 server to run Kolide Fleet. You can run this on the same box as your Elastic Stack; this tutorial uses a separate host for Kolide Fleet, so I will note what you might need to change to make it work on the same server.

Before we begin, make sure to run:

apt update && apt upgrade

Kolide Setup:

You can use Kolide’s official documentation for most of this if you’d like. I basically customized their install guide to better fit our purpose.

Official Setup Guide

Install MySQL:

Use the password: ‘kolide’ (Or whatever you want, just adjust accordingly as you go)
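The exact commands aren’t reproduced here, but on Ubuntu 16.04 the install looks roughly like this (the kolide database name matches what we pass to Fleet below):

sudo apt install mysql-server
# When prompted, set the MySQL root password to 'kolide' (or your own value)
mysql -u root -p -e 'CREATE DATABASE kolide;'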

Install and run Redis:
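For a quick lab install on Ubuntu, something like this should do:

sudo apt install redis-server
sudo systemctl start redis-server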

Or, if you have time, use the proper procedure to run Redis, although that is totally unnecessary for the purposes of this guide!

Fleet Server Setup:
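The original install commands aren’t reproduced here. As a rough sketch (binary location, certificate paths, and the JWT key are placeholders you will need to adjust; check the Fleet documentation for your version): download the latest Fleet release from Kolide’s GitHub releases page, unzip it somewhere on your path, generate a TLS certificate, then initialize the database and start the server:

# Self-signed cert for the lab (replace the CN with your Fleet hostname/IP)
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout /tmp/server.key -out /tmp/server.cert -subj "/CN=localhost"

# Initialize the database schema
./fleet prepare db --mysql_address=127.0.0.1:3306 --mysql_database=kolide \
  --mysql_username=root --mysql_password=kolide

# Start the Fleet server
./fleet serve --mysql_address=127.0.0.1:3306 --mysql_database=kolide \
  --mysql_username=root --mysql_password=kolide --redis_address=127.0.0.1:6379 \
  --server_cert=/tmp/server.cert --server_key=/tmp/server.key \
  --auth_jwt_key=<some random string>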

Now, if you go to https://localhost:8080 in your local browser, you should be redirected to https://localhost:8080/setup where you can create your first Fleet user account.

We need to create some queries now. You can do this with the GUI, or you can run the importer tool found here. The importer tool is a bit buggy, so for the purposes of this post we will just configure the queries manually.

Go to Packs -> Manage Packs -> Create New Pack

Name the pack ‘linux_collection’ and add a description if you’d like. Under Select Pack Targets, choose All Hosts.

Now, on the right-hand side of the page, you should see Select Query. Click that drop-down and choose ‘crontab’.

Set the interval to 60, the Platform to Linux, the minimum version to All and the Logging to Snapshot. Set the shard to 100.

Setting Logging to Snapshot simply returns all of the results each time; Differential returns only the changes since the last query, which is good for monitoring for malicious changes.

The shard is the percent of hosts that this Osquery pack will query. In a large environment, it might make sense to only query a portion of the hosts every time instead of all of them.

Repeat these steps for the following queries: etc_hosts, iptables, listening_ports, mounts, open_files, and shell_history.

All of these queries can be run on demand as well.
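For example, from the query screen in Fleet you can run an ad-hoc SQL query such as the following against all hosts and watch the results stream back:

SELECT * FROM listening_ports WHERE port NOT IN (22, 443);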

Osquery Install:

Now we need to install osquery on your Linux host. As I mentioned at the beginning of the post, you can install this on the same machine that is running Kolide or your Elastic Stack, or on a standalone box.
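The install commands aren’t included above; at the time of writing, adding the official osquery apt repository on Ubuntu looked roughly like this (check osquery.io for the current instructions and key):

export OSQUERY_KEY=1484120AC4E9F8A1A577AEEE97A80C63C9D8B80B
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys $OSQUERY_KEY
sudo add-apt-repository 'deb [arch=amd64] https://pkg.osquery.io/deb deb main'
sudo apt update
sudo apt install osquery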

Back on your new Kolide instance, select ‘Add New Hosts’ and copy the enroll secret. Select Fetch Kolide Certificate and move the certificate to your Linux box at /var/osquery/server.pem. Instead of using scp or the like, you can simply open the certificate file with a text editor and copy/paste it into your Linux terminal.

Now, back on the host where you are installing osquery, replace <enroll secret> with the secret provided by Fleet, and on line 11, replace localhost with the IP of the server running Kolide Fleet.
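The original flag file isn’t reproduced here, but a Fleet enrollment flag file generally looks something like the sketch below (the file paths and port are assumptions; the TLS endpoints are Fleet’s defaults):

# Drop the enroll secret into a file osquery can read
echo '<enroll secret>' | sudo tee /var/osquery/enroll_secret

# /etc/osquery/osquery.flags
--enroll_secret_path=/var/osquery/enroll_secret
--tls_server_certs=/var/osquery/server.pem
--tls_hostname=localhost:8080
--host_identifier=hostname
--enroll_tls_endpoint=/api/v1/osquery/enroll
--config_plugin=tls
--config_tls_endpoint=/api/v1/osquery/config
--config_tls_refresh=10
--disable_distributed=false
--distributed_plugin=tls
--distributed_interval=10
--distributed_tls_max_attempts=3
--distributed_tls_read_endpoint=/api/v1/osquery/distributed/read
--distributed_tls_write_endpoint=/api/v1/osquery/distributed/write
--logger_plugin=tls
--logger_tls_endpoint=/api/v1/osquery/log
--logger_tls_period=10

Then start the daemon with sudo osqueryd --flagfile=/etc/osquery/osquery.flags --verbose (or point the osqueryd service at the flag file).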

If you go back to your Kolide app, you should see your new host appear!

Filebeat Installation:

We need to use Filebeat to move our osquery logs over to our Elastic Stack. If you are running osquery on the same machine as your Elastic Stack, you don’t need Filebeat; you can simply use the Logstash file plugin to pull the logs from the log file and push them to Elasticsearch.

On the server running Kolide run:
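The command isn’t shown here; installing Filebeat from Elastic’s apt repository looks roughly like this (pick the repository that matches your stack’s version):

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-6.x.list
sudo apt update && sudo apt install filebeat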

In the filebeat config file paste the following:
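The original config isn’t reproduced here, but a minimal /etc/filebeat/filebeat.yml for this purpose looks roughly like the following. The log path is an assumption: it should match whatever result log file your Fleet server is writing osquery results to.

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/osquery/osqueryd.results.log

output.logstash:
  hosts: ["localhost:5044"]
  # ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]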

On line 31, replace localhost with your Logstash server IP.

If you are using the HELK out of the box, you will not need to configure ssl.certificate_authorities. However, if you followed the manual ELK setup and/or would like to use an SSL cert to encrypt the log traffic, you will need to configure that line with the cert that Logstash is using.

Now run:

service filebeat start

Logstash:

On your Elastic Stack server, or whichever server Filebeat is forwarding osquery logs to, you will need to add some Logstash filters.

Add a new output filter:

sudo nano /etc/logstash/conf.d/60-osquery-output.conf

Replace ‘fleet-controller’ with the hostname of your Kolide server and paste this into the conf file you just opened:
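The contents aren’t reproduced here; a rough sketch of such an output section is below. The exact hostname field depends on your Filebeat version (it may be [beat][hostname] or [host]), and the index name matches the pattern we will create in Kibana.

output {
  if [beat][hostname] == "fleet-controller" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "osquery-result-%{+YYYY.MM.dd}"
    }
  }
}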

Kibana:

Choose Management -> Index Patterns -> Create Index Pattern

In the Index Pattern box, enter osquery-result-* and choose Next Step. From the Time Filter field name drop down, choose @timestamp and then choose Create index pattern.

Now go to Discover and choose osquery-result-* from the index drop-down. You should see your query results start to come in.

Go To Management -> Import and import the two files found in this gist. One is the visualizations and one is the dashboard for osquery.

Recap:

This was all done with free tools. Add Sysmon to the mix and you now have a comprehensive threat hunting platform and a rudimentary SIEM. Osquery has support for Windows as well, allowing you to query every. single. host. in your environment in a matter of seconds. If you are in security and need support from other teams before you can roll it out, sell it to them: osquery isn’t just a security tool. Operations, helpdesk, compliance teams, etc. can all get value from this, and the best part? It’s free!

Further Reading/Resources:

https://medium.com/@palantir/osquery-across-the-enterprise-3c3c9d13ec55

https://blog.kolide.com/monitoring-macos-hosts-with-osquery-ba5dcc83122d

https://medium.com/@clong/osquery-for-security-b66fffdf2daf

https://medium.com/@clong/osquery-for-security-part-2-2e03de4d3721

The osquery Slack channel (https://osquery.slack.com) is super active and helpful! Also, if you run into any issues, /var/log/ is your friend 😉

Comments, suggestions, concerns… feel free to reach out to me on Twitter or by email!

The advice and scripts contained and referenced in this post are provided with no warranty. As always, never blindly trust scripts off the internet.

 

 

Automating the detection of Mimikatz with ELK

I’ve been going through CyberWarDog’s Threat Hunting posts as of late and stumbled upon his ‘Hunting for In-Memory Mimikatz’ series. The methods used to build signatures are very straightforward and seem to remove a barrier to entry for figuring out how to profile malicious tools.

The method used to detect Mimikatz is referred to as grouping, which consists of taking a set of unique artifacts and identifying when several of them appear together. So for this post, we will use CyberWarDog’s guidance to build an alert for the detection of Mimikatz using Sysmon and the ELK Stack.

This post assumes you already have an ELK Stack stood up with ElastAlert configured. This can be stood up in minutes with the HELK. (ElastAlert will be merged soon)

I want to start out by saying this is definitely not the most elegant solution. The idea was simple enough: alert when 5 DLLs are accessed within 1 second of each other from the same host. Unfortunately, ElastAlert does not have this functionality built in, so Python it is...

The Sysmon config I am using is the Ion-Storm sysmon config. By default, the proper events are forwarded. Lines 571-579.

To get started, we need a script to handle some of the logic required to verify a couple things before we fire off an alert. I tried to make the python tool as modular as possible so that we can easily alert on other event ‘groupings’.

The 5 DLLs we will be detecting are:

  • Cryptdll.dll
  • Hid.dll
  • Samlib.dll
  • Vaultcli.dll
  • Winscard.dll

We will also only be detecting these if they happen to be accessed within one second of each other.

On your server running your ELK stack:

sudo nano /bin/py-alert.py

Paste in:
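The original script isn’t reproduced in this post; the sketch below is a rough reimplementation of the logic described, not the author’s original. The default output path and treating the threshold as “at least N entries for the same host” are my assumptions.

#!/usr/bin/env python3
# Rough sketch of the grouping/alerting logic described in this post -- not the original script.
import argparse
import json
import os
import urllib.request
from collections import Counter

def document(path, hostname):
    """Append the alerting hostname to the tracking file."""
    with open(path, "a") as f:
        f.write(hostname.strip() + "\n")

def send(path, webhook, alert, threshold):
    """If any single host shows up at least `threshold` times, alert to Slack and reset."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        hosts = [line.strip() for line in f if line.strip()]
    counts = Counter(hosts)
    hits = [h for h, c in counts.items() if c >= threshold]
    if hits:
        text = "{} detection fired on: {}".format(alert, ", ".join(hits))
        req = urllib.request.Request(
            webhook,
            data=json.dumps({"text": text}).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
        os.remove(path)  # reset the tracking file after alerting

if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("-T", choices=["D", "S"], required=True, help="D = document hostname, S = send alert")
    p.add_argument("-a", required=True, help="alert/detection name, e.g. Mimikatz")
    p.add_argument("-o", help="tracking file (defaults to /tmp/<alert name>)")
    p.add_argument("-c", help="hostname taken from the alert (used with -T D)")
    p.add_argument("-S", help="Slack webhook URL (used with -T S)")
    p.add_argument("-t", type=int, default=5, help="number of hits required before alerting")
    args = p.parse_args()
    out = args.o or "/tmp/{}".format(args.a.lower())
    if args.T == "D":
        document(out, args.c)
    else:
        send(out, args.S, args.a, args.t)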

sudo chmod 755 /bin/py-alert.py

This script handles all of our logic and also sends our Slack Notification. Using the options, we can alert on any combination of events.

Add our individual rules to our alert rules directory.

Grab our rules off GitHub:

git clone https://github.com/jordanpotti/ElastAlertGrouper.git

Copy our rules into our ElastAlert rules directory:

sudo cp ElastAlertGrouper/alert_rules/* /etc/elastalert/alert_rules/

We now have 6 new rules in our rule directory. Each rule with a DLL name alerts when that given DLL is loaded.
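The rules themselves come from the repo we just cloned, so the YAML below is only a rough illustration of what one of the per-DLL rules looks like (the index name and the computer_name field are assumptions that depend on how your Sysmon logs are shipped):

es_host: localhost
es_port: 9200
name: "samlib"
index: winlogbeat-*
type: any
filter:
- query:
    query_string:
      query: "event_data.ImageLoaded: *samlib*"
alert:
- command
command: ["python3", "/bin/py-alert.py", "-T", "D", "-a", "Mimikatz", "-o", "/tmp/mimikatz", "-c", "%(computer_name)s"]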

As you can see, we have the typical alert rule options and we are querying for samlib in event_data.ImageLoaded. When this alert is tripped, it calls our python script with this command:

python3 /bin/py-alert.py -T D -a Mimikatz -o /tmp/mimikatz -c $ComputerName
-T is telling the script what action to take, in this case, we are just writing the hostname to a file so we want to use the ‘Document’ or D option.

-a is the alert type, in this case Mimikatz

-c is the hostname taken from the alert.

This is reflected across all of the DLL alerts, so when Mimikatz is run, the output file will have 5 hostnames written to it.

Now let’s take a look at our Mimikatz rule.

This alert uses the frequency rule type as well as a different index. ElastAlert has its own writeback index where it records information about every alert it evaluates, so we can check that index to see whether all five of the DLL alerts fired in less than one second. It does this by filtering for only the DLL rules, returning only those with the alert_sent flag set to true, and alerting only if it identifies 5 results within 1 second.
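Again, the actual rule ships with the repo; roughly, it looks something like this (the field names in the elastalert_status index are best-effort assumptions):

es_host: localhost
es_port: 9200
name: "Mimikatz"
type: frequency
index: elastalert_status
num_events: 5
timeframe:
  seconds: 1
filter:
- query:
    query_string:
      query: "rule_name: (cryptdll OR hid OR samlib OR vaultcli OR winscard) AND alert_sent: true"
alert:
- command
command: ["python3", "/bin/py-alert.py", "-T", "S", "-S", "SLACKWEBHOOK", "-a", "Mimikatz", "-t", "5"]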

You will need to generate a Slack webhook and replace ‘SLACKWEBHOOK’ with your webhook URL.

The alert is a command calling our Python script again:

python3 /bin/py-alert.py -T S -S SLACKWEBHOOK -a Mimikatz -t 5
-T is telling the script that we want to perform a ‘Send’ action.

-S needs to have our Slack Web Hook

-a tells the script what detection type we are alerting on

-t tells the script to only alert if there are 5 or more unique hostnames in our output file.

This last part is important. This number should always be the amount of rules that make up your grouping.

Run Mimikatz:

Profit:

One thing to keep in mind is that these have been tested in a lab environment with a small population of end points. Deploying this in production will likely involve major tuning.

Check out these labs: https://cyberwardog.blogspot.com/2017/02/setting-up-pentesting-i-mean-threat_98.html for an in-depth guide on how to set this stuff up manually as well as build the lab around it.

If you run into any issues, feel free to reach out to me on Twitter or by email!

The advice and scripts contained and referenced in this post are provided with no warranty. As always, never blindly trust scripts off the internet.

 

Using Elastic Curator To Clean Up ELK

I recently set up ELK in order to begin collecting logs from Sysmon for security monitoring in my lab. The problem I could foresee running into was disk space: unfortunately, when my ELK server runs out of space, it runs out of space. I needed a way to clean up the logs when the server began to reach a threshold, which led me to Elastic Curator. Curator allows us to manage our indices, including deleting indices older than a given number of days. However, that is just a one-off command we can run, so I wanted to add some logic to the process.

Disclaimer: This is strictly for one-node setups. If you have a large setup with multiple clusters, you will want a different solution. (You can still use this method, but you might run into some weird issues.)

Install Curator:

sudo pip install elasticsearch-curator

Create Curator Directory:

sudo mkdir /etc/curator

Create config file:

sudo nano /etc/curator/config.yml

Paste in:
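A standard single-node Curator config looks like this (the host and port assume Elasticsearch is local on 9200):

client:
  hosts:
    - 127.0.0.1
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']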

Create delete config action file:

sudo nano /etc/curator/delete-after.yml

Paste in:
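Roughly, the action file looks like the following; the pattern value (‘winlogbeat-’ here) is a placeholder for your own index prefix, and unit_count is what the cleanup script below adjusts:

actions:
  1:
    action: delete_indices
    description: "Delete indices older than the given number of days, based on index name."
    options:
      ignore_empty_list: True
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: winlogbeat-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 90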

Unless you followed this guide, you will most likely have to change some of the details in this action config file, namely the filters -> value field. You will need to put the name of your index there.

Create Script file:

sudo nano /etc/curator/cleanup.sh

Paste in:
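The original script isn’t reproduced here; the sketch below implements the logic described in the warning that follows (the curator path and thresholds are assumptions):

#!/bin/bash
# Delete the oldest indices until disk usage on / drops below the threshold.

THRESHOLD=80   # start deleting when / is more than 80% full
MIN_DAYS=2     # never delete the newest two days of indices
days=90        # start looking for indices 90 days back

usage() {
  df / | awk 'NR==2 {gsub("%",""); print $5}'
}

while [ "$(usage)" -gt "$THRESHOLD" ] && [ "$days" -ge "$MIN_DAYS" ]; do
  # Point the action file at the current cutoff and let Curator delete anything older
  sed -i "s/unit_count:.*/unit_count: $days/" /etc/curator/delete-after.yml
  /usr/local/bin/curator --config /etc/curator/config.yml /etc/curator/delete-after.yml
  days=$((days - 1))
done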

Warning: The above script checks that disk usage is less than 80 percent. It checks the free space of the root partition, or whatever drive is mounted on “/”. If the free space is less than 20 percent, it begins deleting the oldest indices; once it deletes the oldest one, it checks again and deletes the next oldest, until disk usage is below 80 percent. It stops deleting, regardless of disk space, once there are fewer than 2 days of indices left. Since I couldn’t find an easy way to simply delete the oldest indices, I start at 90 days ago and move forward until it starts to find indices; if you keep more than that, please adjust the script (the days variable near the top). You may also want more than 2 days as a safety net.

Add script to cron:

sudo su
(crontab -l 2>/dev/null; echo "5 0 * * * /etc/curator/cleanup.sh") | crontab -

This cron job will run the script at five minutes past midnight, every day.

Curator should be added to the HELK soon. If you aren’t aware of the HELK and want to get into threat hunting (or just want a super quick way to spin up an ELK stack), you should definitely look into it. The incredible HELK (Hunting, Elasticsearch, Logstash, Kibana) makes it as easy as running a script to set up a reliable ELK stack tailored for threat hunting.

Check out these labs: https://cyberwardog.blogspot.com/2017/02/setting-up-pentesting-i-mean-threat_98.html for an in-depth guide on how to set this stuff up manually as well as build the lab around it.

If you run into any issues, feel free to reach out to me on Twitter or by email!

The advice and scripts contained and referenced in this post are provided with no warranty. As always, never blindly trust scripts off the internet, let alone throw them into a cron job running as root.

 

Using ElastAlert to Help Automate Threat Hunting

I first want to say thanks to CyberWarDog for his fantastic lab walkthrough for setting up a Threat Hunting Lab. It is hands down the best guide I have read for getting started with Threat Hunting. I followed his guide and got my lab completely set up. I then decided that ElastAlert would be pretty nice for getting some of the highly likely IOCs sent off to a security team for further analysis. This post will guide you through setting up ElastAlert to get notifications when certain actions are logged.

This guide assumes you have gone through all parts of CyberWarDog’s tutorials: https://cyberwardog.blogspot.com/2017/02/setting-up-pentesting-i-mean-threat.html

Not required, but it also assumes that you have set up Enhanced Powershell Logging so that we can begin to capture useful PowerShell data. https://cyberwardog.blogspot.com/2017/06/enabling-enhanced-ps-logging-shipping.html

Also not required but useful for this guide: A Slack Channel.

Cool, ready to go?

  • SSH or Console into your Ubuntu server running your ELK stack.
  • Download Elastalert from Yelp’s GitHub.
git clone https://github.com/yelp/elastalert
  • Copy Elastalert to ‘/etc/’
sudo cp -r elastalert /etc/
  • Change directory into your new Elastalert directory.
cd /etc/elastalert
  • If not already installed, install pip.
sudo apt install python-pip
  • Install Elastalert
pip install elastalert
  • Install ElasticSearch-py
pip install "elasticsearch>=5.0.0"
  • Install dependencies:
pip install -r requirements.txt
  • Let’s make a directory for our Elastalert templates:
sudo mkdir templates
  • Change directory into our new templates directory
cd templates
  • Create a new template for monitoring commands executed:
sudo nano cmd_template.yaml

Paste:

es_host: localhost
es_port: 9200
name: "PLACEHOLDER"
index: winlogbeat-*
filter:
- terms:
    event_data.CommandLine: ["PLACEHOLDER"]
type: any
alert:
- slack
slack_webhook_url: "SLACK_WEB_HOOK"

es_host: This is the host your ELK stack is running on.

es_port: This is the port Elastic Search is listening on.

index: This is the index you setup with CyberWarDog’s blog.

filter: This tells ElastAlert to filter its search; in this case, we are filtering with ‘terms’ and we are looking for ‘event_data.CommandLine’ values equal to whatever we put in place of PLACEHOLDER.

type: This means that ElastAlert should alert on any matches that our filter hits. We could also set this type to alert on new values identified, a spike in certain logs, a lack of logs, and a bunch of other cool things.

alert: This tells ElastAlert how to alert you! There are a bunch of ways to get these alerts; I chose Slack for its simplicity to set up and use. For more options, visit: http://elastalert.readthedocs.io/en/latest/ruletypes.html#alerts

  • Create a new template for monitoring powershell commands executed:
sudo nano powershell_template.yaml

Paste:

es_host: localhost
es_port: 9200
name: "PLACEHOLDER"
index: winlogbeat-*
filter:
- terms:
    powershell.scriptblock.text: ["PLACEHOLDER"]
type: any
alert:
- slack
slack_webhook_url: "SLACK_WEB_HOOK"
  • Create your main config.yaml file.
cd ..
sudo nano config.yaml

Paste:

rules_folder: alert_rules
run_every:
    seconds: 30
buffer_time:
    seconds: 60
es_host: localhost
es_port: 9200
alert_time_limit:
    days: 1
writeback_index: elastalert_status
alert_text: "Username: {0} \nHost: {1} \nTime: {2} \nLog:{3}"
alert_text_type: alert_text_only
alert_text_args: ["user.name","host", "@timestamp","log_name"]

To change the body of the alert, you can modify the last three lines; you can add or remove attributes to include in your report. https://elastalert.readthedocs.io/en/latest/ruletypes.html#alert-content

  • Create our Rules directory:
sudo mkdir alert_rules
cd alert_rules
  • Copy our templates here:
sudo cp ../templates/* .
  • Make copies of our templates.
cp cmd_template.yaml cmd_whoami.yaml
cp powershell_template.yaml powershell_invoke_webrequest.yaml
  • Modify cmd_whoami.yaml to alert when whoami is executed.
sudo nano cmd_whoami.yaml
  • Replace the PLACEHOLDER text in both locations with ‘whoami’. You can also copy this file many times over to alert on multiple commands being run.
es_host: localhost
es_port: 9200
name: "whoami"
index: winlogbeat-*
filter:
- terms:
    event_data.CommandLine: ["whoami"]
type: any
alert:
- slack
slack_webhook_url: "SLACK_WEB_HOOK"
  • Modify powershell_invoke_webrequest.yaml to alert when Invoke-WebRequest is executed.
sudo nano powershell_invoke_webrequest.yaml
es_host: localhost
es_port: 9200
name: "invoke-webrequest"
index: winlogbeat-*
filter:
- terms:
    powershell.scriptblock.text: ["webrequest"]
type: any
alert:
- slack
slack_webhook_url: "SLACK_WEB_HOOK"

Note: only query lowercase terms (the analyzed fields are stored lowercase in Elasticsearch).

  • Remove the two template files we copied over:
sudo rm *template.yaml
  • Run elastalert-create-index and follow the prompts
elastalert-create-index

Remember: your host is localhost and your port is 9200. If you followed CyberWarDog’s guide, you also do not have authentication set up for Elasticsearch (you used nginx instead), so leave the username and password empty. You also don’t have SSL or TLS set up.

  • Change directory back to /etc/elastalert
cd /etc/elastalert
  • Run elastalert --verbose
elastalert --verbose
  • Go to your Windows machine running winlogbeat and open up your command prompt.
  • Enter whoami and monitor your slack.
whoami
  • Profit

Commands you may want to monitor for:

  • Whoami
  • Netstat
  • Wmic

PowerShell functions you may want to monitor for:

  • Invoke-WebRequest
  • Invoke-Obfuscation
  • Downloadstring
  • Invoke-ShellCommand

If you are going to take this Threat Hunting thing seriously, you will most likely want to add alerts for Spikes, Frequency, Cardinality and a billion other types of things that are good ideas to check for with any Production system.

For comments, questions, concerns you can reach me at Twitter or via Email

[UPDATE: Several issues fixed 12/26]

 

Honey Accounts

I recently saw a tweet mentioning the use of an AD account with the password in the description attribute and logon hours set to none. I can’t find that tweet anymore so I apologize for the lack of attribution.

The idea is that when someone does breach your network perimeter, some of the first steps in performing recon involve collecting information from Active Directory. During that recon, they stumble on a DA account called ‘helpdeskDA’. They even discover a password in the description! Well, this looks like an easy win and a critical finding. In order to figure out how to leverage this newfound user, the attacker attempts to RDP or use psexec to move to a higher value target. In doing so, AD checks the credentials and returns to the attacker that his newfound account is not allowed to log in during this time. Meanwhile, this login attempt has triggered an alert and is being investigated.

I personally love security tools that aren’t just blinky lights and can run natively on devices we already possess. I took a stab at setting this up in my test environment.

Setting up the AD Account:

  1. Create your new account in AD Users and Groups and add it to the Domain Administrators group.
  2. Set the user description to something believable, or just set the password in there and let the attacker make their own assumptions.
  3. Go to the Account tab and select Logon Hours…
  4. Set Logon Denied to 24×7


Group Policy:

There are a couple of Group Policy options that need to be enabled in order for this to work. Depending on your organization, these may already be configured.

  1. Edit the Default Domain Policy.
  2. Navigate to: Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Advanced Audit Policy Configuration -> Audit Policies -> Account Logon -> Audit Kerberos Authentication Services and set it to audit Failure events.


Don’t forget to update group policy on the client by running: gpupdate /force

Event Viewer:

Here is an example of what the event now looks like in Event Viewer.


And here is the XML view with more details; this is important for writing our event trigger in Task Scheduler. Note the event ID. A normal failed login is event ID 4625. However, since this won’t be a traditional failed login due to the logon hours, it falls under 4768, which indicates that a Kerberos authentication ticket was requested and failed for one of various reasons, in this case the logon hours restriction.


Task Scheduler:

Configuring Task Scheduler took a bit of time, due to the way the event is logged. Since it is not the traditional 4625 failed-login event, an event trigger based on the user did not work; I had to write a custom XML trigger rule to catch the username in this context.

  1. Open Task Scheduler and create a new task. Name it whatever you like and make sure to select ‘Run whether user is logged on or not’.


  2. Move to the Triggers tab and edit the trigger.


  3. Select New Event Filter and select the XML tab. Check Edit query manually. Modify the following query to fit your user and domain. Since we don’t know what username format the attacker will use, we need some OR statements in there to cover our bases. Typically, we could use the SID, but it is not logged in event 4768. You can also add granularity and only alert on certain events, but in this case, I thought it best to alert on any activity for this account, as sketched below.
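The exact query isn’t shown here, but modeled on the XPath used in the PowerShell script below, a custom filter looks roughly like this (ws_admin and SECURITY-DC are the lab account and domain; substitute your own, and add the UPN form, user@yourdomain, to cover that format as well):

<QueryList>
  <Query Id="0" Path="Security">
    <Select Path="Security">
      *[System[(EventID=4768)]]
      and
      *[EventData[(Data='ws_admin' or Data='SECURITY-DC\ws_admin')]]
    </Select>
  </Query>
</QueryList>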


  4. You should be able to save that and move to the Actions tab. You can configure whatever sort of action here that you like; I chose a PowerShell script that alerts various people and provides details about the event.


  5. Select Start the task only if computer is on AC power.


Powershell Script:

function sendMail{

    Write-Host "Sending Email"

    # SMTP server name
    $smtpServer = "smtpserver"

    # Creating a Mail object
    $msg = new-object Net.Mail.MailMessage

    # Creating SMTP server object
    $smtp = new-object Net.Mail.SmtpClient($smtpServer)

    $file = "C:\users\Administrator\Desktop\alert.txt"
    $att = new-object Net.Mail.Attachment($file)

    # Email structure
    $msg.From = "[email protected]"
    $msg.ReplyTo = "[email protected]"
    $msg.To.Add("[email protected]")
    $msg.To.Add("[email protected]")
    $msg.subject = "ALERT:Honey Account"
    $msg.body = "Honey Account Activity Detected"
    $msg.Attachments.Add($att)

    # Sending email
    $smtp.Send($msg)
}

function collectDetails{

    $attachment = Get-WinEvent -LogName "Security" -FilterXPath "*[EventData[(Data='[email protected]' or Data='ws_admin' or Data='SECURITY-DC\ws_admin')]]" | Format-List -Property * | Out-File C:\users\Administrator\Desktop\alert.txt
}

# Calling functions
collectDetails
sendMail

HTTP Security Headers

HTTP security headers seem to be findings on nearly every assessment I have been doing lately, so I decided to come up with some handy quick references for these headers in order to better understand them. HTTP response headers are a way for a server and client to exchange information; in this case, these headers are enforced to ensure that the client is protected from common client-side vulnerabilities.

Content-Security-Policy

Content-Security-Policy – CSP is an HTTP security header that allows a web application to specify the allowed sources for content that is loaded from a separate domain. Its main use is to prevent cross-site scripting (XSS) and other injection techniques that load malicious data from an external source. An example of a CSP that restricts the source for all scripts, images, and fonts would look like this:

Content-Security-Policy: default-src 'self'

If you want to allow scripts from other sources, you can simply append the trusted domain. It even accepts wildcards for specifying subdomains.

Content-Security-Policy: default-src 'self' *.jordanpotti.com

You can get more granular and specify img-src, media-src, child-src, and a host of other restrictions. You can also use report-uri in order to POST reports of policy violations to a URI of your choice. Here is an example of that:

Content-Security-Policy: default-src 'self' *.google.com ; report-uri https://errorcollector.jordanpotti.com/collection.php

Reporting these violations could give your developers diagnostic information as well as possibly alert you to a vulnerability or attempt to exploit a vulnerability.

Perhaps the most beneficial feature is CSP’s ability to prohibit inline JavaScript from executing. This prevents malicious code from being inserted into the page; the downside is that you now have to migrate all of your JavaScript to external resources. Using external resources is best practice, so doing this improves your security as well as your code cleanliness.

For more information on CSP, visit: https://content-security-policy.com/

Subresource Integrity:

Subresource Integrity – SRI is similar to CSP in that it applies a layer of security to the scripts and other external resources your application may use. SRI works by verifying the hash of an external resource before attempting to use it: the hash of the external data is placed in an ‘integrity’ attribute inside a <script> or <link> element, and if the external resource referenced by that element does not match the provided integrity hash value, the resource is not loaded.

<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>

You can also use Content-Security-Policy to require that all scripts run have Subresource Integrity set in order to run. Applying this to our last CSP policy, it would look like this:

Content-Security-Policy: default-src 'self' *.google.com ; report-uri https://errorcollector.jordanpotti.com/collection.php ; require-sri-for script;

For more information on SRI, visit: https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity

Cross-Origin Resource Sharing

CORS is a way for applications to access resources not stored locally. CORS aims at scaling back the restrictions of the same-origin policy in order to allow the legitimate sharing of web resources. In the above sample for SRI, we used crossorigin="anonymous", which specifies that the resource can be accessed without providing credentials.

The most basic CORS requests are simple: the client requests the resource and sends the typical headers plus an Origin header. The Origin header is similar to the Referer header except that it carries less information; for example, the Referer header states the entire path, while the Origin header only states the server name. When the server receives the CORS request, it verifies whether the Origin is allowed. If it is, Access-Control-Allow-Origin is set as a response header.

Access-Control-Allow-Origin: https://jordanpotti.com

When the browser receives the response, it verifies that the server name sent back in the new header matches the site the browser is visiting. Access-Control-Allow-Credentials also allows credentials to be passed with these requests in order to access protected content. Some sites allow all origins by specifying a wildcard ‘*’; this is a very dangerous practice and can open the door to a multitude of attacks.
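Putting that together, a minimal exchange looks like this (api.jordanpotti.com is just a made-up resource host for illustration):

GET /data HTTP/1.1
Host: api.jordanpotti.com
Origin: https://jordanpotti.com

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://jordanpotti.com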

To read more on the security of CORS, read this: http://blog.portswigger.net/2016/10/exploiting-cors-misconfigurations-for.html

For more information on CORS, visit: https://www.w3.org/TR/cors/

X-Frame-Options:

X-Frame-Options has a very specific purpose: it controls whether a page can be loaded inside a frame, preventing clickjacking via malicious iframes. Compared to other HTTP headers, it is fairly simple, with only three directives: DENY, SAMEORIGIN, and ALLOW-FROM ‘domain’. These are enforced on the source of <iframe>, <frame>, and <object> elements. The most common I have seen is SAMEORIGIN, which requires the source to come from the site’s own domain.

X-Frame-Options: SAMEORIGIN or X-Frame-Options: ALLOW-FROM https://jordanpotti.com

For more information on X-Frame-Options, visit: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options

X-XSS-Protection:

X-XSS-Protection is similar to X-Frame-Options in that it has a narrower scope than some other security headers. It can report violations, block a page from rendering if it detects an XSS attack, or report only and not block. It has two main directives: 0 and 1. 0 disables XSS filtering and 1 enables it. If set to 1 with no other options, the browser will sanitize the page by removing the detected XSS portions. Adding mode=block will prevent the page from loading at all if XSS is detected.

X-XSS-Protection: 1; mode=block

For more information on X-XSS-Protection, visit: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection

X-Content-Type-Options:

X-Content-Type-Options is an HTTP header that tells the browser not to override the declared response content type by sniffing the content. If the server specifies a content type such as text/html, the browser will handle it as that type. It only has one option, making it very easy to set up.

X-Content-Type-Options: nosniff

For more information on X-Content-Type-Options, visit: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options

HTTP Strict-Transport-Security

HSTS is a way for a server to tell the browser to always upgrade a connection to HTTPS. If set, any pages on that domain that only work over HTTP will no longer work.

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

max-age specifies how long the browser should respect this header, includeSubDomains covers all subdomains, and preload makes the site eligible to be added to browsers’ hard-coded HSTS lists. The preload list will enforce HTTPS on your site even after the HSTS header is removed.

Visit: https://hstspreload.org/ to add your site to the HSTS hard-coded list.

For more information on HSTS, visit: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security

 

To see how your site rates, check https://securityheaders.io/. Scott Helme also has a pretty informative post on these headers, so go check that out to learn more.

 

 

How I Gained Access to Nearly Half A Million College Transcripts

Last year I spent some time digging into bug bounty programs. Since I was quite new to the scene, I spent most of my time just figuring out how a lot of the tools worked, as well as figuring out a good process that worked for me. I spent loads of time in the Web Application Hacker’s Handbook and tried to figure out what looked normal, and what didn’t.

My first bug, however, wasn’t from a bug bounty. It was on a massive university system, which collectively has approximately 500,000 enrolled students at this time.

Now, I do not recommend going bug hunting on a system that isn’t part of a bounty program or that you do not have explicit authorization to test.

I came across this bug sort of by accident. While I was checking my academic record (my transcript), I noticed something strange: the URL had two suspicious parameters that seemed to be sequential and prefixed with the current day’s date. One was jobQSeqNo and the other was job_id. Out of curiosity, I tried to open the link in an incognito window, which stripped my cookies from the request. I could still see my transcript.

That seemed strange, since the URL did not provide any real obscurity. Using Burp Intruder, I cycled the numbers in the two identified parameters and started seeing hundreds and hundreds of transcripts. It turns out there was no authentication check on the page, and the URL parameters were easily guessable. I immediately contacted the security team for the system, and they responded and resolved the vulnerability quite quickly. Luckily they did not sue me, and my transcripts are safe now. Win, win. They were even okay with disclosure, which was quite surprising, so kudos to them.

Some of the information gleaned from someone’s transcript includes:

  • Student ID Number
  • Student Name
  • Degree
  • Courses Taken
  • Grades for Each Course
  • Total GPA

This information is protected by FERPA; in fact, at the bottom of each report, there was a FERPA statement.

Overall, there aren’t many technical details to include. It really consisted of some curiosity, Burp Intruder, and a list of 1000 sequential numbers.

Here are the Intruder results showing quite a few results: