<p><em>Jordan Potti, Security Things</em></p>
<h1>Career Mindset Spectrum</h1>
<p><em>2022-05-31</em></p>
<p>Career progression requires a shift of mindset. Having the wrong mindset at the wrong time isn’t necessarily bad; it just means that traditional career progression may not apply. Adopting the mindset of the next rung of an organization’s ladder will expedite your progress.</p>
<p><img src="https://raw.githubusercontent.com/pottijordan/pottijordan.github.io/master/images/2022/csm_1.png" alt="Career Mindset Spectrum" /></p>
<p>There is no ideal mindset. Each individual has different goals. If you would like to move up, then adopting that next mindset is important. If you are passionate about building widgets, and have no appetite for moving up, then you don’t necessarily need to adopt a different mindset.</p>
<p>As a hiring manager, you may want to hire specific mindsets for various levels of your organization. It doesn’t always make sense to expect individual contributors to have a strong alignment with the vision and mission of the organization.</p>
<p><img src="https://github.com/pottijordan/pottijordan.github.io/raw/master/images/2022/csm_2.webp" alt="r/comics - Why do you want to work for for our company ? [OC]" /></p>
<p>There are times when it suffices to have someone who is skilled or passionate in their profession and is happy to just get the job done.</p>
<p>This also explains why people change roles as they move around the corporate structure and sometimes move back into old roles, taking a figurative demotion. They’ve realized that they can’t adapt their mindset to fit the new role.</p>
<p>If the individual does not realize that they have not shifted their mindset, we get what is called the Peter Principle. This principle states that individuals are promoted to their level of incompetence. It often holds true: top performers get promoted on the assumption that they will also be top performers in the role they are promoted into.</p>
<p><img src="https://github.com/pottijordan/pottijordan.github.io/raw/master/images/2022/csm_3.png" alt="Career Mindset Spectrum - Peter Principle" /></p>
<p>Having a mindset too far ahead of your current role can lead to a few problems.</p>
<p>It may lead to frustration and burnout, because the individual contributor does not have all of the context for decisions being made at different levels of the business. They aren’t empowered to fulfill what drives them, whether it’s the mission of the business or the mission of their specific business unit.</p>
<p><img src="https://github.com/pottijordan/pottijordan.github.io/raw/master/images/2022/csm_4.png" alt="Career Mindset Spectrum - Mission Driven" /></p>
<p>There are several ways to alleviate that frustration if you find yourself in that position. Move your way up the corporate ladder, search for a new role that fits your mindset, or become a founder.</p>
<p>Be wary though: becoming a founder may seem like a fast track, but if you put the same amount of work into a career as it takes to start a business, you will surely be successful in that career.</p>
<p>One major caveat: the upside of becoming a founder is far higher than the upside of a traditional career.</p>
<p>If you’re a business owner or manager, you should recognize which mindset pertains to which employee and make sure that their desires are being fulfilled. That might mean communicating how their work impacts the mission of the organization, or it might mean making sure they aren’t sidetracked with anything but the work they want to do.</p>
<h1>Parenting Lessons for the Information Security Industry</h1>
<p><em>2021-12-06</em></p>
<p>In Peter Thiel’s book <em>Zero to One: Notes on Startups, or How to Build the Future</em>, he talks about the best interview question. The question is this: <strong>“What important truth do very few people agree with you on?”</strong></p>
<p>Something I’ve had a gut feeling about for a while without any justification is this: The information security industry is causing more harm than good.</p>
<p>This is a bit of a thought experiment but bear with me.</p>
<p>Okay, what if your information security organization decided to disband? Yep, no more security org, except for maybe compliance and some other folks that you’re contractually required to employ.</p>
<p>Now imagine if before you did this, you told everyone that security is up to them. Here are the standards that we as a company promise to our customers and stakeholders. Don’t mess it up. And also, we now have some extra budget you can put towards whatever you’d like.</p>
<p>What would happen?</p>
<p>I recently read <em>Parenting with Love & Logic</em> by Foster Cline and Jim Fay. The entire idea behind the book is to be the consultant parent rather than a “Helicopter Parent” or a “Drill Sergeant Parent”. Foster Cline and Jim Fay call this the Love and Logic Technique.</p>
<p>Let’s define those types of parents:</p>
<p>Helicopter parents make excuses for the child and then complain about mishandled responsibilities. They take responsibility for the child and make all the child’s decisions. The helicopter parent also uses words and actions that indicate that the child is not capable or responsible.</p>
<p>Drill Sergeant parents make many demands and have many expectations about responsibility. They tell the child how he or she should handle responsibility and provide absolutes: “This is the decision you should make!”. The drill sergeant parent demands that jobs or responsibilities get done now. Typically, the drill sergeant parent uses many harsh words, but very few actions.</p>
<p>Now, if we replace parent with security org and child with the business, we start to see some parallels.</p>
<p>If you’re still with me here, let’s go on to explain the “consultant parent”.</p>
<p>The consultant parent very rarely mentions responsibility. They provide alternatives and then allow the child to make his or her own decisions. They make sure the child owns the problem and help the child explore solutions to his or her own problems. The consultant parent uses many actions but few words and allows the child to experience life’s natural consequences.</p>
<p>Let’s replace parent with security org and child with the business. Now we are getting somewhere.</p>
<p>In the book “Parenting with Love & Logic”, one subject is dealing with a child’s report card. This has an excellent parallel with information security.</p>
<p>Foster remembers how his father handled his report cards. Every time Foster brought home a report card with bad grades, his father would ask him: “Are you proud of this?”. Foster remembers answering “No”. And this ritual continued throughout his school years. Had his father detected that Foster was okay with the poor grades, he likely would have been given all sorts of tutors and private schooling.</p>
<p><img src="https://paper-attachments.dropbox.com/s_FCC434BC4301E767805822D2FF67333EFBAED59761DEDFF46E8C24A0AD6E8806_1638849542409_image.png" alt="Graph adapted from Parenting with Love & Logic" /></p>
<p>Let’s be concerned when teams feel good about their poor security assessments. But otherwise, life is good.</p>
<p>So, instead of disbanding your security org, turn them all into consultants and stop mandating. Provide guidance but let teams own their security. Let them fail if that’s what it takes.</p>
<p>Also, singing the “uh-oh” song when someone gets breached probably won’t go over well, so not all of the lessons may apply.</p>
<h1>The Great Mental Models and Information Security</h1>
<p><em>2021-12-06</em></p>
<p>As a regular reader of the Farnam Street project, I’ve come to appreciate mental models and how they can help us make decisions. With regard to the information security industry, since we are still likely in the “pre-Galilean” period, it suffices to say that applying mental models to the way we think about the problems we face could greatly benefit us.</p>
<p>According to Farnam Street, <strong><em>“There is no system that can prepare us for all risks”</em></strong>, <strong><em>“But being able to draw on a repertoire of mental models can help us minimize risk by understanding the forces that are at play”.</em></strong></p>
<p>It’d be hard for anyone to argue that applying mental models to Information Security is a bad idea. Applying mental models to any industry or set of problems can only result in greater understanding. That said, I would go a step further: by <em>not</em> applying mental models to the Information Security industry, we may be causing more harm than good.</p>
<p>Galileo’s Ship is an analogy that explains that when we are part of a system, we have a hard time seeing new solutions and understanding problems. Picture this: you are a fish watching a ship moving at a constant velocity, and a scientist standing on the deck drops a ball. To the scientist, the ball simply drops straight down. To the fish, the ball moves horizontally at the speed of the ship as it falls.</p>
<p>As Information Security practitioners, we may not have a good understanding of the problems at hand. By leveraging mental models, we can think more clearly, and with fewer biases, about the problems we face.</p>
<p><strong>The Models:</strong></p>
<ol>
<li>The Map is Not the Territory</li>
<li>Circle of Competence</li>
<li>First Principles Thinking</li>
<li>Thought Experiment</li>
<li>Second-Order Thinking</li>
<li>Probabilistic Thinking</li>
<li>Inversion</li>
<li>Occam’s Razor</li>
<li>Hanlon’s Razor</li>
</ol>
<h2 id="the-map-is-not-the-territory"><a href="https://fs.blog/map-and-territory/"><strong>The Map is Not the Territory</strong></a></h2>
<p>A map shows us how to get from point A, to point B. It shows contour lines, roads, and many other static data points. And in physical navigation, this works well. Roads typically don’t change once built. New cliffs or rivers don’t appear overnight. However, it’s still not perfect. A map doesn’t show the weather that you will be traveling through, it doesn’t tell us the best shoes to wear for the terrain, or what tires work well in the mountain passes. It doesn’t show how much deadfall you’ll have to climb over if you’re hiking, or if a bear is on the trail. It’s simply static data that acts as an assistant.</p>
<p>Information Security has many such “maps”. Frameworks, career roadmaps, standards, and maturity models all provide map-like assistance in getting from point A to point B. However, each step in any given model could have its own map, and likely does: the array of footnotes for each step in implementing a framework. And those consultants or employees are the guides we hire to take us from point A to point B, map or no map.</p>
<p>This leads us to our next mental model. In this case, we need to hire a guide even though we have a map. It’s like climbing Mount Everest. You can Google the best route, and even obtain highly detailed maps which contain elevations, hazards, camp locations, and a myriad of other details. However, you likely still need to hire a guide: someone who knows the terrain, someone who’s considered a “local”, someone whose Circle of Competence includes climbing Mount Everest.</p>
<h2 id="circle-of-competence"><a href="https://fs.blog/circle-of-competence/"><strong>Circle of Competence</strong></a></h2>
<p>Your Circle of Competence is pretty simple. It’s what you’re good at, what’s familiar to you, what you can talk all day about. Ideally, it’s something you enjoy. That being said, no matter how good you are at something, nothing replaces experience. Building a Circle of Competence takes time and practice.</p>
<p>One thing that stuck with me was something one of my first bosses in information security told me after asking for a promotion I thought was well deserved. He said, “Yes, you do operate technically at a senior level, but nothing can replace cold hard experience, and that’s something you don’t have”. I have a feeling that many people in today’s culture will have a problem with that, but there is truth to it. There may be exceptions to that, but I wasn’t one of them.</p>
<p>Information Security cannot be your Circle of Competence.</p>
<p>It’d be like saying airplanes are your Circle of Competence. What kind? Flying them? Fixing them? Jets? Prop planes? Aerodynamics? Engineering? Warfare?</p>
<p>There is too much to learn, the domain is simply too wide.</p>
<p>So how do you find your Circle Of Competence? That’s sort of like asking “What do I want to be when I grow up?”. You find it by trial and error. Hopefully, educated trial and error. For example, if you don’t like heights, maybe don’t try and be a windmill repairman.</p>
<p>Once you’ve found something you love, double down on it. If you love web application security, or reverse engineering, or JavaScript, make that your own. After several years of focus, you will have an established Circle of Competence.</p>
<p>But be careful, this isn’t an end goal. A Circle of Competence isn’t something you just attain. You must continually improve on it. <a href="https://jamesclear.com/goldilocks-rule">The Goldilocks Rule</a> states that humans experience peak motivation when working on tasks that are right on the edge of their current abilities. So continue pushing the boundaries of your circle.</p>
<h2 id="first-principles-thinking"><a href="https://fs.blog/first-principles/"><strong>First Principles Thinking</strong></a></h2>
<p>Have you ever had a boss that wouldn’t buy into your ideas unless they saw it done before? There is a reason why social proof is such an effective sales strategy. <a href="https://www.negotiationtraining.com.au/articles/encourage-social-proof/">Social proof is one of the most powerful negotiating techniques</a>, however, you’ll <a href="https://fs.blog/mental-model-social-proof/">never innovate if it’s the only method by which you make decisions.</a></p>
<p>Reasoning with First Principles is a much better alternative. By unpacking the fundamental rules of whatever it is you’re considering, you open up countless novel paths.</p>
<p>Red Teaming, Vulnerability Management, Security Operations, GRC, Security Architecture: what do they all have in common? First Principles Thinking allows us to step back and consider the core reason why we are all here. Considering the “Five Whys” might help you get to the First Principles sooner.</p>
<p><strong>Why do we have a Red Team?</strong> To identify attack paths.<br />
<strong>Why do we need to identify attack paths?</strong> So that we can close them before an attacker exploits them.<br />
<strong>Why do we need to close the attack path before attackers exploit them?</strong> To prevent a breach.<br />
<strong>Why do we need to prevent a breach?</strong> To protect our IP/PII/PCI/etc.<br />
<strong>Why do we need to protect that data?</strong> To keep our customer’s trust and stay out of financial and legal trouble.</p>
<p>So according to the Five Whys, you could say a First Principle of InfoSec is revenue protection.</p>
<p>Another First Principle of InfoSec is that we are here to reduce the probability of breaches at a cost that the business is willing to pay. It’s pretty simple. While we all play different roles at that, we can’t lose sight of the First Principles.</p>
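That first principle reduces to simple arithmetic. The following is a toy sketch of the expected-loss reasoning behind “at a cost that the business is willing to pay”; every number here is invented for illustration, not drawn from real breach data.

```python
# A toy expected-loss calculation illustrating "reduce breach probability at a
# cost the business is willing to pay". All numbers are invented for illustration.
breach_probability = 0.10      # estimated annual probability of a breach
breach_impact = 2_000_000      # estimated loss if a breach happens

expected_annual_loss = breach_probability * breach_impact  # ~200,000 per year

# A control that halves the probability is worth paying for only if it
# costs less than the expected loss it removes.
control_cost = 60_000
residual_loss = (breach_probability / 2) * breach_impact   # ~100,000 per year
savings = expected_annual_loss - residual_loss             # ~100,000 per year

worth_it = control_cost < savings
print(worth_it)  # True under these made-up numbers
```

Real risk estimates are far fuzzier than this, of course, but the structure of the argument to leadership stays the same: spend is justified when it removes more expected loss than it costs.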
<p><strong><em>What are your own First Principles?</em></strong></p>
<p>Simon Sinek calls this finding your Why. Viktor Frankl called it logotherapy. It’s also important to note that the First Principle of why you do something is not always in line with why someone pays you to do it. You should be aware of both.</p>
<h2 id="thought-experiment"><a href="https://fs.blog/thought-experiment/"><strong>Thought Experiment</strong></a></h2>
<p>Nothing is original. However, just because something has been tried before, and failed, does not mean that it will fail if you try it. Cyber Security is not a “solved” industry. We need Thought Experiments, which inevitably lead to innovation.</p>
<p>John Kindervag, the pioneer of Zero Trust, was likely playing around with a Thought Experiment when he came up with the idea.</p>
<p>It could have gone something like this: “What would happen if, instead of trusting a user or device based on a static secret, we continually re-assessed our trust relationship? And used many different signals, not just a username and password.”</p>
<p>Before we can do anything with that thought experiment, we need to try and prove it wrong. Why is it possible? Why is it impossible?</p>
<p>John Kindervag likely decided that yes, this concept is possible. The technology exists to do this, and hence, Zero Trust was born.</p>
<h2 id="second-order-thinking"><a href="https://fs.blog/second-order-thinking"><strong>Second-Order Thinking</strong></a></h2>
<p>How many people do you know that owe payments on most of what they own? While it’s possible they’ve never considered the difference between an asset and a liability, it’s also likely that they’ve never thought beyond the monthly payment. Second-Order Thinking forces us to consider what comes after the initial result of a decision.</p>
<p>So consider what happens after you sign on the dotted line. Now you have a monthly payment, you need somewhere to store the thing, you need to maintain and winterize the thing, and instead of the desired result of having a fun toy, you only have less time and less money.</p>
<p>How many of us have written a report, or an email, or a Slack message in fist-meets-face mode? We’ve had enough; the security of this company will not suffer because this department refuses to comply. I bet many of us have been there, and I bet it often even works the first time.</p>
<p>Second-Order Thinking would have led us to consider: but then what?</p>
<p>What will that team do the next time they face a security decision? You’ve just incentivized them to minimize their relationship with security. The next time they have to make a security decision (or not), they may no longer plan on including security if they can help it.</p>
<h2 id="probabilistic-thinking"><a href="https://fs.blog/probabilistic-thinking/"><strong>Probabilistic Thinking</strong></a></h2>
<p>Risk is a big industry. It’s a subset of InfoSec as a whole, and yet, it <em>is</em> InfoSec. And no one really understands it. It might be because those that don’t understand it don’t think about it, and those that do think about it are thinking about it too hard.</p>
<p>InfoSec is full of fat tails. It’s impossible to guess when and how we will get breached, let alone, which vulnerabilities the threat will use. For example, if we consider the risk of a breach, how likely is it that a breach introduces an existential risk? Or how likely is it that a breach causes moderate financial impact?</p>
<p>Shane Parish with Farnam Street says it much better than I could:</p>
<blockquote>
<p>The important thing is not to sit down and imagine every possible scenario in the tail (by definition, it is impossible) but to deal with fat-tailed domains in the correct way: by positioning ourselves to survive or even benefit from the wildly unpredictable future, by being the only ones thinking correctly and planning for a world we don’t fully understand.</p>
</blockquote>
<p>I’ve written about this before, but I think it’s worth restating succinctly.</p>
<p>While we can’t predict accurately how likely a vulnerability will lead to a breach, we can continually plan and prepare for the likelihood that a breach will occur. As Shane Parish wrote, we need to be “positioning ourselves to survive”.</p>
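To see why fat tails resist point predictions, here is a small, purely illustrative simulation. The lognormal parameters are invented and stand in for any heavy-tailed loss distribution, not real breach data.

```python
import random

random.seed(7)

# Illustrative only: model annual breach losses as lognormal (heavy right tail).
# The parameters are made up for demonstration, not fit to real breach data.
def simulate_losses(n_years, mu=11.0, sigma=2.0):
    return [random.lognormvariate(mu, sigma) for _ in range(n_years)]

losses = sorted(simulate_losses(100_000))

mean_loss = sum(losses) / len(losses)
median_loss = losses[len(losses) // 2]
p99_loss = losses[int(len(losses) * 0.99)]

# In a fat-tailed domain the mean sits far above the median: a "typical" year
# looks mild, while a handful of rare years dominate the total loss.
print(f"median: {median_loss:,.0f}")
print(f"mean:   {mean_loss:,.0f}")
print(f"99th percentile: {p99_loss:,.0f}")
```

The median year looks manageable while the mean and the 99th percentile are dragged orders of magnitude higher, which is exactly why positioning to survive the tail beats trying to predict it.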
<p>On this subject, I’d love to see some research on metaprobability with regard to various risk scoring frameworks. My hypothesis is that it is not very accurate.</p>
<h2 id="inversion"><a href="https://fs.blog/inversion/"><strong>Inversion</strong></a></h2>
<p>This is similar to <a href="https://hbr.org/2007/09/performing-a-project-premortem">Premortem Analysis</a>.</p>
<p>Inversion lets us think hypothetically about exactly what we don’t want, and then consider how we got there. Let’s say you want to run a world-class security organization. Inversion doesn’t work very well at a high level, but if you break the problem down into a small set of problems, you can start to see its value.</p>
<p>Start with your goal: let’s say you want everyone in your organization to feel like they own security for their own domains. We know it’s not ideal when the organization relies solely on “the security team” for security ownership. Using Inversion, you can consider how you could get to a place you don’t want to be, and quickly.</p>
<p>To begin, you would make sure all security decisions are routed through your org. You would position yourself as the gatekeeper of all things security-related. You would set security policies that would be enforced regardless of business requirements. If various teams can’t fulfill some requirements, you would handle an exception process for them. When you do end up in meetings with other teams, you would be patronizing when security topics are discussed.</p>
<p>Now that we’ve identified the things you can do to get to an undesirable state, we now know what to avoid, or at least be careful with.</p>
<h2 id="occams-razor"><a href="https://fs.blog/occams-razor/"><strong>Occam’s Razor</strong></a></h2>
<p>Occam’s Razor argues that the simplest answer is usually correct. How will an attacker breach your organization? It’s unlikely to be a zero-day; it’ll likely be a spear phish, or an unpatched server, or some other obvious weakness. This leads us to the belief that doing the ordinary security work well will go much further than chasing the latest hotness.</p>
<p>Worrying about zero-days is a bit like running down the middle of a busy highway and being worried about getting struck by lightning.</p>
<p>People crave novelty. The latest zero-days and new TTPs get the most attention.</p>
<p>Many of those organizations worried about the latest threats aren’t doing the boring stuff yet. No whitelisting, no host firewalls, lacking asset management, and the list goes on and on.</p>
<p>Jim Alkove, the Chief Trust Officer at Salesforce, calls this “doing the common uncommonly well”.</p>
<h2 id="hanlons-razor"><a href="https://fs.blog/mental-model-hanlons-razor/"><strong>Hanlon’s Razor</strong></a></h2>
<p>You may have heard the saying “Stupidity is the same as evil if you judge by the results.” Hanlon’s Razor takes the results of something gone wrong and defaults to blaming them on stupidity rather than malice.</p>
<p>That organization that got breached probably wasn’t purposefully malicious. Those engineers that push terrible code aren’t evil. No one is (probably) out to get their customers. I think everyone can agree that we don’t want breaches to happen.</p>
<p>Hanlon’s Razor helps us understand that, more often than not, mistakes happen through innocent intentions.</p>
<p>As an information security professional, it’s your job to prevent those mistakes from happening, through education and mechanisms designed to reduce risk.</p>
<h1>Using Zero Days for Red Teams</h1>
<p><em>2021-11-05</em></p>
<h3 id="what-do-you-think-when-you-hear-the-term-zero-day"><strong>What do you think when you hear the term zero day?</strong></h3>
<p>Most of us think of the high-dollar zero days that organizations such as Zerodium peddle. These represent a minuscule share of the zero days that, one, currently exist and, two, are released every day.</p>
<p>FireEye classifies a zero day as “an unknown exploit in the wild that exposes a vulnerability in software or hardware”. The vast majority of software is not Microsoft Windows, macOS, Android, Chrome, or the litany of other popular software.</p>
<p>Pick any organization with at least several hundred employees. Chances are, they use obscure software, or internally developed software. And chances are, that software at some point, somewhere, sits on the perimeter.</p>
<p>A topic of contention in the Red Team community is whether or not a Red Team would “burn a 0-day”, or “drop tens or hundreds of thousands for a 0-day”, during an operation. If the understanding is that when we say zero days we mean the types of vulnerabilities that Zerodium is interested in, then we have a very valid argument. In fact, I would say you can count on a couple of hands the number of organizations that need that in their threat model. And even those organizations don’t necessarily need to burn a valuable zero day to simulate this type of attack.</p>
<p>Now, if we consider zero days through a wider lens, that is, as an unknown vulnerability in any piece of code in our environment, it seems like something we should really consider.</p>
<p>Does your blue team know how to investigate a compromise when the shell magically appeared on a server?</p>
<p>When software has a known vulnerability, we typically have published artifacts and IOC’s. When that piece of software has no known vulnerabilities, but was somehow compromised, does the blue team have processes to handle that?</p>
<p>As a Red Team, it is your responsibility to determine if using zero days can be a valuable training opportunity for your stakeholders. And in many cases, they don’t need a ton of R&D time. Spend some time on an internally developed application, or the software running some obscure physical security hardware, for example.</p>
<p>Training your blue team to be ready to face the unknown unknowns is what you are here for.</p>
<p>Spar with them, make them better.</p>
<p><a href="https://www.fireeye.com/current-threats/what-is-a-zero-day-exploit.html">https://www.fireeye.com/current-threats/what-is-a-zero-day-exploit.html</a></p>
<p><a href="https://debricked.com/blog/2019/06/17/vulnerabilities-in-dependencies/">https://debricked.com/blog/2019/06/17/vulnerabilities-in-dependencies/</a></p>
<h1>Determining Risk Less Badly</h1>
<p><em>2021-05-14</em></p>
<p><em>“Risk is a factor in decisions, as well as costs, interests, and even our ability to frame decisions around a risk.” - Ryan McGeehan</em></p>
<p>The sole reason for ranking risk is so that decision makers can use it as a factor in making some decision. If that risk ranking is considered bad or inflated, it won’t carry much weight.</p>
<hr />
<p>After an offensive security assessment, rank all the findings based on your gut. Now spend an hour or so per finding ranking them against the industry frameworks.</p>
<p>Did you learn anything? Chances are, most of the risk scores are largely the same as what you scored based on your gut feeling.</p>
<p>Vulnerability scoring using common frameworks doesn’t give us anything besides being able to back up our gut feelings about risk. Either way we get to the same conclusion, and arguably, both are inaccurate.</p>
<h2 id="so-how-should-we-rank-risk">So how should we rank risk?</h2>
<p>Most organizations perform annual risk assessments. These are how the business identifies risk and determines how to mitigate or reduce that risk. There is a good chance that Legal, Product, HR, Security and other leaders are part of that conversation.</p>
<p>Some common factors in these risk assessments include financial loss, employee dissatisfaction, customer loss, regulatory or compliance impact, and of course, likelihood. When we talk about risk to senior leadership, this is the language they speak. Coming out of a conversation with other leaders about all the threats facing the business, it’s no wonder the “critical” Windows Server vulnerability we reported to IT leadership gets disregarded.</p>
<p>When we start to rank our findings, we start to ask these types of questions depending on the scoring model we are using:</p>
<p><strong><em>How much money does it cost the company if the SRE team doesn’t turn on command line logging for our *nix infrastructure?</em></strong></p>
<p><strong><em>How many customers do we lose if IT doesn’t use a tiered approach with domain admin workstations?</em></strong></p>
<p><strong><em>How is employee morale impacted if those developers fail to remove credentials from that source code repository?</em></strong></p>
<p>None of those really make sense. Some of us start to chain findings together, or look at the impact of the operation itself, and try to factor those into the scores; overall, it begins to match our gut feelings about how bad these things are (which isn’t what we want).</p>
<p>When we begin sharing our findings and explaining the attack path, we get asked a set of common questions:</p>
<p><strong><em>“If that weakness wasn’t there, wouldn’t the rest be null?”</em></strong></p>
<p><strong><em>“If we fix the first thing on this path, would this other one still be high, or would it now be a medium?”</em></strong></p>
<p><strong><em>“What can we do now, that reduces the risk, but won’t take much work?”</em></strong></p>
<p>All of these questions stem from a poor presentation of what the offensive security operation did, and how bad the issues really are.</p>
<h2 id="a-new-approach">A new approach:</h2>
<p>This approach saves time, allows us to communicate with leaders better, and prioritizes disrupting the attacker’s ROI. We first rank the risk of the operation as a whole, and then prioritize each finding or recommendation based on the return on investment.</p>
<p><strong><em>Maintain focus on pragmatism by ensuring that defensive measures are likely to meaningfully disrupt the attacker value proposition of attacking you, increasing cost and friction on the attacker’s ability to successfully attack you. Evaluating how defensive measures would impact the adversary’s cost of attack provides both a healthy reminder to focus on the attackers perspective as well as a structured mechanism to compare the effectiveness of different mitigation options.</em></strong> - Microsoft Security Best Practices</p>
<p><img src="https://docs.microsoft.com/en-us/security/compass/media/privileged-access-strategy/balance-defender--and-attacker-cost.png" alt="Increase attack cost with minimal defense cost" /></p>
<ol>
<li>Use the enterprise risk model that is used by leadership to rank the risk of the <strong>outcome</strong> of the operation.
<ol>
<li>For example, ransomware getting deployed might be a critical risk, medium monetary risk, loss of customer confidence, employee morale, some regulatory issues maybe, etc.</li>
<li>A supply chain attack leading to malicious code on customer endpoints might only be a high risk if the operation proved that it’s likely only possible on a small subset of products or in edge cases.</li>
<li>Customer data breached could even be a medium risk if the operation focused on PII. As we know, these types of breaches don’t always result in a huge financial loss and are often buried in the news.</li>
</ol>
</li>
<li>
<p>Rank each recommendation based on <strong>ROI</strong>. A large return on investment would mean that the solution is relatively quick or low cost, while the cost to an attacker is vastly larger.</p>
<p><strong>Some examples of high ROI solutions:</strong><br />
1. Removing a credential from a public GitHub repository and rolling the credential<br />
2. Shutting down an exposed script console on a production Jenkins server<br />
3. Patching a WordPress vulnerability on a public server with an available exploit</p>
<p><strong>Examples of also important but lower ROI solutions:</strong><br />
1. Implementing MFA on email internally<br />
2. Removing credentials from internal source code repositories and leveraging an enterprise password storage solution<br />
3. Restricting workstations from communicating with one another</p>
</li>
</ol>
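<p>As an illustration, the ranking step can be reduced to a simple comparison of defender cost against attacker cost. This is a hedged sketch, not anything from the original methodology; the cost figures and the <code class="language-plaintext highlighter-rouge">Finding</code>/<code class="language-plaintext highlighter-rouge">stack_rank</code> names are invented, and in practice the numbers come from your own estimates.</p>

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    defender_cost: float   # rough effort to fix, e.g. person-days (illustrative)
    attacker_cost: float   # rough cost/friction imposed on the attacker (illustrative)

    @property
    def roi(self) -> float:
        # Large ROI = cheap for the defender, expensive for the attacker
        return self.attacker_cost / self.defender_cost

def stack_rank(findings):
    """Order recommendations by ROI, highest first, with no per-finding risk score."""
    return sorted(findings, key=lambda f: f.roi, reverse=True)

findings = [
    Finding("Roll credential leaked in public GitHub repo", 1, 500),
    Finding("Implement MFA on internal email", 90, 900),
    Finding("Patch exploitable WordPress plugin on public server", 2, 400),
]
for f in stack_rank(findings):
    print(f"{f.roi:>6.1f}  {f.title}")
```

<p>With these made-up numbers, the quick credential roll outranks the months-long MFA rollout even though both matter, which is exactly the ordering the approach is after.</p>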
<p>At this point, you know the actual business risk of the operation, and you can explain to leadership why it’s critical. You also have a list of findings, and they are stack ranked by ROI without an individual risk score attached to them.</p>
<p><strong><em>This operation demonstrated the ability for a threat actor to infect all eighteen million subscribers to the Colorful Widgets Unlimited software. This would likely result in moderate financial impact, a critical security impact to Colorful Software Inc and its customers, as well as a high likelihood of major regulatory and compliance impact.</em></strong></p>
<p>Once you capture leadership’s attention regarding the total risk, the individual risk of each finding isn’t so important. The high ROI findings get fixed quickly to stop the bleeding, while the lower ROI recommendations are picked off as longer projects.</p>
<h2 id="does-it-work-better">Does it work better?</h2>
<p>The entire goal of having these types of discussions is to influence decisions that reduce risk. If we can reduce the paradox of choice for decision makers by having pragmatic risk discussions, it should ultimately lead to better decisions.</p>
<p><em>Thanks to <a href="https://twitter.com/magoo">Ryan McGeehan</a> for his feedback on this article</em></p>jp“Risk is a factor in decisions, as well as costs, interests, and even our ability to frame decisions around a risk.” - Ryan McGeehanForeScout Secure Connector Local Privilege Escalation2021-03-30T20:37:56+00:002021-03-30T20:37:56+00:00https://jordanpotti.com/2021/03/30/forescout-priv-esc-folder-permissions<p><strong>Application</strong>: ForeScout CounterACT Secure Connector<br />
<strong>Operating System tested on</strong>: Windows 10 1809 (x64)<br />
<strong>Vulnerability</strong>: ForeScout CounterACT SecureConnector Local Privilege Escalation through Insecure Folder Permissions</p>
<p><strong>Overview:</strong><br />
This vulnerability exists due to the permissions set on the logs directory used by the ForeScout SecureConnector application. Every several seconds, a new log entry is placed into <code class="language-plaintext highlighter-rouge">c:\ProgramData\ForeScout SecureConnector\Logs\sc.log</code>. The Logs directory, as well as the file <code class="language-plaintext highlighter-rouge">sc.log</code>, allows Everyone Full Control.</p>
<p>Due to this, a low privileged user can create a symbolic link in the sc.log location, and point it at a privileged location such as <code class="language-plaintext highlighter-rouge">c:\Windows\System32</code>. The log entries will be created in a file at the receiving end of the symbolic link. By setting the receiving end of the symbolic link as a valid DLL in that location, the DLL is overwritten with the log file, and the permissions of the “log” file allow Everyone Full Control. At that point, the DLL can be overwritten with a malicious DLL to gain privileged code execution.</p>
<p><strong>Walkthrough:</strong><br />
While searching for file operation vulnerabilities, I came across the ForeScout SecureConnector making a <code class="language-plaintext highlighter-rouge">CreateFile</code> call as <code class="language-plaintext highlighter-rouge">NT AUTHORITY\SYSTEM</code>, in the ProgramData directory. This was discovered using Process Monitor.</p>
<p><img src="https://paper-attachments.dropbox.com/s_8B38E0E6030580BF6375E7AE440D804CD30147AEB058F32A7BED213B44E75B20_1617115494293_image.png" alt="" /></p>
<p>In order to determine if this was a vulnerability, I used PowerShell’s <code class="language-plaintext highlighter-rouge">get-acl</code> function to determine the file permissions of the parent directory. As shown below, the <code class="language-plaintext highlighter-rouge">Everyone</code> group has <code class="language-plaintext highlighter-rouge">FullControl</code>.</p>
<p><img src="https://paper-attachments.dropbox.com/s_8B38E0E6030580BF6375E7AE440D804CD30147AEB058F32A7BED213B44E75B20_1617116019383_image.png" alt="" /></p>
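<p>The same check can be scripted when hunting for weak ACLs at scale. The sketch below parses <code class="language-plaintext highlighter-rouge">icacls</code>-style output as plain text rather than calling Windows APIs, so it runs anywhere; the sample output format is an assumption based on what <code class="language-plaintext highlighter-rouge">icacls</code> typically prints, and the function name is mine.</p>

```python
import re

# Matches entries like "Everyone:(OI)(CI)(F)" -- Full Control granted to Everyone,
# optionally with inherit flags such as (OI)/(CI) before the (F).
EVERYONE_FULL = re.compile(r"Everyone:(?:\([A-Z]+\))*\(F\)")

def everyone_has_full_control(icacls_output: str) -> bool:
    """Return True if any ACE in the icacls output grants Everyone Full Control."""
    return bool(EVERYONE_FULL.search(icacls_output))

sample = r"""C:\ProgramData\ForeScout SecureConnector\Logs Everyone:(OI)(CI)(F)
                                               NT AUTHORITY\SYSTEM:(OI)(CI)(F)"""
print(everyone_has_full_control(sample))  # True: a low privileged user can tamper here
```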
<p>In order to exploit this as a low privileged user, we first need to create an NTFS junction pointing to a writable Object Manager directory. <code class="language-plaintext highlighter-rouge">\RPC Control\</code> is commonly used since it is writable by everyone. In order to create a directory junction, the source directory must be empty. For some reason, I couldn’t create the junction even after emptying the Logs directory. By deleting the Logs directory altogether, the junction was finally able to be created.</p>
<p>Using <code class="language-plaintext highlighter-rouge">mklink /J</code>, we are able to create our junction pointing to RPC Control.</p>
<p><img src="https://paper-attachments.dropbox.com/s_8B38E0E6030580BF6375E7AE440D804CD30147AEB058F32A7BED213B44E75B20_1617116051475_image.png" alt="" /></p>
<p>Once the junction is created, it’s now possible to create a symbolic link pointing to our target directory with the file name of our choosing.</p>
<p>I used CreateSymLink.exe from James Forshaw’s <a href="https://github.com/googleprojectzero/symboliclink-testing-tools">Symbolic Link Testing Tools</a> repo. For the target, I used <code class="language-plaintext highlighter-rouge">ualapi.dll</code> (from <a href="https://enigma0x3.net/2019/07/24/cve-2019-13382-privilege-escalation-in-snagit/">https://enigma0x3.net/2019/07/24/cve-2019-13382-privilege-escalation-in-snagit/</a>).</p>
<p><img src="https://paper-attachments.dropbox.com/s_8B38E0E6030580BF6375E7AE440D804CD30147AEB058F32A7BED213B44E75B20_1617116096762_image.png" alt="" /></p>
<p>As we can now see, the <code class="language-plaintext highlighter-rouge">ualapi.dll</code> file located in <code class="language-plaintext highlighter-rouge">C:\Windows\system32</code> is now being populated with the log entries from ForeScout.</p>
<p><img src="https://paper-attachments.dropbox.com/s_8B38E0E6030580BF6375E7AE440D804CD30147AEB058F32A7BED213B44E75B20_1617116131903_image.png" alt="" /></p>
<p>This alone still isn’t enough to do anything besides possibly a DoS; we now need to replace <code class="language-plaintext highlighter-rouge">ualapi.dll</code> with a malicious DLL. The file <code class="language-plaintext highlighter-rouge">ualapi.dll</code> in this case does not inherit the permissions from the parent directory (system32), but instead inherits the permissions from the source, in this case our symlink, which was created by our low privileged user.</p>
<p><img src="https://paper-attachments.dropbox.com/s_8B38E0E6030580BF6375E7AE440D804CD30147AEB058F32A7BED213B44E75B20_1617115566279_image.png" alt="" /></p>
<p><code class="language-plaintext highlighter-rouge">ualapi.dll</code> is used by the Spooler service, and although it typically isn’t located in <code class="language-plaintext highlighter-rouge">c:\windows\system32</code>, due to the DLL search order, <code class="language-plaintext highlighter-rouge">c:\windows\system32</code> will be checked before finding the actual ualapi.dll, therefore executing our malicious DLL.</p>
<p>Currently, the DLL at the target location is simply a log file; we need to replace it with something of our own. Since the permissions allow for that, we can copy over an actual malicious DLL.</p>
<p><img src="https://paper-attachments.dropbox.com/s_8B38E0E6030580BF6375E7AE440D804CD30147AEB058F32A7BED213B44E75B20_1617116171116_image.png" alt="" /></p>
<p>We can either reboot, or wait for the spooler service to restart in order to validate the DLL execution. The DLL I placed there echoed the current user to C:\, as shown below, we now have code execution as System.</p>
<p><img src="https://paper-attachments.dropbox.com/s_8B38E0E6030580BF6375E7AE440D804CD30147AEB058F32A7BED213B44E75B20_1617115585699_image.png" alt="" /></p>
<p>This vulnerability was fixed by ForeScout in November 2020 in CounterACT version 8.1.4.</p>jpServiceNow - HelpTheHelpDesk And The Hackers2021-02-21T20:37:56+00:002021-02-21T20:37:56+00:00https://jordanpotti.com/2021/02/21/ServiceNow-HelpTheHelpDeskAndTheHackers<p><strong>tldr; ServiceNow had a feature that exposed credentials to hundreds (if not thousands) of their customers’ ServiceNow instances. These credentials varied from limited permissions to full administrative access to the ServiceNow instance. The vulnerability was patched on October 8th, 2020.</strong></p>
<p>ServiceNow has a feature that, when configured, allows ServiceNow customers to collect information from their employees’ or customers’ endpoints. This comes in the form of a WMI script that collects system information and ships it off to ServiceNow.</p>
<p>This sounds great, except for how ServiceNow decided to authenticate those requests to submit system information. Curiously, the credentials for the request were stored in a public JavaScript file on all ServiceNow instances utilizing the HelpTheHelpDesk feature.</p>
<p>The JavaScript file can be accessed at <code class="language-plaintext highlighter-rouge">https://&lt;customername&gt;.servicenow.com/HelpTheHelpDesk.jsdbx</code>. The credentials are at the top of the script for anyone’s viewing pleasure. How this hadn’t been found before is interesting.</p>
<p>Fortunately, the password was encrypted… Unfortunately, the password was actually base64 encoded, and not encrypted as the JS file tried to convince us by prepending the base64 encoded password with <code class="language-plaintext highlighter-rouge">encrypt:</code>.</p>
<p>Since determining if a host is exposing credentials was done with a simple GET request, it was a trivial process to determine the breadth of the issue. Using some open source reconnaissance, a list of ServiceNow subdomains was collected and each one was issued a request for the HelpTheHelpDesk script. If the <code class="language-plaintext highlighter-rouge">httpUsername</code> and <code class="language-plaintext highlighter-rouge">httpPassword</code> values were filled, the request was logged.</p>
<p>This led to quickly finding over 600 enterprises, government agencies and universities exposing credentials for their ServiceNow instances.</p>
<p><img src="https://paper-attachments.dropbox.com/s_950FE6053E05F52791508FF8799E8027016BE6B9BAB57C646249306A732842EE_1613888606369_image.png" alt="" /></p>
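<p>A minimal version of that check might look like the following. The regex assumes the credentials appear as simple <code class="language-plaintext highlighter-rouge">httpUsername = "…"</code> assignments in the script, which is an assumption on my part; only the variable names and the fake <code class="language-plaintext highlighter-rouge">encrypt:</code> base64 prefix come from the write-up above.</p>

```python
import base64
import re

# Assumed assignment format; the real jsdbx layout may differ.
CRED_RE = re.compile(r'(httpUsername|httpPassword)\s*[:=]\s*["\']([^"\']+)["\']')

def extract_credentials(js_body: str):
    """Pull httpUsername/httpPassword values out of a HelpTheHelpDesk.jsdbx body."""
    creds = dict(CRED_RE.findall(js_body))
    password = creds.get("httpPassword", "")
    if password.startswith("encrypt:"):
        # "Encrypted" is really just base64 behind an encrypt: prefix.
        creds["httpPassword"] = base64.b64decode(password[len("encrypt:"):]).decode()
    # Only log hosts where both values are populated.
    return creds if creds.get("httpUsername") and creds.get("httpPassword") else None

sample_js = 'var httpUsername = "sn_admin";\nvar httpPassword = "encrypt:cGFzc3dvcmQx";'
print(extract_credentials(sample_js))
```

<p>Run against each collected subdomain’s script, anything that returns a non-empty result gets logged, which is all the breadth measurement required.</p>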
<p><strong>So what do these credentials get us?</strong></p>
<p>The documentation describes a process that provides only the required unprivileged role to the SOAP HelpTheHelpDesk user.</p>
<p><img src="https://paper-attachments.dropbox.com/s_950FE6053E05F52791508FF8799E8027016BE6B9BAB57C646249306A732842EE_1613889501817_image.png" alt="" /></p>
<p>Unfortunately again, many customers just used their ServiceNow administrator credentials.</p>
<p>Usernames such as <code class="language-plaintext highlighter-rouge">sn_admin</code>, <code class="language-plaintext highlighter-rouge">admin</code>, and <code class="language-plaintext highlighter-rouge">servicenow-admin</code> were plentiful, and in more than one case, credentials provided full admin access to ServiceNow instances used by global companies with bug bounty programs.</p>
<p>Administrative access to a ServiceNow instance provides a smattering of varying access. You get access to customer support tickets, employee data, internal documentation, internal IT tickets and internal HR tickets. Other ServiceNow features can even provide command execution on servers and workstations enrolled in various ServiceNow integrations. Given the amount of information and access ServiceNow has in many environments, this can lead directly to entire environment compromise.</p>
<p>Due to the vast exposure, this vulnerability (feature?) was reported to ServiceNow and a patch was issued shortly after, closing up the massive hole in a matter of months.</p>
<h3 id="timeline"><strong>Timeline:</strong></h3>
<p><strong>Discovered:</strong> August 15th, 2020<br />
<strong>Reported to ServiceNow:</strong> August 20th, 2020<br />
<strong>Response from ServiceNow:</strong> August 21st, 2020<br />
<strong>Patch Released:</strong> October 8th, 2020<br />
<strong>Public Disclosure:</strong> February 22nd, 2021</p>jpMeasuring Your Red Team2020-11-23T20:37:56+00:002020-11-23T20:37:56+00:00https://jordanpotti.com/2020/11/23/measuring-your-red-team<p>How do you measure your Red Team?</p>
<p>One of the primary differences between a Red Team and a Penetration Testing team is the primary stakeholder. With a Red Team, the primary stakeholders are those responsible for your detection and response capabilities. That being said, a side effect of Red Team operations is uncovering vulnerabilities and weaknesses owned by, let’s say, IT.</p>
<p>So based on those generic statements, we can begin to consider how we might be able to determine if we, as a Red Team, are uplifting the security of the organization.</p>
<p>Let’s start with some easy-to-come-by metrics:</p>
<ul>
<li>Vulnerabilities discovered/remediated</li>
<li>Success rate of phishing emails</li>
<li>Number of “operations” - categorized by operation type</li>
</ul>
<p>Those metrics are nice, but they don’t address our primary stakeholders. Having a strong understanding of the blue team’s weaknesses and strengths, as well as how the blue team measures its own performance, will help the Red Team determine which metrics are important to gather.</p>
<p>Some of the basics are:</p>
<ul>
<li><strong>Mean time to detect</strong> - Shows improvement in detection capabilities</li>
<li><strong>Mean time to respond</strong> - Shows improvement in response capabilities</li>
<li><strong>Mean time to initial access</strong> - Shows improvement in perimeter security/phishing controls</li>
<li><strong>Mean time to act on objectives from beginning of op</strong> - Shows increase in defense posture holistically</li>
<li><strong>Mean time from reporting, to detection/control in place</strong> - Shows improvements in detection engineering capabilities</li>
<li><strong>Number of detections trending with Red Team activity</strong> - Shows improvements in detection and response capabilities based on Red Team activity</li>
</ul>
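<p>Given raw event records, the first two metrics above are just averages of time deltas. A minimal sketch, with invented event data and field names of my own choosing:</p>

```python
from datetime import datetime
from statistics import mean

def mean_delta_minutes(events, start_key, end_key):
    """Average gap in minutes between two timestamps across ops (e.g. action -> detection)."""
    deltas = [
        (e[end_key] - e[start_key]).total_seconds() / 60
        for e in events
        if e.get(end_key)  # skip actions that were never detected / responded to
    ]
    return mean(deltas) if deltas else None

ts = datetime.fromisoformat
events = [
    {"action": ts("2020-11-01T10:00"), "detected": ts("2020-11-01T10:30"), "responded": ts("2020-11-01T11:00")},
    {"action": ts("2020-11-02T09:00"), "detected": ts("2020-11-02T09:10"), "responded": None},
]
print("MTTD (min):", mean_delta_minutes(events, "action", "detected"))
print("MTTR (min):", mean_delta_minutes(events, "action", "responded"))
```

<p>Skipping the never-detected actions is a deliberate choice here; you would track those separately, since a shrinking undetected count is itself one of the improvement signals above.</p>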
<p>Now that we have a good start on understanding what needs to be tracked at the macro level, we need solid tracking of the micro events. One way to do this is to have a spreadsheet per operation that tracks all the relevant details of each action performed:</p>
<ul>
<li><strong>TTP</strong> - Map to the Mitre ID; this allows a heat map to be created that identifies areas of strength and weakness in detection capabilities.</li>
<li><strong>Kill Chain Step</strong> - Similar to the above, this lets us see the strengths and weaknesses of our detection and response capabilities</li>
<li><strong>Hostname</strong></li>
<li><strong>Host type</strong> - Allows us to see which OSes have various strengths and weaknesses</li>
<li><strong>Type of detection expected/lacking</strong> - Allows us to map which controls are performing well, or under performing.</li>
<li><strong>Action timestamp</strong> - Make sure this is in a timestamp format compatible with the SOC’s SIEM or standard.</li>
<li><strong>Detection timestamp</strong> - Make sure this is in a timestamp format compatible with the SOC’s SIEM or standard.</li>
<li><strong>Response timestamp</strong> - Make sure this is in a timestamp format compatible with the SOC’s SIEM or standard.</li>
<li><strong>Business Unit</strong> - Allows us to see which business units are lacking, and can identify business units that might need more collaboration with the security org.</li>
<li><strong>Whether or not detection was intentional</strong> - This allows us to include/exclude actions that may have intentionally alerted the defense team; it also allows us to see where the Red Team needs to improve preparation if we are generating alerts in places we don’t want.</li>
</ul>
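<p>One record per action with the fields above is enough raw material for the heat map. Counting detections per technique ID shows where the blue team sees you and where it doesn’t; the record layout and sample technique IDs below are illustrative, not a prescribed schema.</p>

```python
from collections import Counter

# One row per Red Team action, trimmed to the fields the heat map needs.
actions = [
    {"ttp": "T1059.001", "host": "WKSTN-12", "detected": True},
    {"ttp": "T1059.001", "host": "WKSTN-40", "detected": False},
    {"ttp": "T1003.001", "host": "DC-01",    "detected": True},
    {"ttp": "T1021.002", "host": "SRV-07",   "detected": False},
]

def detection_heatmap(actions):
    """Per-technique detection rate: the raw material for a Mitre heat map."""
    attempts, detections = Counter(), Counter()
    for a in actions:
        attempts[a["ttp"]] += 1
        detections[a["ttp"]] += a["detected"]  # bool counts as 0/1
    return {ttp: detections[ttp] / n for ttp, n in attempts.items()}

for ttp, rate in sorted(detection_heatmap(actions).items()):
    print(f"{ttp}: {rate:.0%} detected")
```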
<p>Leveraging the Mitre Attack framework allows the Red Team to begin to see the strengths and weaknesses of the blue team, which allows for intentional operations maximizing value add.</p>
<p>For an example of a spreadsheet that tracks much of this data, check out Cedric Owens’ GitHub project: <a href="https://github.com/cedowens/Rolling_Op_Metrics">https://github.com/cedowens/Rolling_Op_Metrics</a></p>
<p>References:</p>
<p><a href="https://medium.com/red-teaming-with-a-blue-team-mentaility/helpful-red-team-operation-metrics-fabe5e74c4ac">https://medium.com/red-teaming-with-a-blue-team-mentaility/helpful-red-team-operation-metrics-fabe5e74c4ac</a></p>
<p><a href="https://medium.com/starting-up-security/measuring-a-red-team-or-penetration-test-44ea373e5089">https://medium.com/starting-up-security/measuring-a-red-team-or-penetration-test-44ea373e5089</a></p>jpServerless Authentication FTW2020-09-28T20:37:56+00:002020-09-28T20:37:56+00:00https://jordanpotti.com/2020/09/28/serverless-authentication-FTW<p>Many applications you find on GitHub for one-off tasks or simple automation don’t have built-in authentication. Typically, I just run them on localhost and port forward, or run the application locally. This can be a pain and doesn’t scale very well.</p>
<p>With AWS Application Load Balancing, and AWS Cognito, we can control access to applications using Cognito’s built in user directory services, and AWS’s Application Load Balancing conditional forwarding.</p>
<p>This works like this:</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1601310296880_image.png" alt="" /></p>
<p>Before we begin, I will assume you have the following prerequisites.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>1. A VPC configured
2. 2 Subnets in different Zones
3. An Application you want to protect
4. A domain name
</code></pre></div></div>
<p>To begin lets navigate to AWS Cognito, and add our User Pools.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1600982784985_image.png" alt="" /></p>
<p>From there, select <strong>Create A User Pool.</strong></p>
<p>I will name it RT-Metrics, since that is the name of the application I am going to be using.</p>
<p>Then select <strong>Review Defaults</strong>. You can customize your settings here; the only setting I changed was the one that prevents users from signing themselves up.</p>
<p>Before moving on, choose <strong>Add an App Client</strong>. Once again, I will name this the name of my application. Then check <strong>Enable Username Based Authentication</strong>.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1600983057804_image.png" alt="" /></p>
<p>Now choose <strong>Create app client → Return to pool details → Create Pool</strong>.</p>
<p>Congrats! You have just completed your first steps to doing this thing!</p>
<p>Now, go to the left hand side and select <strong>App client settings</strong>.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1600983148426_image.png" alt="" /></p>
<p>For the next part you need to perform a couple changes:</p>
<ol>
<li>Select <strong>Cognito User Pool for Enabled Identity Providers</strong></li>
<li>Set the <strong>Callback URL</strong> to <code class="language-plaintext highlighter-rouge">https://&lt;yourdomain&gt;/oauth2/idpresponse</code></li>
<li>Select <strong>Authorization code grant</strong> for the <strong>Allowed OAuth Flows</strong></li>
<li>Select <strong>email</strong> and <strong>openid</strong> for the <strong>Allowed OAuth Scopes</strong>
<img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1600983413290_image.png" alt="" /></li>
</ol>
<p>Now select <strong>Choose Domain Name</strong>. You can use the AWS provided one, or choose your own. In this case, I am going to go with AWS provided domain for ease of setup.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1600983613313_image.png" alt="" /></p>
<p>One last thing needs to be done to wrap up the Cognito setup, users. To create a user, head to the left hand side and select <strong>Users and Groups → Create User</strong>. And complete the user details.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1600983810022_image.png" alt="" /></p>
<p>Now we need to configure the AWS Load Balancer.</p>
<p>First, go to the EC2 page, and on the bottom of the left hand side, you should be able to choose <strong>Load Balancers</strong>.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1600983980810_image.png" alt="" /></p>
<p>After you select <strong>Create Load Balancer,</strong> you are given a couple options, choose the first one; <strong>Application Load Balancer.</strong></p>
<p>Give your Load Balancer a name, and add a new Listener on port 443. You also need to choose at least two of the subnets. Remember, these subnets need to be in different availability zones.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1600984142966_image.png" alt="" /></p>
<p>Select <strong>Next: Configure Security Settings</strong>.</p>
<p>If you already have a cert, you will need to upload it here, or choose it from Amazon’s Certificate Manager. Since I don’t have one yet, let’s go through the motions.</p>
<p>Select <strong>Request a new certificate from ACM</strong>.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1600984284009_image.png" alt="" /></p>
<p>From the Certificate Manager page, add the names for the domain(s) you own. I chose to add a wildcard domain so that I don’t have to do this again for other apps I configure.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1600984427073_image.png" alt="" /></p>
<p>Choose your validation method. I chose email, but if you don’t have access to one of those email addresses, you can go with DNS; that option will require you to add a CNAME record to validate ownership.</p>
<p>Once you get the email, you can follow the link to approve the certificate issuance.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1600990067529_image.png" alt="" /></p>
<p>Once that is done, back in ACM, you should have a valid certificate.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1600990168039_image.png" alt="" /></p>
<p>Back on the Load Balancer page, select your new certificate. You may need to click the green refresh button next to the cert dropdown.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1600990246700_image.png" alt="" /></p>
<p>Click <strong>Next: Configure Security Groups</strong>. If you already have one you’d like to use, go ahead and select it. I am going to create a new one and add HTTP and HTTPS traffic from anywhere.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1600990329603_image.png" alt="" /></p>
<p>Select <strong>Next: Configure Routing.</strong> In my case, I will be serving up an EC2 Instance, so that is what we want to choose here. And if you have SSL on the instance, set that here as well.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1601046868729_image.png" alt="" /></p>
<p>On the next page, choose your instance, and then select <strong>Add to registered</strong>.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1601046993498_image.png" alt="" /></p>
<p>Next, review your options and if it all looks good, hit <strong>Create</strong>.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1601047040111_image.png" alt="" /></p>
<p>The last thing we need to do, is to configure our load balancer to use Cognito for authentication. On the load balancer tab, select your newly created load balancer, select <strong>listeners</strong>, and then select the listener you want to configure.</p>
<p>If you added HTTP and HTTPS listeners, you will need to configure both. In my case, I configured the HTTP listener to forward to port 443.</p>
<p>Select <strong>HTTP: 80</strong> and choose View/edit rules.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1601047243125_image.png" alt="" /></p>
<p>From this page, click the edit option near the top, then click the edit option on the rule itself. Delete the existing THEN rule and add a new one for redirecting to SSL.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1601047373887_image.png" alt="" /></p>
<p>Press <strong>Update</strong> and go back to the load balancer page.</p>
<p>Following the same steps as before, edit the <strong>HTTPS listener</strong>.</p>
<p>When choosing a new <strong>THEN</strong> action, select <strong>Authenticate</strong>. Your newly created Cognito configuration should be available to choose from this page.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1601047490084_image.png" alt="" /></p>
<p>Add one more action to forward to our target.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1601047533519_image.png" alt="" /></p>
<p>And select <strong>Update</strong>.</p>
<p>We need to now update our DNS settings. Since we opted to use the AWS provided DNS name for authentication, we only need to add one CNAME record to the domain name we want our application to be reachable at.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1601047858570_image.png" alt="" /></p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1601048044990_image.png" alt="" /></p>
<p>Now hit your domain and see if you get redirected to auth.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1601048092806_image.png" alt="" /></p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1601091341531_image.png" alt="" /></p>
<p>Now that we know our authentication is working, we need to make sure that our application can’t be reached via the direct IP address.</p>
<p>We need to set a Security Group on that instance and only allow traffic from the load balancer.</p>
<p><img src="https://paper-attachments.dropbox.com/s_6A8AD8E9B8CF9D48F3B9EDEC041237C695C2CBEE8AEE9F15FB740839B77F8950_1601048519686_image.png" alt="" /></p>
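<p>As a belt-and-braces check, the application itself can also refuse requests that didn’t come through the authenticated listener. After a successful Cognito login, the ALB forwards the user’s claims to the target in an <code class="language-plaintext highlighter-rouge">x-amzn-oidc-data</code> header, which is a JWT. Here is a hedged sketch of extracting those claims in the backend app; note that a real deployment should also verify the JWT signature against the ALB’s public key, which this sketch deliberately skips.</p>

```python
import base64
import json

def alb_user_claims(headers):
    """Decode the user-claims JWT an ALB injects after Cognito authentication.

    Returns the claims dict, or None if the header is missing (e.g. a request
    that bypassed the load balancer). Signature verification is omitted here.
    """
    token = headers.get("x-amzn-oidc-data")
    if not token:
        return None
    payload = token.split(".")[1]            # JWT payload is the middle segment
    payload += "=" * (-len(payload) % 4)     # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))
```

<p>Rejecting requests where this returns <code class="language-plaintext highlighter-rouge">None</code> complements the security group rule: even if the group is misconfigured later, direct-to-IP traffic still gets turned away.</p>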
<p>And you’re done! You should have an application behind authentication using only Cognito and an AWS Load Balancer!</p>jpSynthesis Of Vectors2020-08-18T20:37:56+00:002020-08-18T20:37:56+00:00https://jordanpotti.com/2020/08/18/synthesis-of-vectors<h3 id="if-you-are-only-as-strong-as-your-weakest-link-dont-let-that-weak-link-be-your-detection-and-response-capabilities">If you are only as strong as your weakest link, don’t let that weak link be your detection and response capabilities..</h3>
<p>There will always be multiple gaps in each layer of your defense in depth model. Make sure finding those gaps takes longer than your detection and response times.</p>
<p>Visualizing attacks with the Attack Kill Chain is helpful, but it doesn’t demonstrate all of the nitty gritty things that an attacker has to bypass in order to execute on their objective. The same goes for public breach data; oftentimes, only the entry point is discussed. It would be more valuable to discuss what could have been done internally to prevent the breach, rather than focusing on the perimeter.</p>
<p>The following model isn’t inclusive of all the controls and layers of the security onion, but it does offer a visualization of how an attacker moves through an environment; it’s often about finding that one weak system out of tens or hundreds of secure systems.</p>
<p><img src="/images/2020/08/synthesis.png" /></p>jp