
Give your adversary every opportunity to make a mistake.

This is my first maxim of Information Security. It is my keystone. We hear variations on it: an adversary only needs to be right once to get in, but then only needs to be wrong once to be discovered. APT1 had behaviors that let Mandiant track them to Shanghai and a building tied to the People's Liberation Army's Unit 61398. CrowdStrike reviewed the DNC hack and discerned that two separate Russian intelligence bureaus had hacked into the system, each without realizing the other was there. Guccifer 2.0 forgot to turn on his VPN just once before going onto Twitter, and his location was tagged in a Moscow building tied to an intelligence directorate. Stuxnet was traced back to the NSA, Duqu to the Israelis. The best of the best make mistakes. This leads to a corollary to my first maxim: on a long enough timeline, everyone makes a mistake.

Here's a story that was shared with me by a good friend in the industry. It omits relevant details out of respect for my friend, and some details have been changed. The processes, trail, and TTPs are accurate. Apologies to Dick Wolf.

An adversary (henceforth identified as Beetroot) was intent on committing fraud. Beetroot would accomplish this by pretending to be an American company that helped foreign businesses get loans that would allow them to establish a presence inside the United States. That presence would help them register with the IRS and get an Employer Identification Number. Beetroot claimed to be able to facilitate the paperwork and the line of credit with an American bank, and to set up contacts in the United States for the foreign business, giving them access to the lucrative American markets, all for a moderate-to-large fee plus a revenue-sharing percentage over some amount of time. Beetroot claimed he could do this because he was a university professor with access to master's and PhD candidates who would do the work for research credit. Beetroot would reach out to targets using Search Engine Optimization (SEO) on popular foreign search engines (Yandex or Baidu, for example).

Beetroot had been running this scam for a long time. As he didn't target American citizens or businesses, no one domestically took any notice. His fees were small enough that foreign governments wouldn't go through the hassle of dealing with the US State Department to attempt to apprehend Beetroot or retrieve the money. Beetroot was safe.

Beetroot would do some brand impersonation on a website. One of the brands he impersonated found out and had his site taken down. Beetroot spun up another site and impersonated someone else.

Later on, Beetroot spun up another site, with a domain name very similar to the one that had previously been used against my friend. Once again, my friend's educational institution (a collegiate business school in the greater Midwest) found the site and worked to take it down. My friend came to me and asked me to take a look at what he had. We worked at different shops but were both contracting through the same firm, so NDAs were easy to handle.

### I reread that NDA 4 times before hitting publish. This births a new maxim. Do not mess with an NDA.

Beetroot used servers in Eastern Europe. Beetroot used privacy guard. Beetroot used publicly available information from any search engine to do the impersonation. Beetroot had no digital footprint of any kind in the US. There wasn't much to go on. Except Beetroot went back to the well and impersonated the same school twice (mistake #1).

This time, Beetroot's tradecraft was nearly flawless. But since the attack was virtually identical in every way (what he did, how he did it, who he targeted, where the targets lived), one could say with moderate confidence it was the same adversary. So the focus of the investigation was the original impersonation website.

Both websites were a variation on the school's URL acronym, but at .com instead of .edu (many schools, even business schools, don't register the .com - poor brand defense). But on the original one, Beetroot had one hiccup. At some point he switched registrars. Maybe he was being cheap, maybe he had a deal, maybe he liked the local geolocation better. But the day he switched, he forgot to check the box for whois privacy (mistake #2). And for one day, the full whois record was listed, and passive DNS captured it in perpetuity. There was no name, but there was an e-mail address and a street address. Tied to the registration date, we had behaviors linked to an indicator - we had pivot points. The e-mail address turned up three more websites impersonating Australian and New Zealand schools with business and law departments specializing in South Pacific maritime law, offering to (for a fee) set up businesses in regional countries to deal with shipping laws. Same scam, different business model (mistake #3).
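To make that pivot concrete, here's a minimal sketch of the kind of pivoting this enables, assuming historical whois records have been exported to CSV. The file name and column names are placeholders for this sketch, not any particular vendor's schema.

```python
import csv
from collections import defaultdict

# Hypothetical CSV export of historical whois records from a passive DNS /
# whois-history source. File and column names are assumptions for this sketch.
WHOIS_HISTORY = "whois_history_export.csv"

domains_by_email = defaultdict(set)
domains_by_address = defaultdict(set)

with open(WHOIS_HISTORY, newline="") as f:
    for row in csv.DictReader(f):
        if row.get("registrant_email"):
            domains_by_email[row["registrant_email"].lower()].add(row["domain"])
        if row.get("registrant_address"):
            domains_by_address[row["registrant_address"].lower()].add(row["domain"])

# One unguarded day of registration data turns a single indicator into pivots.
seed_email = "seed@example.com"  # placeholder for the address captured that day
for domain in sorted(domains_by_email.get(seed_email, set())):
    print("e-mail pivot:", domain)
```

The same dictionary keyed on the street address is what turns the next paragraph's find into a goldmine.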

The street address was diamond-studded, 24-carat, platinum-plated solid gold. Over 40 websites with 15 different e-mail addresses were tied to that address. All of the sites were hosted on one of three different Middle Eastern bulletproof hosts. At each host, all the sites lived on a /30 subnet. Every single site used the same web server. The web server differences came down to versions, and the versions tracked to when the sites were spun up. There were more sites on those subnets, and they led to a few more e-mail addresses, which led to a few more sites (mistakes #4 through #1,329,542). These took the timeline up to the point when Beetroot figured he should privacy-guard everything. There were tons of pivot points to investigate, spoofing tons of other schools in English-speaking countries.

That wasn't all. Looking at the original site that spawned the original investigation, there was one line of text that stood out. It looked like a sentence that had been run through Google Translate into another language and back into English. The original line wasn't hard to guess, and when run through Translate into Russian and back into English, it produced the distinctive sentence. We ran a Google search on that sentence. We got three hits. One website didn't exist anymore. The other two did. And they were near carbon copies of the website my friend originally investigated. Those two were privacy-guarded. And they had the same web server, same web structure, and operated on a subnet that tied to an early DNS record for the original imposter site (mistake X). But the defunct website was a diamond the size of a softball.

The original site was <university acronym>.<general university-biz word dash LLP>.com. It contained multiple subdomains for all the business types Beetroot would spoof. Whois wasn't private, the address nearly lined up (one digit was off), and the registrant had a phone number with the area code and local prefix of the city and state in the whois. Later in the whois history, Beetroot switched to a Google Voice number, which used geolocation to give him a number with the same area code and prefix. The registration date put this as the first site spun up. A web archive view of the site showed a very rough draft of some of the impersonating sites.

The cherry on top - Google Earth. Street addresses tie to a lat // long location. Beetroot's address was in the middle of nowhere. Google Earth showed an empty field of tall grass. We went down the road in both directions and found that the addresses on the few mailboxes didn't line up with Google Earth. So we clicked down the road to a small house surrounded by fields for hundreds of yards. Its address marker matched the address discovered in the whois record. The small house had multiple satellite dishes (like one would have for Dish or DirecTV), which would make sense for middle-of-nowhere internet. And the smile on the Mona Lisa? We spun Google Earth around, and someone had paid the money to put an internet junction box like you see in suburbs right across the street from this house in the middle of nowhere. There were still signs of a fresh trench dig and fill running from there toward the highway. And a fresh strip of asphalt led from it across the street to what I assessed with high confidence (based on everything together) was Beetroot's house.

From a Threat Intel standpoint, this was unbelievable. It was the Deathly Hallows, the Lost Ark, even the alien from Area 51. We had tradecraft. We had a full timeline from start to present. We had targets. We had consistent TTPs stretching over years. And we had Beetroot's home.

We imagined that's what it felt like when the Mandiant researcher stood outside the office building in Shanghai and took that picture.

Beetroot represented something that gets zero discussion in most online Infosec circles - the Persistent Threat. We hear about Advanced Persistent Threats all the time. And we hear about script kiddies who wreak havoc with a tool. Beetroot fell in the middle. Beetroot probably started out as one person, and then worked with others to make his scam work. Beetroot's skills improved with time. But Beetroot never wiped his slate clean. As his tradecraft got better, he didn't clean up his previous footprints.

Persistent threats have greater initial technical debt and much more limited resources. They need to build on previous successes with very limited budgets. Their advantage is that it's harder to defend than to attack, and Beetroot wasn't attacking anyone who had the means to fight back. But the work wasn't lucrative enough to justify throwing away his old infrastructure, and he likely forgot about it. He diversified, but not enough. He (like most adversaries) had consistent TTPs across his fraud. Lone indicators were a starting point, but the TTPs were obvious from one operation to the next.

We think of the near impossibility of finding APTs without multiple dedicated staff assigned to each Infosec function. And how would one train to challenge such an adversary? Lots of businesses will fall into the targeting reticle of one of the many APTs. But for each of the APTs, there are dozens of persistent threats coming after your networks with tradecraft that isn't as good. You can use these to show successes to leadership. You can use these to sharpen your skills. And you can use the learning experience to better position yourself to catch the advanced threats, who will also make mistakes.

Give your adversary every opportunity to make a mistake. They will. And you will catch them.

Infosec_Samurai

Who are you?

That one question defines so much of you. Thinking about the question defines you - specifically, how you think about that question. In Infosec you have to be analytical. Whether you work, or desire to work, at a strategic (leadership), operational (cooperative), or tactical (technical) level, the ability to ask the right questions, and to analyze the questions asked, is part of the job. What are you trying to find out? What will that information get you? Why is getting that information important? What does the person asking the question want to know? What do they need to know? Are they asking for what they need? What questions will the answer you give prompt? A proper analytic question is the start of a cascade of multi-order effects, birthed by the questions that spawn from the first one.

By virtue of reading this blog, I'd bet money you have created a profile on at least one social media site, even if only for a short time. If you haven't, you've at least read one profile on social media. The odds that neither is true are smaller than a rounding error to significant digits. Think of any profile you have read. There is a character limit. They are designed to be small blurbs: succinct, and by their very nature incomplete. And that is the problem - especially in Infosec.

A moment in time can change a life. A person's most outrageous experience in life comes down to one single moment. Every social media post, upload, and interaction is at best one moment in time. Sometimes it's the one we want to show the world. Often it is one's weakness, rage, or hate, vile and unfiltered. And, very disturbingly, this is prevalent in Infosec. Even worse, those in Infosec are willing to judge based on one moment. What makes that an egregious sin is that Infosec is supposed to be so analytical. A moment in time is an indicator. And an indicator without adversarial TTPs only shows what happened right at that moment. If that. Investigators who claim to be purely analytical when dealing with a digital indicator will then judge someone worthy of damnation (or termination from whatever job they have) based on an indicator - and based on a truly perverted sense of absolutist justice.

One of the great moments in the movie High Fidelity is when John Cusack explains why Joan Cusack came into his shop and referred to him in a very unkind fashion. He explains four pieces of information his ex-girlfriend most likely shared with Joan that painted him in a very unflattering light. He then tells the audience that each of these four horrible things was absolutely true. He goes on to rationalize (minimize) these behaviors. Knowing full well that the audience is judging his character, he looks into the camera and gives the audience a pop quiz: think of the top five all-time worst things you've done to your mate that they don't know about. There is a pause, giving the audience time to think. Then he gives the line of the movie: now who's the fucking asshole?

Infosec rationalizes its bad behavior under the justification that people don't understand the fight we had to get where we are. There is no easy way into this part of technology. We see evil intent and behavior as part of our job, so in comparison our snap judgements, our condemnations, our willingness to hurt (trying to take someone's job away so they can't eat, have shelter, or have transportation is a most cruel hurt) shouldn't be held against us - we fight the bad guys. We see a moment in time, and, depending on who the perceived slight would hurt, judgement is hurled. The ends (vanquishing evil) justify the means (inflicting harm).

Except we're looking at one point in time. Infosec people would make very bad jurors. Think back to a judgement, whether hurled in a tweet, said behind someone's back, or used to cause harm. Think of the worst, or the most recent. To quote Cusack: now who's the fucking asshole?

I am fortunate. Whether it's my path, my age, having lived life ever on the outside, or likely a combination of the above, I focus on my bias more and more often. I focus on the source of that bias. I focus on how it affects my life. I focus on how it will be viewed by others. My most recurring maxim is Words Matter, and that is continually apropos, moment by moment. My words reflect my bias.

I was taught by individuals, by collective groups, and by my state government that, on the basis of my demographic, I was disposable, and that the world was justified in disposing of me based on the actions of others long dead, or of others with more resources and power than I will ever have. Therefore, I identify those who cling to victimhood, as if they were special, or as if the history of their identity group should grant them favor or recompense, as weak and untrustworthy. Bias.

I have always been on the outside of whatever large groups I wished to belong to. I have seen and experienced the injustice of the mob. I have experienced those in power applying different rules to me than the group because I wasn't part of the group. I see larger groups that won't police themselves as corrupt and incapable of being a voice to justice. People don't ask forgiveness because they are sorry, they ask forgiveness to avoid punishment. Bias.

Like Colm Meaney's character Gene in Layer Cake, I'm too loyal for my own good. Very often I've held up my end of a deal based on a promise - real or strongly implied - that the other side never had any real basis to honor. A former boss told me that in ten years of reference checks, my former managers, when asked about my weaknesses, gave the exact same answer: when he's part of a project or a team and people aren't holding up their end, he won't let it fail. He puts on boots and a cape and saves the day, every single time. That makes him reliable, and difficult to work with. People will abuse my ethic. People will find a way to betray. On a long enough timeline, people will show they can't be trusted. I discard, with great ease, people who betray my trust. Bias.

Depending on how you read that, your bias shows. Do you see someone who has overcome adversity, understands his responsibility in life to himself and others, and works to keep the team from failing and to preserve earned trust? Or, do you see an angry man who never fit in and won't give people a chance? That's your bias. No matter which you choose, judgement based on three paragraphs shows bias. And if you say you didn't, you're either Detective Columbo or a liar. And Columbo is dead.

And that's the point. Bias seeps into everything. It colors your judgement. I have taken seemingly extreme actions in some facets of life lately. They weren't based on a single indicator, but on people's TTPs (patterns of behavior). I've paid a price for it. That price will collect a recurring fee of opportunities and allies lost for a long time. Those choices were made for the right reasons, even if the outcomes attempt to reinforce my biases.

So who are you? You are far more than a profile or post. You need to understand you. Understand as much of you as you can define, as you can put into thought. Once you can do that, you can start to view that from the other side of the looking glass with Alice. Analyze. Like a good investigator. Like a good communicator. Like a good researcher. Once you've identified your bias, you can work to overcome it. Like a good human being.

Both an infinite collection of moments in time, and their sum total. That's who you are.

Infosec_Samurai

My old boss had one ironclad rule when reporting on an alert or incident: don't think, know. What he meant by this is the need in any investigation to be sure. He ran security at a very large financial organization before joining the institution where he and I met. He had to face breach notices, legal summonses, and visits from at least one three-letter agency. And in all these dealings, he understood that the difference between 0% and 99% sure was minuscule compared to the difference between 99% and 100% sure.

100% sure is obvious. There is proof. There is evidence. There are logs. All of these combine to paint a complete picture. They leave no doubt, much less reasonable doubt.

99% sure is where the problems occur. Your odds are so overwhelming that you have virtual certainty you are correct. 1% is a rounding error, or a margin of error.

The truth is that 1% is an error. Employees being terminated, adversaries being arrested, even APTs - they live and die off that one tiny percent. Believe me, when the lawyers get involved, that 1% can save someone from legal action or keep them out of jail. A majority of the time, that 99% will bury far more than 99% of your adversaries. The ones who can navigate that 1% are the ones you should really be worried about.

Enter 'Don't Think, Know.'

We see a system beaconing out to an IP listed in a threat intel report as part of APT29's infrastructure, ergo the Russians hacked us. What process spawned the call? What spawned the process? Is the IP a compromised public server the APT used to piggyback as a watering hole attack, and the system is making a normal call to the box? Was an engineer playing with a sample and triggered the call? Has the alert been verified with the source? How recent is the intel? Did EDR flag on anything? Did EPP block the rest of the process? Did the firewall stop the dropper's download? Sweeping declaratory statements are made at the end of an investigative process, not the beginning. In threat intel, an indicator by itself is a starting point at best. The behavior and the chain of events that spawn from that indicator's investigation determine fact. The desire to be right, to fight the good fight and take down the bad guys, can cloud the search for fact. One can think they are right. If one isn't 100% sure, they may not clearly see the difference that 1% makes.
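For the first two of those questions, here is a minimal live-triage sketch using the psutil library: walk the suspect process's ancestry. The PID is a placeholder; in practice it comes from whatever EDR, netflow, or proxy alert flagged the beacon.

```python
import psutil

# Walk the ancestry of the process that made the call. SUSPECT_PID is a
# placeholder - pull the real one from the alert that flagged the beacon.
SUSPECT_PID = 4242

proc = psutil.Process(SUSPECT_PID)
while proc is not None:
    try:
        cmdline = " ".join(proc.cmdline())
    except psutil.AccessDenied:
        cmdline = "<access denied>"
    print(f"pid={proc.pid}  name={proc.name()}  cmdline={cmdline}")
    proc = proc.parent()  # returns None once we hit the top of the process tree
```

A browser parent might support the watering-hole theory; an analyst's sandbox tooling might mean an engineer detonated a sample. The chain doesn't convict - it tells you which of the questions above to run down next.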

Sometimes it's easy. Someone leaves a digital footprint that only they could leave. Someone makes a blatantly sexist or racist remark in a print medium. Don't assume this is common. And, most important of all, do not project your bias onto it. Doing so leads one to disregard evidence that contradicts the thesis.

Accusations have a human cost. People so easily point fingers. This is due to our thirst for answers, the need for closure to an event, and our desire for retribution. Just look at any Twitter mob. If you follow a large enough chunk of Infosec Twitter, you will see these far too often, and they will include incident responders and investigators who should know better.

An accusation is an indicator. Investigators need to take every accusation seriously. But an accusation isn't fact; it's a starting point. When an accusation is leveled that someone has committed fraud, embezzlement, theft, or worse, it needs to be taken seriously. The voice making the accusation can lend a great degree of credibility to it, but by itself it is not indisputable proof of wrongdoing.

Less common in Infosec (I hope) but prevalent in the real world (too often) is the ending of an incomplete investigation with a declaratory statement claiming nothing was wrong. No malfeasance happened. At some point in an investigation, it will get hard. An investigator will have to dig in deep and wade through logs. This isn't a quick process. It shouldn't be rushed. Conclusions shouldn't be rushed. Behavior needs to be analyzed. The blank spaces have to be filled in.

When you don't know, look for a way to find out. If it is impossible to find out (e.g., the logs have rotated), an investigator needs to state where the holes in that part of the investigation are. The investigator needs to find a way to corroborate the behavior, not let assumption become fact and move on. When that is not possible, take a cue from Colin Powell.

What do you know?

What don't you know?

What do you think?

Why?

These questions, answered as honestly and completely as possible, are what it takes to shrink that 1% down to as small a number as possible.

Any ethical investigator needs to be mindful of the human cost of their work. To do that, they need to be as thorough as possible. Their behavior comes down to one simple credo.

Don't think, know.

Infosec_Samurai

 

The most common question about security jobs is how to get the first one. How does one break into security? Where are the entry-level security jobs? I went to school and got a bachelor's in Information Systems Security. Even before leaving school, I started looking for entry-level information security jobs. That concept, the entry-level information security job, is built on a flawed premise. They don't exist.

My bias is built around that time period - 2008. The market collapsed, unemployment was so high Congress had to vote to extend unemployment benefits and kick out a stimulus check, and no one was hiring. No one. Talks were as technical as you could get, and conferences were financially restrictive - especially to those who didn't have jobs. And people were scared. They were so scared they were hoarding information, doing everything in their power to make sure their company couldn't fire them. They hid the keys to the kingdom and made sure no junior-level people could move up and take their positions. Tales of older workers - making three times what junior employees made - being laid off, or RIFfed (reduced in force), were daily occurrences. Trust between employees and management was at an all-time low. No matter the company culture, everyone's IT got gutted. That shaped the world today: soaring GDP and stagnant or falling wages, everyone wanting contractors and not FTEs, and fewer companies willing to pay for training for anyone not on their mission-critical staff. To a degree some of that is changing, but that change exists primarily in specialized areas.

What are the barriers? First and foremost, you don't know what you don't know. I fight with REST API coding because I took my coding classes in 2006 and 2007, in Visual Basic. My coworker can puzzle through in less than an hour issues that take me days. I have to hunt through forums of questions to find more questions I didn't know to ask. Even for the veterans, not knowing what we don't know teeters on the edge of crippling. Second, I never had a mentor. There was no guiding hand to show me the way. I was the fat, straight, white guy. No one wanted another one of those in the pipeline. Plus, I was not a drinker, so I was never in the social circles of the people in power. I had to fight for my information - learning akin to strip mining or scorched earth - and there was no forgiveness for mistakes. I moved around a lot. Those who had mentors were guided through pitfalls with ease, and taught how to learn as well as what to learn. Third, the career path was not defined. Listen to any of the faux-humble "I'd never use the phrase thought leader" types, and they talk about a career-path utopia where certs are pointless and they'd take a skill set over formal education any day of the week. Next time you see this, look at the background. I would bet a steak dinner that they are A) ex-military, B) worked for a federal government three-letter agency, or C) both. The most notorious of these people went military to NSA - and yes, that's more than one person. So, unless you are 18 with high technical skills and about to join the military, most of their career advice is for naught. This fog completely obscures any vision of entry-level security.

There is one thing you need to know, above all else. Burn this into your brain in large flaming letters.

THERE IS NO ENTRY LEVEL SECURITY JOB.

People will try to argue that. To do so violates one of my most important maxims: words matter. You can't argue against that statement without trying to change the meaning of words. People online do. They then violate another maxim: deal with the world as it is. It's like a triad - pick two. You can have an entry-level job, you can have a security job, or you can do entry-level security work. Entry-level jobs don't carry the level of responsibility that security jobs have. Entry-level security work is not something people pay for, given the risk associated. Security jobs require a degree of expertise that far exceeds anything we think of as entry level.

Starting points in security depend on your background. Security analysts who work in SOC (Security Operations Center) environments have backgrounds looking at operating systems or network traffic, or both. They take expertise from a previous life as a sysadmin or network admin and parlay it into looking through alerts for outliers in data transmission or deltas (differences) in configurations. SecDevOps people were DevOps people who learned to secure and bugfix their own code, and the code on their systems. Network admins become firewall admins. Though I am loath to make the comparison, switching from one of the early IT jobs to security is akin to the evolution of a Pokémon, Abra to Kadabra to Alakazam. You can't move up until you have a firm grasp on the previous level (without potentially crashing your career).

Deal with the three hurdles. First, all you need is a concept. Do you want to secure a network? Secure Windows//Mac//Linux operating systems? Attack networks? Build secure code? Start simply by googling that concept. There are numerous and extensive papers, articles, podcasts, and videos on nearly every subject. Or, even better, search Twitter. You will find many a person who tweets and writes about these concepts, and those who will retweet those who do. In doing so, you will clear hurdle one and make it most of the way over hurdle two. The online community can act as a crowdsourced mentor. Read the writings of established professionals. Look at their histories on LinkedIn and see the evolution of their job titles. Look where they started and you will see you can come from nearly anywhere and get to security. Some are even approachable at conferences and talks. When you look at those histories and talk to those people, you will see that there are some basic funnels to get to where you want to go, but those aren't the only paths. Find something you want to do and pour yourself into it in your soon-to-be-not-free time. You will build yourself into a subject matter expert, and that will have value. And that will help you clear hurdle number three.

If I could do it again knowing what I know now, what would I do differently?

If I were in college, I would find a paid internship. It gets you in and working in a professional environment, with the tools they don't have in schools. Plus, it gets a real company on your resume, so you aren't someone with no experience.

If I were in a career rut, I would build a home lab (very inexpensive with virtual machine software). I would play with tools like Wireshark, looking at traffic. I would rip apart group policy on multiple Windows operating systems. I would read about system vulnerabilities and how to attack them, then test it out. I would find free tools that mimic what the expensive stuff does, to make it easier to work with tools I have never touched, as the underlying ideas are the same.

From where I am now? I would keep learning. I would keep working to make sure I'm not ashamed of not knowing an answer my dramatically younger colleagues take for granted. I'd use Twitter more as a learning and networking tool, and as an outlet to share my view on topics I feel are underrepresented.

It doesn't get easier. But then again, neither does life. Keep pushing forward.

Infosec_Samurai


Threat hunting and threat intelligence have a special relationship. Think Sonny and Cher, peanut butter and jelly, even cake and ice cream. Each stands on its own, some with great renown, but put them together and you have a whole that far exceeds the sum of its parts. And like the ouroboros, hunting and intel feed off each other.

Start with a hunt. The purpose of a hunt is to find adversarial behavior on the network. You do this by forming a hypothesis (I believe the adversaries are trying to move laterally through my network using PsExec), and then reviewing log information to test that hypothesis (what unexpected accounts are attempting type 3 logons on multiple systems, successful or not, spawned by the process psexec.exe). You find an anomaly, you document it, and then you run it down to see if it can be explained by regular user or system behavior. Should you find proof of adversary behavior, you document everything and kick it over to incident response (assuming you are not also the incident responder). You then work to eliminate that adversary from your network.
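As a minimal sketch of that hypothesis test, assume the relevant Windows Security events (Event ID 4624, successful logon) have been exported to CSV. The file name, column names, thresholds, and expected-account list are assumptions to tune for your environment.

```python
import csv
from collections import defaultdict

# Hypothetical CSV export of Windows Security Event ID 4624 records.
LOG_FILE = "logon_events_4624.csv"
EXPECTED = {"svc_backup", "svc_patching"}  # accounts allowed to fan out

hosts_by_account = defaultdict(set)

with open(LOG_FILE, newline="") as f:
    for row in csv.DictReader(f):
        # Logon type 3 = network logon, the type PsExec produces on the target.
        if row["EventID"] == "4624" and row["LogonType"] == "3":
            hosts_by_account[row["TargetUserName"]].add(row["Computer"])

# Hypothesis: an account PsExec-ing around the network shows network logons on
# an unusual number of distinct hosts. Correlating survivors with PSEXESVC
# service installs (System log, Event ID 7045) tightens the hunt further.
for account, hosts in sorted(hosts_by_account.items(), key=lambda kv: -len(kv[1])):
    if account not in EXPECTED and len(hosts) >= 3:
        print(f"{account}: network logons on {len(hosts)} hosts: {sorted(hosts)}")
```

Every account this prints is an anomaly to run down, not a verdict.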

Enter threat intelligence. They take the documentation from the hunt and analyze it. Was there a pattern in the remote login attempts? Did it target servers with a specific function? Was it the same user every time? Was it regular users or IT users with higher levels of access? Did it happen during certain times of the day indicating an adversary's working hours? What other processes did the compromised user account attempt? They work to see if it is all the work of one adversary or multiple.

Yes, multiple adversaries can be inside the same network, even doing battle with each other while assuming the other is a legitimate sysadmin or security personnel. See the after-action reporting on the 2016 DNC hack.

Threat intel works to build a profile, and that includes examining the kill chain from the recon stage to the point the adversary was discovered. They use the diamond model (http://www.activeresponse.org/wp-content/uploads/2013/07/diamond.pdf) to track an adversary along the kill chain (https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html), focusing on four points at each step in the kill chain: adversary, infrastructure, capability, victim. To analyze an adversary's attack, threat intel wants to be able to fill in all four vertices of the diamond. As they build a profile, they will see that an adversary may have undiscovered capabilities. An adversary may be discovered moving laterally with PsExec, but how did they get on the network to begin with? How did they establish persistence? Building the adversary profile will create more questions. This can be compared against previous adversary documentation, or against information from external trusted threat intelligence sources.
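A toy illustration of the model's shape: one diamond per kill chain phase, with the four vertices as fields. The field names are the model's own; everything else (sample values, phase labels) is an assumption for the sketch.

```python
from dataclasses import dataclass

@dataclass
class DiamondEvent:
    phase: str           # kill chain phase, e.g. "Delivery", "Lateral Movement"
    adversary: str       # who - the tracked intrusion set, even if just a label
    infrastructure: str  # what they used - C2 IP, compromised host, etc.
    capability: str      # how - tool or technique, e.g. "PsExec over SMB"
    victim: str          # against whom - the targeted system or person

profile = [
    DiamondEvent("Lateral Movement", "UNKNOWN-1", "WORKSTATION-042",
                 "PsExec over SMB", "FS-FINANCE-01"),
]

# Phases with no diamond yet are exactly the questions intel hands back to the hunters.
covered = {event.phase for event in profile}
for phase in ("Delivery", "Installation", "Command and Control"):
    if phase not in covered:
        print(f"No vertices filled in for {phase} - task the hunters")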

The intel team takes these questions back to the hunters: please hunt the history of the account usage, and look for the origin of the anomalous behavior. Something had to happen (a process run, a file downloaded, a website visited) that preceded it. The hunters then refine the hunt using the parameters given to them by the threat intel team to flesh out more of the adversary's capabilities. They return their findings to intel, who analyzes and asks more questions; the hunters refine the hunt even more, and the process cycles until the adversary's tactics, techniques, and procedures can be assessed and documented.

The results of this feed new alerts (traps) should the adversary ever penetrate the network again. Incident responders can then use the adversary profile created by intel, with information gathered from the hunters, to contain and eradicate the adversary's presence with greater rapidity. These profiles can be used to distinguish between similar but separate adversaries, and help paint a picture of motivation. This tracking of adversaries, and the intent derived from their behavior, can be documented and taken to leadership to say: these are the types of organizations targeting our institution, and this is what they find valuable to disrupt and steal. We are better off directing our resources to elevate protection on this set of assets and people.

Documented evidence of intent and capability with a clear target make it easier for leadership to support a course of action. This continual process relies heavily on the coordination between the hunters and gatherers.

None of us is as secure as all of us.

Infosec_Samurai

Threat hunting is a popular concept in the modern Information Security space. Vendors will tout their systems as a threat hunting solution. Or even more inaccurately, they will claim their box or their service can eliminate the need to do threat hunting. Both of these claims are false. The first because it incorrectly defines threat hunting. The second because it claims to help you abdicate responsibility - the ultimate sin in Information Security.

Clearing the fog around the beliefs surrounding threat hunting starts with defining what isn't threat hunting. Checking on an alert in a system isn't threat hunting. This is triaging - determining the accuracy and risk of a given alert. If some source, a box, an indicator, or a listserv, tells you to go look to see if a given action is malicious, it's not hunting.

Hunting isn't about indicators. It's about behavior. You are looking for behavior out of the norm. For an adversary to get a foothold in your network, and then begin to act in their interests, their behavior will be both defined and different from the norm.

To properly hunt, there are some prerequisites. First, hunting is a process that takes time. It can't be rushed. You can scope your hunts to look at a specific behavior on one system over a short period to control the time investment. Starting small will allow you to understand how much time to budget.

Second, create documentation. Hunts need to be documented to help establish baselines. Environments change, and hunts help keep baselines up to date. Hunt documentation needs to show exactly what the hunt was about, how it was scoped, what the hunter sought, and the results of that hunt. This allows hunters to refine their process, creates a history of refinements to the hunt, and provides a template for teaching junior security team members how to hunt. My personal preference is Microsoft OneNote, but you can use a wiki; you can use Notepad. As long as you can organize your documentation, you are going in the right direction.
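Whatever the tool, forcing every hunt write-up into the same shape is what matters. Here's a sketch of one such shape; the field names are my own mapping of the points above, not a standard.

```python
from dataclasses import dataclass

# The fields mirror what hunt documentation must capture. Tool choice matters
# less than filling in every field, every hunt.
@dataclass
class HuntRecord:
    hypothesis: str      # what the hunt was about
    scope: str           # systems, time window, data sources
    what_we_sought: str  # the concrete behavior searched for
    results: str         # anomalies found, justified, or escalated
    refinements: str     # what to change next time

record = HuntRecord(
    hypothesis="Adversary moving laterally via PsExec",
    scope="Domain controllers + file servers, last 30 days, Security logs",
    what_we_sought="Type 3 logons from unexpected accounts across multiple hosts",
    results="Two service accounts fanned out; one justified, one escalated to IR",
    refinements="Add System log 7045 service-install correlation next run",
)
```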

Third, have visibility. You will need to be able to see data. That data has to come from somewhere, and the easier the access, the easier it will be to search. You can hunt with the Windows Event Viewer. You can hunt with netflow. You can hunt with just about any logging data. You need to be able to see it and carve it.

Carving is the ability to manipulate data to remove irrelevant sections or isolate sections that require further analysis. This manipulation can be based on simple ideas (running FINDSTR looking for logon type 10, or using grep to look for netflow connections into the server core from unexpected IPs). Talk to any experienced threat intelligence analyst, and they will sing the praises of Microsoft Excel.

Once you have set aside time, can create documentation, have visibility, and can manipulate data, you are ready to hunt. The process of a hunt is very simple. Behavior A is normal. An adversary on the network will exhibit behavior that deviates from Behavior A. How do I find behaviors that deviate from Behavior A? You look at the data and filter out normal behavior. The behavior that's left needs your analysis. Every bit of anomalous behavior needs to be either justified or addressed.
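That filter-and-analyze loop is, at heart, a set difference. A minimal sketch, with made-up log fields standing in for whatever your data source actually provides:

```python
# Subtract the baseline, analyze the residue. The log source and field choices
# here are assumptions; the set-difference idea is the point.
def carve(events, baseline_keys, key=lambda e: (e["host"], e["process"])):
    """Return only events whose (host, process) pair isn't in the known-good baseline."""
    return [e for e in events if key(e) not in baseline_keys]

baseline = {("HR-01", "outlook.exe"), ("HR-01", "excel.exe")}
observed = [
    {"host": "HR-01", "process": "outlook.exe"},
    {"host": "HR-01", "process": "psexec.exe"},  # not in baseline -> needs analysis
]

for anomaly in carve(observed, baseline):
    # Every survivor must be justified (added to the baseline) or addressed (handed to IR).
    print("justify or address:", anomaly)
```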

This is where you find unique (or erroneous) configurations in your environment. This information can correct issues, or help people understand what their systems do. On more than one occasion, at multiple jobs, I've asked system owners why their system does something, and repeatedly I have been told they have no idea.

Once you have cleared known good behavior, and you have justified what can be justified in your environment, you are left in one of two states. One: there is nothing left to carve out from your hunt. This means an adversary didn't exhibit this behavior (or found a way to disguise it, but that's farther down the threat hunting rabbit hole). Two: there are unexplained behaviors found in your hunt.

If it's option two, congratulations! The process of hunting is now concluded. The process of incident response begins.

Infosec_Samurai

This is part three in the series on personal codes of conduct. These are my maxims, my personal guiding philosophic code.

Part 1
Part 2

Maxim 7: Never say no to a user. Say "Let me find a way for you to do that safely."

Information Security professionals are relentless in finding ways to make their jobs easier. We turn threat hunts into alerts. We automate response actions. We use scripts to automate as much as possible. We do anything to make our lives easier. End users are the same way. If software will make a user's job easier, they will use it, whether or not the company pays for it. How often have you found unlicensed, hacked software on your users' computers? If you haven't checked your users' systems, take a Xanax and go hunting.
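If you want a starting point for that hunt, here is a minimal, Windows-only sketch that diffs the registry's uninstall entries against an approved list. The approved list is a placeholder, and a real inventory would also check the WOW6432Node and per-user hives.

```python
import winreg

# Enumerate installed software from the registry uninstall hive and diff it
# against an approved list. APPROVED stands in for your real license inventory.
UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"
APPROVED = {"Microsoft Office", "7-Zip"}

def installed_display_names():
    names = set()
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as root:
        subkey_count = winreg.QueryInfoKey(root)[0]
        for i in range(subkey_count):
            with winreg.OpenKey(root, winreg.EnumKey(root, i)) as entry:
                try:
                    names.add(winreg.QueryValueEx(entry, "DisplayName")[0])
                except FileNotFoundError:
                    pass  # some uninstall entries have no display name
    return names

for name in sorted(installed_display_names() - APPROVED):
    print("not on the approved list:", name)  # a conversation starter, not a verdict
```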

End users don't tell us about these unpatchable, unlicensed trojan horses because they expect us to rip away what they have and make their jobs harder. You want to change the paradigm? When you find this software, sit down with the user and explain the issue. Then tell them you want to find a way for them to do that safely. If you present this software to leadership as a business need, you can press to get licensed copies. You can find free versions with similar functionality that can be patched. When you show users you understand what they need, and you can demonstrate you want to see them do their job safely without roadblocks, you create an ally and an advocate.

Maxim 8: Remember, kids: all mics are hot, all guns are loaded, and all systems are production.

Credit for this goes to @infosecxual.

I haven't had an employer yet where I haven't started testing on 'test' systems, only to find out I shouldn't have, or that I needed to stop because someone was using them in a production capacity. Then why is it called test? Just because something is labeled a certain way doesn't mean it's being used that way. Define: hacking.

<Aside>

I had a job where a coworker kept a bowl of movie theater candy out with a spoon, so people could serve themselves a spoonful of Mike and Ikes or Junior Mints and enjoy. We were having a talk about expectations and mismatched expectations when she set out a big bowl of M&Ms. To prove a point, I went downstairs to the vending machine and bought a pack of Skittles. I then ninjaed the red, orange, and yellow Skittles in among the M&Ms. The look on people's faces, especially when they got a yellow one, became an unspoken example of misplaced expectations.

</Aside>

This doesn't just apply to test systems. Every time you try to cut a corner, such as quickly updating this one router or slipping in a quick vulnerability scan against a prod system during business hours, you are rolling the dice. The accountability for crashing a prod system because you weren't patient is more than downtime. It affects how you are viewed. See Maxim 3.

And don't forget, you never know who is listening. Keep that unpleasant opinion about users to yourself.

Maxim 9: People will use what power they have. Plan for it.

End users may not have much power when it comes to policy or procedure. We do what we can to work with them, but there are times policy or procedure dictates certain behavior. When Infosec has a victory where a user has to do something the way we want, we have to be careful in how that is presented to the user. When users are shut down, or when their process changes, if it is not done in a way respectful of users and their jobs, they will find a way to push back.

Understand this. People can be petty. Users are people. Ergo ...

Does the user, or one of their friends // allies, sit on the change board? Prepare to fight to have even the most basic changes approved. At a previous job, security had a history of hampering other departments instead of working with them. My first change was adding more vulnerability scanners to offset load and speed up the scanning process during approved windows, allowing us to shrink those windows. This change was a benefit to everyone. Two members of the board fought it with all the things they thought could go wrong, none of which made sense. The name Skynet even came up in the argument. If you follow current politics, you know you can't argue reason with people who absolutely refuse to embrace it.

Can they deprioritize a process? Same job: we were doing annual IAM role permission reviews. The system was set up to give people a month to get it done, with reminder e-mails at 14, 7, 5, 3, 2, and 1 days to deadline. Once you hit 5 days, those e-mails included the person's supervisor. When you hit -1, they included the supervisor's supervisor. We had one holdout who hit -14 days. All three in that line up the food chain didn't like Security due to some perceived slight years ago. So every one of them argued that other priorities were more important. We had to get people with Cs in their titles involved, and this made the review a month late. Of course, mitigating the issue that way didn't make Security look any better.

In security, many processes and needs will be handed off to other teams due to separation of duties or the need for specialists and specialty knowledge. If you need something, the monopolistic provider has a good deal of power. If you haven't worked to foster good relationships, they will use that power to show you who has it. You may win in the end, at the cost of stress, frazzled nerves, and other users watching you go to war.

"Why have enemies when you can have friends?"
Charlie Hunnam as King Arthur

Infosec_Samurai

This is a continuation of part 1 of my series on personal codes of conduct.

 

Maxim 3: Your most important asset is your name.

Of all the things you carry as an Information Security professional, and as a human, the most important is your name. Your name carries your reputation. Think of any famous person's name. What image does that conjure? What do you automatically think when hearing that name? Is that person pretty? Talented? Caring? Aloof? Cruel? Crazy? Think about how most people view that person. The views are filled with the bias of personal experience. However, those images are cultivated carefully. Now think of your boss. Think of your employer. Think of your best friend, or your significant other. How are they viewed, by you and the world at large? What behaviors do they exhibit to cultivate that reputation?

As you think of the good and the bad of it, their history and how it has affected their reputation come to mind. One malicious act carries more weight than all the good they may have done. Have they repented? Have they worked tirelessly to rebuild their reputation? Are there people who still think of them as bad or untrustworthy based on that event?

<Aside>

I have a problem with this in modern discourse. Look at the political arena or social media. When one side wants to prove someone is bad, they go into the way-back machine to find one prior bad act (usually inappropriate speech as a younger person). And this is used - mostly wrongly - to excoriate that person. Social media takes away that passage of time, making such words ever present, even if the person who used them is no longer here. It's one thing if the attitude hasn't changed. But if the person no longer exhibits that behavior, they grew as a human. They are greater than their past self. There is no greater achievement.

</Aside>

Corollary: You can destroy your reputation in an instant. Be warned.

Think about Infosec explicitly. The example of Terry Childs is perfect. He locked the city out to prove a point about the security of the network, and potentially in defense against a malicious insider. He took it to an extreme, and his actions have ensured that no one but the most desperate would hire him for any job that carries a burden of trust or responsibility. I have often received calls from network contacts asking if I know their applicant and what I can tell them about the person. Sometimes I have glowing reviews; sometimes I have very little to say either way. Only three times have I ever directly said, "Do not hire this person. I can't tell you why, but I would expect lots of time spent with HR // disciplinary measures // zero productive work." My name carries trust. I go out of my way not to torpedo someone unless they have a series of behaviors that are disruptive and dangerous. I account for the passage of time. Between them and me, the value of my recommendation comes down to both our names.

 

Maxim 4: Title does not equal mastery.

We've all met that person. They have certifications galore (MCSE, anyone?), or have held a job title for a while. You interview them, and they talk a good game. You hire them on, only to find out they couldn't admin their way through the drive-thru at a McDonald's. It's especially frustrating with technical certs, where (in theory) a level of mastery must be demonstrated to get the certification. If you are old enough, you remember the certification mills of the XP/7 era. These people were trained to take a test, and could then pass the test.

I worked with several at a previous job, where they all had a slew of Microsoft certs and I had an associate's degree in Computer Networking Systems. They "took a chance" on me as I didn't have the credentials the others had. We were all on the same project duty: migrate a series of systems from Win 2K to Win 2K3. The process would take 8-9 hours depending on transfer speeds. Sometimes they would have problems with the process, or didn't understand what to do when basic errors cropped up. They had been there for a month, and I was brought on because they couldn't find a fourth otherwise. Within the first week, I had found ways to increase my productivity so I could finish the process in 6.5-7 hours every day, three of which were just waiting for transfers to complete. The processes weren't difficult. The scripts may have been intermediate to advanced, but the process was rudimentary. One guy quit because it was "too hard." Another left for an opportunity from a buddy. We still finished the project on time.

We chase the titles, as early on they get us past HR. However, without the mastery, that bluff doesn't last. I disagree with talking heads that say certifications are a waste for new Infosec talent, as those talking heads already have the mastery, and it is tied to their name (see maxim 3). They don't need them. Just remember that the certifications are a means, not a goal, on the path to continued excellence (see maxim 2). You can build upon mastery much easier than building upon certs and titles.

 

Maxim 5: Never lash out in emotion.

In Infosec, even when people despise how you "get in the way," you are their rock. You may be a pain in their ass, but there's no reason to worry. Subconsciously, they understand that you carry a burden of knowledge, an awareness of what can really go bad. If you are calm, then everything is all right.

If you, security, the rock they are forced to trust, the one with secret knowledge, are all worked up, how screwed is everyone else?

Think of the reputation of your team (maxim 3 - these reinforce each other; there's a lesson there). Think of the expectations that come with that. Think of what security means to everyone. Do you think we're allowed to be just people? That we can have bad days? Imagine the CIO running down the hall grunting. What would you think if the CEO was walking around with slumped shoulders? Imagine your boss, or your CFO, screaming at people. What crosses your mind? How much does your foundation shake?

The fear of a kaboom is only one side of the coin, and both sides are bad. Emotions are about control (see young Spock in the 2009 Star Trek). An adversary, even (and more likely) one working for the same institution, will work to get you to pop off. When they do, they exhibit control in a situation where you can't. To anyone else, who shows better that they can handle whatever the argument is about? Who is better equipped to handle the strain of what needs to be done? Who is more likely to rupture, cause an incident, or walk out? People are going to test you. In Infosec, we carry one of the greatest burdens of performance of any role in the institution. That's the price of the role. Don't let emotions taint that burden, and how people see you carry it (#3, yet again).

 

Maxim 6: At some point, you will lose.

Corollary: You can be absolutely right and still lose. Be prepared.

Axiom: Just because you lose doesn't mean you have to like it.

 

Tell me if this sounds familiar. You have an obvious gap in your institution's security. Maybe it's a vulnerability (unsupported .NET for legacy apps), maybe it's a capability gap (not logging relevant Windows events). There is an obvious fix that takes time, money, or training. The damage that can come from this security risk is quantifiable. It may be widely exploited. You make a solid case why you need X to fix Y, as an issue with Y will cost $Z. This can't be refuted, and everyone accepts it as both truth and fact.

Then the decision makers say no. They're willing to accept the risk rather than create a new app. Their financial priorities place new office furniture above the training to fix an issue. Or worse, they won't spend the money on a new capability, because an existing tool claims it can do the job (albeit with the need for several custom virtual machines).

And you are left wondering how someone so dumb is higher up the food chain than you.

Most of the time, this is your bias getting in the way. As techs, we don't (usually) see the operating budget as a whole. We don't know revenue streams. We don't see risks outside of our own. We don't have to deal with the wants of external customers. We don't see the choices they have to make. They can be ignorant or self-serving. My belief in humanity tells me they are more likely dealing with the world as it is (#2), and that they understand the value of their name (#3) and wouldn't willfully act against it.

 

Think about your rules as an Infosec professional. I still have (currently) seven to share. Stay tuned.

Codes of Conduct at conferences make me angry. They make me angry the same way being warned that this coffee is served hot, or not to use a chainsaw on my genitals, makes me angry. These exist because somewhere, a grown human being did something to warrant them. Perhaps it is my work environment, or the people with whom I choose to spend my time, but I have worked hard to make sure I am not spending time with people who need to be told that peanut butter contains peanuts. I do not like that, as an attendee, I am impugned by default simply for attending.

The part I really hate? They're needed, and there should be one for the staff as well (Captain Crunch, and those who kept boys away from him instead of dealing with the issue, for example).

In life, individuals should have their own code of conduct. The idea is to regulate your own behavior based on the environment in which you exist. This harkens back to simple ideas like putting "Here Be Dragons" on a map. Depending on who you are, your code of conduct may say to stay away from physical threats, or to train to be better able to face them. A baker's may contain a maxim about early to bed and early to rise, as the goods need to be fresh when people wake up. A politician may (but generally doesn't) treat every mic as hot, and assume that what they say around recording equipment will be broadcast and transmitted. It is no different in Information Security.

As my career has evolved, I have - so far - built up a list of eleven maxims that apply to a career in Infosec. These eleven maxims, in structure akin to Gibbs' Rules on NCIS, have guided me through my career, and kept a light on in dark places where all other lights go out (audit check-box security). Everyone should have their own set of rules that applies to their life and their work. As one thinks of them, they should be written down. I've developed these over the course of a decade. If I thought about it, I'd probably have more, but they cover wide areas, and generally apply to life as well as Infosec.

Maxim 1: Give your adversary every opportunity to make a mistake.

I came up with this idea whilst spending leisure time years ago playing a certain collectible card game. In this game, each color of card lent itself to a specific strategy. One of the most popular focused on control, and took a very different understanding of the game. The most common way to defeat someone is to reduce their life total from 20 to 0. Some tried to do this as fast as possible; some tried to do this by surviving to the mid-game and playing a nigh-unstoppable strategy. The control player took a very different tack. They would let an opponent exhaust their resources over the course of a long game. The opponent's strategy would become clear early on, and the control player just had to survive. They knew that an opponent could blast them for 19 in one shot, so long as their life total didn't go to zero. The difference between 20 and 1 was negligible compared to the difference between 1 and 0. The opponent understood the nature of the control player's strategy, but the factor of the unknown always stood in the way, and in a long enough game that ultimately led to mistakes. It was the job of the control player to capitalize on each and every one of these. If the control player ended the game at 1 and the opponent at 0, the control player still won.

The same is true in Infosec. The difference between Reconnaissance and Command and Control is negligible compared to the difference between Command and Control and Actions on Objectives. Up until an adversary starts doing what they intended to do, they can still be caught, and any damage is a learning experience. Much like that collectible card game, the adversary has a limited bag of tricks, based on the bias of their own experiences. If an adversary gets stopped trying to send in a spearphishing e-mail, the odds are strong that they will try again. If an adversary runs an nmap scan to see what's accessible from the system they now control, once they move to a neighboring system they will likely do the same thing, rather than check the registry for the RDP targets the usual user of that account engages regularly. Does an adversary pull credentials from active memory, or crack the SAM offline (turn on LAPS, please)? Some have a wide skill set and tool set, but that variety can itself be an indicator. Institutional defenders should have solid visibility into their networks to be able to see these anomalies. Whether you stop them at the Delivery phase by blocking the e-mail or having the user report it as a phish, or you prevent the compromised system from downloading the malware or attacker toolset from Command and Control, you still win the engagement. An adversary need only trip up once, so long as you are ready to capitalize on that mistake.

Maxim 2: We deal with the world as it is.

Corollary: We work to create the world we want.

One of the hardest parts of being poor is explaining to your kids why someone else has something you can't have: vacations, a new car, designer clothes, or the latest iPhone while you have an old LG. Most people fall into the trap of whining about how it is unfair, concluding there is no point in trying to compete in a world where the scales are so far tipped against you. In doing so, they make a myriad of mistakes. First, they measure themselves against an impossible standard. You can't compare outcomes when the starting positions are different. Fair or not, the mindset should be about making one's situation better, living better than one did the previous day, not benchmarking oneself against others. Second, they automatically assume the one against whom they benchmark themselves didn't make sacrifices (wise or otherwise) to be in the situation they are in, i.e., how deep in debt do they have to be to maintain that lifestyle? Third, people take on a nihilistic approach: I can't get to where that person is unless I win the lottery or a miracle happens, so I won't work to make the incremental changes that will improve my situation over time. Daddy, I want an Oompa Loompa now!

In Infosec, one of the hardest things to do is go to conferences or events, network with peers, and hear that they have their own internal pen test squads, that they don't outsource code reviews, and so on. What kind of resources do they have in play? Even better, listen to how leadership tries to benchmark against industry peers from a purely spending standpoint without looking at a capability standpoint. I remember working for an ICS company where the budget for IT was baselined against their top competitor. They only spend 3% on IT, so we only spend 3%. That was the only metric. The maturity of IT, and what they defined as IT, wasn't even a factor. They may have been comparing apples to apples, but more likely apples to rutabagas, or potentially apples to oil filters.

The right thing to do is to measure where you are now, decide where you want to be, and figure out how to get there. Build a plan based on where you need to be and the resources available; don't push management based on what Google has.

---Aside---

I took the SANS Threat Intel class last year. In that class, it was mentioned that a best practice is to take a senior, mid-level, and junior team member from the SOC and IR to work as part of a team doing threat intel for a time, then rotate in another senior, mid-level, and junior to give fresh perspectives and everyone a shot - all while having enough people left to run the SOC and IR functions. With the exception of the guy from Google in the class, everyone had this glazed look: they don't have that many people in Security, much less in varied disciplines with a rotational capability. People were measuring themselves against the resources the instructor had at his day job (a well-known, very large Silicon Valley firm), and judged themselves (incorrectly) as wanting. Apples to oil filters.

---End Aside---

When benchmarking against these other companies, we don't see the differences. Are we established while they are new, with no controls and flush with VC money? Are they beholden to one or two investors who demand a certain image, or who insist they work in an area of expensive real estate like San Francisco? Are they blowing their budget on marketing without investing internally? (Google PSINet, their stadium naming rights, and their Super Bowl ad.) Remember: just because they're trying to make us think they're holding four aces doesn't mean we're not playing chess. A great hand in their game can be worthless to us.

Nihilism is a danger to an Infosec professional. Our education can easily take us past the capabilities of our controls, and much like a kid who understands calculus being forced to sit in an advanced algebra class, we can lose interest and become stunted. This is where personal responsibility comes in. The goal should be to maximize the capability of the current controls while continually educating yourself, so you can justify the better controls and articulate how they will be of value. Like the student stuck in class, we shouldn't fall into the trap of thinking we can only learn and experiment on company time. Yes, your employer should invest time and money into your education. So should you.

Nine more maxims to go. To be continued ...

Infosec_Samurai

Infosec, and life, is ultimately based on one principle: personal responsibility. This principle is the cornerstone of all aspects of successful, sentient life. Everything that happens that is successful comes down to someone taking personal responsibility for something. Is the network secure? Someone took responsibility to build a perimeter. Someone took responsibility to tune the firewall rules. Someone took responsibility to set up logging, build an asset list, define priority systems, do user education, configure e-mail protection, set up A/V and EDR, set up whitelisting, and – most importantly of all – tune it all to the environment. In Infosec, we carry the burden of everyone's responsibility, as our behavior and education and engagement spread out to everyone else. Ultimately, we are responsible for what happens on our networks, no matter who clicks on what. Every time we take responsibility to answer a question, tune a rule, or check on a reported phish, we demonstrate our willingness to put in the effort, and we make the institution we defend incrementally safer.

Personal responsibility begets ethics. It begets a code of behavior. More importantly, it shows a pattern of behavior and a standard. Good leaders notice. Users who care notice (everyone cares to some degree). Over time, one or more of the following will happen:

  • Others will start holding themselves to your standard, lest they look bad. You become patient zero for an improvement in culture.
  • People become more forgiving. If you make an error or forget something once, people won't bring the hammer down on you. They recognize you are human, and see it as the outlier, not the trend.
  • Leadership clearly identifies your value and invests more in your compensation and training to keep you around as long as possible.
  • You find out leadership and the users don't care after all, but this clears up any imposter syndrome you have, and you can put together a clear, concise resume full of measurable wins and move on to a better job. If you can demonstrate measurable value, good companies will extend an offer.

Understanding the nature of personal responsibility in people's lives, the principle of working to change what one can for the better instead of whining about unfair disadvantages and unequal outcomes, is a lot like taking a HUMINT course, or really learning about nutrition and calories: you can't unlearn it. It will color every interaction you see, and every choice you make. It is Neo's red pill. When Cypher understood the horrors of the real world, he wanted to go back. The laws of nature say it's impossible.

Sometimes a coincidence is a coincidence. The other day on my way home, I was thinking about food, and I took the personal responsibility to skip the fast food and go to the grocery store. I then skipped the junk and loaded up on produce and meat. As I approached the checkout line, I observed a situation that I can't help but view through this frame: the police and the store manager were dealing with an elderly man who had been abusing the staff. I don't know what his life is like. What I do know is that some of the staff were afraid of him, and I don't believe his abuse was warranted. He made a choice to take his issues out on the staff, and he was banned from every one of the chain's stores in the state. He thought it was unfair, and he made a stink about it. The parallels between this and security professionals who abuse their users are all too common. They call their users stupid. They take punitive actions against uneducated users. They rail against the decisions of the business and those who make those decisions. Then they get fired. And it's the shitty company. It's the whiny users. It's the underinvestment in technology. It's everything except their own behavior. Even worse is when that behavior isn't addressed until someone goes to HR. Management is then forced to find a replacement, and the bad blood toward security has been left to sit that much longer.

Even when we deal with environments like that, our good work puts a shine on the most important asset we have. Our name. And everything that our name carries with it. In bad environments especially, take the responsibility to make yourself stand out by contrast. It will be noticed.

 

@infosec_samurai