The concept of trust is foundational in InfoSec. You give a user access, and you expect that access to be used in the designated way. You give an accountant the trust to dispense money in accordance with, and only for, the business need. You give your kids the car keys or let them stay home alone, trusting that the car comes back in one piece and the house is clean of party remnants. If the user misuses the system, the accountant embezzles money, or the kids damage the car or house, privileges (or jobs) can be revoked.

Thus the concept of forced trust. You want a job with the Federal government? You fill out very detailed forms, and you have no choice but to turn over that data. Your employer has to have your W-2 information for payroll. You want to stay connected with your family? That may mean you need a Facebook account. Even with all the privacy settings enabled, the data gets slurped, in ways you may realize but most people don't.

To be a part of the world, in so many ways you are forced to trust entities that have already, maybe even repeatedly, proved they aren't worthy of that trust.




Every major hotel chain (Marriott 2018, Hilton 2017, Hyatt 2015, Starwood 2015). Online retailers. Brick-and-mortar retailers.


Sure, there's a fix. Never submit identifying information. Only use cash. Drive older cars. Only use prepaid cellular, and only turn it on and call from the same place.

How practical is any of that? Even monks in monasteries are online. So what can you or anyone do?

Humble Bundle prompted this post. The good news is that only those with a Humble subscription, not regular users, were affected. The reports show the adversaries got e-mail addresses, and that those e-mails were tied to subscriptions. These can be leveraged for phishing attacks, or for spam from other game services.

I purchase the monthly bundle on occasion. My protections for this and other online retail are fairly simple. Anything that isn't primary to my life is tied to a secondary e-mail account and a secondary account for my money. I move money in to pay, and I'll happily take the monthly account fee to avoid a minimum balance. A low-limit credit card fits this bill nicely. Any compromise will send spam and phishing to the secondary e-mail account. If something goes horribly wrong, it's easy to burn that account and spin up a new one. A password manager prevents reuse attacks. And if something slips through the e-mail provider's BS detector, I know not to click the link and to just log in at the site directly. Any reputable service will have alert notices clearly visible right after login. I know people who use less common browsers (e.g., Opera) for transactions on banking and healthcare sites, knowing those browsers are less likely to be targeted for exploitation there. Obscurity is not security, but obscurity can augment security.

We live in a world where forced trust is constantly betrayed. Even if Facebook is broken in half, other services will fill the void. They too will betray you (whether or not members of their board Lean In). The best anyone can do is understand their personal threat model: what do they have that would hurt to lose, how can they reduce the risk of that loss, and how can they prepare to carry on when that loss happens anyway. We are in the Matrix; there's no more getting out. There is simply dealing with the world as it is.

"You lost today, kid. That doesn't mean you have to like it."

-- The man who gave Indiana Jones his hat.




This is part four in the series on personal codes of conduct. These are my maxims, my personal guiding philosophic code.

Part 1

Part 2

Part 3

Maxim 10: People aren't dumb. They are illogical.

Dumb users may be the foundational trope of IT, doubly so in Infosec. I remember the early days of the Bastard Operator from Hell (reference point: go to and search BOFH). There were other, non-satirical blogs like it, of admins having their days ruined by inane requests from users. If you are old enough, you remember the stories from the birth of the now-extinct CD-ROM drive, when there was always that one user who thought the tray was a coffee cup holder. To be fair, by the modern definition, the coffee cup wizard is a hacker: they found an undesigned use for their hardware. We mocked them; we should have praised their ingenuity. Thus my maxim.

Users aren’t dumb. They are illogical.

End users are trained to do their processes. Most jobs in offices today are designed to be done in near-assembly-line style. A user has a very defined set of duties. They are trained on that set of duties. They practice those duties every time they do them. The procedure is logical for them. That logic exists within the bias of their experience. Most jobs do not require – nor do they want – people who think outside of the box. This is the complete antithesis of IT and Infosec – we follow processes but are constantly put into situations where the box doesn't exist, and we must solve the problem // track the adversary // stop the malware // fix the issue RIGHT NOW. This requires us to be agile in thought while still following a logical progression. To be in the IT // Infosec space, you need the ability to be logical. Troubleshooting is applying logic to a problem. The nature of our jobs requires us to apply logic to any situation that comes up, as many inevitably have nothing to do with our systems.

In the modern day Everything as a Service society where most people outsource their needs to third parties, the need to be able to solve problems logically is no longer a necessity. It gets outsourced. Thus, when people need to be logical in an unfamiliar environment, they get frustrated.


subject inexperience × emotional escalation × attention at that moment = disproportionate response (blowback)

You must understand the logical approach to dealing with an illogical person; then you can mitigate any unpleasant response. If you can minimize the attention on them at that moment, calm the situation through the liberal use of patience, and use it as a teaching moment, you minimize all three factors leading to blowback. The biggest part of this is knowing that they will be illogical with their next tech and security issue, and the next, and the next. On a long enough timeline, with enough interaction, they will start understanding the logic. And helping them get there gains you an ally.


Maxim 11: Words matter.

“If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him.” -- Cardinal Richelieu, in The Three Musketeers.

Words change the path of the world. Look at what a tweet from the commander in chief can do. Words can be used to change a mood, challenge an assertion, even save a life or sentence a human being to death. Words can build allies or create enemies.

We live in a world where people want to be offended, forgiveness is conditional, and even (especially) the mightiest of Infosec heads look for reasons to crucify people based on their personal orthodoxy. For all the talk of helping people up, they also file away every printed word to use against those same people someday.

In the modern world, our words are eternal – every tweet a testament, every comment an epitaph. In this world, you must be skillful with your words as if they will be carved in stone forever. You need to respect the damage they can do to you as well as to others. Your words will be used against you by your adversaries and enemies.

Be direct. Use exacting language. Understand how to communicate for your medium. If you use Twitter, reference an idea, then link to a blog post expanding on it. Review written words before publishing. Think on any e-mail before hitting send. Think about the potential recipients (even ones not in the To field or among your Twitter followers) as your message is shared, and how they will choose to interpret the words from within their bias bubble.

It sounds like a lot of work. It is. It doesn’t matter if we don’t like the world being that way, we deal with the world as it is (Maxim 2).

Corollary: Passive-aggressive statements are a sign of weakness. Those who deliver them demonstrate a cowardice to take responsibility and challenge something directly, most likely because they know their challenge will not stand up to logical scrutiny. These statements are most often used when logical truth is at odds with emotional (childish) desires.

Do Not Be Passive-Aggressive.

Maxim 12: Take care of the people who take care of you.

I am fortunate. Not (primarily) because I have slogged through the mud to an amazing job with the best benefits I have ever seen. Having slogged through thankless jobs, I am very appreciative of those who enable me to spend more time doing what I need to be doing, rather than on the little day-to-day housekeeping time sinks that whittle away my day. I once thought the idea of the rich person with the butler was snooty. Having been support staff, I now see the value of staff giving time back into my day. Lunch is provided, so I don't have to spend time making it before work. People keep the office and restrooms clean, so I don't have to. I have an HR department that is constantly challenging our benefit providers to do better, so I don't have to shop around myself. I have a bona fide excellent IT support team who make sure I never have to engage with angry users.

I also realize these people get paid far, far less than I do.

It starts by acknowledging that they are not an invisible service provider. They have names. Most have families. They aspire. At times they have unpleasant jobs. I want them to feel valued; I want them to feel appreciated. I want them to achieve and do well in life. I want them to be part of a culture of success.

And to do that, I give my time. When they have questions about personal security, I will take the time to do a security review and let them know what options are available. Are they in a branch of IT, looking at positions on the horizon? I work with them on what they should train in to build the skillset for pending internal positions that would be a promotion. I help them find the conferences and knowledge bases they don't know exist. Sometimes it's just hallway talk about how badly the Buccaneers are doing this year (after a too-promising 2-0 start with a backup), or how the slightly better Packers could be in a worse position.

Taking the time is more than just being human, it’s pragmatic. Because that day will come when I need them to take the time for a reason I can’t imagine, when they wouldn’t have to. I won’t even have to ask.

Across seas of monsters and forests of demons we traveled. Praise be to Allah, the Merciful and Compassionate. May His blessing be upon pagan men who loved other Gods, who shared their food, and shed their blood. That His servant, Ahmed Ibn Fahdlan, might become a man, and a useful servant of God.

-- Ahmed Ibn Fahdlan Ibn Al Abbas Ibn Rashid Ibn Hamad, closing lines of The 13th Warrior.




I have a folder in my e-mail where I save the CFP rejection notices I have received, from the conferences that send those notices. When these rejection notices come in, they always come with platitudes such as 'thank you for submitting' and 'please submit next year.' They never say 'your submission was awful' or 'please don't contact us again.' They come with zero constructive feedback. If you talk to people on a selection committee, they will offer some variation on the following lessons people can learn from the process:

  1. Try submitting the talk again at other conferences.
  2. Try again next year.

These are complete falsehoods.

Submitting to other conferences may be a waste. Selection committees are inbred. They are made of high-profile Infosec people and conference insiders. There are not a lot of these in a region; ergo, they get reused. If you are rejected from giving a talk in Indy and you submit to a conference in Chicago // Louisville // Grand Rapids, you may very well be rejected again by some of the same people who told you to submit it elsewhere.

Why would you try again next year? In the world of Infosec, where things change daily, if the talk wasn't up to snuff this year, how will it be more relevant after a year of incremental changes? Doing this is a waste of time.

As always Maxim #2 applies (We deal with the world as it is - we don't pretend it's the way we think it should be). In light of that, here are the real lessons I have learned from the CFP process, and my several rejections.


Make a decision - do you want to speak on this topic, or do you want to speak at this conference?

Understand this: you may have a topic you think is of value, and you may have a conference where you'd like to speak, and they may not go together. Almost every talk can fit the base design of some conference - there are dozens, if not hundreds, in the US alone. But most conferences have a very specific template. Look at the webpages from that conference's past years, and see the talks and abstracts they publish. Does your topic line up with these? If your goal is to present at a conference, versus give a talk on a specific topic, look at the past talks to find what they like to have presented at their conference (and it is THEIR conference, despite any claim about being part of, or welcoming to, the community - internalize that). Find something in that vein and present it. Do they take talks about threat hunting? Find a hunting topic that hasn't been done, such as hunting with Outlook registry artifacts or hunting through Mac system logs. Learn a topic they'd like that no one has presented, become an expert, and submit that. That may mean waiting until next year, but if the goal is to present, put yourself in a position to do that. If you want to present on the topic, you may have to widen your search, and expect to travel.


Every conference has a clear template about what presentations they accept. They are the presentations from previous years.

This seems like common sense, but it is never really preached. People mention it occasionally, but it is the ultimate canon on what a conference wants. You have a library of what talks they want, how they should be titled, what the abstract should look like, and, most importantly, what kind of people they want presenting (this last one is the unspoken dirty little secret - conferences are run by people with agendas, remember). Everything from the headshot to the name to the title to the bio is laid out in a nice order. Review these over a long enough timeline and you will see a pattern. Build to fit the pattern they want. This increases your chance of selection.


Don't punch above your weight.

Some conferences, through the patterns explained above, don't want new people or unknowns. They tie the prestige of the conference to the speakers who present. When a conference publishes a partial list of speakers before the stated date of selection, they are demonstrating their prestige. Each of these speakers will have some list of notable accomplishments or previous speaking engagements which give the conference weight, and explain what they are looking for.

Like every rule, there is an exception. There is nearly always a magical little checkbox at these conferences that (when most politically correct) says 'check here if you are a member of an underrepresented group.' Understand that in modern parlance, that means not a white male. As a white male, I have very strong feelings about this, for reasons you wouldn't expect (and some you would). But the truth (Maxim 2) is that if you are not a white male, use this to your advantage. Conferences want people who aren't white males (for reasons ranging from pure to sexist//racist, depending on the conference - not everyone is on the side of the angels). Make use of the opportunity. Understand this doesn't mean (at most conferences) that a sub-par talk will be accepted. What it means is you win tiebreakers. The conference will pick out the big names and the talks they clearly want. If yours is a talk they want, and you aren't up against an Infosec name, and you followed the submission guidelines (people don't - conference organizers whine about this every year), your competition is whittled down to any similar talk presented by either another underrepresented speaker or an insider who knows someone on the selection committee - and the checkbox can beat even that. Understand this: there is no shame in using the available advantages. It is your future and your resume - don't hold yourself back.


Sometimes, the only winning move is not to play.

If you read this as a defeatist attitude, you have already missed the point. As the old woman in The 13th Warrior told Buliwyf, perhaps you've been fighting in the wrong field. If your goal is to get information out there, but you don't think you can get past the selection committees for whatever reason, you have options. Write a blog. Do a podcast. Post a video on YouTube. Create the content with your own personal spin, and use that to build your personal brand. Demonstrate value. Connect with like-minded people. Share content. Do this, improve your skills at presenting information (in any format), and build a history of useful content, and you become a name the conferences want, you build bridges to people on the selection committees, or you may be brave enough to put in the time to start a conference covering those uncovered topics.

Here's the dirty little secret of meritocracy, and an example of where even the most beneficial and fair system breaks down. When you accomplish something that lets you connect with the people running these conferences, you are in a position to make better connections and gain access to research others don't have, making it easier to get the better jobs, or access to information on topics you'd like to research, that you can then present - creating a continuous cycle. It takes a lot more effort to get into the eye of the storm than it does to stay there. And when others see people make that same journey, they will work to insulate that group. It's the nature of tribalism, which has existed since the dawn of mankind and will never go away. Understand it. Accept it. Make use of it.

Ultimately, decide what you want. Take the time to learn the real rules of engagement, then play to win.


Give your adversary every opportunity to make a mistake.

This is my first maxim of Information Security. It is my keystone. We hear variations on it: an adversary only needs to be right once to get in, but only needs to be wrong once to be discovered. APT1 had behaviors that let Mandiant track them to Shanghai, to a building tied to PLA Unit 61398. CrowdStrike reviewed the DNC hack and discerned that two separate Russian intelligence bureaus had hacked into the system, each without realizing the other was there. Guccifer 2.0 forgot to turn on his VPN just once before going onto Twitter, and his location was tagged to a building in Moscow tied to an intelligence directorate. Stuxnet was traced back to the NSA, Duqu to the Israelis. The best of the best make mistakes. This leads to a corollary to my first maxim: on a long enough timeline, everyone makes a mistake.

Here's a story that was shared with me by a good friend in the industry. It is missing relevant details out of respect for my friend. Some details have been changed. The processes, trail, and TTPs are accurate. Apologies to Dick Wolf.

An adversary (henceforth identified as Beetroot) was intent on committing fraud. Beetroot would accomplish this by pretending to be an American company that helps foreign businesses get loans to establish a presence inside the United States. That presence would help them register with the IRS and get an Employer Identification Number. Beetroot claimed to be able to facilitate the paperwork, the line of credit with an American bank, and contacts in the United States, allowing the foreign business access to the lucrative American markets - for a moderate-to-large fee plus a revenue-sharing percentage over some amount of time. Beetroot claimed he could do this because he was a university professor with access to Masters and PhD candidates who would do the work for research credit. Beetroot would reach out to targets by utilizing Search Engine Optimization (SEO) on popular foreign search engines (Yandex or Baidu, for example).

Beetroot had been running this scam for a long time. Because he didn't target American citizens or businesses, no one domestically took any notice. His fees were small enough that foreign governments wouldn't go through the hassle of dealing with the US State Department to apprehend Beetroot or retrieve the money. Beetroot was safe.

Beetroot would do some brand impersonation on a website. One of the brands he impersonated found out and had his site taken down. Beetroot spun up another site and impersonated someone else.

Later on, Beetroot spun up another site, with a domain name very similar to one that had previously been used against my friend's school. Once again, my friend's educational institution (a collegiate business school in the greater Midwest) found the site and worked to take it down. My friend came to me and asked me to take a look at what he had. We worked at different shops but were both contracting through the same firm, so NDAs were easy to handle.

(I reread that NDA four times before hitting publish. This births a new maxim: do not mess with an NDA.)

Beetroot used servers in Eastern Europe. Beetroot used whois privacy guard. Beetroot used publicly available information from any search engine to do the impersonation. Beetroot had no digital footprint of any kind in the US. There wasn't much to go on. Except Beetroot went back to the well and impersonated the same school twice (mistake #1).

This time Beetroot's tradecraft was nearly flawless. But since the attack was virtually identical in every way (what he did, how he did it, who he targeted, where the targets lived), one could say with moderate confidence that it was the same adversary. So the focus of the investigation became the original impersonation website.

Both websites were a variation on the school's URL acronym, but at .com instead of .edu (many schools, even business schools, don't register the .com - poor brand defense). But on the original one, Beetroot made one slip. At one point he switched registrars. Maybe he was being cheap, maybe he had a deal, maybe he liked the local geolocation better. But the day he switched, he forgot to check the box for whois privacy (mistake #2). For one day the full whois record was public, and passive DNS captured it in perpetuity. There was no name, but there was an e-mail address and a street address. Tied to the registration date, we had behaviors tied to an indicator - we had pivot points. The e-mail address turned up three more websites impersonating Australian and New Zealand schools with business and law departments specializing in South Pacific maritime law, offering to (for a fee) set up businesses in regional countries to deal with shipping laws. Same scam, different business model (mistake #3).
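That kind of pivot can be sketched as a simple fixed-point expansion over shared whois fields: start from one indicator value (here, an e-mail address) and keep pulling in any record that shares a value you've already linked. Everything below - the record layout, the domains, the addresses - is a hypothetical stand-in for what a whois-history service would return, not the actual data or tooling from this investigation. A minimal sketch of the technique:

```python
# Hypothetical whois-history records (stand-ins, not real data).
RECORDS = [
    {"domain": "school-biz-llp.com",  "email": "beet@example.net", "address": "12 Rural Rd"},
    {"domain": "nz-maritime-law.com", "email": "beet@example.net", "address": "12 Rural Rd"},
    {"domain": "au-shipping-llp.com", "email": "root@example.org", "address": "12 Rural Rd"},
    {"domain": "unrelated-site.com",  "email": "bob@example.com",  "address": "9 Elm St"},
]

def pivot(seed_value, records, fields=("email", "address")):
    """Expand from one indicator value to every record reachable
    through any chain of shared field values (fixed-point iteration)."""
    seen_values = {seed_value}
    found_domains = set()
    changed = True
    while changed:
        changed = False
        for rec in records:
            if rec["domain"] in found_domains:
                continue
            # Link the record if any of its pivot fields is already known.
            if any(rec[f] in seen_values for f in fields):
                found_domains.add(rec["domain"])
                seen_values.update(rec[f] for f in fields)
                changed = True
    return found_domains

linked = pivot("beet@example.net", RECORDS)
```

Seeded with the single exposed e-mail address, the walk links the two domains registered with it directly and a third through the shared street address, while the unrelated domain stays out. The real work is the same shape, just with a commercial whois-history or passive DNS source behind it.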

The street address was diamond-studded, 24-carat, platinum-plated solid gold. Over 40 websites with 15 different e-mail addresses were tied to that address. All 40 sites were hosted on one of three different Middle Eastern bulletproof hosts. At each host, all the sites lived on a /30 subnet. Every single site used the same web server; the differences were version numbers, and the versions tracked to when the sites were spun up. There were more sites on those subnets, and they led to a few more e-mail addresses, which led to a few more sites (mistakes #4 through #1329542). These took the timeline up to the point when Beetroot finally figured he should privacy-guard everything. There were tons of pivot points to investigate, spoofing tons of other schools in English-speaking countries.
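That /30 clustering is trivial to check mechanically once you have domain-to-IP resolutions. The pairs below are invented placeholders (documentation IP ranges, not Beetroot's infrastructure); the sketch just shows how Python's `ipaddress` module buckets hosts into their enclosing /30 networks so that crowded buckets pop out as pivot points:

```python
import ipaddress
from collections import defaultdict

# Hypothetical domain -> resolved IP pairs (placeholders, not real data).
RESOLUTIONS = {
    "fake-school-a.com": "203.0.113.5",
    "fake-school-b.com": "203.0.113.6",
    "fake-law-llp.com": "203.0.113.4",
    "innocent-site.com": "198.51.100.77",
}

def cluster_by_subnet(resolutions, prefix=30):
    """Group domains by the /prefix network containing their resolved IP."""
    buckets = defaultdict(list)
    for domain, ip in resolutions.items():
        # strict=False lets a host address stand in for its network.
        net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
        buckets[net].append(domain)
    return buckets

clusters = cluster_by_subnet(RESOLUTIONS)
# Any /30 holding more than one domain is worth a closer look.
crowded = [sorted(doms) for doms in clusters.values() if len(doms) > 1]
```

A /30 only holds four addresses, so several unrelated-looking domains landing in the same bucket is exactly the kind of co-location that ties infrastructure to one operator.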

That wasn't all. Looking at the original site that spawned the investigation, one line of text stood out. It looked like a sentence that had been run through Google Translate into another language and back into English. The original line wasn't hard to guess, and when run through Translate into Russian and back into English, it produced the distinctive sentence. We ran a Google search on that sentence and got three hits. One website didn't exist anymore; the other two did, and they were near carbon copies of the original website my friend investigated. Those two were privacy-guarded, but they ran the same web server, had the same web structure, and operated on a subnet that tied to an early DNS record for the original imposter site (mistake X). But the defunct website was a diamond the size of a softball.

The original site was <university acronym>.<general university-biz word dash LLP>.com. It contained multiple subdomains for all the business types Beetroot would spoof. Whois wasn't private, the address nearly lined up with the earlier one (one digit was off), and the registrant listed a phone number with the area code and local prefix of the city and state in the whois. Later in the whois history, Beetroot switched to a Google Voice number, which used geolocation to give him a number with the same area code and prefix. The registration date put this as the first site spun up. A web archive view of the site showed a very rough draft of some of the impersonating sites.

The cherry on top: Google Earth. Street addresses should tie to a latitude // longitude. Beetroot's address was in the middle of nowhere; Google Earth showed an empty field of tall grass. We went down the road in both directions and found that the addresses on the few mailboxes didn't line up with Google Earth. So we clicked down the road to a small house surrounded by fields for hundreds of yards. Its address marker matched the original address discovered from whois. The small house had multiple satellite dishes (like one would have for Dish or DirecTV), which would make sense for middle-of-nowhere internet. And the smile on the Mona Lisa? We spun Google Earth around, and someone had paid the money to put an internet junction box, like you see in suburbs, right across the street from this house in the middle of nowhere. There were still signs of a fresh trench dig and fill from there toward the highway, and a fresh strip of asphalt running from it across the street to what I assessed, with high confidence based on everything together, to be Beetroot's house.

From a Threat Intel standpoint, this was unbelievable. It was the Deathly Hallows, the Lost Ark, even the alien from Area 51. We had tradecraft. We had a full timeline from start to current. We had targets. We had consistent TTPs stretching over years. And we had Beetroot's home.

We imagined that's what it felt like when the Mandiant researcher stood outside the office building in Shanghai and took that picture.

Beetroot represented something that gets zero discussion in most online Infosec circles: the Persistent Threat. We hear about Advanced Persistent Threats all the time, and we hear about script kiddies who wreak havoc with a tool. Beetroot fell in the middle. Beetroot probably started out as one person, then worked with others to make his scam work. Beetroot's skills improved with time. But Beetroot never wiped his slate clean. As his tradecraft got better, he didn't clean up his previous footprints.

Persistent threats carry greater initial technical debt and have much more limited resources. They need to build on previous successes with very limited budgets. Their advantage is that it's harder to defend than to attack, and Beetroot wasn't attacking anyone who had the means to fight back. But the work wasn't lucrative enough to justify rebuilding his old infrastructure, and he likely forgot about it. He diversified, but not enough. He (like most adversaries) had consistent TTPs across his fraud. Lone indicators were a starting point, but the TTPs were obvious from one operation to the next.

We think of the near impossibility of finding APTs without multiple dedicated staff assigned to each Infosec function. And how would one train to challenge such an adversary? Lots of businesses will fall into the targeting reticle of one of the many APTs. But for each of the APTs, there are dozens of persistent threats coming after your networks with weaker tradecraft. You can use these to show successes to leadership. You can use these to sharpen your skills. And you can use the learning experience to better position yourself to catch the advanced threats, who will also make mistakes.

Give your adversary every opportunity to make a mistake. They will. And you will catch them.


Who are you?

That one question defines so much of you. Thinking about the question defines you - specifically, how you think about that question. In Infosec you have to be analytical. Whether you work, or desire to work, at a strategic (leadership), operational (cooperative), or tactical (technical) level, the ability to ask the right questions, and to analyze the questions asked, is part of the job. What are you trying to find out? What will that information get you? Why is getting that information important? What does the person asking the question want to know? What do they need to know? Are they asking for what they need? What questions will your answer prompt? A proper analytic question is the start of a series of multi-order effects, birthed by the questions that spawn from the first one.

By virtue of reading this blog, I'd bet money you have created a profile on at least one social media site, even if only for a short time. If you haven't, you've at least read one profile on social media. The odds that neither is true are smaller than a rounding error. Think of any profile you have read. There is a character limit. Profiles are designed to be small blurbs: succinct and, by their very nature, incomplete. And that is the problem - especially in Infosec.

A moment in time can change a life. A person's most outrageous experience comes down to one single moment. Every social media post, upload, and interaction is at best one moment in time. Sometimes it's the one we want to show the world. Often it is one's weakness, rage, or hate, vile and unfiltered. And, most disturbingly, this is prevalent in Infosec. Even worse, those in Infosec are willing to judge based on one moment. What makes that an egregious sin is that Infosec is supposed to be so analytical. A moment in time is an indicator, and an indicator without adversarial TTPs only shows what happened right at that moment - if that. Investigators who claim to be purely analytical when dealing with a digital indicator will then judge someone worthy of damnation (or termination from whatever job they have) based on a single indicator, and based on a truly perverted sense of absolutist justice.

One of the great moments in the movie High Fidelity is when John Cusack's character explains why Joan Cusack's character came into his shop and referred to him in a very unkind fashion. He lists four pieces of information his ex-girlfriend most likely shared with Joan that painted him in a very unflattering light, then admits to the audience that each of these four horrible things is absolutely true. He goes on to rationalize (minimize) these behaviors. Knowing full well that the audience is judging his character, he looks into the camera and gives the audience a pop quiz: think of the top five all-time worst things you've done to your mate that they don't know about. There is a pause, giving the audience time to think. Then he gives the line of the movie: now who's the fucking asshole?

Infosec rationalizes its bad behavior under the justification that people don't understand the fight we had to get where we are. There is no easy way into this part of technology. We see evil intent and behavior as part of our job, so in comparison our snap judgements, our condemnations, our willingness to hurt (trying to take someone's job away so they can't eat, have shelter, have transportation is a most cruel hurt) shouldn't be held against us - we fight the bad guys. We see a moment in time, and depending on who the perceived slight would hurt, judgement is hurled. The ends (vanquishing evil) justify the means (inflicting harm).

Except we're looking at one point in time. Infosec people would make very bad jurors. Think back to a judgement, whether hurled in a tweet, said behind someone's back, or used to cause harm. Think of the worst, or the most recent. To quote Cusack, now who's the fucking asshole?

I am fortunate. Whether it's my path, my age, having lived life ever on the outside, or likely a combination of the above, I focus on my bias more and more often. I focus on the source of that bias. I focus on how it affects my life. I focus on how it will be viewed by others. My most recurring maxim is Words Matter, and that is continually apropos, moment by moment. My words reflect my bias.

I was taught by individuals, by collective groups, and by my state government that, on the basis of my demographic, I was disposable, and that the world was justified in disposing of me based on the actions of others long dead, or with more resources and power than I will ever have. Therefore, those who cling to victimhood, as if they were special or as if the history of their identity group should grant them favor or recompense, I identify as weak and untrustworthy. Bias.

I have always been on the outside of whatever large groups I wished to belong to. I have seen and experienced the injustice of the mob. I have experienced those in power applying different rules to me than the group because I wasn't part of the group. I see larger groups that won't police themselves as corrupt and incapable of being a voice to justice. People don't ask forgiveness because they are sorry, they ask forgiveness to avoid punishment. Bias.

Like Colm Meaney's character Gene in Layer Cake, I'm too loyal for my own good. Very often I've held up my end of a deal based on a promise - real or strongly implied - that the other side never had any real intention of honoring. A former boss told me that in ten years of reference checks, when asked about my weaknesses, my former managers gave the exact same answer: when he's part of a project or a team and people aren't holding up their end, he won't let it fail. He puts on boots and a cape and saves the day, every single time. That makes him reliable, and difficult to work with. People will abuse my ethic. People will find a way to betray. On a long enough timeline, people will show they can't be trusted. I discard people who betray my trust with great ease. Bias.

Depending on how you read that, your bias shows. Do you see someone who has overcome adversity, understands his responsibility in life to himself and others, and works to keep the team from failing and to preserve earned trust? Or, do you see an angry man who never fit in and won't give people a chance? That's your bias. No matter which you choose, judgement based on three paragraphs shows bias. And if you say you didn't, you're either Detective Columbo or a liar. And Columbo is dead.

And that's the point. Bias seeps into everything. It colors your judgement. I have taken seemingly extreme actions in some factions of life lately. They weren't based on a single indicator, but on people's TTPs (patterns of behavior). I've paid a price for it. That price will collect a recurring fee of opportunities and allies lost for a long time. Those choices were made for the right reasons, even if the outcomes attempt to reinforce my biases.

So who are you? You are far more than a profile or post. You need to understand you. Understand as much of you as you can define, as you can put into thought. Once you can do that, you can start to view that from the other side of the looking glass with Alice. Analyze. Like a good investigator. Like a good communicator. Like a good researcher. Once you've identified your bias, you can work to overcome it. Like a good human being.

Both an infinite collection of moments in time, and their sum total. That's who you are.


My old boss had one ironclad rule when reporting on an alert or incident. Don't think, know. What he meant by this is the need in any investigation to be sure. He ran security at a very large financial organization before joining the institution where he and I met. He had to face breach notices, legal summons, and visits from at least one three letter agency. And in all those dealings, he understood that the gap between 0% and 99% sure was minuscule compared to the gap between 99% and 100% sure.

100% sure is obvious. There is proof. There is evidence. There are logs. All of these combine to paint a complete picture. They leave no doubt, much less reasonable doubt.

99% sure is where the problems occur. Your odds are so overwhelming that you have virtual certainty you are correct. 1% is a rounding error, or a margin of error.

The truth is that 1% is an error. Employees being terminated, adversaries being arrested, even APTs live and die off that one tiny percent. Believe me, when the lawyers get involved, that 1% can save someone from legal action or keep them out of jail. A majority of the time, that 99% will bury far more than 99% of your adversaries. The ones who can navigate that 1% are the ones you should really be worried about.

Enter 'Don't Think, Know.'

We see a system beaconing out to an IP listed in a threat intel report as being part of APT 29's infrastructure, ergo the Russians hacked us. What process spawned the call? What spawned the process? Is the IP a compromised public server the APT used to piggyback as a watering hole attack, and the system is making a normal call to the box? Was an engineer playing with a sample and triggered the call? Has the alert been verified with the source? How recent is the intel? Did EDR flag on anything? Did EPP block the rest of the process? Did the firewall stop the dropper's download? Sweeping declaratory statements are made at the end of an investigative process, not the beginning. In threat intel, an indicator by itself is a starting point at best. The behavior and the chain of events that spawn from that indicator's investigation determine fact. The desire to be right, to fight the good fight and take down the bad guys, can cloud the search for fact. One can think they are right. But if one isn't 100% sure, they may not clearly see that difference between 99% and 100%.

Sometimes it's easy. Someone leaves a digital footprint that only they could leave. Someone makes a blatantly sexist or racist remark in a print medium. Don't assume this is common. And, most important of all, do not project your bias onto it. Projection leads one to disregard evidence that contradicts the thesis.

Accusations have a human cost. People so easily point fingers. This is due to our thirst for answers, and the need for closure to an event. And our desire for retribution. Just look at any twitter mob. If you follow a large enough chunk of Infosec twitter you will see these far too often and they will include people who are incident responders and investigators who should know better.

An accusation is an indicator. Investigators need to take every accusation seriously. But an accusation isn't fact, it's a starting point. When an accusation is leveled that someone has committed fraud, embezzlement, theft, or worse, that accusation needs to be taken seriously. The voice making the accusation can lend a great degree of credibility to it, but by itself is not indisputable proof of wrongdoing.

Less common in Infosec (I hope) but prevalent in the real world (too often) is the ending of an incomplete investigation with a declaratory statement claiming nothing was wrong. No malfeasance happened. At some point in an investigation, it will get hard. An investigator will have to dig in deep and wade through logs. This isn't a quick process. It shouldn't be rushed. Conclusions shouldn't be rushed. Behavior needs to be analyzed. The blank spaces have to be filled in.

When you don't know, look for a way to find out. If it is impossible to find out (e.g. the logs have rotated), an investigator needs to state where the holes in that part of the investigation are. The investigator needs to find a way to corroborate the behavior, not let assumption become fact and move on. When that is not possible, take a cue from Colin Powell.

What do you know?

What don't you know?

What do you think?


These questions, answered as honestly and completely as possible, are what it takes to shrink that 1% down to as small a number as possible.

Any ethical investigator needs to be mindful of the human cost of their work. To do that, they need to be as thorough as possible. Their behavior comes down to one simple credo.

Don't think, know.



The most common question about security jobs is how to get the first one. How does one break into security? Where are the entry level security jobs? I went to school and got a bachelor's in Information Systems Security. Even before leaving school, I started looking for entry level information security jobs. That concept, an entry level information security job, was built on a flawed premise. They don't exist.

My bias is built around that time period - 2008. The market collapsed, unemployment was so high Congress had to vote to extend unemployment benefits and kick out a stimulus check, and no one was hiring. No one. And, with a few notable exceptions, people were hoarding information. Talks were as technical as you could get, and conferences were financially restrictive - especially for those who didn't have jobs. And people were scared. They were so scared they were doing everything in their power to make sure their company couldn't fire them. They hid the keys to the kingdom, and made sure no junior staff could move up and take their positions. Tales of older workers making 3x what junior employees made being laid off, or RIFfed (reductions in force), were daily occurrences. Trust between employees and management was at an all-time low. No matter the company culture, everyone's IT got gutted. That shaped the world today: soaring GDP with stagnant or falling wages; everyone wants contractors, not FTEs; and fewer companies are willing to pay for training for anyone outside their mission critical staff. To a degree some of that is changing, but that change exists primarily in specialized areas.

What are the barriers? First and foremost, you don't know what you don't know. I fight with REST API coding because my coding class was in 2006 and 2007, and it was in Visual Basic. My coworker can puzzle through in less than an hour issues that take me days. I have to hunt through forums of questions to find more questions I didn't know to ask. Even for the veterans, not knowing what we don't know teeters on the edge of crippling. Second, I never had a mentor. There was no guiding hand to show me the way. I was the fat, straight, white guy. No one wanted another one of those in the pipeline. Plus, I was not a drinker, so I was never in the social circles of the people in power. I had to fight for my information, learning akin to strip mining or scorched earth, and there was no forgiveness for mistakes. I moved around a lot. Those who had mentors were guided through pitfalls with ease, and taught how to learn as well as what to learn. Third, the career path was not defined. Listen to any faux humble "I'd never use the phrase thought leader" types, and they talk about a career path utopia where certs are pointless and they'd take a skill set over formal education any day of the week. Next time you see this, look at the background. I would bet a steak dinner that they are A) ex-military, B) worked for a Federal Government three letter acronym, or C) both. The most notorious of these people went military to NSA - and yes, that's more than one person. So, unless you are 18 with high technical skills about to join the military, most of their career advice is for naught. This fog completely obscures any vision of entry level security.

There is one thing you need to know, above all else. Burn this into your brain in large flaming letters.

There are no entry level security jobs.
People will try to argue that. To do so violates one of my most important maxims: words matter. You can't approach that statement without trying to change the meaning of words. People online do. They then violate another maxim: deal with the world as it is. It's like a triad - pick two. You can have an entry level job, you can have a security job, or you can do entry level security work. Entry level jobs don't carry the level of responsibility that security jobs have. Entry level security work is not something people pay for, given the risk involved. Security jobs require a degree of expertise that far exceeds anything we think of as entry level.

Starting points in security depend on your background. Security analysts who work in SOC (Security Operations Center) environments have backgrounds looking at operating systems or network traffic, or both. They take expertise from a previous life as a sysadmin or network admin and parlay it into looking through alerts for outliers in data transmission or deltas (differences) in configurations. SecDevOps were DevOps people who learned to secure and bugfix their code, and the code on their systems. Network admins become firewall admins. Though I am loath to make the comparison, switching from one of the early IT jobs to security is akin to the evolution of a Pokémon, Abra to Kadabra to Alakazam. You can't move up until you've gotten a firm grasp on the previous level (without potentially crashing your career).

Deal with the three hurdles. First, all you need is a concept. Do you want to secure a network? Secure Windows//Mac//Linux operating systems? Attack networks? Build secure code? Start simply by googling that concept. There are numerous and extensive papers, articles, podcasts, and videos on nearly every subject. Or, even better, search twitter. You will find many a person who tweets and writes about these concepts, and those who retweet the people who do. In doing so, you will clear hurdle one, and make it most of the way over hurdle two. The online community can act as a crowdsourced mentor. Read the writings of established professionals. Look at their histories on LinkedIn and see the evolution of their job titles. Look where they started and you will see you can come from nearly anywhere and get to security. Some are even approachable at conferences and talks. When you look at those histories and talk to those people, you will see that there are some basic funnels to get where you want to go, but those aren't the only paths. Find something you want to do and pour yourself into it in your soon-to-be-not-free time. You will build yourself into a subject matter expert, and that will have value. And that will help you clear hurdle number three.

If I could do it again knowing what I know now, what would I do differently?

If I was in college I would find a paid internship. This gets you in and working in a professional environment, and working with the tools they don't have in schools. Plus, it gets a real company on your resume, and then you aren't someone with no experience.

If I was in a career rut, I would build a home lab (very inexpensive with virtual machine software). I would play with tools like Wireshark, looking at traffic. I would rip apart group policy on multiple Windows operating systems. I would read about system vulnerabilities and how to attack them, then test it out. I would find free tools that mimic what the expensive stuff does, to make it easier to work with the tools I have never touched, as the underlying idea is the same.

Where I am now: I would keep learning. I would keep working to make sure I'm not ashamed for not knowing an answer my dramatically younger colleagues take for granted. I'd use twitter more as a learning and networking tool, and as an outlet to share my view on topics I feel are underrepresented.

It doesn't get easier. But then again, neither does life. Keep pushing forward.



Threat hunting and threat intelligence have a special relationship. Think Sonny and Cher, peanut butter and jelly, even cake and ice cream. Each stands on its own, some with great renown, but put them together and you have a whole that far exceeds the sum of its parts. And like the ouroboros, hunting and intel feed off of each other.

Start with a hunt. The purpose of a hunt is to find adversarial behavior on the network. You do this by forming a hypothesis (I believe the adversaries are trying to move laterally through my network using PS Exec), and then reviewing log information to test that hypothesis (what unexpected accounts are attempting type 3 logins on multiple systems, successful or not, spawned by the process psexec.exe). You find an anomaly, you document it, and then you run it down to see if it can be explained by regular user or system behavior. Should you find proof of adversary behavior, you document everything and kick it over to incident response (assuming you are not also the incident responder). You then work to eliminate that adversary from your network.
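That hypothesis test can be sketched in a few lines. This is an illustrative sketch, not a real SIEM query: the event dicts and field names (event_id, logon_type, account, host) are assumptions standing in for whatever your log pipeline actually emits, and the allow-list would come from your own baseline.

```python
# Assumed allow-list of accounts expected to make network logons.
EXPECTED_ACCOUNTS = {"svc-backup", "svc-monitoring"}

def suspicious_lateral_movement(events):
    """Return type 3 (network) logons from unexpected accounts,
    grouped by account so multi-host patterns stand out."""
    hits = {}
    for ev in events:
        if ev.get("event_id") != 4624:    # 4624 = successful logon
            continue
        if ev.get("logon_type") != 3:     # type 3 = network logon (what PsExec uses)
            continue
        account = ev.get("account", "").lower()
        if account in EXPECTED_ACCOUNTS:
            continue
        hits.setdefault(account, []).append(ev["host"])
    # One account touching many hosts is the anomaly worth running down.
    return {acct: hosts for acct, hosts in hits.items() if len(hosts) > 1}
```

Anything this returns isn't proof of an adversary - it's the anomaly list you run down against regular user and system behavior.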

Enter threat intelligence. They take the documentation from the hunt and analyze it. Was there a pattern in the remote login attempts? Did it target servers with a specific function? Was it the same user every time? Was it regular users or IT users with higher levels of access? Did it happen during certain times of the day indicating an adversary's working hours? What other processes did the compromised user account attempt? They work to see if it is all the work of one adversary or multiple.
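One of those questions - did it happen during certain times of the day - lends itself to a quick sketch. Assuming timestamps in ISO 8601 form (an assumption; use whatever format your logs actually emit), bucketing by hour exposes a working-hours cluster:

```python
from collections import Counter
from datetime import datetime

def hourly_profile(timestamps):
    """Count events per hour of day. A tight cluster of activity can
    hint at an adversary's working day (and rough time zone)."""
    return Counter(datetime.fromisoformat(ts).hour for ts in timestamps)
```

If most activity lands in a narrow band of hours, that's one more data point for the profile - never proof by itself.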

Yes, multiple adversaries can be inside the same network, even doing battle with each other while each assumes the other is legitimate sysadmin or security personnel. See the 2016 DNC hack after action report.

Threat intel works to build a profile, and that includes examining the kill chain from the recon stage to the point the adversary was discovered. They use the diamond model, which tracks an adversary along the kill chain by focusing on four points at each step: adversary, infrastructure, capability, victim. To analyze an adversary's attack, threat intel wants to fill in all four vertices of the diamond at every phase. As they build a profile, they will see that an adversary may have undiscovered capabilities. An adversary may be discovered moving laterally with PS Exec, but how did they get on the network to begin with? How did they establish persistence? Building the adversary profile will create more questions. These can be compared against previous adversary documentation, or against information from external trusted threat intelligence sources.
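A minimal way to encode this - one diamond per kill chain phase, four vertices each - might look like the following. The class and field names are my own illustration, not a standard library; the useful part is that empty vertices become the open questions intel hands back to the hunters.

```python
from dataclasses import dataclass

@dataclass
class Diamond:
    """One diamond-model entry for a single kill chain phase."""
    phase: str                 # e.g. "lateral-movement", "persistence"
    adversary: str = ""
    infrastructure: str = ""
    capability: str = ""
    victim: str = ""

    def open_questions(self):
        """Vertices the investigation has not yet filled in."""
        return [v for v in ("adversary", "infrastructure", "capability", "victim")
                if not getattr(self, v)]
```

Filling in "capability: PsExec" and "victim: file servers" for the lateral-movement phase immediately surfaces the two unanswered vertices, which is exactly the shape of the questions in the paragraph above.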

The intel team takes these questions back to the hunters. Please hunt the history of the account usage, and look for the origin of the anomalous behavior. Something had to happen (a process run, a file downloaded, a website visited) that preceded this anomalous behavior. The hunters then refine the hunt using the parameters given to them by the threat intel team to flesh out more of the adversary's capabilities. They return their findings to intel, who analyzes and asks more questions, the hunters refine the hunt even more, and the process cycles until the adversary's tactics, techniques, and procedures can be assessed and documented.

The results of this process become alerts (traps) that fire should the adversary ever penetrate the network again. Incident responders can then use the adversary profile created by intel, with information gathered by the hunters, to contain and eradicate adversary presence with greater rapidity. These profiles can be used to distinguish between similar but separate adversaries, and help paint a picture of motivation. This tracking of adversaries, and the intent derived from their behavior, can be documented and taken to leadership to say: these are the types of organizations targeting our institution, and this is what they find valuable to disrupt and steal. We are better off directing our resources to elevate protection on this set of assets and people.

Documented evidence of intent and capability with a clear target make it easier for leadership to support a course of action. This continual process relies heavily on the coordination between the hunters and gatherers.

None of us is as secure as all of us.


Threat hunting is a popular concept in the modern Information Security space. Vendors will tout their systems as a threat hunting solution. Or even more inaccurately, they will claim their box or their service can eliminate the need to do threat hunting. Both of these claims are false. The first because it incorrectly defines threat hunting. The second because it claims to help you abdicate responsibility - the ultimate sin in Information Security.

Clearing the fog around the beliefs surrounding threat hunting starts with defining what isn't threat hunting. Checking on an alert in a system isn't threat hunting. This is triaging - determining the accuracy and risk of a given alert. If some source, a box, an indicator, or a listserv, tells you to go look to see if a given action is malicious, it's not hunting.

Hunting isn't about indicators. It's about behavior. You are looking for behavior out of the norm. For an adversary to get a foothold in your network, and then begin to act in their interests, their behavior will be both defined and different from the norm.

To properly hunt, there are some prerequisites. First, hunting is a process that will take time. It can't be rushed. You can scope your hunts to look at a specific behavior on one system over a short time to control the time investment. Starting small will allow you to understand how much time to budget.

Second, create documentation. Hunts need to be documented to help baseline. Environments change, and the hunts can help keep baselines up to date. Hunt documentation needs to show exactly what the hunt was about, how it was scoped, what the hunter sought, and the results of that hunt. This allows the hunter to refine their process, create a history of refinements to the hunt, and provide a template for teaching junior security team members how to hunt. My personal preference is Microsoft OneNote, but you can use a wiki, you can use notepad. As long as you can organize your documentation, you are going in the right direction.

Third, have visibility. You will need to be able to see data. That data has to come from somewhere, and the easier the access, the easier it will be to search. You can hunt with the Windows Event Viewer. You can hunt with netflow. You can hunt with just about any logging data. You need to be able to see it and carve it.

Carving is the ability to manipulate data to remove irrelevant sections, or to isolate sections that require further analysis. This manipulation can be based on simple ideas (running FINDSTR to look for logon type 10, or using GREP to look for netflow connections into the server core from unexpected IPs). Talk to any experienced threat intelligence analyst, and they will sing the praises of Microsoft Excel.
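The same keep/drop logic FINDSTR and grep give you on the command line can be sketched as one small function - a toy illustration of carving, with made-up log lines, not any particular tool's format:

```python
def carve(lines, keep=(), drop=()):
    """Keep lines containing any 'keep' substring, then discard
    lines containing any 'drop' substring - the same filtering
    FINDSTR or grep would do, expressed as one composable step."""
    if keep:
        lines = [ln for ln in lines if any(k in ln for k in keep)]
    if drop:
        lines = [ln for ln in lines if not any(d in ln for d in drop)]
    return lines
```

For example, `carve(log_lines, keep=["Logon Type: 10"], drop=["10.0.5."])` would isolate remote interactive logons while filtering out a known-good subnet (the subnet is illustrative). Each pass of the hunt carves the pile smaller until only behavior needing justification remains.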

Once you have set aside time, can create documentation, have visibility, and can manipulate data, you are ready to hunt. The process of a hunt is very simple. Behavior A is normal. An adversary on the network will exhibit behavior that deviates from Behavior A. How do I find behaviors that deviate from Behavior A? You look at the data and filter out normal behavior. The behavior that's left needs your analysis. Every bit of anomalous behavior needs to be either justified or addressed.

This is where you find unique (or erroneous) configurations in your environment. This information can correct issues, or help people understand what their systems do. On more than one occasion, at multiple jobs, I've asked system owners why their system behaves a certain way, and repeatedly I have been told they have no idea.

Once you have cleared known good behavior, and you have justified what can be justified in your environment, you are left in two potential states. One - there is nothing left to carve out from your hunt. This means that an adversary didn't exhibit this behavior (or found a way to disguise it, but that's farther down the threat hunting rabbit hole). Two - there are unexplained behaviors found in your hunt.

If it's option two, congratulations! The process of hunting is now concluded. The process of incident response begins.


This is part three in the series on personal codes of conduct. These are my maxims, my personal guiding philosophic code.

Part 1
Part 2

Maxim 7: Never say no to a user. Say "Let me find a way for you to do that safely."

Information Security professionals are relentless in finding ways to make their job easier. We turn threat hunts into alerts. We automate response actions. We use scripts to automate as much as possible. We do anything to make our lives easier. End users are the same way. If software will make a user's job easier, they will use it, whether or not the company pays for it. How often have you found unlicensed, hacked software on your users' computers? If you haven't checked your users' systems, take a Xanax and go hunting.

End users don't tell us about these unpatchable, unlicensed trojan horses because they expect us to rip away what they have and make their job harder. You want to change the paradigm? When you find this software, sit down with the user and explain the issue. Then, tell them you want to find a way for them to do that safely. If you make the case to leadership that this software is needed, you can press to get licensed copies. You can find free versions with similar functionality that can be patched. When you show users you understand what they need, and you can demonstrate you want to see them do their job safely without roadblocks, you create an ally and an advocate.

Maxim 8: Remember, kids: all mics are hot, all guns are loaded, and all systems are production.

Credit for this goes to @infosecxual.

I haven't had an employer yet where I could do testing on 'test' systems without finding out I shouldn't have, or that I needed to stop because someone uses the system in a production capacity. Then why is it called test? Just because something is labeled a certain way doesn't mean it's being used that way. Define: hacking.


I had a job where a coworker kept a bowl of movie theater candy out with a spoon, so people could serve themselves a spoonful of Mike and Ikes or Junior Mints and enjoy. We were having a talk about expectations and mismatched expectations when she set out a big bowl of M&Ms. To prove a point, I went downstairs to the vending machine, bought a pack of Skittles, and ninjaed the red, orange, and yellow Skittles in among the M&Ms. The look on people's faces, especially when they got a yellow one, became an unspoken example of misplaced expectations.


This doesn't just apply to test systems. Every time you try to cut a corner, such as quickly updating this one router or slipping in a quick vulnerability scan against a prod system during business hours, you are rolling the dice. The accountability for crashing a prod system because you weren't patient is more than downtime. It affects how you are viewed. See Maxim 3.

And don't forget, you never know who is listening. Keep that unpleasant opinion about users to yourself.

Maxim 9: People will use what power they have. Plan for it.

End users may not have much power when it comes to policy or procedure. We do what we can to work with them, but there are times policy or procedure dictates certain behavior. When Infosec has a victory where a user has to do something the way we want, we have to be careful in how that is presented to the user. When users are shut down, when their process changes, if it is not done in a way respectful to users and their job, users will find a way to push back.

Understand this. People can be petty. Users are people. Ergo ...

Does the user or one of their friends // allies sit on the change board? Prepare to fight to have even the most basic changes approved. At a previous job, security had a history of hampering other departments instead of working with them. My first change was adding more vulnerability scanners to offset load and speed up the scanning process during approved windows, allowing us to shrink those windows. This change was a benefit to everyone. Two members of the board fought it, citing all the things they thought could go wrong, none of which made sense. The name Skynet even came up in the argument. If you follow current politics, you know you can't argue reason with people who absolutely refuse to embrace it.

Can they deprioritize a process? Same job: we were doing annual IAM role permission reviews. The system was set up to give people a month to get it done, with reminder e-mails at 14, 7, 5, 3, 2, and 1 days to deadline. Once you hit 5 days, those e-mails included the supervisor. When you hit -1, they included the supervisor's supervisor. We had one holdout who hit -14 days. All three in that line up the food chain didn't like Security due to some perceived slight years ago. So, every one of them argued that other priorities were more important. We had to get people with Cs in their title involved, and the review came in a month late. Of course, the delay didn't make Security look any better.

In security, many processes and needs will be handed off to other teams due to separation of duties or the need for specialists and specialty knowledge. If you need something, the monopolistic provider has a good deal of power. If you haven't worked to foster good relationships, they will use that power to show you who has it. You may win in the end, at the cost of stress, frazzled nerves, and other users watching you go to war.

"Why have enemies when you can have friends?"
Charlie Hunnam as King Arthur