
Threat hunting is a popular concept in the modern Information Security space. Vendors will tout their systems as a threat hunting solution. Or even more inaccurately, they will claim their box or their service can eliminate the need to do threat hunting. Both of these claims are false. The first because it incorrectly defines threat hunting. The second because it claims to help you abdicate responsibility - the ultimate sin in Information Security.

Clearing the fog around threat hunting starts with defining what isn't threat hunting. Checking on an alert in a system isn't threat hunting. That is triaging - determining the accuracy and risk of a given alert. If some source (a box, an indicator, a listserv) tells you to go look at whether a given action is malicious, it's not hunting.

Hunting isn't about indicators. It's about behavior. You are looking for behavior out of the norm. For an adversary to get a foothold in your network, and then begin to act in their interests, their behavior will be both defined and different from the norm.

To properly hunt, there are some prerequisites. First, hunting is a process that takes time. It can't be rushed. You can scope your hunts to look at a specific behavior on one system over a short time to control the time investment. Starting small will help you understand how much time to budget.

Second, create documentation. Hunts need to be documented to help baseline. Environments change, and hunts help keep baselines up to date. Hunt documentation needs to show exactly what the hunt was about, how it was scoped, what the hunter sought, and the results of the hunt. This allows the hunter to refine their process, create a history of refinements to the hunt, and provide a template for teaching junior security team members how to hunt. My personal preference is Microsoft OneNote, but you can use a wiki or even Notepad. As long as you can organize your documentation, you are going in the right direction.

Third, have visibility. You will need to be able to see data. That data has to come from somewhere, and the easier the access, the easier it will be to search. You can hunt with the Windows Event Viewer. You can hunt with NetFlow. You can hunt with just about any logging data. You need to be able to see it and carve it.

Carving is the ability to manipulate data to remove irrelevant sections or isolate sections that require further analysis. This manipulation can be as simple as running FINDSTR to look for logon type 10, or using GREP to look for NetFlow connections into the server core from unexpected IPs. Talk to any experienced threat intelligence analyst, and they will sing the praises of Microsoft Excel.
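As a concrete sketch of carving, the commands below filter an exported Windows Security log for interactive RDP logons (Event ID 4624 with logon type 10). The file name, CSV layout, and accounts are invented for illustration; point the filter at your own export.

```shell
# Hedged sketch: carving an exported Windows Security log for RDP logons
# (Event ID 4624, logon type 10). events.csv and its layout are invented
# for illustration; substitute your own export.
cat > events.csv <<'EOF'
TimeCreated,EventID,LogonType,Account
2024-01-02T03:04:05,4624,10,alice
2024-01-02T03:05:00,4624,2,bob
2024-01-02T03:06:00,4624,10,charlie
EOF

# Keep only successful logons (4624) that came in over RDP (type 10).
awk -F, '$2 == 4624 && $3 == 10' events.csv
```

A plain `FINDSTR "4624"` gets you part of the way; a column-aware tool like awk, PowerShell, or Excel lets you filter on the logon type as well.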

Once you have set aside time, can create documentation, have visibility, and can manipulate data, you are ready to hunt. The process of a hunt is very simple. Behavior A is normal. An adversary on the network will exhibit behavior that deviates from Behavior A. How do I find behaviors that deviate from Behavior A? Look at the data and filter out normal behavior. The behavior that's left needs your analysis. Every bit of anomalous behavior needs to be either justified or addressed.
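The filter-out-the-normal step above can be sketched in one line of shell. The file names and process names here are hypothetical; the point is the subtraction: observed behavior minus known-good baseline equals what needs analysis.

```shell
# Hedged sketch of the core hunt loop: observed behavior minus known-good
# baseline equals what needs analysis. File names and contents are invented.
cat > baseline.txt <<'EOF'
svchost.exe
explorer.exe
EOF
cat > observed.txt <<'EOF'
svchost.exe
explorer.exe
weird_updater.exe
EOF

# -v invert match, -x whole line, -F fixed strings, -f patterns from file.
# Everything not in the baseline must be justified or investigated.
grep -vxFf baseline.txt observed.txt
```

Whatever survives this subtraction is exactly the set of behaviors the hunt obligates you to justify or address.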

This is where you find unique (or erroneous) configurations in your environment. This information can correct issues, or help people understand what their systems do. On more than one occasion, at multiple jobs, I've asked system owners why their system does something, and repeatedly I've been told they have no idea.

Once you have cleared known good behavior, and you have justified what can be justified in your environment, you are left in one of two states. One - there is nothing left to carve out from your hunt. This means an adversary didn't exhibit this behavior (or found a way to disguise it, but that's farther down the threat hunting rabbit hole). Two - there are unexplained behaviors left in your hunt.

If it's option two, congratulations! The process of hunting is now concluded. The process of incident response begins.


This is part three in the series on personal codes of conduct. These are my maxims, my personal guiding philosophic code.

Part 1
Part 2

Maxim 7: Never say no to a user. Say "Let me find a way for you to do that safely."

Information Security professionals are relentless in finding ways to make their jobs easier. We turn threat hunts into alerts. We automate response actions. We use scripts to automate as much as possible. We do anything to make our lives easier. End users are the same way. If software will make a user's job easier, they will use it, whether or not the company pays for it. How often have you found unlicensed, hacked software on your users' computers? If you haven't checked your users' systems, take a Xanax and go hunting.

End users don't tell us about these unpatchable, unlicensed trojan horses because they expect us to rip away what they have and make their jobs harder. You want to change the paradigm? When you find this software, sit down with the user and explain the issue. Then tell them you want to find a way for them to do that safely. If you present the software to leadership as a business need, you can press to get licensed copies. You can find free versions with similar functionality that can be patched. When you show users you understand what they need, and you can demonstrate you want to see them do their job safely without roadblocks, you create an ally and an advocate.

Maxim 8: Remember, kids: all mics are hot, all guns are loaded, and all systems are production.

Credit for this goes to @infosecxual.

I haven't had an employer yet where I haven't done testing on 'test' systems, only to find out I shouldn't have, or that I needed to stop because someone was using the system in a production capacity. Then why is it called test? Just because something is labeled a certain way doesn't mean it's being used that way. Define: hacking.


I had a job where a coworker kept a bowl of movie theater candy out with a spoon so people could serve themselves a spoonful of Mike and Ikes or Junior Mints. We were having a talk about expectations and mismatched expectations when she set out a big bowl of M&Ms. To prove a point, I went downstairs to the vending machine, bought a pack of Skittles, and ninjaed the red, orange, and yellow ones in among the M&Ms. The look on people's faces, especially when they got a yellow one, became an unspoken example of misplaced expectations.


This doesn't just apply to test systems. Every time you try to cut a corner, such as quickly updating that one router or slipping in a quick vulnerability scan against a prod system during business hours, you are rolling the dice. The cost of crashing a prod system because you weren't patient is more than downtime. It affects how you are viewed. See Maxim 3.

And don't forget, you never know who is listening. Keep that unpleasant opinion about users to yourself.

Maxim 9: People will use what power they have. Plan for it.

End users may not have much power when it comes to policy or procedure. We do what we can to work with them, but there are times policy or procedure dictates certain behavior. When Infosec has a victory and a user has to do something the way we want, we have to be careful in how that is presented. When users are shut down, when their process changes, if it is not done in a way that respects users and their jobs, users will find a way to push back.

Understand this. People can be petty. Users are people. Ergo ...

Does the user or one of their friends/allies sit on the change board? Prepare to fight to have even the most basic changes approved. At a previous job, security had a history of hampering other departments instead of working with them. My first change was adding more vulnerability scanners to offset load and speed up the scanning process during approved windows, allowing us to shrink those windows. This change was a benefit to everyone. Two members of the board fought it, citing everything they thought could go wrong, none of which made sense. The name Skynet even came up in the argument. If you follow current politics, you know you can't argue reason with people who absolutely refuse to embrace it.

Can they deprioritize a process? Same job: we were doing annual IAM role permission reviews. The system gave people a month to get them done, with reminder e-mails at 14, 7, 5, 3, 2, and 1 days to deadline. Once you hit 5 days, those e-mails included the reviewer's supervisor. When you hit -1, they included the supervisor's supervisor. We had one holdout who hit -14 days. All three people in that line up the food chain disliked Security due to some perceived slight years ago, so every one of them argued that other priorities were more important. We had to get people with Cs in their titles involved, and the review still came in a month late. Of course, the whole mess didn't make Security look any better.

In security, many processes and needs will be handed off to other teams due to separation of duties or the need for specialists and specialty knowledge. If you need something, the monopolistic provider has a good deal of power. If you haven't worked to foster good relationships, they will use that power to show you who has it. You may win in the end, at the cost of stress, frazzled nerves, and other users watching you go to war.

"Why have enemies when you can have friends?"
Charlie Hunnam as King Arthur