Community is an interesting concept. A collective group bound by shared values and beliefs - this is how community is defined. Now, even in the smallest communities (population: 2), not all of the values and beliefs are shared. For a community to stand, the core values must be shared.

Infosec isn't a community. Infosec is a philosophy built around values and beliefs, potentially including a right to privacy, data security, breaking systems to fix them before someone else can break them, and control of one's data. People may have more to add to that list. People may not include everything on that list. Either way, how anyone defines Infosec is built around the values and beliefs they assign to the philosophy - and that isn't a universal list. This distinction of personally defined philosophy versus the values and beliefs that make up that philosophy matters.

A modern American philosopher I know hangs his hat on the credo "Acta, non verba." Actions, not words. We are defined by our actions more so than our words. Social media has amplified this disparity. Many will tweet about injustices. They will post selfies with pontification about wrongs on Instagram. What they will not do is take any meaningful action to correct those wrongs and injustices. What makes it worse is that they will actively decry a wrong in the world, and then demand that someone characteristically unlike themselves be the one to fix it.

At no point in human history has that been the way things have worked. Ever.

History is replete with turning points built on the backs of individuals who took personal responsibility to be the change they wanted to see in the world. They lead from the front. They let their behavior set the example. Whether it was George Washington leading the rebellion, Dr. Martin Luther King Jr. marching non-violently, or Elizabeth I pushing back against tradition, someone said the world shouldn't be this way and worked to change it. Successfully. They did this by establishing a like-minded community of people willing to put in the work to change the status quo in a constructive manner.

This is why the concept of an Infosec community is poisonous.

Infosec professionals and aspirants are very active on social media. They share information, brag about accomplishments, and preach. A lot. When some grave ill comes to the attention of the Infosec people engaged on social media, the pitchforks are sharpened and the torches lit. Vitriol is flung into the arena and guns start blazing. There is no time to wait, battle must be joined. People have to be seen challenging this wrong from their phones, tablets, and laptops, and they need to be among the first to engage.

My guiding principle of incident response is simple. When all hell breaks loose, the very first thing you should do is nothing. The second is take a breath. Why? Either you have an incident response plan, which means the incident will be handled properly and in a timely manner, or you don't, in which case the likelihood of irreparable damage done by your and your team's own hands takes exponential jumps.

When these horrible behaviors are brought up in social media (ALWAYS selectively edited for maximum impact as desired by the poster) the response is sudden, damning, and often without any analysis or rational thought. Combined with the need to be seen railing against the horrible thing, we start seeing a pattern of what defines the 'Infosec Community.'

You change behaviors by engaging constructively

The 'Infosec Community' chooses to name and shame, and condemn, and then only selectively based on who is in and who is out.

And here is where the concept of Infosec as community crumbles. The 'community' doesn't hold everyone accountable equally (so justice is not one of its principles). The 'community' will indict and sentence, without trial or defense, based on selective information - declaring attribution on a lone indicator, with due process, the search for truth // fact, and thoroughness out the window. The 'community' will take things out of context if it supports their side. It engages in whataboutism. The list goes on.

There isn't an Infosec community. There are communities that exist within the bounds of Infosec. Recognize them for what they are.

"All animals are equal, but some animals are more equal than others."

-George Orwell.



There is nothing like an empty hotel gym at 5am. You can struggle. You can flatulate with impunity while your guts undulate like a bridge about to flip while on the treadmill. You can do low weight high reps on the dumbbells without 'bro do you even lift' condescension.

No matter what you want to improve, no matter how much of a novice you think you are, there is always a way to improve yourself in a manner that will not draw attention to you and your perceived shortcomings until your impostor syndrome has lessened.

Or dive right in and face it.

As an institutional defender, I have the disadvantage of having to guess right the first time in order to detect an attack at its earliest stage. Every institution also has a limited budget, so as a defender I've had to choose which doors I watch, and with what level of scrutiny. The only way to do that is to build a threat model.

To understand threat modeling, you need to start with the risk equation.

Risk = (Threat to asset x vulnerability allowing reach x impact to institution) / mitigations


If your business has a gum ball machine in the lobby that takes quarters, the threat is the loss of the gum ball machine, its quarters, and its gum balls. The vulnerabilities: the lock can be beaten with a common key, someone can use slugs to get gum balls, or someone can grab the machine and run. The impact is the cost of the lost goods, the reputational impact, and the time lost replacing it, updating policy and procedures, or working with the police. The mitigation could be bolting it to the floor, having a custom key, or hiring someone to either man the machine or protect it.

You need to understand what you are trying to protect, the threat to it, what vulnerabilities it has, the impact of its loss, and whether the mitigation is appropriate. Hiring an armed guard will make the loss of the gum ball machine unlikely, but the cost outweighs the benefit.
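The gum ball example can be sketched in a few lines of code. This is a minimal illustration of the risk equation above, not a standard methodology: the 1-10 scoring scales and every number below are made-up assumptions for demonstration.

```python
# A minimal sketch of the risk equation:
#   Risk = (threat x vulnerability x impact) / mitigations
# All scores below are hypothetical, on an assumed 1-10 scale.

def risk_score(threat: float, vulnerability: float, impact: float,
               mitigations: float) -> float:
    """Higher result = more risk. Mitigations divide the product down."""
    return (threat * vulnerability * impact) / mitigations

# Gum ball machine in the lobby, scored before and after bolting it down:
baseline = risk_score(threat=6, vulnerability=8, impact=2, mitigations=1)
bolted = risk_score(threat=6, vulnerability=4, impact=2, mitigations=3)

print(f"baseline risk: {baseline:.1f}")        # 96.0
print(f"bolted to the floor: {bolted:.1f}")    # 16.0
```

The point the numbers make is the same one the prose makes: a cheap mitigation that cuts the vulnerability and divides the remainder can drop the score far more than its cost would suggest, while an armed guard would drop it further still at a price no gum ball machine justifies.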

What are you trying to protect? It could be any number of things.

  • Money
  • Physical property
  • Intellectual property
  • Access // Trust
  • Reputation // Brand

The asset needs to be defined before you can understand the risk involved. Most likely, your institution has multiple asset types. These assets will not carry the same risk, and will not be protected the same way.

In a bank, the obvious asset at risk is the money. That asset exists both as physical currency and digital bits. Each has its own threat model. Both are at risk from thieves, insider threats, or potential destruction. How do you define each? How do you prioritize which one you want to protect more? How do you define your crown jewels?

Think about the threat to the assets. Someone could take the physical money. Someone could manipulate the digital bits to make someone else take ownership of the money. How do they accomplish either feat? Are you more worried about masked assailants taking the currency from a branch office, or a digital adversary abusing the SWIFT banking system to move money to another bank and account in an unauthorized manner? If you controlled security spend, how much would you spend depending which? How would you prioritize your detection capabilities?

Think about the vulnerabilities. Who has access to move the money? Who determines who has that access? How is that access granted? Who audits that behavior? When and how often? How do you define trust of the people involved in access? How do you verify that trust? What about the systems involved? What physical protections exist? How strong are they? What hardware and software is in use to control access to the digital assets? How often are they patched? What is the software // hardware lifecycle? What policies governing use of these assets are in place?

Think about the impact. How does the loss of the asset affect the institution? What is the total cost of that loss? How do you quantify the loss of trust? The failing morale? The loss of time investigating, then vetting and putting in place new mitigations (procedures, audits, hardware and software)?

In order to prioritize your defenses, you need to understand what you are protecting, the impact of its loss, how it can be lost, and why (and potentially by whom) that loss would occur. Then design your mitigations based on that. That is your threat model.
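The questions above can be captured as a simple data structure: one entry per asset, scored on each factor, so defenses can be prioritized by computed risk. This is a hypothetical sketch using the bank example from earlier; the asset names, scores, and notes are all illustrative assumptions.

```python
# A hypothetical threat model as a data structure. Each asset gets its
# own entry and its own scores; sorting by risk drives prioritization.
from dataclasses import dataclass, field

@dataclass
class AssetThreatModel:
    asset: str
    threat: float          # how likely/capable the actors targeting it are
    vulnerability: float   # how easy a successful attack would be
    impact: float          # total cost of the loss to the institution
    mitigations: float     # strength of controls in place (>= 1)
    notes: list = field(default_factory=list)

    @property
    def risk(self) -> float:
        return (self.threat * self.vulnerability * self.impact) / self.mitigations

# The bank: physical currency and digital bits each get their own model.
models = [
    AssetThreatModel("branch cash", threat=4, vulnerability=3, impact=5,
                     mitigations=4, notes=["vault", "alarms", "insurance"]),
    AssetThreatModel("SWIFT transfers", threat=7, vulnerability=5, impact=9,
                     mitigations=3, notes=["dual authorization", "audit trail"]),
]

# Spend and detection effort follow descending risk.
for m in sorted(models, key=lambda m: m.risk, reverse=True):
    print(f"{m.asset}: risk {m.risk:.1f}")
```

With these made-up scores, the SWIFT abuse scenario dominates the branch robbery, which is exactly the kind of prioritization decision the prose describes: the model doesn't decide for you, it forces you to write down why one door gets more scrutiny than another.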