Writing Better Alert Names - How to Win the Hearts of SOC Analysts

“What you see and what you hear depends a great deal on where you are standing. It also depends on what sort of person you are.” ~ The Magician's Nephew, C.S. Lewis

Thanks to Hanner for her feedback on this post.

The Problem

Detection Engineering requires you to be a storyteller

Detection engineering is inherently a very technical and often abstracted process, but there is also a deeply human part of creating a detection. The ultimate goal of detection engineering is to select an event that requires further investigation out of a vast ocean of events. The focus is usually on how those events are selected, but what about how they are displayed to the person investigating them?

A detection name is the first thing that an analyst sees - it tells a story and sets a course for investigation. How do we name an alert so that it sets the analyst on the right path?

What do I mean by not-great alert names? Let's peruse a few public alert libraries.

Disclaimer: I have personally written some absolutely rubbish alerts. This is not me attempting to dunk on any of the work below, but rather to show how some of these alerts could do with some modification to make them easier to triage and investigate.

Alerts with names that are hard to understand

Splunk Alert

  1. Use of uncommon words.
  2. It’s vague and lacks detail.
  3. Where do you start here?
  4. Doesn’t convey why this needs to be looked at.

Azure AD Identity Protection Alert

  1. Very vague, with little detail.
  2. Doesn’t set an analyst on the right path.
  3. Specific activity is not listed in the alert name.

Alerts with names that are a bit easier to understand

*better in comparison to the alert names above.

Potential Credential Access via Windows Utilities

  1. This has struck the balance between too much detail and too little.
  2. This alert uses common language (in this case, Mitre ATT&CK terminology).
  3. This is a bit subjective, but I can more easily determine urgency with this alert.

This is also not necessarily something that I’ve made up all on my own…

I was recently re-listening to Episode 24 of the DCP podcast where they were interviewing Jamie Williams from Mitre. They were talking about EDR evaluations, and Jamie made a comment that resonated. He said:

“…We know the scenario we executed like the back of our hand, we built slides we walk them through it from a purple team perspective. Part of our results analysis, is turn your brain off pretend I have no idea what’s going, look at the images from the vendor does it actually detect and say what’s going on, and how does it communicate that to the end user based on the scenario we’re running. Some of the fun we do is just that, looking at a picture and the vendor says this is this, and we say well does it really, I read everything in front of me I understand under the hood what’s happening, and what the motivation for this UI was, but it doesn’t communicate it as well as you think it does "

How do we fix this?


Before we get into fixes, I think it's sensible to quickly figure out what good communication looks like.

I only recently read a study on how to communicate with future societies about the storage of hazardous nuclear waste: Expert Judgment on Markers to Deter Inadvertent Human Intrusion into the Waste Isolation Pilot Plant.

And this quote stood out to me:

“Before one can communicate with future societies about the location and dangers of the wastes, it is important to consider with whom one is trying to communicate.”

  1. Who are you trying to communicate with? Are they entry-level, technical or non-technical?
  2. Good communication is clear and straight up; we don't dilly-dally around the point.
  3. Using big words is cool, but leave them out at this stage. While you might want to say that something is salient or rather obstreperous, it's important when you're communicating with someone that they understand the words you're using. Otherwise, if you use big words, people will just think you're a bit of a weird unit.
  4. And finally, good communication is a two-way street. For me, listening is just as important as speaking; it should be at a minimum a 50/50 split of talking and listening.

The Plan :tm:

Here are a few strategies that I've found help not only detection engineering but overall operations in a SOC, and they're especially suited to improving alert names. They are:

  1. Use consistent terminology.
  2. Peer review detection names.
  3. Decide what is most important for an analyst to know.

Step 1 - Use consistent terminology

If you're going to call an alert "Suspicious XYZ happened", then you need to clearly define what "suspicious" means. Those parameters need to be defined at least in a field in the alert, or in an easily accessible taxonomy. A rough sketch of what that taxonomy could look like is below.
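This is only a minimal sketch, assuming you keep the definitions somewhere version-controlled alongside your detections; the qualifier words and their definitions here are examples I've made up, not a standard.

# Hypothetical, team-agreed definitions for the qualifier words allowed in alert names
$AlertNameQualifiers = @{
    'Suspicious' = 'Deviates from an established baseline, but no confirmed malicious indicator yet.'
    'Potential'  = 'Matches a known attacker technique, but common benign explanations exist.'
    'Malicious'  = 'Confirmed against threat intel or a known-bad indicator; triage is about scoping, not validating.'
}

# Anyone writing or triaging a detection can look up exactly what the name is claiming
$AlertNameQualifiers['Suspicious']

The exact words don't matter nearly as much as the fact that everyone can look up the same definition.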

The University of Melbourne has a great guide on how to develop a naming convention.

Specifically, they call out the following as characteristics of a good naming convention, and things to avoid (I've translated these into alert-name terms after the lists).


Establish good foundations

  • Keep file names short but meaningful.
  • Include any unique identifiers, e.g. case number, project title.
  • Be consistent.
  • Indicate version number where appropriate.
  • Ensure the purpose of the document is quickly and easily identifiable.

Try to avoid

  • Common words such as ‘draft’, ’letter’, ‘current’ or ‘active’.
  • Unclear, vague or repetitive e-mail correspondence titles.
  • Symbol characters such as: \ / < > | " ? ; = + & $ α β.
  • Abbreviations that are not commonly understood, or which may frequently change throughout time.
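Those points are written for file names, but they translate to alert names almost one-for-one. For illustration (these example names are invented, not pulled from any product):

  • “Suspicious Activity Detected” - common words only, no unique identifiers, purpose unclear.
  • “Potential Credential Access - LSASS Memory Read via ProcDump” - short but meaningful, consistent structure, and the purpose is quickly and easily identifiable.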

Step 2 - Peer review detection names

When you're developing naming conventions, often the simple act of getting together and talking about what works for people and what doesn't is a great starting place.

The end state here, however, is a well-defined, written SOP. This SOP should go into the specific details and structure that you use for naming alerts at your organisation.

You should also include a naming peer review step as part of alert implementation. It will be somewhat related to the technical peer review, but I'd recommend calling it out as an individual step in the process. However, do what works best for your team's process.
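As a rough illustration of what "specific details and structure" can mean in practice (the pattern below is invented for this example, not a standard), writing the structure down concretely means a reviewer, or even a simple script, can sanity-check a proposed name against it:

# Hypothetical structure from the SOP: '<Qualifier> <what happened> - <key detail>'
$NamePattern = '^(Potential|Suspicious|Malicious) .+ - .+$'

'Potential Credential Access via Windows Utilities - LSASS read by procdump.exe' -match $NamePattern   # True
'Suspicious Activity Detected' -match $NamePattern                                                      # False - no detail after the hyphen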

Step 3 - Decide what is most important for an analyst to know

Context is key for an analyst; anything that gets that context to them quicker is a win. These are a few tools/methods that help make that easier:

The 5 levels of messages

Super-detailed messages are not always what you want in an alert; more words do not necessarily mean an alert name has the right details.

Again drawing on my current obsession, the Sandia Labs report on long-term radiation messaging: the report defines tiers of information for its messages.

3.1.3 Message Levels and Media, pg. 3-3 (Sandia Labs report)

We want to keep the level of information around the Level 2 mark. Level 3 is probably fine, but beyond that we start to push past the bounds of what is useful.
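To make that concrete, here is my own rough mapping of those levels onto a single alert. These names are invented, and the levels are an analogy rather than the report's exact definitions.

# My own rough mapping of message levels onto one alert (an analogy, not the report's definitions)
'Suspicious Activity'                                                      # Level 1: something happened
'Suspicious Login - Impossible Travel'                                     # Level 2: what happened and why it matters
'Suspicious Login - Impossible Travel - new device and new city for Tim'   # Level 3: the key triage context, still one line
# Level 4 and beyond (full sign-in history, raw logs, enrichment) belongs in the alert body, not the name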

From 4.2 Linguistic Principles:

“…This simplicity should be applied to the message itself (e.g., be direct and not misleading), the content of the message (e.g., eliminate extraneous information), and the grammatical structure within the message (e.g., avoid complex sentences and colloquialisms).”

Pg. 4-5

So much of this Sandia study leans on a study by David B. Givens, “FROM HERE TO ETERNITY: Communicating with the Distant Future.” It is a fantastic read that delves deep into the concept of long-term hazard messaging, especially the section “Semiotic Lessons From Antiquity”, which offers some great tips on communication.

Leaning on PowerShell

I am a big fan of the verb-noun syntax for PowerShell. It’s a great way to remember commands as it sticks in your mind. “The verb part of the name identifies the action that the cmdlet performs. The noun part of the name identifies the entity on which the action is performed”

https://learn.microsoft.com/en-us/powershell/scripting/developer/cmdlet/approved-verbs-for-windows-powershell-commands?view=powershell-7.3

For example, take this PowerShell command: at the start we have what we want to do (the verb), and after that, what we want to do it to (the noun).

Get-Service | Where-Object Name -eq "blah"

If we go along the lines of "this action happened here" and draw on a common taxonomy for our wording (we'll use Mitre ATT&CK in this instance as our taxonomy of choice), we end up with names like:

Potential Credential Dumping of LSASS Occurred on LAP0002 By Admin_Tim

Suspicious Login - User Tim logged in from a new laptop and city

While ATT&CK is good for some things and not good for others (a topic for another day), it works well as a folk taxonomy, and using its techniques and procedures is a great way to get everyone singing from the same hymn book.

It's also a good idea to put technical details in the alert to make triage faster (remember, not too verbose though). To take the example from earlier: the things that are anomalous about the login go straight into the alert name.

We don't want entire paragraphs in the alert name. However, where it's applicable, you can facilitate faster triage by including the information the analyst needs to make a decision, especially in times of pressure such as after hours (anyone who's ever triaged a Defender for Identity DCSync alert at 2 am after being woken from a deep sleep knows you're not exactly dealing with a full deck).
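If you want to bake that template into your detection pipeline, a minimal sketch could look like the function below. New-AlertName and its parameters are hypothetical helpers for illustration, not part of any product.

# Hypothetical helper that enforces the agreed 'action happened to thing, where, by whom' structure
function New-AlertName {
    param(
        [ValidateSet('Potential', 'Suspicious', 'Malicious')]
        [string]$Qualifier,
        [string]$Technique,   # e.g. the ATT&CK technique involved
        [string]$Target,
        [string]$Device,
        [string]$User
    )
    "$Qualifier $Technique of $Target Occurred on $Device By $User"
}

New-AlertName -Qualifier 'Potential' -Technique 'Credential Dumping' -Target 'LSASS' -Device 'LAP0002' -User 'Admin_Tim'
# Potential Credential Dumping of LSASS Occurred on LAP0002 By Admin_Tim

Even if you never script it, agreeing on the template (and the taxonomy behind the words) is the part that matters.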

All in all, analysts need to understand WHY an alert has been triggered, and we can help preload that understanding by thinking about our alert names and using known words that already carry meaning in their minds.

Fin/TLDR

Thank you for your time; if you got this far, nice work.

Upon reflection after writing all this, it boils down to this: don't create alerts in a vacuum, and ask your team whether what you're trying to say makes sense.

You as a team must agree on how you will communicate with each other via alerts. If there are team-specific terminologies, write them down, define them, and make them easy for everyone to get to, and use a common taxonomy.

I would also recommend documenting how you name detections/alerts and communicating the nomenclature to your analyst team and detection engineers, so that important alerts are instantly recognized. This not only increases analyst productivity but also ensures critical alerts are treated with the priority they require during the triage process.

Short, sharp, and concise is key to this. As a PowerShell nerd I have very much drawn on the verb-noun philosophy here, and I like using "action happened to thing", combined with a known taxonomy and known categories of attacks (for example, Mitre ATT&CK).

Getting your alerts technically peer reviewed is crucial. However, it's also important that, as part of your peer review process, the reviewer reads out your alert name and can communicate back to you what it means; if you've done your job right, they'll tell you the specific scenario the alert is looking for.

And finally, more words in alerts are not always better.

“It is my ambition to say in ten sentences what others say in a whole book.” ― Friedrich Nietzsche

stay groovy