Back in December, which really doesn’t feel that long ago, I talked about how I was prepping for a project.
The end goal is to brush up on Network Security Monitoring (NSM) and use it to better monitor my home network. I occasionally check the logs, but I think I would be more active about it if I had a centralized tool to help. Right now, I have a log of blocked domain alerts in my pfSense firewall’s pfBlockerNG reports screen. Most of the entries are tracking-related domains that the Pi-hole isn’t blocking, so they make it to the second block list on the firewall.
Note: I say my home network, not my home lab. As I’ve said in the past, I no longer maintain a home lab due to cost and space. I have parts of my network isolated, but I wouldn’t call that a lab.
My thoughts and notes walking through the Foreword:
I did like that, right up front in the Foreword, Mike Poor points out the NSM cycle of collect, detect, and analyze. My reading notes also include that Mr. Poor points out the last step, analysis, is often skipped. I also liked the point he made about reviewing logs.
Logs should be reviewed for three reasons, or at least I copied three reasons down in my notes:
- To look for signs of compromise
- To improve system performance
- To collect business analytics
Each could be its own blog post, but all three tie directly to cybersecurity. The first one is obvious. The second is about efficiency: smaller logs, better network communications, and contained operational expense. The third gives an idea of the possible attack surface based on actual usage.
The logs hold the data you’re looking for about breaches. They come from the systems that a threat (actor or group) will interact with. When collecting the logs, Mr. Poor said to start with what you know in the logs: understand what the entries are doing, and if they are things you don’t care about, tune them out and look at the rest. For an example of known-and-didn’t-care-about: at a previous job, I was collecting the logs from my network switches. I didn’t care that the switch ports were going up and down, as long as the MAC address wasn’t changing. So, I filtered those out.
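To make that kind of tuning concrete, here is a minimal Python sketch of the idea. The sample log lines and patterns are invented for illustration (loosely Cisco-flavored), not taken from any real device, and in practice this filtering would live in your syslog pipeline or SIEM rather than a one-off script.

```python
import re

# Hypothetical switch syslog lines, invented for illustration only;
# real vendors each have their own message formats.
SAMPLE_LOGS = [
    "Jan 10 09:12:01 sw1 LINK-3-UPDOWN: Interface Gi0/1, changed state to down",
    "Jan 10 09:12:05 sw1 LINK-3-UPDOWN: Interface Gi0/1, changed state to up",
    "Jan 10 09:13:42 sw1 MAC-MOVE: Host aa:bb:cc:dd:ee:ff moved from Gi0/1 to Gi0/7",
]

# Known, benign noise: ports flapping up and down.
NOISE = re.compile(r"changed state to (up|down)")

def interesting(lines):
    """Yield only the log lines we haven't tuned out."""
    for line in lines:
        if NOISE.search(line):
            continue  # port up/down alone is expected noise
        yield line  # everything else (like a MAC move) is worth a look

for line in interesting(SAMPLE_LOGS):
    print(line)
```

Running it prints only the MAC-move line, which is exactly the point: tune out what you know and don’t care about, then review the rest.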
My Notes and Thoughts on the Preface:
The expectation is set at the beginning: “prevention eventually fails.” That phrase is something I’ve believed my whole career in security. Whatever controls are put in place to make a system less vulnerable, at some point someone will come along who is skilled enough to get past them.
The book does assume some base knowledge: TCP/IP networking, packet analysis, and attack techniques. Some of the books listed in the preface for reference are really good. I’m a fan of Counter Hack Reloaded by Ed Skoudis and Tom Liston. I’m also a fan of Chris Sanders’ book Practical Packet Analysis.
Mr. Sanders and Mr. Smith bring back up the concept and approach of NSM’s three primary sections:
- Collection
- Detection
- Analysis
One thing I’m disappointed about, and I do get on this soapbox occasionally, is IP addresses in written text. In Applied Network Security Monitoring’s case, the IP addresses are randomized in the text and screenshots. I’d much rather see better adoption of RFC 5737, which reserves three netblocks for use in documentation: 192.0.2.0/24 (TEST-NET-1), 198.51.100.0/24 (TEST-NET-2), and 203.0.113.0/24 (TEST-NET-3) (https://datatracker.ietf.org/doc/html/rfc5737). Even though the addresses are randomized to protect real systems, I feel more of us in the industry need to use the TEST-NET blocks when writing. There are cases where the TEST-NET addresses shouldn’t be used, but in 95% of the use cases out there, TEST-NET would have been the better choice.
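Since those three blocks come up so often, here is a quick Python sketch, using only the standard library’s ipaddress module, that checks whether an address falls inside one of the RFC 5737 ranges. Handy for linting your own drafts before publishing.

```python
import ipaddress

# The three documentation netblocks reserved by RFC 5737.
TEST_NETS = [
    ipaddress.ip_network("192.0.2.0/24"),     # TEST-NET-1
    ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2
    ipaddress.ip_network("203.0.113.0/24"),   # TEST-NET-3
]

def is_documentation_ip(addr: str) -> bool:
    """True if addr is inside one of the RFC 5737 blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in TEST_NETS)

print(is_documentation_ip("203.0.113.57"))  # True
print(is_documentation_ip("8.8.8.8"))       # False
```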
My thoughts and notes on Chapter 1:
Chapter 1 starts by defining some key terms. Mr. Sanders and Mr. Smith point to U.S. Department of Defense Instruction 8500.2 to define information security. While trying to find a link for it, I found that 8500.2 was canceled/rescinded as far back as 2014; see page 8 at the link provided, reference item (c).
Next, they went through the key NSM terms: Asset, Threat, Vulnerability, Exploit, Risk, Anomaly, and Incident. Most of them had good definitions attached. However, I disagreed with the definitions for Vulnerability and Risk.
Vulnerability was defined, more or less, as a flaw or weakness that a threat can exploit. The term the authors should have used was “exploitable flaw,” and the definition should say something along the lines of: something within the code, hardware, or procedures that increases the vulnerability of an asset to a threat’s attacks.
Risk was defined as the possibility that a threat will exploit a “vulnerability.” Further, the definition paragraph says it’s fruitless to try to quantify risk the way managers want. I wonder if the authors have heard of or read FAIR (Factor Analysis of Information Risk).
In my opinion, they would have been better served by other definitions, and I’m going to roll with the FAIR ones. If you want an example of some of FAIR’s definitions, see my blog post on Using FAIR with CTI – Some key definitions.
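To show that quantifying risk isn’t fruitless, here is a rough Python sketch of a FAIR-style Monte Carlo estimate: risk as loss event frequency times loss magnitude, simulated many times to get a distribution. The uniform distributions and dollar ranges are invented purely for illustration, not from the FAIR standard; a real analysis would use calibrated estimates.

```python
import random

# FAIR, at its simplest, treats risk as Loss Event Frequency (LEF)
# times Loss Magnitude (LM). The ranges below are made up for
# illustration; a real analysis would calibrate them with data.
TRIALS = 100_000

def simulate_annual_loss() -> float:
    lef = random.uniform(0.1, 2.0)          # loss events per year (assumed range)
    lm = random.uniform(5_000, 250_000)     # dollars per event (assumed range)
    return lef * lm

losses = sorted(simulate_annual_loss() for _ in range(TRIALS))

print(f"Median annualized loss: ${losses[TRIALS // 2]:,.0f}")
print(f"90th percentile loss:   ${losses[int(TRIALS * 0.9)]:,.0f}")
```

Even a toy model like this gives managers a range and a percentile instead of a shrug, which is the whole argument for FAIR-style quantification.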
The authors talk through a little history, covering Intrusion Detection Systems (IDS) and how those were used to try to monitor the network. They point out that intrusion detection falls under the second of the main concepts, Detection, and that people shouldn’t use IDS and NSM interchangeably.
From there, Mr. Sanders and Mr. Smith move into talking about NSM and cover U.S. Department of Defense Joint Publication 3-13 and the parts that make it up. I’ve seen the concepts listed in other places, including CTI classes and books. After that, they focus on some key points for NSM, such as Prevention Eventually Fails, Focus on Collection, Cyclical Processes, and Threat-Centric Defense.
There is a brief section on Vulnerability (exploitable flaw)-Centric vs. Threat-Centric Defense; the hockey analogy is beneficial. Or maybe I just think so because I enjoy watching hockey.
Next is the NSM Cycle. The section discusses and gives examples of Collection, Detection, and Analysis.
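As a toy illustration of how the three stages hand off to each other, here is a short Python sketch. Everything in it is invented (using TEST-NET addresses, of course); a real stack like Security Onion does each stage with dedicated tools and people.

```python
# Toy sketch of the NSM cycle: collect -> detect -> analyze.
# All data and thresholds are invented for illustration.

def collect() -> list[dict]:
    # Stand-in for pulling logs, flow, or packet data from sensors.
    return [
        {"src": "10.0.0.5", "dst": "203.0.113.9", "bytes": 420},
        {"src": "10.0.0.5", "dst": "203.0.113.9", "bytes": 9_800_000},
    ]

def detect(records: list[dict]) -> list[dict]:
    # Stand-in detection rule: flag unusually large transfers.
    return [r for r in records if r["bytes"] > 1_000_000]

def analyze(alerts: list[dict]) -> None:
    # The human step the Foreword warns is so often skipped.
    for alert in alerts:
        print(f"Review: {alert['src']} -> {alert['dst']} ({alert['bytes']} bytes)")

analyze(detect(collect()))
```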
The penultimate section, but the last I’m talking about in depth here, relates to NSM analysts. It looks at the skills needed, discusses analyst specialization and classification levels (the classic level 1, level 2, and level 3 analysts), and covers how to measure success. Finally, while not calling it human capital, the section ends up using a model similar to John Hubbard’s Human Capital Model from his “Virtuous Cycles: Rethinking the SOC for Long-Term Success” presentation at the SANS Security Operations Summit 2019.
The last section Mr. Sanders and Mr. Smith cover is installing Security Onion. I will talk about that in a different blog post, though, and walk through the setup for the version used in the book vs. the version currently available.