Tag Archives: tools

Walking Through Applied Network Security Monitoring – Foreword through Chapter 1

Back in December (it really doesn’t feel that long ago), I talked about how I was prepping for a project.

The end goal is to brush up on Network Security Monitoring (NSM) and use it to better monitor my home network. I occasionally check the logs, but I think I would be more active about it if I had a centralized tool to help. Right now, I have a log of blocked-domain alerts in my PFSense firewall’s PFBlocker-NG reports screen. Most of the entries are tracking-related domains that the Pi-Hole isn’t blocking, so they make it to the second block list on the firewall.

Note: I say my home network, not my home lab. As I’ve said in the past, I no longer maintain a home lab due to cost and space. I have parts of my network isolated, but I wouldn’t call that a lab.

Continue reading

Current Python Working Environment.

Over the last nine to ten months, I’ve changed how I’ve been using Python, again.

Working environment:

I work in either Debian or Xubuntu Linux, or in Windows Subsystem for Linux (WSL) Debian. I prefer Debian on bare-metal hardware. The VMs I use at work are usually Xubuntu (faster, easier setup). My work laptop runs Windows 10 Enterprise, which is where WSL comes in.

Continue reading

How I’m currently using Python Virtual Environments

So, as mentioned previously, I’m looking at using Python’s virtual environments to keep code straight. Figuring out how to set them up was a bit of fun. I’m sure there are some great plugins for Atom, but I haven’t found them yet.

So far, here is how I’m using it. I’ve created two directories, .venv and Projects, both in my home folder. When I create a new project directory, like AtBS_Udemy, I create a matching env directory under .venv. In AtBS_Udemy’s case, that’s AtBS_Udemy_env.
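For example, assuming Python 3’s built-in venv module, setting up the AtBS_Udemy project looks roughly like this:

    mkdir -p ~/Projects/AtBS_Udemy ~/.venv
    python3 -m venv ~/.venv/AtBS_Udemy_env
    source ~/.venv/AtBS_Udemy_env/bin/activate   # work inside the env
    deactivate                                   # drop back out when done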

It’s actually working out pretty well, but I’ve only done this on two *nix-based boxes so far: a work VM and my travel laptop. We’ll have to see how this goes long term.

More thoughts on Python – projects

This is kind of picking up where the last “Thoughts on Python” post left off. One of the things I’ve learned over the last few weeks playing with Python is some new lexicon. Things like “projects,” to mean programs, and “linters,” which are tools that call out mistakes in the code to help you fix them.

Anyway, besides the quick and dirty proxy I listed last time, I have two other projects I’m working on.

One is taking a list of domain names that currently don’t have web pages, or have parked pages, and checking to see if they have changed to active. There are several ways to do this, but those methods didn’t work for me. A couple of the domains are parked with GoDaddy, which seems to use several different pages to host the parked page, returning different data each time the page is visited. So the simple approach of using cURL and comparing a hash doesn’t look like it will work. I’m thinking Requests with BeautifulSoup and .find().
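A rough sketch of the idea (the parked-page marker strings are just guesses for illustration; the real pages would need inspecting):

    import requests                 # third-party: pip install requests
    from bs4 import BeautifulSoup   # third-party: pip install beautifulsoup4

    # Placeholder markers; real parked pages would need checking.
    PARKED_MARKERS = ("domain is parked", "is for sale")

    def looks_active(domain):
        """Return True if the domain serves something other than a parked page."""
        try:
            resp = requests.get("http://" + domain, timeout=10)
        except requests.RequestException:
            return False  # nothing answering at all
        soup = BeautifulSoup(resp.text, "html.parser")
        title_tag = soup.find("title")  # .find() returns the first matching tag
        title = title_tag.get_text().lower() if title_tag else ""
        return not any(marker in title for marker in PARKED_MARKERS)

    for domain in ("example.com", "example.net"):
        print(domain, looks_active(domain))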

The second uses data pulled from a Shodan search and looks up context for me in an internal system at work. This is the one I’ve learned the most from over the last week, mainly because it has changed several times. I’ve learned some web-scraping tricks, mainly using Ryan Mitchell’s book Web Scraping with Python, second ed. (Amazon affiliate link). I really want to work through the book from cover to cover, but mainly it is a reference guide at this point.

During this project, from WSwP2e, I learned how to use Sessions from Requests to capture authentication cookies and replay them during a session while scraping a website for data. I learned how to use BeautifulSoup’s .get_text() to print only the data I needed. Outside of the book, I learned how to drill down to the right part of the table. I also learned about the getpass module, to have a user input their password without revealing it on the screen or in the shell’s history file.
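A minimal sketch of the pattern (the URLs and form-field names are made up; the real internal system differs):

    import getpass

    import requests
    from bs4 import BeautifulSoup

    # Hypothetical URLs and form fields, for illustration only.
    LOGIN_URL = "https://internal.example.com/login"
    DATA_URL = "https://internal.example.com/report"

    username = input("Username: ")
    password = getpass.getpass("Password: ")  # nothing echoed to screen or history

    with requests.Session() as session:
        # The Session captures the authentication cookies from the login
        # and replays them on every later request.
        session.post(LOGIN_URL, data={"user": username, "pass": password})
        resp = session.get(DATA_URL)

    soup = BeautifulSoup(resp.text, "html.parser")
    table = soup.find("table")  # drill down to the first table on the page
    if table is not None:
        for row in table.find_all("tr"):
            # .get_text() pulls just the text out of each cell
            print(" | ".join(cell.get_text(strip=True) for cell in row.find_all("td")))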

After I got that all figured out and written, using with open() and some testing on the table results to get past index-out-of-range problems, I found out there was an API option. So I can get the same data from a single URL in JSON. That will make getting the data easier, since it will be in the form of JSON keys (dictionary-like) and not rows in a table.
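Something like this, assuming a made-up endpoint:

    import requests

    # Hypothetical endpoint, parameters, and keys; the real internal API differs.
    resp = requests.get("https://internal.example.com/api/v1/hosts",
                        params={"ip": "192.0.2.10"})
    record = resp.json()  # parsed straight into dicts and lists
    print(record["hostname"], record["owner"])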

So the code is a mess right now, written with the old scraping approach and with the API calls mixed in. I’m waiting for the people who wrote the API to tell me if I’m going to have to write a for loop or if I can feed it the whole list I need information on.

A third project I want to work on deals with collecting IOCs. The other week at work, I was going through some Emotet-related emails, and the SOC analysts asked for all the related domains my team could find. So I was going to URLhaus to look up the domains we had from the PowerShell script, then grabbing the hash and all the domains that hash was found on.

I got really tired of the cycle: copy, go to the terminal window, open a file, paste, run awk print and sort and uniq, then copy and paste into a notepad file. So I set the terminal command line up so that all I had to do was hit the up arrow. It would remove the old temp file, get the data, sort it, and then print it to the screen so I could copy and paste.

Even that was a bit of a mess to use, because it needed human interaction, and there were a few times the data didn’t copy, so I ended up repeating some of the copies.

Without looking at the API for URLhaus, which I’ll get around to eventually, I want to write a script that, while running, will watch the clipboard, grab the data, manipulate it, sort it, and paste it into a file, or even just write it to the file directly. I’m still trying to flesh that one out, but it will be helpful beyond just the one site.
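Roughly what I have in mind, using the pyperclip module from Automate the Boring Stuff (the clean-up step is just a placeholder):

    import time

    import pyperclip  # third-party: pip install pyperclip

    seen = set()
    last = ""
    try:
        while True:  # poll the clipboard until interrupted with Ctrl-C
            current = pyperclip.paste()
            if current != last:
                last = current
                # Placeholder clean-up: split into lines, drop blanks and repeats.
                for line in current.splitlines():
                    line = line.strip()
                    if line:
                        seen.add(line)
            time.sleep(0.5)
    except KeyboardInterrupt:
        with open("iocs.txt", "w") as out:  # output file name is arbitrary
            out.write("\n".join(sorted(seen)) + "\n")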

* Update 2024-10-06: changed to Amazon Affiliate Link, which I earn a commission from qualifying purchases.

Thoughts on Python

I’ve been trying to pick up more Python again. It’s hard, having to constantly put it on the back burner for college classes. At least I retain a little more each time.

A couple of weeks ago, @WyattRoersma posted an interesting link from Real Python about publishing to PyPI, which led to a great conversation. I was curious whether I should post something. I wrote a quick and dirty module to import required proxy info for the boxes I use at work. Scrubbing it to share wasn’t that hard, but I wondered if it was worth sharing to PyPI. To be honest, I only wrote it because I was tired of having to copy and paste the same code from a file on my desktop every time I wanted to use it.

Anyway, Wyatt offered to review some of what I wrote on his Twitch channel. I didn’t get to see the show live; I had to watch it later. Man, it was brutal. I knew I was bad, but I didn’t know I was that bad. He didn’t look at the code I wanted him to, but instead looked at some of my older code: my one-per-environment (Windows, Cygwin, Linux) ping code. He made some great suggestions.

Since then, the first thing I did was stop using Notepad++ and Vim for coding. I’ve installed Atom, along with some linters (I didn’t even know what those were), which really helped with things like following PEP 8, one of Wyatt’s biggest comments about my code. However, Automate the Boring Stuff with Python doesn’t teach PEP 8. Which of course means that I’m now trying to learn Python AND break bad habits.

Atom might be a bit of a crutch. It has spell checking, which my code did badly on in the comments. It also has a linter to catch code that doesn’t match PEP 8, plus an autopep8-on-save option; really, that one gets used for spacing on multi-line statements. I think I’m learning to make things a little more Pythonic, but I’m not sure. Though I apparently need to use more modules.

I will say this: in the last three weeks, coding Python has become fun again.

scripts to decode base64 and hex

About a month ago, I added a few shell scripts to my DFIR GitHub repository. Three of the four scripts are used at work daily, in either a Linux terminal or a Cygwin terminal. The fourth script is something I use to help with quarantined mail, and isn’t really DFIR-based.

b64Decode.sh and hexConvert.bash take command-line arguments and report back the result. For example:

Continue reading

more mailserver fun

I’m still working through my quarantine folders. There are about 300 emails in each folder, and there are 62 folders, named 0-9, a-z, and A-Z. I don’t know why SpamAssassin / Amavisd on Debian does it that way, but it does.

Anyway, going through them one at a time with zless, and then rm, was a bit of a pain. So I wrote a quick little one-liner to help.
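It was something along these lines (reconstructing from memory):

    for f in *; do zless "$f"; rm -i "$f"; done   # page through each mail, then confirm delete or keep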

The problem was that not all of the files are in gzip format, so it didn’t display those. And going in and out of less’s pager caused an annoying flash between the pager and the normal terminal output.

So I improved it, using zcat, because I had some issues with zgrep not supporting some grep switches, like recursive.
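The improved version was roughly:

    for f in *; do echo "== $f =="; zcat "$f" | grep -E '^(To|From|Subject|Date):'; done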

Now it didn’t launch the pager, so no flashing. It also gave me just the To, From, Subject, and Date fields, so I could decide whether to delete based on that block of info. The downside was that it still didn’t handle the non-gzip files.

So when I got up today, I thought, why not create a shell script to do this? And I could add in a feature to release false positives that SpamAssassin put in the folders.

So I now have a Mail Administration script in my DFIR repository on GitHub. It checks whether each file is gzip or not, uses the right form of grep, shows the info, and asks what to do with the file: release or delete (or nothing, if you don’t answer r or d).
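The heart of it is something along these lines (a simplified sketch, not the exact script):

    #!/bin/bash
    # For each quarantined file: show key headers, then ask what to do.
    for f in *; do
        # Pick plain grep or zgrep based on whether the file is gzip-compressed.
        if file "$f" | grep -q 'gzip'; then
            GREP=zgrep
        else
            GREP=grep
        fi
        echo "== $f =="
        $GREP -E '^(To|From|Subject|Date):' "$f"
        read -r -p "Release, delete, or skip? (r/d/anything else) " answer
        case "$answer" in
            r) amavisd-release "$f" && rm "$f" ;;  # hand it back to the mail system
            d) rm "$f" ;;
            *) : ;;  # leave it for later
        esac
    done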

Still some minor issues with the script:

  1. Must be run as root, or as someone else that has access to the virusmail sub-directories. In my case that means root, since the mail accounts have /bin/false set up for shells.
  2. To be more portable, it has to be called from the spam sub-directory. In my case, spam is in /var/lib/amavis/virusmails/, which means I have to go there and then into one of the 0-9, a-z, or A-Z directories first. Like so:

         cd /var/lib/amavis/virusmails/0
  3. I still have 300 or so emails in each folder, so I’d rather work one folder at a time right now to clear them.

Future plans for the script:
Ask the user where their spam folder is, so the script can be called from outside those folders and can enumerate all the sub-folders.

I also have to find out if the 0-9, a-z, A-Z naming is the same for all versions of the software, or if that is just a Debian thing.

SPF, DKIM, DMARC, and ADSP

For a while now, I’ve been having problems with DKIM. It wasn’t working, my logs always had the same error, and I’d look for a fix but never find anything useful.

Today I decided to go through my mail quarantine folders. In them I found several emails from a friend who is having problems with spammers using his email address. None of them went through his mail server; they’re all spoofed. We’ve compared our SPF records, and they look right. So I went and looked up why I’m seeing all these emails.

Turns out that not all mail admins have set up their servers to check SPF and block on failure. That was my problem.

So I went and found a howto for my operating system to fix SPF with my Mail Transfer Agent (MTA). The document, provided by my VPS hosting provider, covered how to set up SPF, how to configure my MTA to quarantine emails that fail SPF, a DKIM walkthrough, an ADSP howto, and a DMARC howto, all on the same page.

First things first: I fixed SPF on the inbound side, so now it should do what it needs to. Then I figured, since I still had time, I’d go after the DKIM problem.

So I backed up my existing files and followed along. AND NOTHING WORKED! Still the same problem. Heck, even the same error message.

So, one complete purge and reinstall later, I started completely fresh. Nothing old, not even the old backup files.

And it still didn’t work. systemctl status -l opendkim.service and journalctl -xe were not much help either. Neither one gave enough information about what was wrong.

I did some searching through the logs and found that even after changing from a port to a local socket for the milter, it still didn’t work. But this time I found that it couldn’t see the files, and when I searched the directory the local socket should be in, it wasn’t there. After much googling, I found an old bug report for Debian (my OS of choice). If the socket and pid files were missing, do this:
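Going from memory, the fix was along these lines: regenerate opendkim’s systemd unit files so they pick up the socket settings, then reload and restart.

    sudo /lib/opendkim/opendkim.service.generate
    sudo systemctl daemon-reload
    sudo systemctl restart opendkim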

And suddenly everything was working. I sent test emails to some verification services, and everything seems to be working. At least, they told me that everything works.

Then I figured, why not, and set up the ADSP and DMARC records in DNS as well.
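The records look something like this, with example.com standing in for my domain and the policy values kept simple:

    _dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
    _adsp._domainkey.example.com. IN TXT "dkim=all"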

Really, I’m just happy to get past the problem where DKIM wasn’t working. Now to go finish clearing out the quarantine files.

Validate data before sharing.

I’m going to have to add a couple more slides to my Threat Intelligence: From Zero to Basics deck. But I told GrrCON that I would have an updated deck from Circle City Con anyway.

Over the last two weeks, I’ve seen some stuff shared publicly on Threat Intelligence Platforms that really shouldn’t have been. The data wasn’t valid at the time of sharing.

Continue reading