Author Archives: Chris J

About Chris J

Chris J studies physical and information security. He started the Ann Arbor chapter of TOOOL and attended Eastern Michigan University, where he earned a degree in Applied Information Assurance. His work involves threat intelligence.

Raspi-nas

A couple of years ago, I don’t remember when, I built a small NAS using a Raspberry Pi 2 B version 1.1 and two 128 GB USB flash drives from Microcenter. It is called “raspi-nas”, and I built it following the How-To Geek guide: How to Turn a Raspberry Pi into a Low-Power Network Storage Device. It worked well for backing up our phones, which is all it is used for. It used wireless for the network connection.


How I’m currently using Python Virtual Environments

So as mentioned previously, I’m looking at using Python’s virtual environments to keep my code straight. Figuring out how to set them up was a bit of fun. I’m sure there are some great plugins for Atom, but I haven’t found them yet.

So far, here is how I’m using it. I’ve created two directories, .venv and Projects, both in my home folder. When I create a new project directory, like AtBS_Udemy, I create a matching env directory under .venv. In AtBS_Udemy’s case, it is AtBS_Udemy_env.
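That layout can even be scripted. Here’s a small sketch using the standard library’s venv module; the helper name and the with_pip flag are my own choices, not anything from the tutorials:

```python
from pathlib import Path
import venv


def make_project_env(project_name: str, home: Path = Path.home(),
                     with_pip: bool = True) -> Path:
    """Create ~/Projects/<name> and a matching ~/.venv/<name>_env virtualenv."""
    # Project code lives under ~/Projects, never inside the env itself.
    (home / "Projects" / project_name).mkdir(parents=True, exist_ok=True)
    env_dir = home / ".venv" / f"{project_name}_env"
    venv.create(env_dir, with_pip=with_pip)
    return env_dir


# make_project_env("AtBS_Udemy") would create ~/.venv/AtBS_Udemy_env
```

After that it’s just `source ~/.venv/AtBS_Udemy_env/bin/activate` and off to work.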

It’s actually working out pretty well so far, but I’ve only done this on two *nix-based boxes: a work VM and my travel laptop. We’ll have to see how this goes long term.

productivity vs tweaking

I’ve been wanting to switch back to a Linux-based system for a while. The main holdup has been school. Recently I got to rebuild my travel laptop to run Linux.

I started with Debian, but after two days and a bunch of tweaking of the system, I still hadn’t gotten to the point of actually starting work.

So out went Debian, and in moved Xubuntu. A couple of hours later I was up and running. I’m disappointed; I’d rather be running Debian. But I really don’t have the time to spend doing endless tweaking. I have several other things to do.

More thoughts on Python – Virtual Environment

Bottom Line Up Front: Experienced Pythonistas say to use virtual environments. Very few say how they relate to your project, or where things should be stored. Store the project code outside the virtual environment folder.

Going back again to “Thoughts on Python”, one of the things Wyatt suggested was to use virtual environments for all my projects. When I started rebuilding my page_watcher program, which watches parked domains for being moved to live sites, I thought, hey, why not.

Al Sweigart and Ryan Mitchell both mention using virtual environments in their books, but as far as I read in either, only in passing with no in-depth information. Google shows there are a lot of tutorials online about using virtual environments, and I won’t re-hash those.

Now, like I said, there are some Python virtual environment tutorials out there that seem pretty good. I read through the Real Python primer first and started setting up my first environment.

But things went sideways.

I had to install updates and new packages. So while I was fighting with Ubuntu’s automated software updater getting in the way and trying to install what I needed, I watched Corey Schafer’s Python Tutorial: virtualenv and why you should use virtual environments. Around the 8:30 mark he points out that you shouldn’t use the virtualenv to hold your project code, but not where the project code should be stored.

I had already started to write the code in the virtual environment folder, from within the virtual environment. Only one of those two actions was right. So I had to stop and back out. But then the question came up: “where does one store project code while using virtual environments?” Nothing I had come across really explained where to store the code. I tried asking on Twitter but got no results. So, back to Google for more reading.

The two best blog posts I found that answered it were one by Vanessa Osuka at Code Like a Girl, A Guide to Virtual Environments in Python, and one just as good by Chris Warrick, Python Virtual Environments in Five Minutes. They both covered the same information. Ms. Osuka covered it a little earlier in her article than Mr. Warrick did, but Mr. Warrick did so with a better example.

I’m still not sure how I’m going to set it up, mainly because of wrappers and not wanting to maintain multiple alias files (since I work on one of about six different computers / environments on any given project). But at least I know not to store the project in the virtualenv folder, but instead in its own place (usually ~/bin/project_name).

More thoughts on python – projects

This kind of picks up where the last “Thoughts on Python” post left off. One of the things I’ve learned over the last few weeks playing with Python is some new lexicon: things like projects, meaning programs, and linters, which are hooks that call out mistakes in the code to help fix them.

Anyway, besides the quick and dirty proxy I listed last time, I have two other projects I’m working on.

One takes a list of domain names that currently don’t have web pages, or have parked pages, and checks to see if they have changed to active. There are several ways to do this, but those methods didn’t work for me. A couple of the domains use GoDaddy, which seems to have several different pages hosting the parked page, returning different data each time the page is visited. So the simple way of using cURL and a hash doesn’t look like it will work. I’m thinking Requests with BeautifulSoup and .find().
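As a rough sketch of that idea, here’s how Requests plus BeautifulSoup’s .find() could spot a parking page. The marker phrases are purely my guesses; real ones would come from looking at actual GoDaddy parked pages:

```python
import requests
from bs4 import BeautifulSoup

# Assumption: phrases that tend to show up on registrar parking pages.
PARKED_MARKERS = ("is parked", "domain is for sale", "godaddy")


def looks_parked(html: str) -> bool:
    """Heuristic: does this page look like a registrar parking page?"""
    soup = BeautifulSoup(html, "html.parser")
    title = soup.find("title")  # .find() returns the first matching tag, or None
    text = (title.get_text() if title else "") + " " + soup.get_text()[:2000]
    return any(marker in text.lower() for marker in PARKED_MARKERS)


def went_live(domain: str) -> bool:
    """True if the domain now serves something that doesn't look parked."""
    resp = requests.get(f"http://{domain}", timeout=10)
    return resp.ok and not looks_parked(resp.text)
```

Unlike a cURL-and-hash check, this survives the parked page returning different bytes on every visit, as long as the telltale wording stays.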

The second uses data pulled from a Shodan search and searches an internal system at work for context. This is the one I learned the most from over the last week, mainly because it has changed several times. I’ve learned some web scraping tricks, mainly using Ryan Mitchell’s book Web Scraping with Python, 2nd ed. (Amazon affiliate link). I really want to work through the book from cover to cover, but it is mainly a reference guide at this point.

During this project, from WSwP2e, I learned how to use Sessions from Requests to capture authentication cookies and replay them during a session while scraping a website for data. I learned how to use BeautifulSoup’s .get_text() to print only the data I needed. Outside of the book, I learned how to drill down to get to the right part of the table. I also learned of the getpass module, to have a user input their password without revealing it to the screen or the shell’s history file.
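Put together, those three pieces look roughly like this. The login URL and form field names below are made up, since the real system is internal:

```python
import getpass

import requests
from bs4 import BeautifulSoup


def extract_cells(html: str) -> list:
    """Drill down to the table cells and return just their text via .get_text()."""
    soup = BeautifulSoup(html, "html.parser")
    return [td.get_text(strip=True) for td in soup.find_all("td")]


def scrape_with_login(login_url: str, data_url: str, username: str) -> list:
    """Log in once, then reuse the Session (and its auth cookies) to scrape."""
    # getpass reads the password without echoing it to the screen or history.
    password = getpass.getpass("Password: ")
    with requests.Session() as s:
        # The Session stores cookies from the login and replays them below.
        s.post(login_url, data={"user": username, "pass": password})
        resp = s.get(data_url)
        return extract_cells(resp.text)
```

The key point from the book is that the Session object, not the individual requests, is what carries the cookies along.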

After I got that all figured out and written, with open() and some testing on the table results to get past out-of-range problems, I found out there was an API option. So I can get the same data from a single URL in JSON. That will make getting the data easier, since it will be in the form of JSON keys (dictionary-like) and not in the form of rows in a table.
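The difference in handling is night and day. With the JSON route it’s just key lookups on a dictionary; the field names here are stand-ins, since I can’t show the real API:

```python
import json


def parse_context(payload: str) -> dict:
    """JSON parses into a dict, so fields are reached by key, not by table row."""
    record = json.loads(payload)
    return {"host": record["host"], "owner": record["owner"]}


def parse_many(payloads: list) -> list:
    """If the API only takes one item per call, a for loop covers the whole list."""
    return [parse_context(p) for p in payloads]
```

Whether I need that loop, or can feed the API the whole list in one request, is exactly what I’m waiting to hear back on.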

So the code is a mess right now, written in the old scraping way with the API mixed in. I’m waiting for the people who wrote the API to tell me if I’m going to have to write a for loop, or if I can feed it the whole list I need information on.

A third project I want to work on deals with collecting IOCs. The other week at work I was going through some Emotet-related emails, and the SOC analysts asked for all the related domains my team could find. So I was going to URLhaus to look up the domains we had from the PowerShell script, then grabbing the hash and all the domains the hash was found on.

I got real tired of: copy, go to the terminal window, open the file, paste, awk print and sort | uniq, copy, paste into a notepad file. I set the terminal command line up so all I had to do was press the up arrow. It would remove the old temp file, get the data, sort it, and then print it to the screen so I could copy and paste.
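That awk / sort / uniq step is easy enough to mirror in Python, which is where I expect this to end up eventually. A minimal sketch, assuming whitespace-separated lines with the domain in the first column:

```python
def sort_unique_domains(raw: str, field: int = 0) -> list:
    """Rough Python equivalent of awk '{print $1}' | sort | uniq."""
    seen = set()
    for line in raw.splitlines():
        parts = line.split()
        if len(parts) > field:
            seen.add(parts[field])
    return sorted(seen)
```

Same output as the pipeline, but with no temp file to clean up between runs.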

Even that was a bit of a mess to use, because it needed human interaction, and there were a few times the data didn’t copy, so I ended up repeating some copies a few times.

Without looking at the API for URLhaus, which I’ll get around to eventually, I want to write a script that, while running, will watch the clipboard, copy the data, manipulate it, sort it, and paste it to a file, or even just write it to the file. I’m still trying to flesh that one out, but it will be helpful beyond just the one site.
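One way to flesh it out, using pyperclip (the clipboard module Automate the Boring Stuff uses); everything here is a sketch of the idea, not working tooling:

```python
import time


def extract_new_items(clip_text: str, seen: set) -> list:
    """Pull whitespace-separated items out of a clipboard grab, skipping repeats."""
    fresh = []
    for item in clip_text.split():
        if item not in seen:
            seen.add(item)
            fresh.append(item)
    return fresh


def watch_clipboard(outfile: str, poll_seconds: float = 0.5) -> None:
    """Poll the clipboard and append anything new to a file."""
    import pyperclip  # third-party: pip install pyperclip

    seen = set()
    last = ""
    while True:
        text = pyperclip.paste()
        if text != last:  # only process when the clipboard actually changed
            last = text
            with open(outfile, "a") as f:
                for item in extract_new_items(text, seen):
                    f.write(item + "\n")
        time.sleep(poll_seconds)
```

Run watch_clipboard("iocs.txt") in a spare terminal, copy as normal on the site, and the file fills itself in; no re-copying when something gets missed.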

* Update 2024-10-06: changed to Amazon Affiliate Link, which I earn a commission from qualifying purchases.

Thoughts on Python

I’ve been trying to pick up more Python again. It’s hard, having to constantly put it on the back burner for college classes. At least I retain a little more each time.

A couple of weeks ago, @WyattRoersma posted an interesting link from Real Python about publishing to PyPI, which led to a great conversation. I was curious if I should post something. I wrote a quick and dirty module to import required proxy info for the boxes I use at work. Scrubbing it to share wasn’t that hard, but I wondered if it was worth sharing to PyPI. To be honest, I only wrote it because I was tired of having to copy and paste the same code from a file on my desktop every time I wanted to use it.

Anyway, Wyatt offered to review some of what I wrote on his Twitch channel. I didn’t get to see the show live; I had to watch it later. Man, it was brutal. I knew I was bad, but I didn’t know I was that bad. He didn’t look at the code I wanted, but instead looked at some of my older code: my one-per-environment (Windows, Cygwin, Linux) ping code. He made some great suggestions.

Since then, the first thing I did was stop using Notepad++ and Vim for coding. I’ve installed Atom. I’ve installed some linters (I didn’t even know what those were) that really helped with things like following PEP 8, one of Wyatt’s biggest comments about my code. However, Automate the Boring Stuff with Python doesn’t teach PEP 8, which of course means that I’m now trying to learn Python AND break bad habits.

Atom might be a bit of a crutch. It has spell checking, which my comments badly needed. It also has a linter to catch code that doesn’t match PEP 8, plus an autopep8-on-save option. Really, that one gets used for spacing on multi-line commands. I think I’m learning to make things a little more Pythonic, but I’m not sure. Though I apparently need to use more modules.

I will say this, in the last 3 weeks, coding Python has become fun again.

Docker and Remux part 2

In my last post I talked about how I played with Docker on a VM I constantly restage to its original state. Some of what is below can be found in my Peerlyst post too.

Considering how long it took to download the images, I decided, on a fresh revert, to install the REMnux images after updating the box and installing docker.io.

Using the thug image, I found that the container image doesn’t match the directions on the REMnux site, the Docker Hub page, or the GitHub page.

However, reading the Dockerfile gives the needed information.

The first thing wrong is the way thug is run now. To run thug, one has to do:

But before that, to run the container and be able to get the logs, the following has to be used:

/tmp/thug/logs is the current working directory in the Dockerfile on GitHub.

Remnux and Docker

At work, we have this thing on Fridays called power-up time. It is the last four hours of the week, to work on personal projects, test new ideas to see if they are worth implementing, or do self-improvement. Most weeks it is when I get to look at the most tickets doing tactical-level intelligence, since the rest of the week is filled with project or priority case work.

Recently, while working on tactical-level information for SOC tickets, I was able to add in a little fun and actually power up. I wanted to do some reverse engineering of the malware associated with the ticket, to see if there were more IOCs that could be extracted.

Earlier in the day I read an email on the SANS DFIR alumni list, in which someone talked about using REMnux with Docker. So later in the day, while working the ticket, and because I didn’t have a REMnux box, I decided to check out the Docker containers. This was also my first time working with Docker. I started at Lenny Zeltser’s REMnux Docker site.

I went to my Linux VM, a box that gets reset to a fresh-installed state via snapshot after each use. After a sudo apt install docker.io and a sudo docker pull remnux/pescanner, I had the container.

I ran it and learned a little bit about Docker. I also got an understanding of some of the information that VirusTotal displays under the Details tab.

Passed the GCTI

I know I haven’t posted much lately. I’ve been busy, and don’t have the time to research the cool things I want to, or read the books I want to.

I did recently pass the SANS FOR578 / GIAC GCTI exam, back in June.

So that is a thing. First SANS class taken, first GIAC exam passed. I’d share the embedded link, but it gives away too much personal information, so all that is here is the picture.
I’m hoping to take the OSINT and Python classes in the not too distant future.

scripts to decode base64 and hex

About a month ago, I added a couple of shell scripts to my DFIR GitHub repository. Three of the four scripts are used at work daily, in either a Linux terminal or a Cygwin terminal. The fourth script is something I use to help with quarantined mail, and isn’t really DFIR-based.

b64Decode.sh and hexConvert.bash take command-line arguments and report back the result. For example:
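The scripts themselves live in the repo; as a rough sketch of what they do, here is the same idea in Python. The function names are mine, not the scripts’, and I’m assuming plain UTF-8 payloads:

```python
import base64
import binascii


def b64_decode(s: str) -> str:
    """Decode a base64 string, roughly what b64Decode.sh reports."""
    return base64.b64decode(s).decode("utf-8", errors="replace")


def hex_decode(s: str) -> str:
    """Convert a hex string (with or without spaces) back to text,
    roughly what hexConvert.bash reports."""
    return binascii.unhexlify(s.replace(" ", "")).decode("utf-8", errors="replace")


# b64_decode("aGVsbG8=") and hex_decode("68 65 6c 6c 6f") both give "hello"
```

Handy for the obfuscated strings that show up in phishing attachments and PowerShell one-liners.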
