Category Archives: administration

The company I’m contracted to did a Business Continuity / Disaster Recovery test recently. We were called the day before and told the building would be closed, and that we would have to work from remote locations (read: home). The problem is, it was not an accurate test.
“it’s working don’t touch it, it’s not broken”
A running theme I’ve noticed of late is “it’s not broken, because it’s working, so don’t touch it, you’ll break it”. John Strand mentioned it when talking about Windows XP hitting end of life on Paul’s Security Weekly 367. Ben Ten and I talked a little about it today in regards to Heartbleed. Lastly, I just got off a 4 year project that existed mainly because of “it isn’t broke, so don’t fix it” thinking.
Here is the problem. IT / IT-Security sees something as “broken” when it is at end of life / end of service. When we can’t get parts for it anymore, when patches aren’t being made, and so on, we say we have to replace it. We say it’s “broken”, or at risk. However, that’s not how management sees it. They see a system that is still doing what it was purchased to do. It’s not broken, it’s just old, and it works fine.
IT / IT-Security doesn’t get to say when it’s broken; the “business” gets to say when it is broken. It is usually our fault as IT, though, for not having a new system in place when the old one finally stops doing what it was purchased for. A good example is a publishing company I worked at. We had reel-to-reel microfilm duplicators whose manufacturer had gone out of business. They ran NT4. The last I heard, they were still working like a champ, and the company still didn’t see a reason to invest in something new, because those machines were not broken, they were just old.
To a point it seems a little silly. Companies get to write off new equipment via depreciation. Investing in what they need to do business makes good business sense. But we live in a world of spending cuts and bottom-line thinking in the name of profit, so we end up seeing the don’t-fix-it-if-it’s-not-broke attitude come out.
Like I said, I just finished a 4 year migration project. I only worked on it the last 9 months, but every single person I had to interact with to migrate said the same thing: this solution works, migrating will cost us time and money, and we’re not moving because doing so will stop the production lines of the product the company makes. The “business” backed those people because, without justification, they said things would stop. The stance the “business” took was: the old stuff is working today; it is old, but not broken. Don’t fix it.
Preventive maintenance is like getting your teeth cleaned. You don’t do it because you like it, or because you can afford it. You do it because the cost of prevention is cheaper and less painful than the alternative. You don’t fix things when they’re broken; you fix them before they break so they don’t break. In both IT and cybersecurity, we need to learn to tell the business that in better terms than we do now.
WordPress and some security
I was recently listening to Paul’s Security Weekly episode 366: How Security Weekly got defaced, and started thinking about my own security posture around my WordPress sites. When I first created The Rats and Rogues Podcast site, I read everything I could find on WordPress security. There wasn’t much. Later, when I created this site, I still wasn’t impressed.
Snow shoes and Cyber security
There I was, kneeling down on my snow shoes, about 20 minutes into my little hike, my arm buried up to the elbow, reaching around in the hole my pole with a snow basket had just made.
What does this have to do with cyber security?
Duo’s two factor was easy
Finally got around to setting up two factor auth on the blog, using the Duo Security plugin. It took less than 5 minutes, just like their video on the plugin site said. I remember the SSH setup being harder than that.
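For comparison, the SSH side went through Duo’s duo_unix package rather than a plugin. From memory it looked roughly like the sketch below; the keys are placeholders and the exact file layout may differ by version.

    ; /etc/duo/login_duo.conf -- minimal Duo config for SSH logins
    ; ikey/skey/host come from the Duo admin panel (placeholders here)
    [duo]
    ikey = DIXXXXXXXXXXXXXXXXXX
    skey = deadbeefdeadbeefdeadbeefdeadbeef
    host = api-XXXXXXXX.duosecurity.com

    # /etc/ssh/sshd_config -- run login_duo after normal SSH auth
    ForceCommand /usr/sbin/login_duo

Between editing sshd_config, keeping a second session open in case you lock yourself out, and testing, that alone is more than 5 minutes.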
knowing what your tools do
When I changed my firewall rule policy, part of the reason for doing it was that I was getting tired of seeing dovecot:auth failures in the logs. People around the world were brute forcing my mail server, and the rules had grown to 100 lines of just blocking. I had thought they were coming from people hitting port 993 (IMAPS), and to a point they were. You can see below where it drops port 993 access attempts.
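The original log excerpt isn’t reproduced in this archive view, but a kernel log entry from an iptables LOG rule looks something like this (addresses and host name are illustrative):

    Apr 12 03:15:02 mail kernel: DROPPED: IN=eth0 OUT= SRC=203.0.113.45 DST=198.51.100.7 LEN=60 TTL=48 PROTO=TCP SPT=51234 DPT=993 SYN

The DPT=993 field is the give-away that someone is poking at IMAPS.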
All the ip addresses that DenyHosts blocked
One of the things I did after getting iptables tweaked was to clear my /etc/hosts.deny file. About 99% of the entries were put there by DenyHosts, a great little background daemon that watches for failed login attempts over SSH and then blocks the attacker by adding them to /etc/hosts.deny.
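The entries DenyHosts appends are just TCP wrappers rules, one attacker per line (addresses here are made up):

    # /etc/hosts.deny
    sshd: 203.0.113.45
    sshd: 198.51.100.23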
Since I know some people are looking for these types of addresses for different reasons, and there are over 2,300 unique ones in the list, I shared it publicly. You can get the list at pastebin.
“failed” – my shell script to see failed login attempts
I mentioned it earlier, so I thought I’d share. After the fold is my shell script, called failed, which I used to use to see the SSH brute force attempts on my server. Since I now do default deny on iptables, it doesn’t see much use.
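The script itself is after the fold and not shown in this archive view; a minimal sketch of the same idea, assuming sshd logs to /var/log/auth.log (Debian-style), might look like this:

    #!/bin/sh
    # failed (sketch): count failed SSH password attempts by source address.
    # Prints a count next to each attacking IP, busiest first.
    grep 'Failed password' /var/log/auth.log \
      | awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1) }' \
      | sort | uniq -c | sort -rn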
Firewalls and the default deny rule, does that make us bad?
Over the last few weeks I’ve been thinking about redoing my iptables rules with a default deny. But does dropping everything that isn’t explicitly allowed make us bad netizens? It actually makes it harder to fix the original problem: the fact that there are attacks to begin with, and the fact that boxes are compromised.
Over the last few months, since fixing the IMAPS part of my mail server, I noticed people hitting the server and failing to log in. These were not targeted attacks; they were automated bots using the same usernames and likely passwords. I’d block them at the firewall, but every day there would be at least 2 new networks, and not all of them from overseas.
For historical reasons I make a point of contacting the abuse address for North American companies, and some European ones. I have found that the large cable company ISPs usually turn out to be black holes. The smaller ones, though, actually reply.
Last week I went through a week’s worth of logs, and out of the 47 IP addresses, there were about 30 networks. I sent 17 emails and blocked only 13 networks. From that, 9 replied. Granted, they were mostly auto-replies, but 2 were interesting. One was from an ISP in England, thanking me for letting them know. The other was an educational ISP in California; they replied back with where they found the problem and their IR procedures, along with a thank you.
That worked because I wasn’t blocking the inbound connections, and was letting other tools protect the server’s processes. However, the iptables rules had become larger than I wanted to maintain (over 100 lines) for inbound access. So I rewrote them using a default deny. I now state what on my network can be talked to, and in some cases by which networks. Everything else gets dropped and logged.
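A minimal sketch of what that looks like; the ports and the network allowed to reach IMAPS are examples, not my actual rules:

    # Default policy: drop anything not explicitly allowed
    iptables -P INPUT DROP
    # Always allow loopback and replies to connections we started
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # Mail has to be reachable from anywhere
    iptables -A INPUT -p tcp --dport 25 -j ACCEPT
    # IMAPS only from one trusted network (example range)
    iptables -A INPUT -p tcp --dport 993 -s 203.0.113.0/24 -j ACCEPT
    # Log whatever is about to fall through to the DROP policy
    iptables -A INPUT -j LOG --log-prefix "DROPPED: "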
The downside to this is that while I can see the dropped connection attempts, I can’t see what the people are trying to do. Is it just a null hit, a failed login attempt, or something else? I also can’t report it back to other interested parties. While I feel better about their failures, I realize that I’m a bad netizen, because I can’t contact their upstream with logs showing the problem.
Yes, I know I could set up a honeypot system and fight for both the user and the internet that way, but that feels more like being the complaint department than trying to solve things.
While I am going to keep using the default deny, because it’s easier to handle the rules on the firewall, I still don’t like that I’m walling myself off from the real problem instead of trying to fix the underlying issue.
Over Thinking Problems
I think one of the problems we may have in this industry is overthinking, doing more than the problem needs. For example, I upgraded my personal VPS recently, the one that runs this site and Rats and Rogues. It required a reboot, but because I rarely reboot this box, I keep forgetting that my iptables rules aren’t persistent. I usually remember and restore them fairly quickly after it reboots.
The night of the upgrade wasn’t much different, except I messed up the command. Being a lazy admin, I use the built-in tools to do the work for me; I love control-r and how it scrolls through your history based on a few characters you type. Well, instead of iptables-restore < firewall.rulz I typed iptables-save > firewall.rulz. Yes, I overwrote my rules with nothing.
My very first thought was: WOOHOO, I get to do forensics on my live system! I went to Twitter to brag, though I’m not sure people realized that was the point. @secbuff asked why not restore from backup. He was right. The majority of my rules block SSH brute force attempts (the ones that make it past DenyHosts), mail relay attempts, and user account enumeration. While playing forensics would be cool, this is a live host on the internet with services that do get attacked. It would have left the box exposed way too long, and was a case of overthinking the problem.
So I grabbed a backup file. Instead of uploading it, though, I opened it in a text editor, hand sorted the rules by network number, and then pasted them into the terminal window. I also finally dealt with the persistence issue; we’ll see if the iptables-persistent package worked right on the next reboot. Oh, and since I add networks on a regular basis (when reviewing my logs), I wrote a small shell script to make two copies of the rules in different locations, with a spare backup.
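Something along these lines; the paths are examples, not the actual locations I use:

    #!/bin/sh
    # Dump the live iptables rules and keep copies in two places,
    # plus a dated spare so a fat-fingered restore isn't fatal.
    set -e
    iptables-save > /etc/iptables/rules.v4
    cp /etc/iptables/rules.v4 /root/firewall.rulz
    cp /etc/iptables/rules.v4 /root/backups/firewall.rulz.$(date +%Y%m%d)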