ELK Stack v2 (and a correction)

I’ve learned a lot since my last post. One of those things is that I was wrong: setting up Logstash on your Redis nodes isn’t such a bad idea after all. Another thing I have learned is that fluentd / td-agent is not as great as I thought it was. My revised plan, as depicted in the updated design below, is to use Logstash Forwarder on my non-Windows nodes and …

ELK Stack Design

I’ve been working on a new logging system based around Elasticsearch, Logstash, and Kibana. One of my biggest challenges was that all the recommended designs I found said that logs should go from a shipper to Redis. The problems with this are twofold:

  1. Logstash doesn’t seem like a good fit for Windows. The biggest issue is that it relies on Java, which isn’t an easy sell to any Windows admin I know. The other is that it simply didn’t work reliably in my testing. The 1.4.x series had performance issues, and the copy of 1.5.1 I just tried on Windows 7 is throwing
    Windows Event Log error: Invoke of: NextEvent
    Source: SWbemEventSource
    Description: Timed out

    errors under even the simplest of tests. Unlike other tools, it also requires specifying each Event Log you want to monitor individually, as opposed to being able to just grab them all.
  2. Not everything can have an agent on it, which means that I needed a way to pipe syslog into Redis.
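For context, the recommended designs all assume the same pipeline: shippers push events into a Redis list, and a central Logstash pulls them out and indexes them. A minimal sketch of the indexer side in Logstash 1.x config terms (the hostnames and list key here are placeholders):

```
# central indexer: pop events that shippers pushed into Redis,
# then index them into Elasticsearch
input {
  redis {
    host      => "redis.example.com"
    data_type => "list"
    key       => "logstash"
  }
}
output {
  elasticsearch { host => "es.example.com" }
}
```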

The Windows issues were solved by using NXLog as a log shipper, but that introduced another problem: NXLog doesn’t have a Redis output. On the plus side, NXLog seems to be the gold standard when it comes to getting at Event Log data, and it can convert log entries into JSON. That just leaves finding a way to add the Logstash-specific information to the message and then a way to insert that message into Redis.
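Here is roughly the shape of the NXLog config I’ve been testing; a sketch only, where the host, port, and type value are placeholders and the om_tcp target is whatever middleman ends up doing the actual Redis insert:

```
# nxlog.conf sketch — ship Windows Event Log entries as JSON over TCP
<Extension json>
    Module  xm_json
</Extension>

<Input eventlog>
    # im_msvistalog subscribes to the Event Log on Vista and later
    # without having to list each log individually
    Module  im_msvistalog
</Input>

<Output logstash>
    Module  om_tcp
    Host    logstash.example.com
    Port    5140
    # tag the event so Logstash can route it, then serialize to JSON
    Exec    $type = 'eventlog'; to_json();
</Output>

<Route eventlog_to_logstash>
    Path    eventlog => logstash
</Route>
```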

Windows 7 x64 and Underscore-CLI

Underscore-CLI is a great utility for working with JSON data. Below are the steps I took to get it running on my Windows 7 laptop:

  1. Install node.js
    1. Node adds a trailing \ to its PATH entry… to actually use it you must remove this, as Windows does not want it there
  2. Install Python
  3. Add Python to your path (something like C:\Python27)
  4. Underscore-CLI uses node-gyp… to get that to work on Windows 7 x64 you have to follow their guide at https://github.com/TooTallNate/node-gyp/wiki/Visual-Studio-2010-Setup. Be sure to pay attention to the part about utilizing the Windows 7 SDK command prompt.

Hyper-V, CentOS 6.5 kernel panic, and 7 long hours

In hopes of saving someone else the hours of work I just spent, here are my lessons learned from my first day of using Windows Server 2012 R2 Hyper-V.

Lesson 1:

Look at the defaults and read the descriptions… don’t just set things the way you would in VMware…

Lesson 2:

It seems that adding “vga=791” as a kernel parameter causes a kernel panic on Hyper-V, whereas it works great on VMware and VirtualBox. The interesting thing is that, unlike the other two hypervisors, Hyper-V gives me a decent-size console window by default, so I don’t even really need this option (yay). I just wish I had known this before spending 7 hours hunting down why I was getting a kernel panic after doing a kickstart install.
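For reference, the offending line in my kickstart looked something like the first one below (the other flags are just examples, not my actual file):

```
# ks.cfg excerpt — works on VMware/VirtualBox, kernel panics on Hyper-V
bootloader --location=mbr --append="vga=791"

# Hyper-V-safe version: just drop the framebuffer option
bootloader --location=mbr
```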


So far, things look good.  The learning curve has been very minimal and the install was dead-simple.  I am actually looking forward to getting some more time with Hyper-V.

Vagrant, Fusion, & DHCP Oddities

I’ve had some random weirdness that I thought was related to Vagrant’s VMware Fusion provider until I turned on debugging tonight. As it turns out, Fusion had decided at some point in the past to start storing its DHCP leases in vmnet-dhcpd-vmnet8.leases~ instead of vmnet-dhcpd-vmnet8.leases. The same was true for vmnet1 too. After quitting Fusion and running ‘sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop’, I removed vmnet-dhcpd-vmnet* so that all the leases would be reset. After that I reran ‘vagrant up’ and (finally) things worked as expected.

What’s missing from GitLab?

The other day I was asked what GitLab was missing and I realized that, really, it’s not much. The single biggest thing to me is the inability to create new projects and interact with existing ones from a remote shell session a la gh / GitHub CLI. Other than that it really comes down to polish and aesthetics. Below is my $0.02 based on interacting with GitLab as a person who runs a server and as an end user.
…

Configuration Management Part 3: Vagrant & Packer

To facilitate developing my Puppet code, the Pro Puppet book suggests using Vagrant. Seeing as I’d been meaning to get around to learning it for a while, I decided now was the time to finally do so. The only problem is that, being a responsibly paranoid SysAdmin, I was never a fan of basing my work on something whose contents I didn’t know, that I didn’t understand (a Vagrant box), or that could go away at any time.

Box building time
The solution to my dilemma was to learn how to use Vagrant and to make my own base boxes for it. Vagrant’s site does a good job of listing the minimum specs, and Puppet Labs publishes the recipes for their base boxes, built with Veewee, on GitHub. Between these two resources I was able to figure most things out and built a CentOS 6 VM that was to be my base. I was then able to use another tool called Packer to reference the VMX file and build a box from it.

I have a box, now what?
Once I built this first iteration of my box I set up an account on Vagrant Cloud, set up space on my personal server to host the boxes, and published my VMware Fusion box. The problem was that I couldn’t publish my Packer template because it was dependent on a custom VM.

Packer to the Rescue, Again
I then dove into Packer a bit more and, thanks to another resource found on GitHub, was able to take what I learned from my first box and produce base boxes for both VirtualBox & VMware Fusion using a fairly simple template file, some shell scripts, an ISO, and a kickstart file. These new boxes are exactly what I was aiming for. I’ve published the template on GitHub and, after a bit more testing, will be publishing the boxes on my Vagrant Cloud account.
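For anyone curious, the template is shaped roughly like this; a sketch under assumptions rather than my actual file (the mirror URL, checksum, and credentials are placeholders, and a matching virtualbox-iso builder sits alongside the VMware one):

```
{
  "builders": [
    {
      "type": "vmware-iso",
      "iso_url": "http://mirror.example.com/CentOS-6.5-x86_64-minimal.iso",
      "iso_checksum_type": "sha256",
      "iso_checksum": "REPLACE_ME",
      "http_directory": "http",
      "boot_command": [
        "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter>"
      ],
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "ssh_wait_timeout": "30m",
      "shutdown_command": "sudo /sbin/halt -p"
    }
  ],
  "post-processors": ["vagrant"]
}
```

Packer serves the kickstart file out of `http_directory` during the install, which is what lets one template drive both hypervisors from the same ISO.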

Next up:
Once those boxes are vetted I’ll be making versions with Puppet pre-installed so that I can get back to the book.

Configuration Management Part 2: puppetlabs-apache & puppet-lint

Today was a good day. I installed puppet-lint and ran it against a custom module I’m writing for my first node, and it found lots of issues that it was kind enough to tell me exactly how to resolve. I then got down to using my first module from Puppet Forge: puppetlabs-apache. Installing it was a piece of cake, but understanding how to use it took a bit of trial and error.

My First Puppetized Apache Server
One of the things my first node needs is an Apache install that can serve CGI files via http and https… seems simple enough, right? To facilitate this, I saw from the module’s docs that it makes a default vhost for http and, optionally, can do so for SSL too. I looked at the default values for the module’s parameters and decided they weren’t going to cut it, so I called the apache class and told it not to make a default site. That was simple since the entire code block was in the docs. Then I had to figure out how to call apache::vhost in my node definition.

Creating the vhost was a little more complicated but made perfect sense after a while and several puppet-lint runs. Once I crossed that bridge, I proceeded to take all the required settings from the install docs of what’s going on the node and added them to a newly defined default vhost and also to a new default SSL vhost. Again, trial and error and puppet-lint, but in the end all is well and I am now serving a placeholder “default.html” as the index in my Puppet-created document root.
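In case it helps anyone, here’s a minimal sketch of the shape this takes (the site name and docroot are hypothetical; the parameter names come from puppetlabs-apache’s docs):

```puppet
# don't let the module create its default vhost
class { 'apache':
  default_vhost => false,
}

# plain http vhost
apache::vhost { 'www.example.com':
  port    => 80,
  docroot => '/var/www/example',
}

# SSL vhost on 443
apache::vhost { 'www.example.com-ssl':
  port    => 443,
  docroot => '/var/www/example',
  ssl     => true,
}
```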

Up Next
Tonight I’m reading up on Puppet environments and am about to read about “Developing Puppet with Vagrant” in Pro Puppet, second edition. Tomorrow I plan to actually deploy the website’s vendor provided content and associated settings via Puppet…

Configuration Management Part 1: The Restart

As mentioned in my last post, I’ve decided to start over on my journey to doing configuration management in an environment where we treat our infrastructure as code. Today I kicked things off by setting up a new Puppet master on CentOS 6.5. Once my usual setup was applied to the system via a PXE boot & Kickstart, I installed Git and the puppetmaster package and was off.

Version Control
One of my main goals is to track everything in Git, so my first task was to change the group ownership of /etc/puppet to my puppetadmins group and give them write access. Then I needed to initialize a repo in that directory, tell Git that it’s a shared repository so other admins can work in it too, and tell Git to ignore the modules folder. I then applied the group permissions to everything inside the folder, set the setgid bit on modules & manifests, and ran setfacl on modules & manifests so that we admins retain rwx on all files and folders. Finally, I cloned my first module from our GitLab instance into a folder under modules.
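The steps above boil down to something like the following; a sketch that assumes a “puppetadmins” group exists, with a scratch directory standing in for /etc/puppet so it’s safe to run anywhere:

```shell
# scratch directory standing in for /etc/puppet
REPO_DIR=$(mktemp -d)
cd "$REPO_DIR" || exit 1

git init --shared=group .      # shared repo: other admins can commit too
echo 'modules/' > .gitignore   # modules are cloned separately, don't track them

mkdir -p modules manifests
chgrp -R puppetadmins . 2>/dev/null || true   # no-op if the group is absent
chmod -R g+w .
chmod g+s modules manifests                   # new files inherit the group

# default ACL so admins keep rwx on anything created later (if setfacl exists)
command -v setfacl >/dev/null &&
  setfacl -R -d -m g:puppetadmins:rwx modules manifests 2>/dev/null || true
```

From there, cloning a module from GitLab into modules/ works as usual, and the .gitignore keeps it out of the main repo’s history.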

Master Configs
This was a bit easier… I just made a node definition in site.pp and set my certname & dns_alt_names.
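A sketch of what those two settings typically look like (the hostnames are placeholders; as I read the Puppet docs, certname and dns_alt_names live in puppet.conf, while the node definition itself goes in site.pp):

```
# /etc/puppet/puppet.conf — certificate identity for the master
[master]
certname      = puppet.example.com
dns_alt_names = puppet,puppet.example.com
```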

Today’s Wrapup
With very little work, some time reading Pro Puppet, and some trial and error, I now have a working open source Puppet setup that’s tracked in Git.

Next Time
Next up is pulling in Puppet Labs Apache module and using it to enhance my new master and a node.

Foreman: too much voodoo

I finally got around to setting up Foreman at work and managing my first node with it.  After digging around, I found that I felt really boxed in using this setup because so much of the work is done behind the scenes in some magical way.  One of my main goals is to facilitate the concept of infrastructure as code and, like my code, track changes via git and store them in our GitLab instance.  The Foreman, as best I can tell, takes and hides everything it does inside a database, which prevents me from being able to apply any version control to its settings.  This is an unforeseen and unfortunate reality because the developers have made a really good looking product that can do a lot of really cool things.  For me, though, this is too much voodoo at this early a stage of our configuration management effort, and I think I’m going to back out my install and start over with a different approach that defines nodes in plain text .pp files.  I’m sure I’ll take advantage of pulling in data from some external source like Hiera and / or other systems we have to help make decisions dynamically, but I don’t think I want the configs themselves in a db… who knows; guess I’ll try it out and see.

On a brighter note, I imagine that I will eventually be able to find a good balance between being able to track things with git and being able to utilize Foreman’s Smart Proxy features to simplify deployments of new systems in my VMware environment.  I love the idea of being able to automate an entire deployment workflow that includes all of the following (and more):

  1. creating the VMware virtual machine
  2. creating a DHCP reservation
  3. creating an A record in DNS
  4. installing the OS
  5. joining the domain
  6. installing and configuring applications
  7. configuring the firewall on the host
  8. setting access rights
  9. running a security scan with Nessus
  10. configuring our F5 LTM if needed
  11. configuring the perimeter firewall if needed