Tonight I had to wave a sad goodbye to dd-wrt and revert back to the stock firmware. This travesty is because the dd-wrt firmware doesn't support the hardware NAT function on the TP-Link Archer C7 v2, which cost me over two thirds of my bandwidth. Since my ISP provides a full gigabit both up and down, that equated to getting only 200-300 megs each way instead of over 900 on a wired connection. On wireless things were even worse: I was getting 100-200 megs vs over 500. It was actually a note on the OpenWRT page that led me to discover all this, so I feel the need to give them a shout out and a "thanks." Getting back to stock was more complicated than expected at first but worked out in the end. Here are a couple of notes in case » Read more

Recently our Oracle DBA hit me up and said that all of a sudden some of his servers were showing a load average of 0.00, 0.00, 0.00. To diagnose this I started looking at our Zabbix dashboard to see when the load dropped off. I noticed it was on March 3rd, so I checked a second host and found that it had dropped off on the same day too... interesting. This made me think it might be OS-related, so I decided to take a look at /var/log/yum.log to see if anything was installed or updated around the mystical date found in the graphs. To my surprise, not only was there an entry for that date, but it was for the Zabbix agent. A moment or two later I realized that that was when we were doing our upgrade of agents from 2.4 to » Read more
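
For anyone retracing this, something like the commands below covers the same ground; the item key and log path are the stock CentOS/RHEL defaults and may differ on your systems:

    # Load average straight from the kernel, for comparison with the graphs:
    cat /proc/loadavg

    # Ask the local Zabbix agent for the same value (stock item key; adjust
    # this if your templates use a different one):
    zabbix_agentd -t 'system.cpu.load[all,avg1]'

    # See what yum installed or updated on the day the graphs went flat:
    grep -i zabbix /var/log/yum.log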

Tonight we were trying to make the first post on my wife's blog and ran smack into an "HTTP error" message. When I looked in my web browser's console I found an error 413 (Request Entity Too Large) message. After a bit of Googling it turned out that Nginx was the culprit. Apparently the default value of client_max_body_size is 1 meg. As I am sure you can imagine, most images grabbed with a camera phone are larger than that now. The solution was to add client_max_body_size 1024M; to my Nginx config. I picked the size for this setting so that it matched what I put in my php.ini file. Speaking of my PHP config, I am using PHP 7 and modified these settings: upload_max_filesize = 1024M, post_max_size = 1024M, memory_limit = 1024M, and max_execution_time = 180. Lastly, » Read more
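
Pulled together, the change looks roughly like this; the file locations and the php-fpm service name are typical defaults and may differ on your distro:

    # /etc/nginx/nginx.conf (inside the http or server block):
    #     client_max_body_size 1024M;
    #
    # /etc/php.ini:
    #     upload_max_filesize = 1024M
    #     post_max_size       = 1024M
    #     memory_limit        = 1024M
    #     max_execution_time  = 180
    #
    # Check the Nginx config, then pick up both changes:
    sudo nginx -t && sudo systemctl reload nginx
    sudo systemctl restart php-fpm   # service name varies, e.g. php7.0-fpm on Debian/Ubuntu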

This weekend I decided to check out Grafana. My first test for it was setting up the Zabbix backend. This went much better than I had expected, so I started looking at what other data I could pull in. It turns out that Grafana may well be a great tool for centralizing data and metrics from disparate sources. The consensus on the interwebs, as best as I can tell, is that InfluxDB is the backend I should store my metrics in, so I'm going to try that next. Once InfluxDB is set up my plan is to try out some one-off inputs to it such as Foreman and Puppet stats via foreman_influxdb, VMware stats via vsphere-influxdb-go, and Veeam metrics via veeam_grafana. I'm also planning to check out several of the inputs listed on the Telegraf site, including Apache, Nginx, MySQL, PostgreSQL, MS SQL, sysstat, memcached, php-fpm, and passenger. One of the » Read more
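
As a rough sketch of where this is headed, a Telegraf drop-in along these lines would point the agent at InfluxDB and enable a couple of those inputs; the URLs, database name, and status endpoints are assumptions rather than anything I have running yet:

    # Hypothetical /etc/telegraf/telegraf.d/metrics.conf:
    #
    #     [[outputs.influxdb]]
    #       urls = ["http://127.0.0.1:8086"]
    #       database = "telegraf"
    #
    #     [[inputs.nginx]]
    #       urls = ["http://localhost/nginx_status"]
    #
    #     [[inputs.apache]]
    #       urls = ["http://localhost/server-status?auto"]
    #
    # Run the inputs once to confirm they collect, then restart the agent:
    telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d -test
    sudo systemctl restart telegraf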

When I started switching everything I could over to https-only, I was under the impression that the only option was to tie each host to a single certificate unless I wanted to shell out the big bucks for a wildcard cert. This also meant one host per IP address if I wanted to use the standard port 443. That was two or three years ago. Just a few months ago I learned that SAN certificates were recognized by all the major browsers and started taking advantage of them to reduce the burden of needing two certs to cover things like example.com and www.example.com. In my mind this still required two IP addresses though (one per domain). All this changed tonight when I decided on a whim to see if you could set up Nginx to recognize name-based virtual hosts that were all tied to a single SAN certificate » Read more
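
In practice the idea boils down to multiple server blocks listening on the same IP and port, each with its own server_name but all pointing at the same SAN certificate; Nginx then routes each request by matching the Host header against server_name. A minimal sketch, where the domains, paths, and cert file names are placeholders:

    # Hypothetical /etc/nginx/conf.d/san-vhosts.conf:
    #
    #     server {
    #         listen 443 ssl;
    #         server_name example.com www.example.com;
    #         ssl_certificate     /etc/nginx/ssl/san-bundle.crt;
    #         ssl_certificate_key /etc/nginx/ssl/san-bundle.key;
    #         root /var/www/example.com;
    #     }
    #
    #     server {
    #         listen 443 ssl;
    #         server_name example.org www.example.org;
    #         ssl_certificate     /etc/nginx/ssl/san-bundle.crt;
    #         ssl_certificate_key /etc/nginx/ssl/san-bundle.key;
    #         root /var/www/example.org;
    #     }
    #
    # Test the config and reload:
    sudo nginx -t && sudo systemctl reload nginx

The only requirement is that every name used in a server_name actually appears in the certificate's Subject Alternative Name list.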
