I've been using GoAccess to look at my logs for a while now. The other day I decided I wanted to be able to look at these stats for the different sites on my web server in a variety of ways, including: all data from all sites combined, all data on a per-site basis, and daily stats from each site kept for a week. The thing about wanting daily stats is that it helps if they are created in a way that only covers that day. That sounds simple, but logrotate generally runs around 3am. So what's the solution? Cron. To be more exact, run logrotate from cron and generate stats while you're at it.

# Puppet Name: rotate nginx logs
0 0 * * * /root/updatestats.sh

Now, if you are going to run logrotate from cron you'd better turn off the original one. Here's how I did that: $ cat /etc/logrotate.d/ » Read more
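The steps above can be sketched as a script for that crontab entry. This is a hypothetical take on what /root/updatestats.sh might contain: the log and report paths, the logrotate config name, and the per-site file naming are all assumptions, and DRY_RUN=1 (the default here) prints each command instead of running it so the flow can be read without logrotate or GoAccess installed.

```shell
#!/bin/bash
# Hypothetical /root/updatestats.sh -- a sketch, not the author's script.
# Paths and the logrotate config name are assumptions.
set -eu

LOG_DIR=${LOG_DIR:-/var/log/nginx}
STATS_DIR=${STATS_DIR:-/var/www/stats}
DRY_RUN=${DRY_RUN:-1}   # set to 0 on the real server
TODAY=$(date +%F)

# Print the command in dry-run mode, otherwise execute it
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

# 1. Rotate at midnight so the closed-out log covers exactly one day
run logrotate -f /etc/logrotate.d/nginx-manual

# 2. One combined report across every site's freshly rotated log
run goaccess "$LOG_DIR"/*.access.log.1 --log-format=COMBINED -o "$STATS_DIR/all.html"

# 3. Per-site daily reports, pruned after a week
for log in "$LOG_DIR"/*.access.log.1; do
  site=$(basename "$log" .access.log.1)
  run mkdir -p "$STATS_DIR/$site"
  run goaccess "$log" --log-format=COMBINED -o "$STATS_DIR/$site/$TODAY.html"
  run find "$STATS_DIR/$site" -name '*.html' -mtime +7 -delete
done
```

Rotating first and then reading the freshly rotated `*.1` files is what keeps each report scoped to exactly one day's traffic.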

 genebean        

Don't you just love it when package maintainers break your blog? Yeah, me too. Tonight I went to post an article (no, not this one) and found my site to be down. When I went to start it back up I got this:

[ghost ~]$ /usr/bin/npm start --production
node: error while loading shared libraries: libhttp_parser.so.2: cannot open shared object file: No such file or directory

As it turns out, the maintainer of the nodejs-6.10.3-1.el7.x86_64 package added this to their changelog:

* Wed May 10 2017 Stephen Gallagher <sgallagh@redhat.com> - 1:6.10.3-1
- Update to 6.10.3 (LTS) - https://nodejs.org/en/blog/release/v6.10.3/
- Stop using the bundled http-parser now that there is an upstream release with a new-enough version.

What they didn't do was update their dependencies to pull » Read more
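A quick sketch of chasing down an error like this: the node path comes from the error above, while the yum commands (left commented, since they need the affected RPM-based host) are my assumption about how one would find and install whatever package now provides the library.

```shell
#!/bin/sh
# Sketch: diagnose a missing shared library after a package update.
# The yum lines are assumptions about the fix and are left commented.
NODE_BIN=/usr/bin/node

# Ask the dynamic linker which libraries the binary cannot resolve
if [ -x "$NODE_BIN" ] && command -v ldd >/dev/null 2>&1; then
  ldd "$NODE_BIN" | grep 'not found' || true
fi

# On the affected host:
#   yum provides '*/libhttp_parser.so.2'   # which package supplies it?
#   yum install http-parser                # the now-unbundled dependency
echo "checked $NODE_BIN"
```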

 genebean        

Tonight I had to wave a sad goodbye to dd-wrt and revert back to a stock firmware. This travesty is because the dd-wrt firmware doesn't support the hardware NAT function on the TP-Link Archer C7 v2, which resulted in losing over two thirds of my bandwidth. Given that my ISP provides me with a full gigabit both up and down, that equated to getting only 200-300 megs each way instead of over 900 on a wired connection. On wireless things were even worse: I was getting 100-200 megs vs over 500.

OpenWRT

It was actually a note on the OpenWRT page that led me to discover all this, so I feel the need to give them a shout out and a "thanks."

Reverting to Stock

Getting back to stock was more complicated than expected at first but worked out in the end. Here are a couple of notes in case » Read more

 genebean        

Recently our Oracle DBA hit me up and said that, all of a sudden, some of his servers were showing a load average of 0.00, 0.00, 0.00. To diagnose this I started looking at our Zabbix dashboard to see when the load dropped off. I noticed it was on March 3rd, so I checked a second host and found that it also dropped off on the same day... interesting. This made me think it might be OS-related, so I decided to take a look at /var/log/yum.log to see if anything was installed or updated around the mystical date found in the graphs. To my surprise, not only was there an entry for that date, but it was for the Zabbix agent. A moment or two later I realized that that was when we were doing our upgrade of agents from 2.4 to » Read more
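The correlation step above can be sketched in one line: list the package activity for the day the graphs flat-lined. The date comes from the post; the `^Mar 03` pattern is an assumption matching the "Mon DD HH:MM:SS" timestamps yum.log uses.

```shell
#!/bin/sh
# Sketch: find package changes on the day the load graphs went flat.
LOG=${LOG:-/var/log/yum.log}
if [ -r "$LOG" ]; then
  grep '^Mar 03' "$LOG" || echo "no package activity on Mar 03"
fi
```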

 genebean        

Tonight we were trying to make the first post on my wife's blog and ran smack into an "HTTP error" message. When I looked in the console of my web browser I found an error 413 (Request Entity Too Large) message. After a bit of Googling it turns out that Nginx was the culprit. Apparently the default value of client_max_body_size is 1 meg. As I am sure you can imagine, most images grabbed with a camera phone are larger than that now. The solution was to add client_max_body_size 1024M; to my Nginx config. I picked the size for this setting so that it matched what I put in my php.ini file. Speaking of my PHP config, I am using PHP 7 and modified these settings:

upload_max_filesize = 1024M
post_max_size = 1024M
memory_limit = 1024M
max_execution_time = 180

Lastly, » Read more
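For context, a sketch of where the directive might land. The file path and server_name here are hypothetical; client_max_body_size can also be set at the http or location level, and setting it in the server block covers every upload endpoint for that site.

```
# /etc/nginx/conf.d/blog.conf (hypothetical path and names)
server {
    listen 80;
    server_name blog.example.com;

    # Allow request bodies up to 1 GB, matching upload_max_filesize and
    # post_max_size in php.ini; the built-in default of 1m is what
    # produces the 413 (Request Entity Too Large) response.
    client_max_body_size 1024M;
}
```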

 genebean