<podcast:txt> tag

User interaction is done at this point. The service still needs to see the updated feed, but that is beyond the bits I want to talk about here.
This entire flow could be integrated into PowerPress and be completely transparent to the user who installs the plugin in WordPress. Nothing about this flow requires the user to configure anything they haven’t already configured if running PowerPress today because:
Given that to operate PowerPress a user must already know how to log into WordPress, this can be a seamless enhancement to PowerPress if it implements the following:
<podcast:txt> tag

Every other aspect of this flow already exists in PowerPress or WordPress itself.
If PowerPress adds this functionality, the user gains the ability to prove they own a podcast by doing nothing more than having an up-to-date PowerPress plugin installed, logging into WordPress, confirming they want to proceed, and waiting for the updated feed to be seen by the service.
I don’t think it can get much easier than that.
At first, it seemed that it had somehow gotten zapped (aka hit by a power surge). Upon digging a little more into the device, I found that I could power it via a USB cable. Interestingly, that not only made the device boot up, but also made the Ethernet connection work again. This seemed odd because I’d think that anything that fried the PoE aspect of it would have fried the entire Ethernet board. With that in mind, I started poking around on my UniFi switch and noticed that there was a power setting for the port that said “off” where the others said “PoE+.” I removed the USB power cord from the coordinator, crossed my fingers, and toggled the setting back to “PoE+.” Amazingly, all was well!
The really odd part to me was that my UniFi access points that are also powered by the same switch were fine. My best guess as to what happened is that the antenna on the coordinator attracted some of the electricity in the air from the nearby lightning strike and the switch protected itself (this is a total guess though). Regardless of how it happened, I know two things now that I didn’t before finding this setting:
Here’s hoping I don’t have a reason to need to remember this anytime soon!
Going on the assumption that my theory of what happened is even remotely possible, I am seriously thinking about getting an Ethernet Surge Protector from UniFi to put in-line between my coordinator and the switch. They are only $12.50 and seem well worth it. The catch is that I am going to have to have a drain wire installed so that there is a place for any absorbed surge to go. I already have plans to have a contractor I know out soon to do some other work, so I will ask him about doing this too. If it isn’t cost-prohibitive, I am going to move forward with this extra protection for my equipment.
I’m seriously considering redoing my Home Assistant setup from scratch now that I know what we actually use and what’s just cruft… anyone else done this?
As expected, there were a variety of opinions. Surprisingly though, there was an overwhelming consensus that redos after having used Home Assistant for a while were a good thing.
After reading all the comments and thinking about things more, I’ve decided I want to download all my backups, copy out several bits of yaml, export some other settings, and then take the plunge. In my “Introducing My Home Assistant Setup” post I said I’d be following up with one that breaks down all my automations. This new decision is going to delay that a bit. Instead, I’m going to start by chronicling my journey through rebuilding my setup.
Before I actually wipe everything and start over I need to do some prep work. This includes:
I’m also going to take some time and define a new naming convention for everything while I can still see a full list of devices. I’ve found that having well named devices makes so many things simpler, especially dashboarding.
After doing all the backups, my plan is to wipe everything and reinstall Home Assistant OS. I already boot my Pi from an external drive, but the process for doing so has changed since I set things up. For this reason, I plan to double check everything to ensure I’m following current best practices, which may mean I have to utilize Raspberry Pi OS as an intermediary step for firmware updates.
Once Home Assistant is reinstalled I’ll start adding devices and automations back slowly and methodically. This methodical process may well result in wanting to reset things an additional couple of times, and that’s okay. I’d much rather have a little extra down time now than be unhappy after everything has been added back in.
One planned change in particular might be the cause of some redos: I’m going to be switching from ZHA (Zigbee Home Automation) to Zigbee2MQTT (z2m). Though I’ve used both before, I didn’t discover z2m until after I’d set everything up at home. I like it a lot more and think I’ll put it to use as part of the redo.
We’ve also been planning to start using a couple of Z-Wave devices and I think now is the time to finally do so. As part of this, I’m thinking I’ll use the Z-Wave JS to MQTT add-on. It won’t surprise me at all if I need to try a few different times to get things just right.
I’m planning to do all the prep in the next few days and then actually kick off the redo. I’m strangely looking forward to this.
In September of last year I bought an Aqara Water Leak Sensor off Amazon. I placed it between my washing machine and hot water heater in the garage and connected it to Home Assistant via the ZHA (Zigbee Home Automation) integration and my Zigbee coordinator. I then added it to a dashboard that shows me information about the garage and the status of my washer and dryer. Finally, I tested that the sensor worked as advertised and that the proper state showed in Home Assistant. Everything checked out… yay for being proactive!
Fast forward to the first Friday of February. My wife is on her way to her car in our garage. When she starts down the stairs from our kitchen she hears an odd noise and sees a lot of water on the floor. The water is a concern, but not totally unexpected as our garage has flooded before and it had been raining hard. The odd noise and the fact that there is steam in our garage while it’s somewhere around freezing outside is actually much more disturbing. She looks out back and sees that our sump pump is still working, which means it’s likely not the issue we’ve had before. At this point she calls me, tells me what she sees, and then goes back to investigating.
After getting dressed appropriately, I head down and am equally confused and perplexed. We determine that the steam and noise are coming from behind the washer and dryer, so I decided to climb atop our dryer to get a better look. It turns out both are the result of a leak in a plumbing joint near where the hot water hose connection for our laundry is: high pressure hot water is spewing from the joint. A moment or two later we find the shutoff valve for where water goes into our hot water heater and turn it off. The leak stops and we can start assessing what’s happened and the damage.
Having had leaks before, one of the first things that comes to mind is “what is this going to cost us?” It’s about this time that I remember two things:
I opened up the Home Assistant app on my phone and pulled up the sensor. It had, indeed, worked as designed and clearly showed that the leak started at about 11:30pm the night before… it was currently about 7:30am.
It’s at this point that I’m really wishing I hadn’t forgotten to have Home Assistant tell me if the sensor detected water. I set a reminder on my phone to rectify that later in the day and then start removing the water via a push broom (they are surprisingly good at this task, by the way). After moving some things to dry ground and pushing lots of water out the door of our garage, the immediate crisis is over. Now to figure out the root cause and get it fixed.
We are blessed to know an amazing gentleman named Keihan who is a General Contractor and operates his own business focused on residential repairs and upgrades. He and his crew have done almost every bit of work to our house since the day we bought it. So, naturally, my first call after getting the standing water out of our garage was to him. I left a voicemail with all the details about what was going on that morning, and included that I had noticed some sporadic high pressure in our faucets recently.
After checking things out, Keihan determined that the water heater itself was the likely root cause of both the high pressure that periodically showed up in my faucets and in what caused the leak. We made a plan to replace it on Monday and to collect some data over the weekend about the water pressure in my house via a mechanical gauge he attached to the spigot where our water hose would normally be connected. The gauge had an extra needle that would record how high pressure spiked. Keihan asked me to check it periodically and let him know if it spiked much.
A few hours later, I noticed that the pressure had spiked to about 90 PSI. Street pressure is only about 80 PSI and the regulator for the house is less than that, so this was a little concerning to us both. I dialed the observation needle back down to the current pressure so that we could see if this was a one-off spike or if there was a pattern. I checked again around 8pm and was greeted with this:
I sent Keihan this photo showing that the pressure had spiked to nearly 120 PSI and he decided we shouldn’t wait until Monday as this much pressure could easily expose other weak areas in my house’s plumbing. He said he’d be out the following morning, Saturday, to replace the hot water heater. Did I mention that he’s awesome?
Saturday morning came and so did Ro, a long time employee of Keihan’s. Ro got right to getting the old water heater drained and prepped for removal. A little bit later, Keihan arrived with the surprisingly hard to acquire new water heater. You see, it was supposedly in stock at the local big box home improvement store… but no one could find it. The next closest store showed to have three of them, so it was off to there. Apparently they were having some inventory difficulties too as it took them an entire hour to find even one unit. Fortunately, they did find it. At any rate, the new hot water heater was at my house and they were ready to get it installed. The old one had well exceeded its life expectancy (which is something I didn’t know I needed to watch for) and had developed significant amounts of rust that was already starting to clog the attached pressure tank. One thing was for certain after seeing all the rust and learning the age of the old water heater: it may or may not be the only problem, but it was for sure some of it and truly needed replacing.
With the new water heater installed and all the air flushed from the lines, all that was left was to keep an eye on the gauge to see if the issue was fully fixed or just partially fixed. The following day, Sunday, I checked the gauge and, sadly, it had spiked above 120 PSI. I let Keihan know and he said Ro would be out the following day to replace the house’s pressure regulator.
Monday came and so did Ro. He let me know that water was going to be cut off to the house for a little while and then went to work. In what seemed like no time, he was back at the door letting me know he was all finished.
I have said it before, and I will say it again: I am really thankful that we have access to such a good and reliable contractor. By lunch on Monday everything had been completed and life could return to normal.
The last thing I want is to have a repeat of the 🤦‍♂️ moment where the reason we didn’t know about a water leak was that I had simply neglected to make Home Assistant tell us. Thus, it was time to create a new automation.
I thought about what I really wanted this alert to do and came up with this as a baseline: it should alert us in a way that we won’t miss, regardless of the time of day or night that it goes off, and regardless of us being home or away. There is one catch to this: it also should not terrify my toddler.
Enter my new Home Assistant automation entitled “SOS - Water detected in garage.” This automation is triggered any time the leak sensor has been wet for at least one minute. The delay is simply to avoid false alarms and to allow enough time to quickly pick the sensor up if something is spilled near it.
If triggered, the following actions are taken in order:
Right now, to make it stop you’d have to go find the automation and turn it off. I plan to improve this by making the notification to our phones “Actionable.” This will allow us to acknowledge the alarm from the notification. Upon acknowledgment the actions listed above will cease.
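For anyone wanting to build something similar, here is a minimal sketch of what the trigger side of such an automation could look like in Home Assistant YAML. Everything here is a hypothetical stand-in rather than my actual configuration: the entity IDs, the notify service name, and the message text are all invented for illustration.

```yaml
# Sketch only: entity IDs and the notify target are hypothetical.
automation:
  - alias: "SOS - Water detected in garage"
    trigger:
      - platform: state
        entity_id: binary_sensor.garage_leak_sensor_moisture  # hypothetical
        to: "on"
        for: "00:01:00"  # one-minute delay to avoid false alarms
    action:
      - service: notify.mobile_app_my_phone  # hypothetical
        data:
          title: "SOS: Water detected in garage!"
          message: "The leak sensor has been wet for over a minute."
```

The `for:` clause is what provides the one-minute grace period described above; the real version would chain additional actions (lights, speakers, and so on) after the notification.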
One thing that was added as part of getting the new hot water heater was a pan beneath it that is intended to catch water in certain scenarios and route it to a safe place. This is great, but also means that the leak sensor sitting next to my washer is no longer sufficient to tell me about everything I’d like to keep an eye on. The solution: add a second leak sensor.
I hopped back on Amazon and ordered another Aqara Water Leak Sensor to place in the pan under the water heater. The only problem with this plan is that the pan is metal… and metal is great at blocking or interfering with wireless signals. To combat this, I also picked up a SONOFF S31 Lite 15A Zigbee Smart Plug. This plug acts as a Zigbee router, which means that if it were to be plugged in near the sensors then they’d have a strong signal even with the pan causing interference. It just so happens that I had an open place for such a plug in an outlet under my work bench about 10 feet away.
Both the sensor and the plug have come in and have been added to my Home Assistant setup. The new sensor has also been added to the automation that watches for leaks. Here’s to hoping that that automation doesn’t ever actually need to be triggered.
I decided that this was something I could help with, so I hit him up on Twitter:
Hey @ChrisLAS - I was listening to LUP today and heard you might need to monitor temperatures in your garage… DM me if you want a cloudless WiFi monitor based on ESPHome.
— Technical Issues (@technicalissues) January 19, 2022
We chatted a tad via direct messages and then I built this:
It’s modeled after one I have in my own garage with a couple of small modifications to suit his use case better. The setup is made up of:
The idea is that Chris will be able to mount this on, or near, the new server cabinet with the microcontroller at the top so that the heat it generates rises above the onboard sensor. The onboard sensor (the purple part) will allow him to monitor the temperature, barometric pressure, and humidity in the garage while the corded sensor will allow for monitoring the temperature inside the server rack.
The microcontroller is configured to present its data locally in two ways: via a web page and via a Prometheus endpoint.
This page is presented at jupiter-garage-data.local:
This data is presented at jupiter-garage-data.local/metrics:
#TYPE esphome_sensor_value GAUGE
esphome_sensor_value{id="jupiter_garage_data_wifi_signal",name="Jupiter Garage Data WiFi Signal",unit="dBm"} -69
esphome_sensor_value{id="jupiter_garage_data_server_rack_temperature",name="Jupiter Garage Data Server Rack Temperature",unit="°C"} 20.9
esphome_sensor_value{id="jupiter_garage_data_garage_temperature",name="Jupiter Garage Data Garage Temperature",unit="°C"} 20.9
esphome_sensor_value{id="jupiter_garage_data_garage_pressure",name="Jupiter Garage Data Garage Pressure",unit="hPa"} 980.3
esphome_sensor_value{id="jupiter_garage_data_garage_humidity",name="Jupiter Garage Data Garage Humidity",unit="%"} 31.1
#TYPE esphome_sensor_failed GAUGE
esphome_sensor_failed{id="jupiter_garage_data_wifi_signal",name="Jupiter Garage Data WiFi Signal"} 0
esphome_sensor_failed{id="jupiter_garage_data_server_rack_temperature",name="Jupiter Garage Data Server Rack Temperature"} 0
esphome_sensor_failed{id="jupiter_garage_data_garage_temperature",name="Jupiter Garage Data Garage Temperature"} 0
esphome_sensor_failed{id="jupiter_garage_data_garage_pressure",name="Jupiter Garage Data Garage Pressure"} 0
esphome_sensor_failed{id="jupiter_garage_data_garage_humidity",name="Jupiter Garage Data Garage Humidity"} 0
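Since the device speaks the standard Prometheus exposition format, pointing a Prometheus server at it takes only a small scrape config. This is a sketch: the job name is made up, and the target pairs the device's mDNS hostname with the web server port (80) from the microcontroller's configuration.

```yaml
# Hedged sketch of a prometheus.yml fragment; job name is arbitrary.
scrape_configs:
  - job_name: "jupiter-garage-data"
    metrics_path: /metrics
    static_configs:
      - targets: ["jupiter-garage-data.local:80"]
```

Note that mDNS (`.local`) resolution needs to work from wherever Prometheus runs; otherwise, a static IP for the device would be the simpler route.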
Accessing the data locally is all fine and dandy when debugging or doing casual checks, but for everyday usage it is way more helpful to have the data in Home Assistant. ESPHome supports this out of the box and that’s exactly what the code for the microcontroller is built around. Speaking of which, here is the code:
substitutions:
  name: jupiter-garage-data
  friendly_name: Jupiter Garage Data
  on_board_sensor_name: Garage
  corded_sensor_name: Server Rack

esphome:
  name: "${name}"

esp8266:
  board: d1_mini

# Enable Home Assistant API
api:
  password: "self-hosted"
  # encryption:
  #   key: !secret enc_key

# Used by fallback Access Point (ap)
captive_portal:

# Enable logging
logger:

ota:
  # only use one of the two lines below
  password: "self-hosted"
  # password: !secret ota_password

prometheus:

web_server:
  port: 80
  # auth:
  #   username: admin
  #   password: !secret ota_password

wifi:
  # the two secret names here are now the ones used by default in ESPHome
  ssid: !secret wifi_ssid
  password: !secret wifi_password
  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: "${name}-setup"
    password: "self-hosted"

# Set device time to match that of Home Assistant
time:
  - platform: homeassistant
    id: esptime

# Use the onboard LED to indicate system status
status_led:
  pin:
    number: D0
    inverted: true

# Wired sensor
dallas:
  - pin: D4

# The bme280 sensor on the board uses i2c
i2c:
  sda: D2
  scl: D1

sensor:
  - platform: wifi_signal
    name: "${friendly_name} WiFi Signal"
    update_interval: 60s
  - platform: dallas
    address: 0x680000066e92ad28
    name: "${friendly_name} ${corded_sensor_name} Temperature"
  - platform: bme280
    temperature:
      name: "${friendly_name} ${on_board_sensor_name} Temperature"
      oversampling: 16x
    pressure:
      name: "${friendly_name} ${on_board_sensor_name} Pressure"
    humidity:
      name: "${friendly_name} ${on_board_sensor_name} Humidity"
    address: 0x76
    update_interval: 60s
Once this device is connected with Home Assistant all the sensors will be available for alerts and automations. For example, an automation could be set up to cut on an exhaust fan or portable air conditioner if the temperature gets too high. Additionally, if the temperature doesn’t come down fast enough an alert could be sent to one or more members of the JB team so that someone can manually intervene before the hardware is damaged.
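As a concrete illustration of that first idea, an automation along these lines could watch the rack temperature and cut on a fan. This is a hedged sketch: the sensor entity ID is what Home Assistant would likely generate from the names in the ESPHome config, while the switch entity and the threshold are pure invention.

```yaml
# Sketch only: the switch entity and the 35 degree C threshold are assumptions.
automation:
  - alias: "Cool the server rack"
    trigger:
      - platform: numeric_state
        entity_id: sensor.jupiter_garage_data_server_rack_temperature
        above: 35  # degrees C; pick whatever the hardware tolerates
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.garage_exhaust_fan  # hypothetical smart plug
```

A companion automation with a `below:` trigger would turn the fan back off once things cool down.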
Here are a couple of diagrams to help explain things further.
In the code above the pins D0, D1, D2, and D4 are referenced. You can see exactly what those correlate to here:
The image above was copied from https://escapequotes.net/esp8266-wemos-d1-mini-pins-and-diagram/
This diagram shows how I assembled everything:
This diagram was created with Fritzing
Here’s hoping Chris likes this device and finds it useful.
A year ago today (January 9th, 2021) I deployed what I consider my first production-grade instance of Home Assistant and couldn’t be happier. It is an amazingly powerful tool that is 100% free and open source. One of Home Assistant’s key features is the fact that it takes a local-first approach to everything. By that I mean that every aspect of the project makes a concerted effort to not rely on the internet or cloud services unless they are absolutely required, such as when integrating with a vendor who does not have a local API (or won’t provide access to it to the community). This means that if the internet is out I can still control the vast majority of the devices connected to Home Assistant using either the web interface or the app on my phone… and push notifications from Home Assistant to my phone will continue to work too.
This post describes what my setup looks like today in hopes that it will inspire and/or help others automate things around their home. I cannot recommend Home Assistant enough, and that’s not just for techies like myself. It can provide significant benefits for the non-technically inclined too.
My setup is based on a Raspberry Pi 4 that has 8GB of RAM. It is housed in a Cooler Master Pi Case 40.
Storage wise, I am using a Kingston 250GB A2000 M.2 2280 NVMe drive housed in a FIDECO USB C Gen 2 enclosure. The picture below was taken right after I put things together and just before I slid the lower parts into the black housing.
I configured my Pi so that it will boot from the NVMe drive and then installed Home Assistant Operating System per their instructions. There is no SD card in my Pi at all.
The setup is fast and rock-solid reliable.
The founders of Home Assistant also founded a company to make the project sustainable. For a measly $5 a month (one Starbucks drink) you get a secure way to control your home when not at home… with zero technical know-how required on your part. No fiddling with routers or anything like that. I love what they provide and find it to be a great value that has the bonus of supporting Home Assistant’s development. Check it out for yourself at nabucasa.com. And, though it may sound otherwise, they have not asked me to say any of this nor am I being compensated in any way. I just really believe in their service and work.
Over time I have accumulated nearly 30 devices that utilize the Zigbee protocol. Initially, I just had Philips Hue bulbs and one of their hubs. That all changed once I decided to start adding devices that weren’t part of the Hue product line. I transitioned from the Hue hub to a ConBee II and the deCONZ software. Though I was able to easily move my Hue bulbs from the Hue hub to the ConBee II, I quickly ran into problems with both the hardware and software.
On the hardware side, the radio in the ConBee II was just too weak for use where I needed it in my house and resulted in really poor reliability. I replaced it with the CC2652P2 Based Zigbee to PoE Coordinator V2 made by tubesZB. I can’t recommend this coordinator enough. Even if it is out of stock initially, it is worth waiting for.
On the software side, I had some reliability and usability issues that I narrowed down to being caused by deCONZ. I bit the bullet and switched everything over to the in-built ZHA (Zigbee Home Automation) integration and have been very happy since.
Before switching out my coordinator at home, I had actually bought a second ConBee II for use in my office. I’ll detail that setup in some other blog post, but the relevant part is that I had issues there too. I replaced that one with a Sonoff Zigbee Bridge that I flashed with Tasmota. There are many guides out there on how to accomplish this, but you can also buy one pre-flashed from CloudFree.
I also ended up replacing deCONZ there too, though it was a good bit after I’d switched to ZHA at home. I ended up running Zigbee2MQTT (z2m) instead both for ease of use and because z2m has a web interface that I could use locally from anything with a browser. Though I have been perfectly satisfied by ZHA, I’d probably use z2m if I was starting over simply because it has a better user experience. The only reason I am not using it now is that migration is tedious and time consuming.
Before moving on, I want to call out explicitly that Zigbee and Wi-Fi utilize the same frequencies. This means that to avoid having problems with both you need to plan accordingly. I found the article “ZigBee and Wi-Fi Coexistence” on metageek to be supremely helpful. For me, this translated to telling my Wi-Fi gear to only use channels 1 and 6 and telling my Zigbee coordinator (by way of ZHA) to use channel 25. The image below was taken from that article and shows how this setup keeps each system from fighting with the other (I picked 25 even though 24 is shown in the image).
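For reference, ZHA reads its channel preference from Home Assistant's configuration.yaml. A sketch of the relevant stanza is below; note that, to the best of my understanding, this only applies when ZHA forms a new Zigbee network, since an existing network keeps the channel it was created on.

```yaml
# Applies when ZHA forms a new Zigbee network.
zha:
  zigpy_config:
    network:
      channel: 25
```

Locking Wi-Fi to channels 1 and 6 is done on the router or access point side and varies by vendor, so there is no single config snippet for that half of the plan.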
With all that background out of the way, let’s get into my philosophies around automation and how I am actually using Home Assistant.
If you research the topic of home automation any at all, you quickly find that there are people who want their house to basically be autonomous… that’s not me. I firmly believe that automation should make things easier, not get in the way of ANYONE in your house… visitors included. For example, lights should still have physical switches and you shouldn’t have to alter the way your automations run just because someone is staying over.
Along those same lines, I tend to prefer smart switches over smart bulbs, where practical. Switches have two distinct advantages:
There are a few categories of things I’ve made smart:
I’ve progressively added more and more smart devices. My initial focus was on smart plugs as they are cheap and require next to no effort to install. They were added to our bedside lamps because the lamps were actually really hard to reach from the bed. Once they were added we could just tell the Echo to turn them on or off. I also added one to our Christmas tree. That one felt like a serious win as it not only meant I didn’t have to crawl under or wiggle behind the tree any more, but we could also have the tree come on automatically using the scheduling functions the plug provided.
That was quickly followed by starting to install smart switches. I felt (and still feel) like they gave me the most bang for the buck. This also allowed me to start making a noticeable impact on our day-to-day life by simplifying little things like turning all the lights in multiple rooms off when we left home by simply saying “Alexa, good bye.” It also helped when we returned home, especially when our hands were full, because I could say “Alexa, I’m home.”
Next up was combining a HEMMA cord set, a NYMÖ Lamp shade, and a Philips Hue Soft White bulb to add light to places like above our couch and over the dresser in our nursery.
The lights over the couch were, and still are, really nice because we can have smaller amounts of light in more focused locations instead of one set of central, really bright lights on the ceiling. My wife and I both find this to be much easier on our eyes and significantly more effective when reading.
The light in the nursery was an inspired decision and likely the one that we have gained the most benefit from since having a kid. Having this light and connecting it to an Amazon Echo gave us an effective night light, a way to do late-night feedings without bright lights like those from an overhead fixture, and hands-free operation of the light and its brightness. When set at 1% it is dim enough to not interfere with sleep while still being bright enough that we were able to easily check on our kid without squinting. To further simplify things, and add a touch of automation, we added a Hue Dimmer that is mounted to the wall just inside the door. Having the dimmer, though not as nice as a wired one, allowed for easily brightening or dimming the light without saying a word.
I have a Nest thermostat and temperature sensors in almost every room. The sensors are a combination of Aqara Temperature and Humidity Sensors and custom-built ones. Building a sensor that is compact and nice looking turned out to be beyond my skill level, so most of the ones I have are the Aqara ones. By having these sensors everywhere and having the data from them pulled into Home Assistant I can quickly see the temperature in the occupied part of the house and adjust the Nest accordingly.
One thing that I want to call out here is that Home Assistant is what makes this possible. The Aqara sensors are Zigbee, my custom ones are Wi-Fi, and the Nest is read via a remote API. Home Assistant takes these three different systems and pulls them into a single place where I can work with them as if they were all from the same vendor.
I’ve got a Logitech Harmony Companion All in One Remote Control that controls my living room Roku TV and all the things connected to it. This allows me to control my TV as part of automations that I’ll discuss later.
Up to this point, Home Assistant hasn’t really been used except superficially. It has some serious automation abilities that can span all the different products from all the different vendors utilized in my house. In my next post I am going to break down all the automations I currently have set up. Each breakdown will include details on the automation, the equipment involved, and why I think it’s worth having. I am not including this here simply because it would make this post way too long.
I am sharing all this in the open in hopes that it will encourage Epomaker to make process changes so that this doesn’t happen again. My wife and I both love our GK68XS keyboards and have even backed the GK96S, though we did so separately in hopes of avoiding a repeat of the fiasco documented here.
This recap is so much farther along than the last one. Our current reality at the start of part three is:
opentelemetry-* gems generating traces in both test and production

The OpenTelemetry projects each have a Gitter channel. I spent a good bit of time chatting in the opentelemetry-ruby one and got some really good tips. I also joined a couple of the weekly OpenTelemetry Ruby Special Interest Group (SIG) meetings. Those meetings provided a lot of insight into what was going on behind the scenes with the project and also offered a venue to have a real-time chat with the core maintainers about which things on the todo list were most important to me and the goals I’ve been working toward.
The combination of Gitter and the SIG meetings on Zoom have been incredibly helpful.
Before v0.6.0 came out I opened Unable to add tags to spans #312. The discussion on that ticket was very educational. It also lead to Export resources from Jaeger #348 being done and included in the 0.6.0 release. One of the key reasons this is important is that it allows for setting both basic and custom attributes (tags) that prior to 0.6.0 had to be tacked on by the external Jaeger agent.
One of the key tickets I was watching was OTLP exporter #277, but protocolbuffers/protobuf#1594 threw a big wrench into it being a path forward for me: the problem is that right now the gRPC used by OTLP doesn’t support JRuby… and most of the apps I am working on run on JRuby. Fortunately, #231 was already scheduled to be in the 0.6.0 release. That issue called for implementing Binary Thrift over HTTP as a transport for the Jaeger exporter.
The reason that, initially, I was watching #277 and then watched #231 is that either of those being done would mean I no longer had to have a local Jaeger agent; I could, instead, send traces directly from the application via a TCP-based protocol directly to the OpenTelemetry collector.
Once the new release came out, it was time to start upgrading so that attributes could be set in code and Jaeger agents could be ditched. It also meant that I could start working in earnest on VMPooler since there was no longer a dependency on the agent. Both sets of work started in parallel at this point… which, in hindsight, might not have been my best plan. Read on to see what I mean.
Things started out pretty simply when I started working on adding the OTel instrumentation to VMPooler. That is, until I ran docker-compose up and watched something about the new tracing code cause the app to crash with only this error as a clue:
E, [2020-09-12T00:36:44.445784 #1] ERROR -- : unexpected error in Jaeger::CollectorExporter#export - Not enough bytes remain in buffer
I started digging and found some Jaeger docs for how to Increase in-memory queue size. My initial impression after reading those docs was that I needed the Jaeger exporter to either flush faster or have a bigger buffer. I couldn’t find a way to do that, but I did remember seeing that there was an alternative to the SimpleSpanProcessor called BatchSpanProcessor. Sadly, there were not any docs saying which span processor I should use or how to use each one. Fortunately, I didn’t give up, poked around in the repository on GitHub, and discovered enough info to try it out by reading the comments in batch_span_processor.rb. Though I didn’t exactly understand why, I did find that swapping out the span processor fixed my issue.
During all of this, I had been posting in a thread on Gitter. As a result, I was given this piece of advice:
I think batch span is generally the way to go for anything outside of basic tests, we should probably improve the language here a bit
That was pretty enlightening as every single example shows using SimpleSpanProcessor
. I opened Span Processors are basically undocumented #397 in hopes that this would get clarified in a formal way and am happy to report that it is currently listed as part of the 0.7.0 milestone.
As mentioned earlier, I was working on both VMPooler’s initial setup and the upgrade to 0.6.0 in the other apps at the same time. That work was going smoothly and seemed pretty simple. The problem was that I didn’t make the mental connection that I should also swap out the SimpleSpanProcessor
for the BatchSpanProcessor
in ABS, CITH, and NSPooler. This turned out to be a grave oversight as we started having real problems with NSPooler - it was periodically crashing and, as a result, causing problems in our CI pipelines.
None of us could quite put our finger on what was going on with NSPooler at first. It just didn't make any sense. Then, one of my teammates noticed in the Lightstep interface that the /status
endpoint was taking over 9 seconds to respond. This too was confusing as there was nothing that should have caused it to slow down like that. It was about this time that I remembered what I had learned a couple of days before while working on VMPooler: never use the SimpleSpanProcessor
. In hopes of the two being related I quickly put up a pull request with this change:
if ENV["NSPOOLER_DISABLE_TRACING"] && ENV["NSPOOLER_DISABLE_TRACING"].eql?('true')
puts "Exporting of traces has been disabled so the span processor has been set to a 'NoopSpanExporter'"
- span_processor = OpenTelemetry::SDK::Trace::Export::SimpleSpanProcessor.new(
- OpenTelemetry::SDK::Trace::Export::NoopSpanExporter.new
+ span_processor = OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
+ exporter: OpenTelemetry::SDK::Trace::Export::NoopSpanExporter.new
)
else
jaeger_host = ENV.fetch('JAEGER_HOST', 'http://localhost:14268/api/traces')
puts "Exporting of traces will be done over HTTP in binary Thrift format to #{jaeger_host}"
- span_processor = OpenTelemetry::SDK::Trace::Export::SimpleSpanProcessor.new(
- OpenTelemetry::Exporter::Jaeger::CollectorExporter.new(endpoint: jaeger_host)
+ span_processor = OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
+ exporter: OpenTelemetry::Exporter::Jaeger::CollectorExporter.new(endpoint: jaeger_host)
)
end
Lo and behold, that fixed it. And by fixed, I mean that not only did NSPooler stop crashing, but also that response times to the /status
endpoint changed significantly:
The image above is a screenshot from Lightstep’s interface comparing the latencies on /status
between our 2.8.0 release and the 2.6.0 one. As you can clearly see, there is a massive difference.
After seeing how big of an impact this had, I put up PRs for ABS and CITH the following morning to make the same change.
While working on the 0.6.0 upgrades and setting up the new Jaeger::CollectorExporter
I came across OpenTelemetry::SDK::Resources::Constants::SERVICE_RESOURCE[:name]
in its readme. I have not found any docs on this other than in its source code so I opened OpenTelemetry::SDK::Resources::Constants appears to be undocumented #379. That ticket is also slated for the 0.7.0 milestone. Besides the requested documentation update, starting a conversation on this topic resulted in a helper method being added to the configurator so that within a configuration block a user can simply call c.service_name = 'my-service'
instead of having to do this:
c.resource = OpenTelemetry::SDK::Resources::Resource.create(
OpenTelemetry::SDK::Resources::Constants::SERVICE_RESOURCE[:name] => service_name,
)
I liked this so much that I duplicated the work in #417 and submitted feat: Add service_version setter to configurator #426 so that the same could be done for setting an application’s version.
#417 and #426 combined will allow me to simplify my configuration block like so:
- c.resource = OpenTelemetry::SDK::Resources::Resource.create(
- {
- OpenTelemetry::SDK::Resources::Constants::SERVICE_RESOURCE[:name] => service_name,
- OpenTelemetry::SDK::Resources::Constants::SERVICE_RESOURCE[:version] => version
- }
- )
+ c.service_name = service_name
+ c.service_version = version
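Assuming #417 and #426 land as described, a full configuration block could then look something like this (a sketch against the anticipated 0.7.0 API, untested):

```ruby
OpenTelemetry::SDK.configure do |c|
  c.use 'OpenTelemetry::Instrumentation::Sinatra'
  c.service_name = service_name   # helper added in #417
  c.service_version = version     # helper proposed in #426
  c.add_span_processor(span_processor)
end
```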
Another thing I learned about by way of a chat happening in Gitter was that there is a feature called “resource detectors” that will automatically detect information about where an application is running and add related resources similar to the name and version ones mentioned above. Enabling that was as simple as adding c.resource = OpenTelemetry::Resource::Detectors::AutoDetector.detect
to my configuration block. Doing so allows me to automatically learn quite a bit about both the Kubernetes environment an app is running in and the Google nodes and account on which Kubernetes is running.
After getting the other applications updated to 0.6.0 and fixing the goof of not replacing the span processors in them, I was able to turn my attention back to VMPooler. I got all the initial tracing code into it via Add distributed tracing #399. I also put in a PR to Add OTel resource detectors #401 into VMPooler. All that worked locally but, it turns out, I had some misunderstandings about what went where gem-wise and also didn't know what all the different Dockerfiles were used for. Both of those got fixed via Fix mixup of gem placement. #404 and Adding make to the other two Dockerfiles #405.
After those four PRs we were finally able to release version 0.14.9 to both our Mesos cluster and to our staging instance in Kubernetes. When doing the release to our Mesos cluster we added these two environment variables so that tracing would be enabled:
"VMPOOLER_TRACING_ENABLED": "true",
"VMPOOLER_TRACING_JAEGER_HOST": "https://otel-jthrifthttp-prod.k8s.example.net/api/traces"
The first of these is used by the code below to effectively turn tracing on and the second maps to tracing_jaeger_host
in it.
if tracing_enabled.eql?('false')
puts "Exporting of traces has been disabled so the span processor has been set to a 'NoopSpanExporter'"
span_processor = OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
exporter: OpenTelemetry::SDK::Trace::Export::NoopSpanExporter.new
)
else
puts "Exporting of traces will be done over HTTP in binary Thrift format to #{tracing_jaeger_host}"
span_processor = OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
exporter: OpenTelemetry::Exporter::Jaeger::CollectorExporter.new(endpoint: tracing_jaeger_host)
)
end
You may have noticed that while I am talking about our Mesos cluster the Jaeger endpoint is set to a url containing k8s
: this is because the endpoint is a Kubernetes ingress resource that passes the traffic on to an OTel collector running in the cluster. The instance of VMPooler that we have running in Kubernetes does not need to use the ingress - it, instead, sends traffic directly to the service resource by way of its in-cluster address: http://otel-collector.otel-collector.svc:14268/api/traces
.
This is all live now and working well… except for one thing: VMPooler makes heavy use of URL parameters that are part of the URL path itself, such as get "#{api_prefix}/token/:token/?" do
and the 0.6.0 version of the Sinatra integration doesn’t account for that. The result is that instead of seeing tracing data for the endpoint we actually get unique data sets for each user. Aside from this being less than ideal for seeing how a given endpoint is performing, it also means that all user tokens are exposed within our trace data.
Fortunately, there is already a fix for this that will be included in 0.7.0: fix: default to sinatra.route for span name #415. This PR is incredibly simple on the surface - here’s its entire diff:
def call(env)
+ span_name = env['sinatra.route'] || env['PATH_INFO']
+
tracer.in_span(
- env['PATH_INFO'],
+ span_name,
attributes: { 'http.method' => env['REQUEST_METHOD'],
'http.url' => env['PATH_INFO'] },
kind: :server,
The result is pretty significant though as it will roll up all calls to /api/v1/token/:token/?
into a single data set. Furthermore, I can easily add filters on the OTel collector to redact the actual value of the token before the trace data is stored anywhere. The end result being more useful data that no longer exposes sensitive information.
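The fallback logic itself is easy to model outside of Sinatra. In this sketch the env hashes are handmade stand-ins for real Rack environments:

```ruby
# Mirrors the span-name fallback from PR #415: prefer the route template
# (so all token values roll up together), fall back to the raw path.
def span_name_for(env)
  env['sinatra.route'] || env['PATH_INFO']
end

with_route = {
  'sinatra.route' => 'GET /api/v1/token/:token/?',
  'PATH_INFO' => '/api/v1/token/abc123'
}
without_route = { 'PATH_INFO' => '/healthcheck' }

puts span_name_for(with_route)    # => GET /api/v1/token/:token/?
puts span_name_for(without_route) # => /healthcheck
```

Every request to the token endpoint now produces the same span name regardless of which token was in the URL, which is what makes the roll-up (and the redaction) possible.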
At the beginning of this post I promised some real code examples that showed how all this was configured so let’s wrap this post up with exactly that.
Here are the gems that got added to VMPooler:
s.add_dependency 'opentelemetry-api', '~> 0.6.0'
s.add_dependency 'opentelemetry-exporter-jaeger', '~> 0.6.0'
s.add_dependency 'opentelemetry-instrumentation-concurrent_ruby', '~> 0.6.0'
s.add_dependency 'opentelemetry-instrumentation-redis', '~> 0.6.0'
s.add_dependency 'opentelemetry-instrumentation-sinatra', '~> 0.6.0'
s.add_dependency 'opentelemetry-resource_detectors', '~> 0.6.0'
s.add_dependency 'opentelemetry-sdk', '~> 0.6.0'
There were multiple additions to lib/vmpooler
. The first was to require all the needed gems:
# Dependencies for tracing
require 'opentelemetry-api'
require 'opentelemetry-instrumentation-concurrent_ruby'
require 'opentelemetry-instrumentation-redis'
require 'opentelemetry-instrumentation-sinatra'
require 'opentelemetry-sdk'
require 'opentelemetry/exporter/jaeger'
require 'opentelemetry/resource/detectors'
Next was to add in some new configuration settings so that the needed parameters could be passed in through VMPooler’s standard methods:
parsed_config[:tracing] = parsed_config[:tracing] || {}
parsed_config[:tracing]['enabled'] = ENV['VMPOOLER_TRACING_ENABLED'] || parsed_config[:tracing]['enabled'] || 'false'
parsed_config[:tracing]['jaeger_host'] = ENV['VMPOOLER_TRACING_JAEGER_HOST'] || parsed_config[:tracing]['jaeger_host'] || 'http://localhost:14268/api/traces'
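The precedence those lines establish (environment variable first, then the config file, then a built-in default) can be sketched standalone with hypothetical values:

```ruby
# Standalone sketch of the settings precedence used above:
# ENV beats the parsed config file, which beats the default.
def setting(env_value, file_value, default)
  env_value || file_value || default
end

env  = { 'VMPOOLER_TRACING_ENABLED' => 'true' }
file = { 'jaeger_host' => 'http://jaeger.internal:14268/api/traces' }

enabled = setting(env['VMPOOLER_TRACING_ENABLED'], nil, 'false')
host    = setting(env['VMPOOLER_TRACING_JAEGER_HOST'],
                  file['jaeger_host'],
                  'http://localhost:14268/api/traces')

puts enabled # => true
puts host    # => http://jaeger.internal:14268/api/traces
```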
The last addition here is a helper method that can be used to configure all the tracing bits:
def self.configure_tracing(startup_args, prefix, tracing_enabled, tracing_jaeger_host, version)
if startup_args.length == 1 && startup_args.include?('api')
service_name = 'vmpooler-api'
elsif startup_args.length == 1 && startup_args.include?('manager')
service_name = 'vmpooler-manager'
else
service_name = 'vmpooler'
end
service_name += "-#{prefix}" unless prefix.empty?
if tracing_enabled.eql?('false')
puts "Exporting of traces has been disabled so the span processor has been set to a 'NoopSpanExporter'"
span_processor = OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
exporter: OpenTelemetry::SDK::Trace::Export::NoopSpanExporter.new
)
else
puts "Exporting of traces will be done over HTTP in binary Thrift format to #{tracing_jaeger_host}"
span_processor = OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
exporter: OpenTelemetry::Exporter::Jaeger::CollectorExporter.new(
endpoint: tracing_jaeger_host
)
)
end
OpenTelemetry::SDK.configure do |c|
c.use 'OpenTelemetry::Instrumentation::Sinatra'
c.use 'OpenTelemetry::Instrumentation::ConcurrentRuby'
c.use 'OpenTelemetry::Instrumentation::Redis'
c.add_span_processor(span_processor)
c.resource = OpenTelemetry::Resource::Detectors::AutoDetector.detect
c.resource = OpenTelemetry::SDK::Resources::Resource.create(
{
OpenTelemetry::SDK::Resources::Constants::SERVICE_RESOURCE[:name] => service_name,
OpenTelemetry::SDK::Resources::Constants::SERVICE_RESOURCE[:version] => version
}
)
end
end
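For a quick feel of the naming logic at the top of that method, here is the same branching extracted into a standalone sketch (the arguments are hypothetical):

```ruby
# The service-name selection from configure_tracing, extracted so the
# resulting names are easy to see. Mirrors the branching above.
def service_name_for(startup_args, prefix)
  name = if startup_args.length == 1 && startup_args.include?('api')
           'vmpooler-api'
         elsif startup_args.length == 1 && startup_args.include?('manager')
           'vmpooler-manager'
         else
           'vmpooler'
         end
  name += "-#{prefix}" unless prefix.empty?
  name
end

puts service_name_for(['api'], '')            # => vmpooler-api
puts service_name_for(['manager'], 'ci')      # => vmpooler-manager-ci
puts service_name_for(['api', 'manager'], '') # => vmpooler
```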
The self.configure_tracing
method isn't quite as complex as it may look. All that code breaks down to this:
- The service name is determined from the startup arguments and, when present, the configured prefix.
- When tracing is disabled, a NoopSpanExporter
gets used so that no trace data is emitted. When tracing is enabled, the endpoint to which to send the data is also configured.
- The service.name
and service.version
resources are set (note that this is using the verbose method - I'll update it after #426 is merged into the OTel gems).
VMPooler is actually run by calling bin/vmpooler
so that is where the final bits go. First, I needed to bring in a few new settings and make a few new local variables. I did that by adding these lines to the file:
require 'vmpooler/version'
prefix = config[:config]['prefix']
tracing_enabled = config[:tracing]['enabled']
tracing_jaeger_host = config[:tracing]['jaeger_host']
version = Vmpooler::VERSION
startup_args = ARGV
With those in place I was able to call the helper method that was added to lib/vmpooler
by adding this line:
Vmpooler.configure_tracing(startup_args, prefix, tracing_enabled, tracing_jaeger_host, version)
And that’s all the ruby code that was needed. Beyond that, our Dockerfiles did need the small adjustment of adding the installation of make
so that all the gems would properly install.
My Sinatra journey is progressing nicely but there is still more to do. Next up is adding some manual instrumentation to VMPooler and ABS and upgrading to 0.7.0 as soon as it comes out so that I can get the changes talked about above.
With regards to the manual instrumentation part, I have started working on that in Add additional data to spans in api/v1.rb #400 but want to do some manual testing before asking for it to be merged. My learnings there will directly influence how I move forward on a similar PR for ABS.
As a bonus tidbit for anyone who made it through this entire post, I wanted to mention that I have recently started utilizing OTel’s Java agent that provides automated instrumentation of applications running on the JVM. I will be blogging about that work too in the very near future.
Note: Be sure to also see part 3 of this series as some limitations talked about here no longer exist. I am leaving them in this part because they are a relevant part of my journey.
To set the stage for this phase I want to recap where things are:
Just as was done originally, I am starting my new round of instrumentation with our CI Triage Helper (CITH) application. I am doing this because it is the simplest of our applications and because it sits beside our CI pipeline. This makes it much safer to experiment with than one that is directly part of CI.
The first step in this conversion is to replace ls-trace
in the Gemfile
with the bits from OpenTelemetry. In the case of CITH, that means adding this:
gem 'opentelemetry-api', '~> 0.5.1'
gem 'opentelemetry-exporters-jaeger', '~> 0.5.0'
gem 'opentelemetry-instrumentation-restclient', '~> 0.5.0'
gem 'opentelemetry-instrumentation-sinatra', '~> 0.5.0'
gem 'opentelemetry-sdk', '~> 0.5.1'
The first change in config.ru
is updating the requires from simply being ddtrace
to all the OTel components:
require 'opentelemetry-api'
require 'opentelemetry/exporters/jaeger'
require 'opentelemetry-instrumentation-restclient'
require 'opentelemetry-instrumentation-sinatra'
require 'opentelemetry-sdk'
The other change in config.ru
is to swap out the configuration block from Lightstep for the one needed by OTel:
if ENV['CITH_LIGHTSTEP_TRACING_TOKEN']
puts "CITH_LIGHTSTEP_TRACING_TOKEN was passed so tracing will be enabled."
Datadog.configure do |c|
c.use :sinatra
c.use :mongo
c.use :rest_client
c.distributed_tracing.propagation_inject_style = [Datadog::Ext::DistributedTracing::PROPAGATION_STYLE_B3]
c.distributed_tracing.propagation_extract_style = [Datadog::Ext::DistributedTracing::PROPAGATION_STYLE_B3]
c.tracer tags: {
'lightstep.service_name' => 'cith-api',
'lightstep.access_token' => ENV['CITH_LIGHTSTEP_TRACING_TOKEN'],
'service.version' => version
}
end
else
puts 'No CITH_LIGHTSTEP_TRACING_TOKEN passed. Tracing is disabled.'
end
jaeger_host = ENV['JAEGER_HOST'] || 'localhost'
OpenTelemetry::SDK.configure do |c|
c.use 'OpenTelemetry::Instrumentation::Sinatra'
c.use 'OpenTelemetry::Instrumentation::RestClient'
c.add_span_processor(
OpenTelemetry::SDK::Trace::Export::SimpleSpanProcessor.new(
OpenTelemetry::Exporters::Jaeger::Exporter.new(
service_name: 'cith', host: jaeger_host, port: 6831
)
)
)
end
Note that in this configuration block there is nothing about MongoDB… that is because its bits have not been ported over from ddtrace yet (see issue #257).
Another difference here is that the spans are being output to a Jaeger agent… via UDP. As of today, this is the only exporter that OpenTelemetry for Ruby has. There is active work to implement others but, in the meantime, this means that some extra steps are needed to deal with the UDP traffic since it's really designed to be sent to localhost. More on this later.
CITH, like many of our apps, comes with a docker-compose.yml
to facilitate local development and testing. In the case of CITH, that file needed a few changes to switch from Lightstep to OTel:
This wrapped up all the code changes to CITH and was pretty easy to test out locally. The problem is that this was nowhere near the end of the road with regards to migrating CITH over to OpenTelemetry: its Helm chart still needed updating and additional infrastructure needed to be deployed.
To be able to deploy the new version of CITH and validate everything worked as desired, I also needed to update my OTel collector to emit traces to Lightstep and to deploy a Lightstep satellite via their Helm chart. I also needed to deploy these latter two to our production environment along with a Jaeger instance. The way I decided to tackle this was to update CITH's chart so that it could get data to the OTel collector, then deploy a satellite, then update the collector for Lightstep, and finally deal with upgrading the current Jaeger and preparing to deploy an initial one to production.
CITH’s chart was actually pretty easy: I just needed to delete all the Lightstep related bits from it and add a Jaeger sidecar to the API’s pod. The sidecar is able to collect the traces emitted to localhost and then send them via gRPC to the OTel collector. Here are the flags I added to the Jaeger agent:
args:
- --reporter.grpc.host-port={{ .Values.jaeger_host }}:14250
- --reporter.type=grpc
- --jaeger.tags=helm_chart={{ include "dio-cith.chart" . }},service.version={{ .Chart.AppVersion }}
Breaking those args entries down:
- reporter.grpc.host-port points the agent at the OTel collector's gRPC endpoint on port 14250.
- reporter.type=grpc tells the agent to use gRPC when reporting spans.
- jaeger.tags attaches the helm_chart and service.version tags to everything the agent forwards.
Remember earlier when I mentioned that the Jaeger exporter is really only intended for sending to localhost? Well, that is one reason we need a sidecar. The other is that there isn’t currently a way to add tags like service.version
without using the sidecar. That functionality is coming per work done to fix #312 but, in between now and then, this is what I can do.
When I started on this phase there wasn’t a repository for Lightstep’s Helm chart (#1). Fortunately, the fine folks at Lightstep were willing to rectify this and it is now available at both Artifact HUB and Helm Hub. I deployed their chart and it mostly “just worked” - the exception is that I never did get the statsd metrics coming out of it to work right with a Prometheus statsd exporter. For now I have simply given up on this aspect of monitoring the satellite and, instead, am hoping they implement native Prometheus metrics. Docs for all of this can be found here.
With the satellites up and running, it was time to add Lightstep as a destination in my collector configuration. Doing so was as simple as adding this to my exporters section and then adding otlp/lightstep
to the array of locations listed in the exporters part of the pipeline:
otlp/lightstep:
endpoint: "lightstep.lightstep.svc:8184"
insecure: true
headers:
"lightstep-access-token": {{ .Values.lightstepAccessToken }}
This simply sets up an exporter that sends data in OTLP format to the service named lightstep
in the lightstep
namespace on port 8184 and adds a header that includes the access token that matches the desired project in Lightstep. Fortunately, this is all that is needed to get data to Lightstep - no custom exporter or other hacks at all.
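For context, here is a minimal sketch of how that exporter entry might sit alongside the rest of a collector config. The receiver and processor names below are assumptions; only the otlp/lightstep parts come from the snippet above:

```yaml
exporters:
  otlp/lightstep:
    endpoint: "lightstep.lightstep.svc:8184"
    insecure: true
    headers:
      "lightstep-access-token": <token>

service:
  pipelines:
    traces:
      receivers: [jaeger]   # assumed receiver name
      processors: [batch]   # assumed processor name
      exporters: [jaeger, otlp/lightstep]
```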
I was actually dreading this step but, thanks to a tip from a coworker, it turned out to be really easy as Jaeger now provides a jaeger
chart for deploying their stack via https://jaegertracing.github.io/helm-charts/. For my setup, all I need to do is create a shallow Helm chart that has the jaeger
chart as a dependency and includes this values.yaml
file:
jaeger:
provisionDataStore:
cassandra: false
elasticsearch: true
storage:
type: elasticsearch
agent:
enabled: false
collector:
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 3
query:
ingress:
enabled: true
annotations:
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
hosts:
- jaeger-test.k8s.example.net
tls:
- hosts:
- jaeger-test.k8s.example.net
secretName: jaeger-test.k8s.example.net-tls
With all this in place I deployed everything to test, and then to production, and was able to see data from CITH in both Jaeger and Lightstep for both 🎉
Getting to this point has taken longer than anticipated but has been very fruitful as it has provided what I imagine to be a good foundation for all the other things I plan to do. Getting ABS and NSPooler updated to use this is basically a rinse and repeat of CITH so I am not repeating the details here. The one exception is that they still run in our Mesos cluster so an extra step is needed: I need a place to send their traces. I solved this by taking advantage of a host we had previously set up as a static Docker host. I simply deployed a Jaeger agent to that host that listened on port 6831/udp and gave it basically the same startup arguments that were used in CITH's Helm chart. This was all done with Puppet code via the puppetlabs/docker module and the following entry in Hiera:
---
docker::run_instance::instance:
jaeger-agent:
image: 'jaegertracing/jaeger-agent:latest'
ports:
- '6831:6831/udp'
command: '--reporter.grpc.host-port=otel-jgrpc-prod.k8s.example.net:443 --reporter.type=grpc --reporter.grpc.tls.enabled=true --reporter.grpc.tls.skip-host-verify=true'
To make this work I also had to add an ingress resource to my OTel collector’s deployment. The ingress looks like this:
{{- if .Values.ingress.enabled -}}
{{- $fqdn := .Values.ingress.protocol.jaegerGrpc.fqdn -}}
{{- $nameSuffix := .Values.ingress.protocol.jaegerGrpc.nameSuffix -}}
{{- $svcPortNumber := .Values.ingress.protocol.jaegerGrpc.svcPortNumber -}}
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: {{ include "dio-otel-collector.fullname" . }}-{{ $nameSuffix }}
labels:
{{- include "dio-otel-collector.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
{{- end }}
spec:
tls:
- hosts:
- {{ $fqdn | quote }}
secretName: {{ $fqdn }}-tls
rules:
- host: {{ $fqdn | quote }}
http:
paths:
- path: /
backend:
serviceName: otel-collector
servicePort: {{ $svcPortNumber }}
{{- end }}
The only thing special about this ingress is this line:
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
That ingress is paired with this entry in my values.yaml
file:
ingress:
enabled: true
annotations:
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
protocol:
jaegerGrpc:
fqdn: otel-jgrpc-test.k8s.example.net
nameSuffix: jaeger-grpc
svcPortNumber: 14250
That line is required to make the gRPC connection actually work. Beyond that, all it's doing is routing traffic for the listed host to the otel-collector
service on port 14250.
With this in place data started flowing from ABS and NSPooler too!
Part 3 of this series will cover how things changed, and got way better, when version 0.6.0 of the opentelemetry-*
gems came out. It will also talk about some additional learnings and getting VMPooler added into the mix of things sending tracing data.
My favorite personal project is an application called PiWeatherRock… or it was before I dove in head-first working to update it and create a community for its users. Tons of enthusiasm morphed into something else and, before I realized what was happening, I didn't even want to touch my computer after work. Months have passed since I last opened anything related to the project and, as best as I can tell, this is that thing I've seen others talk about called "burnout."
I'm writing all this in hopes that someone else who's coming up on their own burnout will recognize these same signs in themselves early enough to head it off. The first thing I suggest keeping an eye out for is a project that, all of a sudden, is all you can think about and all you do when off from work. For me, this instant and massively intense focus was all consuming. In hindsight, I have no doubt that letting it consume me is what led to my current state. The next thing I think I should have picked up on was slowly starting to avoid looking at the notifications related to my project. For example, if you go from being excited when someone submits a pull request to avoiding reviewing it, you may be starting to burn out. For me, not only did that happen, but I also started avoiding the Gitter that I had set up as a way to bring my community of users together… it's pretty disheartening to realize that you don't even want to engage with other people excited about using something you authored or maintain. To this day, I still can't bring myself to open my own Gitter.
Unfortunately, I don't have any answers to this other than to try to head it off before it consumes your enthusiasm.
If you are reading this in hopes of learning what my plan is for PiWeatherRock, all I can say is that I hope to resume working on it before the version I have running on my TV every morning stops working at the end of 2021. I have friends who would like to have their own PiWeatherRock and can’t do so without more work being done due to no new API keys being issued… I agree that this sucks. All I can say is that I really do think time will allow my enthusiasm to return.