Earlier in 2020, this very site was hacked two or three times, and I discovered the attackers got in through another WordPress website that had a vulnerable plugin loaded. So I decided it was time to move to static HTML using the Jekyll generator. I found a vanilla template and styled it to look very similar to my old WordPress website. Then I moved all comments to Isso, which let me keep my existing comments and still gives new visitors the ability to comment.
The result is that everything looks and feels the same! Google PageSpeed shows the site's score increased to 96%, which is great!
This has motivated me so much that I’ve been re-evaluating the need for WordPress on the smaller websites which I manage. None of them require the feature set of WordPress, so I’ve moved them to Jekyll as well.
I’ve been so devoted to WordPress that I feel a little bad about the move, but after three of these sites that didn’t really need WordPress were compromised, I feel good that static HTML closes a security hole I never needed to have open.
Have you been thinking of making a switch? Let me know in the comments!
I’ve been an avid user of weewx, weather archive software for personal weather stations. It’s great to be able to archive this data without relying on cloud companies that may shut down at a moment’s notice. At the beginning of 2016 I created a custom WordPress website for https://belchertownweather.com which used PHP and a lot of MySQL queries to get data as close to real time as possible. Then in late 2018 I discovered MQTT and had my light bulb moment: this was a game changer. Using the weewx-mqtt extension and an MQTT broker, weewx can now send weather LOOP data to the broker immediately, and the website updates instantly using AJAX without refreshing the page. I also have interactive graphs using Highcharts, and with MQTT I’m able to update those graphs when the ARCHIVE packet is published. The result is a website that truly updates the instant the weather station reports data. This is a win for everything on the site!
This new method of instant real-time data and charts gained some interest in the weewx user group, so I invested (a lot of) time converting my heavily customized PHP and MySQL website to Python Cheetah templates, since that’s how weewx generates webpages. Static webpages are a bit of a bummer, but the limitation comes from the way weewx operates: it supports different database backends and different options for uploading files to remote servers, which means the database may not even be local to the website. So static pages it is! That means A LOT of JavaScript and jQuery for the customized feel and real-time updates.
You can see the skin in action now at https://belchertownweather.com. The front page will connect to my weather station and begin to update the observations. Feel free to click around and explore – especially check out the graphs on the Graphs page.
If you run weewx and are looking for a website with interactive charts that can show your LOOP data in real time, check out my skin! You can download it, and get instructions on how to install and configure it, at the GitHub page here.
Need a cloud server to run your website and weewx theme? I recommend DigitalOcean for cloud servers because they’re fast, easy and secure. You can start your DigitalOcean Ubuntu server (compute) for as low as $5/mo and be online in 1 minute. Best yet, if you use this link you get $10 credit for free!
For the last few years I’ve been running a custom weather website. This website, in conjunction with weewx, updated itself every 10 seconds. But that bothered me: 10 seconds was far too slow for my liking!
Earlier this year (2018) I started using Home Assistant for home automation (goodbye, old unreliable cloud-based automation!) and it introduced me to MQTT. It was great to be able to get data from tiny Arduino sensors around the house, but I knew I could do more with it. That’s when I saw an MQTT extension available for weewx. I knew it was time for a website upgrade!
The result was a true real time auto updating website with no delay! Every time my weather station sends an update from the backyard (every 2.5 seconds), that data is immediately read by weewx which archives the data, runs some QC checks on it for accuracy, then publishes it to MQTT. Visitors on my website auto-subscribe to the MQTT topic through websockets and JavaScript updates the data on the webpage within milliseconds. It’s really cool!
If you’re not familiar with MQTT, it’s a machine-to-machine internet of things (IoT) communication protocol, initially designed as a lightweight publish/subscribe method for devices in remote locations with limited internet connectivity. It even works great for Arduino or NodeMCU temperature sensors around the house. The device that needs to send data publishes to a topic, and the device that needs to receive that data subscribes to that topic. Topic names are arbitrary, but they should make sense. For example, a topic could be weather/weewx/loop, meaning the weather parent category, the weewx software, and the loop function. Then you could have weather/weewx/archive for archive data, or weather/rain_bucket/current if you had a rain bucket that can talk MQTT. Whatever you want!
I have weewx configured to publish weather loop data to the topic weather/weewx/loop, and my website is configured to subscribe to that topic.
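To make the wildcard rules concrete, here is a small Python sketch of how a broker matches topic filters against topic names (a simplification; real brokers implement the full MQTT spec, including the single-level `+` wildcard shown here):

```python
def topic_matches(filter_str, topic):
    """Simplified MQTT topic-filter matching.

    '#' matches the rest of the topic (multi-level wildcard),
    '+' matches exactly one level. Sketch only, not a full
    implementation of the MQTT specification.
    """
    f_parts = filter_str.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True          # matches this level and everything below
        if i >= len(t_parts):
            return False         # topic ran out of levels
        if f != "+" and f != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

# Using the topics from this post:
print(topic_matches("weather/#", "weather/weewx/loop"))      # True
print(topic_matches("weather/#", "weather/weewx/archive"))   # True
print(topic_matches("weather/+/loop", "weather/weewx/loop")) # True
print(topic_matches("weather/#", "home/sensor1"))            # False
```

This is why subscribing to weather/# later in the tutorial catches both the loop and archive topics with one subscription.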
Because weewx publishes weather data so frequently (my loop is currently at 2.5 seconds), I couldn’t use a free MQTT broker (or server), and I didn’t want to pay for access to one. Some free brokers will handle the frequent data but offer no uptime guarantee, so I decided to install my own broker.
This tutorial was created in 2018 using Ubuntu 18.04, so some things may be different by the time you read this, or if you use another operating system.
You can run MQTT on a Raspberry Pi, but I recommend running MQTT on a cloud server because it’s always available, fast and easy. You can start your DigitalOcean Ubuntu server for as low as $5/mo and be online in 1 minute. Best yet, if you use this link you get $10 credit for free!
First install Mosquitto, a popular open-source MQTT broker.
sudo apt update
sudo apt-get install mosquitto mosquitto-clients
If you stopped here and ran sudo service mosquitto start, you’d have a very basic, working MQTT broker on port 1883 with no user authentication. You also wouldn’t have websockets yet. Websockets are needed if you have a website that needs to connect to your MQTT broker.
If you plan on using your MQTT Broker for a website, like the Belchertown weewx skin, then you need to enable websockets.
Let’s create a custom configuration file to hold this (and later additions) for Mosquitto’s config. Run sudo nano /etc/mosquitto/conf.d/myconfig.conf and add the following:
persistence false
# mqtt
listener 1883
protocol mqtt
# websockets
listener 9001
protocol websockets
Make sure you have no empty spaces at the end of those lines or Mosquitto may give you an error.
Restart Mosquitto with sudo service mosquitto restart and you should now have a working MQTT server on port 1883 and websockets on port 9001!
I locked down my broker so that only clients who know the password can publish to a topic. You can get super granular here, where certain usernames can publish only to certain topics. In my case I have just one user who can publish; all other clients that connect to the broker are anonymous and can only subscribe. Create a publishing user and password with:
sudo mosquitto_passwd -c /etc/mosquitto/passwd <your username>
Next, create an MQTT ACL (access control list) so that anonymous users are read-only but the weewx system can read and write to the weather topic. Run sudo nano /etc/mosquitto/acl and enter:
# Allow anonymous access to the sys
topic read $SYS/#
# Allow anonymous to read weather
topic read weather/#
# weewx readwrite to the loop
user <your username from above>
topic readwrite weather/#
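Conceptually, the ACL behaves like this toy Python model (the weewx username here is a placeholder for whatever user you created above, and Mosquitto’s real pattern matching is more sophisticated than a prefix check):

```python
# Toy model of the ACL file above: anonymous clients get read-only
# access, the publishing user gets read/write on the weather tree.
ACL = {
    None: {("read", "$SYS/#"), ("read", "weather/#")},   # anonymous clients
    "weewx": {("readwrite", "weather/#")},               # placeholder username
}

def allowed(user, action, topic):
    """Return True if `user` may perform `action` ('read'/'write') on `topic`."""
    for access, pattern in ACL.get(user, set()):
        # Treat a trailing '#' as "this prefix and everything below it".
        prefix = pattern[:-1] if pattern.endswith("#") else pattern
        if topic.startswith(prefix) and action in access:
            return True
    return False

print(allowed(None, "read", "weather/weewx/loop"))      # True
print(allowed(None, "write", "weather/test"))           # False
print(allowed("weewx", "write", "weather/weewx/loop"))  # True
```

The key behavior: anyone can watch the weather topics, but only the authenticated user can publish into them.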
Now let’s add the authentication and access control to the custom configuration file. Run sudo nano /etc/mosquitto/conf.d/myconfig.conf and add:
allow_anonymous true
password_file /etc/mosquitto/passwd
acl_file /etc/mosquitto/acl
Mosquitto is VERY picky: if you have a space in the wrong place, it will throw an error. Make sure there are no trailing spaces at the end of any line.
Save your file and run Mosquitto in the foreground with mosquitto -c /etc/mosquitto/mosquitto.conf. Then, from another SSH session, check that the MQTT ports are open. Run sudo netstat -tulpn | grep 1883 and you should see port 1883 open. Repeat with port 9001 to verify websockets. It should look something like:
tcp 0 0 127.0.0.1:1883 0.0.0.0:* LISTEN 973/mosquitto
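If you’d rather check from a script than remember the netstat flags, a tiny Python helper (a hypothetical convenience, not part of the tutorial) can test whether a TCP port is accepting connections:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. on the broker machine:
#   port_open("localhost", 1883)  ->  True if MQTT is listening
#   port_open("localhost", 9001)  ->  True if websockets are listening
```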
If it’s open like above, then open two more SSH connections to your server to test publishing and subscribing.
Session 1: run this command to subscribe to the weather topic.
mosquitto_sub -h localhost -t weather/#
The # means you want to listen to everything underneath weather, for example weather/topic1, weather/topic2, weather/topic3, etc.
Note: only use # when troubleshooting. You shouldn’t set up your website or application to publish or subscribe to # in normal operation.
Session 2: try to publish using this command:
mosquitto_pub -h localhost -t "weather/test" -m "hello world"
This command does not use any authentication, so it should fail! Your mosquitto_sub window should not show anything. If nothing appears, so far so good!
Now go back to SSH #2 (the one with mosquitto_pub) and run this command, which includes authentication:
mosquitto_pub -h localhost -t "weather/test" -m "hello world. this is to the weather topic with authentication" -u "<your username from above>" -P "<your password you created>"
You should see the message in your mosquitto_sub window now! If you do, great! If not, back up and try everything again. Sometimes Mosquitto is fussy and requires a full restart. You can also check the Mosquitto log file at /var/log/mosquitto/mosquitto.log.
So if you’ve made it this far you have a working MQTT broker with authentication and websockets!
If your MQTT broker is only going to be used for weather data, this next part is probably overkill, but you can add SSL with a free certificate from Let’s Encrypt. You will need a dynamic DNS hostname set up for your home IP; a provider like duckdns.org can make this easy. Dynamic DNS is a service that gives you a hostname, like myhomemqtt.duckdns.org, that always points to your IP, even when your internet provider changes it on you.
If you are running your broker at home (on a Raspberry Pi or something), then you need to setup Dynamic DNS so that the Let’s Encrypt certbot can reach your server for validation from the internet. Admittedly, this is where having a cloud server is easier since there is no port forwarding to mess around with.
Also, please read the setup guides for the Dynamic DNS provider you use. Some require a small bit of software to run so that it detects a change in your IP and it updates your hostname.
The first step is to install a web server, which will handle the verification that you are who you say you are. I prefer NGINX (pronounced engine-x), but you can use Apache if you’re comfortable with it. Run sudo apt install nginx and once that’s done you should have a working web server. Make sure you port forward or open these ports on your server’s firewall: 80, 443, 8883, 9001.
At this point you should be able to go to http://myhomemqtt.duckdns.org and see a page that says something like “Welcome to nginx!”. Try it from your cell phone (not on WiFi) to double-check that the outside world can see it, too. If so, install the Let’s Encrypt certbot with sudo apt install certbot and create the folder certbot will use: mkdir -p /var/www/html/.well-known. (Hint: even when we’re done, don’t delete this folder, since the automated renewals will use it in the future.)
Update NGINX to allow access to the new .well-known folder: run sudo nano /etc/nginx/sites-available/default and, below the location / { section, add this:
# Let's Encrypt
location ~ /.well-known {
allow all;
}
Restart nginx with sudo service nginx restart
Now attempt to get an SSL certificate with
sudo certbot certonly -a webroot --webroot-path=/var/www/html -d <your dynamic dns hostname>
Replace <your dynamic dns hostname> with your actual hostname (e.g. myhomemqtt.duckdns.org).
Follow the prompts and at the end you should see:
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
Great! You’ve got yourself a working SSL certificate! If you did not see that message, go back and try again: make sure your server is reachable externally and that the .well-known folder is, too. There are plenty of resources online for getting Let’s Encrypt working with NGINX.
The certificates are only good for 3 months, so you’ll want to set up automatic certificate renewals. It’s easy: type crontab -e and add this at the end of the file:
# LetsEncrypt renewals every Monday at 2:30 am
30 2 * * 1 /usr/bin/certbot renew >> /var/log/le-renew.log
If all you’re using your MQTT broker for is weather data, then SSL can be considered optional. But setting up Let’s Encrypt is about a 10 minute process, so it makes sense to go the extra mile and secure everything.
Update your custom MQTT config file to use the new SSL certificates. Run sudo nano /etc/mosquitto/conf.d/myconfig.conf and update it to the following:
persistence false
allow_anonymous true
password_file /etc/mosquitto/passwd
acl_file /etc/mosquitto/acl
# Insecure mqtt to localhost only, and secure mqtt
listener 1883 localhost
listener 8883
certfile /etc/letsencrypt/live/myhomemqtt.duckdns.org/cert.pem
cafile /etc/letsencrypt/live/myhomemqtt.duckdns.org/chain.pem
keyfile /etc/letsencrypt/live/myhomemqtt.duckdns.org/privkey.pem
protocol mqtt
# websockets
listener 9001
certfile /etc/letsencrypt/live/myhomemqtt.duckdns.org/cert.pem
cafile /etc/letsencrypt/live/myhomemqtt.duckdns.org/chain.pem
keyfile /etc/letsencrypt/live/myhomemqtt.duckdns.org/privkey.pem
protocol websockets
Where myhomemqtt.duckdns.org is your hostname or dynamic DNS name. This setup makes MQTT port 1883 available to localhost only, opens secure MQTT on port 8883 to the outside world, and uses your new SSL certificates.
Restart your Mosquitto server with sudo service mosquitto restart and check that the ports are open with sudo netstat -tulpn | grep -E '8883|9001'. You should see something similar to:
tcp 0 0 0.0.0.0:9001 0.0.0.0:* LISTEN 973/mosquitto
tcp 0 0 0.0.0.0:8883 0.0.0.0:* LISTEN 973/mosquitto
tcp6 0 0 :::8883 :::* LISTEN 973/mosquitto
That should do it! At this point you should have a working MQTT broker which uses SSL!
If you have any questions, or see room for improvements, let me know in the comments.
One of my thermostats is in my hallway, and it goes into Eco mode if I don’t walk by it frequently enough. A push notification telling me it’s in Eco mode lets me open the app and set it back to heating or cooling.
I couldn’t find a way to get these status notifications in the Nest app; the only notices it sends are filter reminders. I could disable Eco mode, but it’s cool that it’s trying to save me money. Maybe I should walk by the thermostat more often? I’m too lazy, so let’s set up push notifications instead!
I’ve created this status notification method using Python, SQLite, IFTTT and a Nest Developers account. You’ll need somewhere to run this script. Linux is preferred and what this guide will be based on. A Raspberry Pi is perfect, but a virtual machine running Ubuntu or CentOS will work too.
Table of Contents:
Download the script
Setting up a Nest Developers Account
Setting up IFTTT Webhooks
Installing the Notification Python script
Removing Everything
Download the script on GitHub along with an empty database. The database is optional since the script will create a new database for you.
You’ll need a Nest developers account to get the status of your thermostats. Since we can’t access the thermostats directly on our LAN, we have to get the information through the cloud via Nest’s API.
In the Nest Developers console:
1. Click Create New Product.
2. For category, select Home Thermostats / Home Automation and Sensors; for users, select Individual.
3. Under permissions, add Away with read v2 and a description of “To know when house is set to away mode”.
4. Add Thermostat with read v6 and a description of “To be able to see what mode the thermostat is in”.
5. Click Create Product.
Using your web browser, open the Authorization URL link. Read through the permissions to make sure they match what we just defined, and click Accept.
You’ll receive a PIN code on the next screen. Save that code so we can exchange it for a permanent code.
Log into your Linux server or Raspberry Pi and run this command on the terminal:
curl -X POST -d "code=AUTH_CODE&client_id=CLIENT_ID&client_secret=CLIENT_SECRET&grant_type=authorization_code" "https://api.home.nest.com/oauth2/access_token"
Replace AUTH_CODE with the PIN you received, CLIENT_ID with the Product ID from the Nest Developers product you just created, and CLIENT_SECRET with the Product Secret from the same page.
You should get a reply back that says something like:
{"access_token":"c.ULQLf..................","expires_in":315360000}
If you see that line, you’re good! Save the portion that says c.ULQLf......... because that’s your permanent token. If you don’t see a token that begins with c., verify the curl command above has the right values and try again.
More Nest API information is here if you want to read up on it.
IFTTT is how I handle all of my notifications. It’s free, reliable, works great and has support for many platforms (mobile, desktop). There are other services out there, but I haven’t had much success with them and I don’t want to pay for premium services.
In IFTTT, search for the Webhooks service and click it to open it. Click Connect to enable the service, then click Settings to view your key; the URL shown ends in /use/<your key>.
Warning: clicking Edit connection regenerates the key, making the old key unusable.
Next we need to set up the applet:
1. Click +This, select Webhooks, enter an event name, and click Create Trigger.
2. Click +That, choose the notification action you want, and click Create action.
3. Click Finish.
Install the IFTTT app on the phones and tablets you’d like to receive notifications on.
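For reference, a Webhooks applet is triggered by a plain HTTP request to maker.ifttt.com. Here’s a minimal Python sketch of roughly how such a trigger can be sent (the event name and key are placeholders; the downloadable script handles this for you):

```python
import json
import urllib.request

IFTTT_EVENT = "nest_status"       # placeholder: your applet's event name
IFTTT_KEY = "your_webhooks_key"   # placeholder: the key after /use/ in Settings

def build_trigger_url(event, key):
    """Webhooks trigger endpoint: /trigger/<event>/with/key/<key>."""
    return f"https://maker.ifttt.com/trigger/{event}/with/key/{key}"

def send_notification(message, event=IFTTT_EVENT, key=IFTTT_KEY):
    """POST to the trigger URL; value1 becomes {{Value1}} in the applet."""
    data = json.dumps({"value1": message}).encode()
    req = urllib.request.Request(
        build_trigger_url(event, key),
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```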
Now that we have our services setup, time to setup the Python script.
If you haven’t yet, download the notification Python script.
First, make sure you have the Python requests module installed. You can install it via apt or pip; here’s apt:
sudo apt-get update
sudo apt-get install python-requests
Then open the Python script in your favorite text editor and, at the top of the file:
- Set databaseFile to the place where you’d like to store your SQLite database (e.g. /home/pi/nestapi/nestapi.sqlite).
- Set iftttEventName to the name you gave the IFTTT applet earlier.
- Set iftttSecretKey to the text after the /use/ portion of the Webhooks URL from earlier.
- In nestAPIURL, change the c.12345 text to the token you received from the curl command earlier.
Feel free to check out the rest of the script, but you shouldn’t have to alter anything else.
Run the script by typing python nestapi.py in the terminal and see if you get any errors. On first run the script creates the database and populates it with your thermostat information. You may get some initial push notifications, too.
If you don’t have any errors then the final step is to add the script to the crontab so that it runs on a schedule to check your thermostats.
Run crontab -e
and add:
# Nest API Minutely Thermostat Check
* * * * * /usr/bin/python2 /home/pi/nestapi/nestapi.py >/dev/null 2>&1
Change the path of nestapi.py if necessary.
This runs the Python script every 60 seconds to get the thermostat details from the Nest Developers API. The script compares them against the saved info in the database, and if the setpoint or running state changed, you’ll get a push notice from the IFTTT app.
There is an option to have streaming updates from Nest’s API so you get real-time thermostat updates. I did not attempt to figure that out because the state of my thermostat doesn’t change all that often and a check every 60 seconds is perfect.
To remove everything, just reverse the setup: delete the crontab entry, delete the script and its database, disconnect the Webhooks service in IFTTT, and delete the product from your Nest Developers account. That’s it!
The fix for me was to use mailx with a Gmail account profile. This way any email sent from the system goes through Gmail’s servers. Since I host my email domain with G Suite, this was an easy win. I simply made a noreply mailbox, set up two-factor authentication on it, and then generated an app password for it. The rest is pretty simple!
I’m performing this install on Ubuntu 16.04
sudo apt-get install heirloom-mailx
Add the following information into ~/.mailrc with nano ~/.mailrc
account gmail {
set smtp-use-starttls
set ssl-verify=ignore
set smtp-auth=login
set smtp=smtp://smtp.gmail.com:587
set from="noreply@yourdomain.com(Your Real Name)"
set smtp-auth-user=noreply@yourdomain.com
set smtp-auth-password=your_less_secure_apps_password
}
Change the settings above to match your requirements. Save and send yourself a test message with:
echo -e "Mail body text" | mailx -A gmail -s "Mail subject" your@email
You should receive your email soon from your gmail account!
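If you’d rather script a one-off email in Python instead of mailx, the standard library can send through the same Gmail SMTP endpoint. This is a hedged sketch with placeholder addresses; Gmail still requires the app password described above:

```python
import smtplib
from email.message import EmailMessage

def build_message(subject, body, from_addr, to_addr):
    """Build the email, equivalent to what the mailx profile sends."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg.set_content(body)
    return msg

def send_via_gmail(msg, user, password):
    """STARTTLS on port 587, matching the mailx profile above."""
    with smtplib.SMTP("smtp.gmail.com", 587) as smtp:
        smtp.starttls()
        smtp.login(user, password)
        smtp.send_message(msg)

# Example (placeholders, same as the mailx test message):
# msg = build_message("Mail subject", "Mail body text",
#                     "noreply@yourdomain.com", "you@example.com")
# send_via_gmail(msg, "noreply@yourdomain.com", "your_app_password")
```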
For automated system-wide emails to be sent correctly, you need to repeat this as root. You can sudo su to get a root shell and then repeat this process so root has its own ~/.mailrc.
This has helped keep my automated emails from being flagged as spam. Hope this helps you too!
simplemonitor is the solution! Thanks to @jamesoff and his work on simplemonitor, I now have a reliable way to watch Nagios and make sure it doesn’t go belly up. The premise of simplemonitor is very much like Nagios: you first define a host, then layer custom monitors on top of that host.
I installed simplemonitor on my AWS EC2 instance so I can monitor from outside my house. I chose to do this because I wanted to know if I had a power outage or internet outage at home. The first line of defense in knowing whether Nagios is working properly is knowing whether it can even reach the internet.
Since I’m monitoring from outside my house, I first opened a port on my firewall and did a port forward to Nagios. I verified I could get to Nagios using my Dynamic DNS hostname + the port I opened.
Then on my EC2 instance I installed simplemonitor with git clone https://github.com/jamesoff/simplemonitor. Open the monitor.ini file and specify some system settings:
[monitor]
monitors=/home/pat/simplemonitor/monitors.ini
interval=60
[reporting]
loggers=db
alerters=gmail
[db]
type=db
db_path=/home/pat/simplemonitor/monitor.db
only_failures=1
[gmail]
type=execute
fail_command=/bin/bash -c "/usr/bin/printf \"The simplemonitor for {name} has failed on {hostname}.\n\nTime: {failed_at}\nInfo: {info}\n\" | /usr/bin/mailx -A gmail -s \"PROBLEM: simplemonitor {name} has failed from {hostname}.\" your@email"
success_command=/bin/bash -c "/usr/bin/printf \"The simplemonitor for {name} is now successful on {hostname}.\n\" | /usr/bin/mailx -A gmail -s \"RECOVERY: simplemonitor {name} is successful from {hostname}.\" your@email"
A few things are going on here: I’m monitoring every 60 seconds, and when there’s an alert I execute a bash command that uses mailx to email me on failure and on recovery. You’ll need mailx configured with a Gmail account profile for this to work. Modify to your needs if you go this route.
Then open monitors.ini and define your monitors:
[home-ping]
type=host
host=home.myddns
tolerance=2
[home-nagios]
type=http
url=http://home.myddns:port
gap=300
depend=home-ping
If you’re using this as an example, replace home.myddns with your DDNS hostname, and port with the port you opened for Nagios.
In the host type definition, tolerance=2 means that after 2 failed ping checks the host goes into a failure state (and sends me an email based on the monitor.ini config).
In the http type definition, gap=300 means it only checks the Nagios website every 5 minutes. depend=home-ping means that if the home-ping check is in a failure state, this check is skipped, since something else is already wrong.
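The tolerance behavior can be pictured with a toy model (this is a sketch of the behavior as I understand it, not simplemonitor’s actual code):

```python
class HostMonitor:
    """Toy model of the tolerance setting: the host is only considered
    failed after `tolerance` consecutive failed checks."""

    def __init__(self, tolerance=2):
        self.tolerance = tolerance
        self.consecutive_failures = 0

    def record(self, check_ok):
        """Record one check result; return whether the host is now failed."""
        if check_ok:
            self.consecutive_failures = 0
        else:
            self.consecutive_failures += 1
        return self.failed

    @property
    def failed(self):
        return self.consecutive_failures >= self.tolerance

m = HostMonitor(tolerance=2)
print(m.record(False))  # False - a single blip is tolerated
print(m.record(False))  # True  - second consecutive failure triggers the alert
print(m.record(True))   # False - recovered
```

This is why a momentary dropped ping from my home connection doesn’t wake me up; only a sustained outage does.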
I set simplemonitor to run on reboot by running crontab -e and adding:
# Start simplemonitor at reboot
@reboot /usr/bin/python2 /home/pat/simplemonitor/monitor.py -f /home/pat/simplemonitor/monitor.ini >/dev/null 2>&1
However, if you want to run it now without a reboot, this command sends it to the background and monitoring starts right away:
/usr/bin/python2 /home/pat/simplemonitor/monitor.py -f /home/pat/simplemonitor/monitor.ini >/dev/null 2>&1 &
That’s it! A reliable way to determine whether my Raspberry Pi Nagios is running as it should. To complete this setup, I could add a Nagios service check to make sure the /home/pat/simplemonitor/monitor.py process is running on the AWS server (visible in ps aux).
After you have Raspbian installed, update it using
sudo apt-get update; sudo apt-get upgrade
Then reboot the Pi so we have a fresh start. Once logged back in run
sudo apt-get install nagios3
During the installation it will prompt you for a password you want to use on the website. Enter it here, confirm it in the 2nd screen and remember it for later.
The installation will continue to setup more packages needed. This could take some time depending on the vintage of your Raspberry Pi.
Once the installation is complete, browse to the Nagios site using http://your.raspberry.pi.ip/nagios3 and login with username nagiosadmin and the password you selected above.
That’s it! Nagios is ready to be configured to monitor your devices.
Setting up your first host can seem confusing, but it’s pretty easy once you understand the pieces.
What is a host? A host is a device, and Nagios checks hosts for up or down state only. Once you have a host, you can layer on a service. A service can be anything you want: for example, you can check that files are within a certain timeframe, that a port is open, or that a process is running. It all starts with a host, though.
All of the Nagios config files live in /etc/nagios3, and the host and service files are in /etc/nagios3/conf.d. Since I use the Raspberry Pi Nagios to monitor devices in my house, I made a file called home.cfg using nano /etc/nagios3/conf.d/home.cfg and put this inside:
define host {
    use        generic-host
    host_name  ObserverIP
    alias      ObserverIP
    address    192.168.0.55
}
In this example, this is what monitors my weather station’s ObserverIP unit and will notify me if it goes down. This is part of my plan to keep reliable weather data going by making sure it’s up all the time.
The generic-host is a template defined in /etc/nagios3/conf.d/generic-host_nagios2.cfg. A template helps apply bulk settings without having to redefine them over and over. The host_name is the name of the host, and is typically referenced by services. The alias is a friendly name for the host. The address is the IP address of the host; you can use a DNS name if your LAN is set up to resolve local hostnames.
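Conceptually, a template is just a set of defaults that each host definition overrides. Here is a toy sketch of that inheritance (the template values shown are made-up examples, not the real contents of generic-host_nagios2.cfg, and real Nagios inheritance supports chained templates):

```python
# Made-up example defaults, standing in for the generic-host template.
generic_host = {
    "check_interval": 5,
    "notification_interval": 60,
    "max_check_attempts": 10,
}

# The host definition from home.cfg, as a dict.
observerip = {
    "use": "generic-host",
    "host_name": "ObserverIP",
    "alias": "ObserverIP",
    "address": "192.168.0.55",
}

# Effective settings: template defaults, overridden by the host's own
# values (the 'use' directive itself is consumed, not kept).
effective = {**generic_host,
             **{k: v for k, v in observerip.items() if k != "use"}}

print(effective["address"])         # 192.168.0.55 (from the host)
print(effective["check_interval"])  # 5 (inherited from the template)
```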
Note: Any device I want to monitor in my house I have set with a DHCP Reservation. This is so the device’s IP never changes and I can always find it on the network. If your router or DHCP server can do DHCP reservations, use them. They are easier to do than a static IP and you don’t have to reconfigure any of the devices. Most smarthome devices do not have a way to set a static IP, so doing a DHCP reservation is ideal.
Once you have added your hosts, reload Nagios to have it start monitoring it.
First, check for errors with
sudo /usr/sbin/nagios3 -v /etc/nagios3/nagios.cfg
If you see no errors and it says Things look okay
then reload with
sudo service nagios3 reload
That’s it! Your Pi should now be reliably monitoring your home devices for up/down state.
If you want more detailed monitoring, like making sure that SocketLogger has port 2999 open, that’s a service and not covered here. Keep an eye out for that soon though!
I’m monitoring my weather station, Raspberry Pis, SmartThings hub, IoT WiFi devices, my small lab Asterisk PBX, and websites I’ve created for businesses, family and friends. It’s important to me to run a healthy environment, whether it’s in my house or not.
I had a situation recently where the GFCI outlet that my air conditioner’s condensate pump was plugged into had popped its breaker and turned off my pump. This caused an overflow and a small flood in my basement. I don’t go to my basement every day, but thankfully I went down for something and caught the flood before it got bad. Who knows how long it would have been overflowing had I not gone down there. I needed a solution.
I purchased a Belkin WeMo Mini smart outlet because it’s a WiFi device and I knew I could ping it. I gave it a fixed IP using a DHCP reservation, so it keeps the same address on my LAN even if I unplug it and plug it back in. Then I added that IP to Nagios, and now when the GFCI outlet trips again I’ll get alerted. This helps me react faster (e.g. I can log into my Nest thermostat and turn off the air conditioning to prevent another overflow). I have an identical setup on my instant hot water tank because it also uses a condensate pump.
I’m not using the WeMo Mini for its intended purpose. I have nothing plugged into it. 🙂 I’m simply using it as a WiFi endpoint to monitor some infrastructure in my house.
SmartThings and Nest do a good job of notifying me when my WAN goes down (from a power outage or something), but even with all these smart devices – none of them will tell me when my GFCI outlet turns off.
If you get a little creative, there’s all kinds of ways to monitor your home’s infrastructure with Nagios to help identify problems before they become disasters.
How did we ever survive before WiFi? 🙂
Tornado Channel was created as a viewing platform for storm chasers who stream live video while chasing. Keeping a clean interface and an easy user experience was the sole vision for the design. Instead of a landing page, visitors land right on the Google Map, which shows active storm chasers and weather data. The map has multiple weather overlays from the National Weather Service as well as live updated radar from Iowa State Mesonet.
A background PHP script runs on cron to poll the YouTube API and determine whether a storm chaser is live. If a chaser’s YouTube channel is live, the script retrieves their GPS position from Spotter Network and adds a position marker to the map. All map elements update automatically in the background. Chasers can log in through OAuth providers to manage their account details.
In 2018 this site was retired.
O’Brien Photo is a wedding photography business I own with my wife. I created this website with WordPress and developed a custom theme designed to walk a visitor through the steps of becoming a client via unique navigation menu titles. There is also a private client portal where we can serve documents and photos privately to each client.
O’Brien Photo’s WordPress theme is a child theme using the WordPress Genesis framework, and also has a responsive design for mobile users.