Vacation to Hawaii

For the first time since last summer I took an actual vacation where I traveled somewhere. Until now, the only state to which I had never been was Hawaii. Now I can say that I’ve been to all fifty states and that I’ve spent not-insignificant amounts of time in each of them. (Next goals: visit all ten provinces and three territories of Canada and all of the public islands in Hawaii. Maybe after that I’ll try out other countries.) Of course, I took pictures.

On the first day I just stayed at a hostel in Kailua-Kona. The sky was clear, so I took this photo of the waxing crescent moon. (Also called an ‘ole kū kolu moon.) The moon had a very pretty halo around it.

A halo around the ‘ole kū kolu moon on March 13th, 2019.

The second day I went to South Point, or Ka Lae, the southernmost point in Hawaii and the United States. (Wikipedia disagrees with me, but Palmyra is a territory, not part of a state.) Interestingly, South Point is not that interesting. It’s on a cliff and very rocky and popular with people who have fishing rods.

Later on the second day I went into Hawai’i Volcanoes National Park. I stayed in a cabin just outside the park but needed to enter the park to check into the cabin. There I wandered through the steam vents along the Kīlauea caldera. It’s remarkable to see steam simply rising from the dirt. Then you can look out and see toxic smoke rising from a gigantic hole in the ground that periodically shoots molten rock into the air. Most of the area around the caldera was closed because of recent eruptions. However, there were no eruptions at the time of my visit and no lava was visible anywhere in the park. Very disappointing but probably safer.

The Pacific Ocean crashes against the cliffs at the southern end of Volcanoes National Park. These cliffs formed within the past seventy years.

On the third day I visited Hilo. Hilo felt like home in Seattle with its continually misty rain and lush greenery. On my tour of Hilo I stopped at the Kaumana Caves Park, a county park that features the entrance to a totally uncontrolled lava tube. Seriously, you just walk into this pitch-black cave at your own risk. My headlamp did not suffice and I actually relied on the flashlight on my phone to see deep into the cave. You turn several times and duck through several small passages in the two-and-a-half-mile tunnel, completely losing sight of the entrance. That is, it is nothing like the caves in Virginia.

Spelunkers descend into the Kaumana Caves lava tube with their flashlights shining. The plant life hanging from the top of the cave is actually a volcanic glass fiber known as Pele’s Hair.

I stayed at the Inn at Kulaniapia Falls, an “off-grid” inn overlooking a waterfall. It was a really pleasurable experience and I would do it again. The staff are lovely, even if they did wake me up at 4am slamming doors in the office below my room. At least they compensated me for that!

Rainbows appeared in the morning light around the waterfall. The falls emptied into a large pool that then drained into several smaller waterfalls.

I missed the dinner reservation at Kulaniapia Falls and instead went into Hilo and had a fantastic dinner at Pineapples restaurant.

The fourth day was the most remarkable. I took a guided tour to the top of Mauna Kea, the tallest mountain in the entire Hawaiian island chain at 13,803 feet above sea level. Another 17,000 feet or more of the mountain is below sea level, making it the tallest mountain in the world measured from its base. Fortunately, the snow was gone from the top of the mountain this March.

The shadow of Mauna Kea against the clouds over which it looms.

The tour took us up the island’s Saddle Road, where I snapped this picture of a tree that survived a lava flow while everything around it burned. Incidentally, this island is covered in wild goats and feral pigs and cats, brought in various waves by Polynesians, the British, and the Spanish. Additionally, this tree and the surrounding lava flow are actually pretty representative of the island of Hawai’i. Most of the island is covered in barren lava rock. The western side of the island receives hardly any rain, and rain is what begins to break down the lava rock into more fertile land that facilitates growth. There’s also the fact that this island is still an active volcano, so new lava is being laid down all the time.

A lone tree surrounded by lava.

Once at the top of Mauna Kea, you can see the most beautiful sunset that you’ve ever seen, with more colors than you ever imagined.

The sun setting into the clouds from the top of Mauna Kea.

The top of Mauna Kea, you may have heard, is covered with telescopes. In fact there are thirteen telescopes and a couple more antennas. Because the University of Hawaii actually destroyed the original summit, Native Hawaiians designated a nearby peak as the place where they would perform their rituals.

The actual summit of Mauna Kea and the trail up to it, open only to Native Hawaiians.

After watching the sunset, the tour guides took us down to the Mauna Kea Very Long Baseline Array antenna, part of an array of ten radio antennas spanning the globe. That is where we got a tutorial on the stars and how the original inhabitants of the Hawaiian islands navigated by them. These tour guides were the best and shared a ton of information about Hawaii that I don’t know where else I would have found.

Even with a half moon, you could still see a ton of stars by the VLBA antenna at the top of Mauna Kea.

And I had a terrifically disappointing journey back to Seattle via Delta, where they managed to do everything wrong from start to finish. I also don’t regret renting a Jeep, but I would have preferred that Avis had given me a Jeep made in the last two or three years rather than the crap that they did give me. It didn’t even support Bluetooth. Still, I’d go again.

Glass Blowing

I recently took a glass blowing class at Pratt Fine Arts Center. The class was four hours every week for six weeks and we learned how to gather glass from the furnace, blow a bubble, gather again, and shape it into something that looks marginally close to a cup or whatever. Here are some of the things that I put together, including two cups, a bowl, a vase-like thing, some ornaments, a pumpkin, and a chili pepper.

A “bowl”.
A “vase”.
Two cups!
The ornament on the left weighs about half a pound. The ornament on the right is all blown out of proportion.
This pumpkin is my second favorite. It involved a cast.
This is my favorite. It’s a chili pepper.

Views from the Viaduct

Just before midnight on January 11th, 2019, the state of Washington closed the Alaskan Way Viaduct permanently after more than sixty years of ferrying cars along the Seattle waterfront. The dull roar that made it impossible to hold a conversation in Victor Steinbrueck Park disappeared, replaced by a calm that the waterfront had not known since the early 1950s. On the night of February 1st, the state closed the Battery Street Tunnel, which the viaduct used to feed into. (It was still being fed by ramps from Western Avenue after the viaduct’s closure.) And then on February 2nd the city and state opened up the new 99 Tunnel, the Battery Street Tunnel, and a portion of the upper deck of the viaduct to pedestrian traffic before the new tunnel opened to cars on February 4th. I went for a tour. It turns out that taking pictures of a road while standing on that road is not especially interesting, but here are a handful of photos.

Revelers walk through the poorly lit Battery Street Tunnel.
The lower deck of the viaduct, while lit, was closed to pedestrians. This is a view from the Seneca Street off-ramp.
The Seattle Great Wheel from the viaduct with the Port of Seattle behind it. No, this is not really viaduct related.

Still More Black and White Photography

This week was a bit of a weird one with some travel mixed in with family visiting. As such I did not get to take as many photos that I considered good as I had in the previous two weeks. This week was also the week that our class didn’t have an outing slash field trip. So here are some photos from my trip to Los Angeles plus some of Duke and Seattle Center.

A lifeguard station on the beach in Santa Monica, California.
Despite being part beagle, Duke has fur that is only black and white.
The metal paneling on the outside of the Museum of Popular Culture, aka MoPOP, formerly known as the Experience Music Project, aka EMP.
Artwork outside the Museum of Popular Culture. The color version isn’t that bad, either.

Backing Up Your Backups

Like anyone who cares about the data that is on his or her computer, I keep backups. I don’t have the backups run automatically, as is the default in my primary operating system, macOS. I also don’t do backups very often. But I do want to keep my data safe, encrypted, and off-site but also still easily accessible. As you might guess, backups for me are a complicated, manual affair. This is how that goes.

(Why don’t I run automatic backups? Well, a lot of the work that I do on my laptop is with local virtual machines running inside VMware Fusion. Virtual disks attached to virtual machines get very big on your actual disk. Small changes to the data on the virtual disk usually result in huge changes to the underlying files that back those virtual disks, and those files are what actually need to be backed up. Finally, backing up an in-use virtual disk is not conducive to a quality restoration. So before a backup I stop my virtual machines and then manually initiate the backup. Then I go to sleep while it runs.)
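If you wanted to script that first step, VMware Fusion ships with a command-line tool called vmrun that can suspend running virtual machines. A minimal sketch, assuming vmrun is on your PATH:

#!/bin/sh
# a minimal sketch: suspend every running VMware Fusion VM before a backup.
# "vmrun list" prints a count line followed by the .vmx path of each
# running VM, so skip the first line and suspend the rest.
vmrun list | tail -n +2 | while read -r VMX; do
    vmrun suspend "$VMX"
done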

To start, my primary computer is a laptop with a one terabyte hard disk that is usually about half full and whose contents churn fairly constantly. I also keep a second hard disk to store about two terabytes of assorted other files — mostly old photographs — that I want to archive but no longer need to be on my laptop. All of my backups from the laptop also go to the second hard disk. This disk is kept in my living room so that I may easily connect it to my laptop to access the assorted archived data and also to more easily run the backup process. That’s easy enough.

But after an apartment building next door to mine burned down back in 2009, I started keeping a copy of my backup disk in a fire safe in my apartment. After my fire safe got a crack in the casing, I worried that it might not be as reliable as I expected it to be, so I also started keeping a second copy of my backup disk in my desk at work. When I moved to Seattle and realized that an earthquake or volcano might wipe out both my apartment and my office in the same event, I also started keeping a copy of my backups in the cloud.

I only have two files that actually need to be backed up. That’s right: two. On my unencrypted external disk I have an encrypted sparse bundle image of my laptop’s Time Machine backups and an encrypted sparse bundle image containing about two terabytes of the digital detritus — mostly old photographs — collected from my twenty-plus years of using a computer. Actually, since they’re sparse bundle images those two files are really two directories containing approximately 223,000 files, but, uh, close enough.

So my primary backup disk contains those two sparse bundle images. After I perform a backup those two files change. The next step is to replicate those two sparse bundle images to my backup’s backups. How is this done? For the backup disks that I keep in my fire safe and in my office, this is easy: rsync. With both the original backup disk and the backup backup disk connected to my laptop, I open up Terminal and run this command:

cd /Volumes/original
# mirror the sparse bundles, preserving permissions and hard links, and
# delete anything on the target that no longer exists on the source
rsync -rlptgoDzhOHi --stats --numeric-ids --delete-during *.sparsebundle /Volumes/backup

So that’s easy enough. It copies the sparse bundle images from one disk to the other. I run the rsync twice, once for each backup backup disk, transport the disks to their respective locations, and I’ve got my two backups.
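If you wanted to fold the two rsync runs into one script, a minimal sketch might look like this (the /Volumes/backup-safe and /Volumes/backup-office volume names are placeholders for however your disks actually mount):

#!/bin/sh
# a minimal sketch: mirror the sparse bundles to whichever backup
# backup disk happens to be connected right now
set -e
cd /Volumes/original
for DST in /Volumes/backup-safe /Volumes/backup-office; do
    if [ -d "$DST" ]; then
        echo "replicating to $DST"
        rsync -rlptgoDzhOHi --stats --numeric-ids --delete-during *.sparsebundle "$DST"
    fi
done

The cloud backups are a little bit more complicated.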

After researching a number of backup options — S3, Glacier, Dropbox, etc. — none of them was really feasible on cost. Using S3, for example, to back up 2TB of data would cost me about $45 a month, plus the cost of the data transfer, and that starts to push $600 a year. Glacier is nearly impossible to use. Dropbox doesn’t let you store more than 2TB on its less expensive professional plan and the option that lets you store unlimited data costs $60 per month or $720 per year.

But I did find an option that lets you store unlimited data and doesn’t cost an arm and a leg: Backblaze B2 Cloud Storage. My two terabytes is costing me $10 per month and there is no cost to transfer the data to their system and no cost to restore the data from their system. (And when I ran the upload from my office the only limitation on upload performance was my laptop’s 1Gbps network interface. I was able to push three to four hundred gigabytes every hour.)

Because I’m such a fan of rsync, it’s convenient that there is a similar option for backing up to the cloud: rclone. After I set up my storage space on Backblaze I created an access key for my laptop, configured rclone with its incredibly simple “config” command, and now I just run this command:

# push the sparse bundle to the B2 bucket with 32 parallel uploads
rclone --transfers 32 --progress sync /Volumes/storage/compy.sparsebundle/ b2-lockaby:lockaby/compy.sparsebundle/

“b2-lockaby” is the rclone nickname for my Backblaze bucket. Unfortunately, wildcards for matching files don’t work, so I have to run this command twice: once for each “file” that I am backing up. Still, it’s trivial.
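Since rclone won’t expand the wildcard itself, a small shell loop can stand in for it. A sketch, assuming both sparse bundles live at the top of the storage volume:

#!/bin/sh
# a minimal sketch: run one rclone sync per sparse bundle; the shell
# expands the glob so each bundle gets its own sync invocation
set -e
for BUNDLE in /Volumes/storage/*.sparsebundle; do
    NAME=`basename "$BUNDLE"`
    rclone --transfers 32 --progress sync "$BUNDLE/" "b2-lockaby:lockaby/$NAME/"
done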

But there are a few catches to get to this point. First, my backup disks are ALL unencrypted, but I require that my data only ever be stored encrypted. That’s why my sparse bundles are encrypted. So when I connect my unencrypted disk I then have to mount my encrypted sparse bundles before I can use the data, and I unmount them again before I do the rsync and the rclone. For the sparse bundle image full of assorted data it’s easy to see how to set this up and how this works.
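The mounting and unmounting is just hdiutil. For example, with the archive image (the file name and volume name here are illustrative):

# mount the encrypted archive image; hdiutil prompts for the passphrase
hdiutil attach /Volumes/storage/archive.sparsebundle

# ...browse the data at its mount point...

# unmount it again before running rsync or rclone against the bundle
hdiutil detach /Volumes/archive

But for the Time Machine backup this isn’t as obvious.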

If you’re using macOS and you want to back up your hard disk you have two options. The first option is to connect an external disk to the computer and back up to that. If you tell macOS to encrypt the backup it will convert the disk to a FileVault disk. The second option is to connect your computer to a network disk such as one attached to an AirPort Express or AirPort Extreme. If you use the second option and you tell it to encrypt your backups then macOS will create an encrypted sparse bundle image on the network disk.

But Apple, in its shortsighted wisdom, has discontinued the AirPort line. As a result, being able to run a network backup seems like something that is going to cease being supported in the not-too-distant future. So I decided that directly attached Time Machine backups were going to be the future for me. But I obviously don’t want to convert my external disk to FileVault because then I won’t have encrypted sparse bundles that I can upload to the cloud.

The solution is actually pretty easy but not well documented. First, create an encrypted sparse bundle image on your external disk. Next, mount it. Then issue this command:

sudo tmutil setdestination /Volumes/{mounted-disk-image}

Now you’ll see Time Machine try to back up to that mounted image. When you finish doing your backup through Time Machine, unmount the image, rsync and rclone the sparse bundle, unmount the disk, and go back to your daily life.
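For reference, here is what the whole sequence might look like from the terminal. The size and the names are placeholders, and note that Time Machine at this point wants a journaled HFS+ volume:

# create an encrypted sparse bundle big enough to hold the backups;
# hdiutil prompts for a passphrase
hdiutil create -size 1t -type SPARSEBUNDLE -fs HFS+J -encryption AES-256 -volname TimeMachine /Volumes/storage/timemachine.sparsebundle

# mount it at /Volumes/TimeMachine
hdiutil attach /Volumes/storage/timemachine.sparsebundle

# point Time Machine at the mounted volume
sudo tmutil setdestination /Volumes/TimeMachine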

More Black and White Photography

In the second week of the black and white photography class that I am taking we took a field trip to the Seattle Museum of Flight. I also took some pictures around the University of Washington campus. And I stood in the pouring rain in the pitch black to take some pictures of rain falling past a street light. It’s been a long week.

Rain falls around a streetlight over Aurora Avenue in Seattle.
The covering over the large collection of airplanes at the Seattle Museum of Flight.
The covering over the large collection of airplanes at the Seattle Museum of Flight.
The blades on the jet engine on a Boeing 787 Dreamliner on display at the Seattle Museum of Flight.
These fancy pants jet turbine blades are on the demonstration engine for the Boeing 787 Dreamliner on display at the Seattle Museum of Flight.
The book stacks in the Suzzallo Library on the University of Washington campus.
A series of lights hang over the reading room in the Suzzallo Library on the University of Washington campus.

Black and White Photography

I began taking a black and white photography class at the North Seattle College continuing education program last week. One week in and it’s just been a bunch of seemingly good photographers showing me their photographs and saying “try looking at things this way!” But we had an assignment the first week and that assignment was to take some photographs in black and white and then share them. These are my first week photographs.

Starting off my black and white adventure nice and easy, these are simple lines on a heating vent. I used a macro lens to get the strange focus plane.
This is the view from the inside of John Grade’s Wawona sculpture that hangs in the Museum of History and Industry in Seattle.
These lights are above the lobby and the mezzanine of the UW Tower in the U District in Seattle.
I was pretty fascinated by the patterns these lights formed.
The lights stretched all the way from one end of the lobby to the other and past the elevators and across two floors.

So Long, Alaskan Way Viaduct

Earlier this evening the state of Washington permanently closed the Alaskan Way Viaduct, a scar on the Seattle waterfront since the early 1950s. Before it closed I went down there to take some pictures. I felt that black and white really captured the lack of color that the waterfront has with this behemoth towering over it.

The northbound lanes on the left and the southbound lanes on the right converge to form a two tier highway just south of the Battery Street Tunnel.
The viaduct stretched for 2.2 miles along the Seattle waterfront.
The viaduct definitely made the waterfront feel neglected.
The Alaskan Way Viaduct looms over Alaskan Way.
This tunnel connected 1st Ave with the Seattle Ferry Terminal. It’s a common place for people to hide from the rain.
Underneath the viaduct was lots of extra parking for tourists and visitors as long as you didn’t mind that your car would be crushed in an earthquake.

Using Let’s Encrypt With HAProxy

Part of rebuilding this website was rebuilding the server and reevaluating all of the technologies that I had put together. Previously I had purchased certificates from Comodo, paying $50 for two-year certificates and hoping that I had correctly guessed what names I wanted in the SAN for the next two years. It was expensive and prohibitive toward innovation. So this time around I decided to use Let’s Encrypt. Since Let’s Encrypt has been around for a few years and the EFF is both a founder and a major backer, I feel pretty comfortable using it.

There are a few things that are unusual about Let’s Encrypt if you’re used to using the last generation of certificate authorities. The first is that you’re required to use their API to generate certificates, though the folks behind it provide well-written and well-supported tools for that API. The second is that the certificates only last ninety days, so you’re going to want to automate the process of renewing them.

The tool that you’re supposed to use for generating certificates is called certbot and it’s maintained by the EFF. It automatically creates private keys and certificate requests for you and sends them to the Let’s Encrypt API to get back your certificate. It will also automatically renew your certificates when it is time to renew them. The API validates that you have control of the domain name or names for which you are requesting the certificate and then sends back the new or updated certificate. It’s as easy as that.

The most common way for certbot to do domain verification is by making an HTTP request to the domain in question and seeing if a special file exists. A lot of guides assume that you’re using Apache or Nginx and that it is serving files from a file system and that certbot can just plop some files on the file system and away you go. Another, less common way to use certbot is to let it run its own web server that serves up the files in question. That less common way is how we will use certbot with HAProxy.

Let’s look at some snippets from our HAProxy configuration file:

frontend http-frontend
    bind *:80
    mode http
    log global

    # intercept requests for certbot
    acl letsencrypt-acl path_beg /.well-known/acme-challenge/

    # otherwise redirect everything to https
    redirect scheme https code 301 if !letsencrypt-acl !{ ssl_fc }

    # backend rules are always processed after redirects
    use_backend http-certbot if letsencrypt-acl

    # fall through backend (not defined in this snippet)
    default_backend http-bogus

# this will take challenge requests for lets encrypt and send them
# to certbot which will answer the challenge
backend http-certbot
    mode http
    log global

    # this server only runs when we are renewing a certificate
    server localhost localhost:54321

This snippet listens on port 80, the default port for certbot, and looks for requests to the well-known endpoint for Automated Certificate Management Environment (ACME) challenges. If the request is not for an ACME challenge and it is not encrypted then it will be redirected to https. But if it is an ACME challenge request then it will go to the “http-certbot” backend where certbot will be waiting to serve requests on port 54321.
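You can sanity-check that routing before ever running certbot. A request to the challenge path should land on the http-certbot backend (and come back as a 503 while nothing is listening on port 54321), while any other unencrypted request should get the 301 redirect:

# hypothetical probes; substitute your own host name
curl -i http://paullockaby.com/.well-known/acme-challenge/test
curl -i http://paullockaby.com/anything-else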

With this configuration all I need to do is point a host name at my server with an A or AAAA or CNAME record and I can get a certificate for it. It doesn’t matter if Apache is actually serving the domain or not. Once the host name is pointed at my server, I only need to run this command to generate a new certificate:

certbot certonly --http-01-port 54321 --standalone --preferred-challenges http --post-hook /usr/local/bin/letsencrypt-reload-hook -d paullockaby.com -d www.paullockaby.com

This will generate a private key and a certificate request and fire up a small web server that will respond to challenge requests from the API and write a new certificate to /etc/letsencrypt. Perfect. What if I want to renew the certificate? That’s easy, too:

certbot renew --renew-hook /usr/local/bin/letsencrypt-reload-hook

Only certificates that are ready to be renewed will actually be renewed by this command. Rather than remember these long commands I actually put them both into shell scripts, like this:

#!/bin/sh

# this fills in the default arguments for creating a new
# certificate. all the caller needs to provide is the "-d"
# argument with a comma separated list of names to put on
# the certificate.
exec certbot certonly --http-01-port 54321 --standalone --preferred-challenges http --post-hook /usr/local/bin/letsencrypt-reload-hook "$@"
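The renewal command gets the same treatment. A sketch of the second script, assuming it lives at /usr/local/bin/letsencrypt-renew:

#!/bin/sh

# renew any certificate that is close to expiring and rebuild the
# combined pem files via the reload hook
exec certbot renew --renew-hook /usr/local/bin/letsencrypt-reload-hook "$@"

And then a crontab entry, with a schedule that is just an example, takes care of the automation; certbot only acts when a certificate is actually near expiration:

17 3,15 * * * /usr/local/bin/letsencrypt-renew >/dev/null 2>&1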

You’re probably wondering where this letsencrypt-reload-hook is that I keep referencing. It is the secret sauce to the whole mess. See, HAProxy only likes it when you give it combined private key and certificate files and certbot does not create those. Additionally, HAProxy (like most servers) requires that you signal it when a certificate has been replaced. So that’s what our reload hook does:

#!/bin/sh

set -e

PATH_TO_LIVE=/etc/letsencrypt/live
PATH_TO_TARGET=/usr/local/ssl/certs

# certbot sets RENEWED_LINEAGE to the full path of the renewed
# certificate's live directory; when this script is run by hand it is
# unset, so fall back to processing every domain under the live path
if [ -z "$RENEWED_LINEAGE" ]; then
    DOMAINS=`ls $PATH_TO_LIVE/`
else
    DOMAINS=`basename "$RENEWED_LINEAGE"`
fi

# for each domain create a concatenated pem file
for DOMAIN in $DOMAINS; do
    if [ -d "$PATH_TO_LIVE/$DOMAIN" ]; then
        echo "assembling certificate $DOMAIN for sites"
        cat "$PATH_TO_LIVE/$DOMAIN/privkey.pem" "$PATH_TO_LIVE/$DOMAIN/fullchain.pem" > "$PATH_TO_TARGET/sites/$DOMAIN.pem"
        chmod 400 "$PATH_TO_TARGET/sites/$DOMAIN.pem"
    fi
done

systemctl reload haproxy

Why do I write the certificates to /usr/local/ssl/certs/sites? That is the location where I have HAProxy configured to load all of its certificates. So I can drop dozens or hundreds of certificates in that directory and HAProxy will load all of them. To accomplish that I put this into my HAProxy configuration file:

frontend https-frontend
    bind *:443 ssl crt /usr/local/ssl/certs/host.pem crt /usr/local/ssl/certs/sites

With this configuration it will load the host certificate first. This is so that clients that don’t support SNI don’t get a random certificate but instead get the certificate for the host itself. After that, every certificate in /usr/local/ssl/certs/sites is loaded and the correct certificate will be used depending on which host name the client says it is connecting to.
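A quick way to confirm that SNI is doing its job is to connect with and without an SNI name and compare the certificates that come back. For example, against my own host:

# with SNI: should return the certificate for www.paullockaby.com
openssl s_client -connect paullockaby.com:443 -servername www.paullockaby.com </dev/null 2>/dev/null | openssl x509 -noout -subject

# without SNI: should fall back to the host certificate from host.pem
openssl s_client -connect paullockaby.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject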

That’s literally all there is to it. One script, two commands, and a slightly modified HAProxy configuration file. I’m very happy that I’ve started using Let’s Encrypt to do all of my certificate goodness. I’m not a bank or major corporation so I don’t need advanced verification. I just need things encrypted by a valid certificate authority that will make browsers happy and I don’t want to think too hard about it. This does all of that for me.

A new year, a new blog.

For those of you who still check here, you definitely noticed that my blog has been inaccessible since Augustish, when I decided to take it down. At that time I concluded that the microblog full of quips and pithy comments that I had kept for more than a decade had grown stale and tired. As I get older I can’t keep up the cynicism necessary to regularly populate a microblog. Besides, in the time since I started, Twitter has filled that role much better than I can.

So here’s a new website. I gave up writing my own code to maintain a website. Again, as I get older I don’t have the time or energy to keep up a code base for something as mundane as a blog when this problem has been solved many times over. I’ll let people who care more than I do handle the basics of creating websites for me so I can work on things that are actually interesting to me.

Obviously, as of this writing, the old blog content is not here and the theme is a bit spartan. But I’m hoping over some period of time measured in less than years to merge the more interesting things from my old blog into this one and to give the site maybe a bit of color. I’m also hoping to add a page called “Projects” that details some of the code that I’ve written and put on GitHub, and a page called “Photography” where I can group together some of the photo projects that I’ve done in the past or intend to work on in the future. Yes, I realize the lack of originality and novelty in my pursuits. But they make me happy and here we are.

There is an RSS feed for blog entries here and an RSS feed for comments. Unfortunately the RSS feeds don’t tell you when the “Projects” or “Photography” pages update so I’ll be sure to write something mentioning when I do updates over there. Comments are enabled for now, too. We’ll see how long that lasts before I get fed up moderating and/or deleting spam.

And here we are. Welcome to 2019.