Creating SSL Certificates

A million people have written about this and now it’s my turn. I need to create a certificate authority and then use that certificate authority to sign certificates that my programs can use to identify themselves when communicating with each other. Here are the commands that I ran, with absolutely no explanation of why or how they work.

I decided not to screw around with RSA and instead jumped straight to ECC. Everything I’m doing is controlled by me, so I have no need for the strong backward compatibility that RSA provides.

Also note that you’ll need to take my examples and fix your paths as appropriate. I keep my certificates in /usr/local/ssl/certs. The certificate authority is in there and then all of the generated certificates are kept in /usr/local/ssl/certs/local. So you’ll need to double check the paths in my following commands.

First I created my certificate authority private key:

openssl ecparam -out /usr/local/ssl/certs/local-ca.key -name prime256v1 -genkey

That key will be protected with my life, metaphorically. I filled in some values with my details like my location and my name. Whatever, no big deal. Just make it somewhat accurate because it will be pasted into every certificate that you sign from this CA. Then I must create my certificate authority public certificate:

openssl req -x509 -new -nodes -key /usr/local/ssl/certs/local-ca.key -days 36500 -sha512 -out /usr/local/ssl/certs/local-ca.cert

I set it to exist for 36,500 days, aka one hundred years. Choose a number that works for you.

Now I can start to sign certificates from this. First I must create an OpenSSL configuration file. Mine looks like this:

[ ca ]
default_ca                      = CA_default

[ CA_default ]
database                        = certificates
new_certs_dir                   = /usr/local/ssl/certs/local
certificate                     = /usr/local/ssl/certs/local-ca.cert
private_key                     = /usr/local/ssl/certs/local-ca.key
preserve                        = no
email_in_dn                     = no
nameopt                         = default_ca
certopt                         = default_ca
policy                          = policy_match
default_days                    = 2190

[ policy_match ]
countryName                     = match
stateOrProvinceName             = match
organizationName                = match
organizationalUnitName          = match
commonName                      = supplied
emailAddress                    = match

[ req ]
string_mask                     = nombstr
distinguished_name              = req_distinguished_name
req_extensions                  = v3_req
x509_extensions                 = v3_req

[ req_distinguished_name ]
0.organizationName              = Organization Name
organizationalUnitName          = Organizational Unit Name
emailAddress                    = Email Address
emailAddress_max                = 40
localityName                    = City
stateOrProvinceName             = State
countryName                     = Country Code
countryName_min                 = 2
countryName_max                 = 2
commonName                      = Common Name
commonName_max                  = 64

0.organizationName_default      = Your Name
organizationalUnitName_default  = Certificate Authority
localityName_default            = Your City
stateOrProvinceName_default     = Your State
countryName_default             = US
emailAddress_default            =

[ v3_ca ]
basicConstraints                = CA:TRUE
subjectKeyIdentifier            = hash
authorityKeyIdentifier          = keyid:always,issuer:always

[ v3_req ]
subjectKeyIdentifier            = hash
basicConstraints                = critical,CA:FALSE
keyUsage                        = critical,digitalSignature,keyEncipherment

There’s nothing particularly sensitive in this file. It’s just there. Your certificates will copy their locality and state and country from your CA. With that file and the certificate authority that we created we can start signing certificates with three commands. First, we want to create our certificate’s private key:

openssl ecparam -name prime256v1 -genkey -out /usr/local/ssl/certs/local/

With the private key and the above configuration file we will create a certificate signing request.

openssl req -new -key /usr/local/ssl/certs/local/ -out /usr/local/ssl/certs/local/ -batch -subj "/" -sha512 -config /usr/local/ssl/certs/local-openssl.conf

Those two commands — creating the private key and the signing request — only need to be done once, ever. If you keep those two files then all you need to do when your certificate expires is use the existing signing request to create a new certificate and it will be created with all of the same options. Now to create that new certificate:

openssl x509 -req -extfile <(printf "") -in /usr/local/ssl/certs/local/ -out /usr/local/ssl/certs/local/ -CA /usr/local/ssl/certs/local-ca.cert -CAkey /usr/local/ssl/certs/local-ca.key -CAcreateserial  -sha512

That’s it. That’s the entire process. And this even follows RFC 2818 and sets a subjectAltName parameter so that clients don’t complain.

Be aware that SSL is a tricky thing. Some of my options might not be 100% correct and some of them might not age well so it’s really best to double check what you run before you run it and not just trust me. But if you’re lazy then these commands will probably work.

If you want to let someone else do the hard work for you and have it work in your browser, though, I highly recommend that you use Let’s Encrypt to automatically sign certificates. I wrote about how to get that going on my blog in the past.

Flask Connection Pool for PostgreSQL

There are a lot of technology words in that title so I’m going to explain exactly what this is. This is a connection pool for Python applications that use the Flask web framework and connect to PostgreSQL databases. Why am I sharing this? In my experience there is a dearth of connection pool libraries for Flask outside of SQLAlchemy.

I wanted something that would reuse connections rather than connecting and disconnecting with every request. Yes, the Psycopg pool library does this, but it doesn’t let you make changes to the connection when it is initially created and on each reuse. For example, in our setup we want to enable autocommit, which is not the default configuration for Python database connectors. We also wanted to set a session configuration value that would indicate to our stored procedures the name of the user, if known, every time a new request takes over the connection. So the Psycopg pool library didn’t cut it.
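To make the idea concrete, here is a minimal sketch of the kind of pool I mean. This is not the actual gist; the class name, the hooks, and their signatures are my own invention. Connections come from a factory, get reused through a queue, and hooks run once when a connection is created (e.g. to enable autocommit) and again on every checkout (e.g. to set the session’s user).

```python
import queue


class SimpleConnectionPool:
    """A tiny connection pool sketch: connections are created by a factory,
    reused via a queue, and hooks run at creation and on every checkout."""

    def __init__(self, factory, size, on_create=None, on_checkout=None):
        self.factory = factory          # e.g. lambda: psycopg2.connect(...)
        self.on_create = on_create      # runs once per brand-new connection
        self.on_checkout = on_checkout  # runs every time a request takes over
        self.pool = queue.Queue(maxsize=size)

    def getconn(self, **session):
        # reuse an idle connection if one is waiting, else make a new one
        try:
            conn = self.pool.get_nowait()
        except queue.Empty:
            conn = self.factory()
            if self.on_create:
                self.on_create(conn)
        if self.on_checkout:
            self.on_checkout(conn, session)
        return conn

    def putconn(self, conn):
        # return the connection to the pool, or close it if the pool is full
        try:
            self.pool.put_nowait(conn)
        except queue.Full:
            conn.close()
```

In a Flask app you would roughly call getconn() in a before_request handler, passing the current user, and putconn() in a teardown_request handler.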

I also want to call out the great Python library Tenacity, which I’ve begun using in lots of places in my code for things that need to be retried with different retry strategies. It has greatly minimized the amount of code I’ve had to write and the number of times I’ve had to write time.sleep(1).
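For flavor, here is the sort of hand-rolled retry loop that Tenacity replaces. This is a sketch of mine, not Tenacity’s API; the decorator name and parameters are my own.

```python
import time


def retry(attempts=3, delay=1.0, backoff=2.0, exceptions=(Exception,)):
    """Retry a function with exponential backoff, re-raising the last
    exception once the final attempt has failed."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == attempts:
                        raise
                    time.sleep(wait)
                    wait *= backoff
        return wrapper
    return decorator
```

Tenacity gives you all of this and more (jitter, stop conditions, async support) without having to maintain the loop yourself.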

All that said, I’ve posted my database connector on GitHub as a “gist”. I’m using it at the University of Washington to pool connections on all of my team’s web applications.

The New Social Network

Some of you may remember my old blog that I used to keep here. It was online from sometime in 2007 until late 2018. It had a few outages when I didn’t feel like blogging and just took the whole thing down, like in spring 2014 when it was down for six months or so with just a “403 Forbidden” error.

I’ve been meaning to port some of the content of that blog over to this one. In particular, I had made a number of posts about programming that I want to get up here to share code that I’ve written. But I can be slow about that kind of thing.

My old blog, as you might remember, was very Twitter-like. I mostly put up short snarks as well as pictures of my dog and mountains and whatnot. I also put up GPX tracks from hiking and road trips. This blog, on the other hand, has been focused on longer-form posts. I have missed having somewhere I can post quick pictures and links and unfunny quips.

Back in 2016, in an effort to “democratize” social networking some forward thinking people used the OStatus protocol (later updated to the ActivityPub protocol) to build a federated, open source social networking system. Being somewhat out of it with respect to social networking myself I only learned of this a few months ago. It lets me set up my own node where I can control who registers and I’m not going to log in some day and discover that all of my posts have become monetized or something. It’s exactly what I’ve been looking for to replace my old blog. The dominant implementation is software called Mastodon.

A few years ago, for reasons unknown to even me, I registered a domain name. Analogous to Twitter’s “tweet”, in Mastodon, when one posts a new status it is called a “toot”. This is because the mascot for Mastodon is a mastodon and they toot things from their trumpet, or something. My mind being what it is, this obviously lent itself to using the domain to host my own node in the Fediverse. I hope the humor is obvious.

That is all to say that if you’re looking for less frequent, longer form posts from me then this is the place to check out. If you’re looking for random photos and content similar to my old blog, head over to Uncontrollable Gas. You’re welcome to register your own account — either on my node or any other node, really, it’s all a federation — and join the Fediverse.

Using Debian Cloud Images in VMWare Fusion

In my last post I described how to use debian-cloud-tools to build an AMI for AWS EC2 instances. What about running that same instance locally? Personally, I do most of my development in VMWare Fusion and I want that instance to look like my AWS instance. I use VMWare Fusion because I want to be able to develop without an Internet connection and because I don’t want to pay to run two instances in the cloud. Let’s see how I did this.

The steps start out the same as building for AWS. You want to start with an installation of buster and clone the debian-cloud-images repository and install some dependencies. Then you will run the build like this:

bin/debian-cloud-images build buster nocloud amd64 --build-id manual --version 1 --override-name nocloud-buster-image --build-type official

That command will run for a while and create a file called nocloud-buster-image.raw. We need to convert that to a VMDK image usable by VMWare Fusion. That can be done like this:

qemu-img convert -O vmdk nocloud-buster-image.raw nocloud-buster-image.vmdk

Once you have a VMDK image, copy it to the host where you run VMWare Fusion. I’m running VMWare Fusion 11.1.0 and these steps apply to that version.

  1. Choose “Create a custom virtual machine.”
  2. For the operating system select “Other Linux 4.x or later kernel 64-bit”. This is because VMWare doesn’t know about buster yet.
  3. Select “Legacy BIOS”.
  4. Select “Use an existing virtual disk.” We’re going to select the VMDK file that we just created and we want to “make a separate copy of the virtual disk.”
  5. Finally, click “Customize Settings”. Give the system a name that you like.

Now you’ll see a settings dialog. Make these changes.

  1. Remove the camera.
  2. Remove the printer.
  3. Remove the sound card.
  4. Adjust the memory size and the total CPU count.
  5. Connect a CD/DVD drive and attach a buster boot disk. I’m using the debian-10.0.0-amd64-netinst.iso boot disk.
  6. Adjust the hard disk size.
  7. Set the CD/DVD drive as the startup disk and restart.

Now you’ll be at a Debian installer menu. Choose “Advanced options” and then “Rescue mode”. This will begin booting the rescue disk where we will make a couple changes.

  1. Choose the correct language and keyboard configuration.
  2. Choose any hostname and domain name. It doesn’t matter.
  3. Choose any timezone. It doesn’t matter.
  4. Boot to /dev/sda1.
  5. Mount a separate /boot/efi partition.
  6. Execute a shell in /dev/sda1.

Once you’re on the shell we’re going to edit /etc/default/grub and make these changes:

  1. Comment out the lines labeled GRUB_TERMINAL and GRUB_SERIAL_COMMAND.
  2. Change the line labeled GRUB_CMDLINE_LINUX to be empty, like this: GRUB_CMDLINE_LINUX="".
  3. Change this line, too: GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=noop net.ifnames=0 transparent_hugepage=never". I think that the elevator option is redundant and you probably don’t need the transparent_hugepage option, but since I use Redis I put it in there.

After adjusting /etc/default/grub we need to install it by running this command:

grub-mkconfig -o /boot/grub/grub.cfg

And finally, right now we have no users and no passwords. We need to give the root user a password by running the passwd command. Just choose a password. You can change it or disable it later or add new users later.

Now exit out of the shell and reboot. When it reboots it will be on the Debian installer again so now disconnect the drive and send Ctrl-Alt-Delete to the host. The host will now boot in VMWare Fusion.

There are two final steps before you’re on your own. You need to give the host some SSH keys and you need to expand the disk. These are quite easy:

dpkg-reconfigure openssh-server
systemctl restart sshd
resize2fs /dev/sda1

You can choose to examine the SSH configuration but I used the existing one without issue.

Now you have a VMWare Fusion cloud host that looks just like your AWS host.

Building Debian AWS EC2 AMIs

A lot of acronyms in that title there. There have been some problems getting Debian’s latest release, “buster”, available on AWS. I got tired of waiting so I took it into my own hands to build an AMI for myself. It should be straightforward but the documentation is a bit scattered. It’s likely that what I describe here won’t be valid after some period of time. However, since I just did it and it worked I’m going to share the steps for the next helpless person.

First, build a host running buster from an official release image. This might be a physical host or it might be a virtual machine on your laptop. I’m not 100% sure that it is necessary that the host be running buster, but that’s what I did. I made a virtual machine and performed only the very basic installation steps necessary to get it connected to the internet and, you know, booting.

Once you have your machine that is running buster, clone this repository:

git clone

Take a look at their README, but we’re not going to follow it. The README tells you how to build a development instance, and you’ll be surprised when it comes up with development stuff enabled, like random terminal users that automatically log in on boot. Do what I’m about to describe instead. Install some dependencies:

apt-get install --no-install-recommends ca-certificates debsums dosfstools fai-server fai-setup-storage make python3 python3-libcloud python3-marshmallow qemu-utils udev sudo rsync

Now we’re going to build the image. This will take a while.

bin/debian-cloud-images build buster ec2 amd64 --build-id manual --version 1 --override-name ec2-buster-image --build-type official

After this is done running you’ll have a new file called ec2-buster-image.tar. Transfer this to a host that is currently running in AWS. I currently have a host running stretch on which I ran these commands. I chose to use rsync to move these files, since they’re large, which is why it’s part of the apt-get line above. The rest of these steps will be run on your AWS based host.

Before doing anything, ensure that the aws command works. You might need to apt-get install awscli and run aws configure first. You can get access keys on the AWS IAM Management Console.

The tar file that you copied over needs to be untarred. It will create a file called disk.raw. You can rename it if you want but be aware that it will be oddly named on its way out of the tar file.
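If you’d rather script the untarring than remember tar flags, a small Python sketch (the function here is my own, not part of debian-cloud-images) can list the member names first so the odd name doesn’t surprise you:

```python
import tarfile


def list_and_extract(tar_path, dest="."):
    """Peek at the member names before extracting, since the disk image
    may come out under a build-specific name rather than plain disk.raw."""
    with tarfile.open(tar_path) as tar:
        names = tar.getnames()
        tar.extractall(dest)
    return names
```

Calling list_and_extract("ec2-buster-image.tar") returns the member names and drops the files into the current directory, so you can see exactly what to rename.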

Over on the AWS console you will want to create a new volume that is 8G in size and you will want to attach it to your host. You do NOT want to mount it, just attach it. It is imperative that you create the volume in the correct availability zone. For example, if your host is in us-west-2c then the volume should also be in us-west-2c.

Run the lsblk command to see which device name the volume was given. On my host it was /dev/nvme1n1. With that in hand we can transfer our disk image to the volume:

sudo dd if=disk.raw of=/dev/nvme1n1 bs=512k

When the dd command finishes then detach the volume.

For the last step we’re going to turn that volume into a snapshot and an AMI. To do that I copied a program written by Noah Meyerhans from his ec2-image-builder repository, made a few changes such that the defaults are for buster, and saved it as a gist. Once you’ve copied that gist, run it like this:

./ -F <vol-id>

Once that last step is run you will have an AMI that you can use to create buster hosts. You can delete the temporary volume that you created if you want. You can also delete any or all of the other intermediate products, too. But you must keep (and pay for the keeping of) the snapshot if you want to reuse the AMI.

Go forth and use buster on AWS.

Update: One thing that I did discover missing and/or wrong in the cloud configuration was the list of apt sources. I had to change the apt source lists like this:

rm /etc/apt/sources.list.d/backports.list
vi /etc/apt/sources.list
> deb buster main
> deb buster/updates main
> deb buster-updates main

Python Bytes and Characters

Last week I lost two days of my life fighting with Python, encodings, and Supervisor. It all starts with the fact that Python 2 doesn’t handle encodings very well. I’ll leave the discussion of how Python 2 handles encodings at that, because Python 2 is dead at the end of 2019, but suffice it to say that in Python 2 bytes and characters were mostly handled the same, and that wasn’t ideal because oftentimes characters are made up of multiple bytes.

At work we use Supervisor to control hundreds of programs across about ninety servers. It lets us assign processes to a host and then ensure that they are running. One feature that we take heavy advantage of is the ability to run event listeners inside Supervisor to capture things like logging events from our programs and then run those log lines through a monitoring system and eventually to an event management system.

Until last week we had been running Supervisor version 3 on Python 2. According to the documentation for Supervisor event listeners, your event listener communicates with Supervisor by listening on stdin and talking back on stdout. Both stdin and stdout are, by default in Python 2 and Python 3, character streams. In Python 2 this matters less but in Python 3 this matters more. Supervisor will send new events to your event listener by sending one line that your listener will read using readline. That one line has a standard format that includes a len field, and the len field should, according to the documentation, indicate the number of bytes that you should then read from stdin to get the body of the event. Because Python 2 didn’t really distinguish between bytes and characters, this totally worked with Supervisor 3 on Python 2. My event listener looked roughly like this:

import select
import sys

def read_event():
    stdin = sys.stdin.buffer
    for handle in select.select([stdin], [], [])[0]:
        line = handle.readline()
        if (not line):
            raise EOFError("received eof from supervisord")

        # decode the line to utf8
        line = line.decode("utf-8", "backslashreplace")

        # read the parts of the header line
        header = dict([x.split(":") for x in line.split()])
        data_length = int(header.get("len", 0))
        if (data_length == 0):
            return (header, None, None)

        # read more to get the payload
        data = handle.read(data_length)
        if (not data):
            raise EOFError("received eof from supervisord")

        # decode the data to utf8
        data = data.decode("utf-8", "backslashreplace")

        if ('\n' in data):
            # this message has additional data so extract it out
            event, data = data.split('\n', 1)
            event = dict([x.split(":") for x in event.split()])
            return (header, event, data)
        else:
            event = dict([x.split(":") for x in data.split()])
            return (header, event, None)

Basically we use the byte stream version of stdin (stdin.buffer), wait for data to come in on stdin, readline it, get the payload length, and then read that number of bytes from the buffer, repeating for the next event. With Supervisor 3 running on Python 2 and our event listener running on Python 3 this worked great and was in line with how the documentation for Supervisor said everything should work.

As I said, last week we upgraded Supervisor to version 4 and started running it on Python 3. Things suddenly stopped working, but very, very randomly. After a lot of digging we discovered that our event listener was reading in less data than it was supposed to. Supervisor would say that ten bytes of data were in the buffer, the listener would read ten bytes, and then the next time the listener ran readline it would get garbage data.

It turned out that any time a program logged something containing multibyte characters (i.e. UTF-8 data), our event listener would break. After digging through the code for Supervisor we discovered that this was because Supervisor was sending the listener data that was already encoded in UTF-8, and the length it was giving us was in characters and not bytes. UTF-8 is a variable-length character encoding: each character can be one or more bytes. For the vast majority of normal characters (e.g. A through Z, 0 through 9) each character is one byte, so we weren’t experiencing a problem. Some of our programs, however, were logging data beyond the normal A through Z, and those characters ended up being more than one byte each, which is why the event listener was reading less data than there actually was.
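You can see the mismatch in two lines of Python:

```python
# a multibyte character counts once as a character but several times as bytes
line = "h\u00e9llo \u2603"        # "héllo ☃": seven characters
encoded = line.encode("utf-8")    # é encodes to 2 bytes, ☃ to 3

print(len(line))      # 7 characters
print(len(encoded))   # 10 bytes
```

If Supervisor reports a len of 10 characters and the listener reads 10 bytes of that string, it stops three bytes short.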

The solution to this was to turn our byte stream into a character stream so that read would work in characters instead of bytes. Thus we ended up with this solution which does not work and you should keep reading to see why:

import io
import select
import sys

def read_event():
    stdin = io.TextIOWrapper(sys.stdin.buffer, encoding="utf-8")
    for handle in select.select([stdin], [], [])[0]:
        line = handle.readline()
        if (not line):
            raise EOFError("received eof from supervisord")

        # read the parts of the header line
        header = dict([x.split(":") for x in line.split()])
        data_length = int(header.get("len", 0))
        if (data_length == 0):
            return (header, None, None)

        # read more to get the payload
        data = handle.read(data_length)
        if (not data):
            raise EOFError("received eof from supervisord")

        if ('\n' in data):
            # this message has additional data so extract it out
            event, data = data.split('\n', 1)
            event = dict([x.split(":") for x in event.split()])
            return (header, event, data)
        else:
            event = dict([x.split(":") for x in data.split()])
            return (header, event, None)

It is pretty much identical except that we wrap stdin in a Python 3 built-in that decodes everything coming off of the stream as UTF-8. Then we don’t need to do the conversion later, and when the event listener calls read it reads characters and not bytes. Problem solved?

The day after I implemented this we had a different problem. Event listeners on various servers started blocking. Instead of declaring that they were reading garbage data they just stopped reading data entirely.

It turns out most of our programs print data the normal Unix way: write some data, add a line feed (aka LF aka \n) to the end, repeat. Some of our programs, though, were echoing to their logs data that they got from remote sources that might include line endings other than a simple line feed and instead might print the Windows standard “carriage return line feed” or “CR LF” or \r\n.

By default, io.TextIOWrapper implements what Python calls universal newlines. This “feature” annoyingly converts anything that even looks like a newline into \n, so that \r\n becomes \n. Now when a program prints a log line that is ten characters long and has \r\n in the middle of it, the TextIOWrapper converts that \r\n into \n. Subsequently the stream reader in the event listener receives nine characters and blocks forever waiting for a tenth character that will never appear.
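A tiny demonstration of the behavior, using an in-memory stream in place of stdin:

```python
import io

raw = b"one\r\ntwo\n"

# default TextIOWrapper: universal newlines quietly turns \r\n into \n,
# so you read back fewer characters than the byte count suggested
collapsed = io.TextIOWrapper(io.BytesIO(raw), encoding="utf-8").read()

# newline="\n" makes the wrapper hand back the line endings untouched
preserved = io.TextIOWrapper(io.BytesIO(raw), encoding="utf-8", newline="\n").read()

print(repr(collapsed))   # 'one\ntwo\n'   -- one character shorter
print(repr(preserved))   # 'one\r\ntwo\n'
```

That one missing character is exactly what left the event listener blocked on a read that could never complete.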

Thanks Python for trying to be helpful but ultimately not doing what I expect. The solution is to add the argument newline='\n' to the io.TextIOWrapper line so that the wrapper passes you the raw data and doesn’t try to mess with newlines.

Now I have a bug report open on the Supervisor project to address this, either by fixing Supervisor to send bytes like the documentation says or by fixing the documentation to say “characters” as is implemented. I would prefer that they send me bytes and let me do the encoding conversion, since maybe my programs are printing something non-UTF-8 like CP1256 or Mac Roman or whatever. Right now if your program spits out something that isn’t valid UTF-8 they will replace your log line with the word “Undecipherable” followed by some object representation of the bytes, and maybe I know what it is and I’d like to convert it to something I find useful. Let’s wait and see what happens!
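For what it’s worth, doing the conversion yourself is one line. The “backslashreplace” error handler (the one my listener uses) keeps the invalid bytes visible instead of raising or replacing the whole line:

```python
# \xff can never appear in valid UTF-8; backslashreplace keeps it
# readable instead of raising UnicodeDecodeError
raw = b"temp\xff reading"
decoded = raw.decode("utf-8", "backslashreplace")
print(decoded)   # temp\xff reading
```

If you later figure out the line was CP1256 or Mac Roman, you still have the original byte values in hand to re-decode properly.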

Software Projects

Throughout my time at the University of Washington I have had the privilege of being able to build a lot of systems that underpin large portions of the University’s computing infrastructure. I’m starting to pull some of those systems out of the University’s source control, make some modifications to them such that they are generally useful, and then document and publish them here.

To that end I have put together a projects page with some of the things I’ve done worth highlighting. It is far from complete.

However, to this point it has information on the clone system that I built, plus some PostgreSQL programs and monitoring tips. I’m hoping to get push, the software deployment system, up soon, since it is mostly done already, as well as dart, the Supervisor command and control system that I recently rebuilt at work. Then I’ll get around to creating a publicly usable event management system based on the one I created at UW, plus the network device monitoring system that I built at UW as well.

So I just wanted to share that finally, after talking about it for a few years, I’m moving forward with some of this sharing stuff. I also still intend to port over a bunch of my old blog posts, too. But that turned out to be much, much harder than it looked and my video games look so nice after sitting at a desk all day. Soon enough.

Python SSL Socket Server

I recently had to build a small server application in Python. It did not need to be anything complicated. It needed to run on about one hundred servers, receive a tiny command to do something, and then be done. A web server would have been overkill and one was definitely not available on all of the hundred servers. Writing a socket server in Python is pretty trivial and the documentation includes example code for you, too. The caveat that I had to deal with is that I needed to validate that the client was who they said they were, and I wanted to do it with an SSL certificate so that SSL would handle all of the authentication for me. (The authorization would still have to be handled by me.)

The documentation in Python for writing an SSL server is all over the place. With each version of Python 3 the library has changed in some subtle way that deprecates what was previously the preferred way so if you’re going to do this verify that what I’m showing you here is up to date. I’m pretty certain that this code is valid in Python 3.7, though we are running it in a 3.6 environment.

First, the server.

import socketserver
import ssl

class RequestServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    # faster re-binding
    allow_reuse_address = True

    # make this bigger than five
    request_queue_size = 10

    # kick connections when we exit
    daemon_threads = True

    def __init__(self, server_address, RequestHandlerClass, bind_and_activate=True):
        super().__init__(server_address, RequestHandlerClass, False)

        # create an ssl context that identifies us using our certificate,
        # requires the client to present a certificate, and validates it
        # against uwca. (the file names here are placeholders -- use your
        # own certificate, key, and ca paths.)
        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
        ctx.load_cert_chain("server.cert", "server.key")
        ctx.load_verify_locations("uwca.cert")
        ctx.verify_mode = ssl.CERT_REQUIRED

        # replace the socket with an ssl version of itself
        self.socket = ctx.wrap_socket(self.socket, server_side=True)

        # bind the socket and start the server
        if (bind_and_activate):
            self.server_bind()
            self.server_activate()

class RequestHandler(socketserver.StreamRequestHandler):
    def handle(self):
        print("connection from {}:{}".format(self.client_address[0], self.client_address[1]))

        try:
            common_name = self._get_certificate_common_name(self.request.getpeercert())
            if (common_name is None or common_name != ""):
                print("rejecting {}".format(common_name))
                self.wfile.write('{"accepted": false}\n'.encode())
                return

            # now we're going to listen to what they have to say
            data = self.rfile.readline().strip()
            print("data: {}".format(data))
            self.wfile.write('{"accepted": true}\n'.encode())
        except BrokenPipeError:
            print("broken pipe from {}:{}".format(self.client_address[0], self.client_address[1]))

    def _get_certificate_common_name(self, cert):
        if (cert is None):
            return None

        for sub in cert.get("subject", ()):
            for key, value in sub:
                if (key == "commonName"):
                    return value

        return None

# this is the server. it handles the sockets. it passes requests to the
# listener (the second argument). the server will run in its own thread
# so that we can kill it when we need to
server = RequestServer(("", 3278), RequestHandler)

It listens on port 3278 and it listens for SSL connections. It will tell SSL clients that its hostname is “”. You should use whatever certificate it is that you have laying around for your server to identify itself.

You’ll notice the line that says “load_verify_locations” and the preceding line that says CERT_REQUIRED. This means that all incoming connections must present a client certificate and that certificate must have been signed by the CA indicated by “load_verify_locations”. This server will accept any client certificate signed by the UW Certificate Authority. That is the authentication component.

But I only want to allow connections from a certificate that I deem authorized. This is the authorization component. That’s what the private method called “_get_certificate_common_name” does. When given the certificate details from the client connection it extracts the client’s common name and returns it. We make sure that common name matches something authorized. In this case our server identifies itself as “” and only allows clients that are using that same certificate. (Is this a good idea? Probably not. But I don’t have the infrastructure to maintain lots of certificates for just this purpose. It is effective for me.)

What does a client look like to all of this? Super simple.

import socket
import ssl

# the file names here are placeholders: your client certificate, its
# key, and the ca to validate the server against
ctx = ssl.create_default_context()
ctx.load_cert_chain("client.cert", "client.key")
ctx.load_verify_locations("uwca.cert")
ctx.verify_mode = ssl.CERT_REQUIRED
ctx.check_hostname = True

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    with ctx.wrap_socket(sock, server_hostname="") as ssock:
        ssock.connect(("localhost", 3278))
        ssock.sendall(bytes("this is a test\n", "utf-8"))

This verifies that our server is presenting a valid UW Certificate Authority signed certificate. It also presents our server with the certificate with the common name “”. Finally, we tell our client that our server will identify itself as “”. If we didn’t set a “server_hostname” argument then the client would only validate the connection to the server if the server identified itself as “localhost” as that is the hostname we are connecting to. But our server is identifying itself as “” because that’s the certificate that we made it use.

One interesting note to this: I don’t know about the server code (because I haven’t tried) but the client code does NOT work with eventlet, unless I’m doing something wrong. We’ll find out when they respond to my issue.

Vacation to Hawaii

For the first time since last summer I took an actual vacation where I traveled somewhere. Until now, the only state to which I had never been was Hawaii. Now I can say that I’ve been to all fifty states and that I’ve spent not-insignificant amounts of time in each of them. (Next goals: visit all ten provinces and three territories of Canada and all of the public islands in Hawaii. Maybe after that I’ll try out other countries.) Of course, I took pictures.

On the first day I just stayed in a hostel around Kailua Kona. The sky was clear, so I took this photo of the crescent waxing moon. (Also called an ‘ole kū kolu moon.) The moon had a very pretty halo around it.

A halo around the ‘ole kū kolu moon on March 13th, 2019.

The second day I went to South Point or Ka Lae, the southernmost point in Hawaii and the United States. (Wikipedia disagrees with me, but Palmyra is in a territory, not a state.) Interestingly, South Point is not that interesting. It’s on a cliff and very rocky and popular with people who have fishing rods.

Later on the second day I went into Hawai’i Volcanoes National Park. I stayed in a cabin just off of the national park but needed to enter the national park to check into the cabin. There I wandered through the steam vents along the Kīlauea caldera. It’s remarkable to see steam just rising from the dirt. Then you can look out and see toxic smoke rising from a gigantic hole in the ground that periodically shoots molten rock in the air. Most of the area around the caldera was closed because of recent eruptions. However, there were no eruptions at the time of my visit and no lava was visible anywhere in the park. Very disappointing but probably safer.

The Pacific Ocean crashes against the cliffs at the southern end of Volcanoes National Park. These cliffs were made in the past seventy years.

On the third day I visited Hilo. Hilo felt like home in Seattle with its continually misty rain and lush greenery. On my tour of Hilo I stopped at the Kaumana Caves Park, a county park that features entrance to a totally uncontrolled lava tube. Seriously, you just walk into this pitch-black cave at your own risk. My headlamp did not suffice and I actually relied on the flashlight functionality of my phone to see deep into the cave. You turn several times and duck through several small passages in the two and a half mile tunnel, completely losing sight of the entrance. That is, it is nothing like the caves in Virginia.

Spelunkers descend into the Kaumana Caves lava tube with their flashlights shining. The plant life hanging from the top of the cave is actually a volcanic glass fiber known as Pele’s Hair.

I stayed at the Inn at Kulaniapia Falls, an “off-grid” inn situated overlooking a waterfall. It was a really pleasurable experience and I would do it again. The staff are lovely, even if they did wake me up at 4am slamming doors in the office below my room. At least they compensated me for that!

Rainbows appeared in the morning light around the waterfall, which fell into a large pool that then drained into several smaller waterfalls.

I missed the dinner reservation at Kulaniapia Falls and instead went into Hilo and had a fantastic dinner at Pineapples restaurant.

The fourth day was the most remarkable. I took a guided tour to the top of Mauna Kea, the tallest mountain in the entire Hawaiian island chain at 13,803 feet above sea level. Another 17,000 feet or more of the mountain is below sea level, making it the tallest mountain in the world measured from its base. Fortunately the snow was gone from the top of the mountain this March.

The shadow of Mauna Kea against the clouds over which it looms.

The tour took us up the Hawai’i island Saddle Road where I snapped this picture of a tree that survived a lava flow while everything around it burned. Incidentally, this island is covered in wild goats and feral pigs and cats brought in various waves by Polynesians, the British, and the Spanish. Additionally, this tree and the surrounding lava flow are actually pretty representative of the island of Hawai’i. Most of the island is pretty much covered in barren lava rock. The western side of the island receives hardly any rain, and rain is what begins to break down the lava rock into more fertile land facilitating growth. There’s also the fact that this island is still an active volcano, so new lava is being laid down all the time.

A lone tree surrounded by lava.

Once at the top of Mauna Kea, you can see the most beautiful sunset that you’ve ever seen with more colors than you ever thought imaginable.

The sun setting into the clouds from the top of Mauna Kea.

The top of Mauna Kea, you may have heard, is covered with telescopes. In fact there are thirteen telescopes and a couple more antennas. Because the University of Hawaii actually destroyed the original summit, native Hawaiians designated a nearby peak as the peak where they would perform their rituals.

The actual summit of Mauna Kea and the trail up to it, open only to native Hawaiians.

After watching the sunset, the tour guides took us down to the Mauna Kea Very Long Baseline Array antenna, part of an array of ten radio antennas spanning the globe. That is where we got a tutorial on the stars and how the original inhabitants of the Hawaiian islands navigated by the stars. These tour guides were the best and shared a ton of information about Hawaii that I don’t know where else I would find.

Even with a half moon, you could still see a ton of stars by the VLBA antenna at the top of Mauna Kea.

And I had a terrifically disappointing journey back to Seattle via Delta, where they managed to do everything wrong from start to finish. I also don’t regret renting a Jeep, but I would have preferred that Avis give me a Jeep made in the last two or three years rather than the crap that they did give me. It didn’t even have support for Bluetooth. Still, I’d go again.

Glass Blowing

I recently took a glass blowing class at Pratt Fine Arts Center. The class was four hours every week for six weeks and we learned how to gather glass from the furnace, blow a bubble, gather again, and shape it into something that looks marginally close to a cup or whatever. Here are some of the things that I put together including two cups, a bowl, a vase-like thing, some ornaments, a pumpkin, and a chili pepper.

A “bowl”.
A “vase”.
Two cups!
The ornament on the left weighs about a half of a pound. The ornament on the right is all blown out of proportion.
This pumpkin is my second favorite. It involved a cast.
This is my favorite. It’s a chili pepper.