The New Social Network

Some of you may remember my old blog that I used to keep here. It was online from sometime in 2007 until late 2018. It had a few outages when I didn’t feel like blogging and just took the whole thing down, like in spring 2014 when it was down for six months or so with just a “403 Forbidden” error.

I’ve been meaning to backport some of the content of that blog onto this one. In particular, I had made a number of posts about programming that I want to get up here to share code that I’ve written. But I can be slow about that kind of thing.

My old blog, as you might remember, was very Twitter-like. I mostly put up short snarks as well as pictures of my dog and mountains and whatnot. I also put up GPX tracks from hiking and road trips. This blog, on the other hand, has been focused on longer form posts. I have missed having somewhere to post quick pictures and links and unfunny quips.

Back in 2016, in an effort to “democratize” social networking, some forward-thinking people used the OStatus protocol (later updated to the ActivityPub protocol) to build a federated, open source social networking system. Being somewhat out of it with respect to social networking myself, I only learned of this a few months ago. It lets me set up my own node where I control who registers, and I’m not going to log in some day and discover that all of my posts have become monetized or something. It’s exactly what I’ve been looking for to replace my old blog. The dominant implementation is software called Mastodon.

A few years ago, for reasons unknown to even me, I registered the domain name uncontrollablegas.com. Analogous to Twitter’s “tweet”, in Mastodon, when one posts a new status it is called a “toot”. This is because the mascot for Mastodon is a mastodon and people toot things from their trumpet, or something. My mind being what it is, this obviously lent itself to using the uncontrollablegas.com domain to host my own node in the Fediverse. I hope the humor is obvious.

That is all to say that if you’re looking for less frequent, longer form posts from me then this is the place to check out. If you’re looking for random photos and content similar to my old blog, head over to Uncontrollable Gas. You’re welcome to register your own account — either on my node or any other node, really, it’s all a federation — and join the Fediverse.

Using Debian Cloud Images in VMWare Fusion

In my last post I described how to use debian-cloud-images to build an AMI for AWS EC2 instances. What about running that same image locally? Personally, I do most of my development in VMWare Fusion and I want that local instance to look like my AWS instance. I use VMWare Fusion because I want to be able to develop without an Internet connection and because I don’t want to pay to run two instances in the cloud. Let’s see how I did this.

The steps start out the same as building for AWS. You want to start with an installation of buster and clone the debian-cloud-images repository and install some dependencies. Then you will run the build like this:

bin/debian-cloud-images build buster nocloud amd64 --build-id manual --version 1 --override-name nocloud-buster-image --build-type official

That command will run for a while and create a file called nocloud-buster-image.raw. We need to convert that to a VMDK image usable by VMWare Fusion. That can be done like this:

qemu-img convert -O vmdk nocloud-buster-image.raw nocloud-buster-image.vmdk

Once you have a VMDK image, copy it to the host where you run VMWare Fusion. I’m running VMWare Fusion 11.1.0 and these steps apply to that version.

  1. Choose “Create a custom virtual machine.”
  2. For the operating system select “Other Linux 4.x or later kernel 64-bit”. This is because VMWare doesn’t know about buster yet.
  3. Select “Legacy BIOS”.
  4. Select “Use an existing virtual disk.” We’re going to select the VMDK file that we just created and we want to “make a separate copy of the virtual disk.”
  5. Finally, click “Customize Settings”. Give the system a name that you like.

Now you’ll see a settings dialog. Make these changes.

  1. Remove the camera.
  2. Remove the printer.
  3. Remove the sound card.
  4. Adjust the memory size and the total CPU count.
  5. Connect a CD/DVD drive and attach a buster boot disk. I’m using the debian-10.0.0-amd64-netinst.iso boot disk.
  6. Adjust the hard disk size.
  7. Set the CD/DVD drive as the startup disk and restart.

Now you’ll be at a Debian installer menu. Choose “Advanced options” and then “Rescue mode”. This will begin booting the rescue disk where we will make a couple changes.

  1. Choose the correct language and keyboard configuration.
  2. Choose any hostname and domain name. It doesn’t matter.
  3. Choose any timezone. It doesn’t matter.
  4. Boot to /dev/sda1.
  5. Mount a separate /boot/efi partition.
  6. Execute a shell in /dev/sda1.

Once you’re on the shell we’re going to edit /etc/default/grub and make these changes:

  1. Comment out the lines labeled GRUB_TERMINAL and GRUB_SERIAL_COMMAND.
  2. Change the line labeled GRUB_CMDLINE_LINUX to be empty, like this: GRUB_CMDLINE_LINUX="".
  3. Change this line, too: GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=noop net.ifnames=0 transparent_hugepages=never". I think that the elevator option is redundant and you probably don’t need the transparent_hugepages option but since I use Redis I put it in there.

After adjusting /etc/default/grub we need to apply it by regenerating the GRUB configuration with this command:

grub-mkconfig -o /boot/grub/grub.cfg

And finally, right now we have no users and no passwords. We need to give the root user a password by running the passwd command. Just choose a password; you can change it, disable it, or add new users later.

Now exit out of the shell and reboot. When it reboots it will be on the Debian installer again, so disconnect the CD/DVD drive and send Ctrl-Alt-Delete to the host. The virtual machine will now boot from its hard disk.

There are two final steps before you’re on your own. You need to give the host some SSH keys and you need to expand the disk. These are quite easy:

dpkg-reconfigure openssh-server
systemctl restart sshd
resize2fs /dev/sda1

You can choose to examine the SSH configuration but I used the existing one without issue.

Now you have a VMWare Fusion cloud host that looks just like your AWS host.

Building Debian AWS EC2 AMIs

A lot of acronyms in that title there. There have been some problems getting Debian’s latest release, “buster”, available on AWS. I got tired of waiting so I took matters into my own hands and built an AMI for myself. It should be straightforward but the documentation is a bit scattered. It’s likely that what I describe here won’t be valid after some period of time. However, since I just did it and it worked, I’m going to share the steps for the next helpless person.

First, build a host running buster from an official release image. This might be a physical host or it might be a virtual machine on your laptop. I’m not 100% sure that it is necessary that the host be running buster, but that’s what I did. I made a virtual machine and performed only the very basic installation steps necessary to get it connected to the internet and, you know, booting.

Once you have your machine that is running buster, clone this repository:

git clone https://salsa.debian.org/cloud-team/debian-cloud-images

Take a look at their README, but we’re not going to follow it. The README file tells you how to build a development instance, and you’ll be surprised when it has some development stuff enabled, like random terminal users that automatically log in on boot. Do what I’m about to describe instead. Install some dependencies:

apt-get install --no-install-recommends ca-certificates debsums dosfstools fai-server fai-setup-storage make python3 python3-libcloud python3-marshmallow qemu-utils udev sudo rsync

Now we’re going to build the image. This will take a while.

bin/debian-cloud-images build buster ec2 amd64 --build-id manual --version 1 --override-name ec2-buster-image --build-type official

After this is done running you’ll have a new file called ec2-buster-image.tar. Transfer this to a host that is currently running in AWS. I currently have a host running stretch on which I ran these commands. I chose to use rsync to move these files, since they’re large, which is why it’s part of the apt-get line above. The rest of these steps will be run on your AWS based host.
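
For what it’s worth, the copy itself is just a plain rsync invocation; something like this, where the destination host is a placeholder for your own AWS host:

rsync -avP ec2-buster-image.tar you@your-aws-host: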

Before doing anything, ensure that the aws command works. You might need to apt-get install awscli and run aws configure first. You can get access keys on the AWS IAM Management Console.
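
If you do need to set that up, it is just these two commands:

apt-get install awscli
aws configure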

The tar file that you copied over needs to be untarred. It will create a file called disk.raw. You can rename it if you want but be aware that it will be oddly named on its way out of the tar file.
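
The extraction itself is just a plain tar invocation:

tar xvf ec2-buster-image.tar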

Over on the AWS console you will want to create a new volume that is 8G in size and you will want to attach it to your host. You do NOT want to mount it, just attach it. It is imperative that you create the volume in the correct availability zone. For example, if your host is in us-west-2c then the volume should also be in us-west-2c.
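
If you would rather not click around the console, the same thing can be done with the aws CLI. Something like this should work; the zone, volume type, and IDs below are placeholders for your own values:

aws ec2 create-volume --size 8 --availability-zone us-west-2c --volume-type gp2
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf

Note that the device name you request when attaching may not be the name the volume gets on the instance, which is why we check with lsblk next.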

Run the lsblk command to see what device the volume got given. On my host it was given the name /dev/nvme1n1. With that in hand we can transfer our disk image to the volume:

sudo dd if=disk.raw of=/dev/nvme1n1 bs=512k

When the dd command finishes then detach the volume.
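
If you created and attached the volume with the CLI, detaching works the same way (again, a placeholder volume ID):

aws ec2 detach-volume --volume-id vol-0123456789abcdef0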

For the last step we’re going to turn that volume into a snapshot and an AMI. To do that I copied a program written by Noah Meyerhans from his ec2-image-builder repository, made a few changes so that the defaults are for buster, and published the result as a gist called volume-to-ami.sh. Once you’ve copied that gist, run it like this:

./volume-to-ami.sh -F <vol-id>

Once that last step is run you will have an AMI that you can use to create buster hosts. You can delete the temporary volume that you created if you want. You can also delete any or all of the other intermediate products, too. But you must keep (and pay for the keeping of) the snapshot if you want to reuse the AMI.

Go forth and use buster on AWS.

Update: One thing that I did discover missing and/or wrong in the cloud configuration that I got was the list of apt sources. I had to change the apt source lists like this:

rm /etc/apt/sources.list.d/backports.list
vi /etc/apt/sources.list
> deb http://cdn-aws.deb.debian.org/debian buster main
> deb http://security.debian.org/debian-security buster/updates main
> deb http://cdn-aws.deb.debian.org/debian buster-updates main

Python Bytes and Characters

Last week I lost two days of my life fighting with Python, encoding, and Supervisor. It all starts with the fact that Python 2 doesn’t handle encodings very well. I’ll leave the discussion of how Python 2 handles encodings at that, because Python 2 is dead at the end of 2019, but suffice it to say that in Python 2 bytes and characters were mostly handled the same, and it wasn’t ideal because oftentimes characters are made up of multiple bytes.

At work we use Supervisor to control hundreds of programs across about ninety servers. It lets us assign processes to a host and then ensure that they are running. One feature that we take heavy advantage of is the ability to run event listeners inside Supervisor to capture things like logging events from our programs and then run those log lines through a monitoring system and eventually to an event management system.

Until last week we had been running Supervisor version 3 on Python 2. According to the documentation for Supervisor event listeners your event listener communicates with Supervisor by listening on stdin and talking back on stdout. Both stdin and stdout are, by default in Python 2 and Python 3, character streams. In Python2 this matters less but in Python3 this matters more. Supervisor will send new events to your event listener by sending you one line that your listener will read using readline. That one line will have a standard formatting that includes a len field and the len field should, according to the documentation, indicate the number of bytes that you should then read from stdin to get the body of the event. Because Python2 didn’t really distinguish between bytes and characters, this totally worked with Supervisor3 on Python2. My event listener looked roughly like this:

import select
import sys

# use the byte stream version of stdin
stdin = sys.stdin.buffer

def read_event(handle):
    # wait for data to arrive on the handle
    while handle in select.select([handle], [], [])[0]:
        line = handle.readline()
        if (not line):
            raise EOFError("received eof from supervisord")

        # decode the line to utf8
        line = line.decode("utf-8", "backslashreplace")

        # read the parts of the header line
        header = dict([x.split(":") for x in line.split()])
        data_length = int(header.get("len", 0))
        if (data_length == 0):
            return (header, None, None)

        # read more to get the payload
        data = handle.read(data_length)
        if (not data):
            raise EOFError("received eof from supervisord")

        # decode the data to utf8
        data = data.decode("utf-8", "backslashreplace")

        if ('\n' in data):
            # this message has additional data so extract it out
            event, data = data.split('\n', 1)
            event = dict([x.split(":") for x in event.split()])
            return (header, event, data)
        else:
            event = dict([x.split(":") for x in data.split()])
            return (header, event, None)

Basically we use the byte stream version of stdin (stdin.buffer), wait for data to come in on stdin, readline it, get the payload length, and then read that number of bytes from the buffer, repeating for the next event. With Supervisor 3 running on Python 2 and our event listener running on Python 3 this worked great and was in line with how the documentation for Supervisor said that everything should work.

As I said, last week we upgraded Supervisor to version 4 and started running it on Python 3. Things suddenly stopped working, but very, very randomly. After a lot of digging we discovered that our event listener was reading in less data than it was supposed to. The event header would say that ten bytes of data were in the buffer, the listener would read ten bytes, and then the next time the listener ran readline it would get garbage data.

It turned out that any time we had a program that logged something containing variable-length characters (i.e. multi-byte UTF-8 data), our event listener would break. After digging through the code for Supervisor we discovered that this was because Supervisor was sending the listener data that was already encoded in UTF-8, and the length it was giving us was in characters, not bytes. UTF-8 is a variable-length character encoding: each character can be one or more bytes. For the vast majority of normal characters (e.g. A through Z, 0 through 9) each character is one byte, so we weren’t experiencing a problem. Some of our programs, however, were logging data that went beyond the normal A through Z, and those characters ended up being more than one byte each, which is why the event listener was reading less data than there actually was.
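
To make the mismatch concrete, here it is in a couple of lines of Python 3:

# one character, but two bytes once encoded as UTF-8
len("é")                    # 1
len("é".encode("utf-8"))    # 2

When Supervisor counts characters and the listener counts bytes, every one of those multi-byte characters leaves the listener reading short of the real end of the payload.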

The solution to this was to turn our byte stream into a character stream so that read would work in characters instead of bytes. Thus we ended up with this solution, which does not work, and you should keep reading to see why:

import io
import select
import sys

# wrap stdin so that reads return characters instead of bytes
stdin = io.TextIOWrapper(sys.stdin.buffer, encoding="utf-8")

def read_event(handle):
    # wait for data to arrive on the handle
    while handle in select.select([handle], [], [])[0]:
        line = handle.readline()
        if (not line):
            raise EOFError("received eof from supervisord")

        # read the parts of the header line
        header = dict([x.split(":") for x in line.split()])
        data_length = int(header.get("len", 0))
        if (data_length == 0):
            return (header, None, None)

        # read more to get the payload
        data = handle.read(data_length)
        if (not data):
            raise EOFError("received eof from supervisord")

        if ('\n' in data):
            # this message has additional data so extract it out
            event, data = data.split('\n', 1)
            event = dict([x.split(":") for x in event.split()])
            return (header, event, data)
        else:
            event = dict([x.split(":") for x in data.split()])
            return (header, event, None)

It is pretty much identical except that we wrap stdin in a Python 3 built-in that decodes everything coming off of the stream as UTF-8. Then we don’t need to do the conversion later, and when the event listener calls read it reads characters and not bytes. Problem solved?

The day after I implemented this we had a different problem. Event listeners on various servers started blocking. Instead of declaring that they were reading garbage data they just stopped reading data entirely.

It turns out most of our programs print data the normal Unix way: write some data, add a line feed (aka LF aka \n) to the end, repeat. Some of our programs, though, were echoing to their logs data that they got from remote sources that might include line endings other than a simple line feed and instead might print the Windows standard “carriage return line feed” or “CR LF” or \r\n.

By default, io.TextIOWrapper implements what Python calls universal newlines. This “feature” annoyingly converts anything that even looks like a newline into \n, so that \r\n becomes \n. Now when a program prints a log line that is ten characters long and has \r\n in the middle of it, the TextIOWrapper converts that \r\n into \n. Subsequently the stream reader in the event listener receives nine characters and blocks forever waiting for a tenth character that will never appear.

Thanks Python for trying to be helpful but ultimately not doing what I expect. The solution is to add the argument newline='\n' to the io.TextIOWrapper line so that the wrapper passes you the raw data and doesn’t try to mess with newlines.
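
So the wrapper line from the listing above ends up looking like this:

# decode the byte stream as UTF-8 but leave newlines alone
stdin = io.TextIOWrapper(sys.stdin.buffer, encoding="utf-8", newline='\n')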

Now I have a bug report open on the Supervisor project to address this, either by fixing Supervisor to send bytes like the documentation says or by fixing the documentation to say “characters” to match what is implemented. I would prefer that they send me bytes and let me do the encoding conversion, since maybe my programs are printing something non-UTF-8 like CP1256 or Mac Roman or whatever. Right now, if your program spits out something that isn’t valid UTF-8, Supervisor will replace your log line with the word “Undecipherable” followed by some object representation of the bytes; maybe I know what that data is and I’d like to convert it to something I find useful. Let’s wait and see what happens!

Software Projects

Throughout my time at the University of Washington I have had the privilege of being able to build a lot of systems that underpin large portions of the University’s computing infrastructure. I’m starting to pull some of those systems out of the University’s source control, make some modifications to them such that they are generally useful, and then document and publish them here.

To that end I have put together a projects page with some of the things I’ve done worth highlighting. It is far from complete.

However, to this point it has information on the clone system that I built out, plus some PostgreSQL programs and monitoring tips. I’m hoping to get the software deployment system, push, up there soon, since it is mostly done already, as well as dart, the Supervisor command and control system that I recently rebuilt at work. Then I’ll get around to creating a publicly usable event management system based on the one I created at UW, plus the network device monitoring system that I built at UW as well.

So I just wanted to share that finally, after talking about it for a few years, I’m moving forward with some of this sharing stuff. I also still intend to port over a bunch of my old blog posts, too. But that turned out to be much, much harder than it looked and my video games look so nice after sitting at a desk all day. Soon enough.

Python SSL Socket Server

I recently had to build a small server application in Python. It did not need to be anything complicated. It needed to run on about one hundred servers and receive a tiny command to do something and then be done. A web server would have been overkill and was definitely not available on all of the hundred servers. Writing a socket server in Python is pretty trivial and the documentation includes example code for you, too. The caveat that I had to deal with is that I needed to validate that the client was who they said they were, and I wanted to do it with an SSL certificate so that SSL would handle all of the authentication for me. (The authorization would still have to be handled by me.)

The documentation in Python for writing an SSL server is all over the place. With each version of Python 3 the library has changed in some subtle way that deprecates what was previously the preferred way so if you’re going to do this verify that what I’m showing you here is up to date. I’m pretty certain that this code is valid in Python 3.7, though we are running it in a 3.6 environment.

First, the server.

import socketserver
import ssl


class RequestServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    # faster re-binding
    allow_reuse_address = True

    # make this bigger than five
    request_queue_size = 10

    # kick connections when we exit
    daemon_threads = True

    def __init__(self, server_address, RequestHandlerClass, bind_and_activate=True):
        super().__init__(server_address, RequestHandlerClass, False)

        # create an ssl context using the dart.s.uw.edu cert
        # that requires the client to present a certificate and
        # validates it against uwca.
        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
        ctx.verify_mode = ssl.CERT_REQUIRED
        ctx.load_verify_locations("/usr/local/ssl/certs/ca-uwca.pem")
        ctx.load_cert_chain("/usr/local/ssl/certs/dart.s.uw.edu.pem")

        # replace the socket with an ssl version of itself
        self.socket = ctx.wrap_socket(self.socket, server_side=True)

        # bind the socket and start the server
        if (bind_and_activate):
            self.server_bind()
            self.server_activate()


class RequestHandler(socketserver.StreamRequestHandler):
    def handle(self):
        print("connection from {}:{}".format(self.client_address[0], self.client_address[1]))

        try:
            common_name = self._get_certificate_common_name(self.request.getpeercert())
            if (common_name is None or common_name != "dart.s.uw.edu"):
                print("rejecting {}".format(common_name))
                self.wfile.write('{"accepted": false}\n'.encode())
                return

            # now we're going to listen to what they have to say
            data = self.rfile.readline().strip()
            print("data: {}".format(data))
            self.wfile.write('{"accepted": true}\n'.encode())
        except BrokenPipeError:
            print("broken pipe from {}:{}".format(self.client_address[0], self.client_address[1]))

    def _get_certificate_common_name(self, cert):
        if (cert is None):
            return None

        for sub in cert.get("subject", ()):
            for key, value in sub:
                if (key == "commonName"):
                    return value


# this is the server. it handles the sockets. it passes requests to the
# listener (the second argument). the server will run in its own thread
# so that we can kill it when we need to
server = RequestServer(("0.0.0.0", 3278), RequestHandler)
server.serve_forever()

It listens on port 3278 for SSL connections. It will tell SSL clients that its hostname is “dart.s.uw.edu”. You should use whatever certificate you have lying around for your server to identify itself.

You’ll notice the line that says “load_verify_locations” and the preceding line that says CERT_REQUIRED. This means that all incoming connections must present a client certificate and that certificate must have been signed by the CA indicated by “load_verify_locations”. This server will accept any client certificate signed by the UW Certificate Authority. That is the authentication component.

But I only want to allow connections from a certificate that I deem authorized. This is the authorization component. That’s what the private method called “_get_certificate_common_name” does. Given the certificate details from the client connection, it extracts the client’s common name and returns it. We then make sure that the common name matches something authorized. In this case our server identifies itself as “dart.s.uw.edu” and only allows clients that are using that same certificate. (Is this a good idea? Probably not. But I don’t have the infrastructure to maintain lots of certificates for just this purpose. This is effective for me.)
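
For reference, the nested loops in that method are there because getpeercert() returns the subject as a tuple of tuples of (key, value) pairs. Trimmed down, and with made-up values other than the common name, it looks roughly like this:

cert = {
    "subject": (
        (("countryName", "US"),),
        (("organizationName", "University of Washington"),),
        (("commonName", "dart.s.uw.edu"),),
    ),
}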

What does a client look like to all of this? Super simple.

import socket
import ssl


ctx = ssl.create_default_context()
ctx.verify_mode = ssl.CERT_REQUIRED
ctx.check_hostname = True
ctx.load_verify_locations("/usr/local/ssl/certs/ca-uwca.pem")
ctx.load_cert_chain("/usr/local/ssl/certs/dart.s.uw.edu.pem")

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    with ctx.wrap_socket(sock, server_hostname="dart.s.uw.edu") as ssock:
        ssock.connect(("localhost", 3278))
        ssock.sendall(bytes("this is a test\n", "utf-8"))

This verifies that our server is presenting a valid UW Certificate Authority signed certificate. It also presents our server with the certificate with the common name “dart.s.uw.edu”. Finally, we tell our client that our server will identify itself as “dart.s.uw.edu”. If we didn’t set the “server_hostname” argument then the client would only validate the connection if the server identified itself as “localhost”, since that is the hostname we are connecting to. But our server is identifying itself as “dart.s.uw.edu” because that’s the certificate that we made it use.

One interesting note on this: I don’t know about the server code (because I haven’t tried) but the client code does NOT work with eventlet, unless I’m doing something wrong. We’ll find out when they respond to my issue.

Vacation to Hawaii

For the first time since last summer I took an actual vacation where I traveled somewhere. Until now, the only state to which I had never been was Hawaii. Now I can say that I’ve been to all fifty states and that I’ve spent not-insignificant amounts of time in each of them. (Next goals: visit all ten provinces and three territories of Canada and all of the public islands in Hawaii. Maybe after that I’ll try out other countries.) Of course, I took pictures.

On the first day I just stayed in a hostel around Kailua Kona. The sky was clear, so I took this photo of the crescent waxing moon. (Also called an ‘ole kū kolu moon.) The moon had a very pretty halo around it.

A halo around the ‘ole kū kolu moon on March 13th, 2019.

The second day I went to South Point or Ka Lae, the southernmost point in Hawaii and the United States. (Wikipedia disagrees with me, but Palmyra is in a territory, not a state.) Interestingly, South Point is not that interesting. It’s on a cliff and very rocky and popular with people who have fishing rods.

Later in the second day I went into Hawai’i Volcanoes National Park. I stayed in a cabin just off of the national park but needed to enter the national park to check into the cabin. There I wandered through the steam vents along the Kīlauea caldera. It’s just remarkable to see steam just rising from the dirt. Then you can look out and see toxic smoke rising from a gigantic hole in the ground that periodically shoots molten rock in the air. Most of the area around the caldera was closed because of recent eruptions. However, there were no eruptions at the time of my visit and no lava was visible anywhere in the park. Very disappointing but probably safer.

The Pacific Ocean crashes against the cliffs at the southern end of Volcanoes National Park. These cliffs were made in the past seventy years.

On the third day I visited Hilo. Hilo felt like home in Seattle with its continually misty rain and lush greenery. On my tour of Hilo I stopped at the Kaumana Caves Park, a county park that features entrance to a totally uncontrolled lava tube. Seriously, you just walk in to this pitch black cave at your own risk. My headlamp did not suffice and I actually relied on the flashlight functionality of my phone to see deep into the cave. You turn several times and duck through several small passages in the two and a half mile tunnel, completely losing sight of the entrance. That is, it is nothing like the caves in Virginia.

Spelunkers descend into the Kaumana Caves lava tube with their flashlights shining. The plant life hanging from the top of the cave is actually a volcanic glass fiber known as Pele’s Hair.

I stayed at the Inn at Kulaniapia Falls, an “off-grid” inn situated overlooking a waterfall. It was a really pleasurable experience and I would do it again. The staff are lovely, even if they did wake me up at 4am slamming doors in the office below my room. At least they compensated me for that!

Rainbows appeared in the morning light around the waterfall. It went into a large pool that then drained into several smaller waterfalls.

I missed the dinner reservation at Kulaniapia Falls and instead went into Hilo and had a fantastic dinner at Pineapples restaurant.

The fourth day was the most remarkable. I took a guided tour to the top of Mauna Kea, the tallest mountain in the entire Hawaiian island chain at 13,803 feet above sea level. Another 17,000 feet or more of the mountain is below sea level, making it the tallest mountain in the world when measured from its base. Fortunately the snow was gone from the top of the mountain this March.

The shadow of Mauna Kea against the clouds over which it looms.

The tour took us up the Hawai’i island Saddle Road where I snapped this picture of a tree that survived a lava flow while everything around it burned. Incidentally, this island is covered in wild goats and feral pigs and cats, brought to the island in various waves by Polynesians, the British, and the Spanish. Additionally, this tree and the surrounding lava flow are actually pretty representative of the island of Hawai’i. Most of the island is pretty much covered in barren lava rock. The western side of the island receives hardly any rain, and rain is what begins to break down the lava rock into more fertile land, facilitating growth. There’s also the fact that this island is still an active volcano, so new lava is being laid down all the time.

A lone tree surrounded by lava.

Once at the top of Mauna Kea, you can see the most beautiful sunset that you’ve ever seen with more colors than you ever thought imaginable.

The sun setting into the clouds from the top of Mauna Kea.

The top of Mauna Kea, you may have heard, is covered with telescopes. In fact there are thirteen telescopes and a couple more antennas. Because the University of Hawaii actually destroyed the original summit, native Hawaiians designated a nearby peak as the peak where they would perform their rituals.

The actual summit to Mauna Kea and the trail up to it, open only to native Hawaiians.

After watching the sunset, the tour guides took us down to the Mauna Kea Very Long Baseline Array antenna, part of an array of ten radio antennas spanning the globe. That is where we got a tutorial on the stars and how the original inhabitants of the Hawaiian islands navigated by the stars. These tour guides were the best and shared a ton of information about Hawaii that I don’t know where else I would find.

Even with a half moon, you could still see a ton of stars by the VLBA antenna at the top of Mauna Kea.

And I had a terrifically disappointing journey back to Seattle via Delta, where they managed to do everything wrong from start to finish. I also don’t regret renting a Jeep, but I would have preferred that Avis had given me a Jeep made in the last two or three years rather than the crap that they did give me. It didn’t even have support for Bluetooth. Still, I’d go again.

Glass Blowing

I recently took a glass blowing class at Pratt Fine Arts Center. The class was four hours every week for six weeks and we learned how to gather glass from the furnace, blow a bubble, gather again, and shape it into something that looks marginally close to a cup or whatever. Here are some of the things that I put together including two cups, a bowl, a vase-like thing, some ornaments, a pumpkin, and a chili pepper.

A “bowl”.
A “vase”.
Two cups!
The ornament on the left weighs about a half of a pound. The ornament on the right is all blown out of proportion.
This pumpkin is my second favorite. It involved a cast.
This is my favorite. It’s a chili pepper.

Views from the Viaduct

Near midnight on January 11th, 2019, the state of Washington closed the Alaskan Way Viaduct permanently after more than sixty years of ferrying cars along the Seattle waterfront. The dull roar that made it impossible to hold a conversation in Victor Steinbrueck Park disappeared, leaving a calm that the waterfront has not known since the early 1950s. On the night of February 1st, the state closed the Battery Street Tunnel, which the viaduct used to feed into. (It was still being fed by ramps from Western Avenue after the viaduct’s closure.) And then on February 2nd the city and state opened up the new 99 Tunnel, the Battery Street Tunnel, and a portion of the upper deck of the viaduct to pedestrian traffic before the new tunnel opened to cars on February 4th. I went for a tour. It turns out that taking pictures of a road while actually standing on the road is not quite as interesting as I had hoped, but here are a handful of photos.

Revelers walk through the poorly lit Battery Street Tunnel.
The lower deck of the viaduct, while lit, was closed to pedestrians. This is a view from the Seneca Street off ramp.
The Seattle Great Wheel from the viaduct with the Port of Seattle behind it. No, this is not really viaduct related.

Still More Black and White Photography

This week was a bit of a weird one, with some travel mixed in with family visiting. As such I did not get as many photos that I consider any good as I did the previous two weeks. This week was also the week that our class didn’t have an outing slash field trip. So here are some photos from my trip to Los Angeles plus some of Duke and Seattle Center.

A life guard station on the beach in Santa Monica, California.
Despite being part beagle, Duke’s fur is only black and white.
The metal paneling on the outside of the Museum of Popular Culture aka MoPOP formerly known as Experience Music Project aka EMP.
Art work outside the Museum of Popular Culture. The color version isn’t that bad, either.