This is going to be the first post about my setup of a Raspberry Pi Kubernetes cluster. I saw a post by Hart Hoover and it finally motivated me to purchase his “grocery list” and get started. I’ve been using Minikube for local Kubernetes testing, but it doesn’t give you multi-host testing abilities. I’ve also been wanting to deepen my Raspberry Pi knowledge. Lots of learning and winning.

The items I bought were:

Here is the tweet when it all arrived:

I spent this morning finally putting it together.

Here is me getting started on the “dogbone case” to hold all of the Raspberry Pis:

The layout

The bottom and one layer above:

The bottom and one layer above

And the rest:

3 Layers

4 Layers

5 Layers

6 Layers and Finished

Different angles completed:

Finished Angle 2

Finished Angle 3

And connect the power:


Next post will be on getting the 6 SanDisk cards ready, putting them in, and watching the Raspberry Pis boot up and get a green light. Stay tuned.



I started seeing this error recently and had brain farted on why.

Received disconnect from [hostname]: Too many authentication failures

After a bit of googling it came back to me. This happens because I’ve loaded too many keys into my ssh-agent locally (ssh-add). Why did you do that? Well, because it is easier than specifying the IdentityFile on the CLI when trying to connect. But there is a threshold. It is set on the SSH server by the MaxAuthTries setting in /etc/ssh/sshd_config. The default is 6.
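For reference, this is where the limit lives on the server side. Raising it is a workaround, not a fix, but it helps to know what you are up against (the value shown is just the default):

```
# /etc/ssh/sshd_config (on the server)
MaxAuthTries 6    # default; every key the client offers counts as one attempt
```

Each key your agent offers before the right one burns an attempt, which is why a fat ssh-agent trips this even with valid credentials.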

Solution 1

Clean up the keys in your ssh-agent.

ssh-add -l lists all the keys you have in your ssh-agent
ssh-add -d key deletes the key from your ssh-agent

Solution 2

You can solve this on the command line like this:

ssh -o IdentitiesOnly=yes -i ~/.ssh/example_rsa user@hostname

What is IdentitiesOnly? Explained in Solution 3 below.

Solution 3 (best)

Specify, explicitly, which key goes to which host(s) in your .ssh/config file.

You need to configure which key (“IdentityFile”) goes with which domain (or host). You also want to handle the case when the specified key doesn’t work, which would usually be because the public key isn’t in ~/.ssh/authorized_keys on the server. The default is for SSH to then try any other keys it has access to, which takes us back to too many attempts. Setting “IdentitiesOnly” to “yes” tells SSH to only try the specified key and, if that fails, fall through to password authentication (presuming the server allows it).

Your ~/.ssh/config would look like:

Host myhost
  IdentitiesOnly yes
  IdentityFile ~/.ssh/myhost
Host mysecurehost
  IdentitiesOnly yes
  IdentityFile ~/.ssh/mysecurehost_rsa
Host *.myotherhost.domain
  IdentitiesOnly yes
  IdentityFile ~/.ssh/myotherhost_rsa

Host is the host the key can connect to
IdentitiesOnly means only try this specific key to connect, no others
IdentityFile is the path to the key

You can try multiple keys if needed:

Host *
  IdentitiesOnly yes
  IdentityFile ~/.ssh/myhost_rsa
  IdentityFile ~/.ssh/myhost_dsa

Hope this helps someone else.


I’m blogging this because I keep forgetting how to do it. Yeah, yeah, I can google it, but I run this blog so I know it is always available… anywho.

Go To:


Click “Clear host cache” button


Hope this helps someone else.


When I’m trying to “dockerize” an application I usually have to work through some wonkiness.

To diagnose a container that has errored out, I (obviously) look at the logs via docker logs -f [container_name]. That is sometimes helpful. At minimum, it will tell me where to focus on the new container I’m going to create.


Pre-requisites to being able to build a diagnosis container:

  • You need to use CMD, not ENTRYPOINT in the Dockerfile
    • with CMD you’ll be able to start a shell; with ENTRYPOINT your diagnosis container will just keep trying to run that entrypoint command
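To illustrate the difference, here is a minimal Dockerfile sketch (the base image and the ./myapp command are hypothetical stand-ins for your app):

```dockerfile
FROM alpine:3.19

# With CMD, `docker run <image> sh` replaces the command, so you can get a shell.
CMD ["./myapp"]

# With ENTRYPOINT, the container keeps trying to run ./myapp instead;
# you would have to override it with `docker run --entrypoint sh <image>`.
# ENTRYPOINT ["./myapp"]
```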

To create a diagnosis container, do the following:

  • Check your failed container ID by docker ps -a
  • Create a Docker image from the container with docker commit
    • example: docker commit -m "diagnosis" [failed container id]
  • Check the newly created Docker image ID by docker images
  • docker run -it [new container image id] sh
    • this drops you into a shell in the container’s state immediately after the error occurred
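The steps above can be strung together into one small sketch. This assumes Docker is installed and a container named myapp has exited; the name is hypothetical, so adjust it to your failed container:

```shell
# Hypothetical container name "myapp"; find the most recent matching container.
failed_id=$(docker ps -a --filter "name=myapp" --format '{{.ID}}' | head -n 1)

# Snapshot the failed container's filesystem as a new image.
image_id=$(docker commit -m "diagnosis" "$failed_id")

# Start a shell in the state right after the error occurred.
docker run -it --rm "$image_id" sh
```

The --rm flag just cleans up the throwaway diagnosis container when you exit the shell.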

Hope this helps someone else.


This bit me in the rear end again today. Had to reinstall mysql-server-5.7 for other reasons.

You just installed mysql-server locally for your development environment on a recent version of Ubuntu (I have 17.10 artful installed). You did it with a blank password for the root user. You type mysql -u root and you see Access denied for user 'root'@'localhost'.


Issue: Because you chose not to have a password for the root user, the auth_plugin for MySQL defaulted to auth_socket. That means if you type sudo mysql -u root you will get in. If that doesn’t work, then this is NOT the fix for you.

Solution: Change the auth_plugin to mysql_native_password so that you can use the root user in the database.

$ sudo mysql -u root

mysql> USE mysql;
mysql> UPDATE user SET plugin='mysql_native_password' WHERE User='root';
mysql> exit;

$ sudo systemctl restart mysql
$ sudo systemctl status mysql

NB ALWAYS set a password for mysql-server in staging/production.
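If you do want a password while switching plugins, MySQL 5.7 can do both in one statement, which avoids editing the user table directly. A sketch, run inside the same sudo mysql -u root session ('change-me' is a placeholder password):

```sql
-- Switches root to mysql_native_password AND sets a password in one go.
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'change-me';
FLUSH PRIVILEGES;
```

After this, plain mysql -u root -p works without sudo.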