How to video conference without people hating you

While video conferencing has been a real boon to productivity, there are still lots of things that can go wrong during a video call.

There are some things that are just plain out of your control, but there are some things you can control. So, after doing this for the past 15 years or so, here are some tips if you’re just getting into remote work and want to do a better job. Of course, I have been guilty of all of these. :D

Stuff to have

  • Get a Microphone - Other than my desk, chair, and good monitors, this is the number one upgrade you can make. Sound is one of those things that can immediately change the quality of your call. I use a Blue Yeti due to the simplicity of USB audio and its hardware mute button; this way I know for sure I am muted when there’s a blinking red light in my face. Learn to use your microphone: on my Yeti you speak across the microphone, and it has settings for where it picks up sound from. Adjust these so it sounds right. Get a pop filter.

  • A Video Camera - Notice I put this second; I can get over a crappy image if the audio is good. The Logitech C-900 series has long been my go-to standard for this. It also has dual noise-cancelling microphones, which are great as a backup (if you’re on a trip), but I will always default to the dedicated microphone.

  • A decent set of headphones - Personal preference. I like open-back ones for the home but pack a noise-cancelling set for when I am on the road.

What about an integrated headset and microphone? That totally depends on the type. I tend to prefer the full sound of a real microphone, but the boom mics on some of these headsets are quite good. If you have awesome headphones already you can add a ModMic to turn them into a headset. I find that even the most budget dedicated headsets sound better than earbud microphones.

Stuff to get rid of

  • Your shitty earbuds - Seriously. If you’re going to be a remote worker, invest in respecting your coworkers’ time. A full hour-long design session with you holding a junky earbud microphone up to your face is not awesome for anybody. Earbuds are fine if you want to use them for listening, but don’t use the mic.

  • “But this iPhone was $1000, my earbud mic is fine.” Nope. You sound like crap.

Garbage Habits we all hate

If you’re just dialing in to listen then most of these won’t apply to you, however …

  • Always join on muted audio. If the platform you use doesn’t do this by default, find the setting and enable it.

  • If you don’t have anything to say at that moment, MUTE. Even if you are just sitting there you’re adding ambient noise to the meeting, and when it gets over 10 people this really, really sucks. This is why I love having a physical mute button: you can always be sure at a glance without digging into settings. I’ve also used a USB switch pedal for mute, with limited success.

  • Jumping in from a coffee shop, your work’s cafeteria, or any other noisy place is not cool. And if you work in an open office, all you’re doing is broadcasting to everyone else in the room that your place of employment doesn’t take developer productivity seriously.

  • “Oh, I will use my external speakers and built-in microphone and adjust the levels and it will sound fine.” - No, it won’t. You sound like a hot mess; put on your headset and use the microphone.

  • If you use your laptop’s built-in microphone and you start typing while you are talking, EVERYBODY WILL HATE YOU.

  • If you’re going to dial in from the back of an Uber or from a bus, and you have to talk or present, just don’t come. Ask someone to run the meeting for you, or reschedule. You’re just wasting everyone’s time if you think we want to hear you sprinting down a terminal to catch your flight.

  • And if you’re that person sitting on the plane in the meeting and people have to hear whatever thing you’re working on, they will hate you for the entire flight.

Treat video conferencing like you do everything else at work

We invest in our computers and our developer tools; think seriously about putting your video conferencing footprint in that namespace. There’s a good chance no one will ever notice that you always sound good, but it’s one of those background quality things that just makes everyone more productive. Besides, think of the money you’ve spent on your laptop and everything else to make you better at work; better audio gear is a good investment.

In the real world sometimes you just have to travel, and you find yourself stuck on a laptop on hotel wireless in a corner trying to do your job, but I strive to make that situation the exception!

Kubernetes Ask Me Anything on Reddit

A bunch of Kubernetes developers are doing an Ask Me Anything today on Reddit. If you’re interested in asking any questions, hope to see you there!

Updating your CNCF Developer Affiliation

The Cloud Native Computing Foundation uses gitdm to figure out who is contributing and from where. This is used to generate reports and so forth.

There is a huge text file where they map email addresses to affiliations. It probably doesn’t hurt to check your entry; for example, here’s mine:

Jorge O. Castro*: jorge.castro!gmail.com
Lemon Ice
Lemon Location City until 2017-05-01
Lemon Travel Smart Vacation Club until 2015-06-01

Whoa, what? Here is what a corrected entry looks like; as you can see, it takes into account where you used to work:

Jorge O. Castro*: jorge!heptio.com, jorge!ubuntu.com, jorge.castro!gmail.com
Heptio
Canonical until 2017-03-31

As an aside, this also makes a really nice rolodex for looking up people. :D
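
If your entry is wrong, the fix is a pull request against the affiliations file in the cncf/gitdm repo. Something like this will find your entry (I believe the file is developers_affiliations.txt, but double-check the name in the repo):

git clone https://github.com/cncf/gitdm.git
cd gitdm
# Search for any of your email addresses (pattern here is just an example):
grep -n 'jorge.castro' developers_affiliations.txt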

Using kubeadm to upgrade Kubernetes

I’ve started writing for the Heptio Blog, check out my new article on Upgrading to Kubernetes 1.8 with Kubeadm.

Also if you’re looking for more interactive help with Kubernetes, make sure you check out our brand new Kubernetes Office Hours, where we livestream developers answering user questions about Kubernetes. Starting tomorrow (18 October) at 1pm and 8pm UTC, hope to see you there!


Thoughts on the first Kubernetes Steering Election

The first steering committee election for Kubernetes is now over. Congratulations to Aaron Crickenberger, Derek Carr, Michelle Noorali, Phillip Wittrock, Quinton Hoole and Timothy St. Clair, who will be joining the newly formed Kubernetes Steering Committee.

If you’re unfamiliar with what the SC does, you can check out their charter and backlog. I was fortunate to work alongside Paris Pittman on executing this election, hopefully the first of many “PB&J Productions”.

To give you some backstory: the Kubernetes community has been bootstrapping its governance over the past few years, and executing a proper election as stated in the charter was an important first step. Therefore it was critical for us to run an open election correctly.

Thankfully, we can stand on the shoulders of giants. OpenStack and Debian are just two examples of projects with well-formed processes that have stood the test of time. We then produced a voter’s guide to give people a place where they could find all the information they needed, and the candidates a spot to fill in their platform statements.

This morning I submitted a pull request with our election notes and steps so that we can start building our institutional knowledge on the process, and of course, to share with whomever is interested.

Also, a big shout out to Cornell University for providing CIVS as a public service.

Can "cloud-native" concepts apply to home servers?

I’ve recently embarked on a journey of trying new things in my homelab: experimenting with new methods of managing my services at home across a few combinations of tools and operating systems.

Now that I am more familiar with the cloud native landscape, I was wondering if I could wean myself off the most stateful snowflake in my life: the trusty home server.

I’ve been collecting hardware recently and decided to go out of my comfort zone and run some home services via different combinations. Here’s what I found out.

What do I need to run?

I have a few things I run in house that I need to host:

  • The Unifi controller, for managing my network gear
  • Pi-hole, for DNS-level ad blocking
  • Plex, for serving my media

The Unifi stuff depends on Java and MongoDB and accesses hardware on the network. Pi-hole expects to basically be my DNS server, and Plex is the textbook definition of a stateful app: depending on the size of your video collection it can grow a substantial database that lives entirely on disk, so moving it around means bringing gigs of stuff with it.

With this varied set of apps, we shall begin!

Old Reliable, traditional Ubuntu 16.04 with .debs

This is what has been working for me for years up to this point. As with anything involving third-party packaging, maintenance tends to get annoying.

For Unifi you need an external Mongo repo(!) and an OpenJDK PPA(!) to get it to work.

Pi-hole wants me to pipe a script to bash, and Plex just publishes one-off debs with no repository, making updates a hassle.
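
For the record, the Pi-hole install really is a pipe-to-bash one-liner; this is their documented command:

# Pi-hole's official installer: a script piped straight into bash
curl -sSL https://install.pi-hole.net | bash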

There are some benefits here: once I configure unattended-upgrades, things generally run fine. It’s a well understood system, and having a large repository of software is always good. Over time it tends to accumulate crap, though.

Ubuntu 16.04 with Docker containers

The main advantage of this setup is that I can crowdsource the maintenance of these apps to people who do a really good job, like the awesome folks at linuxserver.io. I can keep a lean and mostly stock host, toss away broken containers if need be, and keep everything nice and isolated.

Which PPA do I need for Unifi? Is my Plex up to date? Which Java symlink do I need to fix? I don’t need to care anymore!

docker-compose was really good for this and relatively simple to grok, especially by stealing other people’s configs from GitHub and modifying them to my needs.
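
To give you an idea, here’s a minimal sketch of the kind of compose file I mean, using linuxserver’s Plex image; the host paths are just examples from my layout, so adjust them to yours:

cat <<'EOF' > docker-compose.yml
version: "2"
services:
  plex:
    image: linuxserver/plex
    network_mode: host            # Plex discovery works best on the host network
    environment:
      - PUID=1000                 # run as my user inside the container
      - PGID=1000
    volumes:
      - /home/jorge/config/plex:/config   # all of Plex's state lives here
      - /home/jorge/media:/media          # the media library it serves
    restart: unless-stopped
EOF
docker-compose up -d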

Why not LXD containers?

I ran with this config for a while, but it had one major issue for me: LXD containers are system containers, that is, they run a full copy of the OS. So instead of maintaining one host, I was now maintaining one host OS and many guest OSes. Going into each one and installing/configuring the services felt like I was adding complexity. LXD is great, just not for my specific use case.

Ubuntu Core 16.04 with Docker containers

Finally, something totally different. I am pretty sure “home server” doesn’t rank high on the list of use cases here, but I figured I would give it a shot. I took the same docker-compose files as before, except this time I deployed on top of Ubuntu Core.

This gives me some nice features over the mutable-buntu. First off, atomic upgrades: every time it gets a new kernel it just reboots, and then on boot all the containers update and come back up.

This had a few teething issues. First, if there’s a kernel update it’s just going to reboot; you can’t really control that. Second, it really is small, so it’s missing tools. It can only install snaps, so no rsync, no git, no curl, no wget; I’m not going to run git out of a docker container. Also, I couldn’t figure out how to run docker as a non-root user, and I can’t seem to find documentation on how to do that anywhere.
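
To be fair, getting docker itself onto the box is pleasantly boring, since docker is available as a snap:

# On Ubuntu Core everything arrives as a snap, including docker itself
sudo snap install docker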

Container Linux with Docker containers

A slim OS designed to only run containers. This one definitely has a more “work related” slant to it. There’s no normal installer; the installer basically takes a cloud-init-like yaml file and then dd’s the disk. Or you can just fire up your home PXE server. :) Most of the docs don’t even talk about how to configure the OS; the entire “state” is kept in this yaml file in git, and the expectation is that I can blow the machine away at any time and drop containers back on it.

Container Linux comes with git, rsync, and curl/wget out of the box, so it’s nice to have these core tools present instead of totally missing. There’s also a toolbox command that automagically fetches a traditional distro container (it defaults to Fedora) and mounts the host filesystem inside, so you can nano to your heart’s content.

This works really well. Container Linux lets me dictate the update policy as part of the config file, and if I have multiple servers I can cluster them together so that they take turns rebooting without a service going down. But as you can see, we quickly venture out of the “home server” use case with this one.

Container Linux with rkt/systemd

This is the setup I am enjoying the most. Instead of using the docker daemon, I create a systemd service file, say /etc/systemd/system/unifi.service:

[Unit]
Description=Unifi
After=network.target

[Service]
# machine.slice groups container workloads for systemd resource accounting
Slice=machine.slice
Type=simple
# rkt fetches the Docker image and runs it; --insecure-options=image is needed
# because Docker images don't carry signatures rkt can verify. State lives on
# the host in /home/jorge/config/unifi, mounted into the container at /config.
ExecStart=/usr/bin/rkt --insecure-options=image run docker://linuxserver/unifi --volume config,kind=host,source=/home/jorge/config/unifi --mount volume=config,target=/config --net=host --dns=8.8.8.8
KillMode=mixed
Restart=always

[Install]
WantedBy=multi-user.target

Then I systemctl start unifi to start it and systemctl enable unifi to enable it on boot. Container Linux is set to reboot on Thursdays at 4am, and containers update on boot. I can use journalctl and machinectl just like I can with “normal” services.
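
Spelled out, the day-to-day interaction is all stock systemd:

sudo systemctl daemon-reload   # pick up the new unit file
sudo systemctl start unifi     # start the pod now
sudo systemctl enable unifi    # and on every boot
journalctl -u unifi -f         # tail the container's logs
machinectl list                # rkt registers pods with systemd-machined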

Note that you can use this config on any OS that has systemd and rkt. Since Container Linux has a section in its yaml file for writing out systemd services, I can have one well maintained file that will spit out an entire configured server in one shot. Yeah!
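
Roughly, the relevant bits of that yaml look like this. This is a sketch from memory of the Container Linux Config format (which the ct config transpiler turns into the Ignition file the installer consumes), so double-check the field names against the docs:

cat <<'EOF' > server.yaml
# Reboot window: Thursdays at 4am, enforced by locksmith
locksmith:
  reboot_strategy: reboot
  window_start: "Thu 04:00"
  window_length: "1h"

# Units get written out at provision time; contents trimmed for brevity
systemd:
  units:
    - name: unifi.service
      enabled: true
      contents: |
        [Unit]
        Description=Unifi
        [Service]
        ExecStart=/usr/bin/rkt --insecure-options=image run docker://linuxserver/unifi
        Restart=always
        [Install]
        WantedBy=multi-user.target
EOF
# Transpile to the Ignition JSON the installer actually consumes:
ct < server.yaml > ignition.json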

This one “feels” the most future-proof: now that the OCI spec is finalized, it seems like over time every tool will just be able to consume these images. I don’t know what that means for rkt and/or docker, but you can use docker in this manner as well.

What about state?

So far I’ve only really talked about the “installation problem” and have continually left out the hard part: the state of the applications themselves. Reinstalling Container Linux will get me all the apps back but none of the data, and that doesn’t sound very cloud native!

If you look at the systemd service file above, you’ll see I keep the state in /home/jorge/config/unifi, and I do the same for each of the services. I need to find a way to make sure all of that is saved somewhere other than the local disk.

Saving this onto an NFS share earned an overwhelming NOPE NOPE NOPE in the straw poll I took (one person even threatened to come over and fight me). And it’s kind of cheating anyway, since it just moves the problem.

So right now this is still up in the air; for the moment I’ve fired up a quick Duplicati instance to keep a copy on S3.
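
Keeping with the theme, Duplicati itself runs as a container too. A sketch using the linuxserver.io image; the host paths are mine, and you point it at S3 from its web UI on port 8200:

# Back up the config dirs; Duplicati's web UI lives on port 8200
docker run -d --name duplicati \
  -e PUID=1000 -e PGID=1000 \
  -p 8200:8200 \
  -v /home/jorge/config/duplicati:/config \
  -v /home/jorge/config:/source:ro \
  linuxserver/duplicati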

I really don’t want to add a Ceph cluster to my home administration tasks, though. Suggestions here would be most welcome.

How have they been running?

I know what you’re thinking: self-rebooting servers and auto-updating containers? You’re high.

Surprisingly, I have been running all five of these setups side by side for the last three months, and they’ve all been rock solid.

I don’t know what to say here; chalk it up to some combination of great OS and container maintainers, or maybe just not much churn. I am going to keep these up and running as long as possible just to see what happens.

What’s left to try?

The obvious hole here is the Project Atomic stack, which will be next on the list.

And of course, it’s only a matter of time until one of these reboots breaks something or a container has a bad day.

If you think this blog post isn’t crazy enough, Chuck and I will be delving into Kubernetes for this later on, as this is all just a warm-up.

TLDRing your way to a Kubernetes Bare Metal cluster

Alex Ellis has an excellent tutorial on how to install Kubernetes in 10 minutes. It is a summarized version of what you can find in the official documentation. Read those first; this is an even shorter version with my choices mixed in.

We’ll install Ubuntu 16.04 on some machines; I’m using three. I chose Weave for you instead of sending you to a choose-your-own-network page, as you have other stuff to learn before you dive into opinions on networking overlays. We’re also in a lab environment, so we assume some things, like all your machines being on the same network.

Prep the Operating System

First let’s take care of the OS. I set up automatic updates, ensure the latest kernel is installed, and then make sure we’re all up to date; do whatever works for you:

sudo -s
dpkg-reconfigure unattended-upgrades
apt install linux-generic-hwe-16.04
apt update
apt dist-upgrade
reboot

Prep each node for Kubernetes:

This is just installing docker and adding the Kubernetes repo; we’ll be root for these steps:

sudo -s
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main  
EOF

apt update
apt install -qy docker.io kubelet kubeadm kubernetes-cni

On the master:

Pick a machine to be a master, then on that one:

kubeadm init

Then follow the directions to copy your config file to your user account. We only have a few commands left that need sudo, so you can safely exit root and continue as your user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Let’s install the network, and then allow workloads to be scheduled on the master (for a lab we want to use all our hardware for workloads!):

kubectl apply -f https://git.io/weave-kube-1.6
kubectl taint nodes --all node-role.kubernetes.io/master-

On each worker node:

On each machine you want to be a worker (yours will be different; the output of kubeadm init tells you exactly what to run):

sudo kubeadm join --token 030b75.21ca2b9818ca75ef 192.168.1.202:6443 

You might need to tack on --skip-preflight-checks; see #347, sorry for the inconvenience.

Ensuring your cluster works

It shouldn’t take long for the nodes to come online. Just check ’em out:

$ kubectl get nodes
NAME       STATUS     AGE       VERSION
dahl       Ready      45m       v1.7.1
hyperion   NotReady   16s       v1.7.1
tediore    Ready      32m       v1.7.1

$ kubectl cluster-info
Kubernetes master is running at https://192.168.1.202:6443
KubeDNS is running at https://192.168.1.202:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'
$

OK, your cluster is rocking. Now:

Set up your laptop

I don’t like being ssh’ed into my cluster unless I’m doing maintenance, so now that we know things work, let’s copy the Kubernetes config from the master node to our local workstation. You should already know how to copy files between systems, but here’s mine for reference:

 sudo scp /etc/kubernetes/admin.conf jorge@<workstation>:/home/jorge/.kube/config

I don’t need the entire Kubernetes repo on my laptop, so we’ll just install the snap for kubectl and check that we can access the server:

  sudo snap install kubectl --classic
  kubectl get nodes
  kubectl cluster-info

Don’t forget to turn on autocompletion!
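
With the snap-installed kubectl that’s a one-liner for bash (adjust for your shell of choice):

# Load kubectl's built-in bash completion in every new shell
echo 'source <(kubectl completion bash)' >> ~/.bashrc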

Deploy your first application

Let’s deploy the Kubernetes dashboard:

   kubectl create -f https://git.io/kube-dashboard
   kubectl proxy

Then hit up http://localhost:8001/ui.

That’s it, enjoy your new cluster!

Joining the Community

kubeadm is brought to you by SIG Cluster Lifecycle. They have regular meetings that anyone can attend, and you can give feedback on the mailing list. I’ll see you there!