Useful tool alert: explainshell.com

Ran into this while browsing Stack Overflow: explainshell.com. Basically, they take all of Ubuntu’s manpages and parse them so you can paste in any Linux command and see right away what each option does. Example:

rsync -chavzP --stats [email protected]:/path/to/copy /path/to/local/storage

It takes the command you put in there and breaks down each of the flags. I like this for a few reasons. First, I hate reading manpages because I’m a human being; this shows me exactly what each option does without making me parse the entire manpage, since it only tells me what I care about. Second, I prefer to learn by example instead of reading manpages in their entirety. I can see this tool being very useful for people who are just starting to learn. Command on the internet confusing you? You can at least paste it in here and figure out what it’s doing before you end up being that guy.
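For instance, here is roughly the breakdown explainshell gives for that command, pulled from rsync(1); the host and paths below are placeholders:

```shell
# What each flag in the rsync command above means, per rsync(1):
#   -c        skip files based on checksum, not modification time and size
#   -h        output numbers in a human-readable format
#   -a        archive mode: recurse and preserve permissions, times, links, etc.
#   -v        verbose output
#   -z        compress file data during the transfer
#   -P        shorthand for --partial --progress (keep partial files, show progress)
#   --stats   print a set of transfer statistics when done
rsync -chavzP --stats user@host:/path/to/copy /path/to/local/storage
```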

Another example is bropages, which lets people submit example commands for each tool; users then vote on the usefulness of the examples, giving you a nice Stack Overflow-like list of example commands. They haven’t had a commit in almost a year, so I’m not sure if people are still contributing, but it seems like a decent enough idea.
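If you want to poke at it, bropages ships as a Ruby gem with a bro client; this is the usual install flow, assuming the service is still up:

```shell
# Install the bropages client (requires Ruby) and look up a tool
gem install bropages
bro curl    # shows community-voted example invocations of curl
```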

Unifi's new cheaper switches are great

I started switching to Ubiquiti’s Unifi equipment at home when one of my coworkers, Sean Sosik-Hamor, recommended them for prosumer use. A little while later Lee Hutchinson published Ubiquiti Unifi made me realise how terrible consumer Wi-Fi gear is when they launched their newer (and cheaper) line of 802.11ac access points. I’ve got one of those, as well as the USG for routing duties. The USG isn’t anything to write home about, but it gets the job done; for advanced cases you can always ssh into it, but generally speaking I just use it as intended: set it up and forget about it.

unifi

Unlike most routers, you don’t manage Unifi gear through a web UI on the device; you run controller software on a host, and the controller blasts out the config and updates to the devices. I recommend reading Dustin Kirkland’s blog post on running Unifi in LXD, as the controller currently supports 14.04 only, and if you’re like me, you’re finding it much more manageable to keep server software nicely isolated in its own container instead of splatting all its dependencies onto the host OS. If you prefer things more old school, look for the “EdgeRouter” line of routers and switches.
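The rough shape of the LXD approach looks like this. This is a sketch of the idea rather than Dustin’s exact steps; the container name is arbitrary, and the apt repo line is from Ubiquiti’s Debian install instructions at the time:

```shell
# Sketch: keep the Unifi controller isolated in its own LXD container
# (the controller only supported 14.04/trusty at the time)
lxc launch ubuntu:trusty unifi
lxc exec unifi -- sh -c \
  'echo "deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti" \
   > /etc/apt/sources.list.d/ubiquiti.list'
# (you'll also need to add Ubiquiti's signing key; omitted here)
lxc exec unifi -- apt-get update
lxc exec unifi -- apt-get install -y unifi
lxc list unifi    # note the container IP; the web UI listens on port 8443
```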

At $99 for an access point and $149 for the USG you can put together a nice business-grade combo, especially with the latest consumer routers creeping toward $300(!) with utterly terrible software. The one thing that was always expensive, though, was the Unifi line of managed switches. They’re nice, but at $199 for 8 ports, that’s just too much per port. Here’s a nice review from Lee on the Unifi Switch 8. Thanks to the wonder of their beta store, I was able to pick up the newer, slimmed-down 8 port, the Unifi US-8:

unifi8

There it is, with the unmanaged switch it replaced. They dropped the SFP ports, and you can see the port LEDs are on the top instead of in each port, probably to cut cost. And since it’s Unifi, it plops in nicely with the UI, giving me some nice per-port stats:

unifi3

And it gets better: they’ve done a US-24 and US-48 as well. I put a US-24 in my basement. $215 all day for 24 ports, compared to the older model, which would go north of $500!

unifi2

I’m still in the process of setting up the homelab VLAN, so I don’t have much to report on that, but having everything managed in one system is a really great feature. I didn’t really need SFP ports or lots of PoE power for my use, so this new low-end line is perfect for me. If you find yourself wanting cheap-but-good equipment with decent software, I recommend you check them out, and of course drop by /r/ubiquiti if you need anything.

See also Troy Hunt’s more in-depth blog post.

New blog, and new status updates

I’m going to try to blog more in the coming new year, but I figured I would get an early start to get back into the swing of things.

Things have been going at breakneck speed lately, so here are the highlights.

  • We released The Canonical Distribution of Kubernetes 1.5.1.
  • I’m a new dad! Rafael Mateo Castro was born on November 26th 2016. He’s totally healthy and is currently taking up most of my free time.
  • Marco took some time to help launch and run ops for one of the coolest Pokemon Go sites around; the site itself is running on Kubernetes. Marco running this site is a complete coincidence. Make sure you check out that link for some solid gold ops information. All running on Juju, of course!
  • Speaking of coincidences, my brother and his wife had a son six days before ours, Leonardo Mehta Castro. Again, total coincidence that my son and his are named after ninja turtles.
  • I’ll be at Config Management Camp and FOSDEM this year, so hope to catch up with everyone there.

It’s cold and wintery now, so here’s a video of Oscar being a super-dog:

Kubernetes the Easy Way

If you’re interested in running Kubernetes, you’ve probably heard of Kelsey Hightower’s Kubernetes the Hard Way. Exercises like these are important: they highlight the coordination needed between components in modern stacks, and they show how far the world has come in software automation. Could you imagine if you had to set everything up the hard way every time?

Learning is fun

Doing things the hard way is fun, once. After that, I’ve got work to do, and soon I’m looking around to see who else has worked on this problem and how I can leverage the best that open source has to offer.

It reminds me of the 1990s when I was learning Linux. Sure, as a professional you need to know systems and how they work, down to the kernel level if needed, but having to do those things without a working keyboard or network makes the process much harder. Give me a working computer, and then I can begin. There’s value in learning how the components work together and understanding the architecture of Kubernetes, so I encourage everyone to try the hard way at least once; if anything, it’ll make you appreciate the work people are putting into automating all of this for you in a composable and reusable way.

The easy way

I am starting a new series of videos on how we’re making the Canonical Distribution of Kubernetes easy for anyone to deploy on any cloud. All our code is open source and we love pull requests. Our goal is to help people get Kubernetes in as many places as quickly and easily as possible. We’ve incorporated lots of the things people tell us they’re looking for in a production-grade Kubernetes, and we’re always looking to codify those best practices.

Enjoy:

Following these steps will get you a working cluster; in this example I’m deploying to us-east-2, the shiny new AWS region. Subsequent videos will cover how to interact with the cluster and do more with it.
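If you’d rather skim than watch, the gist of the deployment is a bootstrap followed by a bundle deploy. The controller name here is illustrative, and the bundle name is as I recall it from the charm store:

```shell
# Stand up a Juju controller in the new AWS region, then deploy the bundle
juju bootstrap aws/us-east-2 k8s-demo
juju deploy canonical-kubernetes
juju status    # watch the cluster come up
```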

Kubernetes v1.3.3 for Ubuntu ready for testing

We’ve been trailing the Kubernetes 1.3 release for the past few weeks, mostly to ensure that etcd data migrations are preserved from 1.2 to 1.3. We’re also in the process of adding TLS between all the nodes for security reasons, which has put us a bit behind on getting Kubernetes 1.3 out to you. Don’t worry, though: we’re testing the upgrade path, and this post will outline how to set up a Kubernetes 1.2 cluster and upgrade it to v1.3.3. Once we get good feedback from the community on how this is working out for you, we’ll set v1.3.3 (or a subsequent version) as the new default for Ubuntu Kubernetes.

Our bundle, which we call “observable-kubernetes” features the following model:

  • Kubernetes (automating deployment, operations, and scaling containers)
    • Three node Kubernetes cluster with one master and two worker nodes.
    • TLS used for communication between nodes for security.
  • Etcd (distributed key value store)
    • Three node cluster for reliability.
  • Elastic stack
    • Two nodes for ElasticSearch
    • One node for a Kibana dashboard
    • Beats on every Kubernetes and Etcd node:
      • Filebeat for forwarding logs to ElasticSearch
      • Topbeat for inserting server monitoring data to ElasticSearch

As usual, you get pure Kubernetes direct from upstream and of course it’s cross-cloud, making it easy for you to use your own bare metal for deployment.

Your First Kubernetes Cluster

After configuring Juju to use the cloud you prefer we can start the cluster deployment.

juju deploy observable-kubernetes

This will deploy the bundle with default constraints. This is great for testing out Kubernetes, but most clouds won’t give you enough CPU and memory to use the cluster in anger, so I recommend checking out the documentation on how to modify the bundle to more accurately reflect either the hardware you have on hand or the instance size you prefer.

We can watch the cluster come up with watch juju status, which gives us a near-realtime view of the cluster:

kubes

Making sure your cluster works

We just wait for things to come up and hit an idle state before moving on. Once we’re up, we can manage the cluster with kubectl. We’ve provided this tool for you on the master node with a config file prepopulated for you. First, let’s find the master node:

juju run --application kubernetes is-leader

The output will show you which node is the master; you can then copy the tools to your local machine:

juju scp kubernetes/0:kubectl_package.tar.gz .

Untar that wherever you’d like and cd to that directory; you should have a kubectl binary and a kubeconfig file to use along with kubectl. You can now check the status of your cluster with:
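For example (the directory name is arbitrary):

```shell
# Unpack the tools copied from the master into a scratch directory
mkdir -p ~/kubectl-tools
tar -xzf kubectl_package.tar.gz -C ~/kubectl-tools
cd ~/kubectl-tools
ls kubectl kubeconfig    # both files should be present
```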

./kubectl cluster-info --kubeconfig ./kubeconfig 
Kubernetes master is running at https://104.196.123.155:6443
KubeDNS is running at https://104.196.123.155:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns

Now let’s check the version of Kubernetes we’re running; note how it responds with both the client and server versions.

./kubectl version --kubeconfig ./kubeconfig 
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.3", GitCommit:"882d296a99218da8f6b2a340eb0e81c69e66ecc7", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.3", GitCommit:"882d296a99218da8f6b2a340eb0e81c69e66ecc7", GitTreeState:"clean"}

Upgrading to a new version

So far we’ve done the usual bits of getting Kubernetes running on Ubuntu; now we’re ready to test the latest stuff from upstream.

juju set-config kubernetes version=v1.3.1

And then check juju status again while the model mutates. Now let’s see the version:

./kubectl version --kubeconfig ./kubeconfig 
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.3", GitCommit:"882d296a99218da8f6b2a340eb0e81c69e66ecc7", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.1", GitCommit:"fe4aa01af2e1ce3d464e11bc465237e38dbcff27", GitTreeState:"clean"}

Aha! As you can see, the cluster has upgraded to v1.3.1, but my local tools are still v1.2.3. I could just recopy the kubectl tarball from the master node, but it turns out I made a mistake: the latest upstream version of Kubernetes is actually v1.3.3. No worries man:

juju set-config kubernetes version=v1.3.3

And let’s check our version:

./kubectl version --kubeconfig ./kubeconfig
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.3", GitCommit:"882d296a99218da8f6b2a340eb0e81c69e66ecc7", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.3", GitCommit:"c6411395e09da356c608896d3d9725acab821418", GitTreeState:"clean"}

Now that the cluster is up to date, don’t forget to copy a new version of kubectl from the master so that your client is also up to date. Now we’re ballin’ on the latest upstream, and we can dive into the Kubernetes docs to get started deploying a real workload inside the cluster.
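As a quick smoke test, something like this (kubectl run syntax from the 1.3 era; the deployment name is arbitrary) will schedule a throwaway nginx pod:

```shell
# Launch a single-replica nginx deployment and watch it get scheduled
./kubectl run hello-nginx --image=nginx --kubeconfig ./kubeconfig
./kubectl get pods --kubeconfig ./kubeconfig
```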

Future Goals

So why isn’t v1.3.3 the default? We’d like to see some real feedback from people first, and we’re still validating that your data will get migrated without issues as you upgrade. As you can see, upgrading an empty cluster is trivial, but we’d like to make sure we’ve crossed all the t’s and dotted the i’s before moving on.

We’d also like to take the next few weeks to prep the charms for the upcoming v1.4 release, rev the etcd and Elastic stacks to their latest upstream versions, and rev the OS itself to xenial so that the ZFS-backed storage is more robust and has a better out-of-the-box experience. We should also make getting kubectl from the master less annoying.

Got any feedback for us? You can find us on the Juju mailing list and #sig-cluster-ops and #sig-cluster-lifecycle on kubernetes.slack.com. Hope to see you there!

Nvidia PPA download statistics for May

Welcome back to another exciting update of Nvidia driver downloads!

The biggest change is that 364.15 is now the most popular version of the driver, and of course we’ve added xenial as a new series. Here are the download statistics for the graphics-drivers PPA:

Version summaries
346.72: 57
346.96: 292
352.21: 219
352.79: 249
355.06: 3291
355.11: 11483
358.09: 16949
358.16: 22317
361.18: 4475
361.28: 20638
364.12: 12170
364.15: 37125

Series summaries
precise: 985
trusty: 51066
vivid: 11307
wily: 36171
xenial: 17996
yakkety: 11740

Arch summaries
amd64: 123560
armhf: 55
i386: 5650
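As a sanity check, the three breakdowns should each add up to the same grand total, and they do; a quick awk one-liner over any of the lists confirms it:

```shell
# Total the per-version counts; the series summaries and arch summaries
# add up to the same figure: 129265 downloads.
awk -F': ' '{s += $2} END {print s}' <<'EOF'
346.72: 57
346.96: 292
352.21: 219
352.79: 249
355.06: 3291
355.11: 11483
358.09: 16949
358.16: 22317
361.18: 4475
361.28: 20638
364.12: 12170
364.15: 37125
EOF
```

The same one-liner over the series list (985 + 51066 + 11307 + 36171 + 17996 + 11740) and the arch list (123560 + 55 + 5650) also prints 129265.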

Want to help? Buy a game and check out ppa:graphics-drivers.
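Enabling the PPA is the usual one-liner; the driver package below is one of the versions from the list above:

```shell
# Enable the graphics-drivers PPA and install a driver from it
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-364
```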

Zeppelin is now a top level Apache project

Apache Zeppelin has just graduated to become a top-level project at the Apache Software Foundation.

As always, our Big Data team has you covered; you can find all the goodness here:

But most people will likely just want to consume Zeppelin as part of their Spark cluster; check out these links below for some out-of-the-box clusters:

Happy Big-data-ing, and as always, you can join other big data enthusiasts on the mailing list: [email protected]