Deploying Kubernetes on AWS, GCE, and Bare Metal

As part of the weekly Kubernetes Community Meeting, Marco Ceppi deploys a fully functional Kubernetes cluster on AWS, GCE, and bare metal:

If you’re interested in bare metal Kubernetes, we invite you to join us and other contributors in the sig-onprem.

Not sure where to get started? Check out our Getting Started documentation.

Canonical Distribution of Kubernetes - Release 1.5.3

We’re proud to announce support for Kubernetes 1.5.3 in the Canonical Distribution of Kubernetes. This is a pure upstream distribution of Kubernetes, designed to be easily deployable to public clouds, on-premises (e.g. vSphere, OpenStack), bare metal, and developer laptops. Kubernetes 1.5.3 is a patch release comprised mostly of bugfixes, and we encourage you to check out the release notes.

Getting Started:

Here’s the simplest way to get a Kubernetes 1.5.3 cluster up and running on an Ubuntu 16.04 system:

sudo snap install conjure-up --classic
conjure-up kubernetes

During the installation conjure-up will ask you what cloud you want to deploy on and prompt you for the proper credentials. If you’re deploying to local containers (LXD) see these instructions for localhost-specific considerations.

For production-grade deployments and cluster lifecycle management, we recommend reading the full Canonical Distribution of Kubernetes documentation.
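
If you want a quick sanity check once the deploy settles, something along these lines should work (the unit name kubernetes-master/0 is illustrative and may differ in your deployment):

juju status                                         # wait for all workloads to settle into active/idle
juju scp kubernetes-master/0:config ~/.kube/config  # fetch the cluster kubeconfig from the master unit
kubectl get nodes                                   # workers should report Ready
kubectl cluster-info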

Home page: https://jujucharms.com/canonical-kubernetes/

Source code: https://github.com/juju-solutions/bundle-canonical-kubernetes

How to upgrade

With your Kubernetes model selected, you can upgrade your cluster by deploying the bundle again, provided you are on the 1.5.x series of Kubernetes. Releases before 1.5.x have not been tested at this time. Depending on which bundle you previously deployed, run:

    juju deploy canonical-kubernetes

or

    juju deploy kubernetes-core

If you have made tweaks to your deployment bundle, such as deploying additional worker nodes under a different label, you will need to manually upgrade the components. The following command list assumes you have made no tweaks, but it can be modified to work for your deployment.

juju upgrade-charm kubernetes-master
juju upgrade-charm kubernetes-worker
juju upgrade-charm etcd
juju upgrade-charm flannel
juju upgrade-charm easyrsa
juju upgrade-charm kubeapi-load-balancer

This will upgrade the charm code and resources to the Kubernetes 1.5.3 release of the Canonical Distribution of Kubernetes.
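
To confirm the upgrade rolled out, a couple of quick checks (names and output are illustrative, not the official procedure):

juju status kubernetes-master kubernetes-worker  # charm revisions and workload status after the upgrade
kubectl version                                  # the server version should now report v1.5.3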

New features:

  • Full support for Kubernetes v1.5.3.
  • K8s master charm now properly keeps distributed master files in sync for an HA control plane.

General Fixes

  • #41251 - Fix UpdateAddonsTactic to use local repo
  • #42058 - Fix shebangs in charm actions to use python3
  • #41815 - enable DefaultTolerationSeconds admission controller by default
  • #41351 - Multi master patch
  • #41256 - Lint fixes for the master and worker Python code
  • #41919 - Juju: Disable anonymous auth on kubelet (Thanks to community member @raesene for pointing this out!)

etcd Specific Fixes

  • #74 - Add the ability to attach NRPE to etcd for monitoring with an external Nagios server.
  • #76 - Add the debug action to the etcd charm that makes it easier to collect debug information in the field.
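
If you want to try either of these, the rough shape is something like the following (a sketch assuming the standard nrpe subordinate charm and Juju 2.x action commands, not the official procedure):

juju deploy nrpe                       # subordinate charm that exposes checks to an external Nagios server
juju add-relation nrpe etcd            # attach NRPE monitoring to the etcd units
juju run-action etcd/0 debug           # queue the new debug action on an etcd unit
juju show-action-output <action-id>    # retrieve the collected debug information once it completes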

Test Results

The Canonical Distribution of Kubernetes runs daily tests to verify it works with the upstream code. As part of the Kubernetes test infrastructure we upload daily test runs. The test results are available on the dashboard; follow along with our progress here:

How to contact us:

We’re normally found in these Slack channels and attend these sig meetings regularly:

Operators are an important part of Kubernetes; we encourage you to participate with other members of the Kubernetes community!

We also monitor the Kubernetes mailing lists and other community channels, so feel free to reach out to us. As always, PRs, recommendations, and bug reports are welcome: https://github.com/juju-solutions/bundle-canonical-kubernetes

Canonical Distribution of Kubernetes - Release 1.5.2

We’re proud to announce support for Kubernetes 1.5.2 in the Canonical Distribution of Kubernetes. This is a pure upstream distribution of Kubernetes, designed to be easily deployable to public clouds, on-premises (e.g. vSphere, OpenStack), bare metal, and developer laptops. Kubernetes 1.5.2 is a patch release comprised mostly of bugfixes, and we encourage you to check out the release notes.

Getting Started:

Here’s the simplest way to get a Kubernetes 1.5.2 cluster up and running on an Ubuntu 16.04 system:

sudo apt-add-repository ppa:juju/stable
sudo apt-add-repository ppa:conjure-up/next
sudo apt update
sudo apt install conjure-up
conjure-up kubernetes

During the installation conjure-up will ask you what cloud you want to deploy on and prompt you for the proper credentials. If you’re deploying to local containers (LXD) see these instructions for localhost-specific considerations.

For production-grade deployments and cluster lifecycle management, we recommend reading the full Canonical Distribution of Kubernetes documentation.
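
Once the cluster is up, a minimal smoke test looks roughly like this (the deployment name and image are illustrative; in Kubernetes 1.5, kubectl run creates a Deployment):

kubectl run hello --image=nginx --replicas=2               # spin up a small test deployment
kubectl expose deployment hello --port=80 --type=NodePort  # expose it on a NodePort
kubectl get pods,svc -o wide                               # pods should be Running and the service should list a port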

Home page: https://jujucharms.com/canonical-kubernetes/

Source code: https://github.com/juju-solutions/bundle-canonical-kubernetes

How to upgrade

With your Kubernetes model selected, you can upgrade your cluster by deploying the bundle again, provided you are on the 1.5.x series of Kubernetes. Releases before 1.5.x have not been tested at this time. Depending on which bundle you previously deployed, run:

    juju deploy canonical-kubernetes

or

    juju deploy kubernetes-core

If you have made tweaks to your deployment bundle, such as deploying additional worker nodes under a different label, you will need to manually upgrade the components. The following command list assumes you have made no tweaks, but it can be modified to work for your deployment.

juju upgrade-charm kubernetes-master
juju upgrade-charm kubernetes-worker
juju upgrade-charm etcd
juju upgrade-charm flannel
juju upgrade-charm easyrsa
juju upgrade-charm kubeapi-load-balancer

This will upgrade the charm code and resources to the Kubernetes 1.5.2 release of the Canonical Distribution of Kubernetes.

New features:

  • Full support for Kubernetes v1.5.2.

General Fixes

  • #151 #187 It wasn’t very transparent to users that they should be using conjure-up for local development; conjure-up is now the de facto default mechanism for deploying CDK.

  • #173 Resolved permissions on ~/.kube on kubernetes-worker units

  • #169 Tuned the verbosity of the AddonTacticManager class during the charm layer build process

  • #162 Added NO_PROXY configuration to prevent routing all requests through the configured proxy [by @axinojolais] (a proxy sketch follows at the end of this list)

  • #160 Resolved an error in flannel sometimes encountered during cni-relation-changed [by @spikebike]

  • #172 Resolved sporadic timeout issues between worker and apiserver due to nginx connection buffering [by @axinojolais]

  • #101 Work-around for offline installs attempting to contact pypi to install docker-compose

  • #95 Tuned verbosity of copy operations in the debug script for debugging the debug script.
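
For context on #162: Juju charms typically pick up proxy settings from the model, so a proxied deployment usually looks roughly like this (the keys are standard Juju model-config options; the hosts are illustrative):

juju model-config http-proxy=http://proxy.example.com:3128
juju model-config https-proxy=http://proxy.example.com:3128
juju model-config no-proxy=localhost,127.0.0.1,10.1.1.1    # hosts that should bypass the proxy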

Etcd layer-specific changes

  • #72 #70 Resolved a certificate-relation error where etcdctl would attempt to contact the cluster master before services were ready [by @javacruft]

Unfiled/un-scheduled fixes:

  • #190 Removal of assembled bundles from the repository. See the notice to bundle authors and contributors below.

Additional Feature(s):

  • We’ve open sourced the release management scripts we’re using in a Juju-deployed Jenkins model. These scripts contain the logic we’ve been running by hand, and they give users a clear view into how we build, package, test, and release the CDK. You can see these scripts in the juju-solutions/kubernetes-jenkins repository. This is early work, and it will continue to be iterated on and documented as we push towards the Kubernetes 1.6 release.

Notice to bundle authors and contributors:

The fix for #190 is a larger change that has landed in the bundle-canonical-kubernetes repository. Instead of maintaining several copies of a single use-case bundle across several repositories, we are now assembling the CDK-based bundles from fragments (unofficial nomenclature).

This affords us the freedom to rapidly iterate on a CDK-based bundle and include partner technologies, such as different SDN vendors, storage backend components, and other integration points. It keeps our CDK bundle succinct while allowing more complex solutions to be assembled easily, reliably, and repeatably. This does change the contribution guidelines for end users.

Any changes to the core bundle should be placed in the respective fragment under the fragments directory. Once this has been placed/merged, the primary published bundles can be assembled by running ./bundle in the root of the repository. This process is outlined in the repository README.md.
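
In practice the contributor workflow looks roughly like this (a sketch; see the repository README.md for the authoritative steps):

git clone https://github.com/juju-solutions/bundle-canonical-kubernetes
cd bundle-canonical-kubernetes
# edit the relevant fragment under the fragments/ directory rather than the assembled bundles
./bundle    # re-assemble the primary published bundles from the fragments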

We look forward to any feedback on how opaque/transparent this process is, and whether it has any useful applications outside of our own release management process. The ./bundle Python script is still very much geared towards our own release process and how we assemble bundles targeted for the CDK. However, we’re open to generalizing it, and we encourage feedback/contributions to make it more useful to more people.

How to contact us:

We’re normally found in these Slack channels and attend these sig meetings regularly:

Operators are an important part of Kubernetes; we encourage you to participate with other members of the Kubernetes community!

We also monitor the Kubernetes mailing lists and other community channels, so feel free to reach out to us. As always, PRs, recommendations, and bug reports are welcome: https://github.com/juju-solutions/bundle-canonical-kubernetes

Fresh Kubernetes documentation available now

Over the past few months our team has been working really hard on the Canonical Distribution of Kubernetes. This is a pure-upstream distribution of k8s with our community’s operational expertise bundled in.

It means that we can use one set of operational code to get the same deployment on GCE, AWS, Azure, Joyent, OpenStack, and Bare Metal.

Like most young distributed systems, Kubernetes isn’t exactly famous for its ease of use, though there has been tremendous progress over the past 12 months. Our documentation on Kubernetes was nearly non-existent, and it became obvious that we had to dive in there and bust it out. I’ve spent some time fixing it up and it has recently been merged.

You can find the Official Ubuntu Guides in the “Create a cluster” section. We’re taking what I call a “sig-cluster-lifecycle” approach to this documentation: the pages are organized into lifecycle topics based on what an operator would do. So “Backups” or “Upgrades” instead of one big page with sections. This will allow us to grow each section based on the expertise we learn on k8s for that given task.

Over the coming months (and hopefully for Kubernetes 1.6) we will slowly be phasing out the documentation on our individual charm and layer pages to reduce duplication and move to a pure upstream workflow.

On behalf of our team we hope you enjoy Kubernetes, and if you’re running into issues please let us know or you can find us in the Kubernetes slack channels.

Useful tool alert, explainshell.com

Ran into this while on Stack Overflow: explainshell.com. Basically they take all of Ubuntu’s manpages and parse them so you can paste in any Linux command and see right away what each option does. Example:

rsync -chavzP --stats user@remote.host:/path/to/copy /path/to/local/storage

It then takes the command you put in there and breaks down each of the flags. I like this for a few reasons. First of all, I hate reading manpages because I’m a human being; this shows me exactly what each option does without having to parse the entire manpage, and it only tells me what I care about. Secondly, I prefer to learn by example instead of just reading manpages in their entirety. I can see this tool being very useful for people who are just starting to learn. Command on the internet confusing you? You can at least paste it in here and figure out what it’s doing before you end up being that guy.

Another example is bropages, which lets people submit example commands for each tool; users then vote on the usefulness of the examples for a nice Stack Overflow-like list of example commands. It seems as though they haven’t had a commit in almost a year, so I’m not sure if people are still contributing to it, but it seems like a decent enough idea.

Unifi's new cheaper switches are great

I started switching to Ubiquiti’s Unifi equipment at home when one of my coworkers, Sean Sosik-Hamor, recommended them for prosumer use. A little while later Lee Hutchinson published Ubiquiti Unifi made me realise how terrible consumer Wi-Fi gear is when they launched their newer (and cheaper) line of 802.11ac access points. I’ve got one of those, as well as the USG for routing duties. The USG isn’t something to write home about, but it gets the job done, and in some advanced cases you can always ssh into it; generally speaking, though, I just use it as intended and mostly set it up and forget about it.

unifi

Unlike most routers, you don’t manage Unifi gear through a web UI on the device; you run controller software on a host, and the controller then pushes the config and updates out to the devices. I recommend reading Dustin Kirkland’s blog post on running Unifi in LXD, as the controller is currently 14.04-only, and if you’re like me, you’re finding it much more manageable to keep server software nicely isolated in its own container instead of splatting all its dependencies on the host OS. If you prefer things more old school, look for the “EdgeRouter” line of routers and switches.

At $99 to $149 for an access point you can put together a nice business-grade combo, especially with the latest consumer routers starting to get close to $300(!) with utterly terrible software. The one thing that was always expensive, though, was the Unifi line of managed switches. It’s nice, but at $199 for 8 ports, that’s just too much per port. Here’s a nice review from Lee on the Unifi Switch 8. Thanks to the wonder of their beta store, I was able to pick up the newer, slimmed-down 8-port model, the Unifi US-8:

unifi8

There it is, with the unmanaged switch it replaced. They dropped the SFP ports, and you can see the port LEDs are on the top instead of in each port, probably for cost? And since it’s Unifi, it plops in nicely with the UI, giving me some nice per-port stats:

unifi3

And it gets better: they’ve done a US-24 and US-48 as well. I put a US-24 in my basement. $215 all day for 24 ports, compared to the older model, which would go north of $500!

unifi2

I’m still in the process of setting up the homelab VLAN, so I don’t have much to report on that, but having everything managed in one system is a really great feature. I didn’t really need SFP plugs or lots of PoE power for my use, so this new low-end line is perfect for me. If you find yourself wanting cheap-but-good equipment with decent software, then I recommend you check them out, and of course drop by /r/ubiquiti if you need anything.

See also Troy Hunt’s more in-depth blog post for more information.

New blog, and new status updates

I’m going to try to blog more in the coming new year, but I figured I would get an early start and get back into the swing of things.

Things have been going at breakneck speed lately, so here are the highlights.

  • We released The Canonical Distribution of Kubernetes 1.5.1.
  • I’m a new dad! Rafael Mateo Castro was born on November 26th 2016. He’s totally healthy and is currently taking up most of my free time.
  • Marco took some time to help launch and run ops for one of the coolest Pokemon Go sites around. Pokemon Go itself runs on Kubernetes. Marco running this site is a complete coincidence. Make sure you check out that link for some solid gold ops information. All running on Juju of course!
  • Speaking of coincidences, my brother and his wife had a son six days earlier than us, Leonardo Mehta Castro. Again, total coincidence that my son and his son are named after ninja turtles.
  • I’ll be at Config Management Camp and FOSDEM this year, so hope to catch up with everyone there.

It’s cold and wintery now, so here’s a video of Oscar being a super-dog: