Kubernetes Upgrade: The Definitive Guide to Do-It-Yourself

Ready to upgrade your Kubernetes cluster? Here’s a step-by-step guide to updating Kubernetes from start to finish.

Apr. 24, 2020 · Cloud Zone


Kubernetes is one of the most active projects on GitHub to date, having amassed more than 80k commits and 550 releases. The process of installing an HA Kubernetes cluster on-premises or in the cloud is well documented and, in most cases, doesn't require many steps. Additional tools like Kops or Kubespray help automate parts of the process.

Every so often, though, we need to upgrade the cluster to keep up with the latest security patches and bug fixes, as well as to benefit from new features that are released on an ongoing basis. This is especially important when we are running a very outdated version (for example, v1.9) or when we want to automate the process and always stay on the latest supported version.

In general, when operating an HA Kubernetes cluster, the upgrade process involves two separate tasks, which should not overlap or be performed simultaneously: upgrading the Kubernetes cluster itself and, if needed, upgrading the etcd cluster, which is the distributed key-value backing store of Kubernetes. Let's see how we can perform those tasks with minimal disruption.

Kubernetes Upgrade Paths

Note that this upgrade process is specifically for clusters installed manually in the cloud or on-premises. It does not cover managed Kubernetes environments (where upgrades are handled automatically by the platform) or managed Kubernetes services on public clouds (such as AWS EKS or Azure Kubernetes Service), which have their own upgrade processes.

For the purposes of this tutorial, we assume that a healthy 3-node Kubernetes cluster and a 3-node etcd cluster have already been provisioned. I've set mine up using six DigitalOcean Droplets, plus one more for the worker node.

Let’s say that we have the following Kubernetes master nodes all running v1.13:

Name     Address     Hostname
kube-1   10.0.11.1   kube-1.example.com
kube-2   10.0.11.2   kube-2.example.com
kube-3   10.0.11.3   kube-3.example.com

Also, we have one worker node running v1.13:

Name     Address     Hostname
worker   10.0.12.1   worker.example.com

The process of upgrading the Kubernetes master nodes is documented on the Kubernetes documentation site. The documented paths move one minor version at a time (for example, from v1.13 to v1.14, or from v1.14 to v1.15).

There is only one documented procedure for HA clusters there, but we can reuse its steps for the other upgrade paths. In this example, we are going to follow the upgrade path from v1.13 to v1.14 HA. Skipping a version (for example, upgrading from v1.13 to v1.15) is not recommended.

Before we start, we should always check the release notes of the version that we intend to upgrade to, just in case they mention breaking changes.

Upgrading Kubernetes: A Step-by-Step Guide

Let’s follow the upgrade steps now:

1. Log In to the First Node and Upgrade the kubeadm Tool Only:

Shell

$ ssh admin@10.0.11.1
# pin kubeadm to the target release (1.14.0-00 assumed here; substitute the latest 1.14.x patch)
$ apt-mark unhold kubeadm && \
    apt-get update && apt-get install -y kubeadm=1.14.0-00 && apt-mark hold kubeadm

The reason we run apt-mark unhold and apt-mark hold is that if we upgrade kubeadm without pinning a version, the installation will automatically upgrade the other components, like kubelet, to the latest version (v1.15 at the time of writing), which is not what we want. To fix that, we use hold to mark a package as held back, which prevents the package from being automatically installed, upgraded, or removed.
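
To double-check which packages are currently held back, apt-mark can list them:

Shell

$ apt-mark showhold   # prints the held packages, e.g. kubeadm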

2. Verify the Upgrade Plan:

Shell

$ kubeadm upgrade plan
...
COMPONENT            CURRENT       AVAILABLE
...

3. Apply the Upgrade Plan:

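On the same master node, we tell kubeadm to apply the plan for the specific target version. The exact patch release (v1.14.0 here) is an assumption; use the version that kubeadm upgrade plan reported as available:

Shell

$ kubeadm upgrade apply v1.14.0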

4. Update Kubelet and Restart the Service:

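We then upgrade the kubelet package on this master node to the matching version (again assuming the 1.14.0-00 package) and restart the service:

Shell

$ apt-mark unhold kubelet && \
    apt-get update && apt-get install -y kubelet=1.14.0-00 && apt-mark hold kubelet
$ systemctl restart kubelet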

5. Apply the Upgrade Plan to the Other Master Nodes:

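On each of the remaining master nodes (kube-2 and kube-3 in this example), we repeat the same sequence: upgrade kubeadm, upgrade the control-plane components, then upgrade and restart kubelet. A sketch for kube-2 follows, assuming the same 1.14.0-00 packages; note that in kubeadm v1.14 the subcommand for additional control-plane nodes is kubeadm upgrade node experimental-control-plane (in newer releases it is simply kubeadm upgrade node):

Shell

$ ssh admin@10.0.11.2
$ apt-mark unhold kubeadm && \
    apt-get update && apt-get install -y kubeadm=1.14.0-00 && apt-mark hold kubeadm
$ kubeadm upgrade node experimental-control-plane
$ apt-mark unhold kubelet && \
    apt-get update && apt-get install -y kubelet=1.14.0-00 && apt-mark hold kubelet
$ systemctl restart kubelet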

6. Upgrade kubectl on All Master Nodes:

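kubectl is packaged separately, so we upgrade it explicitly on every master node, pinning the same assumed 1.14.0-00 version:

Shell

$ apt-mark unhold kubectl && \
    apt-get update && apt-get install -y kubectl=1.14.0-00 && apt-mark hold kubectl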

7. Upgrade kubeadm on First Worker Node:

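Next we move to the worker node (10.0.12.1 in the table above) and upgrade kubeadm there, again pinning the assumed 1.14.0-00 package:

Shell

$ ssh admin@10.0.12.1
$ apt-mark unhold kubeadm && \
    apt-get update && apt-get install -y kubeadm=1.14.0-00 && apt-mark hold kubeadm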

8. Log In to a Master Node and Drain the First Worker Node:

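From one of the masters, we cordon and drain the worker so that its workloads are evicted and rescheduled before we touch it. The node name matches the worker table above; --ignore-daemonsets is typically required because DaemonSet-managed pods cannot be evicted:

Shell

$ ssh admin@10.0.11.1
$ kubectl drain worker --ignore-daemonsets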

9. Upgrade kubelet Config on Worker Node:

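Back on the worker node, we let kubeadm regenerate the kubelet configuration for the new release. In kubeadm v1.14 this is the upgrade node config subcommand with an explicit kubelet version (newer releases collapse this into kubeadm upgrade node); v1.14.0 is the same assumed target version:

Shell

$ kubeadm upgrade node config --kubelet-version v1.14.0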

10. Upgrade kubelet on Worker Node and Restart the Service:

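Still on the worker, we upgrade the kubelet package itself and restart the service (same assumed 1.14.0-00 package):

Shell

$ apt-mark unhold kubelet && \
    apt-get update && apt-get install -y kubelet=1.14.0-00 && apt-mark hold kubelet
$ systemctl restart kubelet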

11. Restore Worker Node:

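Finally, from a master node we mark the worker as schedulable again and confirm that all nodes now report the new version:

Shell

$ kubectl uncordon worker
$ kubectl get nodes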

Etcd Upgrade Paths

As you already know, etcd is the distributed key-value backing store for Kubernetes, and it is essentially the source of truth for cluster state. When we run an HA Kubernetes cluster, we also want to run an HA etcd cluster so that we have a fallback in case some nodes fail.

Typically, we would have a minimum of three etcd nodes running the latest supported version. The process of upgrading the etcd nodes is documented in the etcd repo; as with Kubernetes itself, the supported paths move one minor release at a time (for example, from 3.3 to 3.4).

When planning for etcd upgrades, you should always follow this plan:

  • Check which version you are using (see the version-check sketch right after this list).
  • Do not jump more than one minor version. For example, do not upgrade from 3.3 to 3.5. Instead, go from 3.3 to 3.4, and then from 3.4 to 3.5.
  • Use the bundled Kubernetes etcd image. The Kubernetes team bundles a custom etcd image located here, which contains etcd and etcdctl binaries for multiple etcd versions, as well as a migration operator utility for upgrading and downgrading etcd. This will help you automate the process of migrating and upgrading etcd instances.
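
As mentioned in the first point, checking the running version is a one-liner with etcdctl; the first form talks to the v2 API and the second to the v3 API:

Shell

$ etcdctl --version
$ ETCDCTL_API=3 etcdctl version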

Out of those paths, the most important change is the one from 2.3 to 3.0, as there is a major API change, which is documented here. You should also note the following:

  • Etcd v3 is able to handle requests for both v2 and v3 data. For example, we can use the ETCDCTL_API environment variable to specify which API version etcdctl should use (see the sketch after this list).
  • Running etcd v3 against the v2 data dir doesn’t automatically upgrade the data dir to the v3 format.
  • Using the v2 API against etcd v3 only updates the v2 data stored in etcd.
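
A minimal sketch of switching between API versions with ETCDCTL_API; each command lists whatever keys happen to be stored in that data format:

Shell

$ ETCDCTL_API=2 etcdctl ls /
$ ETCDCTL_API=3 etcdctl get "" --prefix --keys-only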

You may also wonder which versions of Kubernetes have support for each etcd version. There is a small section in the documentation which says:

  • Kubernetes v1.0: supports etcd2 only
  • Kubernetes v1.5.1: etcd3 support added, new clusters still default to etcd2
  • Kubernetes v1.6.0: new clusters created with kube-up.sh default to etcd3, and kube-apiserver defaults to etcd3
  • Kubernetes v1.9.0: deprecation of etcd2 storage backend announced
  • Kubernetes v1.13.0: etcd2 storage backend removed, kube-apiserver will refuse to start with --storage-backend=etcd2, with the message etcd2 is no longer a supported storage backend

So, based on that information, if you are running Kubernetes v1.12.0 with etcd2, then you are required to upgrade etcd to v3 when you upgrade Kubernetes to v1.13.0, as --storage-backend=etcd2 is no longer supported. If you are on Kubernetes v1.12.0 or below, you can run either etcd2 or etcd3.

Before every step, we should always perform basic maintenance procedures, such as periodic snapshots and periodic smoke rollbacks, and we should make sure to check the health of the cluster.

Let’s say we have the following etcd cluster nodes:

Name     Address     Hostname
etcd-1   10.0.11.1   etcd-1.example.com
etcd-2   10.0.11.2   etcd-2.example.com
etcd-3   10.0.11.3   etcd-3.example.com

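With the three members above, we can query cluster health through the v2 API or the v3 API. The HTTPS endpoints below are just the example addresses; add certificate flags if your cluster requires them, as shown further down:

Shell

$ etcdctl --endpoints=https://10.0.11.1:2379,https://10.0.11.2:2379,https://10.0.11.3:2379 cluster-health
$ ETCDCTL_API=3 etcdctl --endpoints=https://10.0.11.1:2379,https://10.0.11.2:2379,https://10.0.11.3:2379 endpoint health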

Upgrading etcd

Based on the above considerations, a typical etcd upgrade procedure consists of the following steps:

1. Log In to the First Node and Stop the Existing etcd Process:

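How etcd is stopped depends on how it was provisioned; on a systemd-managed member, such as the droplets assumed in this example, it looks like this:

Shell

$ ssh admin@10.0.11.1
$ systemctl stop etcd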

2. Back Up the etcd Data Directory to Provide a Downgrade Path in Case of Errors:

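With the member stopped, a simple offline copy of the data directory is enough to provide a downgrade path. The /var/lib/etcd location is an assumption; use whatever data-dir your members are configured with:

Shell

$ cp -ra /var/lib/etcd /var/lib/etcd-backup
$ ls /var/lib/etcd-backup/member   # should contain the snap and wal directories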

3. Download the New Binary From the etcd Releases Page and Start the etcd Server Using the Same Configuration:

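A sketch of fetching a newer release and swapping the binaries in place. The version (v3.3.10), the download URL layout, and the /usr/local/bin destination are assumptions; when you start the member again, reuse exactly the same flags or environment file that your existing etcd unit already has:

Shell

$ ETCD_VER=v3.3.10
$ curl -L https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz \
    -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
$ tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp
$ cp /tmp/etcd-${ETCD_VER}-linux-amd64/etcd /tmp/etcd-${ETCD_VER}-linux-amd64/etcdctl /usr/local/bin/
$ systemctl start etcd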

4. Repeat Steps 1 to 3 for All Other Members.

5. Verify That the Cluster Is Healthy:

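Once all members are back up on the new version, re-run the health check against the same example endpoints and confirm that every member reports healthy:

Shell

$ etcdctl --endpoints=https://10.0.11.1:2379,https://10.0.11.2:2379,https://10.0.11.3:2379 cluster-health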

Note: If you are having issues connecting to the cluster, you may need to provide HTTPS transport security certificates; for example:

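With the v2 API, the transport security flags look roughly like this; the certificate paths are examples, so point them at the CA, client certificate, and key used by your cluster:

Shell

$ etcdctl --ca-file /etc/etcd/ca.pem \
    --cert-file /etc/etcd/etcd.pem \
    --key-file /etc/etcd/etcd-key.pem \
    --endpoints=https://10.0.11.1:2379 cluster-health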

For convenience, you can use the following environment variables:

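For the v2 API the relevant variables are shown below (the v3 API uses ETCDCTL_CACERT, ETCDCTL_CERT, and ETCDCTL_KEY instead); the paths are the same example paths as above:

Shell

$ export ETCDCTL_CA_FILE=/etc/etcd/ca.pem
$ export ETCDCTL_CERT_FILE=/etc/etcd/etcd.pem
$ export ETCDCTL_KEY_FILE=/etc/etcd/etcd-key.pem
$ export ETCDCTL_ENDPOINTS=https://10.0.11.1:2379,https://10.0.11.2:2379,https://10.0.11.3:2379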

Final Thoughts

In this article, we walked through step-by-step instructions for upgrading both Kubernetes and etcd clusters. These are important maintenance procedures in the day-to-day operation of a typical business environment, and anyone who works with HA Kubernetes deployments should become familiar with them.

Topics:
kubernetes, kubernetes tutorial, cloud, container orchestration, tutorial

Published at DZone with permission of Theofanis Despoudis, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
