commit d36563caf1 (Bruce Ashfield): k3s: update to v1.24.7
Bumping k3s to version v1.24.7-rc4+k3s1, which comprises the following commits:

    e3c9d859e8 Return ProviderID in URI format
    e44d22ca61 Add ServiceAccount for svclb pods
    2ca51a3d59 Update to v1.24.7-k3s1 (#6270)
    0751b6052e Fix dualStack test
    519f13e34d [Release-1.24] Replace deprecated ioutil package (#6235)
    c1c7b95dc0 Fix flakey etcd test
    6ed1e1423f Fix helm job failure on multi-server tests
    87bfc8883b Bump traefik to 2.9.1 / chart 12.0.0
    06eb948c23 Fix the typo in the test
    3a829ae860 Handle custom kubelet port in agent tunnel
    3f5c88e4a3 Fix occasional "TLS handshake error" in apiserver network proxy.
    cb0f4bd49c Use structured logging instead of logrus for event recorders
    44ae7aa4db Dump info on coredns when deployment rollout fails
    a75bbf5f4e Add ADR for ServiceLB move to CCM
    69dd30433b Disable cloud-node and cloud-node-lifecycle if CCM is disabled
    76f13d3558 Move servicelb into cloudprovider LoadBalancer interface
    23c302dccc Move DisableServiceLB/Rootless/ServiceLBNamespace into config.Control
    307e45e739 Implement InstancesV2 instead of Instances
    7198eb2f74 Bump metrics-server to v0.6.1
    0be4ef9213 Add flannel-external-ip when there is a k3s node-external-ip
    a8e0c66d1a updating to v1.24.6-k3s1 (#6164)
    fb823c8a5f Update to v1.24.5 (#6143)
    ae7d6285b6 Fix gofmt warnings
    1b806f5fee Bump golang to correct version
    ee859f7f5a Add validation check to confirm correct golang version for Kubernetes
    cf684c74a3 [Release-1.24] Bulk Backport of Testing Changes
    b8f05e4904 Bump containerd to v1.6.8-k3s1
    35e488c9c7 Bump runc to v1.1.4
    e1884e4d60 Update Flannel to v0.19.2 to fix older iptables issue
    79bb7bccd9 Fix e2e tests (#6018)
    4c9ad2546c Fix dualStack test and change ipv6 network (#6023)
    654d2b9567 CI: update Fedora 34 -> 35 (#5996)
    2b35f89664 Convert install tests to run PR build of k3s (#6003)
    f81138402e E2E: Add support for CentOS 7 and Rocky 8 (#6015)
    ab2638a247 mark v1.24.4+k3s1 as stable (#6036)
    7d6982d1fa Export agent.NetworkName for Windows
    3e394f8ec5 The Windows kubelet does not accept cadvisor flags
    c3f830e9b9 Update to v1.24.4 (#6014)
    035c03cfaa Remove codespell from Drone, add to GH Actions (#6004)
    b14cabc107 Add nightly install github action (#5998)
    75f8cfb6ea E2E: Local cluster testing (#5977)
    116c977fbf Convert vagrant tests to yaml based config (#5992)
    30fc909581 Update run scripts (#5979)
    a30971efaa Updated flannel to v0.19.1
    6b7b9c5aa9 Add scripts to run e2e test using ansible (#5134)
    18cb7ef650 fix checkError in terraform/testutils (#5893)
    77fa7fb490 Removing checkbox indicating backports since the policy is to backport everything (#5947)
    b7f7379157 Update MAINTAINERS with new folks and departures (#5948)
    db3c569b7f Add docker e2e test
    aadab55145 Add ADR for inclusion of cri-dockerd
    4aca21a1f1 Add cri-dockerd support as backend for --docker flag
    b1fa63dfb7 Revert "Remove --docker/dockershim support"
    cf66559940 Print stack on panic
    abdf0c7319 Fix comments and add check in case of IPv6 only node
    d90ba30353 Added NodeIP autodect in case of dualstack connection
    82e5da35a9 Upgrade macos-10.15 to macos-12 (#5953)
    43508341c1 Bump minio to v7.0.33
    1c17f05b8e Fix secrets reencryption for 8K+ secrets (#5936)
    118a68c913 Updates to CLI flag grouping + deprecated flag warnings. (#5937)
    13af0b1d88 Save agent token to /var/lib/rancher/k3s/server/agent-token
    4c0bc8c046 Update etcd error to match correct url (#5909)
    db2ba7b61d Don't enable unprivileged ports and icmp on old kernels
    90016c208d ADR: Depreciating and Removing Old Flags (#5890)
    24da6adfa9 Move v1.24.3+k3s1 to stable (#5889)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>

k3s: Lightweight Kubernetes

Rancher's k3s, available under the Apache License 2.0, provides a lightweight Kubernetes suitable for small and edge devices. There are use cases where the installation procedures provided by Rancher are not ideal, and a bitbake-built version is what is needed; only a few modifications to the k3s source code are required to accomplish that.

CNI

By default, K3s runs with flannel as the CNI, using VXLAN as the backend. It is possible both to change the flannel backend and to replace flannel with another CNI.

Please see https://rancher.com/docs/k3s/latest/en/installation/network-options/ for further k3s networking details.
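
For example, the flannel backend can be selected with a server flag, or flannel can be disabled entirely so that another CNI can be installed. A sketch (flag values as documented for recent k3s releases; consult the link above for your version):

k3s server --flannel-backend=host-gw

To disable flannel and bring your own CNI:

k3s server --flannel-backend=none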

Configure and run a k3s agent

The convenience script k3s-agent can be used to set up a k3s agent (service):

k3s-agent -t <token> -s https://<master>:6443

(Here <token> is found in /var/lib/rancher/k3s/server/node-token on the k3s master.)

Example:

k3s-agent -t /var/lib/rancher/k3s/server/node-token -s https://localhost:6443
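
If the server runs on a different machine, the token contents must be fetched from it first. A sketch, assuming ssh access to the master and that -t also accepts the literal token value (<master> is a placeholder, as in the usage line above):

TOKEN=$(ssh root@<master> cat /var/lib/rancher/k3s/server/node-token)
k3s-agent -t "$TOKEN" -s https://<master>:6443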

If you are running an all-in-one node (both server and agent) for testing purposes, do not run the above script: its cleanup steps will break flannel networking on your host.

Instead, run the following (note the space between 'k3s' and 'agent'):

k3s agent -t /var/lib/rancher/k3s/server/token --server https://localhost:6443/
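
Either way, registration can be confirmed with kubectl; for example:

kubectl get nodes -o wide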

Notes:

Memory:

If running under qemu, the default of 256M of memory is not enough; k3s will OOM and exit.

Boot with qemuparams="-m 2048" to boot with 2G of memory (or choose the appropriate amount for your configuration).

Disk:

If using qemu and a core-image* based image, you'll need to add extra space to your disks to ensure containers can start. The following, in your image recipe or local.conf, adds 2G of extra space to the rootfs (the value is in kilobytes):

IMAGE_ROOTFS_EXTRA_SPACE = "2097152"
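
Both adjustments can also be kept in local.conf so every runqemu boot picks them up; a sketch (QB_MEM is the qemuboot variable behind runqemu's default memory size, and these values are illustrative):

IMAGE_ROOTFS_EXTRA_SPACE = "2097152"
QB_MEM = "-m 2048"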

Example qemux86-64 boot line:

runqemu qemux86-64 nographic kvm slirp qemuparams="-m 2048"

k3s logs can be seen via:

% journalctl -u k3s

or

% journalctl -xe
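
On a slow target it can take a few minutes for the node to become schedulable; one way to block until it is Ready (node name as reported by kubectl get nodes):

% kubectl wait --for=condition=Ready node/qemux86-64 --timeout=300s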

Example output from a qemux86-64 machine running the k3s server:

root@qemux86-64:~# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
qemux86-64   Ready    master   46s   v1.18.9-k3s1

root@qemux86-64:~# kubectl get pods -n kube-system
NAME                                     READY   STATUS      RESTARTS   AGE
local-path-provisioner-6d59f47c7-h7lxk   1/1     Running     0          2m32s
metrics-server-7566d596c8-mwntr          1/1     Running     0          2m32s
helm-install-traefik-229v7               0/1     Completed   0          2m32s
coredns-7944c66d8d-9rfj7                 1/1     Running     0          2m32s
svclb-traefik-pb5j4                      2/2     Running     0          89s
traefik-758cd5fc85-lxpr8                 1/1     Running     0          89s

root@qemux86-64:~# kubectl describe pods -n kube-system

root@qemux86-64:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:12:35:02 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fec0::5054:ff:fe12:3502/64 scope site dynamic mngtmpaddr 
       valid_lft 86239sec preferred_lft 14239sec
    inet6 fe80::5054:ff:fe12:3502/64 scope link 
       valid_lft forever preferred_lft forever
3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether e2:aa:04:89:e6:0a brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.0/32 brd 10.42.0.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::e0aa:4ff:fe89:e60a/64 scope link 
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:be:3e:25:e7 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 82:8e:b4:f8:06:e7 brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.1/24 brd 10.42.0.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::808e:b4ff:fef8:6e7/64 scope link 
       valid_lft forever preferred_lft forever
7: veth82ac482e@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether ea:9d:14:c1:00:70 brd ff:ff:ff:ff:ff:ff link-netns cni-c52e6e09-f6e0-a47b-aea3-d6c47d3e2d01
    inet6 fe80::e89d:14ff:fec1:70/64 scope link 
       valid_lft forever preferred_lft forever
8: vethb94745ed@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether 1e:7f:7e:d3:ca:e8 brd ff:ff:ff:ff:ff:ff link-netns cni-86958efe-2462-016f-292d-81dbccc16a83
    inet6 fe80::8046:3cff:fe23:ced1/64 scope link 
       valid_lft forever preferred_lft forever
9: veth81ffb276@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether 2a:1d:48:54:76:50 brd ff:ff:ff:ff:ff:ff link-netns cni-5d77238e-6452-4fa3-40d2-91d48386080b
    inet6 fe80::acf4:7fff:fe11:b6f2/64 scope link 
       valid_lft forever preferred_lft forever
10: vethce261f6a@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether 72:a3:90:4a:c5:12 brd ff:ff:ff:ff:ff:ff link-netns cni-55675948-77f2-a952-31ce-615f2bdb0093
    inet6 fe80::4d5:1bff:fe5d:db3a/64 scope link 
       valid_lft forever preferred_lft forever
11: vethee199cf4@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether e6:90:a4:a3:bc:a1 brd ff:ff:ff:ff:ff:ff link-netns cni-4aeccd16-2976-8a78-b2c4-e028da3bb1ea
    inet6 fe80::c85a:8bff:fe0b:aea0/64 scope link 
       valid_lft forever preferred_lft forever


root@qemux86-64:~# kubectl describe nodes

Name:               qemux86-64
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=k3s
                    beta.kubernetes.io/os=linux
                    k3s.io/hostname=qemux86-64
                    k3s.io/internal-ip=10.0.2.15
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=qemux86-64
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=true
                    node.kubernetes.io/instance-type=k3s
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"2e:52:6a:1b:76:d4"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.0.2.15
                    k3s.io/node-args: ["server"]
                    k3s.io/node-config-hash: MLFMUCBMRVINLJJKSG32TOUFWB4CN55GMSNY25AZPESQXZCYRN2A====
                    k3s.io/node-env: {}
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 10 Nov 2020 14:01:28 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  qemux86-64
  AcquireTime:     <unset>
  RenewTime:       Tue, 10 Nov 2020 14:56:27 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 10 Nov 2020 14:43:46 +0000   Tue, 10 Nov 2020 14:43:46 +0000   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 10 Nov 2020 14:51:48 +0000   Tue, 10 Nov 2020 14:45:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 10 Nov 2020 14:51:48 +0000   Tue, 10 Nov 2020 14:45:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 10 Nov 2020 14:51:48 +0000   Tue, 10 Nov 2020 14:45:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 10 Nov 2020 14:51:48 +0000   Tue, 10 Nov 2020 14:45:46 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.0.2.15
  Hostname:    qemux86-64
Capacity:
  cpu:                1
  ephemeral-storage:  39748144Ki
  memory:             2040164Ki
  pods:               110
Allocatable:
  cpu:                1
  ephemeral-storage:  38666994453
  memory:             2040164Ki
  pods:               110
System Info:
  Machine ID:                 6a4abfacbf83457e9a0cbb5777457c5d
  System UUID:                6a4abfacbf83457e9a0cbb5777457c5d
  Boot ID:                    f5ddf6c8-1abf-4aef-9e29-106488e3c337
  Kernel Version:             5.8.13-yocto-standard
  OS Image:                   Poky (Yocto Project Reference Distro) 3.2+snapshot-20201105 (master)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.4.1-4-ge44e8ebea.m
  Kubelet Version:            v1.18.9-k3s1
  Kube-Proxy Version:         v1.18.9-k3s1
PodCIDR:                      10.42.0.0/24
PodCIDRs:                     10.42.0.0/24
ProviderID:                   k3s://qemux86-64
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
  kube-system                 svclb-traefik-jpmnd                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         54m
  kube-system                 metrics-server-7566d596c8-wh29d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         56m
  kube-system                 local-path-provisioner-6d59f47c7-npn4d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56m
  kube-system                 coredns-7944c66d8d-md8hr                  100m (10%)    0 (0%)      70Mi (3%)        170Mi (8%)     56m
  kube-system                 traefik-758cd5fc85-phjr2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         54m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                100m (10%)  0 (0%)
  memory             70Mi (3%)   170Mi (8%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   Starting                 56m                kube-proxy  Starting kube-proxy.
  Normal   Starting                 55m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      55m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientPID     55m (x2 over 55m)  kubelet     Node qemux86-64 status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  55m (x2 over 55m)  kubelet     Node qemux86-64 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    55m (x2 over 55m)  kubelet     Node qemux86-64 status is now: NodeHasNoDiskPressure
  Normal   NodeAllocatableEnforced  55m                kubelet     Updated Node Allocatable limit across pods
  Normal   NodeReady                54m                kubelet     Node qemux86-64 status is now: NodeReady
  Normal   Starting                 52m                kube-proxy  Starting kube-proxy.
  Normal   NodeReady                50m                kubelet     Node qemux86-64 status is now: NodeReady
  Normal   NodeAllocatableEnforced  50m                kubelet     Updated Node Allocatable limit across pods
  Warning  Rebooted                 50m                kubelet     Node qemux86-64 has been rebooted, boot id: a4e4d2d8-ddb4-49b8-b0a9-e81d12707113
  Normal   NodeHasSufficientMemory  50m (x2 over 50m)  kubelet     Node qemux86-64 status is now: NodeHasSufficientMemory
  Normal   Starting                 50m                kubelet     Starting kubelet.
  Normal   NodeHasSufficientPID     50m (x2 over 50m)  kubelet     Node qemux86-64 status is now: NodeHasSufficientPID
  Normal   NodeHasNoDiskPressure    50m (x2 over 50m)  kubelet     Node qemux86-64 status is now: NodeHasNoDiskPressure
  Normal   NodeNotReady             17m                kubelet     Node qemux86-64 status is now: NodeNotReady
  Warning  InvalidDiskCapacity      15m (x2 over 50m)  kubelet     invalid capacity 0 on image filesystem
  Normal   Starting                 12m                kube-proxy  Starting kube-proxy.
  Normal   Starting                 10m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      10m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  10m                kubelet     Updated Node Allocatable limit across pods
  Warning  Rebooted                 10m                kubelet     Node qemux86-64 has been rebooted, boot id: f5ddf6c8-1abf-4aef-9e29-106488e3c337
  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet     Node qemux86-64 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet     Node qemux86-64 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet     Node qemux86-64 status is now: NodeHasSufficientPID
  Normal   NodeReady                10m                kubelet     Node qemux86-64 status is now: NodeReady