meta-virtualization/recipes-containers/k3s
Bruce Ashfield 2ef1ee0412 k3s: update to v1.31.1+k3s1
Bumping k3s to version v1.31.1+k3s1, which comprises the following commits:

    452dbbc14c update kubernetes to v1.31.1-k3s3 (#10910)
    9ae2c39004 Update Kubernetes to v1.31.1 (#10895)
    d926e69073 Fix hosts.toml header var
    2caa785e17 Only clean up containerd hosts dirs managed by k3s
    4c8ef7f477 Fix rotateca validation failures when not touching default self-signed CAs
    0c8d3c0d58 Bump helm-controller for skip-verify/plain-http and updated tolerations
    db3cf9370e Bump containerd to v1.7.21, runc to v1.1.14
    28a1fd0302 Update coredns to 1.11.3 and metrics-server to 0.7.2
    944b3b2830 Bump traefik to v2.11.8
    703e7697b0 Tag PR image build as latest before scanning
    88d5576be6 Fix /trivy action running against target branch instead of PR branch
    9c537cb705 Bump aquasecurity/trivy-action from 0.20.0 to 0.24.0 (#10795)
    be60661f18 Add trivy scanning trigger for PRs (#10758)
    e0c4e60171 Update CNI plugins version
    3923e0c699 Cover edge case when on new minor release for E2E upgrade test (#10781)
    8bfcfd70cc Fix deploy latest commit on E2E tests (#10725)
    e8de533e90 Remove secrets encryption controller (#10612)
    34be6d96d1 Update kubernetes to v1.31.0-k3s3 (#10780)
    c7468edbe7 Bump go dependencies to match upstream 1.31
    ebbb109840 Update VERSION_K8S to handle any k3s revision
    f5c6472b16 Bump Kine to v0.12.0
    d358a89171 Fix secrets-encrypt metrics
    178aadbe20 Add k3s-io/kubernetes tags
    5087240e32 Downgrade Microsoft/hcsshim to v0.8.26
    8cbcbcd044 go generate
    20b50426ab Update to v1.31.0
    876d54cf49 chore: Bump Trivy version (#10670)
    518276fb77 adding MariaDB to README.md (#10717)
    649678bd89 Fix k3s-killall.sh support for custom data dir
    38df76708d Fix caching name for e2e vagrant box (#10695)
    ae0d79c7ea Update to v1.30.3-k3s1 and Go 1.22.5 (#10536)
    019b0afdd8 Fix: Add $SUDO prefix to transactional-update commands in install script (#10531)
    22fb7049bd Add tolerations support for DaemonSet pods
    daf0094cc7 Bump helm-controller to v0.16.3 to drop Helm v2 support
    ac247d29cf Update to newer OS images for install testing (#10681)
    0ee714d62b Bump containerd to v1.7.20 (#10659)
    acb71ee379 Allow Amazon Linux 2 rpm installs
    79ec016b6d Allow kylin V10 rpm installs
    8ff7d162cc Allow fedora iot rpm installs
    45c04f3502 Allow Amazon Linux 2023 rpm installs
    3aceb85c22 Add a change for killall to not unmount server and agent directory
    82ba778a86 bump docker/docker to v25.0.6
    38e8b01b8f update stable channel to v1.30.3+k3s1 (#10647)
    bffdf463e1 Fix cloudprovider controller name
    e168438d44 Wire lasso metrics up to common gatherer
    e2179aa957 Update pkg/cluster/managed.go
    3ec086f6f7 Update pkg/secretsencrypt/config.go
    e4f3cc7b54 remove deprecated use of wait functions
    e514940020 Fix inconsistent loading of config dropins when config file does not exist
    9111b1f77e Add K3S_DATA_DIR as env var for --data-dir flag
    a26a5ab1d7 Don't set K3S_DATA_DIR env var
    59e0761043 Use higher QPS for secrets reencryption (#10571)
    a70157c12e Allow Pprof and Superisor metrics in standalone mode (#10576)
    ecff337e00 Enhance E2E Hardened option (#10558)
    d4c3422a85 Fix ipv6 sysctl required by non-ipv6 LoadBalancer service
    21611c5665 Cap length of generated name used for servicelb daemonset
    891e72f90f Update secretsencrypt pagination
    c2216a62ad Use pagination when retrieving etcd snapshot list
    37830fe170 Don't use server and token values from config file for etcd-snapshot commands
    cb6bf74bc4 Add dial duration to debug error message
    118acabec2 Fix IPv6 primary node-ip handling
    9841517457 Fix agents removing configured supervisor address
    9d0c2e0000 Fix reentrant rlock in loadbalancer.dialContext
    b999a5b23d Bump kine to v0.11.11
    58ab25927f For E2E upgrade test, automatically determine the channel to use (#10461)
    c36db53e54 Add etcd s3 config secret implementation
    5508589fae chore: Bump Trivy version
    eb8bd15889 Ensure remotedialer kubelet connections use kubelet bind address
    a0b374508e Bump Local Path Provisioner version (#10394)
    0b417385a4 chore: Bump golang:alpine version
    f6942f3de4 Bump github.com/hashicorp/go-retryablehttp from 0.7.4 to 0.7.7
    b045465178 Add data-dir to uninstall and killall scripts
    d1709d60ce Fix INSTALL_K3S_PR support
    047664b610 Bump k3s-root to v0.14.0
    4204248bc3 Check for bad token permissions when install via PR (#10387)
    8f9ad1f992 Move test-compat to GHA (#10414)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Date: 2024-10-03 01:28:52 +00:00

Files in this directory (last change):

k3s/            k3s: update to v1.31.1+k3s1   2024-10-03
k3s_git.bb      k3s: update to v1.31.1+k3s1   2024-10-03
README.md       k3s: clean up README          2021-03-16
relocation.inc  k3s: update to v1.31.1+k3s1   2024-10-03
src_uri.inc     k3s: update to v1.31.1+k3s1   2024-10-03

k3s: Lightweight Kubernetes

Rancher's k3s, available under the Apache License 2.0, provides lightweight Kubernetes suitable for small and edge devices. There are use cases where the installation procedures provided by Rancher are not ideal, and a bitbake-built version is what is needed; only a few modifications to the k3s source code are required to accomplish that.
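
To pull the bitbake-built k3s into an image, a minimal sketch in local.conf or an image recipe (this assumes the k3s-server and k3s-agent packages produced by k3s_git.bb; verify the names against the recipe):

IMAGE_INSTALL:append = " k3s-server"

Use k3s-agent instead for a worker-only node.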

CNI

By default, K3s will run with flannel as the CNI, using VXLAN as the default backend. It is possible both to change the flannel backend and to replace flannel with another CNI (a sketch follows below).

Please see https://rancher.com/docs/k3s/latest/en/installation/network-options/ for further k3s networking details.
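
For example, a sketch using upstream k3s server flags (verify against the k3s version packaged by this recipe):

k3s server --flannel-backend=host-gw    # host-gw instead of the default vxlan backend
k3s server --flannel-backend=none       # disable flannel to deploy another CNI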

Configure and run a k3s agent

The convenience script k3s-agent can be used to set up a k3s agent (service):

k3s-agent -t <token> -s https://<master>:6443

(Here <token> is the node token found in /var/lib/rancher/k3s/server/node-token on the k3s master.)
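
To display the token on the master:

cat /var/lib/rancher/k3s/server/node-token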

Example:

k3s-agent -t /var/lib/rancher/k3s/server/node-token -s https://localhost:6443

If you are running an all-in-one node (both server and agent) for testing purposes, do not run the above script. It will perform cleanup and break flannel networking on your host.

Instead, run the following (note the space between 'k3s' and 'agent'):

k3s agent -t /var/lib/rancher/k3s/server/token --server https://localhost:6443/
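
Once the agent has registered, the node should show up on the server (see the example output further below):

kubectl get nodes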

Notes:

Memory:

If running under QEMU, the default of 256M of memory is not enough: k3s will OOM and exit.

Boot with qemuparams="-m 2048" to get 2G of memory, or choose the appropriate amount for your configuration (see the example boot line below).

Disk:

If using QEMU and a core-image* based image, you'll need to add extra space to your disk to ensure containers can start. The following, in your image recipe or in local.conf, would add 2G of extra space to the rootfs (the value is expressed in Kbytes):

IMAGE_ROOTFS_EXTRA_SPACE = "2097152"

Example qemux86-64 boot line:

runqemu qemux86-64 nographic kvm slirp qemuparams="-m 2048"

k3s logs can be seen via:

% journalctl -u k3s

or

% journalctl -xe
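
To follow the log live while pods come up:

% journalctl -f -u k3s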

Example output from qemux86-64 running k3s server:

root@qemux86-64:~# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
qemux86-64   Ready    master   46s   v1.18.9-k3s1

root@qemux86-64:~# kubectl get pods -n kube-system
NAME                                     READY   STATUS      RESTARTS   AGE
local-path-provisioner-6d59f47c7-h7lxk   1/1     Running     0          2m32s
metrics-server-7566d596c8-mwntr          1/1     Running     0          2m32s
helm-install-traefik-229v7               0/1     Completed   0          2m32s
coredns-7944c66d8d-9rfj7                 1/1     Running     0          2m32s
svclb-traefik-pb5j4                      2/2     Running     0          89s
traefik-758cd5fc85-lxpr8                 1/1     Running     0          89s

root@qemux86-64:~# kubectl describe pods -n kube-system

root@qemux86-64:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:12:35:02 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fec0::5054:ff:fe12:3502/64 scope site dynamic mngtmpaddr 
       valid_lft 86239sec preferred_lft 14239sec
    inet6 fe80::5054:ff:fe12:3502/64 scope link 
       valid_lft forever preferred_lft forever
3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether e2:aa:04:89:e6:0a brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.0/32 brd 10.42.0.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::e0aa:4ff:fe89:e60a/64 scope link 
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:be:3e:25:e7 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 82:8e:b4:f8:06:e7 brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.1/24 brd 10.42.0.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::808e:b4ff:fef8:6e7/64 scope link 
       valid_lft forever preferred_lft forever
7: veth82ac482e@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether ea:9d:14:c1:00:70 brd ff:ff:ff:ff:ff:ff link-netns cni-c52e6e09-f6e0-a47b-aea3-d6c47d3e2d01
    inet6 fe80::e89d:14ff:fec1:70/64 scope link 
       valid_lft forever preferred_lft forever
8: vethb94745ed@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether 1e:7f:7e:d3:ca:e8 brd ff:ff:ff:ff:ff:ff link-netns cni-86958efe-2462-016f-292d-81dbccc16a83
    inet6 fe80::8046:3cff:fe23:ced1/64 scope link 
       valid_lft forever preferred_lft forever
9: veth81ffb276@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether 2a:1d:48:54:76:50 brd ff:ff:ff:ff:ff:ff link-netns cni-5d77238e-6452-4fa3-40d2-91d48386080b
    inet6 fe80::acf4:7fff:fe11:b6f2/64 scope link 
       valid_lft forever preferred_lft forever
10: vethce261f6a@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether 72:a3:90:4a:c5:12 brd ff:ff:ff:ff:ff:ff link-netns cni-55675948-77f2-a952-31ce-615f2bdb0093
    inet6 fe80::4d5:1bff:fe5d:db3a/64 scope link 
       valid_lft forever preferred_lft forever
11: vethee199cf4@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether e6:90:a4:a3:bc:a1 brd ff:ff:ff:ff:ff:ff link-netns cni-4aeccd16-2976-8a78-b2c4-e028da3bb1ea
    inet6 fe80::c85a:8bff:fe0b:aea0/64 scope link 
       valid_lft forever preferred_lft forever


root@qemux86-64:~# kubectl describe nodes

Name:               qemux86-64
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=k3s
                    beta.kubernetes.io/os=linux
                    k3s.io/hostname=qemux86-64
                    k3s.io/internal-ip=10.0.2.15
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=qemux86-64
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=true
                    node.kubernetes.io/instance-type=k3s
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"2e:52:6a:1b:76:d4"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.0.2.15
                    k3s.io/node-args: ["server"]
                    k3s.io/node-config-hash: MLFMUCBMRVINLJJKSG32TOUFWB4CN55GMSNY25AZPESQXZCYRN2A====
                    k3s.io/node-env: {}
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 10 Nov 2020 14:01:28 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  qemux86-64
  AcquireTime:     <unset>
  RenewTime:       Tue, 10 Nov 2020 14:56:27 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 10 Nov 2020 14:43:46 +0000   Tue, 10 Nov 2020 14:43:46 +0000   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 10 Nov 2020 14:51:48 +0000   Tue, 10 Nov 2020 14:45:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 10 Nov 2020 14:51:48 +0000   Tue, 10 Nov 2020 14:45:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 10 Nov 2020 14:51:48 +0000   Tue, 10 Nov 2020 14:45:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 10 Nov 2020 14:51:48 +0000   Tue, 10 Nov 2020 14:45:46 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.0.2.15
  Hostname:    qemux86-64
Capacity:
  cpu:                1
  ephemeral-storage:  39748144Ki
  memory:             2040164Ki
  pods:               110
Allocatable:
  cpu:                1
  ephemeral-storage:  38666994453
  memory:             2040164Ki
  pods:               110
System Info:
  Machine ID:                 6a4abfacbf83457e9a0cbb5777457c5d
  System UUID:                6a4abfacbf83457e9a0cbb5777457c5d
  Boot ID:                    f5ddf6c8-1abf-4aef-9e29-106488e3c337
  Kernel Version:             5.8.13-yocto-standard
  OS Image:                   Poky (Yocto Project Reference Distro) 3.2+snapshot-20201105 (master)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.4.1-4-ge44e8ebea.m
  Kubelet Version:            v1.18.9-k3s1
  Kube-Proxy Version:         v1.18.9-k3s1
PodCIDR:                      10.42.0.0/24
PodCIDRs:                     10.42.0.0/24
ProviderID:                   k3s://qemux86-64
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
  kube-system                 svclb-traefik-jpmnd                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         54m
  kube-system                 metrics-server-7566d596c8-wh29d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         56m
  kube-system                 local-path-provisioner-6d59f47c7-npn4d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56m
  kube-system                 coredns-7944c66d8d-md8hr                  100m (10%)    0 (0%)      70Mi (3%)        170Mi (8%)     56m
  kube-system                 traefik-758cd5fc85-phjr2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         54m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                100m (10%)  0 (0%)
  memory             70Mi (3%)   170Mi (8%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   Starting                 56m                kube-proxy  Starting kube-proxy.
  Normal   Starting                 55m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      55m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientPID     55m (x2 over 55m)  kubelet     Node qemux86-64 status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  55m (x2 over 55m)  kubelet     Node qemux86-64 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    55m (x2 over 55m)  kubelet     Node qemux86-64 status is now: NodeHasNoDiskPressure
  Normal   NodeAllocatableEnforced  55m                kubelet     Updated Node Allocatable limit across pods
  Normal   NodeReady                54m                kubelet     Node qemux86-64 status is now: NodeReady
  Normal   Starting                 52m                kube-proxy  Starting kube-proxy.
  Normal   NodeReady                50m                kubelet     Node qemux86-64 status is now: NodeReady
  Normal   NodeAllocatableEnforced  50m                kubelet     Updated Node Allocatable limit across pods
  Warning  Rebooted                 50m                kubelet     Node qemux86-64 has been rebooted, boot id: a4e4d2d8-ddb4-49b8-b0a9-e81d12707113
  Normal   NodeHasSufficientMemory  50m (x2 over 50m)  kubelet     Node qemux86-64 status is now: NodeHasSufficientMemory
  Normal   Starting                 50m                kubelet     Starting kubelet.
  Normal   NodeHasSufficientPID     50m (x2 over 50m)  kubelet     Node qemux86-64 status is now: NodeHasSufficientPID
  Normal   NodeHasNoDiskPressure    50m (x2 over 50m)  kubelet     Node qemux86-64 status is now: NodeHasNoDiskPressure
  Normal   NodeNotReady             17m                kubelet     Node qemux86-64 status is now: NodeNotReady
  Warning  InvalidDiskCapacity      15m (x2 over 50m)  kubelet     invalid capacity 0 on image filesystem
  Normal   Starting                 12m                kube-proxy  Starting kube-proxy.
  Normal   Starting                 10m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      10m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  10m                kubelet     Updated Node Allocatable limit across pods
  Warning  Rebooted                 10m                kubelet     Node qemux86-64 has been rebooted, boot id: f5ddf6c8-1abf-4aef-9e29-106488e3c337
  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet     Node qemux86-64 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet     Node qemux86-64 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet     Node qemux86-64 status is now: NodeHasSufficientPID
  Normal   NodeReady                10m                kubelet     Node qemux86-64 status is now: NodeReady