k3s: Lightweight Kubernetes
Rancher's k3s, available under the Apache License 2.0, provides a lightweight Kubernetes distribution suitable for small and edge devices. There are use cases where the installation procedures provided by Rancher are not ideal and a bitbake-built version is what is needed; only a few modifications to the k3s source code are required to accomplish that.
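To include k3s in an image built from this layer, add the relevant package to the image. A minimal sketch, assuming the recipe splits its packaging into server and agent packages (the package names below are assumptions; check k3s_git.bb for the exact names), in local.conf or an image recipe:

# local.conf / image recipe sketch; "k3s-server" is an assumed package name, verify against k3s_git.bb
IMAGE_INSTALL:append = " k3s-server"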
CNI
By default, K3s will run with flannel as the CNI, using VXLAN as the default backend. It is both possible to change the flannel backend and to change from flannel to another CNI.
Please see https://rancher.com/docs/k3s/latest/en/installation/network-options/ for further k3s networking details.
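For example, the flannel backend can be switched, or flannel disabled in favour of another CNI, via standard k3s server flags (a sketch only; verify the exact flags against the k3s documentation for the version built here):

# use the host-gw flannel backend instead of the default vxlan
k3s server --flannel-backend=host-gw

# disable the built-in flannel entirely and provide your own CNI
k3s server --flannel-backend=none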
Configure and run a k3s agent
The convenience script k3s-agent can be used to set up a k3s agent (service):
k3s-agent -t <token> -s https://<master>:6443
(Here <token> is found in /var/lib/rancher/k3s/server/node-token at the k3s master.)
Example:
k3s-agent -t /var/lib/rancher/k3s/server/node-token -s https://localhost:6443
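A minimal sketch of joining a separate agent node (the master address is illustrative):

# on the k3s master, read the join token
cat /var/lib/rancher/k3s/server/node-token

# on the agent node, join the cluster using that token and the master's address
k3s-agent -t <token> -s https://<master>:6443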
If you are running an all-in-one node (both the server and agent) for testing purposes, do not run the above script. It will perform cleanup and break flannel networking on your host.
Instead, run the following (note the space between 'k3s' and 'agent'):
k3s agent -t /var/lib/rancher/k3s/server/token --server https://localhost:6443/
Notes:
Memory:
If running under qemu, the default of 256M of memory is not enough; k3s will OOM and exit.
Boot with qemuparams="-m 2048" to get 2G of memory (or choose an appropriate amount for your configuration).
Disk:
If using qemu and a core-image* image, you'll need to add extra space to your disk to ensure containers can start. The following, in your image recipe or local.conf, adds 2G of extra space to the rootfs:
IMAGE_ROOTFS_EXTRA_SPACE = "2097152"
Example qemux86-64 boot line:
runqemu qemux86-64 nographic kvm slirp qemuparams="-m 2048"
k3s logs can be seen via:
% journalctl -u k3s
or
% journalctl -xe
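To follow the log live, or to check the overall state of the service, the usual systemd tooling applies:
% journalctl -u k3s -f
% systemctl status k3s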
Example output from qemux86-64 running k3s server:
root@qemux86-64:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
qemux86-64 Ready master 46s v1.18.9-k3s1
root@qemux86-64:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
local-path-provisioner-6d59f47c7-h7lxk 1/1 Running 0 2m32s
metrics-server-7566d596c8-mwntr 1/1 Running 0 2m32s
helm-install-traefik-229v7 0/1 Completed 0 2m32s
coredns-7944c66d8d-9rfj7 1/1 Running 0 2m32s
svclb-traefik-pb5j4 2/2 Running 0 89s
traefik-758cd5fc85-lxpr8 1/1 Running 0 89s
root@qemux86-64:~# kubectl describe pods -n kube-system
root@qemux86-64:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:12:35:02 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fec0::5054:ff:fe12:3502/64 scope site dynamic mngtmpaddr
valid_lft 86239sec preferred_lft 14239sec
inet6 fe80::5054:ff:fe12:3502/64 scope link
valid_lft forever preferred_lft forever
3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether e2:aa:04:89:e6:0a brd ff:ff:ff:ff:ff:ff
inet 10.42.0.0/32 brd 10.42.0.0 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::e0aa:4ff:fe89:e60a/64 scope link
valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:be:3e:25:e7 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether 82:8e:b4:f8:06:e7 brd ff:ff:ff:ff:ff:ff
inet 10.42.0.1/24 brd 10.42.0.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::808e:b4ff:fef8:6e7/64 scope link
valid_lft forever preferred_lft forever
7: veth82ac482e@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
link/ether ea:9d:14:c1:00:70 brd ff:ff:ff:ff:ff:ff link-netns cni-c52e6e09-f6e0-a47b-aea3-d6c47d3e2d01
inet6 fe80::e89d:14ff:fec1:70/64 scope link
valid_lft forever preferred_lft forever
8: vethb94745ed@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
link/ether 1e:7f:7e:d3:ca:e8 brd ff:ff:ff:ff:ff:ff link-netns cni-86958efe-2462-016f-292d-81dbccc16a83
inet6 fe80::8046:3cff:fe23:ced1/64 scope link
valid_lft forever preferred_lft forever
9: veth81ffb276@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
link/ether 2a:1d:48:54:76:50 brd ff:ff:ff:ff:ff:ff link-netns cni-5d77238e-6452-4fa3-40d2-91d48386080b
inet6 fe80::acf4:7fff:fe11:b6f2/64 scope link
valid_lft forever preferred_lft forever
10: vethce261f6a@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
link/ether 72:a3:90:4a:c5:12 brd ff:ff:ff:ff:ff:ff link-netns cni-55675948-77f2-a952-31ce-615f2bdb0093
inet6 fe80::4d5:1bff:fe5d:db3a/64 scope link
valid_lft forever preferred_lft forever
11: vethee199cf4@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
link/ether e6:90:a4:a3:bc:a1 brd ff:ff:ff:ff:ff:ff link-netns cni-4aeccd16-2976-8a78-b2c4-e028da3bb1ea
inet6 fe80::c85a:8bff:fe0b:aea0/64 scope link
valid_lft forever preferred_lft forever
root@qemux86-64:~# kubectl describe nodes
Name: qemux86-64
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=k3s
beta.kubernetes.io/os=linux
k3s.io/hostname=qemux86-64
k3s.io/internal-ip=10.0.2.15
kubernetes.io/arch=amd64
kubernetes.io/hostname=qemux86-64
kubernetes.io/os=linux
node-role.kubernetes.io/master=true
node.kubernetes.io/instance-type=k3s
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"2e:52:6a:1b:76:d4"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 10.0.2.15
k3s.io/node-args: ["server"]
k3s.io/node-config-hash: MLFMUCBMRVINLJJKSG32TOUFWB4CN55GMSNY25AZPESQXZCYRN2A====
k3s.io/node-env: {}
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 10 Nov 2020 14:01:28 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: qemux86-64
AcquireTime: <unset>
RenewTime: Tue, 10 Nov 2020 14:56:27 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Tue, 10 Nov 2020 14:43:46 +0000 Tue, 10 Nov 2020 14:43:46 +0000 FlannelIsUp Flannel is running on this node
MemoryPressure False Tue, 10 Nov 2020 14:51:48 +0000 Tue, 10 Nov 2020 14:45:46 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 10 Nov 2020 14:51:48 +0000 Tue, 10 Nov 2020 14:45:46 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 10 Nov 2020 14:51:48 +0000 Tue, 10 Nov 2020 14:45:46 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 10 Nov 2020 14:51:48 +0000 Tue, 10 Nov 2020 14:45:46 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.0.2.15
Hostname: qemux86-64
Capacity:
cpu: 1
ephemeral-storage: 39748144Ki
memory: 2040164Ki
pods: 110
Allocatable:
cpu: 1
ephemeral-storage: 38666994453
memory: 2040164Ki
pods: 110
System Info:
Machine ID: 6a4abfacbf83457e9a0cbb5777457c5d
System UUID: 6a4abfacbf83457e9a0cbb5777457c5d
Boot ID: f5ddf6c8-1abf-4aef-9e29-106488e3c337
Kernel Version: 5.8.13-yocto-standard
OS Image: Poky (Yocto Project Reference Distro) 3.2+snapshot-20201105 (master)
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.4.1-4-ge44e8ebea.m
Kubelet Version: v1.18.9-k3s1
Kube-Proxy Version: v1.18.9-k3s1
PodCIDR: 10.42.0.0/24
PodCIDRs: 10.42.0.0/24
ProviderID: k3s://qemux86-64
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system svclb-traefik-jpmnd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 54m
kube-system metrics-server-7566d596c8-wh29d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56m
kube-system local-path-provisioner-6d59f47c7-npn4d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56m
kube-system coredns-7944c66d8d-md8hr 100m (10%) 0 (0%) 70Mi (3%) 170Mi (8%) 56m
kube-system traefik-758cd5fc85-phjr2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 54m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (10%) 0 (0%)
memory 70Mi (3%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 56m kube-proxy Starting kube-proxy.
Normal Starting 55m kubelet Starting kubelet.
Warning InvalidDiskCapacity 55m kubelet invalid capacity 0 on image filesystem
Normal NodeHasSufficientPID 55m (x2 over 55m) kubelet Node qemux86-64 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 55m (x2 over 55m) kubelet Node qemux86-64 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 55m (x2 over 55m) kubelet Node qemux86-64 status is now: NodeHasNoDiskPressure
Normal NodeAllocatableEnforced 55m kubelet Updated Node Allocatable limit across pods
Normal NodeReady 54m kubelet Node qemux86-64 status is now: NodeReady
Normal Starting 52m kube-proxy Starting kube-proxy.
Normal NodeReady 50m kubelet Node qemux86-64 status is now: NodeReady
Normal NodeAllocatableEnforced 50m kubelet Updated Node Allocatable limit across pods
Warning Rebooted 50m kubelet Node qemux86-64 has been rebooted, boot id: a4e4d2d8-ddb4-49b8-b0a9-e81d12707113
Normal NodeHasSufficientMemory 50m (x2 over 50m) kubelet Node qemux86-64 status is now: NodeHasSufficientMemory
Normal Starting 50m kubelet Starting kubelet.
Normal NodeHasSufficientPID 50m (x2 over 50m) kubelet Node qemux86-64 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 50m (x2 over 50m) kubelet Node qemux86-64 status is now: NodeHasNoDiskPressure
Normal NodeNotReady 17m kubelet Node qemux86-64 status is now: NodeNotReady
Warning InvalidDiskCapacity 15m (x2 over 50m) kubelet invalid capacity 0 on image filesystem
Normal Starting 12m kube-proxy Starting kube-proxy.
Normal Starting 10m kubelet Starting kubelet.
Warning InvalidDiskCapacity 10m kubelet invalid capacity 0 on image filesystem
Normal NodeAllocatableEnforced 10m kubelet Updated Node Allocatable limit across pods
Warning Rebooted 10m kubelet Node qemux86-64 has been rebooted, boot id: f5ddf6c8-1abf-4aef-9e29-106488e3c337
Normal NodeHasSufficientMemory 10m (x2 over 10m) kubelet Node qemux86-64 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 10m (x2 over 10m) kubelet Node qemux86-64 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 10m (x2 over 10m) kubelet Node qemux86-64 status is now: NodeHasSufficientPID
Normal NodeReady 10m kubelet Node qemux86-64 status is now: NodeReady