

In my current test setup I have several VMs running Debian 11. I want to install Kubernetes on all of them. In the future the nodes will be in different locations on different networks, and WireGuard is used to "overlay" all the different network environments. All nodes have a private IP and a second WireGuard interface.
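For reference, a minimal sketch of the WireGuard side, assuming that 10.11.12.10 (the address advertised to kubeadm below) is vm01's WireGuard address; keys, endpoints and the peer IP are placeholders:

    # /etc/wireguard/wg0.conf on vm01 (sketch)
    [Interface]
    Address = 10.11.12.10/24
    ListenPort = 51820
    PrivateKey = <vm01-private-key>

    [Peer]
    # vm02
    PublicKey = <vm02-public-key>
    Endpoint = <vm02-public-ip>:51820
    AllowedIPs = 10.11.12.11/32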

# Controlplane Kubernetes install

So I've installed Docker and kubeadm/kubelet/kubectl in version 1.23.5 on all nodes. I've also installed HAProxy on every node. It works as a load balancer by listening on localhost:443 and forwarding the requests to one of the online control planes.
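The HAProxy part looks roughly like this (a sketch rather than the exact config; the backend addresses assume the control planes' WireGuard IPs and the default API server port 6443):

    # /etc/haproxy/haproxy.cfg (sketch)
    frontend k8s-api
        bind 127.0.0.1:443
        mode tcp
        default_backend k8s-api-servers

    backend k8s-api-servers
        mode tcp
        option tcp-check
        balance roundrobin
        server vm01 10.11.12.10:6443 check
        server vm02 10.11.12.11:6443 check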

Then I started the cluster with kubeadm:

    vm01> kubeadm init --apiserver-advertise-address=10.11.12.10 --pod-network-cidr=10.20.0.0/16
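Since the additional control planes later join via 127.0.0.1:443 and the join output below warns about controlPlaneEndpoint, the cluster is effectively configured with the local HAProxy as its endpoint. Expressed as a kubeadm config file, that would look roughly like this (a sketch of the equivalent configuration, not necessarily the exact way it was set):

    # kubeadm-config.yaml (sketch)
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 10.11.12.10
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.23.5
    controlPlaneEndpoint: "127.0.0.1:443"
    networking:
      podSubnet: "10.20.0.0/16"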
After that I tested to integrate either flannel or calico, either by adding --iface= or by setting the custom manifest.
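Concretely, binding the CNI to the WireGuard interface looks roughly like this (a sketch; the interface name wg0 is an assumption):

    # flannel: extra argument for flanneld in kube-flannel.yml
    args:
    - --ip-masq
    - --kube-subnet-mgr
    - --iface=wg0

    # calico: env variable on the calico-node DaemonSet
    - name: IP_AUTODETECTION_METHOD
      value: "interface=wg0"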
When I add a normal node, everything is fine. The node is added, pods are created, and the communication is done via the defined network interface.

When I add a control plane without the WireGuard interface, I can also add further control planes with

    vm02> kubeadm join 127.0.0.1:443 --token ...

Of course, before that I copied several files from vm01 to vm02 under /etc/kubernetes/pki, namely ca.*, sa.*, front-proxy-ca.*, apiserver-kubelet-client.* and etcd/ca.*.
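The copy step is essentially this (sketch; run from vm01, assuming root SSH access and an existing /etc/kubernetes/pki/etcd directory on vm02):

    vm01> scp /etc/kubernetes/pki/ca.* /etc/kubernetes/pki/sa.* \
              /etc/kubernetes/pki/front-proxy-ca.* \
              /etc/kubernetes/pki/apiserver-kubelet-client.* \
              root@vm02:/etc/kubernetes/pki/
    vm01> scp /etc/kubernetes/pki/etcd/ca.* root@vm02:/etc/kubernetes/pki/etcd/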

But when I use the flannel or calico network together with the WireGuard interface, something strange happens after the join command:

    FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    Running pre-flight checks before initializing the new control plane instance
    Pulling images required for setting up a Kubernetes cluster
    This might take a minute or two, depending on the speed of your internet connection
    You can also perform this action in beforehand using 'kubeadm config images pull'
    Using certificateDir folder "/etc/kubernetes/pki"
    Generating "front-proxy-client" certificate and key
    Generating "etcd/server" certificate and key
    etcd/server serving cert is signed for DNS names and IPs
    Generating "etcd/peer" certificate and key
    etcd/peer serving cert is signed for DNS names and IPs
    Generating "etcd/healthcheck-client" certificate and key
    Generating "apiserver-etcd-client" certificate and key
    Generating "apiserver" certificate and key
    apiserver serving cert is signed for DNS names and IPs
    Using the existing "apiserver-kubelet-client" certificate and key
    Valid certificates and keys now exist in "/etc/kubernetes/pki"
    Using kubeconfig folder "/etc/kubernetes"
    WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    Using manifest folder "/etc/kubernetes/manifests"
    Creating static Pod manifest for "kube-apiserver"
    Creating static Pod manifest for "kube-controller-manager"
    Creating static Pod manifest for "kube-scheduler"
    Checking that the etcd cluster is healthy
    Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    Waiting for the kubelet to perform the TLS Bootstrap.
    Announced new etcd member joining to the existing etcd cluster
    Waiting for the new etcd member to join the cluster.
    Initial timeout of 40s passed.
    Error execution phase control-plane-join/etcd: error creating local etcd static pod manifest file: timeout waiting for etcd cluster to be available
    To see the stack trace of this error execute with --v=5 or higher

And after that timeout, even on vm01 the API server stops working; I cannot run any kubeadm or kubectl commands anymore.

But I neither understand why the API server on vm01 stops working when adding a second API server, nor can I find a reason why the output is talking about the 192.168. addresses.
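If it helps with the diagnosis: since kubectl is unusable at that point, the etcd member list can still be checked directly on vm01 through the container runtime, roughly like this (a sketch, assuming the Docker runtime mentioned above; the container ID is a placeholder):

    vm01> docker ps | grep etcd        # find the container of the etcd static pod
    vm01> docker exec <etcd-container-id> etcdctl \
              --endpoints=https://127.0.0.1:2379 \
              --cacert=/etc/kubernetes/pki/etcd/ca.crt \
              --cert=/etc/kubernetes/pki/etcd/server.crt \
              --key=/etc/kubernetes/pki/etcd/server.key \
              member list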
