Playbook #1

/root/kubeinit/ci/builds/6mbKNrxD/0/kubeinit/kubeinit/kubeinit-aux/kubeinit/playbook.yml

Date: 31 Oct 2023 08:44:29 +0000
Duration: 00:45:59.97
Controller: nyctea
User: root
Versions: Ansible 2.15.2, ara 1.6.1 (client), 1.6.1 (server), Python 3.11.4
Hosts: 9
Plays: 7
Tasks: 1091
Results: 1091
Files: 53
Records: 1

Task result details

changed: True
msg: All items completed
results:

Result #1

ansible_loop_var: controller_node
changed: False
controller_node: controller-01
false_condition: kubeinit_controller_count|int > 1 and controller_node not in kubeinit_first_controller_node
skip_reason: Conditional result was False
skipped: True
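Result #1 records a skip: controller-01 is the first controller (it already initialized the control plane), so the loop condition evaluated to False for it and only controller-02 and controller-03 perform the join. A minimal sketch of a task that would produce results of this shape — the task name, the loop source, the join-command variable and the register name are assumptions, while the loop variable and condition match Result #1 and the shell body and /bin/bash executable match the invocations recorded in Results #2 and #3 below:

    - name: Join the remaining controllers to the control plane  # hypothetical task name
      ansible.builtin.shell: |
        kubeadm reset -f || true
        echo "{{ controller_join_command }}" > ~/k8s_controller_join_command.sh
        sh ~/k8s_controller_join_command.sh
      args:
        executable: /bin/bash
      loop: "{{ kubeinit_controller_nodes }}"  # assumed variable holding the controller hostnames
      loop_control:
        loop_var: controller_node
      when: kubeinit_controller_count|int > 1 and controller_node not in kubeinit_first_controller_node
      delegate_to: "{{ controller_node }}"  # assumed: run on each controller guest
      register: controller_join_result  # hypothetical register name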



Result #2

ansible_loop_var: controller_node
changed: True
cmd:
kubeadm reset -f || true
echo "kubeadm join api.k8scluster.kubeinit.local:6443 --token lbd7iq.3s5vmfvaldhsu3lk --discovery-token-ca-cert-hash sha256:2b9dbb9c64d4868edf54d6eb6be607cdc1713b12e2b2155a4fa7c31059140d50  	--control-plane --certificate-key a556f0b00416b503d4050e1d2e1b5048cb3d0030c38c3990a7d266633eb2cbdd" > ~/k8s_controller_join_command.sh
sh ~/k8s_controller_join_command.sh
controller_node: controller-02
delta: 0:01:18.890225
end: 2023-10-31 09:28:23.702266
failed: False
invocation:
{
    "module_args": {
        "_raw_params": "kubeadm reset -f || true\necho \"kubeadm join api.k8scluster.kubeinit.local:6443 --token lbd7iq.3s5vmfvaldhsu3lk --discovery-token-ca-cert-hash sha256:2b9dbb9c64d4868edf54d6eb6be607cdc1713b12e2b2155a4fa7c31059140d50  \t--control-plane --certificate-key a556f0b00416b503d4050e1d2e1b5048cb3d0030c38c3990a7d266633eb2cbdd\" > ~/k8s_controller_join_command.sh\nsh ~/k8s_controller_join_command.sh\n",
        "_uses_shell": true,
        "argv": null,
        "chdir": null,
        "creates": null,
        "executable": "/bin/bash",
        "removes": null,
        "stdin": null,
        "stdin_add_newline": true,
        "strip_empty_ends": true
    }
}
msg:
rc: 0
start: 2023-10-31 09:27:04.812041
stderr:
W1031 09:27:04.909899   60678 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
W1031 09:27:04.935126   60678 cleanupnode.go:134] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
stderr_lines:
[
    "W1031 09:27:04.909899   60678 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory",
    "W1031 09:27:04.935126   60678 cleanupnode.go:134] [reset] Failed to evaluate the \"/var/lib/kubelet\" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory"
]
stdout:
[preflight] Running pre-flight checks
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.k8scluster.kubeinit.local controller-02.k8scluster.kubeinit.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [controller-02.k8scluster.kubeinit.local localhost] and IPs [10.0.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [controller-02.k8scluster.kubeinit.local localhost] and IPs [10.0.0.2 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node controller-02.k8scluster.kubeinit.local as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node controller-02.k8scluster.kubeinit.local as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
stdout_lines:
[
    "[preflight] Running pre-flight checks",
    "[reset] Deleted contents of the etcd data directory: /var/lib/etcd",
    "[reset] Stopping the kubelet service",
    "[reset] Unmounting mounted directories in \"/var/lib/kubelet\"",
    "[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]",
    "[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]",
    "",
    "The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d",
    "",
    "The reset process does not reset or clean up iptables rules or IPVS tables.",
    "If you wish to reset iptables, you must do so manually by using the \"iptables\" command.",
    "",
    "If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)",
    "to reset your system's IPVS tables.",
    "",
    "The reset process does not clean your kubeconfig files and you must remove them manually.",
    "Please, check the contents of the $HOME/.kube/config file.",
    "[preflight] Running pre-flight checks",
    "[preflight] Reading configuration from the cluster...",
    "[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'",
    "[preflight] Running pre-flight checks before initializing the new control plane instance",
    "[preflight] Pulling images required for setting up a Kubernetes cluster",
    "[preflight] This might take a minute or two, depending on the speed of your internet connection",
    "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
    "[download-certs] Downloading the certificates in Secret \"kubeadm-certs\" in the \"kube-system\" Namespace",
    "[download-certs] Saving the certificates to the folder: \"/etc/kubernetes/pki\"",
    "[certs] Using certificateDir folder \"/etc/kubernetes/pki\"",
    "[certs] Generating \"apiserver\" certificate and key",
    "[certs] apiserver serving cert is signed for DNS names [api.k8scluster.kubeinit.local controller-02.k8scluster.kubeinit.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.2]",
    "[certs] Generating \"apiserver-kubelet-client\" certificate and key",
    "[certs] Generating \"front-proxy-client\" certificate and key",
    "[certs] Generating \"etcd/peer\" certificate and key",
    "[certs] etcd/peer serving cert is signed for DNS names [controller-02.k8scluster.kubeinit.local localhost] and IPs [10.0.0.2 127.0.0.1 ::1]",
    "[certs] Generating \"etcd/healthcheck-client\" certificate and key",
    "[certs] Generating \"etcd/server\" certificate and key",
    "[certs] etcd/server serving cert is signed for DNS names [controller-02.k8scluster.kubeinit.local localhost] and IPs [10.0.0.2 127.0.0.1 ::1]",
    "[certs] Generating \"apiserver-etcd-client\" certificate and key",
    "[certs] Valid certificates and keys now exist in \"/etc/kubernetes/pki\"",
    "[certs] Using the existing \"sa\" key",
    "[kubeconfig] Generating kubeconfig files",
    "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
    "[kubeconfig] Writing \"admin.conf\" kubeconfig file",
    "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file",
    "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file",
    "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
    "[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
    "[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
    "[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
    "[check-etcd] Checking that the etcd cluster is healthy",
    "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
    "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
    "[kubelet-start] Starting the kubelet",
    "[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...",
    "[etcd] Announced new etcd member joining to the existing etcd cluster",
    "[etcd] Creating static Pod manifest for \"etcd\"",
    "[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s",
    "The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation",
    "[mark-control-plane] Marking the node controller-02.k8scluster.kubeinit.local as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]",
    "[mark-control-plane] Marking the node controller-02.k8scluster.kubeinit.local as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]",
    "",
    "This node has joined the cluster and a new control plane instance was created:",
    "",
    "* Certificate signing request was sent to apiserver and approval was received.",
    "* The Kubelet was informed of the new secure connection details.",
    "* Control plane label and taint were applied to the new node.",
    "* The Kubernetes control plane instances scaled up.",
    "* A new etcd member was added to the local/stacked etcd cluster.",
    "",
    "To start administering your cluster from this node, you need to run the following as a regular user:",
    "",
    "\tmkdir -p $HOME/.kube",
    "\tsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config",
    "\tsudo chown $(id -u):$(id -g) $HOME/.kube/config",
    "",
    "Run 'kubectl get nodes' to see this node join the cluster."
]



Result #3

ansible_loop_var: controller_node
changed: True
cmd:
kubeadm reset -f || true
echo "kubeadm join api.k8scluster.kubeinit.local:6443 --token lbd7iq.3s5vmfvaldhsu3lk --discovery-token-ca-cert-hash sha256:2b9dbb9c64d4868edf54d6eb6be607cdc1713b12e2b2155a4fa7c31059140d50  	--control-plane --certificate-key a556f0b00416b503d4050e1d2e1b5048cb3d0030c38c3990a7d266633eb2cbdd" > ~/k8s_controller_join_command.sh
sh ~/k8s_controller_join_command.sh
controller_node: controller-03
delta: 0:01:14.496290
end: 2023-10-31 09:29:39.052847
failed: False
invocation:
{
    "module_args": {
        "_raw_params": "kubeadm reset -f || true\necho \"kubeadm join api.k8scluster.kubeinit.local:6443 --token lbd7iq.3s5vmfvaldhsu3lk --discovery-token-ca-cert-hash sha256:2b9dbb9c64d4868edf54d6eb6be607cdc1713b12e2b2155a4fa7c31059140d50  \t--control-plane --certificate-key a556f0b00416b503d4050e1d2e1b5048cb3d0030c38c3990a7d266633eb2cbdd\" > ~/k8s_controller_join_command.sh\nsh ~/k8s_controller_join_command.sh\n",
        "_uses_shell": true,
        "argv": null,
        "chdir": null,
        "creates": null,
        "executable": "/bin/bash",
        "removes": null,
        "stdin": null,
        "stdin_add_newline": true,
        "strip_empty_ends": true
    }
}
msg:
rc: 0
start: 2023-10-31 09:28:24.556557
stderr:
W1031 09:28:24.650914   60271 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
W1031 09:28:24.673896   60271 cleanupnode.go:134] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
stderr_lines:
[
    "W1031 09:28:24.650914   60271 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory",
    "W1031 09:28:24.673896   60271 cleanupnode.go:134] [reset] Failed to evaluate the \"/var/lib/kubelet\" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory"
]
stdout:
[preflight] Running pre-flight checks
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [controller-03.k8scluster.kubeinit.local localhost] and IPs [10.0.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [controller-03.k8scluster.kubeinit.local localhost] and IPs [10.0.0.3 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.k8scluster.kubeinit.local controller-03.k8scluster.kubeinit.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.3]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node controller-03.k8scluster.kubeinit.local as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node controller-03.k8scluster.kubeinit.local as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
stdout_lines:
[
    "[preflight] Running pre-flight checks",
    "[reset] Deleted contents of the etcd data directory: /var/lib/etcd",
    "[reset] Stopping the kubelet service",
    "[reset] Unmounting mounted directories in \"/var/lib/kubelet\"",
    "[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]",
    "[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]",
    "",
    "The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d",
    "",
    "The reset process does not reset or clean up iptables rules or IPVS tables.",
    "If you wish to reset iptables, you must do so manually by using the \"iptables\" command.",
    "",
    "If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)",
    "to reset your system's IPVS tables.",
    "",
    "The reset process does not clean your kubeconfig files and you must remove them manually.",
    "Please, check the contents of the $HOME/.kube/config file.",
    "[preflight] Running pre-flight checks",
    "[preflight] Reading configuration from the cluster...",
    "[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'",
    "[preflight] Running pre-flight checks before initializing the new control plane instance",
    "[preflight] Pulling images required for setting up a Kubernetes cluster",
    "[preflight] This might take a minute or two, depending on the speed of your internet connection",
    "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
    "[download-certs] Downloading the certificates in Secret \"kubeadm-certs\" in the \"kube-system\" Namespace",
    "[download-certs] Saving the certificates to the folder: \"/etc/kubernetes/pki\"",
    "[certs] Using certificateDir folder \"/etc/kubernetes/pki\"",
    "[certs] Generating \"etcd/server\" certificate and key",
    "[certs] etcd/server serving cert is signed for DNS names [controller-03.k8scluster.kubeinit.local localhost] and IPs [10.0.0.3 127.0.0.1 ::1]",
    "[certs] Generating \"etcd/healthcheck-client\" certificate and key",
    "[certs] Generating \"etcd/peer\" certificate and key",
    "[certs] etcd/peer serving cert is signed for DNS names [controller-03.k8scluster.kubeinit.local localhost] and IPs [10.0.0.3 127.0.0.1 ::1]",
    "[certs] Generating \"apiserver-etcd-client\" certificate and key",
    "[certs] Generating \"front-proxy-client\" certificate and key",
    "[certs] Generating \"apiserver\" certificate and key",
    "[certs] apiserver serving cert is signed for DNS names [api.k8scluster.kubeinit.local controller-03.k8scluster.kubeinit.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.3]",
    "[certs] Generating \"apiserver-kubelet-client\" certificate and key",
    "[certs] Valid certificates and keys now exist in \"/etc/kubernetes/pki\"",
    "[certs] Using the existing \"sa\" key",
    "[kubeconfig] Generating kubeconfig files",
    "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
    "[kubeconfig] Writing \"admin.conf\" kubeconfig file",
    "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file",
    "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file",
    "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
    "[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
    "[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
    "[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
    "[check-etcd] Checking that the etcd cluster is healthy",
    "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
    "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
    "[kubelet-start] Starting the kubelet",
    "[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...",
    "[etcd] Announced new etcd member joining to the existing etcd cluster",
    "[etcd] Creating static Pod manifest for \"etcd\"",
    "[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s",
    "The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation",
    "[mark-control-plane] Marking the node controller-03.k8scluster.kubeinit.local as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]",
    "[mark-control-plane] Marking the node controller-03.k8scluster.kubeinit.local as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]",
    "",
    "This node has joined the cluster and a new control plane instance was created:",
    "",
    "* Certificate signing request was sent to apiserver and approval was received.",
    "* The Kubelet was informed of the new secure connection details.",
    "* Control plane label and taint were applied to the new node.",
    "* The Kubernetes control plane instances scaled up.",
    "* A new etcd member was added to the local/stacked etcd cluster.",
    "",
    "To start administering your cluster from this node, you need to run the following as a regular user:",
    "",
    "\tmkdir -p $HOME/.kube",
    "\tsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config",
    "\tsudo chown $(id -u):$(id -g) $HOME/.kube/config",
    "",
    "Run 'kubectl get nodes' to see this node join the cluster."
]
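Both join results finished with rc 0 and changed: True, and kubeadm's output confirms the control-plane label and taint were applied to controller-02 and controller-03. A quick follow-up check, sketched as an Ansible task (the task name, delegation target and registered variable are assumptions, not part of the recorded run):

    - name: Verify that all control-plane nodes are registered  # hypothetical task name
      ansible.builtin.command: kubectl get nodes -l node-role.kubernetes.io/control-plane -o wide
      environment:
        KUBECONFIG: /etc/kubernetes/admin.conf
      delegate_to: "{{ kubeinit_first_controller_node }}"  # assumed: the controller that holds admin.conf
      register: control_plane_nodes
      changed_when: false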