Playbook #1

/root/kubeinit/ci/builds/6mbKNrxD/0/kubeinit/kubeinit/kubeinit-aux/kubeinit/playbook.yml

Date: 26 Oct 2023 19:36:23 +0000
Duration: 00:45:02.10
Controller: nyctea
User: root
Versions: Ansible 2.15.2, ara 1.6.1 (client), 1.6.1 (server), Python 3.11.4
Hosts: 8
Plays: 9
Tasks: 1048
Results: 1048
Files: 53
Records: 1

Task result details

Field Value
changed
True
msg
All items completed
results

Result #1

Field Value
ansible_loop_var
controller_node
changed
False
controller_node
controller-01
false_condition
kubeinit_controller_count|int > 1 and controller_node not in kubeinit_first_controller_node
skip_reason
Conditional result was False
skipped
True
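
Result #1 was skipped because the recorded condition ("kubeinit_controller_count|int > 1 and controller_node not in kubeinit_first_controller_node") evaluated to False for the first controller, while the same task executed on controller-02 and controller-03 below. A minimal sketch of how a task producing these results might be written follows; only the loop variable, the conditional, and the shell body are taken from this report, while the task name, the join-command variable (kubeinit_eks_join_command), and the inventory group name are assumptions for illustration:

# Hypothetical reconstruction of the join task recorded in this report.
# controller_node, the when: condition, and the shell body come from the
# results above; every other name is an assumption.
- name: Join the remaining controller nodes to the control plane
  ansible.builtin.shell: |
    kubeadm reset -f || true
    echo "{{ kubeinit_eks_join_command }}" > ~/eks_controller_join_command.sh
    sh ~/eks_controller_join_command.sh
  args:
    executable: /bin/bash
  delegate_to: "{{ controller_node }}"
  loop: "{{ groups['all_controller_nodes'] | default([]) }}"
  loop_control:
    loop_var: controller_node
  when: kubeinit_controller_count|int > 1 and controller_node not in kubeinit_first_controller_node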



Result #2

Field Value
ansible_loop_var
controller_node
changed
True
cmd
kubeadm reset -f || true
echo "kubeadm join api.ekscluster.kubeinit.local:6443 --token 65sua9.qdc2olnrsgjbkoui --discovery-token-ca-cert-hash sha256:ea360cbd6dc7f622c2fdc8dcbf7f79137a87aaa566c58293cb37f666ab8dbee3  	--control-plane --certificate-key f5d54c8514e228a6f95e40430e942692465c67217aa7a1a52ae08a8c0ecab7ff" > ~/eks_controller_join_command.sh
sh ~/eks_controller_join_command.sh
controller_node
controller-02
delta
0:01:17.052925
end
2023-10-26 20:19:45.825324
failed
False
invocation
{
    "module_args": {
        "_raw_params": "kubeadm reset -f || true\necho \"kubeadm join api.ekscluster.kubeinit.local:6443 --token 65sua9.qdc2olnrsgjbkoui --discovery-token-ca-cert-hash sha256:ea360cbd6dc7f622c2fdc8dcbf7f79137a87aaa566c58293cb37f666ab8dbee3  \t--control-plane --certificate-key f5d54c8514e228a6f95e40430e942692465c67217aa7a1a52ae08a8c0ecab7ff\" > ~/eks_controller_join_command.sh\nsh ~/eks_controller_join_command.sh\n",
        "_uses_shell": true,
        "argv": null,
        "chdir": null,
        "creates": null,
        "executable": "/bin/bash",
        "removes": null,
        "stdin": null,
        "stdin_add_newline": true,
        "strip_empty_ends": true
    }
}
msg

rc
0
start
2023-10-26 20:18:28.772399
stderr
W1026 20:18:28.863217   61998 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
W1026 20:18:28.888324   61998 cleanupnode.go:134] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
W1026 20:19:02.099827   62019 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
stderr_lines
[
    "W1026 20:18:28.863217   61998 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory",
    "W1026 20:18:28.888324   61998 cleanupnode.go:134] [reset] Failed to evaluate the \"/var/lib/kubelet\" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory",
    "W1026 20:19:02.099827   62019 checks.go:835] detected that the sandbox image \"registry.k8s.io/pause:3.6\" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using \"registry.k8s.io/pause:3.9\" as the CRI sandbox image."
]
stdout
[preflight] Running pre-flight checks
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.ekscluster.kubeinit.local controller-02.ekscluster.kubeinit.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.2]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [controller-02.ekscluster.kubeinit.local localhost] and IPs [10.0.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [controller-02.ekscluster.kubeinit.local localhost] and IPs [10.0.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node controller-02.ekscluster.kubeinit.local as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node controller-02.ekscluster.kubeinit.local as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
stdout_lines
[
    "[preflight] Running pre-flight checks",
    "[reset] Deleted contents of the etcd data directory: /var/lib/etcd",
    "[reset] Stopping the kubelet service",
    "[reset] Unmounting mounted directories in \"/var/lib/kubelet\"",
    "[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]",
    "[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]",
    "",
    "The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d",
    "",
    "The reset process does not reset or clean up iptables rules or IPVS tables.",
    "If you wish to reset iptables, you must do so manually by using the \"iptables\" command.",
    "",
    "If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)",
    "to reset your system's IPVS tables.",
    "",
    "The reset process does not clean your kubeconfig files and you must remove them manually.",
    "Please, check the contents of the $HOME/.kube/config file.",
    "[preflight] Running pre-flight checks",
    "[preflight] Reading configuration from the cluster...",
    "[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'",
    "[preflight] Running pre-flight checks before initializing the new control plane instance",
    "[preflight] Pulling images required for setting up a Kubernetes cluster",
    "[preflight] This might take a minute or two, depending on the speed of your internet connection",
    "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
    "[download-certs] Downloading the certificates in Secret \"kubeadm-certs\" in the \"kube-system\" Namespace",
    "[download-certs] Saving the certificates to the folder: \"/etc/kubernetes/pki\"",
    "[certs] Using certificateDir folder \"/etc/kubernetes/pki\"",
    "[certs] Generating \"apiserver-kubelet-client\" certificate and key",
    "[certs] Generating \"apiserver\" certificate and key",
    "[certs] apiserver serving cert is signed for DNS names [api.ekscluster.kubeinit.local controller-02.ekscluster.kubeinit.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.2]",
    "[certs] Generating \"etcd/server\" certificate and key",
    "[certs] etcd/server serving cert is signed for DNS names [controller-02.ekscluster.kubeinit.local localhost] and IPs [10.0.0.2 127.0.0.1 ::1]",
    "[certs] Generating \"etcd/peer\" certificate and key",
    "[certs] etcd/peer serving cert is signed for DNS names [controller-02.ekscluster.kubeinit.local localhost] and IPs [10.0.0.2 127.0.0.1 ::1]",
    "[certs] Generating \"etcd/healthcheck-client\" certificate and key",
    "[certs] Generating \"apiserver-etcd-client\" certificate and key",
    "[certs] Generating \"front-proxy-client\" certificate and key",
    "[certs] Valid certificates and keys now exist in \"/etc/kubernetes/pki\"",
    "[certs] Using the existing \"sa\" key",
    "[kubeconfig] Generating kubeconfig files",
    "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
    "[kubeconfig] Writing \"admin.conf\" kubeconfig file",
    "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file",
    "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file",
    "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
    "[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
    "[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
    "[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
    "[check-etcd] Checking that the etcd cluster is healthy",
    "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
    "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
    "[kubelet-start] Starting the kubelet",
    "[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...",
    "[etcd] Announced new etcd member joining to the existing etcd cluster",
    "[etcd] Creating static Pod manifest for \"etcd\"",
    "[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s",
    "The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation",
    "[mark-control-plane] Marking the node controller-02.ekscluster.kubeinit.local as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]",
    "[mark-control-plane] Marking the node controller-02.ekscluster.kubeinit.local as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]",
    "",
    "This node has joined the cluster and a new control plane instance was created:",
    "",
    "* Certificate signing request was sent to apiserver and approval was received.",
    "* The Kubelet was informed of the new secure connection details.",
    "* Control plane label and taint were applied to the new node.",
    "* The Kubernetes control plane instances scaled up.",
    "* A new etcd member was added to the local/stacked etcd cluster.",
    "",
    "To start administering your cluster from this node, you need to run the following as a regular user:",
    "",
    "\tmkdir -p $HOME/.kube",
    "\tsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config",
    "\tsudo chown $(id -u):$(id -g) $HOME/.kube/config",
    "",
    "Run 'kubectl get nodes' to see this node join the cluster."
]
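
The token, discovery-token CA cert hash, and certificate key embedded in the cmd above are normally produced on the first controller before this task runs. A hedged sketch of that step follows; the task name, the register variable, and the delegation target variable are assumptions, while the kubeadm subcommands themselves are the standard way to obtain these values:

# Hypothetical sketch: generate the pieces of the control-plane join command
# on the first controller. "kubeadm token create --print-join-command" emits
# the join command with --token and --discovery-token-ca-cert-hash;
# "kubeadm init phase upload-certs --upload-certs" re-uploads the control-plane
# certificates and prints the key used with --certificate-key.
- name: Generate the control plane join command on the first controller
  ansible.builtin.shell: |
    kubeadm token create --print-join-command
    kubeadm init phase upload-certs --upload-certs | tail -1
  args:
    executable: /bin/bash
  register: _join_command_parts
  delegate_to: "{{ kubeinit_first_controller_node }}"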



Result #3

Field Value
ansible_loop_var
controller_node
changed
True
cmd
kubeadm reset -f || true
echo "kubeadm join api.ekscluster.kubeinit.local:6443 --token 65sua9.qdc2olnrsgjbkoui --discovery-token-ca-cert-hash sha256:ea360cbd6dc7f622c2fdc8dcbf7f79137a87aaa566c58293cb37f666ab8dbee3  	--control-plane --certificate-key f5d54c8514e228a6f95e40430e942692465c67217aa7a1a52ae08a8c0ecab7ff" > ~/eks_controller_join_command.sh
sh ~/eks_controller_join_command.sh
controller_node
controller-03
delta
0:01:05.633603
end
2023-10-26 20:20:52.333470
failed
False
invocation
{
    "module_args": {
        "_raw_params": "kubeadm reset -f || true\necho \"kubeadm join api.ekscluster.kubeinit.local:6443 --token 65sua9.qdc2olnrsgjbkoui --discovery-token-ca-cert-hash sha256:ea360cbd6dc7f622c2fdc8dcbf7f79137a87aaa566c58293cb37f666ab8dbee3  \t--control-plane --certificate-key f5d54c8514e228a6f95e40430e942692465c67217aa7a1a52ae08a8c0ecab7ff\" > ~/eks_controller_join_command.sh\nsh ~/eks_controller_join_command.sh\n",
        "_uses_shell": true,
        "argv": null,
        "chdir": null,
        "creates": null,
        "executable": "/bin/bash",
        "removes": null,
        "stdin": null,
        "stdin_add_newline": true,
        "strip_empty_ends": true
    }
}
msg

rc
0
start
2023-10-26 20:19:46.699867
stderr
W1026 20:19:46.787081   61429 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
W1026 20:19:46.811179   61429 cleanupnode.go:134] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
W1026 20:20:19.275305   61452 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
stderr_lines
[
    "W1026 20:19:46.787081   61429 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory",
    "W1026 20:19:46.811179   61429 cleanupnode.go:134] [reset] Failed to evaluate the \"/var/lib/kubelet\" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory",
    "W1026 20:20:19.275305   61452 checks.go:835] detected that the sandbox image \"registry.k8s.io/pause:3.6\" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using \"registry.k8s.io/pause:3.9\" as the CRI sandbox image."
]
stdout
[preflight] Running pre-flight checks
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.ekscluster.kubeinit.local controller-03.ekscluster.kubeinit.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.3]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [controller-03.ekscluster.kubeinit.local localhost] and IPs [10.0.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [controller-03.ekscluster.kubeinit.local localhost] and IPs [10.0.0.3 127.0.0.1 ::1]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node controller-03.ekscluster.kubeinit.local as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node controller-03.ekscluster.kubeinit.local as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
stdout_lines
[
    "[preflight] Running pre-flight checks",
    "[reset] Deleted contents of the etcd data directory: /var/lib/etcd",
    "[reset] Stopping the kubelet service",
    "[reset] Unmounting mounted directories in \"/var/lib/kubelet\"",
    "[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]",
    "[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]",
    "",
    "The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d",
    "",
    "The reset process does not reset or clean up iptables rules or IPVS tables.",
    "If you wish to reset iptables, you must do so manually by using the \"iptables\" command.",
    "",
    "If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)",
    "to reset your system's IPVS tables.",
    "",
    "The reset process does not clean your kubeconfig files and you must remove them manually.",
    "Please, check the contents of the $HOME/.kube/config file.",
    "[preflight] Running pre-flight checks",
    "[preflight] Reading configuration from the cluster...",
    "[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'",
    "[preflight] Running pre-flight checks before initializing the new control plane instance",
    "[preflight] Pulling images required for setting up a Kubernetes cluster",
    "[preflight] This might take a minute or two, depending on the speed of your internet connection",
    "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
    "[download-certs] Downloading the certificates in Secret \"kubeadm-certs\" in the \"kube-system\" Namespace",
    "[download-certs] Saving the certificates to the folder: \"/etc/kubernetes/pki\"",
    "[certs] Using certificateDir folder \"/etc/kubernetes/pki\"",
    "[certs] Generating \"apiserver\" certificate and key",
    "[certs] apiserver serving cert is signed for DNS names [api.ekscluster.kubeinit.local controller-03.ekscluster.kubeinit.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.3]",
    "[certs] Generating \"apiserver-kubelet-client\" certificate and key",
    "[certs] Generating \"front-proxy-client\" certificate and key",
    "[certs] Generating \"etcd/peer\" certificate and key",
    "[certs] etcd/peer serving cert is signed for DNS names [controller-03.ekscluster.kubeinit.local localhost] and IPs [10.0.0.3 127.0.0.1 ::1]",
    "[certs] Generating \"etcd/healthcheck-client\" certificate and key",
    "[certs] Generating \"apiserver-etcd-client\" certificate and key",
    "[certs] Generating \"etcd/server\" certificate and key",
    "[certs] etcd/server serving cert is signed for DNS names [controller-03.ekscluster.kubeinit.local localhost] and IPs [10.0.0.3 127.0.0.1 ::1]",
    "[certs] Valid certificates and keys now exist in \"/etc/kubernetes/pki\"",
    "[certs] Using the existing \"sa\" key",
    "[kubeconfig] Generating kubeconfig files",
    "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
    "[kubeconfig] Writing \"admin.conf\" kubeconfig file",
    "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file",
    "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file",
    "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
    "[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
    "[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
    "[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
    "[check-etcd] Checking that the etcd cluster is healthy",
    "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
    "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
    "[kubelet-start] Starting the kubelet",
    "[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...",
    "[etcd] Announced new etcd member joining to the existing etcd cluster",
    "[etcd] Creating static Pod manifest for \"etcd\"",
    "[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s",
    "The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation",
    "[mark-control-plane] Marking the node controller-03.ekscluster.kubeinit.local as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]",
    "[mark-control-plane] Marking the node controller-03.ekscluster.kubeinit.local as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]",
    "",
    "This node has joined the cluster and a new control plane instance was created:",
    "",
    "* Certificate signing request was sent to apiserver and approval was received.",
    "* The Kubelet was informed of the new secure connection details.",
    "* Control plane label and taint were applied to the new node.",
    "* The Kubernetes control plane instances scaled up.",
    "* A new etcd member was added to the local/stacked etcd cluster.",
    "",
    "To start administering your cluster from this node, you need to run the following as a regular user:",
    "",
    "\tmkdir -p $HOME/.kube",
    "\tsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config",
    "\tsudo chown $(id -u):$(id -g) $HOME/.kube/config",
    "",
    "Run 'kubectl get nodes' to see this node join the cluster."
]
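
Both joins finished with rc 0, and the kubeadm output ends with the hint to run 'kubectl get nodes'. That check can be automated as a follow-up task; a minimal sketch, assuming /etc/kubernetes/admin.conf is readable on the first controller and that three control-plane nodes are expected (the task name, retry values, and register variable are illustrative):

# Hypothetical verification sketch: wait until all three controllers report Ready.
- name: Wait for the joined controllers to become Ready
  ansible.builtin.shell: |
    kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes --no-headers
  args:
    executable: /bin/bash
  register: _nodes
  retries: 30
  delay: 10
  until: >-
    _nodes.stdout_lines
    | select('search', 'control-plane')
    | select('search', ' Ready ')
    | list | length >= 3
  delegate_to: "{{ kubeinit_first_controller_node }}"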