Playbook #1

/root/kubeinit/ci/builds/6mbKNrxD/0/kubeinit/kubeinit/kubeinit-aux/kubeinit/playbook.yml

Date:       31 Oct 2023 08:44:29 +0000
Duration:   00:45:59.97
Controller: nyctea
User:       root
Versions:   Ansible 2.15.2, ara 1.6.1 (client), 1.6.1 (server), Python 3.11.4
Hosts:      9
Plays:      7
Tasks:      1091
Results:    1091
Files:      53
Records:    1
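The same report can be pulled back out of the ara API server that rendered this page. A minimal sketch using the ara 1.x command-line client; the server address is a placeholder and the exact flags may differ between ara releases:

    # Point the client at the ara API server (address is an assumption)
    export ARA_API_CLIENT=http
    export ARA_API_SERVER=http://ara.example.com:8000

    ara playbook list                 # find this playbook's id (#1 here)
    ara playbook show 1               # the summary above
    ara result list --playbook 1      # the individual task results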

Task result details

Field: _result_kubeadm_init_output
Value:
{
    "changed": true,
    "cmd": "set -eo pipefail\nkubeadm reset -f || true\nkubeadm init    --control-plane-endpoint \"api.k8scluster.kubeinit.local:6443\"    --upload-certs    --pod-network-cidr=10.244.0.0/16\n",
    "delta": "0:01:20.379301",
    "end": "2023-10-31 09:26:52.470069",
    "failed": false,
    "msg": "",
    "rc": 0,
    "start": "2023-10-31 09:25:32.090768",
    "stderr": "W1031 09:25:32.214958   61264 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory\nW1031 09:25:32.239478   61264 cleanupnode.go:134] [reset] Failed to evaluate the \"/var/lib/kubelet\" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory\nI1031 09:25:34.164414   61263 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.26",
    "stderr_lines": [
        "W1031 09:25:32.214958   61264 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory",
        "W1031 09:25:32.239478   61264 cleanupnode.go:134] [reset] Failed to evaluate the \"/var/lib/kubelet\" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory",
        "I1031 09:25:34.164414   61263 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.26"
    ],
    "stdout": "[preflight] Running pre-flight checks\n[reset] Deleted contents of the etcd data directory: /var/lib/etcd\n[reset] Stopping the kubelet service\n[reset] Unmounting mounted directories in \"/var/lib/kubelet\"\n[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]\n[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]\n\nThe reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d\n\nThe reset process does not reset or clean up iptables rules or IPVS tables.\nIf you wish to reset iptables, you must do so manually by using the \"iptables\" command.\n\nIf your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)\nto reset your system's IPVS tables.\n\nThe reset process does not clean your kubeconfig files and you must remove them manually.\nPlease, check the contents of the $HOME/.kube/config file.\n[init] Using Kubernetes version: v1.26.10\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [api.k8scluster.kubeinit.local controller-01.k8scluster.kubeinit.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.1]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [controller-01.k8scluster.kubeinit.local localhost] and IPs [10.0.0.1 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [controller-01.k8scluster.kubeinit.local localhost] and IPs [10.0.0.1 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[wait-control-plane] Waiting for the kubelet 
to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s\n[apiclient] All control plane components are healthy after 21.026627 seconds\n[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\n[kubelet] Creating a ConfigMap \"kubelet-config\" in namespace kube-system with the configuration for the kubelets in the cluster\n[upload-certs] Storing the certificates in Secret \"kubeadm-certs\" in the \"kube-system\" Namespace\n[upload-certs] Using certificate key:\na556f0b00416b503d4050e1d2e1b5048cb3d0030c38c3990a7d266633eb2cbdd\n[mark-control-plane] Marking the node controller-01.k8scluster.kubeinit.local as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]\n[mark-control-plane] Marking the node controller-01.k8scluster.kubeinit.local as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]\n[bootstrap-token] Using token: 17ziyg.uxb3iy6rfh4ydrr8\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\n[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes\n[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\n[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\n[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\n[bootstrap-token] Creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\n[kubelet-finalize] Updating \"/etc/kubernetes/kubelet.conf\" to point to a rotatable kubelet client certificate and key\n[addons] Applied essential addon: CoreDNS\n[addons] Applied essential addon: kube-proxy\n\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME/.kube\n  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n  sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nAlternatively, if you are the root user, you can run:\n\n  export KUBECONFIG=/etc/kubernetes/admin.conf\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n  https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of the control-plane node running the following command on each as root:\n\n  kubeadm join api.k8scluster.kubeinit.local:6443 --token 17ziyg.uxb3iy6rfh4ydrr8 \\\n\t--discovery-token-ca-cert-hash sha256:2b9dbb9c64d4868edf54d6eb6be607cdc1713b12e2b2155a4fa7c31059140d50 \\\n\t--control-plane --certificate-key a556f0b00416b503d4050e1d2e1b5048cb3d0030c38c3990a7d266633eb2cbdd\n\nPlease note that the certificate-key gives access to cluster sensitive data, keep it secret!\nAs a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use\n\"kubeadm init phase upload-certs --upload-certs\" to reload certs afterward.\n\nThen you can join any number of worker nodes by running the following on each as root:\n\nkubeadm join api.k8scluster.kubeinit.local:6443 --token 17ziyg.uxb3iy6rfh4ydrr8 \\\n\t--discovery-token-ca-cert-hash sha256:2b9dbb9c64d4868edf54d6eb6be607cdc1713b12e2b2155a4fa7c31059140d50 ",
    "stdout_lines": [
        "[preflight] Running pre-flight checks",
        "[reset] Deleted contents of the etcd data directory: /var/lib/etcd",
        "[reset] Stopping the kubelet service",
        "[reset] Unmounting mounted directories in \"/var/lib/kubelet\"",
        "[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]",
        "[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]",
        "",
        "The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d",
        "",
        "The reset process does not reset or clean up iptables rules or IPVS tables.",
        "If you wish to reset iptables, you must do so manually by using the \"iptables\" command.",
        "",
        "If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)",
        "to reset your system's IPVS tables.",
        "",
        "The reset process does not clean your kubeconfig files and you must remove them manually.",
        "Please, check the contents of the $HOME/.kube/config file.",
        "[init] Using Kubernetes version: v1.26.10",
        "[preflight] Running pre-flight checks",
        "[preflight] Pulling images required for setting up a Kubernetes cluster",
        "[preflight] This might take a minute or two, depending on the speed of your internet connection",
        "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
        "[certs] Using certificateDir folder \"/etc/kubernetes/pki\"",
        "[certs] Generating \"ca\" certificate and key",
        "[certs] Generating \"apiserver\" certificate and key",
        "[certs] apiserver serving cert is signed for DNS names [api.k8scluster.kubeinit.local controller-01.k8scluster.kubeinit.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.1]",
        "[certs] Generating \"apiserver-kubelet-client\" certificate and key",
        "[certs] Generating \"front-proxy-ca\" certificate and key",
        "[certs] Generating \"front-proxy-client\" certificate and key",
        "[certs] Generating \"etcd/ca\" certificate and key",
        "[certs] Generating \"etcd/server\" certificate and key",
        "[certs] etcd/server serving cert is signed for DNS names [controller-01.k8scluster.kubeinit.local localhost] and IPs [10.0.0.1 127.0.0.1 ::1]",
        "[certs] Generating \"etcd/peer\" certificate and key",
        "[certs] etcd/peer serving cert is signed for DNS names [controller-01.k8scluster.kubeinit.local localhost] and IPs [10.0.0.1 127.0.0.1 ::1]",
        "[certs] Generating \"etcd/healthcheck-client\" certificate and key",
        "[certs] Generating \"apiserver-etcd-client\" certificate and key",
        "[certs] Generating \"sa\" key and public key",
        "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
        "[kubeconfig] Writing \"admin.conf\" kubeconfig file",
        "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file",
        "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file",
        "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file",
        "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
        "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
        "[kubelet-start] Starting the kubelet",
        "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
        "[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
        "[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
        "[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
        "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"",
        "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s",
        "[apiclient] All control plane components are healthy after 21.026627 seconds",
        "[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace",
        "[kubelet] Creating a ConfigMap \"kubelet-config\" in namespace kube-system with the configuration for the kubelets in the cluster",
        "[upload-certs] Storing the certificates in Secret \"kubeadm-certs\" in the \"kube-system\" Namespace",
        "[upload-certs] Using certificate key:",
        "a556f0b00416b503d4050e1d2e1b5048cb3d0030c38c3990a7d266633eb2cbdd",
        "[mark-control-plane] Marking the node controller-01.k8scluster.kubeinit.local as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]",
        "[mark-control-plane] Marking the node controller-01.k8scluster.kubeinit.local as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]",
        "[bootstrap-token] Using token: 17ziyg.uxb3iy6rfh4ydrr8",
        "[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles",
        "[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes",
        "[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials",
        "[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token",
        "[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster",
        "[bootstrap-token] Creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace",
        "[kubelet-finalize] Updating \"/etc/kubernetes/kubelet.conf\" to point to a rotatable kubelet client certificate and key",
        "[addons] Applied essential addon: CoreDNS",
        "[addons] Applied essential addon: kube-proxy",
        "",
        "Your Kubernetes control-plane has initialized successfully!",
        "",
        "To start using your cluster, you need to run the following as a regular user:",
        "",
        "  mkdir -p $HOME/.kube",
        "  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config",
        "  sudo chown $(id -u):$(id -g) $HOME/.kube/config",
        "",
        "Alternatively, if you are the root user, you can run:",
        "",
        "  export KUBECONFIG=/etc/kubernetes/admin.conf",
        "",
        "You should now deploy a pod network to the cluster.",
        "Run \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:",
        "  https://kubernetes.io/docs/concepts/cluster-administration/addons/",
        "",
        "You can now join any number of the control-plane node running the following command on each as root:",
        "",
        "  kubeadm join api.k8scluster.kubeinit.local:6443 --token 17ziyg.uxb3iy6rfh4ydrr8 \\",
        "\t--discovery-token-ca-cert-hash sha256:2b9dbb9c64d4868edf54d6eb6be607cdc1713b12e2b2155a4fa7c31059140d50 \\",
        "\t--control-plane --certificate-key a556f0b00416b503d4050e1d2e1b5048cb3d0030c38c3990a7d266633eb2cbdd",
        "",
        "Please note that the certificate-key gives access to cluster sensitive data, keep it secret!",
        "As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use",
        "\"kubeadm init phase upload-certs --upload-certs\" to reload certs afterward.",
        "",
        "Then you can join any number of worker nodes by running the following on each as root:",
        "",
        "kubeadm join api.k8scluster.kubeinit.local:6443 --token 17ziyg.uxb3iy6rfh4ydrr8 \\",
        "\t--discovery-token-ca-cert-hash sha256:2b9dbb9c64d4868edf54d6eb6be607cdc1713b12e2b2155a4fa7c31059140d50 "
    ]
}
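For readability, here is the script from the "cmd" field above with its shell escaping undone; it is the same command the task ran, not a new one:

    set -eo pipefail
    # Tolerate a node that was never initialized: the reset may fail harmlessly
    kubeadm reset -f || true
    # Bootstrap the first control-plane node; --upload-certs stores the CA
    # material in the kubeadm-certs Secret so more control-plane nodes can
    # join later with --certificate-key
    kubeadm init \
        --control-plane-endpoint "api.k8scluster.kubeinit.local:6443" \
        --upload-certs \
        --pod-network-cidr=10.244.0.0/16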
Field: changed
Value: False
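Per the stdout above, the control plane is up but no pod network is installed yet, and the worker join commands still have to be run. A minimal sketch of those follow-up steps on the controller, as root; the Flannel manifest URL is an assumption, chosen because Flannel's default network matches the 10.244.0.0/16 pod CIDR passed to kubeadm init, and any add-on from the linked addons page would do:

    # Use the admin kubeconfig written by kubeadm init
    export KUBECONFIG=/etc/kubernetes/admin.conf

    # Deploy a pod network (Flannel here; manifest URL is an assumption)
    kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

    # On each worker, run the kubeadm join command printed in the stdout above, then:
    kubectl get nodes -o wide   # nodes should report Ready once the CNI is up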