Getting Started with Kubernetes – Ubuntu 20.04

Kubernetes is a tool for orchestrating and managing Docker containers at scale on on-premises servers or in hybrid and cloud environments. Kubeadm is a tool shipped with Kubernetes to help users install a production-ready Kubernetes cluster by enforcing best practices.

Let's install Kubernetes on Ubuntu 20.04 with kubeadm and configure a cluster.

Kubernetes Architecture

Two types of servers are used to deploy a Kubernetes cluster:

  • Master: The Kubernetes master is where the control API calls for the cluster's pods, replication controllers, services, nodes, and other components are executed.
  • Node: A node is a machine that provides the runtime environment for containers. A set of container pods can span multiple nodes.

The minimum requirements for a workable Kubernetes setup are as follows (a quick pre-flight check is sketched after the list):

  • Memory: 2 GiB or more of RAM per machine
  • CPU: at least 2 CPUs on the control-plane machine
  • Internet connectivity to pull container images (a private registry can also be used)
  • Full network connectivity between all machines in the cluster – private or public
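
A quick, manual pre-flight check on each server covers most of this list (the worker hostname below is just the one from the lab table; substitute your own):

free -g                               # total RAM in GiB; should be 2 or more
nproc                                 # CPU count; 2 or more on the control-plane machine
ping -c 2 k8s-worker01.mylab.com      # basic connectivity between cluster machines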

Installing the Kubernetes Cluster

This lab setup has three servers. (The walkthrough assumes on-premises servers or VMs rather than a cloud environment.)

One control-plane machine and two nodes to run containerized workloads. You can add more nodes to suit your use case and load; for example, use three control-plane nodes for HA.

Server Type   Server Hostname            Specification
control       k8s-master01.mylab.com     4 GB RAM, 2 vCPUs
worker        k8s-worker01.mylab.com     4 GB RAM, 2 vCPUs
worker        k8s-worker02.mylab.com     4 GB RAM, 2 vCPUs

Step 1: Provision the Kubernetes Servers

Provision the servers that will be used for the Kubernetes deployment on Ubuntu 20.04. The provisioning process may differ depending on the virtualization or cloud environment you are using.

First, open the required ports in your security access group. ssh (22), http (80), and https (443) are assumed to be open already. A ufw sketch for opening the remaining ports follows the tables below.

Control-plane node(s)

Protocol   Direction   Port Range    Purpose                   Used By
TCP        Inbound     6443*         Kubernetes API server     All
TCP        Inbound     2379-2380     etcd server client API    kube-apiserver, etcd
TCP        Inbound     10250         kubelet API               Self, Control plane
TCP        Inbound     10251         kube-scheduler            Self
TCP        Inbound     10252         kube-controller-manager   Self

Worker node(s)

Protocol   Direction   Port Range     Purpose              Used By
TCP        Inbound     10250          kubelet API          Self, Control plane
TCP        Inbound     30000-32767    NodePort Services†   All
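
If you manage the host firewall with ufw rather than a cloud security group (an assumption; your environment may use different tooling), the rules above could be opened roughly like this:

# Control-plane node
sudo ufw allow 6443/tcp          # Kubernetes API server
sudo ufw allow 2379:2380/tcp     # etcd server client API
sudo ufw allow 10250/tcp         # kubelet API
sudo ufw allow 10251/tcp         # kube-scheduler
sudo ufw allow 10252/tcp         # kube-controller-manager

# Worker nodes
sudo ufw allow 10250/tcp         # kubelet API
sudo ufw allow 30000:32767/tcp   # NodePort Services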

Once the servers are ready, update them:

sudo apt update
sudo apt -y upgrade && sudo systemctl reboot

Step 2: Install kubelet, kubeadm, and kubectl

Once the servers have rebooted, add the Kubernetes repository for Ubuntu 20.04 on all of them:

sudo apt update
sudo apt -y install curl apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Then install the required packages:

sudo apt update
sudo apt -y install vim git curl wget kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Confirm the installation by checking the versions of kubectl and kubeadm:

$ kubectl version --client && kubeadm version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:49:29Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Step 3: Disable Swap

Turn off swap:

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
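
You can confirm swap is actually off before continuing:

free -h          # the Swap line should show 0B
swapon --show    # prints nothing when no swap is active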

Configure sysctl:

sudo modprobe overlay
sudo modprobe br_netfilter

sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
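
Optionally, verify that the kernel parameters were applied:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should report 1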

Step 4: Install a Container Runtime

To run containers in pods, Kubernetes uses a container runtime. The supported container runtimes are:

  • Docker
  • CRI-O
  • Containerd

Note: Choose and install only one runtime. I use containerd here.

Installing the Docker runtime:

# Add repo and Install packages
sudo apt update
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y containerd.io docker-ce docker-ce-cli

# Create required directories
sudo mkdir -p /etc/systemd/system/docker.service.d

# Create daemon json config file
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# Start and enable Services
sudo systemctl daemon-reload 
sudo systemctl restart docker
sudo systemctl enable docker
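
Because daemon.json above sets the systemd cgroup driver, it is worth confirming that Docker picked it up:

sudo docker info 2>/dev/null | grep -i "cgroup driver"   # expect: Cgroup Driver: systemd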

Installing CRI-O:

# Ensure you load modules
sudo modprobe overlay
sudo modprobe br_netfilter

# Set up required sysctl params
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload sysctl
sudo sysctl --system

# Add repo
. /etc/os-release
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/x${NAME}_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/x${NAME}_${VERSION_ID}/Release.key -O- | sudo apt-key add -
sudo apt update

# Install CRI-O
sudo apt install cri-o-1.17

# Start and enable Service
sudo systemctl daemon-reload
sudo systemctl start crio
sudo systemctl enable crio

Installing containerd:

# Configure persistent loading of modules
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

# Load at runtime
sudo modprobe overlay
sudo modprobe br_netfilter

# Ensure sysctl params are set
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload configs
sudo sysctl --system

# Install required packages
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates


# Add Docker repo
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Install containerd
sudo apt update
sudo apt install -y containerd.io

# Configure containerd and start service
sudo mkdir -p /etc/containerd
sudo su -
containerd config default > /etc/containerd/config.toml
exit

# restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd

To use the systemd cgroup driver, set SystemdCgroup = true under the runc options in /etc/containerd/config.toml, as shown below. When using kubeadm, you also need to configure the cgroup driver for the kubelet manually; a sketch of one way to do that follows the config excerpt.

sudo nano /etc/containerd/config.toml

#   Copyright 2018-2020 Docker Inc.

#   Licensed under the Apache License, Version 2.0 (the "License");
#   you may not use this file except in compliance with the License.
#   You may obtain a copy of the License at

#       http://www.apache.org/licenses/LICENSE-2.0

#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.

#disabled_plugins = ["cri"]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

#root = "/var/lib/containerd"
#state = "/run/containerd"
#subreaper = true
#oom_score = 0

[....]
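
One way to set the kubelet's cgroup driver with kubeadm is to pass a KubeletConfiguration at init time. The following is a minimal sketch only (the file name and kubernetesVersion are assumptions; adjust them to your environment), shown as an alternative to the plain flags used later in this guide:

# Sketch: kubeadm config that sets the kubelet cgroup driver to systemd.
cat <<EOF | sudo tee kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

# It would then be passed instead of individual flags:
# sudo kubeadm init --config kubeadm-config.yaml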

Step 5: Initialize the Master Node

Log in to the server that will be used as the master and make sure the br_netfilter module is loaded:

$ lsmod | grep br_netfilter
br_netfilter           24576  0
bridge                155648  1 br_netfilter

Enable the kubelet service:

sudo systemctl enable kubelet

Now initialize the machine that will run the control-plane components, which include etcd (the cluster database) and the API server.

Pull the container images:

$ sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.21.2
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.21.2
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.21.2
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.21.2
[config/images] Pulled k8s.gcr.io/pause:3.4.1
[config/images] Pulled k8s.gcr.io/etcd:3.4.13-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.0

Set a DNS name for the cluster endpoint, or add a record to the /etc/hosts file:

$ sudo vim /etc/hosts
172.16.11.1 k8s-cluster.mylab.com

Create the cluster:

sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --control-plane-endpoint=k8s-cluster.mylab.com

Note: If 192.168.0.0/16 is already in use within your network, pick a different pod network CIDR and substitute it for 192.168.0.0/16 in the command above.

Container runtime sockets:

Runtime      Path to Unix domain socket
Docker       /var/run/docker.sock
containerd   /run/containerd/containerd.sock
CRI-O        /var/run/crio/crio.sock

Optionally, you can pass the socket file for your runtime and the advertise address, depending on your setup.
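
For example, a run pinned to the containerd socket with an explicit advertise address might look like this (the IP is the lab endpoint address used earlier; treat both flags as optional, setup-dependent values):

sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --control-plane-endpoint=k8s-cluster.mylab.com \
  --cri-socket /run/containerd/containerd.sock \
  --apiserver-advertise-address=172.16.11.1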
My kubeadm init run produced the following output:

$ sudo kubeadm init \
>   --pod-network-cidr=172.16.0.0/16 \
>   --control-plane-endpoint=k8s-cluster.mylab.com
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-c k8sncp.bankware.global kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-c localhost] and IPs [192.168.11.1 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-c localhost] and IPs [192.168.11.1 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.002609 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-c as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-c as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 2aizuk.aaj4ulk4flo5q2ew
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-cluster.mylab.com:6443 --token 2aizuk.aaj4ulk4flo5q2ew \
        --discovery-token-ca-cert-hash sha256:1f3626d7cc5306463ece9158f06fd9ff8f9e134fd6091eed6c1c94bacaec7289 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-cluster.mylab.com:6443 --token 2aizuk.aaj4ulk4flo5q2ew \
        --discovery-token-ca-cert-hash sha256:1f3626d7cc5306463ece9158f06fd9ff8f9e134fd6091eed6c1c94bacaec7289

Configure kubectl using the commands from the output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster status:

$ kubectl cluster-info
Kubernetes master is running at https://k8s-cluster.mylab.com:6443
KubeDNS is running at https://k8s-cluster.mylab.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

You can add more master nodes using the command from the installation output:

kubeadm join k8s-cluster.mylab.com:6443 --token sr4l2l.2kvot0pfalh5o4ik \
    --discovery-token-ca-cert-hash sha256:c692fb047e15883b575bd6710779dc2c5af8073f7cab460abd181fd3ddb29a18 \
    --control-plane 

If you get the message "Unable to connect to the server: x509: certificate signed by unknown authority" along the way, try the following to resolve it:

$ export KUBECONFIG=/etc/kubernetes/admin.conf

Step 6: Install a Network Plugin on the Master

This guide uses Calico. You can choose any other supported network plugin.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

You should see output like the following:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created

Confirm that all pods are running:

$ watch kubectl get pods --all-namespaces
NAMESPACE     NAME                                                         READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-76d4774d89-nfqrr                     1/1     Running   0          2m52s
kube-system   calico-node-kpprr                                            1/1     Running   0          2m52s
kube-system   coredns-66bff467f8-9bxgm                                     1/1     Running   0          7m43s
kube-system   coredns-66bff467f8-jgwln                                     1/1     Running   0          7m43s
kube-system   etcd-k8s-master01.mylab.com                      1/1     Running   0          7m58s
kube-system   kube-apiserver-k8s-master01.mylab.com            1/1     Running   0          7m58s
kube-system   kube-controller-manager-k8s-master01.mylab.com   1/1     Running   0          7m58s
kube-system   kube-proxy-bt7ff                                             1/1     Running   0          7m43s
kube-system   kube-scheduler-k8s-master01.mylab.com            1/1     Running   0          7m58s

Confirm that the master node is ready:

$ kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
k8s-master01   Ready    master   64m   v1.18.3   135.181.28.113   <none>        Ubuntu 20.04 LTS   5.4.0-37-generic   docker://19.3.11

Step 7: Add Worker Nodes

With the control plane ready, you can add worker nodes to the cluster to run scheduled workloads.

If the endpoint address is not in DNS, add a record to /etc/hosts:

$ sudo vim /etc/hosts
172.29.20.5 k8s-cluster.mylab.com

The join command provided earlier is used to add worker nodes to the cluster:

kubeadm join k8s-cluster.mylab.com:6443 \
  --token sr4l2l.2kvot0pfalh5o4ik \
  --discovery-token-ca-cert-hash sha256:c692fb047e15883b575bd6710779dc2c5af8073f7cab460abd181fd3ddb29a18

Output:

[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run the command below on the control plane to verify that the nodes have joined the cluster:

$ kubectl get nodes
NAME                                 STATUS   ROLES    AGE   VERSION
k8s-master01.mylab.com   Ready    master   10m   v1.18.3
k8s-worker01.mylab.com   Ready    <none>   50s   v1.18.3
k8s-worker02.mylab.com   Ready    <none>   12s   v1.18.3

$ kubectl get nodes -o wide

If the join token has expired, refer to the guide on joining worker nodes; one quick way to get a fresh join command is sketched below.
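
# Run on the control-plane node; prints a worker join command with a new token.
sudo kubeadm token create --print-join-command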

Step 8: Deploy an Application to the Cluster

Deploy an application to verify that the cluster is working:

kubectl apply -f https://k8s.io/examples/pods/commands.yaml

Check that the pod started:

$ kubectl get pods
NAME           READY   STATUS      RESTARTS   AGE
command-demo   0/1     Completed   0          16s
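
The example pod runs to completion, so checking its log output is a simple way to confirm the workload actually executed:

kubectl logs command-demo    # prints the environment values echoed by the pod's command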

Step 9: Install the Kubernetes Dashboard (Optional)

The Kubernetes Dashboard can be used to deploy containerized applications to the Kubernetes cluster, troubleshoot those applications, and manage cluster resources.
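
A minimal installation sketch (the v2.3.1 manifest tag is an assumption; pick the Dashboard release that matches your cluster version):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

# Access it locally through the API server proxy.
kubectl proxy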
