====== Procedure for Installing K8s with rke ======

  * Environment: Ubuntu 20.04.2 LTS x86_64
  * Two nodes, IPs: 10.20.0.35 / 10.20.0.37

===== Preparation =====

  * Update apt and install the required packages
<code bash>
sudo apt update
sudo apt-get install unzip curl software-properties-common snap -y
</code>
  * Install Docker 19.03.14
<code bash>
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get install docker-ce=5:19.03.14~3-0~ubuntu-focal docker-ce-cli=5:19.03.14~3-0~ubuntu-focal containerd.io -y
</code>
  * Install kubectl
<code bash>
sudo curl -LO https://dl.k8s.io/release/v1.18.17/bin/linux/amd64/kubectl
sudo chmod a+x kubectl
sudo mv ./kubectl /usr/local/bin/
mkdir -p ~/.kube/
</code>
  * Turn off swap (to keep it off after a reboot, also comment out the swap entry in /etc/fstab)
<code bash>
sudo swapoff -a
</code>

===== Create and Configure the rkeuser Account on Every Node =====

  * Create the rkeuser account on all nodes
<code bash>
sudo useradd -s /bin/bash -d /home/rkeuser/ -m -G sudo rkeuser
sudo passwd rkeuser
sudo usermod -aG docker rkeuser
</code>
  * Set up passwordless SSH from the master node (10.20.0.35)
    * Generate a key pair on the master node (10.20.0.35)
<code bash>
ssh-keygen -t rsa -C 'tryweb@ichiayi.com'
</code>
    * Copy the public key to the rkeuser account on every node
<code bash>
ssh-copy-id rkeuser@10.20.0.35
ssh-copy-id rkeuser@10.20.0.37
</code>
  * Confirm that rkeuser can run docker commands on every node
<code bash>
ssh rkeuser@10.20.0.35 docker ps
</code>
  * If a firewall is enabled, open the required ports
    * 6443 - Kubernetes API server
    * 2379 - etcd
  * Enable TCP forwarding in the SSH server configuration
<code bash>
sudo vi /etc/ssh/sshd_config
  :
  AllowTcpForwarding yes
  :
sudo systemctl reload sshd
</code>
  * A quick per-node readiness check is sketched right after this section.
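Before moving on to the rke install, it can help to confirm that every node is actually ready. The script below is only a sketch and is not part of the original procedure: it assumes the two node IPs and the rkeuser account prepared above, and checks Docker access over SSH, that swap is off, and the AllowTcpForwarding setting.

<code bash>
#!/usr/bin/env bash
# Minimal readiness check for the RKE nodes prepared above (a sketch, not part
# of the original procedure). Assumes passwordless SSH for rkeuser is in place.
set -euo pipefail

NODES="10.20.0.35 10.20.0.37"   # node IPs from this guide

for node in ${NODES}; do
  echo "=== ${node} ==="
  # rkeuser must be able to talk to the Docker daemon (i.e. be in the docker group)
  ssh "rkeuser@${node}" "docker version --format '{{.Server.Version}}'"
  # swap has to be off for kubelet; no output below means no active swap
  ssh "rkeuser@${node}" "swapon --show"
  # TCP forwarding must be allowed for rke's SSH tunnel (the default is yes)
  ssh "rkeuser@${node}" "grep -i '^AllowTcpForwarding' /etc/ssh/sshd_config || echo 'AllowTcpForwarding not set (default: yes)'"
done
</code>

If any of these checks fails, fix it before installing rke, since rke reaches each node the same way: SSH as rkeuser, then the Docker socket.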
===== Install rke and Build the K8s Cluster =====

  * Reference - https://github.com/rancher/rke/releases/
  * Download rke 1.2.7
<code bash>
wget https://github.com/rancher/rke/releases/download/v1.2.7/rke_linux-amd64
sudo mv rke_linux-amd64 /usr/local/bin/rke
sudo chmod +x /usr/local/bin/rke
rke --version
</code>
  * Generate the rke cluster configuration file
<code bash>
rke config --name cluster.yml
</code>
  * ++Show the prompts and answers|
<code>
localadmin@Cori-test3:~$ rke config --name cluster.yml
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]:
[+] Number of Hosts [1]: 2
[+] SSH Address of host (1) [none]: 10.20.0.35
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (10.20.0.35) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (10.20.0.35) [none]:
[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~/.ssh/id_rsa
[+] SSH User of host (10.20.0.35) [ubuntu]: rkeuser
[+] Is host (10.20.0.35) a Control Plane host (y/n)? [y]:
[+] Is host (10.20.0.35) a Worker host (y/n)? [n]: y
[+] Is host (10.20.0.35) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (10.20.0.35) [none]:
[+] Internal IP of host (10.20.0.35) [none]: 10.20.0.35
[+] Docker socket path on host (10.20.0.35) [/var/run/docker.sock]:
[+] SSH Address of host (2) [none]: 10.20.0.37
[+] SSH Port of host (2) [22]:
[+] SSH Private Key Path of host (10.20.0.37) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (10.20.0.37) [none]:
[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~/.ssh/id_rsa
[+] SSH User of host (10.20.0.37) [ubuntu]: rkeuser
[+] Is host (10.20.0.37) a Control Plane host (y/n)? [y]: n
[+] Is host (10.20.0.37) a Worker host (y/n)? [n]: y
[+] Is host (10.20.0.37) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (10.20.0.37) [none]:
[+] Internal IP of host (10.20.0.37) [none]: 10.20.0.37
[+] Docker socket path on host (10.20.0.37) [/var/run/docker.sock]:
[+] Network Plugin Type (flannel, calico, weave, canal, aci) [canal]: calico
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.20.5-rancher1]: rancher/hyperkube:v1.18.17-rancher1
[+] Cluster domain [cluster.local]: iiidevops-k8s
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]:
</code>
++
  * Review and edit the contents of cluster.yml
<code yaml>
nodes:
- address: 10.20.0.35
  port: "22"
  internal_address: 10.20.0.35
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: rkeuser
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.20.0.37
  port: "22"
  internal_address: 10.20.0.37
  role:
  - worker
  hostname_override: ""
  user: rkeuser
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
services:
  etcd:
    image: ""
    :
    :
</code>
  * Run the installation
<code bash>
rke up --config cluster.yml
</code>
A number of images have to be pulled, so it takes a while to finish. At the end you should see messages similar to the following:
<code>
:
INFO[0378] [addons] Executing deploy job rke-ingress-controller
INFO[0405] [ingress] ingress controller nginx deployed successfully
INFO[0405] [addons] Setting up user addons
INFO[0405] [addons] no user addons defined
INFO[0405] Finished building Kubernetes cluster successfully
</code>
  * Back up the generated files
<code>
$ ls -lt
total 4476
-rw-r----- 1 localadmin localadmin 105805 Apr  7 19:18 cluster.rkestate
-rw-r----- 1 localadmin localadmin   5381 Apr  7 19:13 kube_config_cluster.yml
-rw-r----- 1 localadmin localadmin   5653 Apr  7 19:12 cluster.yml
</code>
  * Copy the kubeconfig file and verify it
<code bash>
cp kube_config_cluster.yml ~/.kube/config
kubectl get nodes
</code>
If everything is fine, you should see output similar to this:
<code>
$ kubectl get node
NAME         STATUS   ROLES                      AGE     VERSION
10.20.0.35   Ready    controlplane,etcd,worker   7m52s   v1.18.17
10.20.0.37   Ready    worker                     7m47s   v1.18.17
</code>

===== Adding and Removing K8s Nodes =====

  * Simply edit the node entries in cluster.yml and run the following command (a drain-then-update sketch for node removal is included at the end of this page):
<code bash>
rke up --update-only --config cluster.yml
</code>
  * Example: adding 10.20.0.36
    * All of the node preparation steps above must be carried out on the new node first
    * Edit cluster.yml
<code yaml>
nodes:
- address: 10.20.0.35
  port: "22"
  internal_address: 10.20.0.35
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: rkeuser
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.20.0.36
  port: "22"
  internal_address: 10.20.0.36
  role:
  - worker
  hostname_override: ""
  user: rkeuser
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.20.0.37
  port: "22"
  internal_address: 10.20.0.37
  role:
  - worker
  hostname_override: ""
  user: rkeuser
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
services:
  etcd:
    :
    :
</code>
    * Run the update
<code bash>
rke up --update-only --config cluster.yml
</code>

===== Tearing Down rke (Removing the K8s Cluster) =====

  * Running rke remove removes the K8s cluster
<code bash>
rke remove --config cluster.yml
</code>
  * After the command completes, the rancher service containers that were already running keep running; rebooting the nodes resolves this.

===== Reference Links =====

  * https://www.mdeditor.tw/pl/glor/zh-tw
  * https://rancher.com/docs/rke/latest/en/managing-clusters/

{{tag>Rancher RKE K8s}}
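The node-removal sketch referenced above: the notes only cover editing cluster.yml and re-running rke up --update-only, but for a worker node it is usually gentler to evacuate its pods first. This is a sketch rather than part of the original procedure; it assumes kubectl v1.18 (hence the --delete-local-data flag), the kubeconfig copied to ~/.kube/config earlier, and uses 10.20.0.36 purely as the example node from the section above.

<code bash>
#!/usr/bin/env bash
# Sketch: gracefully remove a worker node before updating the cluster with rke.
# Assumes ~/.kube/config points at this cluster and cluster.yml is in the
# current directory. The node name 10.20.0.36 is just the example from above.
set -euo pipefail

NODE="10.20.0.36"

# Stop new pods from being scheduled on the node, then evict the existing ones.
# --delete-local-data is the flag name in kubectl v1.18, as used in this guide.
kubectl cordon "${NODE}"
kubectl drain "${NODE}" --ignore-daemonsets --delete-local-data

# Now remove the node's entry from cluster.yml (by hand), then let rke
# reconcile the cluster against the edited file.
rke up --update-only --config cluster.yml

# The node should no longer be listed.
kubectl get nodes
</code>

The drain step is standard Kubernetes practice rather than an RKE requirement; rke up will drop the node from the cluster either way, draining just moves the workloads off first.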