Deploying Kubernetes 1.13.1 with kubeadm

I recently found time to revisit Kubernetes. Installation has become much simpler than it used to be; this article walks through deploying Kubernetes 1.13.1 with kubeadm.

Preparation

Environment overview

Three machines are used, one master and two nodes, with the following hostnames and IPs:

Hostname     IP address
k8s-master   172.20.6.116
k8s-node1    172.20.6.117
k8s-node2    172.20.6.118

System settings

1. Set the hostname on all three machines

# hostnamectl set-hostname XXXX

2. Configure local name resolution

Edit the hosts file on all three machines and add the following entries:

# vim /etc/hosts

172.20.6.116 k8s-master
172.20.6.117 k8s-node1
172.20.6.118 k8s-node2
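The three entries above can also be pushed out with a small idempotent snippet. HOSTS_FILE here is a stand-in pointing at a temp file; on the real machines it would be /etc/hosts:

```shell
# Append each cluster entry only if it is not already present,
# so the snippet is safe to run more than once.
# HOSTS_FILE defaults to a temp file; point it at /etc/hosts
# on the actual machines.
HOSTS_FILE=${HOSTS_FILE:-$(mktemp)}
for entry in "172.20.6.116 k8s-master" \
             "172.20.6.117 k8s-node1" \
             "172.20.6.118 k8s-node2"; do
    grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```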

3. Disable the firewall

# systemctl disable firewalld

4. Disable SELinux

# sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config

5. Disable NetworkManager

# systemctl disable NetworkManager

6. Configure time synchronization

Install chrony on all machines:

# yum install -y chrony

Configure the NTP source (172.50.10.16 is my local NTP server; Alibaba Cloud's public server time1.aliyun.com can be used instead):

# vim /etc/chrony.conf

#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 172.50.10.16 iburst

Enable the service and synchronize the time:

# systemctl enable chronyd && systemctl restart chronyd
# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 172.50.10.16 3 6 17 13 +103us[ +12us] +/- 24ms

The asterisk indicates the source has synchronized successfully.
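That `^*` marker can also be checked for in a script. The SAMPLE line below is a stand-in for real `chronyc sources` output; on a live host the same check is `chronyc sources | grep -q '^\^\*'`:

```shell
# '^*' marks the currently selected, synchronised source in
# `chronyc sources` output. SAMPLE stands in for the real output.
SAMPLE='^* 172.50.10.16      3   6    17    13   +103us[ +12us] +/-   24ms'
if printf '%s\n' "$SAMPLE" | grep -q '^\^\*'; then
    status=synced
else
    status=unsynced
fi
echo "$status"
```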

7. Reboot all hosts

# reboot

Deploying Kubernetes

Install docker-ce (all hosts)

1. Download the docker-ce repo file

# wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo

2. Switch the docker-ce repo to a mirror inside China

# sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo

3. Install docker-ce

Kubernetes 1.13.1 has only been validated against docker-ce versions up to 18.06, so install a pinned version:

# yum install docker-ce-18.06.1.ce-3.el7

4. Start docker and enable it at boot

# systemctl enable docker.service && systemctl start docker.service

Install kubeadm, kubelet and kubectl (all hosts)

1. Configure the kubeadm repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install kubeadm, kubelet and kubectl

# yum install -y kubelet kubeadm kubectl

(This pulls the latest packages; to reproduce this article's exact version, pin them: yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1.)

3. Start kubelet and enable it at boot (kubelet will restart in a loop until kubeadm init runs; that is expected)

# systemctl enable kubelet && systemctl start kubelet

4. Adjust kernel parameters

# cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# sysctl --system
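Two prerequisites the steps above leave implicit: kubeadm's preflight checks fail while swap is enabled, and the net.bridge.* sysctl keys only exist once the br_netfilter kernel module is loaded. A sketch of the extra commands (run as root on all hosts):

```shell
# Not shown in the steps above: disable swap (kubeadm refuses to
# initialize with swap on) and load br_netfilter (without it the
# net.bridge.* keys in /etc/sysctl.d/k8s.conf do not exist).
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab              # keep swap off after reboot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system
```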

Initialize the cluster (master node)

1. Load the images

The k8s.gcr.io images cannot be pulled directly from inside China, so I prepared an offline bundle that must be loaded on every node. Download: kube.tar

# docker load -i kube.tar

Check the imported images:

# docker images

REPOSITORY                           TAG             IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.13.1         fdb321fd30a0   2 weeks ago     80.2MB
k8s.gcr.io/kube-controller-manager   v1.13.1         26e6f1db2a52   2 weeks ago     146MB
k8s.gcr.io/kube-apiserver            v1.13.1         40a63db91ef8   2 weeks ago     181MB
k8s.gcr.io/kube-scheduler            v1.13.1         ab81d7360408   2 weeks ago     79.6MB
k8s.gcr.io/coredns                   1.2.6           f59dcacceff4   7 weeks ago     40MB
k8s.gcr.io/etcd                      3.2.24          3cab8e1b9802   3 months ago    220MB
quay.io/coreos/flannel               v0.10.0-amd64   f0fad859c909   11 months ago   44.6MB
k8s.gcr.io/pause                     3.1             da86e6ba6ca1   12 months ago   742kB

2. Initialize the master node

# kubeadm init --pod-network-cidr=10.244.0.0/16

**********

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 172.20.6.116:6443 --token lyycbq.uogsx4a9h7ponmg5 --discovery-token-ca-cert-hash sha256:60d0338c4927907cf56d9697bcdb261cd2fe2dac0f36a9901b254253516177ed

The master initialized successfully. Be sure to save the kubeadm join command at the end of the output.
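If the join command is lost or the token expires, a new one can be printed on the master with `kubeadm token create --print-join-command`. The --discovery-token-ca-cert-hash value itself is just a SHA-256 of the cluster CA's public key; the sketch below derives it from a throwaway self-signed certificate standing in for /etc/kubernetes/pki/ca.crt:

```shell
# Derive the sha256 hash that kubeadm join expects from a CA cert.
# A throwaway self-signed cert stands in for /etc/kubernetes/pki/ca.crt.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=kubernetes-ca" \
    -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null

# The same pipeline run against the real ca.crt reproduces the value
# printed by kubeadm init: extract the public key, DER-encode it, hash it.
hash=$(openssl x509 -pubkey -in "$tmp/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print $NF}')
echo "sha256:$hash"
rm -rf "$tmp"
```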

3. Load the kubeconfig environment variable

# export KUBECONFIG=/etc/kubernetes/admin.conf
# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

4. Install a network add-on

Pods on different hosts need a pod network add-on to communicate; Flannel is used here (its default configuration matches the 10.244.0.0/16 CIDR passed to kubeadm init):

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

5. Check the cluster status

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-lhb7w             1/1     Running   0          95m
kube-system   coredns-86c58d9df4-zprwr             1/1     Running   0          95m
kube-system   etcd-k8s-master                      1/1     Running   0          100m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          100m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          100m
kube-system   kube-flannel-ds-amd64-jjdmz          1/1     Running   0          91m
kube-system   kube-proxy-lfhbs                     1/1     Running   0          101m
kube-system   kube-scheduler-k8s-master            1/1     Running   0          100m

Confirm that the CoreDNS pods are in Running state.

Joining the cluster (worker nodes)

1. Join the nodes to the cluster

On k8s-node1 and k8s-node2, run the join command saved from the init output:

# kubeadm join 172.20.6.116:6443 --token lyycbq.uogsx4a9h7ponmg5 --discovery-token-ca-cert-hash sha256:60d0338c4927907cf56d9697bcdb261cd2fe2dac0f36a9901b254253516177ed

******

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

2. Check the cluster

# kubectl get nodes

NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   2m23s   v1.13.1
k8s-node1    Ready    <none>   39s     v1.13.1
k8s-node2    Ready    <none>   16s     v1.13.1

Both nodes have joined and are in the Ready state.
That completes the cluster setup; it is ready for use.

Configuring the dashboard

Service setup

There is no web UI by default; the dashboard can be deployed with the following steps.

1. Load the dashboard-ui image (all nodes)

Download: dashboard-ui.tar

# docker load -i dashboard-ui.tar

2. Download the manifest

# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml

Edit kubernetes-dashboard.yaml and add type: NodePort to expose the Dashboard service outside the cluster. Only the type: NodePort line is added; no other configuration changes:

spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443

3. Deploy the Dashboard UI

# kubectl create -f  kubernetes-dashboard.yaml

4. Check the dashboard service status

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-8zhr5               1/1     Running   0          2d22h
kube-system   coredns-86c58d9df4-jqn7r               1/1     Running   0          2d22h
kube-system   etcd-k8s-master                        1/1     Running   0          2d22h
kube-system   kube-apiserver-k8s-master              1/1     Running   0          2d22h
kube-system   kube-controller-manager-k8s-master     1/1     Running   0          2d22h
kube-system   kube-flannel-ds-amd64-krf6t            1/1     Running   0          2d22h
kube-system   kube-flannel-ds-amd64-tkftg            1/1     Running   0          2d22h
kube-system   kube-flannel-ds-amd64-zxzld            1/1     Running   0          2d22h
kube-system   kube-proxy-5znt7                       1/1     Running   0          2d22h
kube-system   kube-proxy-gl9sl                       1/1     Running   0          2d22h
kube-system   kube-proxy-q7j7m                       1/1     Running   0          2d22h
kube-system   kube-scheduler-k8s-master              1/1     Running   0          2d22h
kube-system   kubernetes-dashboard-57df4db6b-pghk8   1/1     Running   0          19h

kubernetes-dashboard is in Running state.

Creating an admin user

1. Create the service account and cluster role binding manifest

Create dashboard-adminuser.yaml with the following content:

# vim dashboard-adminuser.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

2. Create the user and role binding

# kubectl apply -f dashboard-adminuser.yaml

3. Retrieve the token

# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin-token | awk '{print $1}')

Name: kubernetes-dashboard-admin-token-xdrs6
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard-admin
kubernetes.io/service-account.uid: 14082a92-0e3c-11e9-ac3f-fa163e25b09e

Type: kubernetes.io/service-account-token

Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi14ZHJzNiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjE0MDgyYTkyLTBlM2MtMTFlOS1hYzNmLWZhMTYzZTI1YjA5ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.K9Z5NOY2MusdXhiFx6NdA42Jpo1cCChN16CKsdsw-9eh76p1O4kd4u22_ZgWzhRwarnllURXieDxEGpRmCJaBOmMo_xFmlCX6fxFQ-7bWcXuWWpi3ay5qSOPsv_7EyvCvkFSFVfgMnppu3dvEhD5NoeSjnrkHshHxFFnhZc7ePIUVlY9KvMVWv7UDkhinJKy5HjLu_ejwy2jxmSNwZ-g9wnLVzw3-XObmUUL8nTRdE8KehKtpdo6Kd-BJlmfTNUPiSGxrcU1sW1hzwJLsEfTix4oQOhdCh2-z37Gr_1J7-bnf8F5_U90okH2nf1it2brmIM3JbzuQ8sWERx66gEkKQ
ca.crt: 1025 bytes
namespace: 11 bytes

Save the token value.
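The token is a JWT: three base64url-encoded segments separated by dots, where the middle segment is JSON identifying the service account. The sketch below builds a minimal stand-in token (the real one comes from the describe command above) and decodes that segment:

```shell
# A service-account token is a JWT: header.payload.signature.
# Build a minimal stand-in and decode its payload; the same decode
# works on the real token saved above.
b64url() { base64 | tr -d '=\n' | tr '/+' '_-'; }
header=$(printf '{"alg":"RS256"}' | b64url)
payload=$(printf '{"sub":"system:serviceaccount:kube-system:kubernetes-dashboard-admin"}' | b64url)
TOKEN="$header.$payload.signature"

# Extract the middle segment, undo the URL-safe alphabet, restore padding.
seg=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="$seg="; done
decoded=$(printf '%s' "$seg" | base64 -d)
echo "$decoded"
```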

Deploying the Metrics Server

Heapster is removed as of Kubernetes 1.13 (https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md); metrics-server together with Prometheus is the recommended replacement.

1. Load the metrics-server image

Download: metrics-server.tar

# docker load -i metrics-server.tar

2. Fetch the manifests

# mkdir metrics-server
# cd metrics-server
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/aggregated-metrics-reader.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/auth-delegator.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/auth-reader.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/metrics-apiservice.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/metrics-server-deployment.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/metrics-server-service.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/resource-reader.yaml

Edit metrics-server-deployment.yaml and change the default image pull policy to IfNotPresent:

# vim metrics-server-deployment.yaml

containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  #imagePullPolicy: Always
  imagePullPolicy: IfNotPresent
  volumeMounts:
  - name: tmp-dir
    mountPath: /tmp

Also configure metrics-server to connect to kubelets by IP and skip certificate verification:

# vim metrics-server-deployment.yaml

containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  imagePullPolicy: IfNotPresent
  command:
  - /metrics-server
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
  volumeMounts:
  - name: tmp-dir
    mountPath: /tmp

3. Deploy

# kubectl apply -f ./

4. View the metrics

# kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master   196m         4%     1101Mi          14%
k8s-node1    44m          1%     2426Mi          31%
k8s-node2    38m          0%     2198Mi          28%

# kubectl top pod --all-namespaces
NAMESPACE     NAME                                   CPU(cores)   MEMORY(bytes)
kube-system   coredns-86c58d9df4-8zhr5               3m           13Mi
kube-system   coredns-86c58d9df4-jqn7r               2m           13Mi
kube-system   etcd-k8s-master                        17m          76Mi
kube-system   kube-apiserver-k8s-master              30m          402Mi
kube-system   kube-controller-manager-k8s-master     36m          63Mi
kube-system   kube-flannel-ds-amd64-krf6t            2m           13Mi
kube-system   kube-flannel-ds-amd64-tkftg            3m           15Mi
kube-system   kube-flannel-ds-amd64-zxzld            2m           12Mi
kube-system   kube-proxy-5znt7                       2m           14Mi
kube-system   kube-proxy-gl9sl                       2m           18Mi
kube-system   kube-proxy-q7j7m                       2m           16Mi
kube-system   kube-scheduler-k8s-master              9m           16Mi
kube-system   kubernetes-dashboard-57df4db6b-wtmkt   1m           16Mi
kube-system   metrics-server-879f5ff6d-9q5xw         1m           13Mi

Accessing the Dashboard

1. Find the dashboard service port

# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   3d19h
kubernetes-dashboard   NodePort    10.109.196.92   <none>        443:30678/TCP   17h
metrics-server         ClusterIP   10.109.23.19    <none>        443/TCP         6m16s

The node port is 30678.
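The port can also be pulled out programmatically; kubectl prints it directly with `kubectl get svc -n kube-system kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'`. The sketch below instead parses the PORT(S) column, using a stand-in line for the svc output:

```shell
# Extract the NodePort from the PORT(S) column. The line below stands in
# for `kubectl get svc -n kube-system | grep kubernetes-dashboard`.
line='kubernetes-dashboard NodePort 10.109.196.92 <none> 443:30678/TCP 17h'
nodeport=$(printf '%s\n' "$line" | sed -n 's|.*:\([0-9][0-9]*\)/TCP.*|\1|p')
echo "$nodeport"
```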

2. Open the dashboard

Browse to https://172.20.6.116:30678, select Token, and paste the token saved earlier to log in.
(screenshots: login page, dashboard)


References

Deploying Kubernetes 1.10 with kubeadm
Creating a single master cluster with kubeadm
Installing a Kubernetes 1.12.1 cluster with kubeadm
Rapid Kubernetes deployment with kubeadm (1.13.1, HA)
