
A Detailed Tutorial on Installing Kubernetes 1.9 from Binaries on CentOS 7


Because Google services cannot be reached reliably from mainland China, installing Kubernetes from binary files is the recommended route; the process also deepens your understanding of each component and its configuration parameters.

Overview of the components used

K8s: short for Kubernetes (k, then 8 letters, then s).
A Kubernetes cluster has two main kinds of nodes: master nodes and minion (worker) nodes.
Minion nodes are where the Docker containers actually run; they interact with the Docker daemon on the node and also provide proxying.
The master node exposes the cluster-management API and operates the cluster by talking to the minion nodes.
apiserver: the entry point through which users interact with the cluster. It wraps create/read/update/delete operations on the core objects behind a RESTful API, persisting them in etcd and keeping them consistent.
scheduler: schedules and manages cluster resources; for example, when a pod exits abnormally and must be re-placed, the scheduler uses its scheduling algorithm to find the most suitable node.
controller-manager: chiefly ensures that the replica count declared by a replicationController matches the number of pods actually running, and keeps the service-to-pod mapping current.
kubelet: runs on each minion node and talks to the local Docker daemon, e.g. starting and stopping containers and monitoring their state.
proxy: runs on each minion node and provides the proxying for pods. It periodically fetches service information from etcd and turns it into iptables rules that forward traffic to the node hosting the target pod (the earliest versions forwarded traffic in the proxy process itself, which was slow).
etcd: the key-value store that holds all of Kubernetes' state.
flannel: an overlay-network tool the CoreOS team designed for Kubernetes; it is downloaded and deployed separately. When Docker starts, it picks an IP address for talking to its containers; left unmanaged, that address may be the same on every machine and is reachable only locally, so containers on different hosts cannot reach one another. Flannel re-plans IP allocation for every node in the cluster so that containers on different nodes get non-overlapping addresses in one shared internal network and can talk to each other directly over those internal IPs.
kube-dns: assigns subdomain names to Kubernetes services so they can be reached by name inside the cluster. kube-dns normally gives each service an A record of the form "<service>.<namespace>.svc.cluster.local" that resolves to the service's cluster IP. In practice, a service in the default namespace can be reached as "<service>" alone, and a service in another namespace as "<service>.<namespace>" (see the lookup sketch after this list).

kube-router: replaces kube-proxy. This tutorial uses kube-router with LVS for service load balancing, which is faster and more stable.
kube-dashboard: the official Kubernetes web UI.
core-dns: replaces kube-dns and is more stable.
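As a quick illustration of the naming scheme (a sketch with a hypothetical service "web" in namespace "demo"; assumes the busybox image can be pulled):

# resolve the full A record from a throwaway pod
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup web.demo.svc.cluster.local
# from a pod inside the same namespace, plain "nslookup web" also resolves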

Deployment overview

Nodes

ip              role          hostname       installed services
192.168.31.184  master        k8sm1          flanneld, kube-apiserver, kube-controller-manager, kube-scheduler
192.168.31.185  node1, etcd2  k8sn1, etcd2   etcd, flanneld, docker, kubelet
192.168.31.186  node2, etcd3  k8sn2, etcd3   etcd, flanneld, docker, kubelet
192.168.31.187  etcd1         etcd1          etcd, flanneld
Network layout

name              ip block
cluster network   172.20.0.0/16
service network   172.21.0.0/16
physical network  192.168.31.0/24

Resource versions and configuration

name                 value                                              notes
OS                   CentOS 7.4                                         kernel upgraded to 4.14.7-1.el7.elrepo.x86_64
k8s version          1.9
component downloads  https://pan.baidu.com/s/1mj8YnEw (password: wip6)  Baidu pan
yaml downloads       https://pan.baidu.com/s/1ggp5Cgb (password: qfpa)
docker               17.12.0-ce
Storage Driver       overlay2
Cgroup Driver        systemd
etcd                 see the Baidu pan package
flanneld             see the Baidu pan package
dashboard            1.8
os user              root

References and troubleshooting

Describe a given pod (substitute your own pod name):
kubectl describe po -n kube-system kubernetes-dashboard-545f54df97-g67sv

Check the error logs

journalctl -xe

tail -f /var/log/messages

For resolving various permission / access-denied errors, see the references below.

 

This article draws on the following installation guides:

https://www.kubernetes.org.cn/3336.html

http://blog.csdn.net/john_f_lau/article/details/78217490

http://blog.csdn.net/john_f_lau/article/details/78249510

http://blog.csdn.net/john_f_lau/article/details/78216146

 

Installation

Preparing the node hosts

Install four virtual machines with the latest CentOS 7. Three VMs also work; in that case, simply install etcd on the master node as well.
Edit the hosts file on every host:

vi /etc/hosts

On the non-master hosts, add the following entries:
192.168.31.184 k8sm1
192.168.31.185 k8sn1
192.168.31.186 k8sn2
192.168.31.187 etcd1
192.168.31.185 etcd2
192.168.31.186 etcd3

On the master host, add the following entries:
127.0.0.1 k8sm1
192.168.31.185 k8sn1
192.168.31.186 k8sn2
192.168.31.187 etcd1
192.168.31.185 etcd2
192.168.31.186 etcd3

To install and operate everything from one place, I use the master host 192.168.31.184 as the control host; the other hosts grant it passwordless SSH access.
First, generate a key pair on the master:
ssh-keygen -t rsa
The keys are written to /root/.ssh/
Create the /root/.ssh/ directory on each of the other hosts.

Then run the following on the master to push the public key to every host:
cd /root/.ssh/
for item in {k8sn1,k8sn2,etcd1};do
scp id_rsa.pub ${item}:/root/.ssh/authorized_keys
done
You will be asked for each host's password this first time.
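Equivalently, ssh-copy-id appends the key and sets the permissions in one step (a sketch; assumes password authentication is still enabled):
for item in {k8sn1,k8sn2,etcd1};do
ssh-copy-id root@${item}
done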

Disable the firewall on all hosts:
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl stop iptables.service
systemctl disable iptables.service

Disable SELinux on all hosts:
vi /etc/selinux/config
SELINUX=disabled
Run setenforce 0 to disable it immediately, or save the file and reboot.

 

Disable swap on all hosts:
swapoff -a
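Note that swapoff -a only lasts until the next reboot; to make it permanent you would also comment out the swap entry in /etc/fstab, e.g. (a sketch, not part of the original steps):
sed -i '/ swap / s/^/#/' /etc/fstab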

 

Preparing the installation packages
Download the packages from the Baidu pan share and upload them to /root on the master node.
They are:
etcd-v3.2.11-linux-amd64.tar.gz
flannel-v0.9.0-linux-amd64.tar.gz
kubernetes-server-linux-amd64.tar.gz
The other kube* packages are not needed; everything is already included in kubernetes-server-linux-amd64.tar.gz.

Upgrading the kernel with yum
Nodes that run Docker need a 4.x kernel to support the overlay2 storage driver.
Check the current kernel on each host:
uname -sr
Run the following on every host.
Add the third-party repository that provides newer kernels:
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Install the latest mainline kernel:
yum --enablerepo=elrepo-kernel install kernel-ml -y
List the kernel boot entries in their default order:
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
The output looks like:
CentOS Linux (3.10.0-693.11.1.el7.x86_64) 7 (Core)
CentOS Linux (4.14.7-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-693.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-b449d39284dc451aa298a30920e5dcfc) 7 (Core)
The entries are indexed 0, 1, 2, 3 from the top; every machine differs, so pick the index of the new kernel carefully.
Set the default boot entry by index:
grub2-set-default 1
Reboot:
reboot
Verify the new kernel:
uname -a

Installing the software

Install ipvsadm on all nodes:
yum install ipvsadm -y

Install Docker on k8sn1 (192.168.31.185) and k8sn2 (192.168.31.186):
curl -sSL https://get.docker.com/ | sh

On k8sn1 and k8sn2, set the Docker storage driver to overlay2; /etc/docker/daemon.json should contain:
{
"storage-driver": "overlay2"
}
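To confirm the storage driver takes effect after restarting Docker (docker info reports it as a standard field; the systemd cgroup driver is only applied later by the docker.service unit we distribute):
systemctl restart docker
docker info | grep 'Storage Driver'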

All set; now switch back to the master host.

Unpack the installation files
cd /root
tar xvf kubernetes-server-linux-amd64.tar.gz && tar xvf etcd-v3.2.11-linux-amd64.tar.gz && tar xvf flannel-v0.9.0-linux-amd64.tar.gz

Create directories to sort the binaries needed by the node, master, and etcd hosts; this makes the later edit/copy steps easier:
mkdir -p /root/kubernetes/server/bin/{node,master,etcd}
Copy the binaries into the sorted directories:
cp -r /root/kubernetes/server/bin/kubelet /root/kubernetes/server/bin/node/
cp -r /root/mk-docker-opts.sh /root/kubernetes/server/bin/node/
cp -r /root/flanneld /root/kubernetes/server/bin/node/
cp -r /root/kubernetes/server/bin/kube-* /root/kubernetes/server/bin/master/
cp -r /root/kubernetes/server/bin/kubelet /root/kubernetes/server/bin/master/
cp -r /root/kubernetes/server/bin/kubectl /root/kubernetes/server/bin/master/
cp -r /root/etcd-v3.2.11-linux-amd64/etcd* /root/kubernetes/server/bin/etcd/
Push the binaries out to each host:
for node in k8sm1 k8sn1 k8sn2 etcd1;do
rsync -avzP /root/kubernetes/server/bin/node/ ${node}:/usr/local/bin/
done

for master in k8sm1;do
rsync -avzP /root/kubernetes/server/bin/master/ ${master}:/usr/local/bin/
done

for etcd in etcd1 etcd2 etcd3;do
rsync -avzP /root/kubernetes/server/bin/etcd/ ${etcd}:/usr/local/bin/
done
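An optional sanity check that the binaries landed and are executable (kubelet was pushed in both the node and master sets):
for node in {k8sm1,k8sn1,k8sn2};do
ssh ${node} "/usr/local/bin/kubelet --version"
done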

Creating the service files
Create directories to sort the service files:
mkdir -p /root/kubernetes/server/bin/{node-service,master-service,etcd-service,docker-service,ssl}

Create the services needed by the node hosts
# docker.service
cat >/root/kubernetes/server/bin/node-service/docker.service <<'HERE'
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
Environment="PATH=/usr/local/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd --log-level=error $DOCKER_NETWORK_OPTIONS \
--exec-opt native.cgroupdriver=systemd
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
HERE

#kubelet.service
cat >/root/kubernetes/server/bin/node-service/kubelet.service <<'HERE'
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--address=192.168.31.185 \
--hostname-override=192.168.31.185 \
--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \
--experimental-bootstrap-kubeconfig=/etc/kubernetes/ssl/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/ssl/kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--hairpin-mode promiscuous-bridge \
--allow-privileged=true \
--serialize-image-pulls=false \
--logtostderr=true \
--cgroup-driver=systemd \
--cluster_dns=172.21.0.2 \
--cluster_domain=cluster.local \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

HERE

Note: this file is written with node1's IP; after copying it to node2 you must change 192.168.31.185 to node2's IP 192.168.31.186. This is done in a later step; see the sketch below.
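That later edit can be a one-liner; a sketch to run on node2 once the service files have been distributed:
sed -i 's/192.168.31.185/192.168.31.186/g' /lib/systemd/system/kubelet.service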

#flanneld.service
cat >/root/kubernetes/server/bin/node-service/flanneld.service <<'HERE'
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \
-etcd-cafile=/etc/kubernetes/ssl/k8s-root-ca.pem \
-etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
-etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
-etcd-endpoints=https://192.168.31.187:2379,https://192.168.31.185:2379,https://192.168.31.186:2379 \
-etcd-prefix=/kubernetes/network \
-iface=ens192
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
HERE

A pitfall in flanneld.service:
The parameter -iface=ens192 names the network interface on my hosts; look up your own interface name and put it here.
You can see it under /etc/sysconfig/network-scripts/ as ifcfg-<interface>
(the file is ifcfg-ens192 on my VMs but, for example, ifcfg-eth0 on others; everything else in the path is fixed).
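One way to list the interface names and their addresses before editing the unit (plain iproute2, nothing Kubernetes-specific):
ip -o -4 addr show | awk '{print $2, $4}'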

Create the services needed by the master
#kube-apiserver.service
cat >/root/kubernetes/server/bin/master-service/kube-apiserver.service <<'HERE'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
--advertise-address=192.168.31.184 \
--bind-address=192.168.31.184 \
--insecure-bind-address=127.0.0.1 \
--kubelet-https=true \
--runtime-config=rbac.authorization.k8s.io/v1beta1 \
--authorization-mode=Node,RBAC \
--anonymous-auth=false \
--basic-auth-file=/etc/kubernetes/basic_auth_file \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/ssl/token.csv \
--service-cluster-ip-range=172.21.0.0/16 \
--service-node-port-range=300-9000 \
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
--etcd-cafile=/etc/kubernetes/ssl/k8s-root-ca.pem \
--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
--etcd-servers=https://192.168.31.187:2379,https://192.168.31.185:2379,https://192.168.31.186:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/lib/audit.log \
--event-ttl=1h \
--v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
HERE

#kube-controller-manager.service
cat >/root/kubernetes/server/bin/master-service/kube-controller-manager.service <<'HERE'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--address=127.0.0.1 \
--master=http://127.0.0.1:8080 \
--allocate-node-cidrs=true \
--service-cluster-ip-range=172.21.0.0/16 \
--cluster-cidr=172.20.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--leader-elect=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
HERE

#kube-scheduler.service
cat >/root/kubernetes/server/bin/master-service/kube-scheduler.service <<'HERE'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--address=127.0.0.1 \
--master=http://127.0.0.1:8080 \
--leader-elect=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
HERE

Create the etcd service
cat >/root/kubernetes/server/bin/etcd-service/etcd.service <<'HERE'
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
--name=etcd01 \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--initial-advertise-peer-urls=https://192.168.31.187:2380 \
--listen-peer-urls=https://192.168.31.187:2380 \
--listen-client-urls=https://192.168.31.187:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://192.168.31.187:2379 \
--initial-cluster-token=etcd-cluster-0 \
--initial-cluster=etcd01=https://192.168.31.187:2380,etcd02=https://192.168.31.185:2380,etcd03=https://192.168.31.186:2380 \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
HERE

Note: this file is written with etcd1's member name and IP; after copying it to etcd2 and etcd3, change the member name (--name) and the listen/advertise IPs accordingly (the parts originally highlighted in red).
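A sketch of that edit for etcd2 (192.168.31.185); the address filter deliberately skips the --initial-cluster line, which must keep all three original IPs:
sed -i -e 's/--name=etcd01/--name=etcd02/' \
-e '/peer-urls\|client-urls/ s/192.168.31.187/192.168.31.185/g' \
/lib/systemd/system/etcd.service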

Distribute all the service files to their respective hosts

for node in {k8sn1,k8sn2,k8sm1,etcd1};do
rsync -avzP /root/kubernetes/server/bin/node-service/ ${node}:/lib/systemd/system/
done

for master in k8sm1;do
rsync -avzP /root/kubernetes/server/bin/master-service/ ${master}:/lib/systemd/system/
done

for etcd in {etcd1,etcd2,etcd3};do
rsync -avzP /root/kubernetes/server/bin/etcd-service/ ${etcd}:/lib/systemd/system/
done

# Add an API access user admin with password admin
vi /etc/kubernetes/basic_auth_file
Add the following line (file format: password,username,uid):
admin,admin,1002
Save the file.

Per-node file edits

Log in to the node2 host k8sn2 (192.168.31.186)
cd /lib/systemd/system/
# change the etcd member name and IP addresses, as explained above
vi etcd.service
# change the kubelet node IP, as explained above
vi kubelet.service

Log in to the node1 host k8sn1 (192.168.31.185)
cd /lib/systemd/system/
# change the etcd member name and IP addresses, as explained above
vi etcd.service

Creating the certificates

Back on the master host, install the certificate tooling.

# Install CFSSL
# using the prebuilt binaries directly

cd /root
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl*
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH

Generate the certificates

#admin-csr.json
cat >/root/kubernetes/server/bin/ssl/admin-csr.json <<'HERE'
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shenzhen",
      "L": "Shenzhen",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
HERE

#k8s-gencert.json
cat >/root/kubernetes/server/bin/ssl/k8s-gencert.json <<'HERE'
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
HERE

#k8s-root-ca-csr.json
cat >/root/kubernetes/server/bin/ssl/k8s-root-ca-csr.json <<'HERE'
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shenzhen",
      "L": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
HERE

#kube-proxy-csr.json
cat >/root/kubernetes/server/bin/ssl/kube-proxy-csr.json <<'HERE'
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shenzhen",
      "L": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
HERE

# Note: the hosts list must include the first IP of the service network, plus every etcd and k8s-master node IP
cat >/root/kubernetes/server/bin/ssl/kubernetes-csr.json <<'HERE'
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.31.184",
    "192.168.31.185",
    "192.168.31.186",
    "192.168.31.187",
    "172.21.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shenzhen",
      "L": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
HERE

Generate the common certificates and the kubeconfig files

# enter the ssl directory
cd /root/kubernetes/server/bin/ssl/
# generate the CA and the certificates
cfssl gencert --initca=true k8s-root-ca-csr.json | cfssljson --bare k8s-root-ca

for targetName in kubernetes admin kube-proxy; do
cfssl gencert --ca k8s-root-ca.pem --ca-key k8s-root-ca-key.pem --config k8s-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName
done

export KUBE_APISERVER="https://192.168.31.184:6443"
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "Toknen: ${BOOTSTRAP_TOKEN}"

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

echo "Create kubelet bootstrapping kubeconfig..."
kubectl config set-cluster kubernetes \
--certificate-authority=k8s-root-ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

# Generate the advanced audit policy
cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF

# Generate the cluster-admin kubeconfig for kubectl to use
# admin set-cluster
kubectl config set-cluster kubernetes \
--certificate-authority=k8s-root-ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=./kubeconfig

# admin set-credentials
kubectl config set-credentials kubernetes-admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=./kubeconfig

# admin set-context
kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=./kubeconfig

# admin set default context
kubectl config use-context kubernetes-admin@kubernetes \
--kubeconfig=./kubeconfig
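Optionally verify the result; kubectl prints the merged file with the embedded certificate data redacted:
kubectl config view --kubeconfig=./kubeconfig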

# create the ssl directory on every host
for node in {k8sm1,etcd1,k8sn1,k8sn2};do
ssh ${node} "mkdir -p /etc/kubernetes/ssl/ "
done

# copy the certificates to every host
for ssl in {k8sm1,etcd1,k8sn1,k8sn2};do
rsync -avzP /root/kubernetes/server/bin/ssl/ ${ssl}:/etc/kubernetes/ssl/
done

# create /root/.kube on the master and install the admin kubeconfig as the default config
mkdir -p /root/.kube ; \cp -f /etc/kubernetes/ssl/kubeconfig /root/.kube/config

Starting the services

Start the etcd cluster

Run on the master host:

for node in {etcd1,etcd2,etcd3};do
ssh ${node} "systemctl daemon-reload && systemctl start etcd && systemctl enable etcd"
done

Run on any etcd node:

# check cluster health
etcdctl \
--ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
cluster-health

The output should resemble:

22e721e13d08d9b0: name=etcd02 peerURLs=https://192.168.31.185:2380 clientURLs=https://192.168.31.185:2379 isLeader=false
5f5c726de9ebbc4a: name=etcd03 peerURLs=https://192.168.31.186:2380 clientURLs=https://192.168.31.186:2379 isLeader=true
71e4146fd1cf2436: name=etcd01 peerURLs=https://192.168.31.187:2380 clientURLs=https://192.168.31.187:2379 isLeader=false

View the member list:
etcdctl \
--ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
member list

The output should resemble:

22e721e13d08d9b0: name=etcd02 peerURLs=https://192.168.31.185:2380 clientURLs=https://192.168.31.185:2379 isLeader=false
5f5c726de9ebbc4a: name=etcd03 peerURLs=https://192.168.31.186:2380 clientURLs=https://192.168.31.186:2379 isLeader=true
71e4146fd1cf2436: name=etcd01 peerURLs=https://192.168.31.187:2380 clientURLs=https://192.168.31.187:2379 isLeader=false

# Write the cluster network configuration into etcd

etcdctl --endpoints=https://192.168.31.187:2379,https://192.168.31.185:2379,https://192.168.31.186:2379 \
--ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
mkdir /kubernetes/network

etcdctl --endpoints=https://192.168.31.187:2379,https://192.168.31.185:2379,https://192.168.31.186:2379 \
--ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
mk /kubernetes/network/config '{ "Network": "172.20.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}'
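To confirm the key was written, read it back with the same TLS flags (optional):
etcdctl --endpoints=https://192.168.31.187:2379,https://192.168.31.185:2379,https://192.168.31.186:2379 \
--ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
get /kubernetes/network/config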

Start the master services

systemctl daemon-reload && systemctl start flanneld kube-apiserver kube-controller-manager kube-scheduler && systemctl enable flanneld kube-apiserver kube-controller-manager kube-scheduler

Start the node services

for node in {k8sn1,k8sn2};do
ssh ${node} "systemctl daemon-reload && systemctl start flanneld && systemctl enable flanneld "
done

for node in {k8sn1,k8sn2};do
ssh ${node} "systemctl daemon-reload && systemctl start docker && systemctl enable docker "
done

for node in {k8sn1,k8sn2};do
ssh ${node} "systemctl daemon-reload && systemctl start kubelet && systemctl enable kubelet "
done

Verifying the cluster

# run on the master to grant the kubelet-bootstrap user the node-bootstrapper role
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

# list the kubelet certificate signing requests
kubectl get csr

If this shows "No resources found", ignore it and skip the next command.

kubectl get csr | awk '/Pending/ {print $1}' | xargs kubectl certificate approve

List all cluster roles:
kubectl get clusterrole

Bind the admin user to the cluster-admin role (admin-bind is just an arbitrary unique binding name):
kubectl create clusterrolebinding admin-bind \
--clusterrole=cluster-admin \
--user=admin

Check the nodes and the status of each component:

kubectl get nodes
kubectl get componentstatuses
kubectl cluster-info
List the pods in every namespace:
kubectl get po --all-namespaces
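As a quick smoke test you could start and then remove a throwaway deployment (hypothetical name; in 1.9 kubectl run creates a Deployment by default, and the nginx image must be pullable from the nodes):
kubectl run nginx-test --image=nginx --replicas=2
kubectl get po -o wide
kubectl delete deploy nginx-test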
 

Deploying kube-router, kube-dashboard, and core-dns

Run on the master host:
cd /root
mkdir yaml
cd yaml

Upload the yaml files from the following Baidu pan share into this directory:

链接:https://pan.baidu.com/s/1ggp5Cgb 密码:qfpa

Then apply them:

kubectl create -f kube-router.yaml
kubectl create -f dashboard.yaml
kubectl create -f dashboard-svc.yaml
kubectl create -f coredns.yaml

Testing

Dashboard address

Visiting https://192.168.31.184:6443/ui returns an error because the dashboard cannot redirect to https there; use the following address directly:

https://192.168.31.184:6443/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/
