Deploying K8S on CentOS 7: Process and Troubleshooting

Prerequisites

Environment: VirtualBox, CentOS 7
Physical host IP: 192.168.18.8
VM 0 IP: 192.168.18.100 (VMaster, master)
VM 1 IP: 192.168.18.101 (VServer1, node1)
VM 2 IP: 192.168.18.102 (VServer2, node2)
VM 3 IP: 192.168.18.103 (VServer3, node3)

I. CentOS 7 VM IP configuration

1.#cd /etc/sysconfig/network-scripts

2.#vi ifcfg-enp0s3
TYPE=Ethernet
DEVICE=enp0s3
NAME=enp0s3
ONBOOT=yes
DEFROUTE=yes
BOOTPROTO=static    
IPADDR=192.168.18.101
NETMASK=255.255.255.0
DNS1=192.168.18.1
GATEWAY=192.168.18.1
BROADCAST=192.168.18.255

3.#service network restart

4.#ip address
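
As a quick sanity check (optional; the two addresses below are the gateway and the physical host listed in the prerequisites), confirm the static IP is applied and the VM can reach the LAN:

#ip addr show enp0s3        ##the configured address should appear on enp0s3
#ping -c 3 192.168.18.1     ##gateway
#ping -c 3 192.168.18.8     ##physical host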

II. VM hostname setup (takes effect after reboot)
1.#hostname

  #hostnamectl

2.#vim /etc/sysconfig/network 
NETWORKING=yes
HOSTNAME=VServer1

3.#vi /etc/hosts
Append the new IP address and its hostname as the last line:
192.168.18.101 VServer1

4.#vi /etc/hostname
Change the contents to VServer1

5.#reboot         ##reboot the VM

On CentOS 7 there is a systemd-hostnamed service; restarting it is enough to apply the new hostname without a full reboot:
#systemctl restart systemd-hostnamed

6.#hostname

  #hostnamectl
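
On CentOS 7, an alternative to editing the files above is to let hostnamectl write the hostname for you (run the matching command on each VM; VServer1 is shown as the example):

#hostnamectl set-hostname VServer1
#hostnamectl status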

7.#yum update
Upgrade CentOS (both the system packages and the kernel)

8.#reboot       ##reboot the VM

III. Pre-installation preparation for K8S
1. Stop firewalld and disable it at boot
# systemctl stop firewalld.service #stop firewalld
# systemctl disable firewalld.service #disable firewalld at boot

2. Install the NTP service (plus wget and net-tools)
# yum install -y ntp wget net-tools
# systemctl start ntpd
# systemctl enable ntpd
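
To verify that time synchronization is actually working (consistent clocks across master and nodes make event timestamps and logs much easier to correlate), you can check:

# ntpq -p          ##lists the NTP peers and their offsets
# timedatectl      ##check that NTP is enabled and the clock is reported as synchronized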

IV. K8S Master installation and configuration
1. Install Kubernetes Master
# yum install -y kubernetes etcd

2. Edit /etc/etcd/etcd.conf so that etcd listens on all IP addresses. Make sure the following lines are uncommented and set to the values below:
# vi /etc/etcd/etcd.conf
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#[clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.18.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.18.100:2379"
ETCD_INITIAL_CLUSTER="default=http://192.168.18.100:2380"

3. Edit the Kubernetes API server configuration file /etc/kubernetes/apiserver. Make sure the following lines are uncommented and set to the values below:
# vi /etc/kubernetes/apiserver

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.18.100:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

4. Start the etcd, kube-apiserver, kube-controller-manager and kube-scheduler services and enable them at boot
#mkdir /script
#touch /script/kubernetes_service.sh
#vi /script/kubernetes_service.sh

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

#chmod 777 /script/kubernetes_service.sh

#sh /script/kubernetes_service.sh
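
After the script finishes, a quick health check of the control plane (using the addresses configured above) can look like this:

#etcdctl cluster-health                       ##etcd should report "cluster is healthy"
#curl http://192.168.18.100:8080/version      ##the apiserver should answer with its version JSON
#kubectl get componentstatuses                ##scheduler, controller-manager and etcd should be Healthy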

5. Define the flannel network configuration in etcd. This configuration is what the flannel service on the nodes will pull:
# etcdctl mk /centos.com/network/config '{"Network":"172.17.0.0/16"}'
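
You can read the key back to confirm it is stored under the prefix the nodes will later use as FLANNEL_ETCD_PREFIX:

# etcdctl get /centos.com/network/config
{"Network":"172.17.0.0/16"}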

6. Add iptables rules to allow the required ports
# iptables -I INPUT -p tcp --dport 2379 -j ACCEPT
# iptables -I INPUT -p tcp --dport 10250 -j ACCEPT
# iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
# iptables-save

#Take a quick look at the current iptables rules
#iptables -L -n
#netstat -lnpt|grep kube-apiserver

Check the K8S version information
#kubectl api-versions
#kubectl version


V. K8S Node installation and configuration
1. Install kubernetes and flannel with yum
# yum install -y flannel kubernetes

2. Point the flannel service at the etcd server. Edit the following lines in /etc/sysconfig/flanneld so the nodes connect to the master:
#vi /etc/sysconfig/flanneld

FLANNEL_ETCD="http://192.168.18.100:2379" #set to the etcd server's IP
FLANNEL_ETCD_PREFIX="/centos.com/network"

3. Edit the default Kubernetes configuration in /etc/kubernetes/config and make sure KUBE_MASTER points to the Kubernetes master API server:
#vi /etc/kubernetes/config

KUBE_MASTER="--master=http://192.168.18.100:8080"

4. Edit the following lines in /etc/kubernetes/kubelet on each node:

node1:
# vi /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=192.168.18.101"
KUBELET_API_SERVER="--api_servers=http://192.168.18.100:8080"
KUBELET_ARGS=""

node2:
# vi /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=192.168.18.102"
KUBELET_API_SERVER="--api_servers=http://192.168.18.100:8080"
KUBELET_ARGS=""

node3:
# vi /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=192.168.18.103"
KUBELET_API_SERVER="--api_servers=http://192.168.18.100:8080"
KUBELET_ARGS=""

5. Start the kube-proxy, kubelet, docker and flanneld services and enable them at boot
#mkdir /script
#touch /script/kubernetes_node_service.sh
#vi /script/kubernetes_node_service.sh

for SERVICES in kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

#chmod 777 /script/kubernetes_node_service.sh

#sh /script/kubernetes_node_service.sh
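
Once the node services are running, the kubelets should register with the master. From the master, check that all three nodes show up as Ready; the output should look roughly like the sketch below (the names are the --hostname_override values configured above):

#kubectl get nodes
NAME             STATUS    AGE
192.168.18.101   Ready     1m
192.168.18.102   Ready     1m
192.168.18.103   Ready     1m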

6. Add iptables rules:

# iptables -I INPUT -p tcp --dport 2379 -j ACCEPT
# iptables -I INPUT -p tcp --dport 10250 -j ACCEPT
# iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
# iptables-save

#Check service status
# systemctl status kubelet
# systemctl status docker
..........

#Take a quick look at the current iptables rules
#iptables -L -n
#netstat -lnpt|grep kubelet
#ss -tunlp | grep 8080
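
It can also help to confirm on each node that flanneld obtained a subnet from the 172.17.0.0/16 range defined on the master and that docker0 falls inside it. The file and interface names below are what flanneld creates with its default (udp backend) settings, so treat this as a best-effort check:

#cat /run/flannel/subnet.env     ##FLANNEL_SUBNET should be a /24 inside 172.17.0.0/16
#ip addr show flannel0           ##flannel tunnel interface
#ip addr show docker0            ##should sit inside the flannel subnet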

VI. Troubleshooting

(1) yum install on CentOS 7 fails with the error: cannot find a valid baseurl for repo: base/7/x86_64

Solution:

1. First, back up CentOS-Base.repo
#sudo cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak

2. Change the yum repo configuration file /etc/yum.repos.d/CentOS-Base.repo to the Aliyun mirror, with the following contents:
#vi /etc/yum.repos.d/CentOS-Base.repo

# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#

[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.aliyun.com/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

3. Clean the cache

#yum clean all

#rm -rf /var/cache/yum/

4. Rebuild the cache

#yum makecache fast
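
If the mirror configuration is correct, listing the repos should now show base, updates and extras coming from mirrors.aliyun.com without errors:

#yum repolist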


(2) Containers created with a kubernetes Deployment.yaml stay stuck in the ContainerCreating state:

1. Inspect with the following commands:
#kubectl get pod [pod-name] -o wide

#kubectl describe pod [pod-name]


Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason          Message
  ---------     --------        -----   ----                            -------------   --------        ------          -------
  30m           30m             1       {default-scheduler }                            Normal          Scheduled       Successfully assigned nginx-deployment-67353951-4b29t to 192.168.18.184
  30m           3m              10      {kubelet 192.168.18.184}                        Warning         FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.  details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"

  30m           9s              128     {kubelet 192.168.18.184}                        Warning         FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""


2. Solution:

Run on the Node(s):
#yum install -y *rhsm*

#wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm

#rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem

The two commands above produce the /etc/rhsm/ca/redhat-uep.pem file.
#docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest

#systemctl restart kubelet
#systemctl status kubelet
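
To confirm the fix on the node, you can check that the pause image is now present locally (an illustrative check, not part of the original write-up):

#docker images | grep pod-infrastructure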

Run on the Master:
#kubectl get pod -o wide


(3) Docker startup warning: IPv4 forwarding is disabled. Networking will not work.

1. Problem:
On CentOS 7, a web service started with Docker reports the following warning at startup:

WARNING: IPv4 forwarding is disabled. Networking will not work.


2. Solution:

#vi /etc/sysctl.conf

net.ipv4.ip_forward=1  #add this line

Restart the network and docker services

#systemctl restart network && systemctl restart docker

Check whether the change took effect (a return value of 1 means success)

# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
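
The same setting can also be applied immediately with sysctl, without restarting the network service; keeping the line in /etc/sysctl.conf (as above) is what makes it persistent across reboots:

#sysctl -w net.ipv4.ip_forward=1
#sysctl -p                           ##reload /etc/sysctl.conf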


(4) Containers created with a kubernetes Deployment.yaml go from Running to CrashLoopBackOff:

Inspect with the following commands:
#kubectl get pod --namespace=kube-system

#kubectl describe pod --namespace=kube-system [pod-name]

Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath                           Type            Reason          Message
  ---------     --------        -----   ----                            -------------                           --------        ------          -------
  15m           15m             1       {default-scheduler }                                                    Normal          Scheduled       Successfully assigned kubernetes-dashboard-latest-1032225734-t7h1k to 192.168.18.182
  15m           15m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Created         Created container with docker id 49eb3e178a31; Security:[seccomp=unconfined]
  15m           15m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Started         Started container with docker id 49eb3e178a31
  14m           14m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Killing         Killing container with docker id 49eb3e178a31: pod "kubernetes-dashboard-latest-1032225734-t7h1k_kube-system(9f253538-ba6b-11ea-a541-080027f52697)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
  14m           14m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Started         Started container with docker id 593ef9a3b58b
  14m           14m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Created         Created container with docker id 593ef9a3b58b; Security:[seccomp=unconfined]
  13m           13m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Started         Started container with docker id db852a4811ef
  13m           13m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Killing         Killing container with docker id 593ef9a3b58b: pod "kubernetes-dashboard-latest-1032225734-t7h1k_kube-system(9f253538-ba6b-11ea-a541-080027f52697)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
  13m           13m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Created         Created container with docker id db852a4811ef; Security:[seccomp=unconfined]
  13m           13m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Killing         Killing container with docker id db852a4811ef: pod "kubernetes-dashboard-latest-1032225734-t7h1k_kube-system(9f253538-ba6b-11ea-a541-080027f52697)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
  13m           13m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Created         Created container with docker id 8e17cc3051f4; Security:[seccomp=unconfined]
  13m           13m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Started         Started container with docker id 8e17cc3051f4
  12m           12m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Started         Started container with docker id 78410f1187c4
  12m           12m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Killing         Killing container with docker id 8e17cc3051f4: pod "kubernetes-dashboard-latest-1032225734-t7h1k_kube-system(9f253538-ba6b-11ea-a541-080027f52697)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
  12m           12m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Created         Created container with docker id 78410f1187c4; Security:[seccomp=unconfined]
  12m           12m             1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal          Killing         Killing container with docker id 78410f1187c4: pod "kubernetes-dashboard-latest-1032225734-t7h1k_kube-system(9f253538-ba6b-11ea-a541-080027f52697)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
  12m           11m             5       {kubelet 192.168.18.182}                                                Warning         FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-latest-1032225734-t7h1k_kube-system(9f253538-ba6b-11ea-a541-080027f52697)"

  11m   11m     1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id db7406db72b5
  11m   11m     1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id db7406db72b5; Security:[seccomp=unconfined]
  10m   10m     1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id 21f23ff4afd4; Security:[seccomp=unconfined]
  10m   10m     1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id 21f23ff4afd4
  10m   10m     1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Killing         Killing container with docker id db7406db72b5: pod "kubernetes-dashboard-latest-1032225734-t7h1k_kube-system(9f253538-ba6b-11ea-a541-080027f52697)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
  10m   10m     1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Killing         Killing container with docker id 21f23ff4afd4: pod "kubernetes-dashboard-latest-1032225734-t7h1k_kube-system(9f253538-ba6b-11ea-a541-080027f52697)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
  10m   7m      14      {kubelet 192.168.18.182}                                                Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-latest-1032225734-t7h1k_kube-system(9f253538-ba6b-11ea-a541-080027f52697)"

  7m    7m      1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Started                 Started container with docker id 36b562f2b843
  7m    7m      1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Created                 Created container with docker id 36b562f2b843; Security:[seccomp=unconfined]
  7m    7m      1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Killing                 Killing container with docker id 36b562f2b843: pod "kubernetes-dashboard-latest-1032225734-t7h1k_kube-system(9f253538-ba6b-11ea-a541-080027f52697)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
  1m    1m      1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Created                 Created container with docker id c4892ddab77b; Security:[seccomp=unconfined]
  1m    1m      1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Started                 Started container with docker id c4892ddab77b
  15m   1m      11      {kubelet 192.168.18.182}                                                Warning MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  1m    1m      1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Created                 (events with common reason combined)
  1m    1m      1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Started                 (events with common reason combined)
  15m   1m      10      {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Pulled                  Container image "registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.5.0" already present on machine
  1m    1m      1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Killing                 Killing container with docker id c4892ddab77b: pod "kubernetes-dashboard-latest-1032225734-t7h1k_kube-system(9f253538-ba6b-11ea-a541-080027f52697)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
  52s   52s     1       {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Normal  Killing                 (events with common reason combined)
  14m   52s     12      {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Warning Unhealthy               Liveness probe failed: Get http://172.17.0.2:9180/: dial tcp 172.17.0.2:9180: getsockopt: connection refused
  7m    11s     29      {kubelet 192.168.18.182}                                                Warning FailedSync              Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-latest-1032225734-t7h1k_kube-system(9f253538-ba6b-11ea-a541-080027f52697)"

  12m   11s     48      {kubelet 192.168.18.182}        spec.containers{kubernetes-dashboard}   Warning BackOff Back-off restarting failed docker container

Solution:

#iptables -P FORWARD ACCEPT


(5) After an nginx pod is published successfully, its external port cannot be accessed
Access from inside the node works as follows:
#curl http://192.168.18.180:30080

Solution:
Run the following on the server hosting the container to enable forwarding:
#iptables -P FORWARD ACCEPT
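
Note that this FORWARD policy change is not persistent; a reboot (and on some Docker versions a Docker restart) resets it. One way to persist it, assuming the iptables-services package is acceptable in your environment (firewalld is already disabled in this setup), is:

#yum install -y iptables-services
#systemctl enable iptables
#service iptables save        ##writes the current rules and chain policies to /etc/sysconfig/iptables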


(6) Force-delete a pod stuck in the Terminating state
#kubectl delete pod ${pod_name} --grace-period=0 --force

Force-delete an rc (replication controller)
#kubectl delete rc nginx-controller --force --cascade=false

Force-delete a deployment
#kubectl delete deployment kubernetes-dashboard-latest  --namespace=kube-system --force=true --cascade=false

VII. Common commands:

#kubectl get svc --namespace=kube-system

#kubectl delete svc --namespace=kube-system kubernetes-dashboard

#kubectl get deployment --namespace=kube-system

#kubectl delete deployment --namespace=kube-system kubernetes-dashboard-latest

#kubectl get pod --namespace=kube-system

#kubectl delete pod --namespace=kube-system kubernetes-dashboard-latest-3665071062-t6sgk

#kubectl create --validate -f dashboard-deployment.yaml

#kubectl create --validate -f dashboard-service.yaml

#kubectl describe pod --namespace=kube-system kubernetes-dashboard-latest-3665071062-b5k84

#kubectl --namespace=kube-system get pod kubernetes-dashboard-latest-3665071062-b5k84 -o yaml

View kubelet logs (and pod logs)
#journalctl -u kubelet
#kubectl logs <pod-name>
#kubectl logs --previous <pod-name>
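
A few kubectl logs variants that are often useful when a pod is crash-looping:

#kubectl logs -f <pod-name>                    ##stream the log
#kubectl logs --tail=100 <pod-name>            ##only the last 100 lines
#kubectl logs <pod-name> -c <container-name>   ##a specific container of a multi-container pod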

Exec into a running pod
#kubectl exec -it pod-name /bin/bash

Exec into a specific container of a running multi-container pod
#kubectl exec -it pod-name -c container-name /bin/bash









