sillycat
K8S Helm(1)Understand YAML and Kubectl Pod and Deployment

 
In K8S, we usually define all the resources (Pod, Service, Volume, Namespace, ReplicaSet, Deployment, Job, etc.) in YAML files; kubectl then calls the Kubernetes API to deploy them.

Helm manages Charts, similar to what APT is to Ubuntu or YUM is to CentOS.

Understand YAML files in K8S
https://www.qikqiak.com/k8s-book/docs/18.YAML%20%E6%96%87%E4%BB%B6.html
YAML Basics
Only spaces are allowed for indentation, no tabs
# stands for a comment
The two basic structures are Lists and Maps

Maps - key/value pairs
---
apiVersion: v1
kind: Pod

Lists
args:
  - Cat
  - Dog
  - Fish

Equal to the JSON:

{
    "args": [ "Cat", "Dog", "Fish" ]
}
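Maps and lists nest freely, and every K8S resource is built from this nesting. A small sketch (the keys here are made up, not from any real K8S resource):

```yaml
# A map whose value is a list, where each list item is itself a map
pets:
  - name: Cat
    legs: 4
  - name: Fish
    legs: 0
```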

Use YAML to Create Pod
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/

apiVersion: v1 - the K8S API version
kind: Pod - other options include Deployment, Job, Ingress, Service
metadata: - meta information, like name, namespace, labels
spec: - containers, storage, volumes, etc.
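Put together, every resource file uses the same four-section skeleton; a minimal sketch (the values are illustrative):

```yaml
---
apiVersion: v1        # which API version the kind belongs to
kind: Pod             # which resource type this file describes
metadata:             # name, namespace, labels, annotations
  name: example
spec:                 # desired state: containers, volumes, etc.
  containers: []
```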

A simple Pod definition file is shown in the section "First simple YAML to create Pod" below.


Set Up the Kubectl Command Line on Rancher Home or My Local Mac
On CentOS 7
> kubectl version
-bash: kubectl: command not found

> curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

Remove the old version if I have one
> sudo rm -fr /usr/local/bin/kubectl

Make the local file executable
> chmod a+x ./kubectl

Move the file to the PATH
> sudo mv ./kubectl /usr/local/bin/kubectl

Check version
> kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

It cannot connect to the server because the configuration does not point to my local Rancher server.
In my Rancher server cluster "home", click the button [Kubeconfig File]
> mkdir ~/.kube
> vi ~/.kube/config

Paste the configuration content there
> kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

> kubectl cluster-info
Kubernetes master is running at https://rancher-home/k8s/clusters/c-2ldm9
CoreDNS is running at https://rancher-home/k8s/clusters/c-2ldm9/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To do the same on my MacBook:
> curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/darwin/amd64/kubectl
> chmod a+x ./kubectl
> sudo mv ./kubectl /usr/local/bin/kubectl
> vi ~/.kube/config

> kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

> kubectl cluster-info
Kubernetes master is running at https://rancher-home/k8s/clusters/c-2ldm9
CoreDNS is running at https://rancher-home/k8s/clusters/c-2ldm9/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Choose configuration file
> export KUBECONFIG=~/.kube/config-dev
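If you switch between several clusters, one pattern is a small helper that re-points KUBECONFIG; this is a sketch, and the per-cluster file names (config-dev, config-home) are my own convention, not a kubectl one:

```shell
#!/bin/sh
# Point KUBECONFIG at a per-cluster file; every later kubectl call in this
# shell then talks to that cluster. The config-<name> scheme is an assumption.
use_cluster() {
    export KUBECONFIG="$HOME/.kube/config-$1"
    echo "KUBECONFIG is now $KUBECONFIG"
}

use_cluster dev
use_cluster home
```

After the export, plain kubectl commands need no extra flags in that shell.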

Check the running pods
> kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
sillycatnginx-775bf7d556-pg2fl   1/1     Running   1          2d7h
sillycatnginx-775bf7d556-sqd5x   1/1     Running   1          2d7h

Check the running services
> kubectl get svc
NAME                                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
ingress-5ed19ad916d01260cef53a965736a2ff   ClusterIP   10.43.122.67   <none>        80/TCP    2d7h
kubernetes                                 ClusterIP   10.43.0.1      <none>        443/TCP   8d
sillycatnginx                                 ClusterIP   10.43.14.70    <none>        80/TCP    2d7h

Check namespace
> kubectl get namespaces
NAME              STATUS   AGE
cattle-system     Active   9d
default           Active   9d
ingress-nginx     Active   9d
kube-node-lease   Active   9d
kube-public       Active   9d
kube-system       Active   9d

List the Nodes
> kubectl get node
NAME              STATUS   ROLES               AGE   VERSION
rancher-home      Ready    controlplane,etcd   9d    v1.14.6
rancher-worker1   Ready    worker              9d    v1.14.6
rancher-worker2   Ready    worker              9d    v1.14.6

List the Nodes with IP
> kubectl get node -o wide
NAME              STATUS   ROLES               AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
rancher-home      Ready    controlplane,etcd   9d    v1.14.6   192.168.56.110   <none>        CentOS Linux 7 (Core)   3.10.0-1062.1.1.el7.x86_64   docker://19.3.2
rancher-worker1   Ready    worker              9d    v1.14.6   192.168.56.111   <none>        CentOS Linux 7 (Core)   3.10.0-1062.1.1.el7.x86_64   docker://19.3.2
rancher-worker2   Ready    worker              9d    v1.14.6   10.0.3.15        <none>        CentOS Linux 7 (Core)   3.10.0-1062.1.1.el7.x86_64   docker://19.3.2

Shell into the pod (the SSH equivalent)
> kubectl exec -it sillycatnginx-775bf7d556-pg2fl -- /bin/bash

Or
> kubectl exec -it sillycatnginx-775bf7d556-pg2fl -- /bin/sh

First simple YAML to create Pod
> cat nginxpod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80

Create the pod when we need it
> kubectl create -f nginxpod.yaml
pod/nginx created

> kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m12s

> kubectl get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP           NODE              NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          3m17s   10.42.2.36   rancher-worker2   <none>           <none>

On the node rancher-worker2, we can reach it:
> curl -G http://10.42.2.36

Check the pod information
> kubectl describe pod nginx
Name:         nginx
Namespace:    default
Priority:     0
Node:         rancher-worker2/10.0.3.15
Start Time:   Sat, 28 Sep 2019 13:30:34 -0400
Labels:       app=web
Annotations:  cni.projectcalico.org/podIP: 10.42.2.36/32
Status:       Running
IP:           10.42.2.36
IPs:          <none>
Containers:
  front-end:
    Container ID:   docker://fedbe8f015fe8be1718cb80bf64f9e66689fdfe9cfb1a70915bc864f60f150dd
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1
    Port:           80/TCP
    Host Port:      0/TCP

Check pod nginx
> kubectl get pod/nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          30m

Check pod YAML
> kubectl get pod/nginx -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 10.42.2.36/32
  creationTimestamp: "2019-09-29T03:12:23Z"
  labels:
    app: web
  name: nginx
  namespace: default
  resourceVersion: "644756"
  selfLink: /api/v1/namespaces/default/pods/nginx
  uid: f5183fbd-e266-11e9-89f1-080027609f67
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: front-end
    ports:
    - containerPort: 80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-q8b8g
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: rancher-worker2
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-q8b8g
    secret:
      defaultMode: 420
      secretName: default-token-q8b8g
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-09-28T17:30:34Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-09-28T17:31:03Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2019-09-28T17:31:03Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2019-09-29T03:12:23Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://fedbe8f015fe8be1718cb80bf64f9e66689fdfe9cfb1a70915bc864f60f150dd
    image: nginx:latest
    imageID: docker-pullable://nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1
    lastState: {}
    name: front-end
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: "2019-09-28T17:31:03Z"
  hostIP: 10.0.3.15
  phase: Running
  podIP: 10.42.2.36
  qosClass: BestEffort
  startTime: "2019-09-28T17:30:34Z"

We can delete it:
> kubectl delete -f nginxpod.yaml
pod "nginx" deleted

Create Deployment
A Deployment definition starts like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-site
spec:
  replicas: 2

For choosing the right apiVersion, see: https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-apiversion-definition-guide.html

The basic deployment
> cat nginxdeployment.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-site
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: front-end
          image: nginx
          ports:
            - containerPort: 80
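Note that extensions/v1beta1 still works on this 1.14 server, but Deployment was removed from that group in K8S 1.16; on newer clusters the same file would target apps/v1, which also makes spec.selector mandatory. A sketch of the equivalent:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-site
spec:
  replicas: 2
  selector:            # required in apps/v1; must match the template labels
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: front-end
          image: nginx
          ports:
            - containerPort: 80
```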

Create the deployment
> kubectl create -f nginxdeployment.yaml
deployment.extensions/nginx-site created

> kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
nginx-site   2/2     2            2           4m36s

> kubectl get deployments -o wide
NAME         READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES   SELECTOR
nginx-site   2/2     2            2           4m54s   front-end    nginx    app=nginx

This deploys the workloads; the UI shows nginx-site with 2 nginx pods running on rancher-worker1 and rancher-worker2.
> kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE    IP           NODE              NOMINATED NODE   READINESS GATES
nginx-site-7ff6d945f6-lvcwp   1/1     Running   0          124m   10.42.1.65   rancher-worker1   <none>           <none>
nginx-site-7ff6d945f6-pdsng   1/1     Running   0          124m   10.42.2.38   rancher-worker2   <none>           <none>

We can easily access these services. On rancher-worker1:
> curl -G http://10.42.1.65

On the rancher-worker2
> curl -G http://10.42.2.38

This website can validate your YAML file
http://www.yamllint.com/

Deploy from our Private Registry
https://blog.csdn.net/wucong60/article/details/81586272
I configured my Harbor private registry in the UI:
[Resources] -> [Registry] ->
Address: 192.168.56.110:8088
Username: sillycat
Password:

> kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-q8b8g   kubernetes.io/service-account-token   3      11d
sillycatharbor           kubernetes.io/dockerconfigjson        1      3d20h

Latest deployment configuration
> cat nginxdeployment2.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-site
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: front-end
          image: rancher-home:8088/sillycat/nginx:v1
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: sillycatharbor
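The sillycatharbor secret referenced above was created through the Rancher UI. Written as plain K8S YAML it would look roughly like this sketch; the .dockerconfigjson value is a placeholder, not real data:

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: sillycatharbor
  namespace: default
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64 of the Docker config.json credentials>
```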

> kubectl create -f nginxdeployment2.yaml
deployment.extensions/nginx-site created

It works well.

Create Service
My basic configuration YAML is as follows:
> cat nginxservice.yaml
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/targetWorkloadIds: '["deployment:default:nginx-site"]'
  name: nginx-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
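Note that this Service has no spec.selector; the Rancher-specific field.cattle.io/targetWorkloadIds annotation links it to the deployment workload instead. On a vanilla cluster the same Service would select pods by label; a sketch:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
spec:
  selector:            # plain K8S way: match the pod labels from the Deployment
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: ClusterIP
```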

Create the service
> kubectl create -f nginxservice.yaml
service/nginx-service created

Create Load Balancing
My basic load balancing configuration is as follows:
> cat nginxloadbalancing.yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxproxy
  namespace: default
spec:
  rules:
  - host: rancher.sillycat.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
status:
  loadBalancer: {}
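This Ingress also uses extensions/v1beta1, which later K8S versions drop; from 1.19 the stable form is networking.k8s.io/v1, where each path needs a pathType and the backend is nested under service. A sketch of the equivalent:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginxproxy
  namespace: default
spec:
  rules:
  - host: rancher.sillycat.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
```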

Create the Load Balancing Ingress
> kubectl create -f nginxloadbalancing.yaml
ingress.extensions/nginxproxy created



References:
https://www.hi-linux.com/posts/21466.html
https://www.qikqiak.com/k8s-book/docs/18.YAML%20%E6%96%87%E4%BB%B6.html
https://stackoverflow.com/questions/45714658/need-to-do-ssh-to-kubernetes-pod
https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-apiversion-definition-guide.html
https://blog.csdn.net/wucong60/article/details/81586272