K8S Helm(1)Understand YAML and Kubectl Pod and Deployment
In K8S we usually describe all the resources (Pod, Service, Volume, Namespace, ReplicaSet, Deployment, Job, etc.) in YAML files, and kubectl then calls the Kubernetes API to deploy them.
Helm manages Charts, playing a role similar to APT on Ubuntu or YUM on CentOS.
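As a rough illustration of what Helm adds on top of raw YAML (Helm 2 era syntax; the repo URL and chart name are only for illustration, nothing is installed in this post):
> helm repo add stable https://kubernetes-charts.storage.googleapis.com
> helm search nginx
> helm install stable/nginx-ingress --name my-nginx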
Understand YAML file in K8S
https://www.qikqiak.com/k8s-book/docs/18.YAML%20%E6%96%87%E4%BB%B6.html
YAML Basics
Only spaces are allowed for indentation, no tabs
# stands for a comment
Lists and Maps
Maps - key value pairs
---
apiVersion: v1
kind: Pod
Lists
args:
- Cat
- Dog
- Fish
Equal to this JSON
{
  "args": [ "Cat", "Dog", "Fish" ]
}
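Maps and lists nest freely, which is exactly the shape the Pod spec below uses; a minimal fragment:
spec:
  containers:
  - name: front-end
    image: nginx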
Use YAML to Create Pod
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/
apiVersion: v1 - the K8S API version
kind: Pod - other options include Deployment, Job, Ingress, Service
metadata: - meta information such as name, namespace, labels
spec: - containers, storage, volumes, etc.
A simple pod definition file appears below in "First simple YAML to create Pod".
Set Up the Kubectl Command Line on Rancher Home or My Local Mac
On CentOS 7
> kubectl version
-bash: kubectl: command not found
> curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
Remove the old version if I have one
> sudo rm -fr /usr/local/bin/kubectl
Make the downloaded file executable
> chmod a+x ./kubectl
Move the file into a directory on the PATH
> sudo mv ./kubectl /usr/local/bin/kubectl
Check version
> kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
It cannot connect to the server yet, because the configuration does not point to my local Rancher server.
In the Rancher UI, open the cluster named home and click the button [Kubeconfig File].
> mkdir ~/.kube
> vi ~/.kube/config
Paste the configuration content there
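The pasted content follows the normal kubeconfig layout; roughly like this sketch, with the real server URL, certificate data and token coming from the Rancher UI:
apiVersion: v1
kind: Config
clusters:
- name: home
  cluster:
    server: https://rancher-home/k8s/clusters/c-2ldm9
users:
- name: home
  user:
    token: <token from the Rancher UI>
contexts:
- name: home
  context:
    cluster: home
    user: home
current-context: home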
> kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
> kubectl cluster-info
Kubernetes master is running at https://rancher-home/k8s/clusters/c-2ldm9
CoreDNS is running at https://rancher-home/k8s/clusters/c-2ldm9/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
If we need this on my MacBook, the steps are the same with the darwin binary:
> curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/darwin/amd64/kubectl
> chmod a+x ./kubectl
> sudo mv ./kubectl /usr/local/bin/kubectl
> vi ~/.kube/config
> kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
> kubectl cluster-info
Kubernetes master is running at https://rancher-home/k8s/clusters/c-2ldm9
CoreDNS is running at https://rancher-home/k8s/clusters/c-2ldm9/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
If there are several configuration files, choose which one to use:
> export KUBECONFIG=~/.kube/config-dev
Check the running pods
> kubectl get pods
NAME READY STATUS RESTARTS AGE
sillycatnginx-775bf7d556-pg2fl 1/1 Running 1 2d7h
sillycatnginx-775bf7d556-sqd5x 1/1 Running 1 2d7h
Check the running services
> kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-5ed19ad916d01260cef53a965736a2ff ClusterIP 10.43.122.67 <none> 80/TCP 2d7h
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 8d
sillycatnginx ClusterIP 10.43.14.70 <none> 80/TCP 2d7h
Check namespace
> kubectl get namespaces
NAME STATUS AGE
cattle-system Active 9d
default Active 9d
ingress-nginx Active 9d
kube-node-lease Active 9d
kube-public Active 9d
kube-system Active 9d
List the Nodes
> kubectl get node
NAME STATUS ROLES AGE VERSION
rancher-home Ready controlplane,etcd 9d v1.14.6
rancher-worker1 Ready worker 9d v1.14.6
rancher-worker2 Ready worker 9d v1.14.6
List the Nodes with IP
> kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rancher-home Ready controlplane,etcd 9d v1.14.6 192.168.56.110 <none> CentOS Linux 7 (Core) 3.10.0-1062.1.1.el7.x86_64 docker://19.3.2
rancher-worker1 Ready worker 9d v1.14.6 192.168.56.111 <none> CentOS Linux 7 (Core) 3.10.0-1062.1.1.el7.x86_64 docker://19.3.2
rancher-worker2 Ready worker 9d v1.14.6 10.0.3.15 <none> CentOS Linux 7 (Core) 3.10.0-1062.1.1.el7.x86_64 docker://19.3.2
"SSH" into the pod (really kubectl exec into the container)
> kubectl exec -it sillycatnginx-775bf7d556-pg2fl -- /bin/bash
Or
> kubectl exec -it sillycatnginx-775bf7d556-pg2fl -- /bin/sh
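kubectl exec can also run a single command without opening a shell, for example checking the nginx version inside the pod (assuming the container has the nginx binary on its PATH):
> kubectl exec sillycatnginx-775bf7d556-pg2fl -- nginx -v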
First simple YAML to create Pod
> cat nginxpod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
  - name: front-end
    image: nginx
    ports:
    - containerPort: 80
Create the pod from the file
> kubectl create -f nginxpod.yaml
pod/nginx created
> kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 2m12s
> kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 3m17s 10.42.2.36 rancher-worker2 <none> <none>
On the node rancher-worker2, we can reach the pod IP directly and get the nginx welcome page
> curl -G http://10.42.2.36
Check the pod information
> kubectl describe pod nginx
Name: nginx
Namespace: default
Priority: 0
Node: rancher-worker2/10.0.3.15
Start Time: Sat, 28 Sep 2019 13:30:34 -0400
Labels: app=web
Annotations: cni.projectcalico.org/podIP: 10.42.2.36/32
Status: Running
IP: 10.42.2.36
IPs: <none>
Containers:
  front-end:
    Container ID:  docker://fedbe8f015fe8be1718cb80bf64f9e66689fdfe9cfb1a70915bc864f60f150dd
    Image:         nginx
    Image ID:      docker-pullable://nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1
    Port:          80/TCP
    Host Port:     0/TCP
Check pod nginx
> kubectl get pod/nginx
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 30m
Check pod YAML
> kubectl get pod/nginx -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 10.42.2.36/32
  creationTimestamp: "2019-09-29T03:12:23Z"
  labels:
    app: web
  name: nginx
  namespace: default
  resourceVersion: "644756"
  selfLink: /api/v1/namespaces/default/pods/nginx
  uid: f5183fbd-e266-11e9-89f1-080027609f67
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: front-end
    ports:
    - containerPort: 80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-q8b8g
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: rancher-worker2
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-q8b8g
    secret:
      defaultMode: 420
      secretName: default-token-q8b8g
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-09-28T17:30:34Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-09-28T17:31:03Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2019-09-28T17:31:03Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2019-09-29T03:12:23Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://fedbe8f015fe8be1718cb80bf64f9e66689fdfe9cfb1a70915bc864f60f150dd
    image: nginx:latest
    imageID: docker-pullable://nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1
    lastState: {}
    name: front-end
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: "2019-09-28T17:31:03Z"
  hostIP: 10.0.3.15
  phase: Running
  podIP: 10.42.2.36
  qosClass: BestEffort
  startTime: "2019-09-28T17:30:34Z"
We can delete the pod with the same file
> kubectl delete -f nginxpod.yaml
pod "nginx" deleted
Create Deployment
A deployment starts from a skeleton like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-site
spec:
  replicas: 2
For choosing the right apiVersion for each kind, see https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-apiversion-definition-guide.html
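Note that extensions/v1beta1 still works on this v1.14 server, but the Deployment kind is removed from that API group in Kubernetes 1.16+; on a newer cluster the same deployment would use apps/v1, which also requires an explicit selector. A sketch, not applied on this cluster:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-site
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: front-end
        image: nginx
        ports:
        - containerPort: 80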
The basic deployment
> cat nginxdeployment.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-site
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: front-end
        image: nginx
        ports:
        - containerPort: 80
Create the deployment
> kubectl create -f nginxdeployment.yaml
deployment.extensions/nginx-site created
> kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-site 2/2 2 2 4m36s
> kubectl get deployments -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-site 2/2 2 2 4m54s front-end nginx app=nginx
This deploys the workloads; the Rancher UI shows nginx-site with 2 nginx pods, one running on rancher-worker1 and one on rancher-worker2.
> kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-site-7ff6d945f6-lvcwp 1/1 Running 0 124m 10.42.1.65 rancher-worker1 <none> <none>
nginx-site-7ff6d945f6-pdsng 1/1 Running 0 124m 10.42.2.38 rancher-worker2 <none> <none>
We can easily access these services; on rancher-worker1
> curl -G http://10.42.1.65
On the rancher-worker2
> curl -G http://10.42.2.38
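If we later need more or fewer replicas, the deployment can be scaled in place; a standard kubectl command, not something I ran in this post:
> kubectl scale deployment nginx-site --replicas=3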
This website can validate your YAML file
http://www.yamllint.com/
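kubectl itself can also do a quick local sanity check before creating anything; on this client version --dry-run is a boolean flag (newer clients spell it --dry-run=client):
> kubectl create -f nginxdeployment.yaml --dry-run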
Deploy from Our Private Registry
https://blog.csdn.net/wucong60/article/details/81586272
I configured my Harbor private registry in the Rancher UI:
[Resources] -> [Registry]
Address: 192.168.56.110:8088
Username: sillycat
Password:
> kubectl get secrets
NAME TYPE DATA AGE
default-token-q8b8g kubernetes.io/service-account-token 3 11d
sillycatharbor kubernetes.io/dockerconfigjson 1 3d20h
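The same kind of secret could also be created with kubectl instead of the Rancher UI; a sketch using the values above, with the real password filled in:
> kubectl create secret docker-registry sillycatharbor --docker-server=192.168.56.110:8088 --docker-username=sillycat --docker-password=<password>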
Latest deployment configuration
> cat nginxdeployment2.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-site
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: front-end
        image: rancher-home:8088/sillycat/nginx:v1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: sillycatharbor
> kubectl create -f nginxdeployment2.yaml
deployment.extensions/nginx-site created
It works well.
Create Service
My basic configuration YAML is as follows:
> cat nginxservice.yaml
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/targetWorkloadIds: '["deployment:default:nginx-site"]'
  name: nginx-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
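The field.cattle.io/targetWorkloadIds annotation is how Rancher ties this Service to the nginx-site deployment; on a plain Kubernetes cluster the usual way is a label selector in the spec, matching the app: nginx label used above, roughly:
spec:
  selector:
    app: nginx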
Create the service
> kubectl create -f nginxservice.yaml
service/nginx-service created
Create Load Balancer
My basic load balancer (Ingress) configuration is as follows:
> cat nginxloadbalancing.yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxproxy
  namespace: default
spec:
  rules:
  - host: rancher.sillycat.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
status:
  loadBalancer: {}
Create the load balancer
> kubectl create -f nginxloadbalancing.yaml
ingress.extensions/nginxproxy created
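To test the ingress from a machine that does not resolve rancher.sillycat.com, the Host header can be set by hand; this assumes the Rancher ingress controller listens on port 80 of the worker nodes (IP taken from the node list above):
> curl -H 'Host: rancher.sillycat.com' http://192.168.56.111/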
References:
https://www.hi-linux.com/posts/21466.html
https://www.qikqiak.com/k8s-book/docs/18.YAML%20%E6%96%87%E4%BB%B6.html
https://stackoverflow.com/questions/45714658/need-to-do-ssh-to-kubernetes-pod
https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-apiversion-definition-guide.html
https://blog.csdn.net/wucong60/article/details/81586272