K8S Usage

Resource Management

Ways to create resources

  • Command line
  • YAML

Namespace

Namespaces are used to isolate resources.

# Create a namespace
kubectl create ns XXX
# Delete a namespace. Note: deleting a namespace also deletes all resources inside it
kubectl delete ns XXX

apiVersion: v1
kind: Namespace
metadata:
  name: hello
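
A minimal sketch of applying the manifest above from a file; the file name hello-ns.yaml is only an assumption:

# Create (or update) the namespace defined in the file
kubectl apply -f hello-ns.yaml
# Delete everything defined in the same file
kubectl delete -f hello-ns.yaml

The same apply/delete pattern works for every YAML manifest in the rest of these notes.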

Pod

A Pod is a group of running containers and is the smallest deployable unit of an application in Kubernetes.
Kubernetes assigns each Pod its own IP address.

Creating a Pod with commands

# Create a Pod
kubectl run mynginx --image=nginx -n k8s-test
# List Pods
kubectl get pod -n k8s-test
# Describe the Pod
kubectl describe pod mynginx -n k8s-test
# View the Pod's logs
kubectl logs mynginx -n k8s-test
# Show detailed Pod info (Kubernetes assigns every Pod an IP; the Pod can be reached at the IP listed here)
kubectl get pod -owide -n k8s-test
# Delete the Pod
kubectl delete pod mynginx -n k8s-test
# Open a shell inside the Pod
kubectl exec -it mynginx -n k8s-test -- /bin/bash
# Any machine in the cluster, and any application running in it, can reach this Pod via its assigned IP

Creating a Pod from a configuration file (YAML)

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: my-nginx  # Pod label
  name: my-nginx1  # Pod name
  namespace: k8s-test # Namespace
spec:
  containers:
  - image: nginx # Image used by the Pod
    name: mynginx2  # Container name

Result of the created Pod

Containers

Tip: the Pod label, Pod name, and container name do not have to match.

Creating a multi-container Pod

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: multiple-pod # Pod label
  name: my-multiple-pod  # Pod name
  namespace: k8s-test # Namespace
spec:
  containers:
  - image: nginx # Image used by the Pod
    name: nginx-multiple  # Container name
  - image: tomcat
    name: tomcat-multiple

Tip: containers in the same Pod share the network namespace, so they cannot listen on the same port.
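
With more than one container in a Pod, kubectl needs the -c flag to pick a container. A short sketch using the container names from the manifest above:

# Logs of one specific container in the Pod
kubectl logs my-multiple-pod -c nginx-multiple -n k8s-test
# Open a shell inside one specific container
kubectl exec -it my-multiple-pod -c tomcat-multiple -n k8s-test -- /bin/bash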

Deployment

A Deployment manages Pods, giving them multi-replica, self-healing, and scaling capabilities.

Creating with commands

# Create a Deployment (self-healing: if a Pod is deleted, a new one is started to replace it)
kubectl create deployment mytomcat --image=tomcat:8.5.68 -n k8s-test
# Create a Deployment with multiple replicas
kubectl create deployment mytomcat2 --image=tomcat:8.5.68 --replicas=3 -n k8s-test
# Delete the Deployment
kubectl delete deployment mytomcat2 -n k8s-test

Creating from a configuration file

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: deployment-pod
  name: deployment-pod
  namespace: k8s-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deployment-pod
  template:
    metadata:
      labels:
        app: deployment-pod
    spec:
      containers:
      - image: nginx
        name: nginx-deployment
      - image: tomcat 
        name: tomcat-deployment 
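
A quick way to check what the Deployment created; the label selector matches the manifest above:

# The Deployment, its ReplicaSet, and the Pods it manages
kubectl get deployment,replicaset,pod -l app=deployment-pod -n k8s-test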

Self-healing

Multiple replicas

Scaling

kubectl scale --replicas=5 deployment/deployment-pod -n k8s-test

Scaling (via edit)

kubectl edit deployment/deployment-pod -n k8s-test

Scaling - YAML

Scaling - result

Rolling update

# Set the image and record the change
kubectl set image deployment/deployment-pod nginx-deployment=nginx:1.16.1 -n k8s-test --record
# Watch the rollout status
kubectl rollout status deployment/deployment-pod -n k8s-test

Rolling update

# Rolling update - by editing the YAML
kubectl edit deployment/deployment-pod -n k8s-test
# Edit the YAML in the editor that opens

Rolling update - YAML

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
    kubernetes.io/change-cause: kubectl set image deployment/deployment-pod nginx-deployment=nginx:1.16.1
      --namespace=k8s-test --record=true
  creationTimestamp: "2022-02-25T09:21:35Z"
  generation: 6
  labels:
    app: deployment-pod
  name: deployment-pod
  namespace: k8s-test
  resourceVersion: "730137"
  uid: f2160c44-3333-45db-bd38-2e6eca59933d
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: deployment-pod
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: deployment-pod
    spec:
      containers:
      - image: nginx:1.16.1
        imagePullPolicy: Always
        name: nginx-deployment
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - image: tomcat
        imagePullPolicy: Always
        name: tomcat-deployment
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: "2022-03-01T07:06:27Z"
    lastUpdateTime: "2022-03-01T07:06:27Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-02-25T09:21:35Z"
    lastUpdateTime: "2022-03-01T07:15:21Z"
    message: ReplicaSet "deployment-pod-7f958895f5" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 6
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2

Version rollback

# Rollout history
kubectl rollout history deployment/deployment-pod -n k8s-test
# Show the details of a specific revision
kubectl rollout history deployment/deployment-pod -n k8s-test --revision=2
# Roll back to the previous revision
kubectl rollout undo deployment/deployment-pod -n k8s-test
# Roll back to a specific revision
kubectl rollout undo deployment/deployment-pod -n k8s-test --to-revision=2

Revision history

Revision details

Tips:
Besides Deployment, Kubernetes has other resource types such as StatefulSet, DaemonSet, and Job; collectively they are called workloads.
Stateful applications are deployed with StatefulSet, stateless applications with Deployment.
Kubernetes documentation

Service

An abstract way to expose a group of Pods as a network service.

# Expose a port of the Deployment. Note: this form cannot expose multiple ports; use YAML if you need more than one
kubectl expose deployment/deployment-pod --port=8000 --target-port=80 -n k8s-test
# Show the exposed Service; it can be accessed via the listed (Service) IP plus port
kubectl get svc deployment-pod -n k8s-test
# Select Pods by label
kubectl get pod -l app=deployment-pod -n k8s-test
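
To see which Pod IPs the Service balances traffic across, check its Endpoints (a sketch based on the Service created by the expose command above):

# The Endpoints object lists the Pod IP:port pairs behind the Service
kubectl get endpoints deployment-pod -n k8s-test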

Viewing the Service

Accessing through the Service, with load balancing

apiVersion: v1
kind: Service
metadata:
  labels:
    app: deployment-pod
  name: deployment-pod
spec:
  selector:
    app: deployment-pod
  ports:
  - name: nginx-port
    port: 8000
    protocol: TCP
    targetPort: 80
  - name: tomcat-port
    port: 8090
    protocol: TCP
    targetPort: 8080

Exposing multiple ports

The YAML form supports exposing multiple ports; when exposing more than one port, each port must be given a name.
Multi-port Service

Note:
Like other Kubernetes names, a port name may only contain lowercase alphanumeric characters and -, and it must start and end with an alphanumeric character.
For example, the names 123-abc and web are valid, while 123_abc and -web are not.

ClusterIP

ClusterIP mode: the Service is only reachable from inside the cluster.

# ClusterIP mode; --type=ClusterIP can be omitted since it is the default
kubectl expose deployment/deployment-pod --port=8000 --target-port=80 -n k8s-test --type=ClusterIP

apiVersion: v1
kind: Service
metadata:
  labels:
    app: deployment-pod
  name: deployment-pod
spec:
  selector:
    app: deployment-pod
  ports:
  - name: nginx-port
    port: 8000
    protocol: TCP
    targetPort: 80
  - name: tomcat-port
    port: 8090
    protocol: TCP
    targetPort: 8080
  type: ClusterIP
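
A ClusterIP Service is also reachable by DNS name from inside the cluster. A quick test from a temporary Pod, assuming the default cluster domain cluster.local:

# Run a throw-away busybox Pod and fetch the nginx port of the Service by DNS name
kubectl run net-test --rm -it --restart=Never --image=busybox -n k8s-test \
  -- wget -qO- http://deployment-pod.k8s-test.svc.cluster.local:8000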

NodePort

If you set the type field to NodePort, the Kubernetes control plane allocates a port from the range specified by the --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every node) into your Service. The allocated port is reported in the Service's .spec.ports[*].nodePort field.

With NodePort mode, machines outside the cluster can access the Service as well.

# NodePort mode; accessible from outside the cluster as well
kubectl expose deployment/deployment-pod --port=8000 --target-port=80 -n k8s-test --type=NodePort

If the Service already exists

# Edit the Service configuration
kubectl edit service deployment-pod -n k8s-test
# Change the type field to NodePort

If the Service does not exist yet, it can be created directly from YAML.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: deployment-pod
  name: deployment-pod
spec:
  selector:
    app: deployment-pod
  ports:
  - name: nginx-port
    port: 8000
    protocol: TCP
    targetPort: 80
  - name: tomcat-port
    port: 8090
    protocol: TCP
    targetPort: 8080
  type: NodePort
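
By default the node port is allocated randomly from the range above. To request a fixed one, add a nodePort field to the corresponding port entry of the Service; a sketch (30080 is only an illustrative value and must be free and inside the node-port range):

  ports:
  - name: nginx-port
    port: 8000
    protocol: TCP
    targetPort: 80
    nodePort: 30080 # requested node port instead of a randomly allocated one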

The Service already exists

NodePort mode

# Access from inside the cluster
## nginx
curl http://10.96.28.46:8000
## tomcat
curl http://10.96.28.46:8090

# Access from outside the cluster
## nginx
curl http://192.168.200.73:30588
## tomcat
curl http://192.168.200.73:31010

Access from inside and outside the cluster

Ingress

Ingress is an API object that manages external access to the Services in a cluster, typically over HTTP.
Ingress can provide load balancing, SSL termination, and name-based virtual hosting.

Ingress exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

Request path: Web -> Ingress -> Service -> Pod
Ingress acts as the unified gateway entry point in front of the Services.

What Ingress is

Installation

# Install the ingress-nginx controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml

If the file cannot be downloaded, use the copy below.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx

---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.1.1@sha256:e16123f3932f44a2bba8bc3cf1c109cea4495ee271d6d16ab99228b58766d3ab
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.15
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1@sha256:23a03c9c381fba54043d0f6148efeaf4c1ca2ed176e43455178b5c5ebf15ad70
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.15
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1@sha256:23a03c9c381fba54043d0f6148efeaf4c1ca2ed176e43455178b5c5ebf15ad70
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000

Usage

Creating a test environment

// TODO: to make the result easier to observe, you can change the default pages served by tomcat and nginx

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-server
  namespace: k8s-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat-server
  template:
    metadata:
      labels:
        app: tomcat-server
    spec:
      containers:
      - name: tomcat-server
        image: tomcat
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-server
  namespace: k8s-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - name: nginx-server
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
  namespace: k8s-test
spec:
  selector:
    app: nginx-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tomcat-demo
  name: tomcat-demo
  namespace: k8s-test
spec:
  selector:
    app: tomcat-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8080

Configuring the Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress  
metadata:
  name: ingress-host-test
  namespace: k8s-test
spec:
  ingressClassName: nginx
  rules:
  - host: tomcat.klaus.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: tomcat-demo
            port:
              number: 8000
  - host: nginx.klaus.com
    http:
      paths:
      - pathType: Prefix
        path: /  
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
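
A minimal sketch of applying and verifying the Ingress; the file name ingress-host-test.yaml is only an assumption:

# Create the Ingress
kubectl apply -f ingress-host-test.yaml
# Check that the hosts and rules were picked up
kubectl get ingress -n k8s-test
# Find the node ports of the ingress-nginx controller Service (used for access from outside the cluster)
kubectl get svc ingress-nginx-controller -n ingress-nginx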

Possible problems you may run into:

kubectl logs -f ingress-nginx-controller-6d77d8b984-24ttw -n ingress-nginx

Ingress error

Reference link for this error

This clearly indicates that the controller found no IngressClass matching the class nginx, because the IngressClass resource is the "link" between the NGINX Ingress controller and the Ingress resources it should serve.

Fix:

  1. First, inspect the Ingress installation manifest
  2. Set the correct IngressClass

kubectl get pod ingress-nginx-controller-6d77d8b984-24ttw -n ingress-nginx -o yaml

Ingress installation configuration

IngressClass configuration

My problem: during installation I had commented out the entire kind: IngressClass block, so nothing could match. 😳

Testing without a real domain name:

  1. Edit the hosts file on the machine that sends the requests [1]
  2. Add the mappings for the corresponding domains [2], as in the sketch below
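
A minimal sketch of the hosts mapping, assuming 192.168.200.73 is a cluster node where the ingress-nginx controller is reachable:

# On the requesting machine, map the test domains to the node IP
echo "192.168.200.73 tomcat.klaus.com nginx.klaus.com" >> /etc/hosts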

Access result

Storage Abstraction

Environment preparation

# Install nfs-utils on every node
yum install -y nfs-utils

## Master node (NFS server)
echo "/root/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
# Create the directory (recursively)
mkdir -p /root/nfs/data
# Enable at boot and start now
systemctl enable rpcbind --now
systemctl enable nfs-server --now
# Make the export configuration take effect
exportfs -r

## Worker nodes (NFS clients)
# List the directories shared by the NFS server
showmount -e 172.31.39.64
# Create the directory (recursively)
mkdir -p /root/nfs/data
# Mount the NFS server's shared directory onto the local directory
mount -t nfs 172.31.39.64:/root/nfs/data /root/nfs/data

Mounting data directly (native NFS volume)

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-pv-demo
  name: nginx-pv-demo
  namespace: k8s-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pv-demo
  template:
    metadata:
      labels:
        app: nginx-pv-demo
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          nfs:
            server: 172.31.39.64
            path: /root/nfs/data/nginx/html
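
Note: the nfs volume plugin only mounts the given path, it does not create it. If /root/nfs/data/nginx/html does not yet exist on the NFS server, the Pods will typically hang in ContainerCreating with a mount error, so create it first:

# On the NFS server (the master node here)
mkdir -p /root/nfs/data/nginx/html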

PV & PVC

PV: A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator in advance or dynamically provisioned using a Storage Class. A persistent volume is a cluster resource, just like a node is. PVs are implemented by volume plugins like ordinary Volumes, but they have a lifecycle independent of any Pod that uses them. This API object captures the details of the storage implementation, whether that is NFS, iSCSI, or a cloud-provider-specific storage system.

PVC: A PersistentVolumeClaim (PVC) expresses a user's request for storage. Conceptually it is similar to a Pod: Pods consume node resources, while PVCs consume PV resources. Pods can request specific amounts of resources (CPU and memory); similarly, a PVC can request a specific size and access modes (for example, a PV can be mounted in one of the ReadWriteOnce, ReadOnlyMany, or ReadWriteMany modes; see Access Modes).
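
A minimal sketch of static provisioning with the NFS server prepared above: a PV backed by the NFS export, a PVC that claims it, and a Pod that mounts the claim. The names, sizes, and the storageClassName nfs are illustrative assumptions:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-10m # illustrative name
spec:
  capacity:
    storage: 10Mi # size offered by this PV
  accessModes:
    - ReadWriteMany
  storageClassName: nfs # must match the PVC below
  nfs:
    server: 172.31.39.64
    path: /root/nfs/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
  namespace: k8s-test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs # binds this claim to a matching PV
  resources:
    requests:
      storage: 5Mi # requested size; must fit within the PV capacity
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pvc-pod
  namespace: k8s-test
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: nginx-pvc # mounts whichever PV the claim is bound to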


  1. Usually the machine where the browser runs ↩︎

  2. Editing the hosts file makes tomcat.klaus.com and nginx.klaus.com resolvable from that machine ↩︎