## KubeSphere Architecture
KubeSphere's multi-tenant system has three levels: cluster, workspace, and project. A project in KubeSphere is equivalent to a Kubernetes namespace.

You should create a new workspace to work in rather than using a system workspace, which runs system resources and is largely read-only. For security reasons, it is strongly recommended to grant different tenants different permissions so they can collaborate within a workspace.

You can create multiple workspaces within a KubeSphere cluster, and multiple projects within each workspace. KubeSphere provides several built-in roles at every level by default, and you can also create roles with custom permissions. This multi-level structure suits enterprise users with different teams or organizations, where each team needs different roles.
### Built-in platform roles
### Creating users
On the Users page, click Create. In the dialog that appears, fill in all the required fields (marked with *), then choose users-manager in the Role field.

When you are done, click OK. The newly created account appears in the account list on the Users page.

Switch accounts and log in again as the users-manager account to create further accounts.

Click Platform in the upper-left corner, choose Access Control from the overlay, click Users, then click Create to create the corresponding user.
### Creating a workspace
Click Platform in the upper-left corner, choose Access Control from the overlay, click Workspaces, then click Create to create a workspace.
### Creating a project
Log in with the workspace administrator account and select the corresponding workspace (wowaili; creating projects in system workspaces is not recommended). For CI/CD work choose a DevOps project; for ordinary workloads a regular project is enough.
## OpenELB
### Why do we need OpenELB?
In cloud-based Kubernetes clusters, Services are usually exposed through the load balancers provided by the cloud vendor. Cloud load balancers, however, are not available in bare-metal environments. OpenELB lets users create LoadBalancer Services for external access in bare-metal, edge, and virtualization environments, and provides the same user experience as a cloud load balancer.
### How to install it
The KubeSphere App Store provides the open-source OpenELB app; simply install it from the store.
### How to use it (Layer 2 mode)
1. Allow strictARP for kube-proxy:

   ```bash
   kubectl edit configmap kube-proxy -n kube-system
   ```

   Set `data.config.conf.ipvs.strictARP` to `true`.
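   After editing, the relevant part of the `config.conf` payload of the `kube-proxy` ConfigMap should look roughly like this excerpt (a sketch; verify the exact layout in your cluster):

   ```yaml
   apiVersion: kubeproxy.config.k8s.io/v1alpha1
   kind: KubeProxyConfiguration
   ipvs:
     strictARP: true
   ```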
   Then restart kube-proxy:

   ```bash
   kubectl rollout restart daemonset kube-proxy -n kube-system
   ```
2. Create an Eip object manifest (`layer2-eip.yaml`):

   ```yaml
   apiVersion: network.kubesphere.io/v1alpha2
   kind: Eip
   metadata:
     name: layer2-eip
     annotations:
       eip.openelb.kubesphere.io/is-default-eip: "true"
   spec:
     address: 192.168.3.91-192.168.3.120
     interface: ens192
     protocol: layer2
   ```
   If the `metadata.annotations.eip.openelb.kubesphere.io/is-default-eip` annotation is enabled, the annotations from step 3 do not have to be added to every Service.

   Create the Eip object:

   ```bash
   kubectl apply -f layer2-eip.yaml
   ```
3. In every Service that needs a LoadBalancer, add the following annotations under `metadata.annotations`:

   ```yaml
   metadata:
     name: layer2-svc
     annotations:
       lb.kubesphere.io/v1alpha1: openelb
       protocol.openelb.kubesphere.io/v1alpha1: layer2
       eip.openelb.kubesphere.io/v1alpha2: layer2-eip
   ```
### Verifying OpenELB
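A minimal way to verify the setup, assuming the Eip above: deploy a throwaway nginx Deployment plus a LoadBalancer Service carrying the step 3 annotations (all names below are illustrative), then check that the Service gets an EXTERNAL-IP from the Eip range.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openelb-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openelb-test
  template:
    metadata:
      labels:
        app: openelb-test
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: openelb-test
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: layer2-eip
spec:
  type: LoadBalancer
  selector:
    app: openelb-test
  ports:
  - port: 80
    targetPort: 80
```

`kubectl get svc openelb-test` should show an EXTERNAL-IP taken from 192.168.3.91-192.168.3.120, and `curl http://<EXTERNAL-IP>` from another host on the same network should return the nginx welcome page.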
## Service DNS name formats
- Regular Service: a DNS name of the form `servicename.namespace.svc.cluster.local` is created, which resolves to the Service's ClusterIP. For calls between Pods it can be shortened to `servicename.namespace`, and within the same namespace plain `servicename` is enough.
- Headless Service: a Service whose `clusterIP` is set to `None`. Its name resolves to the list of Pod IPs, and a specific Pod can also be reached via `podname.servicename.namespace.svc.cluster.local`.
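A minimal headless Service sketch (names and namespace are illustrative); the per-Pod records apply to Pods with stable hostnames, e.g. those managed by a StatefulSet:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless
  namespace: demo
spec:
  clusterIP: None        # headless: no ClusterIP, DNS returns the Pod IPs
  selector:
    app: my-app
  ports:
  - port: 80
```

From inside the cluster, `nslookup my-headless.demo.svc.cluster.local` then returns the individual Pod IPs instead of a single virtual IP.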
## MySQL high-availability cluster
### Installing the RadonDB MySQL Operator
- Add the repository `https://radondb.github.io/radondb-mysql-kubernetes`.
- Install the `mysql-operator` app (a Helm command-line equivalent of these two steps is sketched after this list).
- Run the script:

  ```bash
  kubectl apply -f https://github.com/radondb/radondb-mysql-kubernetes/releases/latest/download/mysql_v1alpha1_mysqlcluster.yaml --namespace=mysql-cluster
  ```
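If you install from the command line instead of the KubeSphere App Store, a rough Helm sketch looks like this (the release name `demo` and the chart name follow the upstream README and are assumptions to double-check against your chart version):

```bash
helm repo add radondb-mysql-kubernetes https://radondb.github.io/radondb-mysql-kubernetes
helm repo update
helm install demo radondb-mysql-kubernetes/mysql-operator -n mysql-cluster --create-namespace
```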
### Configuring LoadBalancer access
Configure external access for `mysql-leader` and `mysql-follower` by setting the Service type to LoadBalancer (this assumes `metadata.annotations.eip.openelb.kubesphere.io/is-default-eip: "true"` has been configured; if it has not, the corresponding annotations must be added to each Service, see step 3 of "How to use it (Layer 2 mode)").
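A sketch of doing the same with kubectl instead of the console (the Service names follow the text above but depend on how the cluster was named; the annotate step is only needed when no default Eip is configured):

```bash
# switch the Service to the LoadBalancer type
kubectl -n mysql-cluster patch svc mysql-leader -p '{"spec":{"type":"LoadBalancer"}}'

# add the OpenELB annotations (skip if a default Eip is set)
kubectl -n mysql-cluster annotate svc mysql-leader \
  lb.kubesphere.io/v1alpha1=openelb \
  protocol.openelb.kubesphere.io/v1alpha1=layer2 \
  eip.openelb.kubesphere.io/v1alpha2=layer2-eip
```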
### Connecting to RadonDB MySQL with the root account
A MySQL cluster created from the default cluster configuration file provides a root account and password for remote (internal) access. The root password has to be looked up in the configuration: it is stored Base64-encoded and needs to be Base64-decoded.
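A minimal sketch of digging the password out with kubectl; the Secret and key names vary with the cluster name, so the placeholders below have to be replaced with what actually exists in the namespace:

```bash
# list the Secrets in the namespace and find the one created for the MySQL cluster
kubectl -n mysql-cluster get secrets

# print the Secret and locate the Base64-encoded root password field
kubectl -n mysql-cluster get secret <secret-name> -o yaml

# decode the value
echo '<base64-value>' | base64 -d
```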
### Creating a superuser
```sql
# MySQL 8: create the user
create user 'wowaili'@'%' identified with mysql_native_password by 'Klaus@123';
# grant privileges to the user
grant all privileges on *.* to 'wowaili'@'%' with grant option;
# reload the privilege tables
flush privileges;
```
## Redis cluster modes
### Redis Sentinel vs. Redis Cluster
Redis Sentinel supports multiple masters and replicas, leader election, and failover, and is the high-availability solution for Redis. However, when a replica goes offline Sentinel does not fail it over, clients connected to that replica cannot obtain a new usable replica, and dynamic scale-out is not possible.

Redis Cluster is the distributed solution for Redis. When a single node runs into memory, concurrency, or traffic bottlenecks, the Cluster architecture can be used to spread the load. It supports dynamic scale-out and automatic failover.

In Redis Sentinel every node holds a complete copy of the data, whereas in Redis Cluster each node stores only part of the data.
### Installing Redis Cluster
- Add the `bitnami` repository `https://charts.bitnami.com/bitnami`.
- Configure the `values.yaml` file, changing the `global.redis.password`, `cluster.externalAccess.enabled`, and `cluster.externalAccess.service.loadBalancerIP` fields.

```yaml
global:
imageRegistry: ''
imagePullSecrets: []
storageClass: ''
redis:
password: Klaus123@.
nameOverride: ''
fullnameOverride: ''
clusterDomain: cluster.local
commonAnnotations: {}
commonLabels: {}
extraDeploy: []
diagnosticMode:
enabled: false
command:
- sleep
args:
- infinity
image:
registry: docker.io
repository: bitnami/redis-cluster
tag: 7.0.4-debian-11-r4
digest: ''
pullPolicy: IfNotPresent
pullSecrets: []
debug: false
networkPolicy:
enabled: false
allowExternal: true
ingressNSMatchLabels: {}
ingressNSPodMatchLabels: {}
serviceAccount:
create: false
name: ''
annotations: {}
automountServiceAccountToken: false
rbac:
create: false
role:
rules: []
podSecurityContext:
enabled: true
fsGroup: 1001
runAsUser: 1001
sysctls: []
podDisruptionBudget: {}
minAvailable: ''
maxUnavailable: ''
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
usePassword: true
password: ''
existingSecret: ''
existingSecretPasswordKey: ''
usePasswordFile: false
tls:
enabled: false
authClients: true
autoGenerated: false
existingSecret: ''
certificatesSecret: ''
certFilename: ''
certKeyFilename: ''
certCAFilename: ''
dhParamsFilename: ''
service:
ports:
redis: 6379
nodePorts:
redis: ''
extraPorts: []
annotations: {}
labels: {}
type: ClusterIP
clusterIP: ''
loadBalancerIP: ''
loadBalancerSourceRanges: []
externalTrafficPolicy: Cluster
sessionAffinity: None
sessionAffinityConfig: {}
persistence:
path: /bitnami/redis/data
subPath: ''
storageClass: ''
annotations: {}
accessModes:
- ReadWriteOnce
size: 8Gi
matchLabels: {}
matchExpressions: {}
volumePermissions:
enabled: false
image:
registry: docker.io
repository: bitnami/bitnami-shell
tag: 11-debian-11-r23
digest: ''
pullPolicy: IfNotPresent
pullSecrets: []
resources:
limits: {}
requests: {}
podSecurityPolicy:
create: false
redis:
command: []
args: []
updateStrategy:
type: RollingUpdate
rollingUpdate:
partition: 0
podManagementPolicy: Parallel
hostAliases: []
hostNetwork: false
useAOFPersistence: 'yes'
containerPorts:
redis: 6379
bus: 16379
lifecycleHooks: {}
extraVolumes: []
extraVolumeMounts: []
customLivenessProbe: {}
customReadinessProbe: {}
customStartupProbe: {}
initContainers: []
sidecars: []
podLabels: {}
priorityClassName: ''
configmap: ''
extraEnvVars: []
extraEnvVarsCM: ''
extraEnvVarsSecret: ''
podAnnotations: {}
resources:
limits: {}
requests: {}
schedulerName: ''
shareProcessNamespace: false
livenessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 5
startupProbe:
enabled: false
path: /
initialDelaySeconds: 300
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
podAffinityPreset: ''
podAntiAffinityPreset: soft
nodeAffinityPreset:
type: ''
key: ''
values: []
affinity: {}
nodeSelector: {}
tolerations: []
topologySpreadConstraints: []
updateJob:
activeDeadlineSeconds: 600
command: []
args: []
hostAliases: []
annotations: {}
podAnnotations: {}
podLabels: {}
extraEnvVars: []
extraEnvVarsCM: ''
extraEnvVarsSecret: ''
extraVolumes: []
extraVolumeMounts: []
initContainers: []
podAffinityPreset: ''
podAntiAffinityPreset: soft
nodeAffinityPreset:
type: ''
key: ''
values: []
affinity: {}
nodeSelector: {}
tolerations: []
priorityClassName: ''
resources:
limits: {}
requests: {}
cluster:
init: true
nodes: 6
replicas: 1
externalAccess:
enabled: true
service:
type: LoadBalancer
port: 6379
loadBalancerIP:
- 192.168.3.93
- 192.168.3.94
- 192.168.3.96
- 192.168.3.97
- 192.168.3.98
- 192.168.3.99
loadBalancerSourceRanges: []
annotations: {}
update:
addNodes: false
currentNumberOfNodes: 6
currentNumberOfReplicas: 1
newExternalIPs: []
metrics:
enabled: false
image:
registry: docker.io
repository: bitnami/redis-exporter
tag: 1.43.0-debian-11-r19
digest: ''
pullPolicy: IfNotPresent
pullSecrets: []
resources: {}
extraArgs: {}
podAnnotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '9121'
podLabels: {}
containerSecurityContext:
enabled: false
allowPrivilegeEscalation: false
serviceMonitor:
enabled: false
namespace: ''
interval: ''
scrapeTimeout: ''
selector: {}
labels: {}
annotations: {}
jobLabel: ''
relabelings: []
metricRelabelings: []
prometheusRule:
enabled: false
additionalLabels: {}
namespace: ''
rules: []
priorityClassName: ''
service:
type: ClusterIP
clusterIP: ''
loadBalancerIP: ''
annotations: {}
labels: {}
sysctlImage:
enabled: false
command: []
registry: docker.io
repository: bitnami/bitnami-shell
tag: 11-debian-11-r23
digest: ''
pullPolicy: IfNotPresent
pullSecrets: []
mountHostSys: false
resources:
limits: {}
requests: {}
```
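With `values.yaml` prepared, the chart can also be installed from the command line; a rough sketch (release name and namespace are assumptions):

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis-cluster bitnami/redis-cluster -n redis-cluster --create-namespace -f values.yaml
```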
#### Verifying Redis Cluster



Connect to any of the IPs with a Redis client in cluster mode; the other master nodes in the cluster are then discovered automatically.
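For example, with `redis-cli` (IP and password taken from the values above):

```bash
# -c enables cluster mode, so redirections to other nodes are followed automatically
redis-cli -c -h 192.168.3.93 -p 6379 -a 'Klaus123@.'
```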
##### The client reports "(error) CLUSTERDOWN Hash slot not served"
Connect to any Pod:

```bash
redis-cli
```

View the cluster nodes:

```
cluster nodes
```

View the cluster state:

```
cluster info
```

Make the node discover the other nodes in the cluster:

```
cluster meet 10.233.123.91 6379
```
##### Repairing the hash slots
`redis-cli --cluster fix 10.233.123.91:6379`
### Installing Harbor
Change `expose.type` to `loadBalancer`, set `expose.tls.commonName` to `harbor` (required when the type is not `ingress`), set `expose.loadBalancer.IP` to `192.168.3.100`, and change `externalURL` to `https://harbor.wowaili.com`. The `harborAdminPassword` is `Klaus123@.@`.
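After those changes, the affected fragment of the chart values looks roughly like this (only the modified keys are shown):

```yaml
expose:
  type: loadBalancer
  tls:
    enabled: true
    commonName: "harbor"
  loadBalancer:
    name: harbor
    IP: "192.168.3.100"
externalURL: https://harbor.wowaili.com
harborAdminPassword: "Klaus123@.@"
```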
---
The original installation configuration file is shown below:

```yaml
expose:
# Set the way how to expose the service. Set the type as "ingress",
# "clusterIP", "nodePort" or "loadBalancer" and fill the information
# in the corresponding section
type: ingress
tls:
# Enable the tls or not. Note: if the type is "ingress" and the tls
# is disabled, the port must be included in the command when pull/push
# images. Refer to https://github.com/goharbor/harbor/issues/5291
# for the detail.
enabled: true
# Fill the name of secret if you want to use your own TLS certificate.
# The secret contains keys named:
# "tls.crt" - the certificate (required)
# "tls.key" - the private key (required)
# "ca.crt" - the certificate of CA (optional), this enables the download
# link on portal to download the certificate of CA
# These files will be generated automatically if the "secretName" is not set
secretName: ""
# By default, the Notary service will use the same cert and key as
# described above. Fill the name of secret if you want to use a
# separated one. Only needed when the type is "ingress".
notarySecretName: ""
# The common name used to generate the certificate, it's necessary
# when the type isn't "ingress" and "secretName" is null
commonName: ""
ingress:
hosts:
core: core.harbor.domain
notary: notary.harbor.domain
# set to the type of ingress controller if it has specific requirements.
# leave as `default` for most ingress controllers.
# set to `gce` if using the GCE ingress controller
# set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller
controller: default
annotations:
ingress.kubernetes.io/ssl-redirect: "true"
ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
clusterIP:
# The name of ClusterIP service
name: harbor
ports:
# The service port Harbor listens on when serving with HTTP
httpPort: 80
# The service port Harbor listens on when serving with HTTPS
httpsPort: 443
# The service port Notary listens on. Only needed when notary.enabled
# is set to true
notaryPort: 4443
nodePort:
# The name of NodePort service
name: harbor
ports:
http:
# The service port Harbor listens on when serving with HTTP
port: 80
# The node port Harbor listens on when serving with HTTP
nodePort: 30002
https:
# The service port Harbor listens on when serving with HTTPS
port: 443
# The node port Harbor listens on when serving with HTTPS
nodePort: 30003
# Only needed when notary.enabled is set to true
notary:
# The service port Notary listens on
port: 4443
# The node port Notary listens on
nodePort: 30004
loadBalancer:
# The name of LoadBalancer service
name: harbor
# Set the IP if the LoadBalancer supports assigning IP
IP: ""
ports:
# The service port Harbor listens on when serving with HTTP
httpPort: 80
# The service port Harbor listens on when serving with HTTPS
httpsPort: 443
# The service port Notary listens on. Only needed when notary.enabled
# is set to true
notaryPort: 4443
annotations: {}
sourceRanges: []
The external URL for Harbor core service. It is used to
1) populate the docker/helm commands showed on portal
2) populate the token service URL returned to docker/notary client
Format: protocol://domain[:port]. Usually:
1) if “expose.type” is “ingress”, the “domain” should be
the value of “expose.ingress.hosts.core”
2) if “expose.type” is “clusterIP”, the “domain” should be
the value of “expose.clusterIP.name”
3) if “expose.type” is “nodePort”, the “domain” should be
the IP address of k8s node
If Harbor is deployed behind the proxy, set it as the URL of proxy
externalURL: https://core.harbor.domain
The internal TLS used for harbor components secure communicating. In order to enable https
in each components tls cert files need to provided in advance.
internalTLS:
If internal TLS enabled
enabled: false
There are three ways to provide tls
1) “auto” will generate cert automatically
2) “manual” need provide cert file manually in following value
3) “secret” internal certificates from secret
certSource: "auto"
# The content of trust ca, only available when certSource is "manual"
trustCa: ""
core related cert configuration
core:
# secret name for core's tls certs
secretName: ""
# Content of core's TLS cert file, only available when `certSource` is "manual"
crt: ""
# Content of core's TLS key file, only available when `certSource` is "manual"
key: ""
jobservice related cert configuration
jobservice:
# secret name for jobservice's tls certs
secretName: ""
# Content of jobservice's TLS key file, only available when `certSource` is "manual"
crt: ""
# Content of jobservice's TLS key file, only available when `certSource` is "manual"
key: ""
registry related cert configuration
registry:
# secret name for registry's tls certs
secretName: ""
# Content of registry's TLS key file, only available when `certSource` is "manual"
crt: ""
# Content of registry's TLS key file, only available when `certSource` is "manual"
key: ""
portal related cert configuration
portal:
# secret name for portal's tls certs
secretName: ""
# Content of portal's TLS key file, only available when `certSource` is "manual"
crt: ""
# Content of portal's TLS key file, only available when `certSource` is "manual"
key: ""
chartmuseum related cert configuration
chartmuseum:
# secret name for chartmuseum's tls certs
secretName: ""
# Content of chartmuseum's TLS key file, only available when `certSource` is "manual"
crt: ""
# Content of chartmuseum's TLS key file, only available when `certSource` is "manual"
key: ""
clair related cert configuration
clair:
# secret name for clair's tls certs
secretName: ""
# Content of clair's TLS key file, only available when `certSource` is "manual"
crt: ""
# Content of clair's TLS key file, only available when `certSource` is "manual"
key: ""
trivy related cert configuration
trivy:
# secret name for trivy's tls certs
secretName: ""
# Content of trivy's TLS key file, only available when `certSource` is "manual"
crt: ""
# Content of trivy's TLS key file, only available when `certSource` is "manual"
key: ""
The persistence is enabled by default and a default StorageClass
is needed in the k8s cluster to provision volumes dynamicly.
Specify another StorageClass in the “storageClass” or set “existingClaim”
if you have already existing persistent volumes to use
For storing images and charts, you can also use “azure”, “gcs”, “s3”,
“swift” or “oss”. Set it in the “imageChartStorage” section
persistence:
enabled: true
# Setting it to "keep" to avoid removing PVCs during a helm delete
# operation. Leaving it empty will delete PVCs after the chart deleted
resourcePolicy: "keep"
persistentVolumeClaim:
registry:
# Use the existing PVC which must be created manually before bound,
# and specify the "subPath" if the PVC is shared with other components
existingClaim: ""
# Specify the "storageClass" used to provision the volume. Or the default
# StorageClass will be used(the default).
# Set it to "-" to disable dynamic provisioning
storageClass: ""
subPath: ""
accessMode: ReadWriteOnce
size: 5Gi
chartmuseum:
existingClaim: ""
storageClass: ""
subPath: ""
accessMode: ReadWriteOnce
size: 5Gi
jobservice:
existingClaim: ""
storageClass: ""
subPath: ""
accessMode: ReadWriteOnce
size: 1Gi
# If external database is used, the following settings for database will
# be ignored
database:
existingClaim: ""
storageClass: ""
subPath: ""
accessMode: ReadWriteOnce
size: 1Gi
# If external Redis is used, the following settings for Redis will
# be ignored
redis:
existingClaim: ""
storageClass: ""
subPath: ""
accessMode: ReadWriteOnce
size: 1Gi
trivy:
existingClaim: ""
storageClass: ""
subPath: ""
accessMode: ReadWriteOnce
size: 5Gi
Define which storage backend is used for registry and chartmuseum to store
images and charts. Refer to
https://github.com/docker/distribution/blob/master/docs/configuration.md#storage
for the detail.
imageChartStorage:
# Specify whether to disable `redirect` for images and chart storage, for
# backends which not supported it (such as using minio for `s3` storage type), please disable
# it. To disable redirects, simply set `disableredirect` to `true` instead.
# Refer to
# https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect
# for the detail.
disableredirect: false
# Specify the "caBundleSecretName" if the storage service uses a self-signed certificate.
# The secret must contain keys named "ca.crt" which will be injected into the trust store
# of registry's and chartmuseum's containers.
# caBundleSecretName:
# Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift",
# "oss" and fill the information needed in the corresponding section. The type
# must be "filesystem" if you want to use persistent volumes for registry
# and chartmuseum
type: filesystem
filesystem:
rootdirectory: /storage
#maxthreads: 100
azure:
accountname: accountname
accountkey: base64encodedaccountkey
container: containername
#realm: core.windows.net
gcs:
bucket: bucketname
# The base64 encoded json file which contains the key
encodedkey: base64-encoded-json-key-file
#rootdirectory: /gcs/object/name/prefix
#chunksize: "5242880"
s3:
region: us-west-1
bucket: bucketname
#accesskey: awsaccesskey
#secretkey: awssecretkey
#regionendpoint: http://myobjects.local
#encrypt: false
#keyid: mykeyid
#secure: true
#v4auth: true
#chunksize: "5242880"
#rootdirectory: /s3/object/name/prefix
#storageclass: STANDARD
swift:
authurl: https://storage.myprovider.com/v3/auth
username: username
password: password
container: containername
#region: fr
#tenant: tenantname
#tenantid: tenantid
#domain: domainname
#domainid: domainid
#trustid: trustid
#insecureskipverify: false
#chunksize: 5M
#prefix:
#secretkey: secretkey
#accesskey: accesskey
#authversion: 3
#endpointtype: public
#tempurlcontainerkey: false
#tempurlmethods:
oss:
accesskeyid: accesskeyid
accesskeysecret: accesskeysecret
region: regionname
bucket: bucketname
#endpoint: endpoint
#internal: false
#encrypt: false
#secure: true
#chunksize: 10M
#rootdirectory: rootdirectory
imagePullPolicy: IfNotPresent
Use this set to assign a list of default pullSecrets
imagePullSecrets:
- name: docker-registry-secret
- name: internal-registry-secret
The update strategy for deployments with persistent volumes(jobservice, registry
and chartmuseum): “RollingUpdate” or “Recreate”
Set it as “Recreate” when “RWM” for volumes isn't supported
updateStrategy:
type: RollingUpdate
debug, info, warning, error or fatal
logLevel: info
# The initial password of Harbor admin. Change it from portal after launching Harbor
harborAdminPassword: "Harbor12345"
# The secret key used for encryption. Must be a string of 16 chars.
secretKey: "not-a-secure-key"
The proxy settings for updating clair vulnerabilities from the Internet and replicating
artifacts from/to the registries that cannot be reached directly
proxy:
httpProxy:
httpsProxy:
noProxy: 127.0.0.1,localhost,.local,.internal
components:
- core
- jobservice
- clair
UAA Authentication Options
If you're using UAA for authentication behind a self-signed
certificate you will need to provide the CA Cert.
Set uaaSecretName below to provide a pre-created secret that
contains a base64 encoded CA Certificate named ca.crt
.
uaaSecretName:
If expose the service via “ingress”, the Nginx will not be used
nginx:
image:
repository: goharbor/nginx-photon
tag: v2.0.0
replicas: 1
resources:
requests:
memory: 256Mi
cpu: 100m
nodeSelector: {}
tolerations: []
affinity: {}
Additional deployment annotations
podAnnotations: {}
portal:
image:
repository: goharbor/harbor-portal
tag: v2.0.0
replicas: 1
resources:
requests:
memory: 256Mi
cpu: 100m
nodeSelector: {}
tolerations: []
affinity: {}
Additional deployment annotations
podAnnotations: {}
core:
image:
repository: goharbor/harbor-core
tag: v2.0.0
replicas: 1
Liveness probe values
livenessProbe:
initialDelaySeconds: 300
resources:
requests:
memory: 256Mi
cpu: 100m
nodeSelector: {}
tolerations: []
affinity: {}
Additional deployment annotations
podAnnotations: {}
# Secret is used when core server communicates with other components.
# If a secret key is not specified, Helm will generate one.
# Must be a string of 16 chars.
secret: ""
# Fill the name of a kubernetes secret if you want to use your own
# TLS certificate and private key for token encryption/decryption.
# The secret must contain keys named:
# "tls.crt" - the certificate
# "tls.key" - the private key
# The default key pair will be used if it isn't set
secretName: ""
# The XSRF key. Will be generated automatically if it isn't specified
xsrfKey: ""
jobservice:
image:
repository: goharbor/harbor-jobservice
tag: v2.0.0
replicas: 1
maxJobWorkers: 10
The logger for jobs: “file”, “database” or “stdout”
jobLogger: file
resources:
requests:
memory: 256Mi
cpu: 100m
nodeSelector: {}
tolerations: []
affinity: {}
Additional deployment annotations
podAnnotations: {}
# Secret is used when job service communicates with other components.
# If a secret key is not specified, Helm will generate one.
# Must be a string of 16 chars.
secret: ""
registry:
registry:
image:
repository: goharbor/registry-photon
tag: v2.0.0
# resources:
# requests:
# memory: 256Mi
# cpu: 100m
controller:
image:
repository: goharbor/harbor-registryctl
tag: v2.0.0
# resources:
# requests:
# memory: 256Mi
# cpu: 100m
replicas: 1
nodeSelector: {}
tolerations: []
affinity: {}
Additional deployment annotations
podAnnotations: {}
# Secret is used to secure the upload state from client
# and registry storage backend.
# See: https://github.com/docker/distribution/blob/master/docs/configuration.md#http
# If a secret key is not specified, Helm will generate one.
# Must be a string of 16 chars.
secret: ""
If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL.
relativeurls: false
credentials:
username: "harbor_registry_user"
password: "harbor_registry_password"
# If you update the username or password of registry, make sure use cli tool htpasswd to generate the bcrypt hash
# e.g. "htpasswd -nbBC10 $username $password"
htpasswd: "harbor_registry_user:$2y$10$9L4Tc0DJbFFMB6RdSCunrOpTHdwhid4ktBJmLD00bYgqkkGOvll3m"
middleware:
enabled: false
type: cloudFront
cloudFront:
baseurl: example.cloudfront.net
keypairid: KEYPAIRID
duration: 3000s
ipfilteredby: none
# The secret key that should be present is CLOUDFRONT_KEY_DATA, which should be the encoded private key
# that allows access to CloudFront
privateKeySecret: "my-secret"
chartmuseum:
enabled: true
Harbor defaults ChartMuseum to returning relative urls, if you want using absolute url you should enable it by change the following value to 'true'
absoluteUrl: false
image:
repository: goharbor/chartmuseum-photon
tag: v2.0.0
replicas: 1
resources:
requests:
memory: 256Mi
cpu: 100m
nodeSelector: {}
tolerations: []
affinity: {}
Additional deployment annotations
podAnnotations: {}
clair:
enabled: true
clair:
image:
repository: goharbor/clair-photon
tag: v2.0.0
# resources:
# requests:
# memory: 256Mi
# cpu: 100m
adapter:
image:
repository: goharbor/clair-adapter-photon
tag: v2.0.0
# resources:
# requests:
# memory: 256Mi
# cpu: 100m
replicas: 1
The interval of clair updaters, the unit is hour, set to 0 to
disable the updaters
updatersInterval: 12
nodeSelector: {}
tolerations: []
affinity: {}
Additional deployment annotations
podAnnotations: {}
trivy:
enabled the flag to enable Trivy scanner
enabled: true
image:
# repository the repository for Trivy adapter image
repository: goharbor/trivy-adapter-photon
# tag the tag for Trivy adapter image
tag: v2.0.0
replicas the number of Pod replicas
replicas: 1
debugMode the flag to enable Trivy debug mode with more verbose scanning log
debugMode: false
# vulnType a comma-separated list of vulnerability types. Possible values are `os` and `library`.
vulnType: "os,library"
# severity a comma-separated list of severities to be checked
severity: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL"
ignoreUnfixed the flag to display only fixed vulnerabilities
ignoreUnfixed: false
insecure the flag to skip verifying registry certificate
insecure: false
gitHubToken the GitHub access token to download Trivy DB
Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
in the local file system (/home/scanner/.cache/trivy/db/trivy.db
). In addition, the database contains the update
timestamp so Trivy can detect whether it should download a newer version from the Internet or use the cached one.
Currently, the database is updated every 12 hours and published as a new release to GitHub.
Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
https://developer.github.com/v3/#rate-limiting
You can create a GitHub token by following the instructions in
https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
gitHubToken: ""
# skipUpdate the flag to disable Trivy DB downloads from GitHub
# You might want to set the value of this flag to `true` in test or CI/CD environments
# to avoid GitHub rate limiting issues. If the value is set to `true` you have to manually
# download the `trivy.db` file and mount it in the /home/scanner/.cache/trivy/db/trivy.db path.
skipUpdate: false
resources:
requests:
cpu: 200m
memory: 512Mi
limits:
cpu: 1
memory: 1Gi
Additional deployment annotations
podAnnotations: {}
notary:
enabled: true
server:
image:
repository: goharbor/notary-server-photon
tag: v2.0.0
replicas: 1
# resources:
# requests:
# memory: 256Mi
# cpu: 100m
signer:
image:
repository: goharbor/notary-signer-photon
tag: v2.0.0
replicas: 1
# resources:
# requests:
# memory: 256Mi
# cpu: 100m
nodeSelector: {}
tolerations: []
affinity: {}
Additional deployment annotations
podAnnotations: {}
# Fill the name of a kubernetes secret if you want to use your own
# TLS certificate authority, certificate and private key for notary communications.
# The secret must contain keys named ca.crt, tls.crt and tls.key that
# contain the CA, certificate and private key. They will be generated if not set.
secretName: ""
database:
if external database is used, set “type” to “external”
and fill the connection informations in “external” section
type: internal
internal:
image:
repository: goharbor/harbor-db
tag: v2.0.0
# the image used by the init container
initContainerImage:
repository: busybox
tag: latest
# The initial superuser password for internal database
password: "changeit"
# resources:
# requests:
# memory: 256Mi
# cpu: 100m
nodeSelector: {}
tolerations: []
affinity: {}
external:
host: "192.168.0.1"
port: "5432"
username: "user"
password: "password"
coreDatabase: "registry"
clairDatabase: "clair"
notaryServerDatabase: "notary_server"
notarySignerDatabase: "notary_signer"
# "disable" - No SSL
# "require" - Always SSL (skip verification)
# "verify-ca" - Always SSL (verify that the certificate presented by the
# server was signed by a trusted CA)
# "verify-full" - Always SSL (verify that the certification presented by the
# server was signed by a trusted CA and the server host name matches the one
# in the certificate)
sslmode: "disable"
The maximum number of connections in the idle connection pool.
If it <=0, no idle connections are retained.
maxIdleConns: 50
The maximum number of open connections to the database.
If it <= 0, then there is no limit on the number of open connections.
Note: the default number of connections is 100 for postgre.
maxOpenConns: 100
Additional deployment annotations
podAnnotations: {}
redis:
if external Redis is used, set “type” to “external”
and fill the connection informations in “external” section
type: internal
internal:
image:
repository: goharbor/redis-photon
tag: v2.0.0
# resources:
# requests:
# memory: 256Mi
# cpu: 100m
nodeSelector: {}
tolerations: []
affinity: {}
external:
host: "192.168.0.2"
port: "6379"
# The "coreDatabaseIndex" must be "0" as the library Harbor
# used doesn't support configuring it
coreDatabaseIndex: "0"
jobserviceDatabaseIndex: "1"
registryDatabaseIndex: "2"
chartmuseumDatabaseIndex: "3"
clairAdapterIndex: "4"
trivyAdapterIndex: "5"
password: ""
# Additional deployment annotations
podAnnotations: {}
```
### Nacos cluster
1. Deploy the Nacos cluster as a stateful service. A Nacos cluster deployment needs its members configured as IP:Port; to keep those addresses stable when a Pod is rescheduled and its IP changes, the members are addressed by DNS name instead, which is why a stateful service (StatefulSet plus headless Service) is required.
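A minimal sketch of the idea, with illustrative names: a headless Service gives each StatefulSet Pod a stable DNS name such as `nacos-0.nacos-headless.nacos.svc.cluster.local`, which can go into the Nacos cluster member list instead of Pod IPs.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nacos-headless
  namespace: nacos
spec:
  clusterIP: None          # headless: each Pod gets a stable DNS record
  selector:
    app: nacos
  ports:
  - name: client
    port: 8848
```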
## DevOps
> The KubeSphere DevOps system, based on Jenkins, is designed for CI/CD workflows on Kubernetes. It provides a one-stop solution that helps development and operations teams build, test, and release applications to Kubernetes in a very simple way. It also offers plugin management, Binary-to-Image (B2I), Source-to-Image (S2I), dependency caching, code quality analysis, pipeline logs, and more.
### Enabling DevOps
DevOps can be enabled in two ways: `before installation` and `after installation`.
#### Enabling before installation
##### Installing KubeSphere with DevOps enabled
In [[KubeSphere多节点安装]], a `config-sample.yaml` file is generated when the cluster is created.

In that file, search for `devops` and change `enabled` from `false` to `true`, then save the file.
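The relevant fragment of `config-sample.yaml` then looks like this:

```yaml
devops:
  enabled: true
```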
If KubeSphere is not installed yet:

```bash
./kk create cluster -f config-sample.yaml
```

If KubeSphere is already installed:

```bash
kubectl apply -f config-sample.yaml
```
##### Installing and enabling on an existing Kubernetes cluster
1. Download the `cluster-configuration.yaml` file:

   ```bash
   wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
   ```
2. Edit `cluster-configuration.yaml` and change the `devops` field to `true`:
   ```yaml
   devops:
     enabled: true
     # resources: {}
     jenkinsMemoryLim: 2Gi
     jenkinsMemoryReq: 1500Mi
     jenkinsVolumeSize: 8Gi
     jenkinsJavaOpts_Xms: 1200m
     jenkinsJavaOpts_Xmx: 1600m
     jenkinsJavaOpts_MaxRAM: 2g
   ```
3. Run the installation command (this step can be skipped if KubeSphere is already installed):

   ```bash
   kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
   ```

   Update KubeSphere:

   ```bash
   kubectl apply -f cluster-configuration.yaml
   ```
#### Enabling after installation
1. Log in to the console as `admin`, click Platform in the upper-left corner, and choose Cluster Management.
2. Click **CRDs** and enter `clusterconfiguration` in the search bar.
3. Under **Custom Resources**, find `ks-installer`, click More, and choose Edit **YAML**.
4. Search for `devops`, change its value to `true`, and save.
5. Check the installation progress with `kubectl`:

   ```bash
   kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
   ```
### Creating a DevOps project
Go to the corresponding workspace, choose `DevOps Project`, and click `Create`.


#### Creating a pipeline




Tip: when **creating the credential**, the **username** to fill in is the `gitlab` username, and the **private key** is the **private key** matching the `SSH` public key registered in `gitlab`.

#### Custom PodTemplates
##### Custom maven-jdk11 and nodejs16 agents
1. Go to the `kubesphere-devops-system` namespace, open `ConfigMaps`, and edit the YAML of `jenkins-casc-config`.
2. In the `jenkins_user.yaml` section, add the following under `jenkins.clouds.kubernetes.templates`:
```yaml
- name: "nodejs16"
  label: "nodejs nodejs16"
  inheritFrom: "nodejs"
  containers:
  - name: "nodejs"
    image: "ccr.ccs.tencentyun.com/wowaili/build-nodejs:V16"
- name: "mavenjdk11"
  label: "jdk11 maven java"
  inheritFrom: "maven"
  containers:
  - name: "maven"
    image: "kubespheredev/builder-maven:v3.2.0jdk11"
```
When using these templates, remember to change `agent > node > label` in the pipeline to the custom label, for example:
```groovy
pipeline {
  agent {
    node {
      label 'maven && jdk11'
    }
  }
  stages {
    stage('拉取 & 编译代码') {
      agent none
      steps {
        container('maven') {
          sh 'java -version && mvn -v'
          git(url: 'ssh://[email protected]:2222/wowaili/shortkey-console.git', credentialsId: 'gitlab-ssh', branch: "$BRANCH_NAME", changelog: true, poll: false)
          sh 'mvn clean package -DskipTests=true && ls -all'
        }
      }
    }
    stage('构建 & 推送镜像') {
      agent none
      steps {
        container('maven') {
          withCredentials([usernamePassword(credentialsId: "$DOCKER_CREDENTIAL_ID", passwordVariable: 'DOCKER_PASSWORD', usernameVariable: 'DOCKER_USERNAME')]) {
            sh 'echo "$DOCKER_PASSWORD" | docker login $REGISTRY -u "$DOCKER_USERNAME" --password-stdin'
            sh 'docker build -f Dockerfile -t $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER .'
            sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER'
          }
        }
      }
    }
    stage('部署') {
      agent none
      steps {
        container('maven') {
          withCredentials([kubeconfigFile(credentialsId: env.KUBECONFIG_CREDENTIAL_ID, variable: 'KUBECONFIG')]) {
            sh 'envsubst < "$DEPLOY_YML" | kubectl apply -f -'
          }
        }
      }
    }
  }
  environment {
    DOCKER_CREDENTIAL_ID = 'harbor-registry-secret'
    KUBECONFIG_CREDENTIAL_ID = 'kubeconfig'
    REGISTRY = 'harbor.wowaili.com'
    DOCKERHUB_NAMESPACE = 'wowaili'
    APP_NAME = 'wowaili-shortkey'
    SONAR_CREDENTIAL_ID = 'sonar-token'
    DEPLOY_NAME = "${BRANCH_NAME.equals('main') ? 'prod' : BRANCH_NAME}"
    DEPLOY_YML = "deploy/deploy-" + "${DEPLOY_NAME}" + ".yml"
  }
}
```
## shortkey network
## Namespace stuck in Terminating
### Background
While using KubeSphere you always run into some project you just want to try out. After installing it, you find you don't actually want to use it after all (maybe I just wanted to see what it was). So deleting it is its fate, and in KubeSphere I went for a forced delete:

```bash
kubectl delete ns halojjnwm --force --grace-period=0
```

Five minutes later I listed all namespaces, and the thing was still there. Thinking the page might just be cached, I switched to the console to check. Nope, still there. Unbearable.
### How to delete it for good
1. Get the namespace details as JSON:

   ```bash
   kubectl get ns halojjnwm -o json > halojjnwm.json
   ```

   Running the command creates the JSON file named in the command (here `halojjnwm.json`) in the current directory.
2. Edit the corresponding field in the JSON[^1] file: remove the value of the `spec.finalizers` field (delete the `kubernetes` line) and save.
3. Run the cleanup command. Now run the command below and witness the miracle:

   ```bash
   kubectl replace --raw "/api/v1/namespaces/halojjnwm/finalize" -f ./halojjnwm.json
   ```
When the command comes back with the namespace object as its result, the miracle has happened.
[^1]: The complete JSON file:

```json
{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"annotations": {
"kubesphere.io/creator": "wowaili-workspaces-manager"
},
"creationTimestamp": "2022-09-01T10:46:37Z",
"finalizers": [
"finalizers.kubesphere.io/namespaces"
],
"labels": {
"kubernetes.io/metadata.name": "halojjnwm",
"kubesphere.io/devopsproject": "halojjnwm",
"kubesphere.io/namespace": "halojjnwm"
},
"name": "halojjnwm",
"ownerReferences": [
{
"apiVersion": "devops.kubesphere.io/v1alpha3",
"blockOwnerDeletion": true,
"controller": true,
"kind": "DevOpsProject",
"name": "halojjnwm",
"uid": "0d6bdad7-537a-405e-b2a4-1bf1b7012599"
}
],
"resourceVersion": "94687",
"uid": "1f1fbd02-d56e-46a6-9414-c145107ce61d"
},
"spec": {
"finalizers": [
"kubernetes"
]
},
"status": {
"phase": "Active"
}
}
```