[K8S] Building a Continuous Integration and Delivery Environment with Docker + K8S + GitLab/SVN + Jenkins + Harbor (Environment Setup)


Preface

Recently I set up a DevOps environment on a K8S 1.18.2 cluster and ran into all kinds of pitfalls along the way. All of the problems encountered during the setup have now been resolved, so I am recording the process here and sharing it with you.
You can download the required yaml files from https://download.csdn.net/download/l1028386804/12579236

Server Planning

IP Hostname Node Operating System
192.168.175.101 binghe101 K8S Master CentOS 8.0.1905
192.168.175.102 binghe102 K8S Worker CentOS 8.0.1905
192.168.175.103 binghe103 K8S Worker CentOS 8.0.1905

Software Versions

Software Version Description
Docker 19.03.8 Provides the container runtime environment
docker-compose 1.25.5 Defines and runs applications composed of multiple containers
K8S 1.18.2 An open-source system for managing containerized applications across multiple hosts. Kubernetes aims to make deploying containerized applications simple and efficient, and provides mechanisms for application deployment, scheduling, updating and maintenance.
GitLab 12.1.6 Code repository (install either GitLab or SVN)
Harbor 1.10.2 Private image registry
Jenkins 2.89.3 Continuous integration and delivery
SVN 1.10.2 Code repository (install either SVN or GitLab)
JDK 1.8.0_212 Base Java runtime environment
maven 3.6.3 Base tool for building the project

Passwordless SSH Between Servers

Run the following commands on each server.

  1. ssh-keygen -t rsa
  2. cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Copy the id_rsa.pub files from the binghe102 and binghe103 servers to the binghe101 server.

  1. [root@binghe102 ~]# scp .ssh/id_rsa.pub binghe101:/root/.ssh/102
  2. [root@binghe103 ~]# scp .ssh/id_rsa.pub binghe101:/root/.ssh/103

Run the following commands on the binghe101 server.

  1. cat ~/.ssh/102 >> ~/.ssh/authorized_keys
  2. cat ~/.ssh/103 >> ~/.ssh/authorized_keys

Then copy the authorized_keys file to the binghe102 and binghe103 servers.

  1. [root@binghe101 ~]# scp .ssh/authorized_keys binghe102:/root/.ssh/authorized_keys
  2. [root@binghe101 ~]# scp .ssh/authorized_keys binghe103:/root/.ssh/authorized_keys

Delete the 102 and 103 files under ~/.ssh on the binghe101 node.

  1. rm ~/.ssh/102
  2. rm ~/.ssh/103
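
To confirm that passwordless login works, a quick optional check from binghe101 (the hostnames are the ones from the server planning table):

  1. ssh binghe102 hostname
  2. ssh binghe103 hostname

If no password prompt appears and the remote hostnames are printed, the configuration is correct.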

Installing the JDK

The JDK needs to be installed on every server. Download the JDK from the official Oracle site; the version I use here is 1.8.0_212. After downloading, extract it and configure the system environment variables.

  1. tar -zxvf jdk1.8.0_212.tar.gz
  2. mv jdk1.8.0_212 /usr/local

Next, configure the system environment variables.

  1. vim /etc/profile

The configuration is as follows.

  1. JAVA_HOME=/usr/local/jdk1.8.0_212
  2. CLASS_PATH=.:$JAVA_HOME/lib
  3. PATH=$JAVA_HOME/bin:$PATH
  4. export JAVA_HOME CLASS_PATH PATH

Next, run the following command to make the environment variables take effect.

  1. source /etc/profile

Installing Maven

Download Maven from the official Apache site; the version I use here is 3.6.3. After downloading, extract it and configure the system environment variables.

  1. tar -zxvf apache-maven-3.6.3-bin.tar.gz
  2. mv apache-maven-3.6.3 /usr/local

Next, configure the system environment variables.

  1. vim /etc/profile

The configuration is as follows.

  1. JAVA_HOME=/usr/local/jdk1.8.0_212
  2. MAVEN_HOME=/usr/local/apache-maven-3.6.3
  3. CLASS_PATH=.:$JAVA_HOME/lib
  4. PATH=$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH
  5. export JAVA_HOME CLASS_PATH MAVEN_HOME PATH

Next, run the following command to make the environment variables take effect.

  1. source /etc/profile

Next, modify Maven's configuration file as follows.

  1. <localRepository>/home/repository</localRepository>

This stores the JAR packages that Maven downloads under the /home/repository directory.
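As a reference, this element goes inside the top-level <settings> element of Maven's settings.xml; a minimal sketch, assuming Maven was unpacked to /usr/local/apache-maven-3.6.3 as above (adjust the path to your installation):

  1. vim /usr/local/apache-maven-3.6.3/conf/settings.xml
  2. # add the following line inside <settings> ... </settings>
  3. <localRepository>/home/repository</localRepository>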

Installing Docker

This document sets up the Docker environment based on Docker 19.03.8.

Create an install_docker.sh script on all servers with the following content.

  1. export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
  2. dnf install yum*
  3. yum install -y yum-utils device-mapper-persistent-data lvm2
  4. yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  5. dnf install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
  6. yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8
  7. systemctl enable docker.service
  8. systemctl start docker.service
  9. docker version

Make the install_docker.sh script executable on each server and run it.
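
For example, from the directory containing the script:

  1. chmod a+x ./install_docker.sh
  2. ./install_docker.sh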

Installing docker-compose

Note: install docker-compose on every server.

1. Download the docker-compose binary

  1. curl -L https://github.com/docker/compose/releases/download/1.25.5/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

2. Make the docker-compose binary executable

  1. chmod a+x /usr/local/bin/docker-compose

3. Check the docker-compose version

  1. [root@binghe ~]# docker-compose version
  2. docker-compose version 1.25.5,build 8a1c60f6
  3. docker-py version: 4.1.0
  4. CPython version: 3.7.5
  5. OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019

Installing the K8S Cluster

This document builds the K8S cluster based on K8S 1.18.2.

Install the K8S base environment

Create an install_k8s.sh script file on all servers with the following content.

  1. # Configure the Aliyun image registry mirror
  2. mkdir -p /etc/docker
  3. tee /etc/docker/daemon.json <<-'EOF'
  4. {
  5. "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"]
  6. }
  7. EOF
  8. systemctl daemon-reload
  9. systemctl restart docker
  10. # Install nfs-utils
  11. yum install -y nfs-utils
  12. yum install -y wget
  13. # Start nfs-server
  14. systemctl start nfs-server
  15. systemctl enable nfs-server
  16. # Disable the firewall
  17. systemctl stop firewalld
  18. systemctl disable firewalld
  19. # Disable SELinux
  20. setenforce 0
  21. sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
  22. # Disable swap
  23. swapoff -a
  24. yes | cp /etc/fstab /etc/fstab_bak
  25. cat /etc/fstab_bak |grep -v swap > /etc/fstab
  26. # Modify /etc/sysctl.conf
  27. # If the entries already exist, modify them in place
  28. sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
  29. sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
  30. sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
  31. sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
  32. sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
  33. sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
  34. sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
  35. # If the entries do not exist yet, append them
  36. echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
  37. echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
  38. echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
  39. echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
  40. echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
  41. echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
  42. echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
  43. # Apply the settings
  44. sysctl -p
  45. # Configure the K8S yum repository
  46. cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  47. [kubernetes]
  48. name=Kubernetes
  49. baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
  50. enabled=1
  51. gpgcheck=0
  52. repo_gpgcheck=0
  53. gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
  54. http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  55. EOF
  56. # Remove old versions of K8S
  57. yum remove -y kubelet kubeadm kubectl
  58. # Install kubelet, kubeadm and kubectl; version 1.18.2 is installed here, 1.17.2 also works
  59. yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2
  60. # Change the docker cgroup driver to systemd
  61. # # In /usr/lib/systemd/system/docker.service, change the line ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
  62. # # to ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
  63. # Without this change, adding worker nodes may fail with the following error
  64. # [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
  65. # Please follow the guide at https://kubernetes.io/docs/setup/cri/
  66. sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service
  67. # Set a docker registry mirror to speed up and stabilize image pulls
  68. # If access to https://hub.docker.io is already fast and stable, this step can be skipped
  69. # curl -sSL https://kuboard.cn/install-script/set_mirror.sh | sh -s ${REGISTRY_MIRROR}
  70. # Restart docker and start kubelet
  71. systemctl daemon-reload
  72. systemctl restart docker
  73. systemctl enable kubelet && systemctl start kubelet
  74. docker version

Make the install_k8s.sh script executable on each server and run it.

Initializing the Master Node

The following operations are performed only on the binghe101 server.

1. Initialize the Master node's network settings

Note: the commands below need to be run manually on the command line.

  1. # Run only on the master node
  2. # export only takes effect in the current shell session; if you open a new shell to continue the installation, re-run these export commands
  3. export MASTER_IP=192.168.175.101
  4. # Replace k8s.master with the dnsName you want
  5. export APISERVER_NAME=k8s.master
  6. # The subnet used by Kubernetes pods; it is created by Kubernetes after installation and does not need to exist on the physical network beforehand
  7. export POD_SUBNET=172.18.0.1/16
  8. echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts

2. Initialize the Master node

Create an init_master.sh script file on the binghe101 server with the following content.

  1. #!/bin/bash
  2. # Abort the script on any error
  3. set -e
  4. if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
  5.   echo -e "\033[31;1mPlease make sure the environment variables POD_SUBNET and APISERVER_NAME are set \033[0m"
  6.   echo "current POD_SUBNET=$POD_SUBNET"
  7.   echo "current APISERVER_NAME=$APISERVER_NAME"
  8.   exit 1
  9. fi
  10. # See https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2 for the full set of configuration options
  11. rm -f ./kubeadm-config.yaml
  12. cat <<EOF > ./kubeadm-config.yaml
  13. apiVersion: kubeadm.k8s.io/v1beta2
  14. kind: ClusterConfiguration
  15. kubernetesVersion: v1.18.2
  16. imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
  17. controlPlaneEndpoint: "${APISERVER_NAME}:6443"
  18. networking:
  19.   serviceSubnet: "10.96.0.0/16"
  20.   podSubnet: "${POD_SUBNET}"
  21.   dnsDomain: "cluster.local"
  22. EOF
  23. # kubeadm init
  24. # Depending on your network speed, this takes about 3 - 10 minutes
  25. kubeadm init --config=kubeadm-config.yaml --upload-certs
  26. # Configure kubectl
  27. rm -rf /root/.kube/
  28. mkdir /root/.kube/
  29. cp -i /etc/kubernetes/admin.conf /root/.kube/config
  30. # Install the calico network plugin
  31. # Reference: https://docs.projectcalico.org/v3.13/getting-started/kubernetes/self-managed-onprem/onpremises
  32. echo "Installing calico-3.13.1"
  33. rm -f calico-3.13.1.yaml
  34. wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
  35. kubectl apply -f calico-3.13.1.yaml

Make the init_master.sh script executable and run it.

3. Check the result of the Master node initialization

(1) Make sure all pods are in the Running state

  1. # Run the following command and wait 3-10 minutes until all pods are in the Running state
  2. watch kubectl get pod -n kube-system -o wide

The output is shown below.

  1. [root@binghe101 ~]# watch kubectl get pod -n kube-system -o wide
  2. Every 2.0s: kubectl get pod -n kube-system -o wide binghe101: Sun May 10 11:01:32 2020
  3. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  4. calico-kube-controllers-5b8b769fcd-5dtlp 1/1 Running 0 118s 172.18.203.66 binghe101 <none> <none>
  5. calico-node-fnv8g 1/1 Running 0 118s 192.168.175.101 binghe101 <none> <none>
  6. coredns-546565776c-27t7h 1/1 Running 0 2m1s 172.18.203.67 binghe101 <none> <none>
  7. coredns-546565776c-hjb8z 1/1 Running 0 2m1s 172.18.203.65 binghe101 <none> <none>
  8. etcd-binghe101 1/1 Running 0 2m7s 192.168.175.101 binghe101 <none> <none>
  9. kube-apiserver-binghe101 1/1 Running 0 2m7s 192.168.175.101 binghe101 <none> <none>
  10. kube-controller-manager-binghe101 1/1 Running 0 2m7s 192.168.175.101 binghe101 <none> <none>
  11. kube-proxy-dvgsr 1/1 Running 0 2m1s 192.168.175.101 binghe101 <none> <none>
  12. kube-scheduler-binghe101 1/1 Running 0 2m7s 192.168.175.101 binghe101 <none> <none>

(2) Check the Master node status

  1. kubectl get nodes -o wide

The output is shown below.

  1. [root@binghe101 ~]# kubectl get nodes -o wide
  2. NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
  3. binghe101 Ready master 3m28s v1.18.2 192.168.175.101 <none> CentOS Linux 8 (Core) 4.18.0-80.el8.x86_64 docker://19.3.8

Initializing the Worker Nodes

1. Get the join command

Run the following command on the Master node (binghe101 server) to get the join command.

  1. kubeadm token create --print-join-command

The output is shown below.

  1. [root@binghe101 ~]# kubeadm token create --print-join-command
  2. W0510 11:04:34.828126 56132 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  3. kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2 --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d

The output contains the following line:

  1. kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2 --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d

This line is the join command.

Note: the token in the join command is valid for 2 hours; within those 2 hours you can use it to initialize any number of worker nodes.
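
If you expect to add worker nodes more than 2 hours later, kubeadm can also issue a token that does not expire; a hedged example (run on the master node):

  1. kubeadm token create --ttl 0 --print-join-command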

2. Initialize the Worker nodes

Run the following on all worker nodes, i.e. on the binghe102 and binghe103 servers.

Run the commands below manually on the command line.

  1. # Run only on the worker nodes
  2. # 192.168.175.101 is the internal IP of the master node
  3. export MASTER_IP=192.168.175.101
  4. # Replace k8s.master with the APISERVER_NAME used when initializing the master node
  5. export APISERVER_NAME=k8s.master
  6. echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
  7. # Replace with the join command output by kubeadm token create on the master node
  8. kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2 --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d

The output is shown below.

  1. [root@binghe102 ~]# export MASTER_IP=192.168.175.101
  2. [root@binghe102 ~]# export APISERVER_NAME=k8s.master
  3. [root@binghe102 ~]# echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
  4. [root@binghe102 ~]# kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2 --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d
  5. W0510 11:08:27.709263 42795 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
  6. [preflight] Running pre-flight checks
  7. [WARNING FileExisting-tc]: tc not found in system path
  8. [preflight] Reading configuration from the cluster...
  9. [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  10. [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
  11. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  12. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  13. [kubelet-start] Starting the kubelet
  14. [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
  15. This node has joined the cluster:
  16. * Certificate signing request was sent to apiserver and a response was received.
  17. * The Kubelet was informed of the new secure connection details.
  18. Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The output shows that the worker node has joined the K8S cluster.

Note: kubeadm join ... is the join command output by the kubeadm token create command on the master node.

3. Check the result

Run the following command on the Master node (binghe101 server) to check the result.

  1. kubectl get nodes -o wide

The output is shown below.

  1. [root@binghe101 ~]# kubectl get nodes
  2. NAME STATUS ROLES AGE VERSION
  3. binghe101 Ready master 20m v1.18.2
  4. binghe102 Ready <none> 2m46s v1.18.2
  5. binghe103 Ready <none> 2m46s v1.18.2

Note: adding the -o wide parameter to the kubectl get nodes command prints more information.

Issues Caused by Restarting the K8S Cluster

1. Worker nodes fail to start

If the Master node's IP address changes, the worker nodes cannot start. You need to reinstall the K8S cluster and make sure all nodes have fixed internal IP addresses.

2. Pods crash or cannot be accessed

After restarting the servers, check the running state of the pods with the following command.

  1. kubectl get pods --all-namespaces

If many pods are not in the Running state, delete the abnormal pods with the following command.

  1. kubectl delete pod <pod-name> -n <pod-namespace>

Note: if a pod was created by a controller such as a Deployment or StatefulSet, K8S will create a new pod to replace it, and the restarted pod will usually work correctly.
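
If many pods are affected, the deletion can be scripted. A rough sketch that deletes every pod not in the Running state across all namespaces (review the pod list first; controller-managed pods will simply be recreated):

  1. kubectl get pods --all-namespaces | grep -v NAME | grep -v Running | awk '{print $2" -n "$1}' | xargs -L1 kubectl delete pod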

Installing ingress-Nginx on K8S

Note: run the following on the Master node (binghe101 server).

1. Create the ingress-Nginx namespace

Create an ingress-Nginx-namespace.yaml file with the following content.

  1. apiVersion: v1
  2. kind: Namespace
  3. Metadata:
  4. name: ingress-Nginx
  5. labels:
  6. name: ingress-Nginx

Run the following command to create the ingress-Nginx namespace.

  1. kubectl apply -f ingress-Nginx-namespace.yaml

2. Install the ingress controller

Create an ingress-Nginx-mandatory.yaml file with the following content.

  1. apiVersion: v1
  2. kind: Namespace
  3. Metadata:
  4. name: ingress-Nginx
  5. ---
  6. apiVersion: apps/v1
  7. kind: Deployment
  8. Metadata:
  9. name: default-http-backend
  10. labels:
  11. app.kubernetes.io/name: default-http-backend
  12. app.kubernetes.io/part-of: ingress-Nginx
  13. namespace: ingress-Nginx
  14. spec:
  15. replicas: 1
  16. selector:
  17. matchLabels:
  18. app.kubernetes.io/name: default-http-backend
  19. app.kubernetes.io/part-of: ingress-Nginx
  20. template:
  21. Metadata:
  22. labels:
  23. app.kubernetes.io/name: default-http-backend
  24. app.kubernetes.io/part-of: ingress-Nginx
  25. spec:
  26. terminationGracePeriodSeconds: 60
  27. containers:
  28. - name: default-http-backend
  29. # Any image is permissible as long as:
  30. # 1. It serves a 404 page at /
  31. # 2. It serves 200 on a /healthz endpoint
  32. image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
  33. livenessProbe:
  34. httpGet:
  35. path: /healthz
  36. port: 8080
  37. scheme: HTTP
  38. initialDelaySeconds: 30
  39. timeoutSeconds: 5
  40. ports:
  41. - containerPort: 8080
  42. resources:
  43. limits:
  44. cpu: 10m
  45. memory: 20Mi
  46. requests:
  47. cpu: 10m
  48. memory: 20Mi
  49. ---
  50. apiVersion: v1
  51. kind: Service
  52. Metadata:
  53. name: default-http-backend
  54. namespace: ingress-Nginx
  55. labels:
  56. app.kubernetes.io/name: default-http-backend
  57. app.kubernetes.io/part-of: ingress-Nginx
  58. spec:
  59. ports:
  60. - port: 80
  61. targetPort: 8080
  62. selector:
  63. app.kubernetes.io/name: default-http-backend
  64. app.kubernetes.io/part-of: ingress-Nginx
  65. ---
  66. kind: ConfigMap
  67. apiVersion: v1
  68. Metadata:
  69. name: Nginx-configuration
  70. namespace: ingress-Nginx
  71. labels:
  72. app.kubernetes.io/name: ingress-Nginx
  73. app.kubernetes.io/part-of: ingress-Nginx
  74. ---
  75. kind: ConfigMap
  76. apiVersion: v1
  77. Metadata:
  78. name: tcp-services
  79. namespace: ingress-Nginx
  80. labels:
  81. app.kubernetes.io/name: ingress-Nginx
  82. app.kubernetes.io/part-of: ingress-Nginx
  83. ---
  84. kind: ConfigMap
  85. apiVersion: v1
  86. Metadata:
  87. name: udp-services
  88. namespace: ingress-Nginx
  89. labels:
  90. app.kubernetes.io/name: ingress-Nginx
  91. app.kubernetes.io/part-of: ingress-Nginx
  92. ---
  93. apiVersion: v1
  94. kind: ServiceAccount
  95. Metadata:
  96. name: Nginx-ingress-serviceaccount
  97. namespace: ingress-Nginx
  98. labels:
  99. app.kubernetes.io/name: ingress-Nginx
  100. app.kubernetes.io/part-of: ingress-Nginx
  101. ---
  102. apiVersion: rbac.authorization.k8s.io/v1beta1
  103. kind: ClusterRole
  104. Metadata:
  105. name: Nginx-ingress-clusterrole
  106. labels:
  107. app.kubernetes.io/name: ingress-Nginx
  108. app.kubernetes.io/part-of: ingress-Nginx
  109. rules:
  110. - apiGroups:
  111. - ""
  112. resources:
  113. - configmaps
  114. - endpoints
  115. - nodes
  116. - pods
  117. - secrets
  118. verbs:
  119. - list
  120. - watch
  121. - apiGroups:
  122. - ""
  123. resources:
  124. - nodes
  125. verbs:
  126. - get
  127. - apiGroups:
  128. - ""
  129. resources:
  130. - services
  131. verbs:
  132. - get
  133. - list
  134. - watch
  135. - apiGroups:
  136. - "extensions"
  137. resources:
  138. - ingresses
  139. verbs:
  140. - get
  141. - list
  142. - watch
  143. - apiGroups:
  144. - ""
  145. resources:
  146. - events
  147. verbs:
  148. - create
  149. - patch
  150. - apiGroups:
  151. - "extensions"
  152. resources:
  153. - ingresses/status
  154. verbs:
  155. - update
  156. ---
  157. apiVersion: rbac.authorization.k8s.io/v1beta1
  158. kind: Role
  159. Metadata:
  160. name: Nginx-ingress-role
  161. namespace: ingress-Nginx
  162. labels:
  163. app.kubernetes.io/name: ingress-Nginx
  164. app.kubernetes.io/part-of: ingress-Nginx
  165. rules:
  166. - apiGroups:
  167. - ""
  168. resources:
  169. - configmaps
  170. - pods
  171. - secrets
  172. - namespaces
  173. verbs:
  174. - get
  175. - apiGroups:
  176. - ""
  177. resources:
  178. - configmaps
  179. resourceNames:
  180. # Defaults to "<election-id>-<ingress-class>"
  181. # Here: "<ingress-controller-leader>-<Nginx>"
  182. # This has to be adapted if you change either parameter
  183. # when launching the Nginx-ingress-controller.
  184. - "ingress-controller-leader-Nginx"
  185. verbs:
  186. - get
  187. - update
  188. - apiGroups:
  189. - ""
  190. resources:
  191. - configmaps
  192. verbs:
  193. - create
  194. - apiGroups:
  195. - ""
  196. resources:
  197. - endpoints
  198. verbs:
  199. - get
  200. ---
  201. apiVersion: rbac.authorization.k8s.io/v1beta1
  202. kind: RoleBinding
  203. Metadata:
  204. name: Nginx-ingress-role-nisa-binding
  205. namespace: ingress-Nginx
  206. labels:
  207. app.kubernetes.io/name: ingress-Nginx
  208. app.kubernetes.io/part-of: ingress-Nginx
  209. roleRef:
  210. apiGroup: rbac.authorization.k8s.io
  211. kind: Role
  212. name: Nginx-ingress-role
  213. subjects:
  214. - kind: ServiceAccount
  215. name: Nginx-ingress-serviceaccount
  216. namespace: ingress-Nginx
  217. ---
  218. apiVersion: rbac.authorization.k8s.io/v1beta1
  219. kind: ClusterRoleBinding
  220. Metadata:
  221. name: Nginx-ingress-clusterrole-nisa-binding
  222. labels:
  223. app.kubernetes.io/name: ingress-Nginx
  224. app.kubernetes.io/part-of: ingress-Nginx
  225. roleRef:
  226. apiGroup: rbac.authorization.k8s.io
  227. kind: ClusterRole
  228. name: Nginx-ingress-clusterrole
  229. subjects:
  230. - kind: ServiceAccount
  231. name: Nginx-ingress-serviceaccount
  232. namespace: ingress-Nginx
  233. ---
  234. apiVersion: apps/v1
  235. kind: Deployment
  236. Metadata:
  237. name: Nginx-ingress-controller
  238. namespace: ingress-Nginx
  239. labels:
  240. app.kubernetes.io/name: ingress-Nginx
  241. app.kubernetes.io/part-of: ingress-Nginx
  242. spec:
  243. replicas: 1
  244. selector:
  245. matchLabels:
  246. app.kubernetes.io/name: ingress-Nginx
  247. app.kubernetes.io/part-of: ingress-Nginx
  248. template:
  249. Metadata:
  250. labels:
  251. app.kubernetes.io/name: ingress-Nginx
  252. app.kubernetes.io/part-of: ingress-Nginx
  253. annotations:
  254. prometheus.io/port: "10254"
  255. prometheus.io/scrape: "true"
  256. spec:
  257. serviceAccountName: Nginx-ingress-serviceaccount
  258. containers:
  259. - name: Nginx-ingress-controller
  260. image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/Nginx-ingress-controller:0.20.0
  261. args:
  262. - /Nginx-ingress-controller
  263. - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
  264. - --configmap=$(POD_NAMESPACE)/Nginx-configuration
  265. - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
  266. - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
  267. - --publish-service=$(POD_NAMESPACE)/ingress-Nginx
  268. - --annotations-prefix=Nginx.ingress.kubernetes.io
  269. securityContext:
  270. capabilities:
  271. drop:
  272. - ALL
  273. add:
  274. - NET_BIND_SERVICE
  275. # www-data -> 33
  276. runAsUser: 33
  277. env:
  278. - name: POD_NAME
  279. valueFrom:
  280. fieldRef:
  281. fieldPath: Metadata.name
  282. - name: POD_NAMESPACE
  283. valueFrom:
  284. fieldRef:
  285. fieldPath: Metadata.namespace
  286. ports:
  287. - name: http
  288. containerPort: 80
  289. - name: https
  290. containerPort: 443
  291. livenessProbe:
  292. failureThreshold: 3
  293. httpGet:
  294. path: /healthz
  295. port: 10254
  296. scheme: HTTP
  297. initialDelaySeconds: 10
  298. periodSeconds: 10
  299. successThreshold: 1
  300. timeoutSeconds: 1
  301. readinessProbe:
  302. failureThreshold: 3
  303. httpGet:
  304. path: /healthz
  305. port: 10254
  306. scheme: HTTP
  307. periodSeconds: 10
  308. successThreshold: 1
  309. timeoutSeconds: 1
  310. ---

Run the following command to install the ingress controller.

  1. kubectl apply -f ingress-Nginx-mandatory.yaml

3. Install the K8S Service: ingress-Nginx

This Service is mainly used to expose the Nginx-ingress-controller pod.

Create a service-nodeport.yaml file with the following content.

  1. apiVersion: v1
  2. kind: Service
  3. Metadata:
  4. name: ingress-Nginx
  5. namespace: ingress-Nginx
  6. labels:
  7. app.kubernetes.io/name: ingress-Nginx
  8. app.kubernetes.io/part-of: ingress-Nginx
  9. spec:
  10. type: NodePort
  11. ports:
  12. - name: http
  13. port: 80
  14. targetPort: 80
  15. protocol: TCP
  16. nodePort: 30080
  17. - name: https
  18. port: 443
  19. targetPort: 443
  20. protocol: TCP
  21. nodePort: 30443
  22. selector:
  23. app.kubernetes.io/name: ingress-Nginx
  24. app.kubernetes.io/part-of: ingress-Nginx

Run the following command to install it.

  1. kubectl apply -f service-nodeport.yaml

4. Access the K8S Service: ingress-Nginx

Check the deployments in the ingress-Nginx namespace, as shown below.

  1. [root@binghe101 k8s]# kubectl get pod -n ingress-Nginx
  2. NAME READY STATUS RESTARTS AGE
  3. default-http-backend-796ddcd9b-vfmgn 1/1 Running 1 10h
  4. Nginx-ingress-controller-58985cc996-87754 1/1 Running 2 10h

Run the following command on the server to check the port mapping of ingress-Nginx.

  1. kubectl get svc -n ingress-Nginx

The output is as follows.

  1. [root@binghe101 k8s]# kubectl get svc -n ingress-Nginx
  2. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  3. default-http-backend ClusterIP 10.96.247.2 <none> 80/TCP 7m3s
  4. ingress-Nginx NodePort 10.96.40.6 <none> 80:30080/TCP,443:30443/TCP 4m35s

So ingress-Nginx can be accessed via the Master node's (binghe101 server's) IP address and port 30080, as shown below.

  1. [root@binghe101 k8s]# curl 192.168.175.101:30080
  2. default backend - 404

You can also open http://192.168.175.101:30080 in a browser to access ingress-Nginx.

Installing the GitLab Code Repository on K8S

Note: run the following on the Master node (binghe101 server).

1. Create the k8s-ops namespace

Create a k8s-ops-namespace.yaml file with the following content.

  1. apiVersion: v1
  2. kind: Namespace
  3. metadata:
  4.   name: k8s-ops
  5.   labels:
  6.     name: k8s-ops

Run the following command to create the namespace.

  1. kubectl apply -f k8s-ops-namespace.yaml

2. Install gitlab-redis

Create a gitlab-redis.yaml file with the following content.

  1. apiVersion: apps/v1
  2. kind: Deployment
  3. Metadata:
  4. name: redis
  5. namespace: k8s-ops
  6. labels:
  7. name: redis
  8. spec:
  9. selector:
  10. matchLabels:
  11. name: redis
  12. template:
  13. Metadata:
  14. name: redis
  15. labels:
  16. name: redis
  17. spec:
  18. containers:
  19. - name: redis
  20. image: sameersbn/redis
  21. imagePullPolicy: IfNotPresent
  22. ports:
  23. - name: redis
  24. containerPort: 6379
  25. volumeMounts:
  26. - mountPath: /var/lib/redis
  27. name: data
  28. livenessProbe:
  29. exec:
  30. command:
  31. - redis-cli
  32. - ping
  33. initialDelaySeconds: 30
  34. timeoutSeconds: 5
  35. readinessProbe:
  36. exec:
  37. command:
  38. - redis-cli
  39. - ping
  40. initialDelaySeconds: 10
  41. timeoutSeconds: 5
  42. volumes:
  43. - name: data
  44. hostPath:
  45. path: /data1/docker/xinsrv/redis
  46. ---
  47. apiVersion: v1
  48. kind: Service
  49. Metadata:
  50. name: redis
  51. namespace: k8s-ops
  52. labels:
  53. name: redis
  54. spec:
  55. ports:
  56. - name: redis
  57. port: 6379
  58. targetPort: redis
  59. selector:
  60. name: redis

First, run the following command to create the /data1/docker/xinsrv/redis directory.

  1. mkdir -p /data1/docker/xinsrv/redis

Run the following command to install gitlab-redis.

  1. kubectl apply -f gitlab-redis.yaml

3. Install gitlab-postgresql

Create gitlab-postgresql.yaml with the following content.

  1. apiVersion: apps/v1
  2. kind: Deployment
  3. Metadata:
  4. name: postgresql
  5. namespace: k8s-ops
  6. labels:
  7. name: postgresql
  8. spec:
  9. selector:
  10. matchLabels:
  11. name: postgresql
  12. template:
  13. Metadata:
  14. name: postgresql
  15. labels:
  16. name: postgresql
  17. spec:
  18. containers:
  19. - name: postgresql
  20. image: sameersbn/postgresql
  21. imagePullPolicy: IfNotPresent
  22. env:
  23. - name: DB_USER
  24. value: gitlab
  25. - name: DB_PASS
  26. value: passw0rd
  27. - name: DB_NAME
  28. value: gitlab_production
  29. - name: DB_EXTENSION
  30. value: pg_trgm
  31. ports:
  32. - name: postgres
  33. containerPort: 5432
  34. volumeMounts:
  35. - mountPath: /var/lib/postgresql
  36. name: data
  37. livenessProbe:
  38. exec:
  39. command:
  40. - pg_isready
  41. - -h
  42. - localhost
  43. - -U
  44. - postgres
  45. initialDelaySeconds: 30
  46. timeoutSeconds: 5
  47. readinessProbe:
  48. exec:
  49. command:
  50. - pg_isready
  51. - -h
  52. - localhost
  53. - -U
  54. - postgres
  55. initialDelaySeconds: 5
  56. timeoutSeconds: 1
  57. volumes:
  58. - name: data
  59. hostPath:
  60. path: /data1/docker/xinsrv/postgresql
  61. ---
  62. apiVersion: v1
  63. kind: Service
  64. Metadata:
  65. name: postgresql
  66. namespace: k8s-ops
  67. labels:
  68. name: postgresql
  69. spec:
  70. ports:
  71. - name: postgres
  72. port: 5432
  73. targetPort: postgres
  74. selector:
  75. name: postgresql

First, run the following command to create the /data1/docker/xinsrv/postgresql directory.

  1. mkdir -p /data1/docker/xinsrv/postgresql

Next, install gitlab-postgresql, as shown below.

  1. kubectl apply -f gitlab-postgresql.yaml

4. Install GitLab

(1) Configure the username and password

First, base64-encode the username and password on the command line. In this example, the username is admin and the password is admin.1231.

The encoding looks like this.

  1. [root@binghe101 k8s]# echo -n 'admin' | base64
  2. YWRtaW4=
  3. [root@binghe101 k8s]# echo -n 'admin.1231' | base64
  4. YWRtaW4uMTIzMQ==

The encoded username is YWRtaW4= and the encoded password is YWRtaW4uMTIzMQ==.

You can also decode a base64-encoded string; for example, decoding the password string looks like this.

  1. [root@binghe101 k8s]# echo 'YWRtaW4uMTIzMQ==' | base64 --decode
  2. admin.1231

Next, create a secret-gitlab.yaml file, which is used to configure GitLab's username and password. Its content is as follows.

  1. apiVersion: v1
  2. kind: Secret
  3. metadata:
  4.   namespace: k8s-ops
  5.   name: git-user-pass
  6. type: Opaque
  7. data:
  8.   username: YWRtaW4=
  9.   password: YWRtaW4uMTIzMQ==

Apply the configuration file, as shown below.

  1. kubectl create -f ./secret-gitlab.yaml
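
Equivalently, the same Secret can be created without manual base64 encoding, using the same username and password as above:

  1. kubectl create secret generic git-user-pass -n k8s-ops --from-literal=username=admin --from-literal=password=admin.1231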

(2) Install GitLab

Create a gitlab.yaml file with the following content.

  1. apiVersion: apps/v1
  2. kind: Deployment
  3. Metadata:
  4. name: gitlab
  5. namespace: k8s-ops
  6. labels:
  7. name: gitlab
  8. spec:
  9. selector:
  10. matchLabels:
  11. name: gitlab
  12. template:
  13. Metadata:
  14. name: gitlab
  15. labels:
  16. name: gitlab
  17. spec:
  18. containers:
  19. - name: gitlab
  20. image: sameersbn/gitlab:12.1.6
  21. imagePullPolicy: IfNotPresent
  22. env:
  23. - name: TZ
  24. value: Asia/Shanghai
  25. - name: GITLAB_TIMEZONE
  26. value: Beijing
  27. - name: GITLAB_SECRETS_DB_KEY_BASE
  28. value: long-and-random-alpha-numeric-string
  29. - name: GITLAB_SECRETS_SECRET_KEY_BASE
  30. value: long-and-random-alpha-numeric-string
  31. - name: GITLAB_SECRETS_OTP_KEY_BASE
  32. value: long-and-random-alpha-numeric-string
  33. - name: GITLAB_ROOT_PASSWORD
  34. valueFrom:
  35. secretKeyRef:
  36. name: git-user-pass
  37. key: password
  38. - name: GITLAB_ROOT_EMAIL
  39. value: 12345678@qq.com
  40. - name: GITLAB_HOST
  41. value: gitlab.binghe.com
  42. - name: GITLAB_PORT
  43. value: "80"
  44. - name: GITLAB_SSH_PORT
  45. value: "30022"
  46. - name: GITLAB_NOTIFY_ON_BROKEN_BUILDS
  47. value: "true"
  48. - name: GITLAB_NOTIFY_PUSHER
  49. value: "false"
  50. - name: GITLAB_BACKUP_SCHEDULE
  51. value: daily
  52. - name: GITLAB_BACKUP_TIME
  53. value: 01:00
  54. - name: DB_TYPE
  55. value: postgres
  56. - name: DB_HOST
  57. value: postgresql
  58. - name: DB_PORT
  59. value: "5432"
  60. - name: DB_USER
  61. value: gitlab
  62. - name: DB_PASS
  63. value: passw0rd
  64. - name: DB_NAME
  65. value: gitlab_production
  66. - name: REDIS_HOST
  67. value: redis
  68. - name: REDIS_PORT
  69. value: "6379"
  70. ports:
  71. - name: http
  72. containerPort: 80
  73. - name: ssh
  74. containerPort: 22
  75. volumeMounts:
  76. - mountPath: /home/git/data
  77. name: data
  78. livenessProbe:
  79. httpGet:
  80. path: /
  81. port: 80
  82. initialDelaySeconds: 180
  83. timeoutSeconds: 5
  84. readinessProbe:
  85. httpGet:
  86. path: /
  87. port: 80
  88. initialDelaySeconds: 5
  89. timeoutSeconds: 1
  90. volumes:
  91. - name: data
  92. hostPath:
  93. path: /data1/docker/xinsrv/gitlab
  94. ---
  95. apiVersion: v1
  96. kind: Service
  97. Metadata:
  98. name: gitlab
  99. namespace: k8s-ops
  100. labels:
  101. name: gitlab
  102. spec:
  103. ports:
  104. - name: http
  105. port: 80
  106. nodePort: 30088
  107. - name: ssh
  108. port: 22
  109. targetPort: ssh
  110. nodePort: 30022
  111. type: NodePort
  112. selector:
  113. name: gitlab
  114. ---
  115. apiVersion: extensions/v1beta1
  116. kind: Ingress
  117. Metadata:
  118. name: gitlab
  119. namespace: k8s-ops
  120. annotations:
  121. kubernetes.io/ingress.class: traefik
  122. spec:
  123. rules:
  124. - host: gitlab.binghe.com
  125. http:
  126. paths:
  127. - backend:
  128. serviceName: gitlab
  129. servicePort: http

Note: when configuring GitLab, the listening host cannot be an IP address; you must use a hostname or domain name. In the configuration above, I use the hostname gitlab.binghe.com.

Run the following command to create the /data1/docker/xinsrv/gitlab directory.

  1. mkdir -p /data1/docker/xinsrv/gitlab

Install GitLab, as shown below.

  1. kubectl apply -f gitlab.yaml

5. Installation complete

Check the deployments in the k8s-ops namespace, as shown below.

  1. [root@binghe101 k8s]# kubectl get pod -n k8s-ops
  2. NAME READY STATUS RESTARTS AGE
  3. gitlab-7b459db47c-5vk6t 0/1 Running 0 11s
  4. postgresql-79567459d7-x52vx 1/1 Running 0 30m
  5. redis-67f4cdc96c-h5ckz 1/1 Running 1 10h

You can also check with the following command.

  1. [root@binghe101 k8s]# kubectl get pod --namespace=k8s-ops
  2. NAME READY STATUS RESTARTS AGE
  3. gitlab-7b459db47c-5vk6t 0/1 Running 0 36s
  4. postgresql-79567459d7-x52vx 1/1 Running 0 30m
  5. redis-67f4cdc96c-h5ckz 1/1 Running 1 10h

Both commands produce the same result.

Next, check GitLab's port mapping, as shown below.

  1. [root@binghe101 k8s]# kubectl get svc -n k8s-ops
  2. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  3. gitlab NodePort 10.96.153.100 <none> 80:30088/TCP,22:30022/TCP 2m42s
  4. postgresql ClusterIP 10.96.203.119 <none> 5432/TCP 32m
  5. redis ClusterIP 10.96.107.150 <none> 6379/TCP 10h

As you can see, GitLab can now be reached through the Master node's (binghe101's) hostname gitlab.binghe.com and port 30088. Since I built this environment on virtual machines, accessing the VM-mapped gitlab.binghe.com from my local machine requires adding the following entry to the local hosts file.

  1. 192.168.175.101 gitlab.binghe.com

Note: on Windows, the hosts file is located in the following directory.

  1. C:\Windows\System32\drivers\etc

You can now access GitLab in a browser via http://gitlab.binghe.com:30088.

You can now log in to GitLab with the username root and the password admin.1231.

Note: the username here is root rather than admin, because root is GitLab's default superuser.

At this point, the GitLab installation on K8S is complete.

Installing the Harbor Private Registry

Note: Harbor is installed here on the Master node (binghe101 server); in a real production environment it is recommended to install it on a separate server.

1. Download the Harbor offline installer

  1. wget https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz

2. Extract the Harbor installer

  1. tar -zxvf harbor-offline-installer-v1.10.2.tgz

After extraction, a harbor directory is created in the current directory on the server.

3. Configure Harbor

Note: I changed the Harbor port to 1180 here; if you do not change it, the default port is 80.

(1) Modify the harbor.yml file

  1. cd harbor
  2. vim harbor.yml

The modified configuration items are as follows.

  1. hostname: 192.168.175.101
  2. http:
  3.   port: 1180
  4. harbor_admin_password: binghe123
  5. ### Also comment out the https section, otherwise installation fails with: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
  6. #https:
  7. #  port: 443
  8. #  certificate: /your/certificate/path
  9. #  private_key: /your/private/key/path

(2) Modify the daemon.json file

Modify the /etc/docker/daemon.json file (create it if it does not exist) and add the following content.

  1. [root@binghe~]# cat /etc/docker/daemon.json
  2. {
  3.   "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
  4.   "insecure-registries": ["192.168.175.101:1180"]
  5. }

You can also use the ip addr command on the server to list all of the machine's IP ranges and configure them in /etc/docker/daemon.json. My configured file looks like this.

  1. {
  2.   "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
  3.   "insecure-registries": ["192.168.175.0/16","172.17.0.0/16","172.18.0.0/16","172.16.29.0/16","192.168.175.101:1180"]
  4. }
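
For the daemon.json change to take effect, reload and restart Docker before installing Harbor:

  1. systemctl daemon-reload
  2. systemctl restart docker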

4. Install and start Harbor

After the configuration is complete, run the following command to install and start Harbor.

  1. [root@binghe harbor]# ./install.sh

5. Log in to Harbor and add an account

After the installation succeeds, open http://192.168.175.101:1180 in your browser.

Log in with the username admin and the password binghe123.

Next, go to user management and add an administrator account to prepare for building and pushing Docker images later.

The password entered here is Binghe123.

Click OK to create the account.

At this point the binghe account is not yet an administrator; select the binghe account and click "Set as Administrator".

The binghe account is now an administrator. This completes the Harbor installation.

6. Change the Harbor port

If you need to change the Harbor port after installing Harbor, follow the steps below; here I take changing port 80 to 1180 as an example.

(1) Modify the harbor.yml file

  1. cd harbor
  2. vim harbor.yml

The modified configuration items are as follows.

  1. hostname: 192.168.175.101
  2. http:
  3.   port: 1180
  4. harbor_admin_password: binghe123
  5. ### Also comment out the https section, otherwise installation fails with: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
  6. #https:
  7. #  port: 443
  8. #  certificate: /your/certificate/path
  9. #  private_key: /your/private/key/path

(2) Modify the docker-compose.yml file

  1. vim docker-compose.yml

The modified configuration items are as follows.

  1. ports:
  2.   - 1180:80

(3) Modify the config.yml file

  1. cd common/config/registry
  2. vim config.yml

The modified configuration item is as follows.

  1. realm: http://192.168.175.101:1180/service/token

(4) Restart Docker

  1. systemctl daemon-reload
  2. systemctl restart docker.service

(5) Restart Harbor

  1. [root@binghe harbor]# docker-compose down
  2. Stopping harbor-log ... done
  3. Removing Nginx ... done
  4. Removing harbor-portal ... done
  5. Removing harbor-jobservice ... done
  6. Removing harbor-core ... done
  7. Removing redis ... done
  8. Removing registry ... done
  9. Removing registryctl ... done
  10. Removing harbor-db ... done
  11. Removing harbor-log ... done
  12. Removing network harbor_harbor
  13. [root@binghe harbor]# ./prepare
  14. prepare base dir is set to /mnt/harbor
  15. Clearing the configuration file: /config/log/logrotate.conf
  16. Clearing the configuration file: /config/Nginx/Nginx.conf
  17. Clearing the configuration file: /config/core/env
  18. Clearing the configuration file: /config/core/app.conf
  19. Clearing the configuration file: /config/registry/root.crt
  20. Clearing the configuration file: /config/registry/config.yml
  21. Clearing the configuration file: /config/registryctl/env
  22. Clearing the configuration file: /config/registryctl/config.yml
  23. Clearing the configuration file: /config/db/env
  24. Clearing the configuration file: /config/jobservice/env
  25. Clearing the configuration file: /config/jobservice/config.yml
  26. Generated configuration file: /config/log/logrotate.conf
  27. Generated configuration file: /config/Nginx/Nginx.conf
  28. Generated configuration file: /config/core/env
  29. Generated configuration file: /config/core/app.conf
  30. Generated configuration file: /config/registry/config.yml
  31. Generated configuration file: /config/registryctl/env
  32. Generated configuration file: /config/db/env
  33. Generated configuration file: /config/jobservice/env
  34. Generated configuration file: /config/jobservice/config.yml
  35. loaded secret from file: /secret/keys/secretkey
  36. Generated configuration file: /compose_location/docker-compose.yml
  37. Clean up the input dir
  38. [root@binghe harbor]# docker-compose up -d
  39. Creating network "harbor_harbor" with the default driver
  40. Creating harbor-log ... done
  41. Creating harbor-db ... done
  42. Creating redis ... done
  43. Creating registry ... done
  44. Creating registryctl ... done
  45. Creating harbor-core ... done
  46. Creating harbor-jobservice ... done
  47. Creating harbor-portal ... done
  48. Creating Nginx ... done
  49. [root@binghe harbor]# docker ps -a
  50. CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS

Installing Jenkins (the general approach)

1. Install nfs (skip this step if it is already installed)

The biggest problem with using nfs is write permissions. You can use Kubernetes' securityContext/runAsUser to specify the uid that Jenkins runs as inside the Jenkins container, and use it to set the permissions on the nfs directory so that the Jenkins container can write to it; alternatively, you can put no restrictions on it and let all users write. For simplicity, all users are allowed to write here.

If nfs has already been installed, this step can be skipped. Pick a host and install nfs on it; here I install nfs on the Master node (binghe101 server) as an example.

Run the following commands to install and start nfs.

  1. yum install nfs-utils -y
  2. systemctl start nfs-server
  3. systemctl enable nfs-server

2. Create the nfs shared directory

Create the /opt/nfs/jenkins-data directory on the Master node (binghe101 server) as the nfs shared directory, as shown below.

  1. mkdir -p /opt/nfs/jenkins-data

Next, edit the /etc/exports file, as shown below.

  1. vim /etc/exports

Add the following line to the /etc/exports file.

  1. /opt/nfs/jenkins-data 192.168.175.0/24(rw,all_squash)

The IP range here is the IP range of the Kubernetes node machines. The all_squash option maps every accessing user to the nfsnobody user: no matter which user accesses the share, it is squashed to nfsnobody. So, as long as the owner of /opt/nfs/jenkins-data is changed accordingly, any accessing user has write permission.

This option is very useful when, because uids are not standardized across machines, processes are started by different users but still need write access to the same shared directory.

Next, set the ownership of the /opt/nfs/jenkins-data directory and reload nfs, as shown below.

  1. chown -R 1000 /opt/nfs/jenkins-data/
  2. systemctl reload nfs-server

Verify this on any node in the K8S cluster with the following command:

  1. showmount -e NFS_IP

If you can see /opt/nfs/jenkins-data in the output, everything is fine.

For example:

  1. [root@binghe101 ~]# showmount -e 192.168.175.101
  2. Export list for 192.168.175.101:
  3. /opt/nfs/jenkins-data 192.168.175.0/24
  4. [root@binghe102 ~]# showmount -e 192.168.175.101
  5. Export list for 192.168.175.101:
  6. /opt/nfs/jenkins-data 192.168.175.0/24

3. Create the PV

Jenkins only needs to mount the corresponding directory to pick up its previous data, but since a Deployment cannot define storage volumes in this way, we can only use a StatefulSet.

First create the PV. The PV is used by the StatefulSet: every time the StatefulSet starts, it creates a PVC through the volumeClaimTemplates template, so a PV must exist for the PVC to bind to.

Create a jenkins-pv.yaml file with the following content.

  1. apiVersion: v1
  2. kind: PersistentVolume
  3. metadata:
  4.   name: jenkins
  5. spec:
  6.   nfs:
  7.     path: /opt/nfs/jenkins-data
  8.     server: 192.168.175.101
  9.   accessModes: ["ReadWriteOnce"]
  10.   capacity:
  11.     storage: 1Ti

I allocated 1Ti of storage here; adjust it to your actual configuration.

Run the following command to create the PV.

  1. kubectl apply -f jenkins-pv.yaml

4. Create the ServiceAccount

Create a service account. Because Jenkins later needs to be able to create slaves dynamically, it must have certain permissions.

Create a jenkins-service-account.yaml file with the following content.

  1. apiVersion: v1
  2. kind: ServiceAccount
  3. metadata:
  4.   name: jenkins
  5. ---
  6. kind: Role
  7. apiVersion: rbac.authorization.k8s.io/v1beta1
  8. metadata:
  9.   name: jenkins
  10. rules:
  11.   - apiGroups: [""]
  12.     resources: ["pods"]
  13.     verbs: ["create","delete","get","list","patch","update","watch"]
  14.   - apiGroups: [""]
  15.     resources: ["pods/exec"]
  16.     verbs: ["create","watch"]
  17.   - apiGroups: [""]
  18.     resources: ["pods/log"]
  19.     verbs: ["get","watch"]
  20.   - apiGroups: [""]
  21.     resources: ["secrets"]
  22.     verbs: ["get"]
  23. ---
  24. apiVersion: rbac.authorization.k8s.io/v1beta1
  25. kind: RoleBinding
  26. metadata:
  27.   name: jenkins
  28. roleRef:
  29.   apiGroup: rbac.authorization.k8s.io
  30.   kind: Role
  31.   name: jenkins
  32. subjects:
  33.   - kind: ServiceAccount
  34.     name: jenkins

The configuration above creates a RoleBinding and a ServiceAccount, and binds the RoleBinding's permissions to this account. The Jenkins container must therefore run with this ServiceAccount, otherwise it will not have the RoleBinding's permissions.

The RoleBinding's permissions are easy to understand: Jenkins needs them because it creates and deletes slaves. As for the secrets permission, that is for the https certificates.

Run the following command to create the ServiceAccount.

  1. kubectl apply -f jenkins-service-account.yaml

5. Install Jenkins

Create a jenkins-statefulset.yaml file with the following content.

  1. apiVersion: apps/v1
  2. kind: StatefulSet
  3. Metadata:
  4. name: jenkins
  5. labels:
  6. name: jenkins
  7. spec:
  8. selector:
  9. matchLabels:
  10. name: jenkins
  11. serviceName: jenkins
  12. replicas: 1
  13. updateStrategy:
  14. type: RollingUpdate
  15. template:
  16. Metadata:
  17. name: jenkins
  18. labels:
  19. name: jenkins
  20. spec:
  21. terminationGracePeriodSeconds: 10
  22. serviceAccountName: jenkins
  23. containers:
  24. - name: jenkins
  25. image: docker.io/jenkins/jenkins:lts
  26. imagePullPolicy: IfNotPresent
  27. ports:
  28. - containerPort: 8080
  29. - containerPort: 32100
  30. resources:
  31. limits:
  32. cpu: 4
  33. memory: 4Gi
  34. requests:
  35. cpu: 4
  36. memory: 4Gi
  37. env:
  38. - name: LIMITS_MEMORY
  39. valueFrom:
  40. resourceFieldRef:
  41. resource: limits.memory
  42. divisor: 1Mi
  43. - name: JAVA_OPTS
  44. # value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
  45. value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
  46. volumeMounts:
  47. - name: jenkins-home
  48. mountPath: /var/jenkins_home
  49. livenessProbe:
  50. httpGet:
  51. path: /login
  52. port: 8080
  53. initialDelaySeconds: 60
  54. timeoutSeconds: 5
  55. failureThreshold: 12 # ~2 minutes
  56. readinessProbe:
  57. httpGet:
  58. path: /login
  59. port: 8080
  60. initialDelaySeconds: 60
  61. timeoutSeconds: 5
  62. failureThreshold: 12 # ~2 minutes
  63. # pvc template, corresponding to the pv created earlier
  64. volumeClaimTemplates:
  65. - Metadata:
  66. name: jenkins-home
  67. spec:
  68. accessModes: ["ReadWriteOnce"]
  69. resources:
  70. requests:
  71. storage: 1Ti

When deploying Jenkins, pay attention to the number of replicas: you need as many PVs as you have replicas, and storage consumption multiplies accordingly. I use only one replica here, so only one PV was created earlier.

Install Jenkins with the following command.

  1. kubectl apply -f jenkins-statefulset.yaml

6. Create the Service

Create a jenkins-service.yaml file with the following content.

  1. apiVersion: v1
  2. kind: Service
  3. metadata:
  4.   name: jenkins
  5. spec:
  6.   # type: LoadBalancer
  7.   selector:
  8.     name: jenkins
  9.   # ensure the client ip is propagated to avoid the invalid crumb issue when using LoadBalancer (k8s >=1.7)
  10.   #externalTrafficPolicy: Local
  11.   ports:
  12.     - name: http
  13.       port: 80
  14.       nodePort: 31888
  15.       targetPort: 8080
  16.       protocol: TCP
  17.     - name: jenkins-agent
  18.       port: 32100
  19.       nodePort: 32100
  20.       targetPort: 32100
  21.       protocol: TCP
  22.   type: NodePort

Install the Service with the following command.

  1. kubectl apply -f jenkins-service.yaml

7. Install the ingress

Jenkins' web UI needs to be accessible from outside the cluster, and here we use an ingress for that. Create a jenkins-ingress.yaml file with the following content.

  1. apiVersion: extensions/v1beta1
  2. kind: Ingress
  3. metadata:
  4.   name: jenkins
  5. spec:
  6.   rules:
  7.     - http:
  8.         paths:
  9.           - path: /
  10.             backend:
  11.               serviceName: jenkins
  12.               servicePort: 31888
  13.       host: jekins.binghe.com

Note that host must be set to a domain name or hostname, otherwise you will get an error like the following.

  1. The Ingress "jenkins" is invalid: spec.rules[0].host: Invalid value: "192.168.175.101": must be a DNS name,not an IP address

Install the ingress with the following command.

  1. kubectl apply -f jenkins-ingress.yaml

Finally, since I built this environment on virtual machines, accessing the VM-mapped jekins.binghe.com from my local machine requires adding the following entry to the local hosts file.

  1. 192.168.175.101 jekins.binghe.com

Note: on Windows, the hosts file is located in the following directory.

  1. C:\Windows\System32\drivers\etc

You can now access Jenkins in a browser via http://jekins.binghe.com:31888.

Installing SVN on a Physical Machine

Here, SVN is installed on the Master node (binghe101 server) as an example.

1. Install SVN with yum

Run the following command to install SVN.

  1. yum -y install subversion

2. Create the SVN repository

Run the following commands in order.

  1. # Create /data/svn
  2. mkdir -p /data/svn
  3. # Initialize svn
  4. svnserve -d -r /data/svn
  5. # Create the code repository
  6. svnadmin create /data/svn/test

3. Configure SVN

  1. mkdir /data/svn/conf
  2. cp /data/svn/test/conf/* /data/svn/conf/
  3. cd /data/svn/conf/
  4. [root@binghe101 conf]# ll
  5. total 20
  6. -rw-r--r-- 1 root root 1080 May 12 02:17 authz
  7. -rw-r--r-- 1 root root 885 May 12 02:17 hooks-env.tmpl
  8. -rw-r--r-- 1 root root 309 May 12 02:17 passwd
  9. -rw-r--r-- 1 root root 4375 May 12 02:17 svnserve.conf

  • Configure authz

  1. vim authz

The configured content is as follows.

  1. [aliases]
  2. # joe = /C=XZ/ST=Dessert/L=Snake City/O=Snake Oil,Ltd./OU=Research Institute/CN=Joe Average
  3. [groups]
  4. # harry_and_sally = harry,sally
  5. # harry_sally_and_joe = harry,sally,&joe
  6. SuperAdmin = admin
  7. binghe = admin,binghe
  8. # [/foo/bar]
  9. # harry = rw
  10. # &joe = r
  11. # * =
  12. # [repository:/baz/fuz]
  13. # @harry_and_sally = rw
  14. # * = r
  15. [test:/]
  16. @SuperAdmin=rw
  17. @binghe=rw

  • Configure passwd

  1. vim passwd

The configured content is as follows.

  1. [users]
  2. # harry = harryssecret
  3. # sally = sallyssecret
  4. admin = admin123
  5. binghe = binghe123

  • Configure svnserve.conf

  1. vim svnserve.conf

The configured file is as follows.

  1. ### This file controls the configuration of the svnserve daemon,if you
  2. ### use it to allow access to this repository. (If you only allow
  3. ### access through http: and/or file: URLs,then this file is
  4. ### irrelevant.)
  5. ### Visit http://subversion.apache.org/ for more information.
  6. [general]
  7. ### The anon-access and auth-access options control access to the
  8. ### repository for unauthenticated (a.k.a. anonymous) users and
  9. ### authenticated users,respectively.
  10. ### Valid values are "write","read",and "none".
  11. ### Setting the value to "none" prohibits both reading and writing;
  12. ### "read" allows read-only access,and "write" allows complete
  13. ### read/write access to the repository.
  14. ### The sample settings below are the defaults and specify that anonymous
  15. ### users have read-only access to the repository,while authenticated
  16. ### users have read and write access to the repository.
  17. anon-access = none
  18. auth-access = write
  19. ### The password-db option controls the location of the password
  20. ### database file. Unless you specify a path starting with a /, the file's location is relative to the directory containing
  21. ### this configuration file.
  22. ### If SASL is enabled (see below),this file will NOT be used.
  23. ### Uncomment the line below to use the default password file.
  24. password-db = /data/svn/conf/passwd
  25. ### The authz-db option controls the location of the authorization
  26. ### rules for path-based access control. Unless you specify a path
  27. ### starting with a /,the file's location is relative to the
  28. ### directory containing this file. The specified path may be a
  29. ### repository relative URL (^/) or an absolute file:// URL to a text
  30. ### file in a Subversion repository. If you don't specify an authz-db, no path-based access control is done.
  31. ### Uncomment the line below to use the default authorization file.
  32. authz-db = /data/svn/conf/authz
  33. ### The groups-db option controls the location of the file with the
  34. ### group definitions and allows maintaining groups separately from the
  35. ### authorization rules. The groups-db file is of the same format as the
  36. ### authz-db file and should contain a single [groups] section with the
  37. ### group definitions. If the option is enabled,the authz-db file cannot
  38. ### contain a [groups] section. Unless you specify a path starting with
  39. ### a /,the file's location is relative to the directory containing this
  40. ### file. The specified path may be a repository relative URL (^/) or an
  41. ### absolute file:// URL to a text file in a Subversion repository.
  42. ### This option is not being used by default.
  43. # groups-db = groups
  44. ### This option specifies the authentication realm of the repository.
  45. ### If two repositories have the same authentication realm,they should
  46. ### have the same password database,and vice versa. The default realm
  47. ### is repository's uuid.
  48. realm = svn
  49. ### The force-username-case option causes svnserve to case-normalize
  50. ### usernames before comparing them against the authorization rules in the
  51. ### authz-db file configured above. Valid values are "upper" (to upper-
  52. ### case the usernames),"lower" (to lowercase the usernames),and
  53. ### "none" (to compare usernames as-is without case conversion,which
  54. ### is the default behavior).
  55. # force-username-case = none
  56. ### The hooks-env options specifies a path to the hook script environment
  57. ### configuration file. This option overrides the per-repository default
  58. ### and can be used to configure the hook script environment for multiple
  59. ### repositories in a single file,if an absolute path is specified.
  60. ### Unless you specify an absolute path,the file's location is relative
  61. ### to the directory containing this file.
  62. # hooks-env = hooks-env
  63. [sasl]
  64. ### This option specifies whether you want to use the Cyrus SASL
  65. ### library for authentication. Default is false.
  66. ### Enabling this option requires svnserve to have been built with Cyrus
  67. ### SASL support; to check,run 'svnserve --version' and look for a line
  68. ### reading 'Cyrus SASL authentication is available.'
  69. # use-sasl = true
  70. ### These options specify the desired strength of the security layer
  71. ### that you want SASL to provide. 0 means no encryption,1 means
  72. ### integrity-checking only,values larger than 1 are correlated
  73. ### to the effective key length for encryption (e.g. 128 means 128-bit
  74. ### encryption). The values below are the defaults.
  75. # min-encryption = 0
  76. # max-encryption = 256

Next, copy the svnserve.conf file from the /data/svn/conf directory to the /data/svn/test/conf/ directory, as shown below.

  1. [root@binghe101 conf]# cp /data/svn/conf/svnserve.conf /data/svn/test/conf/
  2. cp: overwrite '/data/svn/test/conf/svnserve.conf'? y

4. Start the SVN service

(1) Create the svnserve.service unit

Create the svnserve.service file.

  1. vim /usr/lib/systemd/system/svnserve.service

The file content is as follows.

  1. [Unit]
  2. Description=Subversion protocol daemon
  3. After=syslog.target network.target
  4. Documentation=man:svnserve(8)
  5. [Service]
  6. Type=forking
  7. EnvironmentFile=/etc/sysconfig/svnserve
  8. #ExecStart=/usr/bin/svnserve --daemon --pid-file=/run/svnserve/svnserve.pid $OPTIONS
  9. ExecStart=/usr/bin/svnserve --daemon $OPTIONS
  10. PrivateTmp=yes
  11. [Install]
  12. WantedBy=multi-user.target

Next, run the following command to make the configuration take effect.

  1. systemctl daemon-reload

After the command succeeds, modify the /etc/sysconfig/svnserve file.

  1. vim /etc/sysconfig/svnserve

The modified file content is as follows.

  1. # OPTIONS is used to pass command-line arguments to svnserve.
  2. #
  3. # Specify the repository location in -r parameter:
  4. OPTIONS="-r /data/svn"

(2) Start SVN

First check the SVN status, as shown below.

  1. [root@itence10 conf]# systemctl status svnserve.service
  2. svnserve.service - Subversion protocol daemon
  3. Loaded: loaded (/usr/lib/systemd/system/svnserve.service; disabled; vendor preset: disabled)
  4. Active: inactive (dead)
  5. Docs: man:svnserve(8)

As you can see, SVN is not running yet. Next, start it.

  1. systemctl start svnserve.service

Enable the SVN service to start on boot.

  1. systemctl enable svnserve.service

Next, download and install TortoiseSVN, enter the URL svn://192.168.0.10/test, and connect to SVN with the username binghe and the password binghe123.
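
If you prefer the command-line client over TortoiseSVN, a quick check using the same repository URL and account (the svn client will prompt for the password):

  1. svn checkout svn://192.168.0.10/test --username binghe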

Installing Jenkins on a Physical Machine

Note: the JDK and Maven must be installed before installing Jenkins. Here I also install Jenkins on the Master node (binghe101 server).

1. Enable the Jenkins repository

Run the following commands to download the repo file and import the GPG key:

  1. wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
  2. rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key

2. Install Jenkins

Run the following command to install Jenkins.

  1. yum install jenkins

Next, change Jenkins' default port, as shown below.

  1. vim /etc/sysconfig/jenkins

The two modified configuration items are as follows.

  1. JENKINS_JAVA_CMD="/usr/local/jdk1.8.0_212/bin/java"
  2. JENKINS_PORT="18080"

Jenkins' port has now been changed from 8080 to 18080.

3. Start Jenkins

Run the following command to start Jenkins.

  1. systemctl start jenkins

Enable Jenkins to start on boot.

  1. systemctl enable jenkins

Check Jenkins' running status.

  1. [root@itence10 ~]# systemctl status jenkins
  2. jenkins.service - LSB: Jenkins Automation Server
  3. Loaded: loaded (/etc/rc.d/init.d/jenkins; generated)
  4. Active: active (running) since Tue 2020-05-12 04:33:40 EDT; 28s ago
  5. Docs: man:systemd-sysv-generator(8)
  6. Tasks: 71 (limit: 26213)
  7. Memory: 550.8M

This shows that Jenkins started successfully.
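
To double-check that Jenkins is listening on the new port, you can optionally run:

  1. ss -lntp | grep 18080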

Configuring the Jenkins Runtime Environment

1. Log in to Jenkins

After the first installation, Jenkins' runtime environment needs to be configured. First, open http://192.168.0.10:18080 in your browser to bring up the Jenkins page.

As prompted, retrieve the initial password on the server with the following command.

  1. [root@binghe101 ~]# cat /var/lib/jenkins/secrets/initialAdminPassword
  2. 71af861c2ab948a1b6efc9f7dde90776

Copy the password 71af861c2ab948a1b6efc9f7dde90776 into the text box and click Continue. You will be taken to the Customize Jenkins page.

Here you can simply choose "Install suggested plugins". Jenkins then moves on to a plugin installation page.

Some downloads may fail during this step; they can simply be ignored.

2. Install plugins

Plugins that need to be installed:

  • Kubernetes Cli Plugin: lets you run Kubernetes command-line operations directly in Jenkins.

  • Kubernetes plugin: required when using Kubernetes.

  • Kubernetes Continuous Deploy Plugin: a Kubernetes deployment plugin; use it as needed.

There are more plugins to choose from. Go to Manage Jenkins -> Manage Plugins to manage and add them, and install the corresponding Docker, SSH and Maven plugins. Other plugins can be installed as needed.

3. Configure Jenkins

(1) Configure the JDK and Maven

Configure the JDK and Maven in Global Tool Configuration: open the Global Tool Configuration page.

Now configure the JDK and Maven.

Since Maven is installed under /usr/local/maven-3.6.3 on my server, it needs to be configured accordingly in the "Maven Configuration" section.

Next, configure the JDK.

Note: do not check "Install automatically".

Next, configure Maven.

Note: do not check "Install automatically".

(2) Configure SSH

Go to Jenkins' Configure System page to configure SSH.

Find the SSH remote hosts section and configure it.

After the configuration is complete, click the Check connection button; it should display "Successfull connection".

At this point, the basic Jenkins configuration is complete.

Final Words

If you found this article helpful, please search for and follow the WeChat official account "冰河技术" to learn more programming techniques with Binghe.

Finally, here is a link to a comprehensive K8S knowledge map:

https://www.processon.com/view/link/5ac64532e4b00dc8a02f05eb?spm=a2c4e.10696291.0.0.6ec019a4bYSFIw#map

I hope this helps you take fewer detours while learning K8S.
