## I. Project architecture

(Architecture diagram omitted.)

## II. Project description

Simulate a company's web service deployed on k8s (web + MySQL + NFS + Harbor + Zabbix + Prometheus + GitLab + Jenkins + Ansible), keeping the web service highly available under a heavily loaded, production-like environment.

## III. Project environment

- CentOS 7.9
- ansible 2.9.27
- Docker 20.10.6
- Docker Compose 2.18.1
- Kubernetes 1.20.6
- Calico 3.23
- Harbor 2.4.1
- NFS v4
- metrics-server 0.6.0
- ingress-nginx-controller v1.1.0
- kube-webhook-certgen v1.1.0
- MySQL 5.7.42
- Dashboard v2.5.0
- Prometheus 2.34.0
- Zabbix 5.0
- Grafana 10.0.0
- jenkinsci/blueocean
- GitLab 16.0.4-jh

## IV. Environment preparation

Ten fresh Linux servers: disable firewalld and SELinux, configure static IP addresses, set hostnames, and add hosts entries.

### 1. IP address plan

| Server      | IP            |
| ----------- | ------------- |
| k8smaster   | 192.168.2.104 |
| k8snode1    | 192.168.2.111 |
| k8snode2    | 192.168.2.112 |
| ansible     | 192.168.2.119 |
| nfs         | 192.168.2.121 |
| gitlab      | 192.168.2.124 |
| harbor      | 192.168.2.106 |
| zabbix      | 192.168.2.117 |
| firewalld   | 192.168.2.141 |
| Bastionhost | 192.168.2.140 |

### 2. Disable SELinux and firewalld

```bash
# stop the firewall and keep it from starting on boot
service firewalld stop
systemctl disable firewalld

# disable SELinux for the current boot
setenforce 0

# disable SELinux permanently
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
```

```bash
[root@k8smaster ~]# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
[root@k8smaster ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
```
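These two steps have to run on all ten machines. Before Ansible is set up, a small loop can save the typing; this is only a sketch and assumes root SSH access to every host in the plan (each connection will prompt for a password):

```bash
#!/bin/bash
# run the firewalld/SELinux steps on every host in the IP plan (assumes root SSH access)
hosts="192.168.2.104 192.168.2.111 192.168.2.112 192.168.2.119 192.168.2.121 \
192.168.2.124 192.168.2.106 192.168.2.117 192.168.2.141 192.168.2.140"
for h in $hosts; do
    ssh "root@$h" "systemctl stop firewalld; systemctl disable firewalld; \
                   setenforce 0; \
                   sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config"
done
```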
```bash
[root@k8smaster ~]# reboot
[root@k8smaster ~]# getenforce
Disabled
```

### 3. Configure static IP addresses

```bash
cd /etc/sysconfig/network-scripts/
vim ifcfg-ens33
```

```bash
TYPE=Ethernet
BOOTPROTO=static
DEVICE=ens33
NAME=ens33
ONBOOT=yes
IPADDR=192.168.2.104
PREFIX=24
GATEWAY=192.168.2.1
DNS1=114.114.114.114
```

```bash
TYPE=Ethernet
BOOTPROTO=static
DEVICE=ens33
NAME=ens33
ONBOOT=yes
IPADDR=192.168.2.111
PREFIX=24
GATEWAY=192.168.2.1
DNS1=114.114.114.114
```

```bash
TYPE=Ethernet
BOOTPROTO=static
DEVICE=ens33
NAME=ens33
ONBOOT=yes
IPADDR=192.168.2.112
PREFIX=24
GATEWAY=192.168.2.1
DNS1=114.114.114.114
```

### 4. Set hostnames

```bash
hostnamectl set-hostname k8smaster
hostnamectl set-hostname k8snode1
hostnamectl set-hostname k8snode2

# switch user to reload the environment
su - root
[root@k8smaster ~]#
[root@k8snode1 ~]#
[root@k8snode2 ~]#
```

### 5. Upgrade the system (optional)

```bash
yum update -y
```

### 6. Add hosts entries

```bash
vim /etc/hosts
```

```
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.104 k8smaster
192.168.2.111 k8snode1
192.168.2.112 k8snode2
```

## V. Project steps

### 1. Design the cluster architecture, plan the server IP addresses, and build the cluster

```bash
# 1. Set up passwordless SSH between the nodes
ssh-keygen    # press Enter through the prompts

ssh-copy-id k8smaster
ssh-copy-id k8snode1
ssh-copy-id k8snode2

# 2. Disable the swap partition (kubeadm checks for it during init)
# temporarily
swapoff -a
# permanently: comment out the swap mount line in /etc/fstab
[root@k8smaster ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Mar 23 15:22:20 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=00236222-82bd-4c15-9c97-e55643144ff3 /boot xfs     defaults        0 0
/dev/mapper/centos-home /home                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                   swap    defaults        0 0

# 3. Load the required kernel module
modprobe br_netfilter

echo 'modprobe br_netfilter' >> /etc/profile

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# reload to apply the settings
sysctl -p /etc/sysctl.d/k8s.conf

# Why modprobe br_netfilter?
# The br_netfilter kernel module is the Linux bridge netfilter module. It lets tools such as
# iptables filter and manage traffic bridged across the same NIC. It is needed when a Linux
# box acts as a router or firewall and must filter, forward or NAT packets between NICs.

# Why net.ipv4.ip_forward = 1?
# For Linux to forward packets between interfaces, the kernel parameter net.ipv4.ip_forward
# must be enabled: 0 means IP forwarding is disabled, 1 means it is enabled.

# 4. Configure the Alibaba Cloud repo
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet ipvsadm

# 5. Configure the Alibaba Cloud repo for the k8s components
[root@k8smaster ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

# 6. Configure time synchronization
[root@k8smaster ~]# crontab -e
* */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org

# restart crond
[root@k8smaster ~]# service crond restart

# 7. Install docker
yum install docker-ce-20.10.6 -y

# start docker and enable it on boot
systemctl start docker
systemctl enable docker.service

# 8. Configure registry mirrors and the cgroup driver
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# reload the configuration and restart docker
systemctl daemon-reload
systemctl restart docker

# 9. Install the packages needed to initialize k8s
yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
```
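Before running `kubeadm init`, it is worth confirming on every node that the steps above actually took effect; a quick check might look like this (a sketch, not part of the original walkthrough):

```bash
# verify swap is off, the bridge module is loaded, and sysctl/cgroup settings are active
free -m | awk '/Swap/ {print "swap used/total:", $3, $2}'   # both should be 0
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
docker info 2>/dev/null | grep -i cgroup                    # expect: systemd
```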
```bash
# enable kubelet on boot
systemctl enable kubelet

# What each package does:
# kubeadm: the tool used to initialize a k8s cluster
# kubelet: installed on every node in the cluster; it starts the Pods
# kubectl: deploys and manages applications; views, creates, deletes and updates resources

# 10. Prepare the images for kubeadm init
# upload the offline image bundle to k8smaster, k8snode1 and k8snode2, then load it
docker load -i k8simage-1-20-6.tar.gz

# copy the bundle to the node machines
[root@k8smaster ~]# scp k8simage-1-20-6.tar.gz root@k8snode1:/root
[root@k8smaster ~]# scp k8simage-1-20-6.tar.gz root@k8snode2:/root

# check the images
[root@k8snode1 ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED       SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.20.6    9a1ebfd8124d   2 years ago   118MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.20.6    b93ab2ec4475   2 years ago   47.3MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.20.6    560dd11d4550   2 years ago   116MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.20.6    b05d611c1af9   2 years ago   122MB
calico/pod2daemon-flexvol                                         v3.18.0    2a22066e9588   2 years ago   21.7MB
calico/node                                                       v3.18.0    5a7c4970fbc2   2 years ago   172MB
calico/cni                                                        v3.18.0    727de170e4ce   2 years ago   131MB
calico/kube-controllers                                           v3.18.0    9a154323fbf7   2 years ago   53.4MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   2 years ago   253MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   3 years ago   45.2MB
registry.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   3 years ago   683kB

# 11. Initialize the k8s cluster with kubeadm
kubeadm config print init-defaults > kubeadm.yaml
```

```yaml
# [root@k8smaster ~]# vim kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.2.104   # control-plane node IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8smaster                   # control-plane hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # change to the Alibaba Cloud repository
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16   # pod network CIDR; this line must be added
scheduler: {}
# append the following lines
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```

```bash
# 12. Initialize k8s from kubeadm.yaml
[root@k8smaster ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c

# 13. Scale the cluster out by adding the worker nodes
[root@k8snode1 ~]# kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c
[root@k8snode2 ~]# kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c

# 14. Check the node status on k8smaster
[root@k8smaster ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE     VERSION
k8smaster   NotReady   control-plane,master   2m49s   v1.20.6
k8snode1    NotReady   <none>                 19s     v1.20.6
k8snode2    NotReady   <none>                 14s     v1.20.6
```
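The bootstrap token in kubeadm.yaml is only valid for 24 hours (`ttl: 24h0m0s`). If a node has to join later, a fresh join command can be printed on the control plane; a sketch:

```bash
# create a new token and print the matching kubeadm join command (run on k8smaster)
kubeadm token create --print-join-command
```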
```bash
# 15. The empty ROLES (<none>) of k8snode1/k8snode2 mark them as worker nodes.
# Label them as "worker":
[root@k8smaster ~]# kubectl label node k8snode1 node-role.kubernetes.io/worker=worker
node/k8snode1 labeled
[root@k8smaster ~]# kubectl label node k8snode2 node-role.kubernetes.io/worker=worker
node/k8snode2 labeled
[root@k8smaster ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE     VERSION
k8smaster   NotReady   control-plane,master   2m43s   v1.20.6
k8snode1    NotReady   worker                 2m15s   v1.20.6
k8snode2    NotReady   worker                 2m11s   v1.20.6
# all nodes are NotReady because the network plugin is not installed yet

# 16. Install the Calico network plugin
# upload calico.yaml to k8smaster and install Calico from it
wget https://docs.projectcalico.org/v3.23/manifests/calico.yaml --no-check-certificate

[root@k8smaster ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

# check the cluster again
[root@k8smaster ~]# kubectl get nodes
NAME        STATUS   ROLES                  AGE     VERSION
k8smaster   Ready    control-plane,master   5m57s   v1.20.6
k8snode1    Ready    worker                 3m27s   v1.20.6
k8snode2    Ready    worker                 3m22s   v1.20.6
# STATUS Ready means the k8s cluster is running normally
```

### 2. Deploy Ansible for automated operations; deploy the firewall server; deploy the bastion host

```bash
# 1. Set up key-based SSH: generate a key pair on the ansible host
[root@ansible ~]# ssh-keygen -t ecdsa
Generating public/private ecdsa key pair.
Enter file in which to save the key (/root/.ssh/id_ecdsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_ecdsa.
Your public key has been saved in /root/.ssh/id_ecdsa.pub.
The key fingerprint is:
SHA256:FNgCSDVk6i3foP88MfekA2UzwNn6x3kyi7VmLdoxYE root@ansible
The key's randomart image is:
+---[ECDSA 256]---+
| (randomart omitted) |
+----[SHA256]-----+
```
```bash
[root@ansible ~]# cd /root/.ssh
[root@ansible .ssh]# ls
id_ecdsa  id_ecdsa.pub

# 2. Upload the public key to the root home directory on all servers
# every server must run sshd, open port 22, and allow root login

# upload the public key to k8smaster
[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.104
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_ecdsa.pub"
The authenticity of host '192.168.2.104 (192.168.2.104)' can't be established.
ECDSA key fingerprint is SHA256:l7LRfACELrI6mU2XvYaCzsDBWiGkYnAecPgnxJxdvE.
ECDSA key fingerprint is MD5:b6:f7:e1:c5:23:24:5c:16:1f:66:42:ba:80:a6:3c:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.104's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.2.104'"
and check to make sure that only the key(s) you wanted were added.

# upload the public key to the k8s nodes
[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.111
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_ecdsa.pub"
The authenticity of host '192.168.2.111 (192.168.2.111)' can't be established.
ECDSA key fingerprint is SHA256:l7LRfACELrI6mU2XvYaCzsDBWiGkYnAecPgnxJxdvE.
ECDSA key fingerprint is MD5:b6:f7:e1:c5:23:24:5c:16:1f:66:42:ba:80:a6:3c:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.111's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.2.111'"
and check to make sure that only the key(s) you wanted were added.

[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.112
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_ecdsa.pub"
The authenticity of host '192.168.2.112 (192.168.2.112)' can't be established.
ECDSA key fingerprint is SHA256:l7LRfACELrI6mU2XvYaCzsDBWiGkYnAecPgnxJxdvE.
ECDSA key fingerprint is MD5:b6:f7:e1:c5:23:24:5c:16:1f:66:42:ba:80:a6:3c:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.112's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.2.112'"
and check to make sure that only the key(s) you wanted were added.

# verify that passwordless key authentication works
[root@ansible .ssh]# ssh root@192.168.2.121
Last login: Tue Jun 20 10:33:33 2023 from 192.168.2.240
[root@nfs ~]# exit
logout
Connection to 192.168.2.121 closed.
[root@ansible .ssh]# ssh root@192.168.2.112
Last login: Tue Jun 20 10:34:18 2023 from 192.168.2.240
[root@k8snode2 ~]# exit
logout
Connection to 192.168.2.112 closed.
[root@ansible .ssh]#
```
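Copying the key to every managed machine one by one is repetitive; a loop over the planned IPs does the same thing (a sketch; each iteration still prompts for that host's root password):

```bash
# push the ansible host's public key to every machine in the inventory
for h in 192.168.2.104 192.168.2.111 192.168.2.112 192.168.2.121 \
         192.168.2.124 192.168.2.106 192.168.2.117; do
    ssh-copy-id -i /root/.ssh/id_ecdsa.pub "root@$h"
done
```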
```bash
# 3. Install ansible on the control machine
# Any machine with Python 2.6 or 2.7 can run Ansible (a Windows system cannot be the control machine).
[root@ansible .ssh]# yum install epel-release -y
[root@ansible .ssh]# yum install ansible -y
[root@ansible ~]# ansible --version
ansible 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 14 2020, 14:45:30) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]

# 4. Write the host inventory
[root@ansible .ssh]# cd /etc/ansible
[root@ansible ansible]# ls
ansible.cfg  hosts  roles
[root@ansible ansible]# vim hosts
[k8smaster]
192.168.2.104
[k8snode]
192.168.2.111
192.168.2.112
[nfs]
192.168.2.121
[gitlab]
192.168.2.124
[harbor]
192.168.2.106
[zabbix]
192.168.2.117

# test
[root@ansible ansible]# ansible all -m shell -a 'ip add'
```

#### a. Deploy the bastion host

JumpServer installs in just two steps:

1. Prepare a 64-bit Linux host with at least 2 cores and 4 GB of RAM that can reach the internet.
2. As root, run the one-line installer:

```bash
curl -sSL https://resource.fit2cloud.com/jumpserver/jumpserver/releases/latest/download/quick_start.sh | bash
```

#### b. Deploy the firewall server

```bash
# shut the VM down and add a second NIC (ens37)

# script implementing SNAT and DNAT
[root@firewalld ~]# cat snat_dnat.sh
#!/bin/bash

# enable routing
echo 1 > /proc/sys/net/ipv4/ip_forward

# stop firewalld
systemctl stop firewalld
systemctl disable firewalld

# clear iptables rules
iptables -F
iptables -t nat -F

# enable SNAT: masquerade everything coming from 192.168.2.0/24 as the ens33 address;
# the benefit is that we never need to care what the public IP on ens33 actually is
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o ens33 -j MASQUERADE

# enable DNAT: forward public port 2233 to SSH on the master
iptables -t nat -A PREROUTING -d 192.168.0.169 -i ens33 -p tcp --dport 2233 -j DNAT --to-destination 192.168.2.104:22

# publish web port 80
iptables -t nat -A PREROUTING -d 192.168.0.169 -i ens33 -p tcp --dport 80 -j DNAT --to-destination 192.168.2.104:80
```

```bash
# on the web server
[root@k8smaster ~]# cat open_app.sh
#!/bin/bash

# open ssh
iptables -t filter -A INPUT -p tcp --dport 22 -j ACCEPT

# open dns
iptables -t filter -A INPUT -p udp --dport 53 -s 192.168.2.0/24 -j ACCEPT

# open dhcp
iptables -t filter -A INPUT -p udp --dport 67 -j ACCEPT

# open http/https
iptables -t filter -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -t filter -A INPUT -p tcp --dport 443 -j ACCEPT

# open mysql
iptables -t filter -A INPUT -p tcp --dport 3306 -j ACCEPT

# default policy DROP
iptables -t filter -P INPUT DROP

# drop icmp echo requests
iptables -t filter -A INPUT -p icmp --icmp-type 8 -j DROP
```
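To confirm the NAT rules behave as intended, the rule counters and a test connection can be checked (a sketch; 192.168.0.169 is the public-side address used in the script above):

```bash
# on the firewall: list NAT rules with packet counters; counters should increase on use
iptables -t nat -L -n -v

# from an outside host: port 2233 should land on the master's sshd via DNAT
ssh -p 2233 root@192.168.0.169
```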
### 3. Deploy the NFS server to provide data for the whole web cluster; all web pods access it through a PV, a PVC, and a volume mount

```bash
# 1. Set up the NFS server
[root@nfs ~]# yum install nfs-utils -y

# It is recommended to install nfs-utils on every k8s node as well,
# because creating nfs-backed volumes on a node requires NFS support
[root@k8smaster ~]# yum install nfs-utils -y

[root@k8smaster ~]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service

[root@k8smaster ~]# ps aux | grep nfs
root      87368  0.0  0.0      0     0 ?        S<   16:49   0:00 [nfsd4_callbacks]
root      87374  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      87375  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      87376  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      87377  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      87378  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      87379  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      87380  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      87381  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      96648  0.0  0.0 112824   988 pts/0    S+   17:02   0:00 grep --color=auto nfs

# 2. Define the shared directory
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/web 192.168.2.0/24(rw,no_root_squash,sync)

# 3. Create the shared directory and an index.html
[root@nfs ~]# mkdir /web
[root@nfs ~]# cd /web
[root@nfs web]# echo "welcome to changsha" > index.html
[root@nfs web]# ls
index.html
[root@nfs web]# ll -d /web
drwxr-xr-x. 2 root root 24 6月 18 16:46 /web

# 4. Re-export the shared directories
[root@nfs ~]# exportfs -r   # re-export all shares
[root@nfs ~]# exportfs -v   # show the exported shares
/web 192.168.2.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)

# 5. Restart nfs and enable it on boot
[root@nfs web]# systemctl restart nfs && systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.

# 6. On any k8s node, test mounting the directory shared by the NFS server
[root@k8snode1 ~]# mkdir /node1_nfs
[root@k8snode1 ~]# mount 192.168.2.121:/web /node1_nfs
[root@k8snode1 ~]# df -Th | grep nfs
192.168.2.121:/web nfs4       17G  1.5G   16G    9% /node1_nfs

# 7. Unmount
[root@k8snode1 ~]# umount /node1_nfs
```

```bash
# 8. Create a PV backed by the NFS share
[root@k8smaster pv]# vim nfs-pv.yml
[root@k8smaster pv]# cat nfs-pv.yml
```

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:
    type: pv-web
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs        # storage class name the PVC will reference
  nfs:
    path: /web                 # directory shared by the NFS server
    server: 192.168.2.121      # NFS server IP
    readOnly: false            # mount read-write
```

```bash
[root@k8smaster pv]# kubectl apply -f nfs-pv.yml
persistentvolume/pv-web created
[root@k8smaster pv]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-web   10Gi       RWX            Retain           Available           nfs                     5s

# 9. Create a PVC that uses the PV
[root@k8smaster pv]# vim nfs-pvc.yml
[root@k8smaster pv]# cat nfs-pvc.yml
```

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs   # use the nfs storage class defined in the PV
```

```bash
[root@k8smaster pv]# kubectl apply -f nfs-pvc.yml
persistentvolumeclaim/pvc-web created

[root@k8smaster pv]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-web   Bound    pv-web   10Gi       RWX            nfs            6s

# 10. Create pods that use the PVC
[root@k8smaster pv]# vim nginx-deployment.yaml
[root@k8smaster pv]# cat nginx-deployment.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: sc-pv-storage-nfs
        persistentVolumeClaim:
          claimName: pvc-web
      containers:
      - name: sc-pv-container-nfs
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: http-server
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: sc-pv-storage-nfs
```

```bash
[root@k8smaster pv]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created

[root@k8smaster pv]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
nginx-deployment-76855d4d79-2q4vh   1/1     Running   0          42s   10.244.185.194   k8snode2   <none>           <none>
nginx-deployment-76855d4d79-mvgq7   1/1     Running   0          42s   10.244.185.195   k8snode2   <none>           <none>
nginx-deployment-76855d4d79-zm8v4   1/1     Running   0          42s   10.244.249.3     k8snode1   <none>           <none>

# 11. Test access
[root@k8smaster pv]# curl 10.244.185.194
welcome to changsha
[root@k8smaster pv]# curl 10.244.185.195
welcome to changsha
[root@k8smaster pv]# curl 10.244.249.3
welcome to changsha

[root@k8snode1 ~]# curl 10.244.185.194
welcome to changsha
[root@k8snode1 ~]# curl 10.244.185.195
welcome to changsha
[root@k8snode1 ~]# curl 10.244.249.3
welcome to changsha

[root@k8snode2 ~]# curl 10.244.185.194
welcome to changsha
[root@k8snode2 ~]# curl 10.244.185.195
welcome to changsha
[root@k8snode2 ~]# curl 10.244.249.3
welcome to changsha

# 12. Modify the content on the NFS share
[root@nfs web]# echo "hello,world" >> index.html
[root@nfs web]# cat index.html
welcome to changsha
hello,world

# 13. Access again
[root@k8snode1 ~]# curl 10.244.249.3
welcome to changsha
hello,world
```
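If a mount ever fails, it is worth checking what the server actually exports before digging into k8s; a quick sketch:

```bash
# query the export list of the NFS server from any node
showmount -e 192.168.2.121
# expected output:
# Export list for 192.168.2.121:
# /web 192.168.2.0/24
```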
### 4. Build the CI/CD environment: deploy GitLab, Jenkins, and Harbor for code releases, image builds, data backups, and other pipeline work

#### a. Deploy GitLab

```bash
# deploy gitlab: https://gitlab.cn/install/

[root@localhost ~]# hostnamectl set-hostname gitlab
[root@localhost ~]# su - root
上一次登录: 日 6月 18 18:28:08 CST 2023 从 192.168.2.240 pts/0 上
[root@gitlab ~]# cd /etc/sysconfig/network-scripts/
[root@gitlab network-scripts]# vim ifcfg-ens33
[root@gitlab network-scripts]# service network restart
Restarting network (via systemctl):  [  确定  ]
[root@gitlab network-scripts]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@gitlab network-scripts]# service firewalld stop && systemctl disable firewalld
Redirecting to /bin/systemctl stop firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@gitlab network-scripts]# reboot
[root@gitlab ~]# getenforce
Disabled

# 1. Install the required dependencies
yum install -y curl policycoreutils-python openssh-server perl

# 2. Configure the JiHu GitLab package mirror
[root@gitlab ~]# curl -fsSL https://packages.gitlab.cn/repository/raw/scripts/setup.sh | /bin/bash
==> Detected OS centos

==> Add yum repo file to /etc/yum.repos.d/gitlab-jh.repo

[gitlab-jh]
name=JiHu GitLab
baseurl=https://packages.gitlab.cn/repository/el/$releasever/
gpgcheck=0
gpgkey=https://packages.gitlab.cn/repository/raw/gpg/public.gpg.key
priority=1
enabled=1

==> Generate yum cache for gitlab-jh

==> Successfully added gitlab-jh repo. To install JiHu GitLab, run "sudo yum/dnf install gitlab-jh".

[root@gitlab ~]# yum install gitlab-jh -y
Thank you for installing JiHu GitLab!
GitLab was unable to detect a valid hostname for your instance.
Please configure a URL for your JiHu GitLab instance by setting `external_url`
configuration in /etc/gitlab/gitlab.rb file.
Then, you can start your JiHu GitLab instance by running the following command:

  sudo gitlab-ctl reconfigure

For a comprehensive list of configuration options please see the Omnibus GitLab readme
https://jihulab.com/gitlab-cn/omnibus-gitlab/-/blob/main-jh/README.md

Help us improve the installation experience, let us know how we did with a 1 minute survey:
https://wj.qq.com/s2/10068464/dc66

[root@gitlab ~]# vim /etc/gitlab/gitlab.rb
external_url 'http://myweb.first.com'

[root@gitlab ~]# gitlab-ctl reconfigure
Notes:
Default admin account has been configured with following details:
Username: root
Password: You didn't opt-in to print initial root password to STDOUT.
Password stored to /etc/gitlab/initial_root_password. This file will be cleaned up in first reconfigure run after 24 hours.

NOTE: Because these credentials might be present in your log files in plain text, it is highly recommended to reset the password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.

gitlab Reconfigured!

# read the initial password
[root@gitlab ~]# cat /etc/gitlab/initial_root_password
# WARNING: This value is valid only in the following conditions
#          1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run).
#          2. Password hasn't been changed manually, either via UI or via command line.
#
#          If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.

Password: Al5rgYomhXDz5kNfDl3y8qunrSX334aZZxX5vONJ05s

# NOTE: This file will be automatically deleted in the first reconfigure run after 24 hours.
```
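If that password file has already been cleaned up, GitLab ships a rake task for setting a new root password interactively; a sketch of the documented flow:

```bash
# interactively reset the root password on the GitLab host (GitLab 14.7+)
gitlab-rake "gitlab:password:reset[root]"
```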
```bash
# after logging in you can switch the UI language to Chinese
# (user profile -> preferences)

# and change the password

[root@gitlab ~]# gitlab-rake gitlab:env:info

System information
System:
Proxy:          no
Current User:   git
Using RVM:      no
Ruby Version:   3.0.6p216
Gem Version:    3.4.13
Bundler Version: 2.4.13
Rake Version:   13.0.6
Redis Version:  6.2.11
Sidekiq Version: 6.5.7
Go Version:     unknown

GitLab information
Version:        16.0.4-jh
Revision:       c2ed99db36f
Directory:      /opt/gitlab/embedded/service/gitlab-rails
DB Adapter:     PostgreSQL
DB Version:     13.11
URL:            http://myweb.first.com
HTTP Clone URL: http://myweb.first.com/some-group/some-project.git
SSH Clone URL:  git@myweb.first.com:some-group/some-project.git
Elasticsearch:  no
Geo:            no
Using LDAP:     no
Using Omniauth: yes
Omniauth Providers:

GitLab Shell
Version:        14.20.0
Repository storages:
- default:      unix:/var/opt/gitlab/gitaly/gitaly.socket
GitLab Shell path: /opt/gitlab/embedded/service/gitlab-shell
```

#### b. Deploy Jenkins

```bash
# deploy Jenkins inside k8s
# 1. Install git
[root@k8smaster jenkins]# yum install git -y

# 2. Download the yaml files
[root@k8smaster jenkins]# git clone https://github.com/scriptcamp/kubernetes-jenkins
正克隆到 'kubernetes-jenkins'...
remote: Enumerating objects: 16, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 16 (delta 1), reused 0 (delta 0), pack-reused 9
Unpacking objects: 100% (16/16), done.
[root@k8smaster jenkins]# ls
kubernetes-jenkins
[root@k8smaster jenkins]# cd kubernetes-jenkins/
[root@k8smaster kubernetes-jenkins]# ls
deployment.yaml  namespace.yaml  README.md  serviceAccount.yaml  service.yaml  volume.yaml

# 3. Create the namespace
[root@k8smaster kubernetes-jenkins]# cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: devops-tools
[root@k8smaster kubernetes-jenkins]# kubectl apply -f namespace.yaml
namespace/devops-tools created

[root@k8smaster kubernetes-jenkins]# kubectl get ns
NAME              STATUS   AGE
default           Active   22h
devops-tools      Active   19s
ingress-nginx     Active   139m
kube-node-lease   Active   22h
kube-public       Active   22h
kube-system       Active   22h

# 4. Create the service account and cluster role binding
[root@k8smaster kubernetes-jenkins]# cat serviceAccount.yaml
```

```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-admin
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: devops-tools
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-admin
subjects:
- kind: ServiceAccount
  name: jenkins-admin
```

```bash
[root@k8smaster kubernetes-jenkins]# kubectl apply -f serviceAccount.yaml
clusterrole.rbac.authorization.k8s.io/jenkins-admin created
serviceaccount/jenkins-admin created
clusterrolebinding.rbac.authorization.k8s.io/jenkins-admin created
```
```bash
# 5. Create the volume for persistent data
[root@k8smaster kubernetes-jenkins]# cat volume.yaml
```

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  claimRef:
    name: jenkins-pv-claim
    namespace: devops-tools
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8snode1   # change to the name of a node in your k8s cluster
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: devops-tools
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```

```bash
[root@k8smaster kubernetes-jenkins]# kubectl apply -f volume.yaml
storageclass.storage.k8s.io/local-storage created
persistentvolume/jenkins-pv-volume created
persistentvolumeclaim/jenkins-pv-claim created

[root@k8smaster kubernetes-jenkins]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS    REASON   AGE
jenkins-pv-volume   10Gi       RWO            Retain           Bound    devops-tools/jenkins-pv-claim   local-storage            33s
pv-web              10Gi       RWX            Retain           Bound    default/pvc-web                 nfs                      21h

[root@k8smaster kubernetes-jenkins]# kubectl describe pv jenkins-pv-volume
Name:              jenkins-pv-volume
Labels:            type=local
Annotations:       <none>
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-storage
Status:            Bound
Claim:             devops-tools/jenkins-pv-claim
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          10Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [k8snode1]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /mnt
Events:    <none>

# 6. Deploy Jenkins
[root@k8smaster kubernetes-jenkins]# cat deployment.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: jenkins-admin
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 2Gi
            cpu: 1000m
          requests:
            memory: 500Mi
            cpu: 500m
        ports:
        - name: httpport
          containerPort: 8080
        - name: jnlpport
          containerPort: 50000
        livenessProbe:
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 90
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        volumeMounts:
        - name: jenkins-data
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-data
        persistentVolumeClaim:
          claimName: jenkins-pv-claim
```

```bash
[root@k8smaster kubernetes-jenkins]# kubectl apply -f deployment.yaml
deployment.apps/jenkins created

[root@k8smaster kubernetes-jenkins]# kubectl get deploy -n devops-tools
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
jenkins   1/1     1            1           5m36s

[root@k8smaster kubernetes-jenkins]# kubectl get pod -n devops-tools
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7fdc8dd5fd-bg66q   1/1     Running   0          19s

# 7. Create the service to publish the Jenkins pod
[root@k8smaster kubernetes-jenkins]# cat service.yaml
```

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
  namespace: devops-tools
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /
    prometheus.io/port: "8080"
spec:
  selector:
    app: jenkins-server
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 32000
```

```bash
[root@k8smaster kubernetes-jenkins]# kubectl apply -f service.yaml
service/jenkins-service created

[root@k8smaster kubernetes-jenkins]# kubectl get svc -n devops-tools
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
jenkins-service   NodePort   10.104.76.252   <none>        8080:32000/TCP   24s

# 8. From a Windows machine, browse to Jenkins at <node-ip>:<node-port>
# http://192.168.2.104:32000/login?from=%2F

# 9. Read the initial login password from inside the pod
[root@k8smaster kubernetes-jenkins]# kubectl exec -it jenkins-7fdc8dd5fd-bg66q -n devops-tools -- bash
bash-5.1$ cat /var/jenkins_home/secrets/initialAdminPassword
b0232e2dad164f89ad2221e4c46b0d46
```
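The NodePort on 32000 is one way in; while debugging, kubectl can also tunnel to the Service without touching node ports (a sketch):

```bash
# forward local port 8080 to the Jenkins service inside the cluster
kubectl port-forward svc/jenkins-service -n devops-tools 8080:8080
# then browse to http://localhost:8080
```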
```bash
# change the password in the UI, then check the pod
[root@k8smaster kubernetes-jenkins]# kubectl get pod -n devops-tools
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7fdc8dd5fd-5nn7m   1/1     Running   0          91s
```

#### c. Deploy Harbor

```bash
# prerequisite: docker and docker compose are installed
# 1. Configure the Alibaba Cloud repo
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# 2. Install docker
yum install docker-ce-20.10.6 -y

# start docker and enable it on boot
systemctl start docker
systemctl enable docker.service

# 3. Check the docker and docker compose versions
[root@harbor ~]# docker version
Client: Docker Engine - Community
 Version:           24.0.2
 API version:       1.41 (downgraded from 1.43)
 Go version:        go1.20.4
 Git commit:        cb74dfc
 Built:             Thu May 25 21:55:21 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          20.10.6
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8728dd2
  Built:            Fri Apr  9 22:43:57 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.21
  GitCommit:        3dce8eb055cbb6872793272b4f20ed16117344f8
 runc:
  Version:          1.1.7
  GitCommit:        v1.1.7-0-g860f061
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

[root@harbor ~]# docker compose version
Docker Compose version v2.18.1

# 4. Install docker-compose
[root@harbor ~]# ls
anaconda-ks.cfg  docker-compose-linux-x86_64  harbor
[root@harbor ~]# chmod +x docker-compose-linux-x86_64
[root@harbor ~]# mv docker-compose-linux-x86_64 /usr/local/sbin/docker-compose

# 5. Install harbor: download the source tarball from the Harbor website or GitHub
[root@harbor harbor]# ls
harbor-offline-installer-v2.4.1.tgz

# 6. Unpack it
[root@harbor harbor]# tar xf harbor-offline-installer-v2.4.1.tgz
[root@harbor harbor]# ls
harbor  harbor-offline-installer-v2.4.1.tgz
[root@harbor harbor]# cd harbor
[root@harbor harbor]# ls
common.sh  harbor.v2.4.1.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare
[root@harbor harbor]# pwd
/root/harbor/harbor

# 7. Edit the configuration file
[root@harbor harbor]# cat harbor.yml
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 192.168.2.106   # change to the host's IP address

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 5000              # change to another port

# https can be disabled entirely
# https related config
#https:
  # https port for harbor, default is 443
  #port: 443
  # The path of cert and key files for nginx
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path

# # Uncomment following will enable tls communication between all harbor components
# internal_tls:
#   # set enabled to true means internal tls is enabled
#   enabled: true
#   # put your cert and key files on dir
#   dir: /etc/harbor/tls/internal

# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433

# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: Harbor12345   # login password

# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123
  # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  max_idle_conns: 100
  # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for postgres of harbor.
  max_open_conns: 900

# The default data volume
data_volume: /data
```
```bash
# 8. Run the installer
[root@harbor harbor]# ./install.sh

[Step 0]: checking if docker is installed ...

Note: docker version: 24.0.2

[Step 1]: checking docker-compose is installed ...
✖ Need to install docker-compose(1.18.0+) by yourself first and run this script again.

[root@harbor harbor]# ./install.sh
[+] Running 10/10
 ⠿ Network harbor_harbor        Created    0.7s
 ⠿ Container harbor-log         Started    1.6s
 ⠿ Container registry           Started    5.2s
 ⠿ Container harbor-db          Started    4.9s
 ⠿ Container harbor-portal      Started    5.1s
 ⠿ Container registryctl        Started    4.8s
 ⠿ Container redis              Started    3.9s
 ⠿ Container harbor-core        Started    6.5s
 ⠿ Container harbor-jobservice  Started    9.0s
 ⠿ Container nginx              Started    9.1s
✔ ----Harbor has been installed and started successfully.----

# 9. Start Harbor on boot
[root@harbor harbor]# vim /etc/rc.local
[root@harbor harbor]# cat /etc/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local
/usr/local/sbin/docker-compose -f /root/harbor/harbor/docker-compose.yml up -d

# 10. Set permissions
[root@harbor harbor]# chmod +x /etc/rc.local /etc/rc.d/rc.local

# 11. Log in
# http://192.168.2.106:5000/
# account:  admin
# password: Harbor12345

# create a new project, then test by pushing nginx to harbor
[root@harbor harbor]# docker image ls | grep nginx
nginx                   latest   605c77e624dd   17 months ago   141MB
goharbor/nginx-photon   v2.4.1   78aad8c8ef41   18 months ago   45.7MB

[root@harbor harbor]# docker tag nginx:latest 192.168.2.106:5000/test/nginx1:v1

[root@harbor harbor]# docker image ls | grep nginx
192.168.2.106:5000/test/nginx1   v1       605c77e624dd   17 months ago   141MB
nginx                            latest   605c77e624dd   17 months ago   141MB
goharbor/nginx-photon            v2.4.1   78aad8c8ef41   18 months ago   45.7MB

[root@harbor harbor]# docker push 192.168.2.106:5000/test/nginx1:v1
The push refers to repository [192.168.2.106:5000/test/nginx1]
Get "https://192.168.2.106:5000/v2/": http: server gave HTTP response to HTTPS client

[root@harbor harbor]# vim /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.2.106:5000"]
}

[root@harbor harbor]# docker login 192.168.2.106:5000
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
```
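Note that the `insecure-registries` edit only takes effect once the docker daemon reloads its configuration; if the login or the following push still fails with the HTTPS error, restart docker first (a sketch of the usual pair of commands):

```bash
# apply the /etc/docker/daemon.json change
systemctl daemon-reload
systemctl restart docker
```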
```bash
[root@harbor harbor]# docker push 192.168.2.106:5000/test/nginx1:v1
The push refers to repository [192.168.2.106:5000/test/nginx1]
d874fd2bc83b: Pushed
32ce5f6a5106: Pushed
f1db227348d0: Pushed
b8d6e692a25e: Pushed
e379e8aedd4d: Pushed
2edcec3590a4: Pushed
v1: digest: sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3 size: 1570

[root@harbor harbor]# cat /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.2.106:5000"]
}
```

### 5. Build an image from the self-developed Go web API and deploy it to k8s as the web application; use HPA to scale horizontally when CPU usage reaches 50% (minimum 20 business pods, maximum 40)

```bash
# every k8s node logs in to harbor so it can pull images from it
[root@k8snode2 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "insecure-registries": ["192.168.2.106:5000"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# reload the configuration and restart docker
systemctl daemon-reload
systemctl restart docker

# log in to harbor
[root@k8smaster mysql]# docker login 192.168.2.106:5000
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[root@k8snode1 ~]# docker login 192.168.2.106:5000
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[root@k8snode2 ~]# docker login 192.168.2.106:5000
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

# test pulling the nginx image from harbor
[root@k8snode1 ~]# docker pull 192.168.2.106:5000/test/nginx1:v1

[root@k8snode1 ~]# docker images
REPOSITORY                       TAG      IMAGE ID       CREATED         SIZE
mysql                            5.7.42   2be84dd575ee   5 days ago      569MB
nginx                            latest   605c77e624dd   17 months ago   141MB
192.168.2.106:5000/test/nginx1   v1       605c77e624dd   17 months ago   141MB

# build the image
[root@harbor ~]# cd go
[root@harbor go]# ls
scweb  Dockerfile
[root@harbor go]# cat Dockerfile
FROM centos:7
WORKDIR /go
COPY . /go
RUN ls /go && pwd
ENTRYPOINT ["/go/scweb"]

[root@harbor go]# docker build -t scweb:1.1 .

[root@harbor go]# docker image ls | grep scweb
scweb   1.1   f845e97e9dfd   4 hours ago   214MB

[root@harbor go]# docker tag scweb:1.1 192.168.2.106:5000/test/web:v2

[root@harbor go]# docker image ls | grep web
192.168.2.106:5000/test/web   v2    00900ace4935   4 minutes ago   214MB
scweb                         1.1   00900ace4935   4 minutes ago   214MB

[root@harbor go]# docker push 192.168.2.106:5000/test/web:v2
The push refers to repository [192.168.2.106:5000/test/web]
3e252407b5c2: Pushed
193a27e04097: Pushed
b13a87e7576f: Pushed
174f56854903: Pushed
v2: digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29 size: 1153
```
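The Dockerfile copies a prebuilt `scweb` binary into a `centos:7` base. If the binary ever has to be rebuilt elsewhere, compiling it statically avoids glibc mismatches inside the image; a sketch, assuming the Go sources live in the current module directory:

```bash
# build a static linux/amd64 binary so it runs in any base image
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o scweb .
```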
```bash
[root@k8snode1 ~]# docker login 192.168.2.106:5000
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[root@k8snode1 ~]# docker pull 192.168.2.106:5000/test/web:v2
v2: Pulling from test/web
2d473b07cdd5: Pull complete
bc5e56dd1476: Pull complete
694440c745ce: Pull complete
78694d1cffbb: Pull complete
Digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29
Status: Downloaded newer image for 192.168.2.106:5000/test/web:v2
192.168.2.106:5000/test/web:v2

[root@k8snode1 ~]# docker images
REPOSITORY                    TAG   IMAGE ID       CREATED       SIZE
192.168.2.106:5000/test/web   v2    f845e97e9dfd   4 hours ago   214MB

[root@k8snode2 ~]# docker login 192.168.2.106:5000
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[root@k8snode2 ~]# docker pull 192.168.2.106:5000/test/web:v2
v2: Pulling from test/web
2d473b07cdd5: Pull complete
bc5e56dd1476: Pull complete
694440c745ce: Pull complete
78694d1cffbb: Pull complete
Digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29
Status: Downloaded newer image for 192.168.2.106:5000/test/web:v2
192.168.2.106:5000/test/web:v2

[root@k8snode2 ~]# docker images
REPOSITORY                    TAG   IMAGE ID       CREATED       SIZE
192.168.2.106:5000/test/web   v2    f845e97e9dfd   4 hours ago   214MB
```

```bash
# Use HPA to scale horizontally at 50% CPU, minimum 1 pod, maximum 10 pods.
# HorizontalPodAutoscaler (HPA) automatically updates a workload resource (such as a
# Deployment) with the aim of scaling the workload to match demand.
# https://kubernetes.io/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/

# 1. Install metrics-server
# download the components.yaml manifest
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# replace the image and add the two extra args:
#   image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
#   imagePullPolicy: IfNotPresent
#   args:
#   - --kubelet-insecure-tls                # new
#   - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname   # new

# edited components.yaml
[root@k8smaster ~]# cat components.yaml
    spec:
      containers:
      - args:
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
        imagePullPolicy: IfNotPresent

# install it
[root@k8smaster metrics]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

# check the result
[root@k8smaster metrics]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6949477b58-xdk88   1/1     Running   1          22h
calico-node-4knc8                          1/1     Running   4          22h
calico-node-8jzrn                          1/1     Running   1          22h
calico-node-9d7pt                          1/1     Running   2          22h
coredns-7f89b7bc75-52c4x                   1/1     Running   2          22h
coredns-7f89b7bc75-82jrx                   1/1     Running   1          22h
etcd-k8smaster                             1/1     Running   1          22h
kube-apiserver-k8smaster                   1/1     Running   1          22h
kube-controller-manager-k8smaster          1/1     Running   1          22h
kube-proxy-8wp9c                           1/1     Running   2          22h
kube-proxy-d46jp                           1/1     Running   1          22h
kube-proxy-whg4f                           1/1     Running   1          22h
kube-scheduler-k8smaster                   1/1     Running   1          22h
metrics-server-6c75959ddf-hw7cs            1/1     Running   0          61s
```
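Besides `kubectl top`, the metrics API itself can be queried directly to confirm the APIService is wired up; a sketch:

```bash
# raw query against the metrics API; JSON output means metrics-server is serving data
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | head -c 300; echo
```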
```bash
# if the following command shows pod metrics, metrics-server is installed successfully
[root@k8smaster metrics]# kubectl top node
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8smaster   322m         16%    1226Mi          71%
k8snode1    215m         10%    874Mi           50%
k8snode2    190m         9%     711Mi           41%

# make sure metrics-server is healthy: check the pod and the apiservice
[root@k8smaster HPA]# kubectl get pod -n kube-system | grep metrics
metrics-server-6c75959ddf-hw7cs   1/1     Running   4          6h35m

[root@k8smaster HPA]# kubectl get apiservice | grep metrics
v1beta1.metrics.k8s.io   kube-system/metrics-server   True   6h35m

[root@k8smaster HPA]# kubectl top node
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8smaster   349m         17%    1160Mi          67%
k8snode1    271m         13%    1074Mi          62%
k8snode2    226m         11%    1224Mi          71%

# check the image on the node machines
[root@k8snode1 ~]# docker images | grep metrics
registry.aliyuncs.com/google_containers/metrics-server   v0.6.0   5787924fe1d8   17 months ago   68.8MB
kubernetesui/metrics-scraper                             v1.0.7   7801cfc6d5c0   2 years ago     34.4MB

# 2. Start the web app from a yaml file and expose it
[root@k8smaster hpa]# cat my-web.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.2.106:5000/test/web:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
    nodePort: 30001
```

```bash
[root@k8smaster HPA]# kubectl apply -f my-web.yaml
deployment.apps/myweb created
service/myweb-svc created

# 3. Create the HPA
[root@k8smaster HPA]# kubectl autoscale deployment myweb --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/myweb autoscaled

[root@k8smaster HPA]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myweb-6dc7b4dfcb-9q85g   1/1     Running   0          9s
myweb-6dc7b4dfcb-ddq82   1/1     Running   0          9s
myweb-6dc7b4dfcb-l7sw7   1/1     Running   0          9s
[root@k8smaster HPA]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          3d2h
myweb-svc    NodePort    10.102.83.168   <none>        8000:30001/TCP   15s
[root@k8smaster HPA]# kubectl get hpa
NAME    REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myweb   Deployment/myweb   <unknown>/50%   1         10        3          16s

# 4. Access the service
# http://192.168.2.112:30001/

[root@k8smaster HPA]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myweb   Deployment/myweb   1%/50%    1         10        1          11m

[root@k8smaster HPA]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myweb-6dc7b4dfcb-ddq82   1/1     Running   0          10m

# 5. Delete the HPA
[root@k8smaster HPA]# kubectl delete hpa myweb
```
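To actually see the HPA scale out, CPU load has to be generated against the service. The pattern from the Kubernetes HPA walkthrough, adapted here to this service's name and port (a sketch), is:

```bash
# hammer the web service in a loop from a throwaway pod, then watch the HPA react
kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never \
  -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://myweb-svc:8000; done"

# in another terminal: REPLICAS should climb toward the max as CPU passes 50%
kubectl get hpa myweb --watch
```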
### 6. Start a MySQL pod to provide database service for the web application

```bash
[root@k8smaster mysql]# cat mysql-deployment.yaml
```

```yaml
# Deployment for mysql
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7.42
        name: mysql
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
        ports:
        - containerPort: 3306
---
# Service for mysql
apiVersion: v1
kind: Service
metadata:
  labels:
    app: svc-mysql
  name: svc-mysql
spec:
  selector:
    app: mysql
  type: NodePort
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 30007
```

```bash
[root@k8smaster mysql]# kubectl apply -f mysql-deployment.yaml
deployment.apps/mysql created
service/svc-mysql created

[root@k8smaster mysql]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          28h
svc-mysql    NodePort    10.105.96.217   <none>        3306:30007/TCP   10m

[root@k8smaster mysql]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
mysql-5f9bccd855-6kglf   1/1     Running   0          8m59s

[root@k8smaster mysql]# kubectl exec -it mysql-5f9bccd855-6kglf -- bash
bash-4.2# mysql -uroot -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.42 MySQL Community Server (GPL)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.01 sec)

mysql> exit
Bye
bash-4.2# exit
exit
[root@k8smaster mysql]#
```

Connecting the web service to the MySQL database can be done in two ways.

Option 1: add a named port to the mysql service and pass the connection details to the web pod via environment variables:

```yaml
    # in the mysql Service
    ports:
    - name: mysql
      protocol: TCP
      port: 3306
      targetPort: 3306

    # in the web pod spec
    env:
    - name: MYSQL_HOST
      value: mysql
    - name: MYSQL_PORT
      value: "3306"
```

Option 2: install the MySQL driver and initialize it in the Go code:

```go
// 1. import the required packages and the driver
import (
    "database/sql"
    "fmt"

    _ "github.com/go-sql-driver/mysql" // register the MySQL driver
)

// 2. open the database connection
db, err := sql.Open("mysql", "username:password@tcp(hostname:port)/dbname")
if err != nil {
    fmt.Println("Failed to connect to database:", err)
    return
}
defer db.Close() // remember to close the connection
```
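With the Service in place, connectivity can be checked from a disposable client pod before wiring the web app to it; a sketch that reuses mysql:5.7.42 as the client image:

```bash
# run a one-off mysql client inside the cluster against the service name
kubectl run mysql-client --image=mysql:5.7.42 -it --rm --restart=Never -- \
  mysql -h svc-mysql -uroot -p123456 -e 'SELECT VERSION();'
```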
#### a. Try deploying stateful MySQL on k8s

```bash
# 1. Create the ConfigMap
[root@k8smaster mysql]# cat mysql-configmap.yaml
```

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  primary.cnf: |
    # Apply this config only on the primary.
    [mysqld]
    log-bin
  replica.cnf: |
    # Apply this config only on replicas.
    [mysqld]
    super-read-only
```

```bash
[root@k8smaster mysql]# kubectl apply -f mysql-configmap.yaml
configmap/mysql created

[root@k8smaster mysql]# kubectl get cm
NAME               DATA   AGE
kube-root-ca.crt   1      6d22h
mysql              2      5s

# 2. Create the services
[root@k8smaster mysql]# cat mysql-services.yaml
```

```yaml
# Headless service that gives the StatefulSet members stable DNS entries
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
    app.kubernetes.io/name: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must connect to the primary: mysql-0.mysql
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
    app.kubernetes.io/name: mysql
    readonly: "true"
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
```

```bash
[root@k8smaster mysql]# kubectl apply -f mysql-services.yaml
service/mysql created
service/mysql-read created

[root@k8smaster mysql]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    6d22h
mysql        ClusterIP   None             <none>        3306/TCP   7s
mysql-read   ClusterIP   10.102.31.144    <none>        3306/TCP   7s

# 3. Create the StatefulSet
[root@k8smaster mysql]# cat mysql-statefulset.yaml
```

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
      app.kubernetes.io/name: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
        app.kubernetes.io/name: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7.42
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate the MySQL server ID from the Pod ordinal.
          [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid the reserved value server-id=0.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy the appropriate conf.d file from the config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/primary.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/replica.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: registry.cn-hangzhou.aliyuncs.com/google_samples_thepoy/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on the primary (ordinal 0).
          [[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from the previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7.42
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check whether we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: registry.cn-hangzhou.aliyuncs.com/google_samples_thepoy/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql

          # Determine the binlog position of the cloned data, if any.
          if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we cloned from an existing replica. (Strip the trailing semicolon!)
            cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
            # Ignore xtrabackup_binlog_info here; it is useless in this case.
            rm -f xtrabackup_slave_info xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We cloned directly from the primary. Parse the binlog position.
            [[ $(cat xtrabackup_binlog_info) =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm -f xtrabackup_binlog_info xtrabackup_slave_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi

          # Check whether we need to finish the clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done

            echo "Initializing replication from clone position"
            mysql -h 127.0.0.1 \
                  -e "$(<change_master_to.sql.in), \
                          MASTER_HOST='mysql-0.mysql', \
                          MASTER_USER='root', \
                          MASTER_PASSWORD='', \
                          MASTER_CONNECT_RETRY=10; \
                        START SLAVE;" || exit 1
            # In case of container restart, attempt this at most once.
            mv change_master_to.sql.in change_master_to.sql.orig
          fi

          # When a peer asks, start a server to send backups.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

```bash
[root@k8smaster mysql]# kubectl apply -f mysql-statefulset.yaml
statefulset.apps/mysql created

[root@k8smaster mysql]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   0/2     Pending   0          3s

[root@k8smaster mysql]# kubectl describe pod mysql-0
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  16s (x2 over 16s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
```
```bash
[root@k8smaster mysql]# kubectl get pvc
NAME           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-mysql-0   Pending                                                     3m27s

[root@k8smaster mysql]# kubectl get pvc data-mysql-0 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2023-06-25T06:17:36Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: mysql
    app.kubernetes.io/name: mysql

# create a PV for the claim
[root@k8smaster mysql]# cat mysql-pv.yaml
```

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /data/db          # directory shared by the NFS server
    server: 192.168.2.121   # NFS server IP
```

```bash
[root@k8smaster mysql]# kubectl apply -f mysql-pv.yaml
persistentvolume/mysql-pv created

[root@k8smaster mysql]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                           STORAGECLASS    REASON   AGE
jenkins-pv-volume   10Gi       RWO            Retain           Terminating   devops-tools/jenkins-pv-claim   local-storage            5d23h
mysql-pv            1Gi        RWO            Retain           Terminating   default/data-mysql-0                                     15m

# clear the stuck finalizers
[root@k8smaster mysql]# kubectl patch pv jenkins-pv-volume -p '{"metadata":{"finalizers":null}}'
persistentvolume/jenkins-pv-volume patched

[root@k8smaster mysql]# kubectl patch pv mysql-pv -p '{"metadata":{"finalizers":null}}'
persistentvolume/mysql-pv patched

[root@k8smaster mysql]# kubectl get pv
No resources found

[root@k8smaster mysql]# kubectl get pod
NAME      READY   STATUS     RESTARTS   AGE
mysql-0   0/2     Init:0/2   0          7m20s

[root@k8smaster mysql]# kubectl describe pod mysql-0
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  10m (x3 over 10m)      default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 pvc(s) bound to non-existent pv(s).
  Normal   Scheduled         10m                    default-scheduler  Successfully assigned default/mysql-0 to k8snode2
  Warning  FailedMount       10m                    kubelet            Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data conf config-map default-token-24tkk]: error processing PVC default/data-mysql-0: PVC is not bound
  Warning  FailedMount       9m46s                  kubelet            Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[default-token-24tkk data conf config-map]: error processing PVC default/data-mysql-0: PVC is not bound
  Warning  FailedMount       5m15s                  kubelet            Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data conf config-map default-token-24tkk]: timed out waiting for the condition
  Warning  FailedMount       3m                     kubelet            Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[config-map default-token-24tkk data conf]: timed out waiting for the condition
  Warning  FailedMount       74s (x12 over 9m31s)   kubelet            MountVolume.SetUp failed for volume "mysql-pv" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 192.168.2.121:/data/db /var/lib/kubelet/pods/424bb72d-8bf5-400f-b954-7fa3666ca0b3/volumes/kubernetes.io~nfs/mysql-pv
Output: mount.nfs: mounting 192.168.2.121:/data/db failed, reason given by server: No such file or directory
  Warning  FailedMount       42s (x2 over 7m29s)    kubelet            Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[conf config-map default-token-24tkk data]: timed out waiting for the condition

# the exported directories do not exist yet; create them on the NFS server
[root@nfs data]# pwd
/data
[root@nfs data]# mkdir db replica replica-3
[root@nfs data]# ls
db  replica  replica-3

[root@k8smaster mysql]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   2/2     Running   0          21m
mysql-1   0/2     Pending   0          2m34s

[root@k8smaster mysql]# kubectl describe pod mysql-1
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  58s (x4 over 3m22s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
```
# mysql-1's claim is unbound as well: create a PV backed by /data/replica for it
[root@k8smaster mysql]# cat mysql-pv-2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/replica     # directory exported by the NFS server
    server: 192.168.2.121   # IP address of the NFS server

[root@k8smaster mysql]# kubectl apply -f mysql-pv-2.yaml
persistentvolume/mysql-pv-2 created

[root@k8smaster mysql]# kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
mysql-pv     1Gi        RWO            Retain           Bound    default/data-mysql-0                          24m
mysql-pv-2   1Gi        RWO            Retain           Bound    default/data-mysql-1                          7s

[root@k8smaster mysql]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   2/2     Running   0          25m
mysql-1   1/2     Running   0          7m20s

[root@k8smaster mysql]# cat mysql-pv-3.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-3
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/replica-3   # directory exported by the NFS server (must match the directory created above)
    server: 192.168.2.121   # IP address of the NFS server

[root@k8smaster mysql]# kubectl apply -f mysql-pv-3.yaml
persistentvolume/mysql-pv-3 created

[root@k8smaster mysql]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   2/2     Running   0          29m
mysql-1   2/2     Running   0          11m
mysql-2   0/2     Pending   0          3m46s

[root@k8smaster mysql]# kubectl describe pod mysql-2
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  2m13s (x4 over 4m16s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  47s (x2 over 2m5s)     default-scheduler  0/3 nodes are available: 1 Insufficient cpu, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient memory.
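The three PVs above differ only in name and NFS path. A minimal sketch (assuming the same NFS server 192.168.2.121 and the /data/db, /data/replica, /data/replica-3 exports created above; the names mysql-pv-0/1/2 are illustrative, the manual steps used mysql-pv, mysql-pv-2 and mysql-pv-3) that pre-creates one PV per StatefulSet replica in a loop:

i=0
for dir in db replica replica-3
do
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-$i          # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/$dir         # one export per replica
    server: 192.168.2.121
EOF
i=$((i+1))
done

Creating all three PVs before scaling the StatefulSet avoids the "pod has unbound immediate PersistentVolumeClaims" scheduling errors seen above.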
7. Use probes (liveness, readiness, startup) with the httpGet and exec methods to monitor the web pods; restart them as soon as a problem appears to make the business pods more reliable

        livenessProbe:
          exec:
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          exec:
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5
          periodSeconds: 5
        startupProbe:
          httpGet:
            path: /
            port: 8000
          failureThreshold: 30
          periodSeconds: 10

[root@k8smaster probe]# vim my-web.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.2.106:5000/test/web:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
        livenessProbe:
          exec:
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          exec:
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5
          periodSeconds: 5
        startupProbe:
          httpGet:
            path: /
            port: 8000
          failureThreshold: 30
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
    nodePort: 30001

[root@k8smaster probe]# kubectl apply -f my-web.yaml
deployment.apps/myweb created
service/myweb-svc created

[root@k8smaster probe]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myweb-6b89fb9c7b-4cdh9   1/1     Running   0          53s
myweb-6b89fb9c7b-dh87w   1/1     Running   0          53s
myweb-6b89fb9c7b-zvc52   1/1     Running   0          53s

[root@k8smaster probe]# kubectl describe pod myweb-6b89fb9c7b-4cdh9
Name:         myweb-6b89fb9c7b-4cdh9
Namespace:    default
Priority:     0
Node:         k8snode2/192.168.2.112
Start Time:   Thu, 22 Jun 2023 16:47:20 +0800
Labels:       app=myweb
              pod-template-hash=6b89fb9c7b
Annotations:  cni.projectcalico.org/podIP: 10.244.185.219/32
              cni.projectcalico.org/podIPs: 10.244.185.219/32
Status:       Running
IP:           10.244.185.219
IPs:
  IP:           10.244.185.219
Controlled By:  ReplicaSet/myweb-6b89fb9c7b
Containers:
  myweb:
    Container ID:   docker://8c55c0c825483f86e4b3c87413984415b2ccf5cad78ed005eed8bedb4252c130
    Image:          192.168.2.106:5000/test/web:v2
    Image ID:       docker-pullable://192.168.2.106:5000/test/web@sha256:3bef039aa5c13103365a6868c9f052a000de376a45eaffcbad27d6ddb1f6e354
    Port:           8000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 22 Jun 2023 16:47:23 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:  300m
    Requests:
      cpu:        100m
    Liveness:     exec [ls /tmp] delay=5s timeout=1s period=5s #success=1 #failure=3
    Readiness:    exec [ls /tmp] delay=5s timeout=1s period=5s #success=1 #failure=3
    Startup:      http-get http://:8000/ delay=0s timeout=1s period=10s #success=1 #failure=30
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-24tkk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-24tkk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-24tkk
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  55s   default-scheduler  Successfully assigned default/myweb-6b89fb9c7b-4cdh9 to k8snode2
  Normal  Pulled     52s   kubelet            Container image "192.168.2.106:5000/test/web:v2" already present on machine
  Normal  Created    52s   kubelet            Created container myweb
  Normal  Started    52s   kubelet            Started container myweb
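To watch the exec probes act, you can deliberately break the condition they check in one pod. A test sketch (the pod name comes from the transcript above; whether rm can remove /tmp depends on the image, so treat this as illustrative):

# 'ls /tmp' starts failing, so after 3 failed liveness checks
# (periodSeconds=5, failureThreshold=3, roughly 15s) kubelet restarts the container
kubectl exec myweb-6b89fb9c7b-4cdh9 -- rm -rf /tmp

# watch RESTARTS go from 0 to 1
kubectl get pod myweb-6b89fb9c7b-4cdh9 -w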
8. Use ingress to load-balance the web service; use the dashboard to keep the whole cluster's resources under control

# The ingress controller is essentially an nginx instance, used here for load balancing.
# An Ingress is the k8s object that manages the nginx configuration (nginx.conf) and passes parameters to the ingress controller.

[root@k8smaster ingress]# ls
ingress-controller-deploy.yaml         kube-webhook-certgen-v1.1.0.tar.gz  sc-nginx-svc-1.yaml
ingress-nginx-controllerv1.1.0.tar.gz  sc-ingress.yaml

ingress-controller-deploy.yaml          yaml used to deploy the ingress controller
ingress-nginx-controllerv1.1.0.tar.gz   ingress-nginx-controller image
kube-webhook-certgen-v1.1.0.tar.gz      kube-webhook-certgen image
sc-ingress.yaml                         yaml that creates the Ingress object
sc-nginx-svc-1.yaml                     yaml that starts the sc-nginx-svc-1 service and its pods
nginx-deployment-nginx-svc-2.yaml       yaml that starts the nginx-deployment-nginx-svc-2 service and its pods

# Major step 1: install the ingress controller
# 1. scp the images to all node servers
[root@k8smaster ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz k8snode1:/root
ingress-nginx-controllerv1.1.0.tar.gz    100%  276MB 101.1MB/s   00:02
[root@k8smaster ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz k8snode2:/root
ingress-nginx-controllerv1.1.0.tar.gz    100%  276MB  98.1MB/s   00:02
[root@k8smaster ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz k8snode1:/root
kube-webhook-certgen-v1.1.0.tar.gz       100%   47MB  93.3MB/s   00:00
[root@k8smaster ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz k8snode2:/root
kube-webhook-certgen-v1.1.0.tar.gz       100%   47MB  39.3MB/s   00:01

# 2. Load the images (on every node server)
[root@k8snode1 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
[root@k8snode1 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz
[root@k8snode2 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
[root@k8snode2 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz

[root@k8snode1 ~]# docker images
REPOSITORY                                                                     TAG      IMAGE ID       CREATED         SIZE
nginx                                                                          latest   605c77e624dd   17 months ago   141MB
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller   v1.1.0   ae1a7201ec95   19 months ago   285MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen       v1.1.1   c41e9fcadf5a   20 months ago   47.7MB

[root@k8snode2 ~]# docker images
REPOSITORY                                                                     TAG      IMAGE ID       CREATED         SIZE
nginx                                                                          latest   605c77e624dd   17 months ago   141MB
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller   v1.1.0   ae1a7201ec95   19 months ago   285MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen       v1.1.1   c41e9fcadf5a   20 months ago   47.7MB

# 3. Apply the yaml to create the ingress controller
[root@k8smaster ingress]# kubectl apply -f ingress-controller-deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

# 4. Check the namespace created for the ingress controller
[root@k8smaster ingress]# kubectl get ns
NAME              STATUS   AGE
default           Active   20h
ingress-nginx     Active   30s
kube-node-lease   Active   20h
kube-public       Active   20h
kube-system       Active   20h

# 5. Check the ingress controller's services
[root@k8smaster ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.105.213.95   <none>        80:31457/TCP,443:32569/TCP   64s
ingress-nginx-controller-admission   ClusterIP   10.98.225.196   <none>        443/TCP                      64s

# 6. Check the ingress controller's pods
[root@k8smaster ingress]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-9sg56        0/1     Completed   0          80s
ingress-nginx-admission-patch-8sctb         0/1     Completed   1          80s
ingress-nginx-controller-6c8ffbbfcf-bmdj9   1/1     Running     0          80s
ingress-nginx-controller-6c8ffbbfcf-j576v   1/1     Running     0          80s
# Major step 2: create the pods and the service exposing them
[root@k8smaster new]# cat sc-nginx-svc-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sc-nginx-deploy
  labels:
    app: sc-nginx-feng
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sc-nginx-feng
  template:
    metadata:
      labels:
        app: sc-nginx-feng
    spec:
      containers:
      - name: sc-nginx-feng
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sc-nginx-svc
  labels:
    app: sc-nginx-svc
spec:
  selector:
    app: sc-nginx-feng
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 80

[root@k8smaster new]# kubectl apply -f sc-nginx-svc-1.yaml
deployment.apps/sc-nginx-deploy created
service/sc-nginx-svc created

[root@k8smaster ingress]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
sc-nginx-deploy-7bb895f9f5-hmf2n   1/1     Running   0          7s
sc-nginx-deploy-7bb895f9f5-mczzg   1/1     Running   0          7s
sc-nginx-deploy-7bb895f9f5-zzndv   1/1     Running   0          7s

[root@k8smaster ingress]# kubectl get svc
NAME           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes     ClusterIP   10.96.0.1     <none>        443/TCP   20h
sc-nginx-svc   ClusterIP   10.96.76.55   <none>        80/TCP    26s

# Inspect the service details and check that the Endpoints show the pod IPs and ports as expected
[root@k8smaster ingress]# kubectl describe svc sc-nginx-svc
Name:              sc-nginx-svc
Namespace:         default
Labels:            app=sc-nginx-svc
Annotations:       <none>
Selector:          app=sc-nginx-feng
Type:              ClusterIP
IP Families:       <none>
IP:                10.96.76.55
IPs:               10.96.76.55
Port:              name-of-service-port  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.185.209:80,10.244.185.210:80,10.244.249.16:80
Session Affinity:  None
Events:            <none>

# Access the service's ClusterIP
[root@k8smaster ingress]# curl 10.96.76.55
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

# Major step 3: enable the Ingress that ties the ingress controller to the services
# Create a yaml file that defines the Ingress
[root@k8smaster ingress]# cat sc-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sc-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # annotation tying this Ingress to the ingress controller
spec:
  ingressClassName: nginx                # selects the ingress controller
  rules:
  - host: www.feng.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: sc-nginx-svc
            port:
              number: 80
  - host: www.zhang.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: sc-nginx-svc-2
            port:
              number: 80

[root@k8smaster ingress]# kubectl apply -f sc-ingress.yaml
ingress.networking.k8s.io/sc-ingress created

# Check the Ingress
[root@k8smaster ingress]# kubectl get ingress
NAME         CLASS   HOSTS                        ADDRESS                       PORTS   AGE
sc-ingress   nginx   www.feng.com,www.zhang.com   192.168.2.111,192.168.2.112   80      52s
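Before digging into the controller's nginx.conf, a quicker sanity check is to describe the Ingress and confirm that each host rule points at the intended backend Service (output omitted here):

kubectl describe ingress sc-ingress   # each host rule should list its backend Service and port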
# Major step 4: check that nginx.conf inside the ingress controller contains the rules from the Ingress
[root@k8smaster ingress]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-9sg56        0/1     Completed   0          6m53s
ingress-nginx-admission-patch-8sctb         0/1     Completed   1          6m53s
ingress-nginx-controller-6c8ffbbfcf-bmdj9   1/1     Running     0          6m53s
ingress-nginx-controller-6c8ffbbfcf-j576v   1/1     Running     0          6m53s

[root@k8smaster ingress]# kubectl exec -n ingress-nginx -it ingress-nginx-controller-6c8ffbbfcf-bmdj9 -- bash
bash-5.1$ cat nginx.conf | grep feng.com
        ## start server www.feng.com
                server_name www.feng.com ;
        ## end server www.feng.com
bash-5.1$ cat nginx.conf | grep zhang.com
        ## start server www.zhang.com
                server_name www.zhang.com ;
        ## end server www.zhang.com
bash-5.1$ cat nginx.conf | grep -C3 upstream_balancer
        error_log  /var/log/nginx/error.log notice;
        upstream upstream_balancer {
                server 0.0.0.1:1234;   # placeholder

# Get the ingress controller's service; it exposes ports on the hosts, and accessing a host on those
# ports verifies that the ingress controller really load-balances
[root@k8smaster ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.105.213.95   <none>        80:31457/TCP,443:32569/TCP   8m12s
ingress-nginx-controller-admission   ClusterIP   10.98.225.196   <none>        443/TCP                      8m12s

# Access by domain name from another host (or a Windows machine)
[root@zabbix ~]# vim /etc/hosts
[root@zabbix ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.111 www.feng.com
192.168.2.112 www.zhang.com

# Because the load balancing is configured per domain name, clients must use the domain name, not the IP address.
# The ingress controller load-balances at layer 7 of the HTTP protocol.

[root@zabbix ~]# curl www.feng.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

# Accessing www.zhang.com fails: the 503 comes from nginx itself, because the backend service
# sc-nginx-svc-2 has not been created yet
[root@zabbix ~]# curl www.zhang.com
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx</center>
</body>
</html>
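If you do not want to edit /etc/hosts on every client, the same host-based routing can be exercised by sending the Host header explicitly; the ingress controller routes on the HTTP Host header, not on the destination IP:

curl -H 'Host: www.feng.com' http://192.168.2.111/
curl -H 'Host: www.zhang.com' http://192.168.2.112/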
# Major step 5: start the second service and its pods, using PV + PVC + NFS
# The NFS server must be prepared in advance; create the PV and PVC first
[root@k8smaster pv]# pwd
/root/pv
[root@k8smaster pv]# ls
nfs-pvc.yml  nfs-pv.yml  nginx-deployment.yml

[root@k8smaster pv]# cat nfs-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:
    type: pv-web
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs     # storage class name used to match the PVC
  nfs:
    path: /web              # directory exported by the NFS server
    server: 192.168.2.121   # IP address of the NFS server
    readOnly: false         # access mode

[root@k8smaster pv]# kubectl apply -f nfs-pv.yml
[root@k8smaster pv]# kubectl apply -f nfs-pvc.yml

[root@k8smaster pv]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pv-web   10Gi       RWX            Retain           Bound    default/pvc-web   nfs                     19h
[root@k8smaster pv]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-web   Bound    pv-web   10Gi       RWX            nfs            19h

[root@k8smaster ingress]# cat nginx-deployment-nginx-svc-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sc-nginx-feng-2
  template:
    metadata:
      labels:
        app: sc-nginx-feng-2
    spec:
      volumes:
      - name: sc-pv-storage-nfs
        persistentVolumeClaim:
          claimName: pvc-web
      containers:
      - name: sc-pv-container-nfs
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: http-server
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: sc-pv-storage-nfs
---
apiVersion: v1
kind: Service
metadata:
  name: sc-nginx-svc-2
  labels:
    app: sc-nginx-svc-2
spec:
  selector:
    app: sc-nginx-feng-2
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 80

[root@k8smaster ingress]# kubectl apply -f nginx-deployment-nginx-svc-2.yaml
deployment.apps/nginx-deployment created
service/sc-nginx-svc-2 created

[root@k8smaster ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.105.213.95   <none>        80:31457/TCP,443:32569/TCP   24m
ingress-nginx-controller-admission   ClusterIP   10.98.225.196   <none>        443/TCP                      24m

[root@k8smaster ingress]# kubectl get ingress
NAME         CLASS   HOSTS                        ADDRESS                       PORTS   AGE
sc-ingress   nginx   www.feng.com,www.zhang.com   192.168.2.111,192.168.2.112   80      18m

# Access works either through the NodePort exposed on the hosts (80:31457 above) or directly on port 80.
# Exposing services through the ingress controller means clients are not forced onto ports above 30000;
# they can use 80 or 443 directly, which is an advantage over exposing each Service as a NodePort.

[root@zabbix ~]# curl www.zhang.com
welcome to changsha
hello,world
[root@zabbix ~]# curl www.feng.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
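The transcript applies nfs-pvc.yml in step 5 without showing its content. A minimal sketch of a PVC that would bind the pv-web volume above (the storageClassName and access mode must match the PV; the 10Gi request mirrors the PV's capacity):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 10Gi
EOF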
9. Use the dashboard to keep the whole cluster's resources under control

# 1. Download recommended.yaml
[root@k8smaster dashboard]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
--2023-06-19 10:18:50--  https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7621 (7.4K) [text/plain]
Saving to: 'recommended.yaml'

100%[===================================>] 7,621       --.-K/s   in 0s

2023-06-19 10:18:52 (23.6 MB/s) - 'recommended.yaml' saved [7621/7621]

[root@k8smaster dashboard]# ls
recommended.yaml

# 2. Apply it
[root@k8smaster dashboard]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

# 3. Check that the dashboard pods started
[root@k8smaster dashboard]# kubectl get ns
NAME                   STATUS   AGE
default                Active   18h
ingress-nginx          Active   13h
kube-node-lease        Active   18h
kube-public            Active   18h
kube-system            Active   18h
kubernetes-dashboard   Active   9s

# kubernetes-dashboard is the namespace created for the dashboard itself

[root@k8smaster dashboard]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-5b8896d7fc-6kjlr   1/1     Running   0          4m56s
kubernetes-dashboard-cb988587b-s2f6z         1/1     Running   0          4m57s

# 4. Check the dashboard's service. Its type is ClusterIP, which machines outside the cluster cannot
# reach, so it is inconvenient for browser access; change it to NodePort.
[root@k8smaster dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.110.32.41     <none>        8000/TCP   4m24s
kubernetes-dashboard        ClusterIP   10.106.104.124   <none>        443/TCP    4m24s

# 5. Delete the existing dashboard service
[root@k8smaster dashboard]# kubectl delete svc kubernetes-dashboard -n kubernetes-dashboard
service "kubernetes-dashboard" deleted
[root@k8smaster dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.110.32.41   <none>        8000/TCP   5m39s

# 6. Create a NodePort service in its place
[root@k8smaster dashboard]# vim dashboard-svc.yml
[root@k8smaster dashboard]# cat dashboard-svc.yml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

[root@k8smaster dashboard]# kubectl apply -f dashboard-svc.yml
service/kubernetes-dashboard created

[root@k8smaster dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.110.32.41     <none>        8000/TCP        8m11s
kubernetes-dashboard        NodePort    10.103.185.254   <none>        443:32571/TCP   37s

# 7. Accessing the dashboard requires permissions; create an admin service account and binding for it
[root@k8smaster dashboard]# vim dashboard-svc-account.yaml
[root@k8smaster dashboard]# cat dashboard-svc-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

[root@k8smaster dashboard]# kubectl apply -f dashboard-svc-account.yaml
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

# 8. Get the name of the dashboard's secret object
[root@k8smaster dashboard]# kubectl get secret -n kube-system | grep admin | awk '{print $1}'
dashboard-admin-token-hd2nl

[root@k8smaster dashboard]# kubectl describe secret dashboard-admin-token-hd2nl -n kube-system
Name:         dashboard-admin-token-hd2nl
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 4e42ca6a-e5eb-4672-bf3e-ae22935417ef

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InBBckJ2U051Y3J4NjVPY2VxOVZzRjBIdzdjNzgycFppcVZ5WWFnQlNsS00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4taGQybmwiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNGU0MmNhNmEtZTVlYi00NjcyLWJmM2UtYWUyMjkzNTQxN2VmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.EAVV-s6OnS4htu4kvv3UvlZpqzg5Ei1_tNiBLr08GquUxKX09JGvQhsZQYgluNmS2yqad_lxK_Ie_RgwayqfBdXYtugQPM8m9gZHScsUdo_3b8b4ZEUz7KlDzJVBdBvDFSJjz-7cJhtj-HtazRuLluJbeoQV4zXMXvfhDhYt0k126eiqKzvbHhJmNM8U5XViAUmpUPCUjqFHm8tS1Su7aW75R-qXH6aGjGOv7kTpQdOjFeVO-AbFRIcbDOcqYRrKMyZu0yuH9QZGL35L1Lj3HgePsDbwd3jm2ZS05BjuacSFGle6CdZTOB0b5haeUlFrZ6FWsU-2qoQ67ysOwB0xKQ

# 9. Extract the token from the secret --> think of the token as the login password
[root@k8smaster dashboard]# kubectl describe secret dashboard-admin-token-hd2nl -n kube-system | awk '/^token/ {print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6InBBckJ2U051Y3J4NjVPY2VxOVZzRjBIdzdjNzgycFppcVZ5WWFnQlNsS00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4taGQybmwiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNGU0MmNhNmEtZTVlYi00NjcyLWJmM2UtYWUyMjkzNTQxN2VmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.EAVV-s6OnS4htu4kvv3UvlZpqzg5Ei1_tNiBLr08GquUxKX09JGvQhsZQYgluNmS2yqad_lxK_Ie_RgwayqfBdXYtugQPM8m9gZHScsUdo_3b8b4ZEUz7KlDzJVBdBvDFSJjz-7cJhtj-HtazRuLluJbeoQV4zXMXvfhDhYt0k126eiqKzvbHhJmNM8U5XViAUmpUPCUjqFHm8tS1Su7aW75R-qXH6aGjGOv7kTpQdOjFeVO-AbFRIcbDOcqYRrKMyZu0yuH9QZGL35L1Lj3HgePsDbwd3jm2ZS05BjuacSFGle6CdZTOB0b5haeUlFrZ6FWsU-2qoQ67ysOwB0xKQ
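Steps 8 and 9 can be collapsed into a single pipeline so the secret name never has to be copied by hand (the same commands as above, just chained):

kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') \
  | awk '/^token/{print $2}'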
# 10. Open the dashboard in a browser
[root@k8smaster dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.110.32.41     <none>        8000/TCP        11m
kubernetes-dashboard        NodePort    10.103.185.254   <none>        443:32571/TCP   4m4s

# Use a host IP plus the NodePort
https://192.168.2.104:32571/#/login

# 11. Log in with the token obtained above (type "thisisunsafe" to bypass the browser's certificate warning).
https://192.168.2.104:32571/#/workloads?namespace=default

10. Install Zabbix and Prometheus to monitor the whole cluster's resources (CPU, memory, network bandwidth, web service, database service, disk I/O, etc.)

# Deploy Zabbix
# 1. Install the Zabbix server repository. A repo (software repository) tells yum where the packages
# provided by the official Zabbix site can be downloaded.
[root@zabbix ~]# rpm -Uvh https://repo.zabbix.com/zabbix/5.0/rhel/7/x86_64/zabbix-release-5.0-1.el7.noarch.rpm
Retrieving https://repo.zabbix.com/zabbix/5.0/rhel/7/x86_64/zabbix-release-5.0-1.el7.noarch.rpm
warning: /var/tmp/rpm-tmp.lL96Rw: Header V4 RSA/SHA512 Signature, key ID a14fe591: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:zabbix-release-5.0-1.el7         ################################# [100%]

[root@zabbix ~]# cd /etc/yum.repos.d/
[root@zabbix yum.repos.d]# ls
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Media.repo    CentOS-Vault.repo          zabbix.repo
CentOS-CR.repo    CentOS-fasttrack.repo  CentOS-Sources.repo  CentOS-x86_64-kernel.repo

CentOS-Base.repo   repo file pointing at the official CentOS download locations (Base holds the core CentOS packages)
zabbix.repo        repo file pointing at the official Zabbix download location

[root@zabbix yum.repos.d]# cat zabbix.repo
[zabbix]                                                       # name of the repo
name=Zabbix Official Repository - $basearch                    # description of the repo
baseurl=http://repo.zabbix.com/zabbix/5.0/rhel/7/$basearch/    # location of the repo
enabled=1                                                      # this repo is enabled
gpgcheck=1                                                     # yum verifies the GPG signature of downloaded packages, guarding against tampered software
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591     # the key used for verification

# 2. Install the Zabbix packages
[root@zabbix yum.repos.d]# yum install zabbix-server-mysql zabbix-agent -y

zabbix-server-mysql   the Zabbix server with MySQL connectivity
zabbix-agent          the Zabbix agent

# 3. Install the Zabbix frontend
[root@zabbix yum.repos.d]# yum install centos-release-scl -y
# Edit the repo file and enable the frontend repo
[root@zabbix yum.repos.d]# vim zabbix.repo
[zabbix-frontend]
name=Zabbix Official Repository frontend - $basearch
baseurl=http://repo.zabbix.com/zabbix/5.0/rhel/7/$basearch/frontend
enabled=1       # change to 1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591

# Install the web packages
[root@zabbix yum.repos.d]# yum install zabbix-web-mysql-scl zabbix-nginx-conf-scl -y

# 4. Install the MariaDB database
[root@zabbix yum.repos.d]# yum install mariadb mariadb-server -y
mariadb-server   the server-side package
mariadb          the package providing the client commands

# Note: on a CentOS system where MySQL is already installed, MariaDB is not needed

[root@zabbix yum.repos.d]# service mariadb start       # start mariadb
Redirecting to /bin/systemctl start mariadb.service
[root@zabbix yum.repos.d]# systemctl enable mariadb    # start mariadb at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.

# Check that the mysqld process is running
[root@zabbix yum.repos.d]# ps aux | grep mysqld
mysql     11940  0.1  0.0 113412  1596 ?   Ss   15:09   0:00 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
mysql     12105  1.1  4.3 968920 80820 ?   Sl   15:09   0:00 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/mariadb.log --pid-file=/var/run/mariadb/mariadb.pid --socket=/var/lib/mysql/mysql.sock
root      12159  0.0  0.0 112824   980 pts/0   S+  15:09   0:00 grep --color=auto mysqld

[root@zabbix yum.repos.d]# netstat -anplut | grep 3306
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      12105/mysqld

# 5. Run the following on the database host
[root@zabbix yum.repos.d]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 5.5.68-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help.
Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.01 sec)

MariaDB [(none)]> create database zabbix character set utf8 collate utf8_bin;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> create user zabbix@localhost identified by 'sc123456';    # create user zabbix@localhost with password sc123456
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all privileges on zabbix.* to zabbix@localhost;     # grant zabbix@localhost all privileges (insert, delete, update, select, ...) on the tables in the zabbix database
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> set global log_bin_trust_function_creators = 1;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye

# Import the initial data; this creates many tables in the zabbix database
[root@zabbix yum.repos.d]# cd /usr/share/doc/zabbix-server-mysql-5.0.35/
[root@zabbix zabbix-server-mysql-5.0.35]# ls
AUTHORS  ChangeLog  COPYING  create.sql.gz  double.sql  NEWS  README

[root@zabbix zabbix-server-mysql-5.0.33]# zcat create.sql.gz | mysql -uzabbix -psc123456 zabbix

[root@zabbix zabbix-server-mysql-5.0.33]# mysql -uzabbix -psc123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 5.5.68-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| test               |
| zabbix             |
+--------------------+
3 rows in set (0.00 sec)

MariaDB [(none)]> use zabbix;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [zabbix]> show tables;
+----------------------------+
| Tables_in_zabbix           |
+----------------------------+
| acknowledges               |
| actions                    |
| alerts                     |
| application_discovery      |
| application_prototype      |

# After importing the schema, disable the log_bin_trust_function_creators option
[root@zabbix zabbix-server-mysql-5.0.33]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 5
Server version: 5.5.68-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help.
Type '\c' to clear the current input statement.

MariaDB [(none)]> set global log_bin_trust_function_creators = 0;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye

# 6. Configure the database for the Zabbix server
# Edit /etc/zabbix/zabbix_server.conf
[root@zabbix zabbix-server-mysql-5.0.33]# cd /etc/zabbix/
[root@zabbix zabbix]# vim zabbix_server.conf
# DBPassword=
DBPassword=sc123456

# 7. Configure PHP for the Zabbix frontend
# Edit /etc/opt/rh/rh-nginx116/nginx/conf.d/zabbix.conf and uncomment:
[root@zabbix conf.d]# cd /etc/opt/rh/rh-nginx116/nginx/conf.d/
[root@zabbix conf.d]# ls
zabbix.conf
[root@zabbix conf.d]# vim zabbix.conf
server {
        listen          8080;
        server_name     zabbix.com;

# Edit /etc/opt/rh/rh-nginx116/nginx/nginx.conf
[root@zabbix conf.d]# cd /etc/opt/rh/rh-nginx116/nginx/
[root@zabbix nginx]# vim nginx.conf
    server {
        listen       80 default_server;       # change 80 to 8080
        listen       [::]:80 default_server;
# This keeps zabbix and nginx from listening on the same port, which would prevent zabbix from starting.

# Edit /etc/opt/rh/rh-php72/php-fpm.d/zabbix.conf
[root@zabbix nginx]# cd /etc/opt/rh/rh-php72/php-fpm.d
[root@zabbix php-fpm.d]# ls
www.conf  zabbix.conf

[root@zabbix php-fpm.d]# vim zabbix.conf
listen.acl_users = apache,nginx
php_value[date.timezone] = Asia/Shanghai

# Make sure SELinux is disabled, otherwise zabbix_server will not start

# 8. Start the Zabbix server and agent processes and enable them at boot
[root@zabbix php-fpm.d]# systemctl restart zabbix-server zabbix-agent rh-nginx116-nginx rh-php72-php-fpm
[root@zabbix php-fpm.d]# systemctl enable zabbix-server zabbix-agent rh-nginx116-nginx rh-php72-php-fpm
Created symlink from /etc/systemd/system/multi-user.target.wants/zabbix-server.service to /usr/lib/systemd/system/zabbix-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/zabbix-agent.service to /usr/lib/systemd/system/zabbix-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/rh-nginx116-nginx.service to /usr/lib/systemd/system/rh-nginx116-nginx.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/rh-php72-php-fpm.service to /usr/lib/systemd/system/rh-php72-php-fpm.service.

# 9. Open the frontend in a browser
http://192.168.2.117:8080

# Default login:
username: Admin
password: zabbix

# Monitor Kubernetes with Prometheus
# 1. Pull the images on all nodes in advance
docker pull prom/node-exporter
docker pull prom/prometheus:v2.0.0
docker pull grafana/grafana:6.1.4

[root@k8smaster ~]# docker images
REPOSITORY           TAG      IMAGE ID       CREATED         SIZE
prom/node-exporter   latest   1dbe0e931976   18 months ago   20.9MB
grafana/grafana      6.1.4    d9bdb6044027   4 years ago     245MB
prom/prometheus      v2.0.0   67141fa03496   5 years ago     80.2MB

[root@k8snode1 ~]# docker images
REPOSITORY           TAG      IMAGE ID       CREATED         SIZE
prom/node-exporter   latest   1dbe0e931976   18 months ago   20.9MB
grafana/grafana      6.1.4    d9bdb6044027   4 years ago     245MB
prom/prometheus      v2.0.0   67141fa03496   5 years ago     80.2MB

[root@k8snode2 ~]# docker images
REPOSITORY           TAG      IMAGE ID       CREATED         SIZE
prom/node-exporter   latest   1dbe0e931976   18 months ago   20.9MB
grafana/grafana      6.1.4    d9bdb6044027   4 years ago     245MB
prom/prometheus      v2.0.0   67141fa03496   5 years ago     80.2MB

# 2. Deploy node-exporter as a DaemonSet
[root@k8smaster prometheus]# ll
total 36
-rw-r--r-- 1 root root 5632 Jun 25 16:23 configmap.yaml
-rw-r--r-- 1 root root 1515 Jun 25 16:26 grafana-deploy.yaml
-rw-r--r-- 1 root root  256 Jun 25 16:27 grafana-ing.yaml
-rw-r--r-- 1 root root  225 Jun 25 16:27 grafana-svc.yaml
-rw-r--r-- 1 root root  716 Jun 25 16:22 node-exporter.yaml
-rw-r--r-- 1 root root 1104 Jun 25 16:25 prometheus.deploy.yml
-rw-r--r-- 1 root root  233 Jun 25 16:25 prometheus.svc.yml
-rw-r--r-- 1 root root  716 Jun 25 16:23 rbac-setup.yaml

[root@k8smaster prometheus]# cat node-exporter.yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter

[root@k8smaster prometheus]# kubectl apply -f node-exporter.yaml
daemonset.apps/node-exporter created
service/node-exporter created

# Only the two worker nodes run the exporter; the master carries the NoSchedule taint
[root@k8smaster prometheus]# kubectl get pods -A
NAMESPACE     NAME                  READY   STATUS    RESTARTS   AGE
kube-system   node-exporter-fcmx5   1/1     Running   0          47s
kube-system   node-exporter-qccwb   1/1     Running   0          47s

[root@k8smaster prometheus]# kubectl get daemonset -A
NAMESPACE     NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   calico-node     3         3         3       3            3           kubernetes.io/os=linux   7d
kube-system   kube-proxy      3         3         3       3            3           kubernetes.io/os=linux   7d
kube-system   node-exporter   2         2         2       2            2           <none>                   2m29s

[root@k8smaster prometheus]# kubectl get service -A
NAMESPACE     NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kube-system   node-exporter   NodePort   10.111.247.142   <none>        9100:31672/TCP   3m24s
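With the DaemonSet running, every node serves the exporter metrics through the NodePort (kube-proxy answers on any node IP). A quick check from any machine that can reach a node (IP and port 31672 come from the Service above):

curl -s http://192.168.2.104:31672/metrics | grep -m 3 '^node_cpu'   # print the first few CPU metrics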
# 3. Deploy Prometheus
[root@k8smaster prometheus]# cat rbac-setup.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system

[root@k8smaster prometheus]# kubectl apply -f rbac-setup.yaml
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created

[root@k8smaster prometheus]# cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:

    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https

    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics

    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name

    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name

    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name

[root@k8smaster prometheus]# kubectl apply -f configmap.yaml
configmap/prometheus-config created
[root@k8smaster prometheus]# cat prometheus.deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config

[root@k8smaster prometheus]# kubectl apply -f prometheus.deploy.yml
deployment.apps/prometheus created

[root@k8smaster prometheus]# cat prometheus.svc.yml
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus

[root@k8smaster prometheus]# kubectl apply -f prometheus.svc.yml
service/prometheus created

# 4. Deploy Grafana
[root@k8smaster prometheus]# cat grafana-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:6.1.4
        name: grafana-core
        imagePullPolicy: IfNotPresent
        # env:
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
          # The following env variables set up basic auth with the default admin user and admin password.
          - name: GF_AUTH_BASIC_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "false"
          # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          #   value: Admin
          # does not really work, because of template variables in exported dashboards:
          # - name: GF_DASHBOARDS_JSON_ENABLED
          #   value: "true"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
        # initialDelaySeconds: 30
        # timeoutSeconds: 1
        #volumeMounts:            # no volume mount for now
        #- name: grafana-persistent-storage
        #  mountPath: /var
      #volumes:
      #- name: grafana-persistent-storage
      #  emptyDir: {}

[root@k8smaster prometheus]# kubectl apply -f grafana-deploy.yaml
deployment.apps/grafana-core created

[root@k8smaster prometheus]# cat grafana-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  type: NodePort
  ports:
  - port: 3000
  selector:
    app: grafana
    component: core

[root@k8smaster prometheus]# kubectl apply -f grafana-svc.yaml
service/grafana created

[root@k8smaster prometheus]# cat grafana-ing.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: kube-system
spec:
  rules:
  - host: k8s.grafana
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000

[root@k8smaster prometheus]# kubectl apply -f grafana-ing.yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.extensions/grafana created
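Once the service is up, Prometheus can be queried through its HTTP API as well as the web UI. A sketch against the cAdvisor metrics collected by the kubernetes-cadvisor job above (NodePort 30003 comes from prometheus.svc.yml; exact label names depend on the relabel rules in the ConfigMap):

# Per-node container CPU usage rate over the last 5 minutes
curl -sG 'http://192.168.2.104:30003/api/v1/query' \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total[5m])) by (instance)'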
# 5. Check and test
[root@k8smaster prometheus]# kubectl get pods -A
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   grafana-core-78958d6d67-49c56   1/1     Running   0          31m
kube-system   node-exporter-fcmx5             1/1     Running   0          9m33s
kube-system   node-exporter-qccwb             1/1     Running   0          9m33s
kube-system   prometheus-68546b8d9-qxsm7      1/1     Running   0          2m47s

[root@k8smaster mysql]# kubectl get svc -A
NAMESPACE     NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kube-system   grafana         NodePort   10.110.87.158    <none>        3000:31267/TCP   31m
kube-system   node-exporter   NodePort   10.111.247.142   <none>        9100:31672/TCP   39m
kube-system   prometheus      NodePort   10.102.0.186     <none>        9090:30003/TCP   32m

# Access
# Metrics collected by node-exporter:
http://192.168.2.104:31672/metrics

# The Prometheus UI:
http://192.168.2.104:30003

# The Grafana UI:
http://192.168.2.104:31267
# account: admin, password: *******

11. Stress-test the whole k8s cluster and the related servers with ab

# 1. Run the php-apache server and expose it as a service
[root@k8smaster hpa]# ls
php-apache.yaml
[root@k8smaster hpa]# cat php-apache.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - name: php-apache
        image: k8s.gcr.io/hpa-example
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache

[root@k8smaster hpa]# kubectl apply -f php-apache.yaml
deployment.apps/php-apache created
service/php-apache created
[root@k8smaster hpa]# kubectl get deploy
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
php-apache   1/1     1            1           93s
[root@k8smaster hpa]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
php-apache-567d9f79d-mhfsp   1/1     Running   0          44s

# Create the HPA
[root@k8smaster hpa]# kubectl autoscale deployment php-apache --cpu-percent=10 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/10%   1         10        0          7s
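kubectl autoscale creates the HPA imperatively; to keep it in version control, the equivalent declarative object can be applied instead (a sketch; autoscaling/v1 is the HPA API a 1.20 cluster serves by default):

cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 10
EOF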
# Test: generate load
[root@k8smaster hpa]# kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
If you don't see a command prompt, try pressing enter.
OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK

[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/10%     1         10        1          3m24s
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   238%/10%   1         10        1          3m41s
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   250%/10%   1         10        4          3m57s

# Once CPU utilization drops back to 0, the HPA automatically scales the replicas down to 1.
# The replica change may take a few minutes to complete.

# 2. Stress-test the web service and watch Prometheus and the dashboard
# ab hits the web service at 192.168.2.112:30001; meanwhile watch the pods in Prometheus and the dashboard
# Four ways to observe:
kubectl top pod
http://192.168.2.117:3000/
http://192.168.2.117:9090/targets
https://192.168.2.104:32571/

[root@nfs ~]# yum install httpd-tools -y
[root@nfs data]# ab -n 1000000 -c 10000 -g output.dat http://192.168.2.112:30001/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.2.112 (be patient)
apr_socket_recv: Connection reset by peer (104)
Total of 3694 requests completed

# 1000 requests with a concurrency of 10
ab -n 1000 -c 10 -g output.dat http://192.168.2.112:30001/

# -t 60: send as many requests as possible within 60 seconds
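The apr_socket_recv error aborts the whole run at the first reset connection. For long runs against a NodePort service, ab can be made more tolerant (a sketch: -r keeps going on socket receive errors, -k reuses connections with HTTP keep-alive):

ab -r -k -n 100000 -c 1000 http://192.168.2.112:30001/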