Kubernetes is an open-source container orchestration system backed by Google. Its goal is to make deploying containerized applications simpler and more efficient, and it provides a full set of mechanisms for deploying, scheduling, updating, and maintaining applications; many large companies run it in production. Kubernetes is also known as k8s (the abbreviation used throughout this article). Let's dive into the world of k8s! Its main features:
- Automatic scheduling: containers are placed onto nodes according to the resource requirements their applications declare for the runtime environment.
- Self-healing: when a container fails, it is restarted; when the node it runs on has problems, its containers are redeployed and rescheduled elsewhere; when a container fails its health check, it is shut down, and it does not receive traffic until it is running normally again.
- Horizontal scaling: when a flood of requests arrives, you can raise the replica count to scale out (see the kubectl sketch after this list).
- Service discovery and load balancing: Kubernetes provides both natively, so users do not need any extra service-discovery mechanism.
- Rolling updates: as an application changes, its containers can be updated one at a time or in batches; a newly added instance is not put into service immediately, but only after it has been checked to be working properly.
- Version rollback: based on the deployment history, a running application can be instantly reverted to an earlier version, much like a rollback in release management.
- Secret and configuration management: secrets and application configuration can be deployed and updated without rebuilding images, similar to hot deployment.
- Storage orchestration: storage is mounted and attached automatically, which is especially important for persisting the data of stateful applications; the storage can come from a local directory, network storage (NFS, Gluster, Ceph, etc.), or a public cloud storage service.
- Batch execution: one-off and scheduled (cron-style) tasks are supported, covering batch data processing and analytics scenarios.
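Several of the features above map directly onto kubectl commands. A minimal sketch, assuming a deployment named `web` whose container is also named `web` (the names and images are placeholders, not from this tutorial):

```bash
kubectl scale deployment web --replicas=5              # horizontal scaling: raise the replica count
kubectl set image deployment/web web=nginx:1.19        # rolling update: move to a new image, instance by instance
kubectl rollout status deployment/web                  # watch the rollout; new pods serve only once ready
kubectl rollout undo deployment/web                    # version rollback: revert to the previous revision
kubectl create job hello --image=busybox -- echo hi    # batch execution: a one-off job
```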
A k8s cluster consists mainly of master nodes (the control plane) and worker nodes, as the figures below show.
Figure: single-master cluster
Figure: multi-master cluster
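Once the cluster built below is up, the two roles are easy to see from kubectl; a quick sketch (standard kubectl commands, run from the master):

```bash
kubectl get nodes -o wide                 # lists the master and worker nodes with their roles
kubectl get pods -n kube-system -o wide   # control-plane components (apiserver, scheduler, etcd, ...) run on the master
```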
The deployment plan for this walkthrough:

| Role | IP |
|---|---|
| k8s_master | 192.168.10.100 |
| k8s_node1 | 192.168.10.102 |
| k8s_node2 | 192.168.10.103 |
| k8s_node4 | 192.168.10.104 |
Run the following initialization on every machine (master and all workers):

```bash
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i "s/enforcing/disabled/" /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri "s/.*swap.*/#&/" /etc/fstab  # permanent

# Set the hostname planned for this machine (e.g. k8smaster, k8snode1, ...)
hostnamectl set-hostname <hostname>

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.10.100 k8smaster
192.168.10.102 k8snode1
192.168.10.103 k8snode2
192.168.10.104 k8snode4
EOF

# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply

# Synchronize the clock
yum install ntpdate -y
ntpdate time.windows.com
```
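Before moving on, it is worth checking that the preparation actually took effect on each node; a small verification sketch (my addition, not part of the original steps):

```bash
free -h | grep -i swap                       # the Swap line should show 0 after swapoff
getenforce                                   # should print Permissive (or Disabled after a reboot)
sysctl net.bridge.bridge-nf-call-iptables    # should print 1
```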
Kubernetes uses Docker as its default CRI (container runtime), so install Docker first.
```bash
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo  # download the Docker CE repo file
yum -y install docker-ce-18.06.1.ce-3.el7          # install Docker
systemctl enable docker && systemctl start docker  # start Docker and enable it at boot
docker --version                                   # verify the installation
```
```bash
# Configure a Docker registry mirror
$ cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
```
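The new daemon.json is only picked up after Docker restarts; restarting and checking `docker info` confirms the mirror is active (this verification step is my addition, not from the original text):

```bash
systemctl restart docker                      # reload daemon.json
docker info | grep -A 1 "Registry Mirrors"    # the Aliyun mirror should be listed
```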
Add the Kubernetes yum repository (here pointing at the Aliyun mirror):

```bash
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Because Kubernetes releases new versions frequently, pin the version explicitly when installing:
```bash
$ yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
$ systemctl enable kubelet
```
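A quick sanity check that the pinned versions actually landed (my addition):

```bash
kubeadm version -o short   # expect v1.18.0
kubelet --version          # expect Kubernetes v1.18.0
```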
Run the following on 192.168.10.100 (the master) only.
```bash
# --apiserver-advertise-address is the API server's advertise IP, i.e. the master's own IP
$ kubeadm init \
  --apiserver-advertise-address=192.168.10.100 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
```
If swap has not been disabled, kubeadm init aborts with a preflight error complaining about swap.
A successful run produces output like the following:

```
[root@hadoop100 yum.repos.d]# swapoff -a
[root@hadoop100 yum.repos.d]# sed -ri "s/.*swap.*/#&/" /etc/fstab
[root@hadoop100 yum.repos.d]# kubeadm init --apiserver-advertise-address=192.168.10.100 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
W0526 14:35:38.129635   19861 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using "kubeadm config images pull"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0526 14:36:47.126661   19861 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0526 14:36:47.128024   19861 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 25.002851 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8smaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: yns0bk.uts2jsm1unmvcbdp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.100:6443 --token yns0bk.uts2jsm1unmvcbdp \
    --discovery-token-ca-cert-hash sha256:4d76fcbd7aa9bc0aa9165010c0203a18864a417ee6c9ad8e9add835832475743
```
Following the instructions in that output, configure kubectl access on the master:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
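At this point kubectl can already talk to the cluster. Only the master is listed, and it stays NotReady until the network plugin is installed later on:

```bash
kubectl get nodes   # expect something like: k8smaster   NotReady   master   ...   v1.18.0
```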
Following the prompt from the master node, run the join command on each worker node (192.168.10.102, 192.168.10.103, 192.168.10.104):
```bash
kubeadm join 192.168.10.100:6443 --token yns0bk.uts2jsm1unmvcbdp \
    --discovery-token-ca-cert-hash sha256:4d76fcbd7aa9bc0aa9165010c0203a18864a417ee6c9ad8e9add835832475743
```
Joining a node uses a token that kubeadm created during init. The token is valid for 24 hours by default; once it expires it can no longer be used, and you need to create a new one:
```bash
kubeadm token create --print-join-command  # prints a fresh kubeadm join command
```
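You can also inspect the existing tokens and their remaining lifetime before creating a new one:

```bash
kubeadm token list   # shows each token with its TTL and expiry time
```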
After all the worker nodes have joined, run `kubectl get nodes`: every node shows as NotReady. That is because the cluster has no pod network yet; we still need to install a network plugin. There is a small pitfall here: the manifest URL below cannot be reached directly.
```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
The command failed with an error. At first I assumed it was a transient network problem and retried several times, still without success; thanks to the write-up "k8s构建Flannel网络插件失败", the problem was eventually solved. Note: this has to be done on all nodes, and it is a bit slow. To watch progress you can run:
```bash
kubectl get pods -n kube-system  # check the status of all system pods
kubectl get nodes                # check whether every node has turned Ready
```
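If raw.githubusercontent.com stays unreachable, one workaround (sketched here under the assumption that you fetched kube-flannel.yml through some reachable mirror and copied it to the master) is to apply the manifest from a local file:

```bash
# Assumption: kube-flannel.yml was downloaded via an accessible mirror
kubectl apply -f kube-flannel.yml
kubectl get pods -n kube-system -w   # wait until the kube-flannel-ds pods are Running
```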
Create a pod in the Kubernetes cluster to verify that everything is working:
```bash
$ kubectl create deployment nginx --image=nginx              # pull the nginx image and deploy it
$ kubectl expose deployment nginx --port=80 --type=NodePort  # expose port 80 through a NodePort service
$ kubectl get pod,svc                                        # check the pod and the service (note the NodePort)
```
Access the service at http://NodeIP:Port, in this case http://192.168.10.103:30636.
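The NodePort (30636 here) is assigned by Kubernetes; read the actual value from `kubectl get svc nginx`, after which the page can be fetched from any node:

```bash
curl http://192.168.10.103:30636   # should return the nginx welcome page
```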