Kubernetes (K8S) Enterprise Container Cloud Platform: Getting Started and Advanced Hands-On

Author: 风哥  Category: Kubernetes  Published: 2019-01-25 17:03

Introduction to Kubernetes

Self-healing

When a node fails, failed containers are restarted, replaced, and redeployed so that the expected number of replicas is maintained. Containers that fail their health checks are killed, and a container does not receive client requests until it is ready, ensuring that online services are never interrupted.

Elastic scaling

Application instances can be scaled up and down quickly via commands, the UI, or automatically based on CPU usage, ensuring high availability during peak business load; during off-peak hours resources are reclaimed so services run at minimal cost.
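For example, a Deployment can be scaled manually or automatically with standard kubectl commands (a minimal sketch; the Deployment name nginx and the thresholds are examples, and CPU-based autoscaling also requires cluster metrics to be available):

# Scale an existing Deployment named "nginx" to 5 replicas
kubectl scale deployment nginx --replicas=5

# Or let K8S scale it automatically based on CPU usage
kubectl autoscale deployment nginx --min=2 --max=10 --cpu-percent=80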

Automated deployment and rollback

K8S updates applications with a rolling-update strategy, replacing one Pod at a time instead of deleting all Pods at once; if a problem occurs during the update, the change is rolled back so the upgrade does not impact the business.
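As a quick illustration (assuming a Deployment named nginx already exists; the image tag is just an example), a rolling update and a rollback look like this:

# Update the image; Pods are replaced gradually rather than all at once
kubectl set image deployment/nginx nginx=nginx:1.15

# Watch the rollout progress
kubectl rollout status deployment/nginx

# Roll back to the previous revision if the update misbehaves
kubectl rollout undo deployment/nginx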

Service discovery and load balancing

K8S provides a single, unified access point for a group of containers (an internal IP address and a DNS name) and load-balances across all associated containers, so users never have to worry about individual container IPs.
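A sketch of what this looks like in practice (assuming a Deployment named web already exists):

# Create a Service in front of the Deployment; it receives a stable ClusterIP and DNS name
kubectl expose deployment web --port=80 --target-port=80

# Clients use the Service IP/name; kube-proxy balances traffic across the backing Pods
kubectl get svc web
kubectl get endpoints web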

Secret and configuration management

Manages secrets and application configuration without exposing sensitive data inside images, which improves the security of sensitive data. Commonly used configuration can also be stored in K8S for applications to consume.
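For instance (a minimal sketch; the names and values below are made up for illustration):

# Keep sensitive data out of the image by storing it as a Secret
kubectl create secret generic db-pass --from-literal=password='S3cret!'

# Store ordinary configuration as a ConfigMap for applications to consume
kubectl create configmap app-config --from-literal=log_level=debug

# Both can later be mounted into Pods as environment variables or files
kubectl get secret db-pass -o yaml
kubectl get configmap app-config -o yaml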

Storage orchestration

External storage systems can be mounted and used as part of the cluster's resources, whether local storage, public cloud storage (e.g. AWS), or network storage (e.g. NFS, GlusterFS, Ceph), which greatly improves storage flexibility.
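A minimal sketch of mounting network storage into a Pod (the NFS server address and export path below are placeholders, not part of this environment):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: web-with-nfs
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    nfs:
      server: 192.168.1.200    # placeholder NFS server
      path: /data/nfs
EOF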

Batch processing

Provides one-off tasks and scheduled tasks, covering batch data processing and analysis scenarios.
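A one-off task can be expressed as a Job (a sketch; the busybox image and the echo command are just examples):

cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-batch
spec:
  template:
    spec:
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo processing batch data && sleep 5"]
      restartPolicy: Never
EOF

kubectl get jobs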

Kubernetes Cluster Architecture and Components

Master components

kube-apiserver

The Kubernetes API is the unified entry point to the cluster and the coordinator of all components. It exposes its interface as a RESTful API; all create/update/delete/query and watch operations on object resources go through the APIServer, which then persists them to etcd.

kube-controller-manager

Handles routine background tasks in the cluster. Each resource type has its own controller, and the ControllerManager is responsible for managing these controllers.

kube-scheduler

Selects a Node for newly created Pods according to the scheduling algorithm; Pods can be placed anywhere, on the same node or on different nodes.

etcd

A distributed key-value store used to persist cluster state, such as Pod and Service object data.

Node components

kubelet

The kubelet is the Master's agent on each Node. It manages the lifecycle of containers running on the local machine: creating containers, mounting volumes for Pods, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.

kube-proxy

Implements the Pod network proxy on each Node, maintaining network rules and performing layer-4 load balancing.

docker or rocket (rkt)

The container engine that actually runs the containers.

 

Kubernetes Core Concepts

Pod

• The smallest deployable unit

• A group of one or more containers

• Containers within a Pod share the same network namespace

• Pods are ephemeral

Controllers

• ReplicaSet: ensures the expected number of Pod replicas

• Deployment: stateless application deployment

• StatefulSet: stateful application deployment

• DaemonSet: ensures that every Node runs a copy of the same Pod

• Job: one-off tasks

• Cronjob: scheduled tasks

These are higher-level objects that deploy and manage Pods; a minimal example follows below.
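A minimal Deployment sketch tying these concepts together (the names are examples):

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # the ReplicaSet created by this Deployment keeps 3 Pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
EOF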

Service

• Prevents losing track of Pods (provides a stable access endpoint)

• Defines the access policy for a group of Pods

 

 

Label: a tag attached to a resource, used to associate, query, and filter objects
Namespaces: logically isolate objects from one another
Annotations: free-form notes attached to objects (a few example commands follow below)
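A few illustrative commands (the Pod name nginx-xxxxx is a placeholder):

# Attach a label and filter by it
kubectl label pod nginx-xxxxx env=prod
kubectl get pods -l env=prod

# Isolate objects logically in their own namespace
kubectl create namespace dev
kubectl get pods -n dev

# Attach a free-form annotation
kubectl annotate pod nginx-xxxxx owner=ops-team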

 

Kubernetes Cluster Deployment

The three officially provided deployment methods

minikube

Minikube is a tool that quickly runs a single-node Kubernetes locally; it is intended only for users who want to try out Kubernetes or use it for day-to-day development.

Deployment guide: https://kubernetes.io/docs/setup/minikube/

kubeadm

Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster.

Deployment guide: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

 

Binary packages

Recommended: download the official release binaries and deploy every component by hand to build up the Kubernetes cluster.

Download: https://github.com/kubernetes/kubernetes/releases

 

Kubernetes Platform Environment Planning

Single-Master cluster architecture diagram

 

Self-signed SSL certificates

  1. On master01 (192.168.1.115), use the SSL generation toolchain (cfssl) to generate the certificates automatically
< 4  master01 - [root]: ~/k8s > # vim cfssl.sh

curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
< 6  master01 - [root]: ~/k8s > # bash cfssl.sh
< 7  master01 - [root]: ~/k8s > # cfssl
No command is given.
Usage:
Available commands:
  serve
  genkey
  ocsprefresh
  bundle
  ocspdump
  ocspserve
  selfsign
  info
  certinfo
  gencert
  print-defaults
  revoke
  sign
  version
  gencrl
  ocspsign
  scan
Top-level flags:
  -allow_verification_with_non_compliant_keys
    	Allow a SignatureVerifier to use keys which are technically non-compliant with RFC6962.
  -loglevel int
    	Log level (0 = DEBUG, 5 = FATAL) (default 1)

 

2. Generate certificates for etcd

< 9  master01 - [root]: ~/k8s > # mkdir etcd-cert
< 10  master01 - [root]: ~/k8s > # cd etcd-cert/
< 11  master01 - [root]: ~/k8s/etcd-cert > # vim cert.sh

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "HangZhou",
            "ST": "BinJiang"
        }
    ]
}
EOF

# Initialize the CA
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

# Create the CSR for the etcd server certificate
cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.1.115",
    "192.168.1.116",
    "192.168.1.118"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "HangZhou",
            "ST": "BinJiang"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

The three IP addresses here must be changed to your etcd service addresses; otherwise the other nodes will fail HTTPS certificate trust verification.
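To double-check that the generated server certificate really contains these addresses, the SANs can be inspected (cfssl-certinfo was installed earlier; the openssl variant is an alternative):

cfssl-certinfo -cert server.pem | grep -A 5 '"sans"'
openssl x509 -in server.pem -noout -text | grep -A 1 'Subject Alternative Name'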

< 14  master01 - [root]: ~/k8s/etcd-cert > # bash cert.sh
2019/01/24 16:03:59 [INFO] generating a new CA key and certificate from CSR
2019/01/24 16:03:59 [INFO] generate received request
2019/01/24 16:03:59 [INFO] received CSR
2019/01/24 16:03:59 [INFO] generating key: rsa-2048
2019/01/24 16:04:00 [INFO] encoded CSR
2019/01/24 16:04:00 [INFO] signed certificate with serial number 132134069880841963326739843295521449247072978547
2019/01/24 16:04:00 [INFO] generate received request
2019/01/24 16:04:00 [INFO] received CSR
2019/01/24 16:04:00 [INFO] generating key: rsa-2048
2019/01/24 16:04:00 [INFO] encoded CSR
2019/01/24 16:04:00 [INFO] signed certificate with serial number 74636142712739960170978723860956532780160150856
2019/01/24 16:04:00 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").


< 15  master01 - [root]: ~/k8s/etcd-cert > # ll
total 40
-rw-r--r-- 1 root root  287 2019-01-24 16:03 ca-config.json
-rw-r--r-- 1 root root  956 2019-01-24 16:04 ca.csr
-rw-r--r-- 1 root root  211 2019-01-24 16:03 ca-csr.json
-rw------- 1 root root 1679 2019-01-24 16:04 ca-key.pem
-rw-r--r-- 1 root root 1273 2019-01-24 16:04 ca.pem
-rw-r--r-- 1 root root 1125 2019-01-24 16:03 cert.sh
-rw-r--r-- 1 root root 1017 2019-01-24 16:04 server.csr
-rw-r--r-- 1 root root  292 2019-01-24 16:04 server-csr.json
-rw------- 1 root root 1675 2019-01-24 16:04 server-key.pem
-rw-r--r-- 1 root root 1342 2019-01-24 16:04 server.pem
< 16  master01 - [root]: ~/k8s/etcd-cert > #

 

Etcd Database Cluster Deployment

• Binary package download URL

https://github.com/etcd-io/etcd/releases

• Disable the firewall and SELinux; run this on every machine

< 16  master01 - [root]: ~/k8s/etcd-cert > # setenforce 0
< 16  master01 - [root]: ~/k8s/etcd-cert > # systemctl stop firewalld

1. Download the etcd package on master01 (192.168.1.115)

< 18  master01 - [root]: ~/k8s > # wget https://github.com/etcd-io/etcd/releases/download/v3.3.11/etcd-v3.3.11-linux-amd64.tar.gz
< 19  master01 - [root]: ~/k8s > # tar xvf etcd-v3.3.11-linux-amd64.tar.gz
< 20  master01 - [root]: ~/k8s > # ls
cfssl.sh  etcd-cert  etcd-v3.3.11-linux-amd64  etcd-v3.3.11-linux-amd64.tar.gz

# There are two executables here: one is the client tool, the other starts the service
< 21  master01 - [root]: ~/k8s > # ls etcd-v3.3.11-linux-amd64
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md

< 22  master01 - [root]: ~/k8s > # mkdir /opt/etcd/{bin,cfg,ssl} -p
< 23  master01 - [root]: ~/k8s > # ls /opt/etcd/
bin  cfg  ssl


# Move the binaries into /opt/etcd/bin
< 24  master01 - [root]: ~/k8s > # cd etcd-v3.3.11-linux-amd64
< 25  master01 - [root]: ~/k8s/etcd-v3.3.11-linux-amd64 > # mv etcdctl  etcd /opt/etcd/bin/
< 26  master01 - [root]: ~/k8s/etcd-v3.3.11-linux-amd64 > # ll /opt/etcd/bin/
total 34236
-rwxr-xr-x 1 1000 1000 19237536 2019-01-12 04:33 etcd
-rwxr-xr-x 1 1000 1000 15817472 2019-01-12 04:33 etcdctl


 

< 29  master01 - [root]: ~/k8s > # vim etcd.sh


#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.115 etcd02=https://192.168.1.116:2380,etcd03=https://192.168.1.118:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

The script does two things: it generates the etcd configuration file from the variables, and it creates the systemd unit (enabled at boot) and then starts the etcd service.

< 31  master01 - [root]: ~/k8s > # ll
total 11096
-rw-r--r-- 1 root root      343 2019-01-24 15:44 cfssl.sh
drwxr-xr-x 2 root root     4096 2019-01-24 16:04 etcd-cert
-rw-r--r-- 1 root root     1765 2019-01-24 17:14 etcd.sh
drwxr-xr-x 3 1000 1000       92 2019-01-24 17:11 etcd-v3.3.11-linux-amd64
-rw-r--r-- 1 root root 11347204 2019-01-12 04:38 etcd-v3.3.11-linux-amd64.tar.gz
< 32  master01 - [root]: ~/k8s > # chmod +x etcd.sh



< 33  master01 - [root]: ~/k8s > # ./etcd.sh etcd01 192.168.1.115 etcd02=https://192.168.1.116:2380,etcd03=https://192.168.1.118:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

Copy the certificates into /opt/etcd/ssl

< 40  master01 - [root]: ~/k8s > # cp /root/k8s/etcd-cert/ca*pem /opt/etcd/ssl/
< 41  master01 - [root]: ~/k8s > # cp /root/k8s/etcd-cert/server*pem /opt/etcd/ssl/
< 42  master01 - [root]: ~/k8s > # systemctl start etcd

Copy the etcd directory to the other two etcd nodes

< 44  master01 - [root]: ~/k8s > # scp -r /opt/etcd/ root@192.168.1.116:/opt

root@192.168.1.116's password:
etcdctl                                                                                                                                                                         100%   15MB  38.4MB/s   00:00
etcd                                                                                                                                                                            100%   18MB  18.8MB/s   00:00
etcd                                                                                                                                                                            100%  509   507.7KB/s   00:00
ca-key.pem                                                                                                                                                                      100% 1679     1.5MB/s   00:00
ca.pem                                                                                                                                                                          100% 1273     1.3MB/s   00:00
server-key.pem                                                                                                                                                                  100% 1675     1.4MB/s   00:00
server.pem




< 44  master01 - [root]: ~/k8s > # scp -r /opt/etcd/ root@192.168.1.118:/opt

root@192.168.1.118's password:
etcdctl                                                                                                                                                                         100%   15MB  38.4MB/s   00:00
etcd                                                                                                                                                                            100%   18MB  18.8MB/s   00:00
etcd                                                                                                                                                                            100%  509   507.7KB/s   00:00
ca-key.pem                                                                                                                                                                      100% 1679     1.5MB/s   00:00
ca.pem                                                                                                                                                                          100% 1273     1.3MB/s   00:00
server-key.pem                                                                                                                                                                  100% 1675     1.4MB/s   00:00
server.pem

 

< 46  master01 - [root]: ~/k8s > # scp /usr/lib/systemd/system/etcd.service   root@192.168.1.116:/usr/lib/systemd/system/
root@192.168.1.116's password:
etcd.service                                                                                                                                                                    100%  923    22.1KB/s   00:00
< 47  master01 - [root]: ~/k8s > # scp /usr/lib/systemd/system/etcd.service   root@192.168.1.118:/usr/lib/systemd/system/
root@192.168.1.118's password:
etcd.service                                                                                                                                                                    100%  923   753.2KB/s   00:00

2. On master02 (192.168.1.116), edit the etcd configuration file: change the IP addresses to this host's address and change ETCD_NAME="etcd01" to ETCD_NAME="etcd02"

< 1  master02 - [root]: ~ > # vim /opt/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.116:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.116:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.116:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.116:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.115:2380,etcd02=https://192.168.1.116:2380,etcd03=https://192.168.1.118:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Start the etcd02 service

< 9  master02 - [root]: ~ > # systemctl start etcd

3. On node01 (192.168.1.118), edit the etcd configuration file: change the IP addresses to this host's address and change ETCD_NAME="etcd01" to ETCD_NAME="etcd03"

< 5  node01 - [root]: ~ > # vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.118:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.118:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.118:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.118:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.115:2380,etcd02=https://192.168.1.116:2380,etcd03=https://192.168.1.118:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Start the etcd03 service

< 9  node01 - [root]: ~ > # systemctl start etcd

• Check the cluster health

< 54  master01 - [root]: ~/k8s/etcd-cert > # /opt/etcd/bin/etcdctl  --ca-file=ca.pem  --cert-file=server.pem  --key-file=server-key.pem  --endpoints="https://192.168.1.115:2379,https://192.168.1.116:2379,https://192.168.1.118:2379" cluster-health
member 1603346476ff391b is healthy: got healthy result from https://192.168.1.118:2379
member 559e90863815fce0 is healthy: got healthy result from https://192.168.1.116:2379
member cc620f4604e712ee is healthy: got healthy result from https://192.168.1.115:2379
cluster is healthy

Troubleshooting guide: if the etcd deployment runs into problems, first check the logs in /var/log/messages, and second carefully review the etcd configuration file.
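A short checklist of commands covering those two steps (run on the node where etcd fails to start):

journalctl -u etcd --no-pager | tail -n 50    # systemd journal for the etcd unit
tail -n 50 /var/log/messages                  # the log file mentioned above
cat /opt/etcd/cfg/etcd                        # verify the IPs and ETCD_NAME for this node
systemctl status etcd -l                      # current unit state and last error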

Installing Docker on the Nodes

 

 

  1. Install docker on both node01 (192.168.1.118) and node02 (192.168.1.119)
< 6  node01 - [root]: ~ > # yum install -y yum-utils device-mapper-persistent-data lvm2
< 7  node01 - [root]: ~ > # yum-config-manager  --add-repo https://download.docker.com/linux/centos/docker-ce.repo
< 8  node01 - [root]: ~ > # yum install -y docker-ce

# On node02
< 6  node02 - [root]: ~ > # yum install -y yum-utils device-mapper-persistent-data lvm2
< 7  node02 - [root]: ~ > # yum-config-manager  --add-repo https://download.docker.com/linux/centos/docker-ce.repo
< 8  node02 - [root]: ~ > # yum install -y docker-ce

After installation, configure a registry mirror (accelerator) and enable Docker to start at boot.

< 9  node01 - [root]: ~ > # curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io

< 10  node01 - [root]: ~ > # systemctl  enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
< 11  node01 - [root]: ~ > # systemctl restart docker


Flannel Container Cluster Network Deployment

Overlay Network: a virtual network layered on top of the underlying physical network; hosts in this network are connected by virtual links.

VXLAN: encapsulates the original packet inside UDP, using the underlay network's IP/MAC as the outer header, then transmits it over Ethernet; at the destination, the tunnel endpoint decapsulates it and delivers the data to the target address.

Flannel: one kind of overlay network; it likewise encapsulates the original packet inside another network packet for routing, forwarding, and communication. It currently supports UDP, VXLAN, AWS VPC, GCE routes, and other forwarding backends.

 

2. Write the allocated subnet range into etcd for flanneld to use (on master01, 192.168.1.115)

< 55  master01 - [root]: ~/k8s/etcd-cert > # /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.115:2379,https://192.168.1.116:2379,https://192.168.1.118:2379"  set /coreos.com/network/config  '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

Verify that the setting took effect; the subnet configured here is 172.17.0.0/16

< 56  master01 - [root]: ~/k8s/etcd-cert > # /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.115:2379,https://192.168.1.116:2379,https://192.168.1.118:2379"  get /coreos.com/network/config


{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

3. Download the binary package on node01 (192.168.1.118)

https://github.com/coreos/flannel/releases

< 1  node01 - [root]: ~ > # mkdir -p /opt/kubernetes/{bin,cfg,ssl}
< 3  node01 - [root]: ~ > # wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz


< 7  node01 - [root]: ~ > # tar xf flannel-v0.10.0-linux-amd64.tar.gz
< 8  node01 - [root]: ~ > # ls
1.sh  anaconda-ks.cfg  flanneld  flannel-v0.10.0-linux-amd64.tar.gz  mk-docker-opts.sh  README.md
< 9  node01 - [root]: ~ > # mv flanneld  mk-docker-opts.sh  /opt/kubernetes/bin/
< 10  node01 - [root]: ~ > #
< 13  node01 - [root]: ~ > # mkdir -p /opt/etcd/ssl

4. Deploy and configure Flannel

Create the setup script. ETCD_ENDPOINTS=$1 passes in the etcd cluster endpoints; the script generates flannel's configuration file and the flanneld and docker systemd units, and a subnet is allocated for this node.

< 12  node01 - [root]: ~ > # vim flannel.sh
#!/bin/bash

ETCD_ENDPOINTS=${1:-"https://192.168.1.115:2379,https://192.168.1.116:2379,https://192.168.1.118:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

cat <<EOF >/usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker

Copy the generated certificates from master01 into node01's /opt/etcd/ssl directory

< 6  master01 - [root]: ~/k8s/etcd-cert > # scp ca.pem  server*pem root@192.168.1.118:/opt/etcd/ssl
root@192.168.1.118's password:
ca.pem                                                                                                                                                                          100% 1273     1.8MB/s   00:00
server-key.pem                                                                                                                                                                  100% 1675     2.2MB/s   00:00
server.pem

5. Manage Flannel with systemd

Then return to node01, run the script, and check the flannel logs for any errors.

< 15  node01 - [root]: ~ > # bash flannel.sh
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.






< 18  node01 - [root]: ~ > # journalctl  -u flanneld
-- Logs begin at Thu 2019-01-24 14:26:15 CST, end at Fri 2019-01-25 10:06:30 CST. --
Jan 25 10:06:29 node01 systemd[1]: Starting Flanneld overlay address etcd agent...
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.792486    5137 main.go:475] Determining IP address of default interface
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.793549    5137 main.go:488] Using interface with name ens32 and address 192.168.1.118
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.793618    5137 main.go:505] Defaulting external address to interface address (192.168.1.118)
Jan 25 10:06:29 node01 flanneld[5137]: warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.795276    5137 main.go:235] Created subnet manager: Etcd Local Manager with Previous Subnet: None
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.795295    5137 main.go:238] Installing signal handlers
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.813419    5137 main.go:353] Found network config - Backend type: vxlan
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.813526    5137 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.939311    5137 local_manager.go:234] Picking subnet in range 172.17.1.0 ... 172.17.255.0
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.941605    5137 local_manager.go:220] Allocated lease (172.17.75.0/24) to current node (192.168.1.118)
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.942552    5137 main.go:300] Wrote subnet file to /run/flannel/subnet.env
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.946958    5137 main.go:304] Running backend.
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.947352    5137 vxlan_network.go:60] watching for new subnet leases
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.953694    5137 main.go:396] Waiting for 22h59m59.991492468s to renew lease
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.960710    5137 iptables.go:115] Some iptables rules are missing; deleting and recreating rules
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.960740    5137 iptables.go:137] Deleting iptables rule: -s 172.17.0.0/16 -j ACCEPT
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.965475    5137 iptables.go:137] Deleting iptables rule: -d 172.17.0.0/16 -j ACCEPT
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.969321    5137 iptables.go:115] Some iptables rules are missing; deleting and recreating rules
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.969361    5137 iptables.go:137] Deleting iptables rule: -s 172.17.0.0/16 -d 172.17.0.0/16 -j RETURN
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.974223    5137 iptables.go:137] Deleting iptables rule: -s 172.17.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.977249    5137 iptables.go:125] Adding iptables rule: -s 172.17.0.0/16 -j ACCEPT
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.980106    5137 iptables.go:137] Deleting iptables rule: ! -s 172.17.0.0/16 -d 172.17.75.0/24 -j RETURN
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.987071    5137 iptables.go:137] Deleting iptables rule: ! -s 172.17.0.0/16 -d 172.17.0.0/16 -j MASQUERADE
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.989820    5137 iptables.go:125] Adding iptables rule: -s 172.17.0.0/16 -d 172.17.0.0/16 -j RETURN
Jan 25 10:06:29 node01 flanneld[5137]: I0125 10:06:29.994321    5137 iptables.go:125] Adding iptables rule: -d 172.17.0.0/16 -j ACCEPT
Jan 25 10:06:29 node01 systemd[1]: Started Flanneld overlay address etcd agent.
Jan 25 10:06:30 node01 flanneld[5137]: I0125 10:06:30.007029    5137 iptables.go:125] Adding iptables rule: -s 172.17.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
Jan 25 10:06:30 node01 flanneld[5137]: I0125 10:06:30.013481    5137 iptables.go:125] Adding iptables rule: ! -s 172.17.0.0/16 -d 172.17.75.0/24 -j RETURN
Jan 25 10:06:30 node01 flanneld[5137]: I0125 10:06:30.018906    5137 iptables.go:125] Adding iptables rule: ! -s 172.17.0.0/16 -d 172.17.0.0/16 -j MASQUERADE

If the service starts correctly, the corresponding subnet information is written under /run/flannel/

< 22  node01 - [root]: ~ > # cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.75.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.75.1/24 --ip-masq=false --mtu=1450"

6. Configure Docker to use the subnet generated by Flannel

< 38  node01 - [root]: ~ > # ps -ef|grep docker
root       6174      1  4 10:15 ?        00:00:00 /usr/bin/dockerd --bip=172.17.75.1/24 --ip-masq=false --mtu=1450
root       6312   5024  0 10:15 pts/0    00:00:00 grep --color=auto docker

7. Copy the directories from node01 to node02

Since node02 does not run an etcd service, we only need to copy the etcd certificates over from node01, as they will be needed shortly.

< 2  node02 - [root]: ~ > # mkdir -p /opt/etcd/ssl

 

< 41  node01 - [root]: ~ > # scp -r /opt/kubernetes/ root@192.168.1.119:/opt
root@192.168.1.119's password:
flanneld                                                                                                                                             100%   35MB  34.6MB/s   00:01
mk-docker-opts.sh                                                                                                                                    100% 2139     1.9MB/s   00:00
flanneld                                                                                                                                             100%  235   253.5KB/s   00:00


< 44  node01 - [root]: ~ > # scp  /opt/etcd/ssl/* root@192.168.1.119:/opt/etcd/ssl/
root@192.168.1.119's password:
ca-key.pem                                                                                                                                           100% 1679     1.2MB/s   00:00
ca.pem                                                                                                                                               100% 1273     1.2MB/s   00:00
server-key.pem                                                                                                                                       100% 1675     1.5MB/s   00:00
server.pem


< 45  node01 - [root]: ~ > # scp /lib/systemd/system/flanneld.service  root@192.168.1.119:/lib/systemd/system/
root@192.168.1.119's password:
flanneld.service                                                                                                                                     100%  417   135.3KB/s   00:00

8. Start the flannel service on node02 (192.168.1.119)

< 3  node02 - [root]: ~ > # systemctl start flanneld
< 4  node02 - [root]: ~ > # journalctl  -u flanneld
-- Logs begin at Thu 2019-01-24 14:50:58 CST, end at Fri 2019-01-25 10:28:46 CST. --
Jan 25 10:28:46 node02 systemd[1]: Starting Flanneld overlay address etcd agent...
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.200804    5384 main.go:475] Determining IP address of default interface
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.201854    5384 main.go:488] Using interface with name ens32 and address 192.168.1.119
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.201901    5384 main.go:505] Defaulting external address to interface address (192.168.1.119)
Jan 25 10:28:46 node02 flanneld[5384]: warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.203550    5384 main.go:235] Created subnet manager: Etcd Local Manager with Previous Subnet: None
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.203569    5384 main.go:238] Installing signal handlers
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.222573    5384 main.go:353] Found network config - Backend type: vxlan
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.222673    5384 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.341213    5384 local_manager.go:234] Picking subnet in range 172.17.1.0 ... 172.17.255.0
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.343662    5384 local_manager.go:220] Allocated lease (172.17.37.0/24) to current node (192.168.1.119)
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.344545    5384 main.go:300] Wrote subnet file to /run/flannel/subnet.env
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.344577    5384 main.go:304] Running backend.
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.352157    5384 main.go:396] Waiting for 22h59m59.989538692s to renew lease
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.352224    5384 vxlan_network.go:60] watching for new subnet leases
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.360718    5384 iptables.go:115] Some iptables rules are missing; deleting and recreating rules
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.360744    5384 iptables.go:137] Deleting iptables rule: -s 172.17.0.0/16 -j ACCEPT
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.364202    5384 iptables.go:137] Deleting iptables rule: -d 172.17.0.0/16 -j ACCEPT
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.365747    5384 iptables.go:115] Some iptables rules are missing; deleting and recreating rules
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.365767    5384 iptables.go:137] Deleting iptables rule: -s 172.17.0.0/16 -d 172.17.0.0/16 -j RETURN
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.366709    5384 iptables.go:125] Adding iptables rule: -s 172.17.0.0/16 -j ACCEPT
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.371984    5384 iptables.go:137] Deleting iptables rule: -s 172.17.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.375476    5384 iptables.go:137] Deleting iptables rule: ! -s 172.17.0.0/16 -d 172.17.37.0/24 -j RETURN
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.378453    5384 iptables.go:137] Deleting iptables rule: ! -s 172.17.0.0/16 -d 172.17.0.0/16 -j MASQUERADE
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.382213    5384 iptables.go:125] Adding iptables rule: -d 172.17.0.0/16 -j ACCEPT
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.383457    5384 iptables.go:125] Adding iptables rule: -s 172.17.0.0/16 -d 172.17.0.0/16 -j RETURN
Jan 25 10:28:46 node02 systemd[1]: Started Flanneld overlay address etcd agent.
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.396327    5384 iptables.go:125] Adding iptables rule: -s 172.17.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.400769    5384 iptables.go:125] Adding iptables rule: ! -s 172.17.0.0/16 -d 172.17.37.0/24 -j RETURN
Jan 25 10:28:46 node02 flanneld[5384]: I0125 10:28:46.405508    5384 iptables.go:125] Adding iptables rule: ! -s 172.17.0.0/16 -d 172.17.0.0/16 -j MASQUERADE

9. Configure Docker on node02 to use the Flannel-generated subnet

Add the following two lines to the docker systemd unit file, then restart docker.

< 5  node02 - [root]: ~ > # vim /lib/systemd/system/docker.service
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
< 6  node02 - [root]: ~ > # systemctl daemon-reload
< 7  node02 - [root]: ~ > # systemctl restart docker
< 8  node02 - [root]: ~ > # ps -ef|grep docker
root       5892      1  4 10:34 ?        00:00:00 /usr/bin/dockerd --bip=172.17.37.1/24 --ip-masq=false --mtu=1450
root       6030   5284  0 10:34 pts/0    00:00:00 grep --color=auto docker
< 9  node02 - [root]: ~ > # systemctl enable flanneld

10. Test Flannel connectivity between nodes

On any node, ifconfig now shows an extra flannel interface, and the routing table gains the corresponding entries; test connectivity between the nodes with ping.

< 9  node02 - [root]: ~ > # ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.37.1  netmask 255.255.255.0  broadcast 172.17.37.255
        ether 02:42:92:7b:12:fb  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.119  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::20c:29ff:fe1a:2345  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:1a:23:45  txqueuelen 1000  (Ethernet)
        RX packets 155821  bytes 124211224 (118.4 MiB)
        RX errors 0  dropped 7  overruns 0  frame 0
        TX packets 59215  bytes 5469371 (5.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.37.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::70ef:49ff:fe38:dfd1  prefixlen 64  scopeid 0x20<link>
        ether 72:ef:49:38:df:d1  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0
< 10  node02 - [root]: ~ > # route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.1.1     0.0.0.0         UG    100    0        0 ens32
172.17.37.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
172.17.75.0     172.17.75.0     255.255.255.0   UG    0      0        0 flannel.1
192.168.1.0     0.0.0.0         255.255.255.0   U     100    0        0 ens32
< 11  node02 - [root]: ~ > # ping 172.17.75.0
PING 172.17.75.0 (172.17.75.0) 56(84) bytes of data.
64 bytes from 172.17.75.0: icmp_seq=1 ttl=64 time=0.400 ms
64 bytes from 172.17.75.0: icmp_seq=2 ttl=64 time=0.269 ms

Troubleshooting guide: if ping fails, 1) check whether the firewall is enabled and whether its rules allow the traffic; 2) check whether flannel started successfully; 3) check whether docker actually picked up the flannel subnet.
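The three checks above map to commands like these (run on either node):

systemctl status firewalld                    # 1) is the firewall still running?
systemctl status flanneld -l                  # 2) did flannel start successfully?
cat /run/flannel/subnet.env                   # 3) the subnet handed to docker
ps -ef | grep dockerd                         #    dockerd should carry the --bip from that file
ip route | grep flannel.1                     #    route to the other node's subnet via flannel.1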

Deploying the Master Components

Using self-signed SSL certificates for the APIServer

1. kube-apiserver

2. kube-controller-manager

3. kube-scheduler

Configuration file -> managed by systemd -> start

< 9  master01 - [root]: ~/k8s > # mkdir k8s-cert
< 10  master01 - [root]: ~/k8s > # cd k8s-cert/

 

01) Create the certificate generation script. What matters here is the IP addresses in the hosts field: fill in the cluster IP addresses, except for the node and image-registry nodes, which do not need to be listed. The IPs filled in here are master01, master02, the Load Balancer Master, the Load Balancer Backup, and the VIP.

< 12  master01 - [root]: ~/k8s/k8s-cert > # vim k8s-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "HangZhou",
            "ST": "BinJiang",
      	    "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.1.115",
      "192.168.1.116",
      "192.168.1.110",
      "192.168.1.111",
      "192.168.1.113",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "HangZhou",
            "ST": "BinJiang",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "HangZhou",
      "ST": "BinJiang",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "HangZhou",
      "ST": "BinJiang",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

< 13  master01 - [root]: ~/k8s/k8s-cert > #

02) Run the script to generate the required certificates

< 14  master01 - [root]: ~/k8s/k8s-cert > # bash k8s-cert.sh


2019/01/25 11:09:36 [INFO] generating a new CA key and certificate from CSR
2019/01/25 11:09:36 [INFO] generate received request
2019/01/25 11:09:36 [INFO] received CSR
2019/01/25 11:09:36 [INFO] generating key: rsa-2048
2019/01/25 11:09:37 [INFO] encoded CSR
2019/01/25 11:09:37 [INFO] signed certificate with serial number 46338352777278245225331117850227166648045375007
2019/01/25 11:09:37 [INFO] generate received request
2019/01/25 11:09:37 [INFO] received CSR
2019/01/25 11:09:37 [INFO] generating key: rsa-2048
2019/01/25 11:09:37 [INFO] encoded CSR
2019/01/25 11:09:37 [INFO] signed certificate with serial number 33954866433721749602912608581583727897980511707
2019/01/25 11:09:37 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2019/01/25 11:09:37 [INFO] generate received request
2019/01/25 11:09:37 [INFO] received CSR
2019/01/25 11:09:37 [INFO] generating key: rsa-2048
2019/01/25 11:09:38 [INFO] encoded CSR
2019/01/25 11:09:38 [INFO] signed certificate with serial number 333853089734207864475766892831888732714944583082
2019/01/25 11:09:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2019/01/25 11:09:38 [INFO] generate received request
2019/01/25 11:09:38 [INFO] received CSR
2019/01/25 11:09:38 [INFO] generating key: rsa-2048
2019/01/25 11:09:38 [INFO] encoded CSR
2019/01/25 11:09:38 [INFO] signed certificate with serial number 57901387727152236005083339542412370486722199847
2019/01/25 11:09:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").



< 15  master01 - [root]: ~/k8s/k8s-cert > # ls
admin.csr       admin.pem       ca-csr.json  k8s-cert.sh          kube-proxy-key.pem  server-csr.json
admin-csr.json  ca-config.json  ca-key.pem   kube-proxy.csr       kube-proxy.pem      server-key.pem
admin-key.pem   ca.csr          ca.pem       kube-proxy-csr.json  server.csr          server.pem
< 16  master01 - [root]: ~/k8s/k8s-cert > #

03) Download the binary package and copy the SSL certificates

< 18  master01 - [root]: ~/k8s > # mkdir /root/k8s/soft
< 19  master01 - [root]: ~/k8s > # cd /root/k8s/soft/
< 20  master01 - [root]: ~/k8s/soft > # wget https://dl.k8s.io/v1.12.4/kubernetes-server-linux-amd64.tar.gz
< 22  master01 - [root]: ~/k8s/soft > # mkdir -p /opt/kubernetes/{bin,cfg,ssl}
< 23  master01 - [root]: ~/k8s/soft > # tar zxf kubernetes-server-linux-amd64.tar.gz
< 24  master01 - [root]: ~/k8s/soft > # cd kubernetes/server/bin/
< 25  master01 - [root]: ~/k8s/soft/kubernetes/server/bin > # cp kube-apiserver kubectl  kube-controller-manager kube-scheduler /opt/kubernetes/bin/

# In the end only these four executables are needed
< 26  master01 - [root]: ~/k8s/soft/kubernetes/server/bin > # ls /opt/kubernetes/bin/
kube-apiserver  kube-controller-manager  kubectl  kube-scheduler


# Copy the certificates
< 27  master01 - [root]: ~/k8s/soft/kubernetes/server/bin > # cp /root/k8s/k8s-cert/ca*pem  /opt/kubernetes/ssl/
< 28  master01 - [root]: ~/k8s/soft/kubernetes/server/bin > # cp /root/k8s/k8s-cert/server*pem /opt/kubernetes/ssl/
< 29  master01 - [root]: ~/k8s/soft/kubernetes/server/bin > # ls  /opt/kubernetes/ssl/
ca.pem  server-key.pem  server.pem ca-key.pem

04) Create the startup script; it takes two parameters: the master host IP address and the etcd cluster addresses

< 37  master01 - [root]: ~/k8s/soft > # vim apiserver.sh


#!/bin/bash

MASTER_ADDRESS=$1
ETCD_SERVERS=$2

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

05) Start the kube-apiserver service

Generate a token with the command below. The configuration above defines token-auth-file=/opt/kubernetes/cfg/token.csv; write the generated token value into that file, followed by the user.

< 41  master01 - [root]: ~/k8s/soft > # head -c 16 /dev/urandom  |od -An -t x |tr -d ' '
af56ead0fc5054e1a475d1d2e0f18e66
< 42  master01 - [root]: ~/k8s/soft > # vim /opt/kubernetes/cfg/token.csv

af56ead0fc5054e1a475d1d2e0f18e66,kubelet-bootstrap,10001,"system:bootstrappers"
< 38  master01 - [root]: ~/k8s/soft > # bash apiserver.sh  192.168.1.115 https://192.168.1.115:2379,https://192.168.1.116:2379,https://192.168.1.118:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

06) Create the controller-manager script and start the service

< 53  master01 - [root]: ~/k8s/soft > # vim controller-manager.sh

#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager


KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

 

< 54  master01 - [root]: ~/k8s/soft > # bash controller-manager.sh  127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.



< 56  master01 - [root]: ~/k8s/soft > # netstat -lnpt|grep 8080
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      3045/kube-apiserver

07) Create the kube-scheduler script

< 58  master01 - [root]: ~/k8s/soft > # vim scheduler.sh

#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
< 59  master01 - [root]: ~/k8s/soft > # bash scheduler.sh  127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.


# Verify the services started correctly; when the following is returned, kube-apiserver is also working properly
< 61  master01 - [root]: ~/k8s/soft > # /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

Deploying the Node Components

01) Copy kubelet and kube-proxy from master01 to node01 and node02

< 66  master01 - [root]: ~/k8s/soft/kubernetes/server/bin > # scp kubelet  kube-proxy root@192.168.1.118:/opt/kubernetes/bin/
root@192.168.1.118's password:
kubelet                                                                                                                                                                         100%  169MB  33.7MB/s   00:05
kube-proxy                                                                                                                                                                      100%   48MB  24.0MB/s   00:02


< 67  master01 - [root]: ~/k8s/soft/kubernetes/server/bin > # scp kubelet  kube-proxy root@192.168.1.119:/opt/kubernetes/bin/
root@192.168.1.119's password:
kubelet                                                                                                                                                                         100%  169MB  33.7MB/s   00:05
kube-proxy                                                                                                                                                                      100%   48MB  24.0MB/s   00:02


02) Create the kubeconfig.sh script, which takes two parameters: the apiserver address and the SSL certificate directory. The value that needs to be changed is BOOTSTRAP_TOKEN={token value}; this is the token used by kube-apiserver earlier, so copy it over and assign it to the variable.

< 72  master01 - [root]: ~/k8s/kubeconfig > # vim kubeconfig.sh

#----------------------
BOOTSTRAP_TOKEN=af56ead0fc5054e1a475d1d2e0f18e66

APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

03) Set environment variables and generate the kubeconfig files

< 73  master01 - [root]: ~/k8s/kubeconfig > # vim /etc/profile.d/k8s.sh

export PATH=$PATH:/opt/kubernetes/bin/

< 77  master01 - [root]: ~/k8s/kubeconfig > # source /etc/profile.d/k8s.sh

 

< 79  master01 - [root]: ~/k8s/kubeconfig > # bash kubeconfig.sh  192.168.1.115 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".

 

< 80  master01 - [root]: ~/k8s/kubeconfig > # ll
total 16
-rw------- 1 root root 2134 2019-01-25 16:30 bootstrap.kubeconfig
-rw-r--r-- 1 root root 1363 2019-01-25 16:22 kubeconfig.sh
-rw------- 1 root root 6285 2019-01-25 16:30 kube-proxy.kubeconfig

04) Copy the generated kubeconfig files to node01 and node02

< 90  master01 - [root]: ~/k8s/kubeconfig > # scp *kubeconfig root@192.168.1.118:/opt/kubernetes/cfg/
root@192.168.1.118's password:
bootstrap.kubeconfig                                                                                                                                                            100% 2175     1.7MB/s   00:00
kube-proxy.kubeconfig                                                                                                                                                           100% 6285     4.5MB/s   00:00
< 91  master01 - [root]: ~/k8s/kubeconfig > #
< 91  master01 - [root]: ~/k8s/kubeconfig > #
< 91  master01 - [root]: ~/k8s/kubeconfig > #
< 91  master01 - [root]: ~/k8s/kubeconfig > # scp *kubeconfig root@192.168.1.119:/opt/kubernetes/cfg/
root@192.168.1.119's password:
bootstrap.kubeconfig                                                                                                                                                            100% 2175     1.6MB/s   00:00
kube-proxy.kubeconfig                                                                                                                                                           100% 6285     4.5MB/s   00:00
Create the kubelet startup script on node01

Perform the following on node01 (192.168.1.118)

< 54  node01 - [root]: ~ > # vim kubelet.sh

#!/bin/bash

NODE_ADDRESS=$1
DNS_SERVER_IP=${2:-"10.0.0.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_ADDRESS} \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

EOF

cat <<EOF >/opt/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP}
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

# Start the service; the first argument is node01's own IP, the second is the DNS server IP

< 55  node01 - [root]: ~ > # bash kubelet.sh  192.168.1.118 10.0.0.2
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Create the kube-proxy startup script on node01

Perform the following on node01 (192.168.1.118)

< 18  node01 - [root]: ~ > # vim kube-proxy.sh

#!/bin/bash

NODE_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

# Start the service; the argument is node01's own IP

< 19  node01 - [root]: ~ > # bash kube-proxy.sh  192.168.1.118
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

 

Verify that node01 has joined the K8S Master
< 24  master01 - [root]: ~/k8s/k8s-cert > # kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-4mH6BO_0DrLEoC9GEvy_WreNFuLjA8-XmCl4qXvaClY   18m   kubelet-bootstrap   Pending
< 25  master01 - [root]: ~/k8s/k8s-cert > #

The current status is Pending (waiting to join); next we approve the node so it can join.

< 25  master01 - [root]: ~/k8s/k8s-cert > # kubectl certificate approve node-csr-4mH6BO_0DrLEoC9GEvy_WreNFuLjA8-XmCl4qXvaClY
certificatesigningrequest.certificates.k8s.io/node-csr-4mH6BO_0DrLEoC9GEvy_WreNFuLjA8-XmCl4qXvaClY approved
< 26  master01 - [root]: ~/k8s/k8s-cert > #
< 26  master01 - [root]: ~/k8s/k8s-cert > #
< 26  master01 - [root]: ~/k8s/k8s-cert > #
< 26  master01 - [root]: ~/k8s/k8s-cert > # kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-4mH6BO_0DrLEoC9GEvy_WreNFuLjA8-XmCl4qXvaClY   22m   kubelet-bootstrap   Approved,Issued
< 27  master01 - [root]: ~/k8s/k8s-cert > #
< 27  master01 - [root]: ~/k8s/k8s-cert > #
< 27  master01 - [root]: ~/k8s/k8s-cert > #
< 27  master01 - [root]: ~/k8s/k8s-cert > # kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.1.118   Ready    <none>   19s   v1.12.4

 

Configure node02 to join

Copy node01's startup scripts to node02 and change the IP passed in at startup.

< 31  node01 - [root]: ~ > # scp kubelet.sh  kube-proxy.sh  root@192.168.1.119:/root
root@192.168.1.119's password:
kubelet.sh                                                                                                                                                                      100% 1192   980.1KB/s   00:00
kube-proxy.sh
Start kubelet on node02
< 6  node02 - [root]: ~ > # bash kubelet.sh  192.168.1.119 10.0.0.2
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Start kube-proxy on node02
< 8  node02 - [root]: ~ > #  bash kube-proxy.sh  192.168.1.119
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
On master01, approve node02's join request
< 28  master01 - [root]: ~/k8s/k8s-cert > # kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-4mH6BO_0DrLEoC9GEvy_WreNFuLjA8-XmCl4qXvaClY   32m    kubelet-bootstrap   Approved,Issued
node-csr-zuvkcCCeIGm7uPjoWr7dcCi5qFYxKI8rKWmS1ISzXzI   101s   kubelet-bootstrap   Pending
< 29  master01 - [root]: ~/k8s/k8s-cert > # kubectl certificate approve node-csr-zuvkcCCeIGm7uPjoWr7dcCi5qFYxKI8rKWmS1ISzXzI
certificatesigningrequest.certificates.k8s.io/node-csr-zuvkcCCeIGm7uPjoWr7dcCi5qFYxKI8rKWmS1ISzXzI approved
< 30  master01 - [root]: ~/k8s/k8s-cert > #
< 30  master01 - [root]: ~/k8s/k8s-cert > # kubectl get node
NAME            STATUS     ROLES    AGE   VERSION
192.168.1.118   Ready      <none>   11m   v1.12.4
192.168.1.119   NotReady   <none>   6s    v1.12.4

 

Bind the kubelet-bootstrap user to the system cluster role; if this binding is not created, starting kubelet on the node fails with the error shown below.

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

Error:

failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope
Jan 25 17:24:10 localhost systemd: kubelet.service: main process exited, code=exited, status=255/n/a
Jan 25 17:24:10 localhost systemd: Unit kubelet.service entered failed state.
Jan 25 17:24:10 localhost systemd: kubelet.service failed.
Jan 25 17:24:10 localhost systemd: kubelet.service holdoff time over, scheduling restart.
Jan 25 17:24:10 localhost systemd: start request repeated too quickly for kubelet.service
Jan 25 17:24:10 localhost systemd: Failed to start Kubernetes Kubelet.
Jan 25 17:24:10 localhost systemd: Unit kubelet.service entered failed state.
Jan 25 17:24:10 localhost systemd: kubelet.service failed.

The fix is to create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:

< 16  master01 - [root]: ~/k8s/kubeconfig > # kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

 

Deploy a test example

Deploy nginx with 3 replicas:

< 31  master01 - [root]: ~/k8s/k8s-cert > #  kubectl run nginx --image=nginx --replicas=3
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
< 33  master01 - [root]: ~/k8s/k8s-cert > # kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-j875k   1/1     Running   0          46s
nginx-dbddb74b8-vvr4s   1/1     Running   0          46s
nginx-dbddb74b8-wnkwf   1/1     Running   0          46s
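
As the warning in the output notes, creating a Deployment through kubectl run is deprecated. On newer clusters the same result can be achieved with kubectl create and kubectl scale (a sketch, equivalent to the command above):

kubectl create deployment nginx --image=nginx   # creates a Deployment with 1 replica
kubectl scale deployment nginx --replicas=3     # scale it up to 3 replicas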

Expose the nginx Deployment on port 88:

< 34  master01 - [root]: ~/k8s/k8s-cert > #  kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
service/nginx exposed
< 36  master01 - [root]: ~/k8s/k8s-cert > # kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        23h
nginx        NodePort    10.0.0.69    <none>        88:48930/TCP   28s
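
For reference, the Service that kubectl expose creates here corresponds roughly to the manifest below (a sketch; the nodePort itself is picked at random from the 30000-50000 range configured on the apiserver unless set explicitly):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx        # kubectl run labels its Pods with run=nginx
  ports:
  - port: 88          # ClusterIP port
    targetPort: 80    # container port
EOF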

 

From any node, the nginx Service can be reached through its cluster IP at 10.0.0.69:88 (the successful curl from node01 is shown further below). If it cannot be reached, check that the flannel service is healthy; when restarting flannel, restart docker as well so that docker picks up the flannel network.
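
A quick way to run that check on a node (a sketch; the flanneld unit name matches the flannel service installed earlier in this article, adjust it if yours differs):

systemctl status flanneld docker                         # both should be active (running)
systemctl restart flanneld && systemctl restart docker   # restart docker after flannel so it re-reads the flannel subnet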

< 1  node01 - [root]: ~ > # curl 10.0.0.69:88
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

The service can also be reached from outside the cluster through any node's host IP plus the randomly assigned NodePort, e.g. http://192.168.1.118:48930

< 37  master01 - [root]: ~/k8s/k8s-cert > #  kubectl get svc nginx
NAME    TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.0.0.69    <none>        88:48930/TCP   3m38s

 

 

All of the requests above produce access logs inside the containers, which can be viewed with kubectl logs. If you hit one of the errors below, apply the corresponding fix.

Problem 1: kubectl cannot read container logs

< 38  master01 - [root]: ~/k8s/k8s-cert > # kubectl logs  nginx-dbddb74b8-j875k
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log nginx-dbddb74b8-j875k))

Solution: make sure the kubelet configuration file contains:

authentication:
  anonymous:
    enabled: true
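
For example, on a node you can verify the setting and restart kubelet roughly like this (a sketch; /opt/kubernetes/cfg/kubelet.config is an assumed path, substitute wherever your kubelet configuration actually lives):

grep -A2 'anonymous' /opt/kubernetes/cfg/kubelet.config   # should show enabled: true
systemctl restart kubelet                                  # pick up the change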

Problem 2: the anonymous user is not bound to a cluster role

< 39  master01 - [root]: ~/k8s/k8s-cert > # kubectl logs  nginx-dbddb74b8-j875k
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-j875k)

Solution: bind the anonymous user to a cluster role (kubectl create clusterrolebinding --help shows the full syntax). Note that binding cluster-admin to system:anonymous, as below, is convenient in a lab but far too permissive for production.

< 41  master01 - [root]: ~/k8s/k8s-cert > # kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created

Viewing the nginx logs again now shows the access records:

< 45  master01 - [root]: ~/k8s/k8s-cert > # kubectl logs  nginx-dbddb74b8-wnkwf
10.0.0.69 - - [26/Jan/2019:07:40:49 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
172.17.37.0 - - [26/Jan/2019:07:40:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
172.17.75.1 - - [26/Jan/2019:07:45:02 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36" "-"
2019/01/26 07:45:03 [error] 6#6: *3 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 172.17.75.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "192.168.1.118:48930", referrer: "http://192.168.1.118:48930/"
172.17.75.1 - - [26/Jan/2019:07:45:03 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://192.168.1.118:48930/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36" "-"

 

Deploy the Web UI (Dashboard)

The required YAML files have been uploaded to GitHub; download them if you need them.

< 23  master01 - [root]: ~/k8s/soft/dashboard > # kubectl create -f dashboard-configmap.yaml
configmap/kubernetes-dashboard-settings created

 

< 25  master01 - [root]: ~/k8s/soft/dashboard > # kubectl create -f dashboard-rbac.yaml
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created

 

< 27  master01 - [root]: ~/k8s/soft/dashboard > # kubectl create -f dashboard-secret.yaml
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-key-holder created
< 29  master01 - [root]: ~/k8s/soft/dashboard > # kubectl create -f dashboard-controller.yaml
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created

 

< 33  master01 - [root]: ~/k8s/soft/dashboard > # kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-5f5bfdc89f-whvp4   1/1     Running   0          48s

 

< 36  master01 - [root]: ~/k8s/soft/dashboard > # kubectl create -f dashboard-service.yaml
service/kubernetes-dashboard created
< 37  master01 - [root]: ~/k8s/soft/dashboard > # kubectl get svc -n kube-system
NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.0.0.14    <none>        443:47498/TCP   11s

The UI can now be opened over HTTPS through any node's IP on the NodePort shown above. The login page offers two sign-in methods; next, create a token for token-based login.
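
To read the NodePort back later without scrolling through the service list, a jsonpath query works (a sketch):

kubectl -n kube-system get svc kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'

Then open https://<any-node-ip>:<nodePort>/ in a browser, e.g. https://192.168.1.118:47498/ for the cluster built here.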

< 40  master01 - [root]: ~/k8s/soft/dashboard > # kubectl create -f k8s-admin.yaml
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
< 41  master01 - [root]: ~/k8s/soft/dashboard > # kubectl get secret -n kube-system
NAME                               TYPE                                  DATA   AGE
dashboard-admin-token-xhq8p        kubernetes.io/service-account-token   3      11s
default-token-5c6gz                kubernetes.io/service-account-token   3      25h
kubernetes-dashboard-certs         Opaque                                0      23m
kubernetes-dashboard-key-holder    Opaque                                2      23m
kubernetes-dashboard-token-2v54n   kubernetes.io/service-account-token   3      15m
< 42  master01 - [root]: ~/k8s/soft/dashboard > #

Look up the token value, then sign in with it using the token option:

< 42  master01 - [root]: ~/k8s/soft/dashboard > # kubectl describe secret dashboard-admin-token-xhq8p -n kube-system
Name:         dashboard-admin-token-xhq8p
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 6a07302a-214b-11e9-9680-000c29b8229e

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1363 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4teGhxOHAiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNmEwNzMwMmEtMjE0Yi0xMWU5LTk2ODAtMDAwYzI5YjgyMjllIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.tvc7wVfh19jm0KWbaVS2O5Bhzq_iWM-jBATKIAehqM1BG4F1EOEOVf9ZIT3ls9vam7jZNGxzEL3z0oJtrLhgDWOub7HFt4x1PqfrX39aStH_lKy_JQtStbYToDbnw1wjEYQ_sEQcIqdRZhj8f8vtwSXMwMIUa2Iofpg4AGLWwzRKNhCj5EJmJZ1CCx8zARdd1bl-8_XLliZrXDLrvrD4OnIOMmUGf1A0L5jZ6sUypiNLwvIttwqRN6cQc6n0GtGO-vMLAZ9PLIvy7OvD9eEfz-PFq_HdvW-T5oiiLFCbwfM_0qKzHG7n4PqeH8Aj2aQJHFyVSEB6ULBA05xqJFP9sQ
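
A shorter way to pull out just the token value (a sketch; the secret name is the one generated above and will differ in every cluster):

kubectl -n kube-system get secret dashboard-admin-token-xhq8p \
    -o jsonpath='{.data.token}' | base64 -d; echo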

Deploying the in-cluster DNS service (CoreDNS) is covered in the CoreDNS section at the end of this article.

Kubernetes production-grade high-availability cluster deployment

Multi-master cluster architecture diagram

 

Deploy the master02 node

01) Copy the /opt/kubernetes directory from master01 to master02

< 49  master01 - [root]: ~/k8s/soft/dashboard > # scp -r /opt/kubernetes/ root@192.168.1.116:/opt/
root@192.168.1.116's password:
kube-apiserver                                                                                                                                                                  100%  184MB  30.7MB/s   00:06
kubectl                                                                                                                                                                         100%   55MB  27.4MB/s   00:02
kube-controller-manager                                                                                                                                                         100%  155MB  25.9MB/s   00:06
kube-scheduler                                                                                                                                                                  100%   55MB  27.3MB/s   00:02
kube-apiserver                                                                                                                                                                  100%  929   975.4KB/s   00:00
kube-controller-manager                                                                                                                                                         100%  483   274.5KB/s   00:00
kube-scheduler                                                                                                                                                                  100%   94   112.4KB/s   00:00
token.csv                                                                                                                                                                       100%   80    94.3KB/s   00:00
ca.pem                                                                                                                                                                          100% 1363     1.3MB/s   00:00
server-key.pem                                                                                                                                                                  100% 1679     1.6MB/s   00:00
server.pem                                                                                                                                                                      100% 1651     1.5MB/s   00:00
ca-key.pem                                                                                                                                                                      100% 1675     1.6MB/s   00:00

02) Copy the systemd service unit files from master01 to master02, then adjust the configuration files on master02

< 50  master01 - [root]: ~/k8s/soft/dashboard > # scp /usr/lib/systemd/system/{kube-apiserver,kube-scheduler,kube-controller-manager}.service root@192.168.1.116:/usr/lib/systemd/system/
root@192.168.1.116's password:
kube-apiserver.service                                                                                                                                                          100%  282   238.2KB/s   00:00
kube-scheduler.service                                                                                                                                                          100%  281   260.1KB/s   00:00
kube-controller-manager.service                                                                                                                                                 100%  317   291.7KB/s   00:00
< 51  master01 - [root]: ~/k8s/soft/dashboard > #

03) In the kube-apiserver configuration file on master02 (192.168.1.116), change --bind-address and --advertise-address to master02's own IP address

< 1  master02 - [root]: ~ > # vim /opt/kubernetes/cfg/kube-apiserver


KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.1.115:2379,https://192.168.1.116:2379,https://192.168.1.118:2379 \
--bind-address=192.168.1.116 \
--secure-port=6443 \
--advertise-address=192.168.1.116 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

04) Start the services; kube-apiserver must be started first

< 2  master02 - [root]: ~ > # systemctl start kube-apiserver
< 3  master02 - [root]: ~ > # systemctl start kube-scheduler
< 4  master02 - [root]: ~ > # systemctl start kube-controller-manager
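
The transcript only starts the three services; enabling them as well keeps master02 functional across reboots (a small addition, not shown in the original):

systemctl enable kube-apiserver kube-controller-manager kube-scheduler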

 

< 2  master02 - [root]: ~ > # /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
< 3  master02 - [root]: ~ > #
< 3  master02 - [root]: ~ > #
< 3  master02 - [root]: ~ > # /opt/kubernetes/bin/kubectl get node
NAME            STATUS   ROLES    AGE    VERSION
192.168.1.118   Ready    <none>   3h6m   v1.12.4
192.168.1.119   Ready    <none>   175m   v1.12.4

Nginx + Keepalived

LB-Master host installation

< 3  LB-Master - [root]: ~ > # yum install -y nginx keepalived

In production the kube-apiserver must be made highly available; add a layer-4 (stream) proxy to the main nginx configuration:

< 3  LB-Master - [root]: ~ > # vim /etc/nginx/nginx.conf

# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes 4;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}


stream {
    upstream k8s-apiserver {
        server 192.168.1.115:6443;
        server 192.168.1.116:6443;

    }
    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }

}

Save the configuration, then enable and start the nginx service:

< 7  LB-Master - [root]: ~ > # systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
< 8  LB-Master - [root]: ~ > # systemctl start nginx
< 9  LB-Master - [root]: ~ > # netstat -lnpt|grep 6443
tcp        0      0 192.168.1.111:6443      0.0.0.0:*               LISTEN      3190/nginx: master
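
A quick sanity check of the layer-4 proxy (a sketch, run from any machine that can reach 192.168.1.111): any JSON answer from kube-apiserver, even a 401/403, proves that nginx is forwarding the TCP connection.

curl -k https://192.168.1.111:6443/version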

Configure keepalived

< 2  LB-Master - [root]: ~ > # vim /etc/keepalived/check_nginx.sh

#!/bin/bash
# If nginx is no longer running, stop keepalived so the VIP fails over to the standby node.
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
< 3  LB-Master - [root]: ~ > # chmod +x /etc/keepalived/check_nginx.sh

Note: make sure the interface name matches the host's actual NIC. The VIP used here is 192.168.1.110 (configured under virtual_ipaddress below; the nodes will later point at this address to reach kube-apiserver).

< 23  LB-Master - [root]: /etc/keepalived > # vim keepalived.conf
! Configuration File for keepalived

global_defs {
   # Notification recipient addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # Notification sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51 # VRRP router ID; unique per VRRP instance
    priority 100    # Priority; the backup server is set to 90
    advert_int 1    # VRRP advertisement (heartbeat) interval in seconds, default 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.110/24
    }
    track_script {
        check_nginx
    }
}

 

< 24  LB-Master - [root]: /etc/keepalived > # systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
< 25  LB-Master - [root]: /etc/keepalived > # systemctl start keepalived

< 26  LB-Master - [root]: /etc/keepalived > # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:2b:b9:e8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.111/24 brd 192.168.1.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe2b:b9e8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever



A moment after keepalived starts, the VIP shows up as a secondary address on ens32:

< 27  LB-Master - [root]: /etc/keepalived > # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:2b:b9:e8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.111/24 brd 192.168.1.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.1.100/24 scope global secondary ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe2b:b9e8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
< 28  LB-Master - [root]: /etc/keepalived > #

Copy the configuration files to the standby LB-Backup machine:

< 28  LB-Master - [root]: /etc/keepalived > # scp /etc/keepalived/{keepalived.conf,check_nginx.sh} root@192.168.1.112:/etc/keepalived

root@192.168.1.112's password:
keepalived.conf                                                                                                                                                                 100%  893     1.4MB/s   00:00
check_nginx.sh

 

LB-Backup host installation

< 1  LB-Backup - [root]: ~ > # yum install -y nginx keepalived

Edit /etc/nginx/nginx.conf with the same stream proxy block used on LB-Master (this copy also adds an access log for the layer-4 proxy):
user nginx;
worker_processes 4;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}


stream {
    log_format main "$remote_addr $upstream_addr $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.1.115:6443;
        server 192.168.1.116:6443;

    }
    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }

}
< 3  LB-Backup - [root]: ~ > # systemctl start nginx
< 4  LB-Backup - [root]: ~ > # netstat -lnpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1280/master
tcp        0      0 192.168.1.112:6443      0.0.0.0:*               LISTEN      4001/nginx: master
< 11  LB-Backup - [root]: ~ > # vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   # Notification recipient addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # Notification sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 51 # VRRP router ID; unique per VRRP instance
    priority 90    # Priority; the backup server uses 90
    advert_int 1    # VRRP advertisement (heartbeat) interval in seconds, default 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.110/24
    }
    track_script {
        check_nginx
    }
}

Start the services and test failover between the two load balancers

< 12  LB-Backup - [root]: ~ > # systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
< 13  LB-Backup - [root]: ~ > # systemctl start keepalived

The VIP currently sits on the LB-Master node. Next, simulate an nginx outage on LB-Master and check whether the VIP floats over to the LB-Backup machine.

< 90  LB-Master - [root]: /etc/keepalived > # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:2b:b9:e8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.111/24 brd 192.168.1.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.1.100/24 scope global secondary ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe2b:b9e8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

 

< 91  LB-Master - [root]: /etc/keepalived > # pkill nginx

After nginx is killed, the check script stops keepalived and the VIP disappears from the LB-Master node.


As the output below shows, the VIP has floated over to the LB-Backup node; the keepalived log records the switchover in detail (a journalctl one-liner for following it is given after the output).

< 40  LB-Backup - [root]: ~ > # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:db:2c:ed brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.112/24 brd 192.168.1.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.1.100/24 scope global secondary ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fedb:2ced/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
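
To watch the switchover in real time on either LB node, following the keepalived unit log is enough (a sketch):

journalctl -u keepalived -f   # shows the VRRP state transitions as they happen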

Key point: on both node01 and node02, edit the configuration files the nodes use to reach kube-apiserver so that they point at the keepalived VIP:  server: https://192.168.1.110:6443

< 1  node01 - [root]: ~ > # cd /opt/kubernetes/cfg/

< 9  node01 - [root]: /opt/kubernetes/cfg > # grep -nr 6443 *
bootstrap.kubeconfig:5:    server: https://192.168.1.110:6443
kubelet.kubeconfig:5:    server: https://192.168.1.110:6443
kube-proxy.kubeconfig:5:    server: https://192.168.1.110:6443
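
One way to switch all three files in one go (a sketch; it assumes the files currently point at master01, 192.168.1.115:6443, so adjust the old address to whatever yours contain):

cd /opt/kubernetes/cfg
cp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig /tmp/   # keep a backup first
sed -i 's#https://192.168.1.115:6443#https://192.168.1.110:6443#g' \
    bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig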

Save the changes and restart the services; the nginx access log on the load balancer then shows the apiserver requests being balanced across both masters.

< 20  node01 - [root]: /opt/kubernetes/cfg > # systemctl restart kubelet kube-proxy

All of the resources used above have been uploaded to GitHub: https://github.com/gujiwork/k8s-deploy

 

CoreDNS deployment

See the CoreDNS documentation: https://www.guji.work/archives/546#DNS
