
k8s Cluster Deployment

Deployment Environment

Three virtual machines, all running CentOS 7. The node names and IP addresses are as follows:

Hostname     IP              Components
k8s-master   192.168.1.34    etcd, kube-apiserver, kube-controller-manager, kube-scheduler
k8s-node1    192.168.1.33    kubelet, kube-proxy, docker
k8s-node2    192.168.1.32    kubelet, kube-proxy, docker
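Before starting, it helps if every machine can resolve the others by name. This convenience step is not part of the original guide; a minimal sketch using the hostnames and IPs from the table above:

```shell
# Run on all three machines: map the cluster hostnames to their IPs
cat >> /etc/hosts <<'EOF'
192.168.1.34 k8s-master
192.168.1.33 k8s-node1
192.168.1.32 k8s-node2
EOF
```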

Master node

Install etcd:
k8s relies on etcd as its persistent configuration and storage service, so deploy etcd first.
Edit /etc/etcd/etcd.conf:

#[Member]
#ETCD_CORS=""
# Data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
# Client listen address
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
# Node name
ETCD_NAME="master"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# Address advertised to clients; use the master's IP so the other
# components (apiserver, flannel on the nodes) can reach it
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.34:2379"
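After editing the file, etcd can be started and smoke-tested. A minimal sketch using the etcd v2 `etcdctl` commands (the same API the flannel step later in this guide relies on):

```shell
# Start etcd and enable it on boot
systemctl start etcd.service
systemctl enable etcd.service

# Check cluster health through the advertised client URL
etcdctl --endpoints http://192.168.1.34:2379 cluster-health

# Round-trip a throwaway key to confirm reads and writes work
etcdctl set /ping pong
etcdctl get /ping
etcdctl rm /ping
```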

Configure kubernetes:
The /etc/kubernetes directory contains the following files:

apiserver: apiserver configuration; provides the interface for external interaction
config: main configuration file
controller-manager: cluster-management configuration; carries out most of the master's work, managing nodes, pods, replication, services, namespaces, etc.
scheduler: scheduler configuration; watches the pod directory in etcd for changes and assigns pods to nodes via its scheduling algorithm

apiserver configuration:

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0" # address the apiserver binds to on startup

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.34:2379" # etcd endpoint used by kubernetes

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16" # service VIP range; must not overlap the flannel pod network (172.17.0.0/16)

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

config configuration:

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.1.34:8080"

The controller-manager and scheduler files can be left at their defaults.

Start the services:

[root@k8s-master ~]# systemctl start flanneld.service
[root@k8s-master ~]# systemctl start kube-apiserver.service
[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl start kube-scheduler.service
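To survive reboots the same services can be enabled, and the control plane verified. A sketch, assuming `kubectl` is installed on the master:

```shell
# Enable every master service on boot
for svc in etcd flanneld kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl enable ${svc}.service
done

# The insecure port should answer with version info
curl http://192.168.1.34:8080/version

# scheduler, controller-manager, and etcd should all report Healthy
kubectl -s http://192.168.1.34:8080 get componentstatuses
```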

Node

Configure kubernetes on the nodes:

The /etc/kubernetes directory contains the following files:

config: kubernetes main configuration file
kubelet: kubelet node configuration file
proxy: kubernetes proxy configuration file

kubelet configuration:

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.1.33" # NOTE: on node2 this must be changed to 192.168.1.32

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.1.34:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

config configuration:

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.1.34:8080"

The proxy file can be left at its defaults.

Start the services:

[root@k8s-node-1 ~]# systemctl start flanneld.service
[root@k8s-node-1 ~]# systemctl start kubelet.service
[root@k8s-node-1 ~]# systemctl start kube-proxy.service
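Once both nodes' services are up, their registration can be checked from the master, and the node services enabled on boot as well. A sketch:

```shell
# Run on the master: 192.168.1.33 and 192.168.1.32 should be listed as Ready
kubectl -s http://192.168.1.34:8080 get nodes

# Run on each node: enable the services on boot
for svc in flanneld kubelet kube-proxy; do
    systemctl enable ${svc}.service
done
```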

Network Configuration

Install flannel on both the master and the nodes.

Configure flannel (edit /etc/sysconfig/flanneld on the master and on every node):

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.1.34:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/flannel/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

Set the flannel key in etcd (master only)

Flannel stores its configuration in etcd so that all flannel instances stay consistent, so the following key must be set in etcd:

[root@k8s-master ~]# etcdctl mk /flannel/network/config '{ "Network": "172.17.0.0/16" }'
{ "Network": "172.17.0.0/16" }
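After the key is written, flanneld and docker need a restart on every node so that docker0 picks up the subnet flannel leased out of 172.17.0.0/16. A sketch, assuming the CentOS flannel package's systemd integration (flanneld writes its lease to /run/flannel/subnet.env, which docker's unit reads):

```shell
# Confirm the key is readable (run anywhere with etcd access)
etcdctl get /flannel/network/config

# On every node: restart flannel first, then docker, so docker0
# is reconfigured onto the flannel-assigned subnet
systemctl restart flanneld.service
systemctl restart docker.service

# Inspect the lease flanneld wrote for this node
cat /run/flannel/subnet.env
ip addr show flannel0    # flannel.1 if the vxlan backend is used
```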

Notes:

  1. Check the iptables firewall: the INPUT/OUTPUT/FORWARD chains' default policy should be ACCEPT
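For a lab cluster on CentOS 7, the quickest way to satisfy this is to check the chain policies and, if necessary, disable firewalld outright (not advisable in production). A sketch:

```shell
# The first line of each listing shows the default policy; it should read ACCEPT
iptables -L INPUT -n | head -1
iptables -L OUTPUT -n | head -1
iptables -L FORWARD -n | head -1

# Lab shortcut: turn firewalld off entirely
systemctl stop firewalld
systemctl disable firewalld
```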
Quietly I leave, just as quietly I came.