Kubernetes 1.5.2 cluster setup

Hi everyone. I've been busy since the holidays and haven't had time to blog.

Recently I've been digging into something new: Kubernetes.

Let's start with a quick introduction.

What is Kubernetes?

1. It is a distributed architecture built on container technology, an open-source counterpart of Google's Borg.

Borg is Google's well-known internal large-scale cluster management system. Built on container technology, it automates resource management and maximizes resource utilization across multiple data centers.

2. Kubernetes is an open platform. It is not tied to any one programming language and does not impose a particular programming interface.

3. Kubernetes is a complete platform for running distributed systems. It provides full cluster management capabilities, including multi-layer security and admission control, multi-tenancy support, transparent service registration and discovery, a built-in intelligent load balancer, powerful failure detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource-scheduling mechanism, and fine-grained resource quota management.

Why use Kubernetes?

Kubernetes is the industry-recognized distributed solution for Docker and a natural fit for microservice architectures. The core idea of microservices is to split a huge monolithic application into many small, interconnected services; each microservice is backed by multiple replica instances, the number of replicas can be adjusted to match the system load, and the built-in load balancer schedules traffic automatically.

Kubernetes also has very strong horizontal scaling capabilities: the number of Nodes can be scaled out, and a workload can be scaled horizontally in seconds. This is a huge help for e-commerce scenarios such as flash sales and group buying, where the timing and size of traffic spikes are unpredictable, and for big promotions that require scaling out the whole system. You can run the application tier uniformly on Kubernetes, while the backing databases rely on a cloud provider's database service or are self-hosted on physical or cloud machines.
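To make that elasticity concrete: in this version the usual mechanism is kubectl scale. The ReplicationController name web below is hypothetical (it is not created in this post), so treat this as a sketch only:

kubectl scale rc web --replicas=10   # scale out before the traffic spike
kubectl scale rc web --replicas=2    # scale back down afterwards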


Now that we have a rough idea of what Kubernetes is, let's briefly go over its basic concepts.

Most Kubernetes concepts, such as Node, Pod, ReplicationController, and Service, can be regarded as "resource objects". Almost all of these resources can be created, deleted, modified, and queried with the kubectl tool that Kubernetes provides, and their state is persisted in etcd.
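A few representative kubectl operations against such resource objects (my-pod and my-pod.yaml are placeholder names, not files from this post):

kubectl get nodes
kubectl get pods
kubectl describe pod my-pod
kubectl create -f my-pod.yaml
kubectl delete pod my-pod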

Master (control node)

The Master is the cluster's management and control node; essentially every control command for the Kubernetes cluster is sent to it, and it is responsible for carrying the command out.

For high availability, a cluster typically runs three Masters.

The Master node runs the following processes:

  ● kube-apiserver: the key service process exposing the HTTP REST interface; it is the single entry point for all create, delete, update, and query operations on resources, and the entry point for cluster control;

  ● kube-controller-manager: the automated control center for all resource objects in Kubernetes;

  ● kube-scheduler: the process responsible for resource scheduling, i.e. assigning Pods to nodes.

The Master also needs to run an etcd service, which persists the data for all resource objects; node objects are stored in etcd under the legacy name "minions".
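If you are curious, once the cluster below is running you can peek at this data directly with etcdctl. This is only a sketch, and it assumes the default etcd2 storage backend used by Kubernetes 1.5 together with the v2 etcdctl API:

etcdctl ls /registry
etcdctl ls /registry/minions   # one entry per registered node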

Node (worker node)

In a Kubernetes cluster, every machine other than the Master is called a Node (known as a Minion in early versions).

Nodes are the cluster's workload nodes and can be physical machines or virtual machines. The Master assigns workloads (Docker containers) to each Node, and if a Node goes down, the Master automatically moves its workloads to other Nodes.

A Node runs the following processes:

  ● kubelet: handles creating, starting, and stopping the containers that make up Pods, and works closely with the Master to implement the basic cluster-management functions;

  ● kube-proxy: implements the communication and load-balancing mechanism for Kubernetes Services;

  ● Docker Engine: responsible for creating and managing containers on the local machine.

Nodes can be added to the cluster dynamically; the kubelet registers each Node with the Master.

After registering, the kubelet reports the Node's own state to the Master, including the operating system, Docker version, and CPU and memory usage.

The Master schedules workloads based on each Node's resource usage.

If a Node stops reporting for too long, the Master marks it as lost, its status becomes NotReady, and the workload-transfer process is triggered.

Pod

The Pod is the most important and most basic concept in Kubernetes. A Pod contains one or more closely related containers and volumes, and every Pod has a special "root container" called the Pause container.

The image behind the Pause container is part of the Kubernetes platform itself.

Why does Kubernetes have the Pod concept?

1. When a group of containers is treated as a unit, it is hard to judge and operate on the group as a whole; if one container dies, should the whole group be considered down? The Pod's status represents the status of the entire container group;

2. The containers in a Pod share the Pause container's IP and the volumes mounted on the Pause container, which simplifies communication between the related containers and neatly solves the file-sharing problem. A minimal Pod manifest is sketched below.
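Here is a minimal sketch of a Pod, created by piping a manifest to kubectl. The name web-demo, the label app: web, and the nginx image are illustrative only and are not part of this setup:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-demo          # hypothetical name
  labels:
    app: web              # hypothetical label, reused in later examples
spec:
  containers:
  - name: nginx
    image: nginx          # any image the nodes can pull
    ports:
    - containerPort: 80
EOF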

Service

A Service is also a basic operational unit in Kubernetes: an abstraction over a real application service, backed by a set of containers.

A Service can be viewed as the external access point for a group of Pods that provide the same functionality.

Which Pods a Service targets is defined by a Label Selector.
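As a sketch (again using the hypothetical app: web label), a Service that routes traffic to any Pods carrying that label could be created like this:

kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web             # the Label Selector: any Pod with this label becomes an endpoint
  ports:
  - port: 80
    targetPort: 80
EOF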

Replication Controller (RC, Pod replica controller)

A Replication Controller ensures that a specified number of Pod replicas are running in the cluster at all times. If there are fewer replicas than specified, it starts new containers; if there are more, it kills the excess so the count stays constant. A Replication Controller creates Pods from a predefined Pod template; once created, the Pods have no link back to the template, so you can change the template without affecting Pods that already exist, and you can also update Pods created by the Replication Controller directly.
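A sketch of an RC that keeps three replicas of the hypothetical web Pod running (same illustrative names and labels as above):

kubectl create -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

The replica count can then be changed on the fly with kubectl scale rc web --replicas=5.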

Volume

A Volume is a shared directory inside a Pod that can be accessed by multiple containers. The concept is similar in Kubernetes and Docker, but a Kubernetes Volume is defined on the Pod and then mounted by the Pod's containers, and its lifecycle is the same as the Pod's. Kubernetes supports many Volume types, including advanced distributed file systems such as GlusterFS and Ceph.
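The simplest case is an emptyDir Volume shared by two containers in one Pod. This is just an illustrative sketch; the pod name, container names, and busybox image are not part of this setup:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    emptyDir: {}       # both containers see the same directory
EOF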

Label

A Label is a user-defined key-value pair that can be attached to resources such as Nodes, Pods, Services, and RCs. Each resource can carry any number of Labels, and the same Label can be attached to any number of resources. Labels can be defined when a resource is created, or added and removed dynamically afterwards. Attaching one or more Labels to resources enables multi-dimensional grouping, which makes resource allocation, scheduling, configuration, and deployment much easier; a Label Selector can then be used to query or filter the resources that carry particular Labels.
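For example (the env=uat key-value and the app: web label are arbitrary illustrations), you can label a Node after it registers and then filter resources by Label:

kubectl label node uat-ucs01 env=uat
kubectl get nodes -l env=uat
kubectl get pods -l app=web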

Namespace

Namespaces are how Kubernetes implements resource isolation between tenants. Objects inside the cluster are assigned to different Namespaces, forming logically separate project groups, so that teams can share the whole cluster while managing their own resources separately.
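A quick sketch of creating and using a Namespace (the name dev is arbitrary):

kubectl create namespace dev
kubectl get namespaces
kubectl get pods --namespace=dev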


With the basic concepts covered, it's time to get hands-on and build a Kubernetes cluster.

There are several ways to set up a Kubernetes cluster. You can install one with the kubeadm tool, but after trying it I wasn't impressed, so let's build the cluster by hand.

So today I'll walk you through building a Kubernetes 1.5.2 cluster.

Kubernetes 1.5.2 cluster installation

Environment

Three CentOS 7 machines:

IP address      Hostname     Role
192.168.1.40    uat-app01    Master
192.168.1.46    uat-ucs01    Node
192.168.1.47    uat-ucs02    Node

Disable the firewall and SELinux on all three hosts; I won't spell that out in detail, but a typical way to do it is shown below.
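For reference, these commands (run on each of the three hosts) stop and disable firewalld and turn SELinux off; the sed line makes the SELinux change permanent, while setenforce 0 applies it immediately:

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config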

Configure host name mappings in /etc/hosts (on all three hosts):

192.168.1.40	uat-app01
192.168.1.46	uat-ucs01
192.168.1.47	uat-ucs02

On the master node:

Adjust the IPs and ports in the config files to your own environment, and keep the apiserver's insecure port consistent everywhere it is referenced (KUBE_MASTER on the master and the kubelet/kube-proxy settings on the nodes).

Install etcd

[root@uat-app01 ~]# yum install -y etcd
Installed:
etcd.x86_64 0:3.2.9-3.el7

Configure etcd

[root@uat-app01 kubernetes]# vim /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="master"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://uat-app01:2379,http://uat-app01:4001"  # advertised etcd client URLs

Start etcd and verify the cluster health:

[root@uat-app01 kubernetes]# systemctl start etcd
[root@uat-app01 kubernetes]# etcdctl -C http://uat-app01:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://uat-app01:2379
cluster is healthy
[root@uat-app01 kubernetes]# etcdctl -C http://uat-app01:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://uat-app01:2379
cluster is healthy
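Optionally, run a quick write/read smoke test against the advertised client URL (etcd v2 API; the key name is arbitrary):

etcdctl -C http://uat-app01:2379 set /smoke-test ok
etcdctl -C http://uat-app01:2379 get /smoke-test
etcdctl -C http://uat-app01:2379 rm /smoke-test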

Install Kubernetes

[root@uat-app01 ~]# yum install kubernetes
Installed:
kubernetes.x86_64 0:1.5.2-0.7.git269f928.el7

Edit the shared Kubernetes config file

[root@uat-app01 ~]# cd /etc/kubernetes/
[root@uat-app01 kubernetes]# vim config 

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://uat-app01:9090"
KUBE_ETCD_SERVERS="--etcd_servers=http://uat-app01:4001"  # etcd server address

Edit the apiserver config:

[root@uat-app01 kubernetes]# vim apiserver 

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=9091"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://uat-app01:2379"   # etcd server address

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

Start the master services

[root@uat-app01 kubernetes]# systemctl start kube-apiserver
[root@uat-app01 kubernetes]# systemctl start kube-controller-manager
[root@uat-app01 kubernetes]# systemctl start kube-scheduler
[root@uat-app01 kubernetes]# ps -ef | grep kube
kube      4853     1  4 10:35 ?        00:00:00 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --port=9000 --allow-privileged=false --service-cluster-ip-range=10.254.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
kube      4871     1  3 10:35 ?        00:00:00 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://uat-app01:9090
kube      4926     1  3 10:35 ?        00:00:00 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://uat-app01:9090
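To have etcd and the Kubernetes master services come back after a reboot, you can also enable them (standard systemd units installed by the RPMs above):

systemctl enable etcd kube-apiserver kube-controller-manager kube-scheduler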

The startup logs look like this:

Dec 20 10:34:56 uat-app01 systemd: Starting Etcd Server...
Dec 20 10:34:56 uat-app01 etcd: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=http://localhost:2379
Dec 20 10:34:56 uat-app01 etcd: recognized environment variable ETCD_NAME, but unused: shadowed by corresponding flag 
Dec 20 10:34:56 uat-app01 etcd: recognized environment variable ETCD_DATA_DIR, but unused: shadowed by corresponding flag 
Dec 20 10:34:56 uat-app01 etcd: recognized environment variable ETCD_LISTEN_CLIENT_URLS, but unused: shadowed by corresponding flag 
Dec 20 10:34:56 uat-app01 etcd: etcd Version: 3.2.9
Dec 20 10:34:56 uat-app01 etcd: Git SHA: f1d7dd8
Dec 20 10:34:56 uat-app01 etcd: Go Version: go1.8.3
Dec 20 10:34:56 uat-app01 etcd: Go OS/Arch: linux/amd64
Dec 20 10:34:56 uat-app01 etcd: setting maximum number of CPUs to 8, total number of available CPUs is 8
Dec 20 10:34:56 uat-app01 etcd: listening for peers on http://localhost:2380
Dec 20 10:34:56 uat-app01 etcd: listening for client requests on localhost:2379
Dec 20 10:34:56 uat-app01 etcd: name = default
Dec 20 10:34:56 uat-app01 etcd: data dir = /var/lib/etcd/default.etcd
Dec 20 10:34:56 uat-app01 etcd: member dir = /var/lib/etcd/default.etcd/member
Dec 20 10:34:56 uat-app01 etcd: heartbeat = 100ms
Dec 20 10:34:56 uat-app01 etcd: election = 1000ms
Dec 20 10:34:56 uat-app01 etcd: snapshot count = 100000
Dec 20 10:34:56 uat-app01 etcd: advertise client URLs = http://localhost:2379
Dec 20 10:34:56 uat-app01 etcd: initial advertise peer URLs = http://localhost:2380
Dec 20 10:34:56 uat-app01 etcd: initial cluster = default=http://localhost:2380
Dec 20 10:34:56 uat-app01 etcd: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
Dec 20 10:34:56 uat-app01 etcd: 8e9e05c52164694d became follower at term 0
Dec 20 10:34:56 uat-app01 etcd: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
Dec 20 10:34:56 uat-app01 etcd: 8e9e05c52164694d became follower at term 1
Dec 20 10:34:56 uat-app01 etcd: simple token is not cryptographically signed
Dec 20 10:34:56 uat-app01 etcd: starting server... [version: 3.2.9, cluster version: to_be_decided]
Dec 20 10:34:56 uat-app01 etcd: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
Dec 20 10:34:57 uat-app01 etcd: 8e9e05c52164694d is starting a new election at term 1
Dec 20 10:34:57 uat-app01 etcd: 8e9e05c52164694d became candidate at term 2
Dec 20 10:34:57 uat-app01 etcd: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
Dec 20 10:34:57 uat-app01 etcd: 8e9e05c52164694d became leader at term 2
Dec 20 10:34:57 uat-app01 etcd: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
Dec 20 10:34:57 uat-app01 etcd: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
Dec 20 10:34:57 uat-app01 etcd: setting up the initial cluster version to 3.2
Dec 20 10:34:57 uat-app01 etcd: ready to serve client requests
Dec 20 10:34:57 uat-app01 systemd: Started Etcd Server.
Dec 20 10:34:57 uat-app01 etcd: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
Dec 20 10:34:57 uat-app01 etcd: set the initial cluster version to 3.2
Dec 20 10:34:57 uat-app01 etcd: enabled capabilities for version 3.2
Dec 20 10:35:05 uat-app01 systemd: Starting Kubernetes API Server...
Dec 20 10:35:05 uat-app01 kube-apiserver: Flag --port has been deprecated, see --insecure-port instead.
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.172760    4853 config.go:562] Will report 192.168.1.40 as public IP address.
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.375984    4853 config.go:454] Generated self-signed cert (/var/run/kubernetes/apiserver.crt, /var/run/kubernetes/apiserver.key)
Dec 20 10:35:05 uat-app01 kube-apiserver: W1220 10:35:05.376902    4853 handlers.go:50] Authentication is disabled
Dec 20 10:35:05 uat-app01 kube-apiserver: E1220 10:35:05.377031    4853 reflector.go:199] k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103: Failed to list *api.ServiceAccount: Get http://0.0.0.0:9090/api/v1/serviceaccounts?resourceVersion=0: dial tcp 0.0.0.0:9090: getsockopt: connection refused
Dec 20 10:35:05 uat-app01 kube-apiserver: E1220 10:35:05.377074    4853 reflector.go:199] k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83: Failed to list *api.ResourceQuota: Get http://0.0.0.0:9090/api/v1/resourcequotas?resourceVersion=0: dial tcp 0.0.0.0:9090: getsockopt: connection refused
Dec 20 10:35:05 uat-app01 kube-apiserver: E1220 10:35:05.377109    4853 reflector.go:199] k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119: Failed to list *api.Secret: Get http://0.0.0.0:9090/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token&resourceVersion=0: dial tcp 0.0.0.0:9090: getsockopt: connection refused
Dec 20 10:35:05 uat-app01 kube-apiserver: E1220 10:35:05.424878    4853 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.Namespace: Get http://0.0.0.0:9090/api/v1/namespaces?resourceVersion=0: dial tcp 0.0.0.0:9090: getsockopt: connection refused
Dec 20 10:35:05 uat-app01 kube-apiserver: E1220 10:35:05.424948    4853 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.LimitRange: Get http://0.0.0.0:9090/api/v1/limitranges?resourceVersion=0: dial tcp 0.0.0.0:9090: getsockopt: connection refused
Dec 20 10:35:05 uat-app01 kube-apiserver: [restful] 2017/12/20 10:35:05 log.go:30: [restful/swagger] listing is available at https://192.168.1.40:6443/swaggerapi/
Dec 20 10:35:05 uat-app01 kube-apiserver: [restful] 2017/12/20 10:35:05 log.go:30: [restful/swagger] https://192.168.1.40:6443/swaggerui/ is mapped to folder /swagger-ui/
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.492896    4853 serve.go:95] Serving securely on 0.0.0.0:6443
Dec 20 10:35:05 uat-app01 systemd: Started Kubernetes API Server.
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.492973    4853 serve.go:109] Serving insecurely on 0.0.0.0:9090
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.518171    4853 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/cluster-admin
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.526105    4853 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:discovery
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.534327    4853 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:basic-user
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.542925    4853 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/admin
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.557449    4853 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/edit
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.567840    4853 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/view
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.582531    4853 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:node
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.592604    4853 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:node-proxier
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.601116    4853 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.609519    4853 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.623855    4853 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.634333    4853 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.650859    4853 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:node
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.659237    4853 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
Dec 20 10:35:05 uat-app01 kube-apiserver: I1220 10:35:05.684480    4853 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
Dec 20 10:35:06 uat-app01 kube-apiserver: I1220 10:35:06.442481    4853 trace.go:61] Trace "Create /api/v1/namespaces/default/services" (started 2017-12-20 10:35:05.527690092 +0800 CST):
Dec 20 10:35:06 uat-app01 kube-apiserver: [11.648µs] [11.648µs] About to convert to expected version
Dec 20 10:35:06 uat-app01 kube-apiserver: [171.916µs] [160.268µs] Conversion done
Dec 20 10:35:06 uat-app01 kube-apiserver: [903.122567ms] [902.950651ms] About to store object in database
Dec 20 10:35:06 uat-app01 kube-apiserver: [914.70523ms] [11.582663ms] Object stored in database
Dec 20 10:35:06 uat-app01 kube-apiserver: [914.710306ms] [5.076µs] Self-link added
Dec 20 10:35:06 uat-app01 kube-apiserver: "Create /api/v1/namespaces/default/services" [914.748192ms] [37.886µs] END
Dec 20 10:35:10 uat-app01 systemd: Started Kubernetes Controller Manager.
Dec 20 10:35:10 uat-app01 systemd: Starting Kubernetes Controller Manager...
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.817406    4871 leaderelection.go:188] sucessfully acquired lease kube-system/kube-controller-manager
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.819208    4871 event.go:217] Event(api.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"66730d93-e52e-11e7-b226-ee940baef0c1", APIVersion:"v1", ResourceVersion:"26", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' uat-app01. became leader
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.820895    4871 plugins.go:94] No cloud provider specified.
Dec 20 10:35:10 uat-app01 kube-controller-manager: W1220 10:35:10.820926    4871 controllermanager.go:285] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
Dec 20 10:35:10 uat-app01 kube-controller-manager: W1220 10:35:10.820950    4871 controllermanager.go:289] Unsuccessful parsing of service CIDR : invalid CIDR address:
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.821258    4871 replication_controller.go:219] Starting RC Manager
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.822496    4871 nodecontroller.go:189] Sending events to api server.
Dec 20 10:35:10 uat-app01 kube-controller-manager: E1220 10:35:10.823430    4871 controllermanager.go:305] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.823456    4871 controllermanager.go:322] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
Dec 20 10:35:10 uat-app01 kube-controller-manager: E1220 10:35:10.824086    4871 util.go:45] Metric for replenishment_controller already registered
Dec 20 10:35:10 uat-app01 kube-controller-manager: E1220 10:35:10.824104    4871 util.go:45] Metric for replenishment_controller already registered
Dec 20 10:35:10 uat-app01 kube-controller-manager: E1220 10:35:10.824113    4871 util.go:45] Metric for replenishment_controller already registered
Dec 20 10:35:10 uat-app01 kube-controller-manager: E1220 10:35:10.824125    4871 util.go:45] Metric for replenishment_controller already registered
Dec 20 10:35:10 uat-app01 kube-controller-manager: E1220 10:35:10.824135    4871 util.go:45] Metric for replenishment_controller already registered
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.835783    4871 controllermanager.go:403] Starting extensions/v1beta1 apis
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.835810    4871 controllermanager.go:406] Starting daemon set controller
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.836233    4871 controllermanager.go:413] Starting job controller
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.836369    4871 daemoncontroller.go:196] Starting Daemon Sets controller manager
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.836734    4871 controllermanager.go:420] Starting deployment controller
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.837127    4871 controllermanager.go:427] Starting ReplicaSet controller
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.837397    4871 deployment_controller.go:132] Starting deployment controller
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.837458    4871 controllermanager.go:436] Attempting to start horizontal pod autoscaler controller, full resource map map[authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Name
Dec 20 10:35:10 uat-app01 kube-controller-manager: space} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {securitycontextconstraints false SecurityContextConstraints} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],}]
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.837666    4871 controllermanager.go:438] Starting autoscaling/v1 apis
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.837658    4871 replica_set.go:162] Starting ReplicaSet controller
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.837682    4871 controllermanager.go:440] Starting horizontal pod controller.
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.837900    4871 controllermanager.go:458] Attempting to start disruption controller, full resource map map[storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} v1:&APIResourceList{GroupVersion:v1,APIResources:
Dec 20 10:35:10 uat-app01 kube-controller-manager: [{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {securitycontextconstraints false SecurityContextConstraints} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],}]
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.838041    4871 controllermanager.go:460] Starting policy/v1beta1 apis
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.838054    4871 controllermanager.go:462] Starting disruption controller
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.838146    4871 horizontal.go:132] Starting HPA Controller
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.838451    4871 controllermanager.go:470] Attempting to start statefulset, full resource map map[apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true Resource
Dec 20 10:35:10 uat-app01 kube-controller-manager: Quota} {secrets true Secret} {securitycontextconstraints false SecurityContextConstraints} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],}]
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.838581    4871 controllermanager.go:472] Starting apps/v1beta1 apis
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.838582    4871 disruption.go:317] Starting disruption controller
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.838594    4871 controllermanager.go:474] Starting StatefulSet controller
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.838595    4871 disruption.go:319] Sending events to api server.
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.838910    4871 controllermanager.go:499] Not starting batch/v2alpha1 apis
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.839230    4871 pet_set.go:146] Starting statefulset controller
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.850147    4871 controllermanager.go:544] Attempting to start certificates, full resource map map[policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {securitycontextconstraints false SecurityContextConstraints} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizon
Dec 20 10:35:10 uat-app01 kube-controller-manager: talpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],}]
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.850303    4871 controllermanager.go:546] Starting certificates.k8s.io/v1alpha1 apis
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.850316    4871 controllermanager.go:548] Starting certificate request controller
Dec 20 10:35:10 uat-app01 kube-controller-manager: E1220 10:35:10.850519    4871 controllermanager.go:558] Failed to start certificate controller: open /etc/kubernetes/ca/ca.pem: no such file or directory
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.851889    4871 attach_detach_controller.go:235] Starting Attach Detach Controller
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.851936    4871 serviceaccounts_controller.go:120] Starting ServiceAccount controller
Dec 20 10:35:10 uat-app01 kube-controller-manager: I1220 10:35:10.858582    4871 garbagecollector.go:766] Garbage Collector: Initializing
Dec 20 10:35:15 uat-app01 systemd: Started Kubernetes Scheduler Plugin.
Dec 20 10:35:15 uat-app01 systemd: Starting Kubernetes Scheduler Plugin...

Node setup

Edit the shared Kubernetes config:

[root@uat-ucs01 ~]# vim /etc/kubernetes/config 

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://uat-app01:9000"   # apiserver address
KUBE_ETCD_SERVERS="--etcd_servers=http://uat-app01:4001"  # etcd server address

Edit the kubelet config

[root@uat-ucs01 ~]# vim /etc/kubernetes/kubelet 

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=uat-ucs01" # this node's hostname

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://uat-app01:9000"  # apiserver address

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

Start the node services:

[root@uat-ucs01 ~]# systemctl start kube-proxy 
[root@uat-ucs01 ~]#  systemctl start kubelet
[root@uat-ucs01 ~]# ps -ef | grep kube
root      5801     1  2 11:09 ?        00:00:00 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://uat-app01:9000
root      5900     1  4 11:09 ?        00:00:00 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://uat-app01:9000 --address=0.0.0.0 --port=10250 --hostname-override=uat-ucs01 --allow-privileged=false --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest
root      6079 19937  0 11:10 pts/0    00:00:00 grep --color=auto kube
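As on the master, you can enable the node services so they survive a reboot:

systemctl enable kubelet kube-proxy docker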

Inspect the node's details (from the master):

[root@uat-app01 .kube]# kubectl describe node uat-ucs01 
Name:			uat-ucs01
Role:			
Labels:			beta.kubernetes.io/arch=amd64
			beta.kubernetes.io/os=linux
			kubernetes.io/hostname=uat-ucs01
Taints:			<none>
CreationTimestamp:	Wed, 20 Dec 2017 11:09:51 +0800
Phase:			
Conditions:
  Type			Status	LastHeartbeatTime			LastTransitionTime			Reason				Message
  ----			------	-----------------			------------------			------				-------
  OutOfDisk 		False 	Wed, 20 Dec 2017 14:30:39 +0800 	Wed, 20 Dec 2017 11:09:51 +0800 	KubeletHasSufficientDisk 	kubelet has sufficient disk space available
  MemoryPressure 	False 	Wed, 20 Dec 2017 14:30:39 +0800 	Wed, 20 Dec 2017 11:09:51 +0800 	KubeletHasSufficientMemory 	kubelet has sufficient memory available
  DiskPressure 		False 	Wed, 20 Dec 2017 14:30:39 +0800 	Wed, 20 Dec 2017 11:09:51 +0800 	KubeletHasNoDiskPressure 	kubelet has no disk pressure
  Ready 		True 	Wed, 20 Dec 2017 14:30:39 +0800 	Wed, 20 Dec 2017 11:10:01 +0800 	KubeletReady 			kubelet is posting ready status
Addresses:		192.168.1.46,192.168.1.46,uat-ucs01
Capacity:
 alpha.kubernetes.io/nvidia-gpu:	0
 cpu:					8
 memory:				15617712Ki
 pods:					110
Allocatable:
 alpha.kubernetes.io/nvidia-gpu:	0
 cpu:					8
 memory:				15617712Ki
 pods:					110
System Info:
 Machine ID:			d66d7f77140a4c26b01dcf3572d97083
 System UUID:			F15DFCA0-4413-FC04-2335-84BBEB108F97
 Boot ID:			57154b0c-d00b-4dab-b9ad-4192869d34c8
 Kernel Version:		3.10.0-514.el7.x86_64
 OS Image:			CentOS Linux 7 (Core)
 Operating System:		linux
 Architecture:			amd64
 Container Runtime Version:	docker://1.12.6
 Kubelet Version:		v1.5.2
 Kube-Proxy Version:		v1.5.2
ExternalID:			uat-ucs01
Non-terminated Pods:		(0 in total)
  Namespace			Name		CPU Requests	CPU Limits	Memory Requests	Memory Limits
  ---------			----		------------	----------	---------------	-------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.
  CPU Requests	CPU Limits	Memory Requests	Memory Limits
  ------------	----------	---------------	-------------
  0 (0%)	0 (0%)		0 (0%)		0 (0%)
Events:
  FirstSeen	LastSeen	Count	From					SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----					-------------	--------	------			-------
  40m		40m		1	{kubelet uat-ucs01}			Normal		Starting		Starting kubelet.
  40m		40m		1	{kubelet uat-ucs01}			Warning		ImageGCFailed		unable to find data for container /
  40m		40m		1	{kubelet uat-ucs01}			Normal		NodeHasSufficientDisk	Node uat-ucs01 status is now: NodeHasSufficientDisk
  40m		40m		1	{kubelet uat-ucs01}			Normal		NodeHasSufficientMemory	Node uat-ucs01 status is now: NodeHasSufficientMemory
  40m		40m		1	{kubelet uat-ucs01}			Normal		NodeHasNoDiskPressure	Node uat-ucs01 status is now: NodeHasNoDiskPressure
  40m		40m		1	{kube-proxy uat-ucs01}			Normal		Starting		Starting kube-proxy.

Create the overlay network:

Run the following on the master and on every node:

[root@uat-app01 kubernetes]# yum install flannel -y
The installed version is 0.7.1-2.el7.

Edit the flannel config:

[root@uat-app01 .kube]#  vim /etc/sysconfig/flanneld
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://uat-app01.insightcredit:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

Create the flannel network key in etcd (on the master):

[root@uat-app01 .kube]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
{ "Network": "10.0.0.0/16" }

Restart the services.

On the master:

[root@uat-app01 .kube]# systemctl enable flanneld.service 
[root@uat-app01 .kube]# systemctl start flanneld.service 
[root@uat-app01 .kube]# service docker restart
[root@uat-app01 .kube]# systemctl restart kube-apiserver.service
[root@uat-app01 .kube]# systemctl restart kube-controller-manager.service
[root@uat-app01 .kube]# systemctl restart kube-scheduler.service

On each node:

[root@uat-ucs01 ~]# systemctl enable flanneld.service 
[root@uat-ucs01 ~]# systemctl start flanneld.service 
[root@uat-ucs01 ~]# service docker restart
[root@uat-ucs01 ~]# systemctl restart kubelet.service
[root@uat-ucs01 ~]# systemctl restart kube-proxy.service
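After the restarts, it's worth a quick sanity check that flannel allocated a subnet on each host and that docker0 moved into it. Exact interface names and subnets will vary (with the default udp backend the flannel interface is flannel0), and the etcdctl command is meant to be run on the master:

cat /run/flannel/subnet.env
ip -4 addr show flannel0
ip -4 addr show docker0
etcdctl ls /atomic.io/network/subnets   # one lease per host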

Check the cluster:

[root@uat-app01 .kube]# kubectl get nodes
NAME        STATUS    AGE
uat-ucs01   Ready     3h
uat-ucs02   Ready     3h

Both registered nodes show up as Ready, so the Kubernetes cluster has been installed successfully!
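As a final check, you can also look at the health of the control-plane components and the cluster endpoints from the master:

kubectl get componentstatuses
kubectl cluster-info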

In the next post I'll walk through some hands-on Kubernetes examples. Stay tuned, and see you there!

赫墨拉

I'm a fledgling engineer with a love of big data; this blog is where I share my growth and experiences.
