Kubernetes Security Mechanisms (Part 2)

1. Network Security Policies

1.1 NetworkPolicy isolation within a namespace

Create a namespace:

kubectl create ns policy-demo

Create a pod (via a Deployment):

kubectl create deployment --namespace=policy-demo nginx  --image=nginx

Create a service:

kubectl expose --namespace=policy-demo deployment nginx --port=80

Test access (it succeeds):

kubectl run --namespace=policy-demo access --rm -ti --image busybox /bin/sh
wget -q nginx -O -

Create a NetworkPolicy rule:

kubectl create -f - <<EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
  namespace: policy-demo
spec:
  podSelector:
    matchLabels: {}
EOF

This policy selects every pod in the policy-demo namespace (the empty podSelector) and, since it specifies no ingress rules, denies all inbound connections to those pods.
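To inspect the policy that was just created:

kubectl describe networkpolicy default-deny -n policy-demo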

Test again:

kubectl run --namespace=policy-demo access --rm -ti --image busybox /bin/sh


wget -q nginx -O -

wget: download timed out

As you can see, the request is now denied.

Add an allow rule:

kubectl create -f - <<EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
  namespace: policy-demo
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: access
EOF

This rule allows pods labeled run: access to reach pods labeled app: nginx in the policy-demo namespace.

Recall the command we ran earlier:

kubectl run --namespace=policy-demo access --rm -ti --image busybox /bin/sh

The pod is named access, and kubectl run automatically attaches the run: access label to the workload it creates.

Test again; the access now succeeds:

kubectl run --namespace=policy-demo access --rm -ti --image busybox /bin/sh 

wget -q nginx -O -


<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Create a pod that does not carry the run: access label and test from it:

kubectl run --namespace=policy-demo cannot-access --rm -ti --image busybox /bin/sh

wget -q nginx -O -

wget: download timed out

Conclusion: within a single namespace, NetworkPolicy can restrict pod-to-pod access.

Clean up:

kubectl delete ns policy-demo

1.2 Pod isolation across namespaces

Create two namespaces, policy-demo and policy-demo2. In policy-demo create an nginx pod with its service plus a busybox; in policy-demo2 create another busybox. Then access the nginx in policy-demo from the busybox in each namespace.

kubectl create ns policy-demo
kubectl create ns policy-demo2
kubectl create deployment  --namespace=policy-demo nginx  --image=nginx
kubectl run --namespace=policy-demo access --rm -ti --image busybox /bin/sh
kubectl run --namespace=policy-demo2 access --rm -ti --image busybox /bin/sh
kubectl expose --namespace=policy-demo deployment nginx --port=80

Before any NetworkPolicy is configured, access nginx from the busybox in each of policy-demo and policy-demo2; both succeed.

Note that access from policy-demo2 must append the namespace to the service name:

wget -q nginx.policy-demo -O -

Configure a NetworkPolicy:

kubectl create -f - <<EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
  namespace: policy-demo
spec:
  podSelector:
    matchLabels: {}
EOF

With this deny-all policy configured, the busybox in either namespace can no longer reach nginx.

Now add a policy allowing pods with the run: access label:

kubectl create -f - <<EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
  namespace: policy-demo
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: access
EOF

At this point:
the busybox in policy-demo can access the nginx in its own namespace;
the busybox in policy-demo2 cannot access the nginx in policy-demo, because a bare podSelector in a from clause only matches pods in the policy's own namespace.

Next, configure a policy allowing pods labeled run: access in policy-demo2 to access the app: nginx service in policy-demo.

Label the policy-demo2 namespace:

kubectl label ns/policy-demo2 project=policy-demo2
kubectl create -f - <<EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx2
  namespace: policy-demo
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: policy-demo2
      podSelector:
        matchLabels:
          run: access
EOF

Now pods labeled run: access in policy-demo2 can access the app: nginx service in policy-demo, while pods with other labels cannot.

Run a busybox that gets the run: access2 label and try to access the app: nginx service in policy-demo:

kubectl run --namespace=policy-demo2 access2 --rm -ti --image busybox /bin/sh

wget -q nginx.policy-demo -O -

wget: download timed out

Note:

...
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        user: alice
  - podSelector:
      matchLabels:
        role: client
...

A namespaceSelector and podSelector defined like this are OR'ed: the from field defines two separate peers, so traffic is allowed whenever either the namespace matches or the pod matches.

...
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        user: alice
    podSelector:
      matchLabels:
        role: client
...

Defined like this, the namespaceSelector and podSelector are AND'ed: the from field defines a single peer, so traffic is allowed only when both the namespace and the pod match.

Clean up:

kubectl delete ns policy-demo

kubectl delete ns policy-demo2

1.3 North-south traffic isolation in practice

Create the namespaces:

kubectl create ns policy-demo
kubectl create ns policy-demo2

Create two test PODs in the policy-demo namespace:

kubectl run  --namespace=policy-demo test-network1 --command sleep 1000000 --image=busybox

kubectl run --namespace=policy-demo test-network2 --command sleep 1000000 --image=busybox

Create one test pod in the policy-demo2 namespace:

kubectl run  --namespace=policy-demo2 test-network3 --command sleep 1000000 --image=busybox

Create a global deny-all-egress rule (this uses Calico's GlobalNetworkPolicy CRD, so a Calico CNI is required):

kubectl create -f - <<EOF
apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
  name: global-deny-all-egress
spec:
  selector: all()
  types:
  - Egress
  egress:
  - action: Deny
EOF
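Since GlobalNetworkPolicy is a cluster-scoped Calico CRD, it can be listed by its full resource name:

kubectl get globalnetworkpolicies.crd.projectcalico.org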

Per-pod egress whitelist.
As an example, allow the test-network1 pod in the policy-demo namespace:

kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-testnetwork-egress-ssh
  namespace: policy-demo
spec:
  podSelector:
    matchLabels:
      run: test-network1  # match a specific set of pods via a label selector
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 172.16.0.5/32  # whitelisted IP
    ports:
    - protocol: TCP
      port: 22  # whitelisted port
EOF

View the NetworkPolicy:

kubectl  get networkpolicy -n policy-demo
NAME POD-SELECTOR AGE
allow-testnetwork-egress-ssh run=test-network1 16s

Test access.
Now test-network1 can reach the whitelisted address, while the other pods cannot.
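A quick way to test from inside the pods (busybox ships with nc; this assumes something is actually listening on port 22 of the whitelisted host):

kubectl exec -n policy-demo test-network1 -- nc -zv -w 3 172.16.0.5 22  # succeeds
kubectl exec -n policy-demo test-network2 -- nc -zv -w 3 172.16.0.5 22  # times out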

Namespace-wide egress whitelist

Allow every POD in the policy-demo namespace to access port 22 on 172.16.0.5:

kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-ssh-policy-demo
  namespace: policy-demo
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 172.16.0.5/32  # whitelisted IP
    ports:
    - protocol: TCP
      port: 22  # whitelisted port
EOF

Now both test-network1 and test-network2 can reach the address, while pods in other namespaces cannot.

Global egress whitelist

kubectl create -f - <<EOF
apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
  name: global-allow-all-egress-to-ssh
spec:
  selector: all()
  types:
  - Egress
  egress:
  - action: Allow
    protocol: TCP
    source: {}
    destination:
      nets:
      - 172.16.0.5/32  # whitelisted IP
      ports:
      - 22  # whitelisted port
EOF

With this rule in place, every pod in the cluster can reach port 22 on 172.16.0.5.

2. RBAC

Install cfssl:

curl -s -L -o /bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -s -L -o /bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
curl -s -L -o /bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x /bin/cfssl*

2.1. Creating a user

Create a platform user named test-cka.

cd /etc/kubernetes/pki

Check that the Kubernetes certificate directory contains the files referenced below.

If ca-config.json is missing, create it:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

ca-config.json can define multiple profiles, each with its own expiry, usage scenarios, and other parameters; a specific profile is then selected when signing a certificate.

Create a certificate signing request (CSR):

cat > test-cka-csr.json <<EOF
{
  "CN": "test-cka",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=kubernetes test-cka-csr.json | cfssljson -bare test-cka
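cfssljson -bare test-cka writes the signed certificate and key into the current directory; you can confirm with:

ls test-cka*
test-cka.csr  test-cka-csr.json  test-cka-key.pem  test-cka.pem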

Create the kubeconfig file

export KUBE_APISERVER="https://192.168.1.10:6443"

Set KUBE_APISERVER to the IP address of your master node.

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=test-cka.kubeconfig
kubectl config set-credentials test-cka --client-certificate=/etc/kubernetes/pki/test-cka.pem --client-key=/etc/kubernetes/pki/test-cka-key.pem --embed-certs=true --kubeconfig=test-cka.kubeconfig

Set the context:

kubectl config set-context kubernetes --cluster=kubernetes --user=test-cka  --kubeconfig=test-cka.kubeconfig

Set the default context, tying the cluster parameters and user parameters together. If multiple clusters are configured, you can switch environments by cluster name:

kubectl config use-context kubernetes --kubeconfig=test-cka.kubeconfig 

View kubectl's contexts:

kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@kubernetes kubernetes kubernetes-admin

The current user is still kubernetes-admin; switch to test-cka.

Verify the switch:

kubectl config get-contexts  --kubeconfig=/etc/kubernetes/pki/test-cka.kubeconfig
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes kubernetes test-cka

Now try get pod and get node as this user:

kubectl get pod  --kubeconfig=test-cka.kubeconfig 
No resources found.
Error from server (Forbidden): pods is forbidden: User "test-cka" cannot list pods in the namespace "default"
kubectl get node --kubeconfig=test-cka.kubeconfig
No resources found.
Error from server (Forbidden): nodes is forbidden: User "test-cka" cannot list nodes at the cluster scope
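kubectl auth can-i gives the same answer without the error output:

kubectl auth can-i list pods --kubeconfig=test-cka.kubeconfig
no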

2.2. Creating a Role and RoleBinding

Create a role.
This role only grants get, watch, and list in the default namespace.
Define the role (role.yaml):

kubectl create -f - <<EOF
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]  # the empty string "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
EOF

role_bind.yaml

kubectl create -f - <<EOF
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: test-cka
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF

Create them as admin:

kubectl apply -f role.yaml --kubeconfig=/root/.kube/config 
kubectl apply -f role_bind.yaml --kubeconfig=/root/.kube/config 

Check it.

Now get pods as the test-cka user:

kubectl get pod  --kubeconfig /etc/kubernetes/pki/test-cka.kubeconfig 
NAME READY STATUS RESTARTS AGE
http-app-844765cb6c-nfp7l 1/1 Running 0 10h
http-app2-58d4c447c5-qzg99 1/1 Running 0 10h
test-679b667858-pzdn2 1/1 Running 0 1h
[root@master pki]#
kubectl get node --kubeconfig=test-cka.kubeconfig 
No resources found.
Error from server (Forbidden): nodes is forbidden: User "test-cka" cannot list nodes at the cluster scope

get pod works but get node does not, because the role we just configured only grants pod permissions.
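As admin, you can also query another user's permissions directly through impersonation:

kubectl auth can-i list pods --as=test-cka
yes
kubectl auth can-i list nodes --as=test-cka
no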

Try deleting a pod:

kubectl delete pod/http-app-844765cb6c-nfp7l --kubeconfig=test-cka.kubeconfig
Error from server (Forbidden): pods "http-app-844765cb6c-nfp7l" is forbidden: User "test-cka" cannot delete pods in the namespace "default"

You will find it cannot be deleted either, because the role only grants watch, list, and get.

kubectl get pod -n kube-system --kubeconfig=test-cka.kubeconfig
No resources found.
Error from server (Forbidden): pods is forbidden: User "test-cka" cannot list pods in the namespace "kube-system"
[root@master pki]#

As you can see, the test-cka user can only access pod resources in the default namespace; other namespaces are off limits, and so are other resource types within the same namespace.

2.3. Creating a ClusterRole and ClusterRoleBinding

cluster_role.yaml

kubectl create -f - <<EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "watch", "list"]
EOF

Next define a ClusterRoleBinding to bind the ClusterRole above to the user test-cka.
cluster_role_bind.yaml

kubectl create -f - <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-nodes-global
subjects:
- kind: User
  name: test-cka
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io
EOF

Apply them:

kubectl apply -f cluster_role.yaml  --kubeconfig=/root/.kube/config 
kubectl apply -f cluster_role_bind.yaml  --kubeconfig=/root/.kube/config 

Now get nodes again:

kubectl get node --kubeconfig /etc/kubernetes/pki/test-cka.kubeconfig   
NAME STATUS ROLES AGE VERSION
master Ready master 9d v1.15.5

3. Security Context

The purpose of a Security Context is to limit what a container may do, protecting the operating system and other containers from it.

Kubernetes provides three ways to configure a Security Context:

  • Container-level Security Context: applies only to the specified container
  • Pod-level Security Context: applies to all containers in the Pod, as well as its Volumes
  • Pod Security Policies (PSP): apply to all Pods and Volumes across the cluster

3.1 Container-level Security Context

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-container
  labels:
    app: web
spec:
  containers:
  - name: test1
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      runAsUser: 1000
  - name: test2
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
EOF
kubectl exec -it test-container -c test1 id

uid=1000 gid=0(root)


kubectl exec -it test-container -c test2 id
uid=0(root) gid=0(root) groups=10(wheel)

The securityContext makes the test1 container run as UID 1000, while test2 still runs as root.
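The container-level securityContext covers more than the UID. A minimal sketch of a few other commonly used fields (the values here are illustrative):

    securityContext:
      runAsUser: 1000
      runAsNonRoot: true               # refuse to start if the effective user is root
      readOnlyRootFilesystem: true     # mount the container's root filesystem read-only
      allowPrivilegeEscalation: false  # block setuid binaries from gaining extra privileges
      capabilities:
        drop: ["ALL"]                  # drop every Linux capability...
        add: ["NET_BIND_SERVICE"]      # ...then add back only what is needed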

3.2 Pod-level Security Context

Create a pod-level securityContext:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-container2
  labels:
    app: web2
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: test1
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
  - name: test2
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
EOF
kubectl exec -it test-container2 -c test1 id
uid=1000 gid=0(root)
kubectl exec -it test-container2 -c test2 id
uid=1000 gid=0(root)

The pod-level securityContext switches every container in the Pod to run as UID 1000.
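The two levels combine: a field set on an individual container overrides the same field set at the pod level. A minimal sketch:

spec:
  securityContext:
    runAsUser: 1000      # default for every container in the Pod
  containers:
  - name: test1          # inherits runAsUser: 1000 from the pod level
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
  - name: test2
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      runAsUser: 2000    # the container-level value wins for this container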

3.3 Pod Security Policies (PSP)

A pod security policy is itself just another Kubernetes resource: once created, it does nothing on its own. To put it into effect, you bind it to a user or ServiceAccount through RBAC.

PSP usage is therefore tightly coupled with RBAC; in other words, the basic requirements for applying PSPs are:

  • Different operators' accounts are isolated from one another and authorized individually.
  • Different namespaces and different ServiceAccounts are likewise brought under the same management process.

Create a POD security policy; this one mainly prevents PODs from running in privileged mode:

cat <<EOF | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  privileged: false
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
EOF

Create a namespace and a serviceaccount; we will use this serviceaccount to simulate a non-admin user:

kubectl create namespace psp-demo

Authorize the default ServiceAccount in the psp-demo namespace to use the privileged PodSecurityPolicy:

kubectl create rolebinding default:psp:privileged \
--role=psp:privileged \
--serviceaccount=psp-demo:default
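The binding above references a Role named psp:privileged that is not created elsewhere in this walkthrough; assuming it does not already exist in your cluster, it can be created with a single command granting the use verb on the PSP:

kubectl -n psp-demo create role psp:privileged \
  --verb=use \
  --resource=podsecuritypolicy \
  --resource-name=privileged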

Create a privileged workload to test:

cat <<EOF | kubectl --as=system:serviceaccount:psp-demo:default apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        securityContext:
          privileged: true
EOF

Error from server for: "2.yaml": pods "privileged" is forbidden: User "system:serviceaccount:psp-demo:default" cannot get resource "pods" in API group "" in the namespace "development"

The creation is rejected: it is blocked by the Pod Security Policy.

Remove the securityContext: privileged: true field and test again; this time the workload is created normally, as expected.

4. Manually registering a node into a Kubernetes cluster

Prerequisites:
1. docker and the kubelet are installed on the node.

We will use TLS bootstrap to register the node into the k8s cluster automatically.

Which certificates does the kubelet need to request?

With RBAC enabled, communication between cluster components is TLS-encrypted. Client and server determine the user and group from the CN and O fields of the certificate, so both client and server must hold valid certificates:

  • The kubelet on the node talks to the apiserver on the master; here the kubelet acts as a client and needs a client certificate.
  • The kubelet also runs as a daemon exposing its own status on port 10250; here the kubelet acts as a server and needs a serving certificate.

Steps for the kubelet to obtain its certificates

1. The cluster creates a low-privilege account group, authenticated via a TOKEN.
2. Create a ClusterRole that grants permission to create certificate signing requests (CSRs).
3. Create a ClusterRoleBinding so that kubelets authenticating as this group receive that permission.
4. Create ClusterRoleBindings so that the controller-manager automatically approves and issues the two certificates.
5. Adjust the Controller Manager to make sure the tokencleaner and bootstrapsigner controllers are running (the automatic issuance from step 4).
6. Generate a bootstrap.kubeconfig file from the TOKEN and distribute it to the node.
7. The node's kubelet uses this bootstrap.kubeconfig to submit a CSR to the master's apiserver.
8. The master automatically approves and issues the first certificate.
9. The node's kubelet then uses the first certificate to talk to the apiserver and request the second certificate.
10. The master automatically approves and issues the second certificate.
11. The node joins the cluster.

Create a token

We will create a secret of type "bootstrap.kubernetes.io/token". First, generate a token value:

echo "$(head -c 6 /dev/urandom | md5sum | head -c 6)"."$(head -c 16 /dev/urandom | md5sum | head -c 16)"
485bd8.711b717a196f47f4

Running the command above yields the TOKEN value "485bd8.711b717a196f47f4".

This 485bd8.711b717a196f47f4 is the generated Bootstrap Token. Save it, since it is needed later. A few notes on this token:

A token must match the format [a-z0-9]{6}.[a-z0-9]{16}, split by the dot: the first part is the Token ID, which is not "secret" and may be exposed; the second part is the Token Secret, which should be kept confidential.
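On a kubeadm-built cluster, the same kind of token can also be generated with the built-in helper, which additionally creates the backing secret for you (replacing the manual secret below):

kubeadm token create
kubeadm token list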

Create the secret from the token.
Set the token fields of the secret below to the value just generated (filled in here with the 485bd8.711b717a196f47f4 token from above):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  # Name MUST be of form "bootstrap-token-<token id>"
  name: bootstrap-token-485bd8
  namespace: kube-system
# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
  # Human readable description. Optional.
  description: "The default bootstrap token generated by 'kubeadm init'."
  # Token ID and secret. Required.
  token-id: "485bd8"
  token-secret: "711b717a196f47f4"
  # Allowed usages.
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  # Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
  auth-extra-groups: system:bootstrappers:cka:default-node-token
EOF

Configure RBAC:

cat <<EOF | kubectl apply -f -
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/nodeclient"]
  verbs: ["create"]
---
# A ClusterRole which instructs the CSR approver to approve a node renewing its
# own client credentials.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeclient"]
  verbs: ["create"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:node-bootstrapper
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - create
  - get
  - list
  - watch
EOF

Configure the ClusterRoleBindings:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cka:kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:cka:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cka:node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:cka:default-node-token
EOF
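Note that the selfnodeclient ClusterRole created above is not yet bound to anything. If certificate renewals should also be auto-approved (as the step list implies), the upstream TLS-bootstrapping docs bind it to the system:nodes group, roughly:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cka:node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
EOF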

Confirm that the controller-manager has the bootstrapsigner (and tokencleaner) enabled:

cat /etc/kubernetes/manifests/kube-controller-manager.yaml|grep bootstrapsigner
- --controllers=*,bootstrapsigner,tokencleaner

Generate the bootstrap.kubeconfig file

Here my apiserver address is https://172.16.0.7:6443.

Set the cluster parameters:

kubectl config set-cluster cka \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=https://172.16.0.7:6443 \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf

Set the client credentials, substituting your token:

kubectl config set-credentials system:bootstrap:485bd8 \
--token=485bd8.711b717a196f47f4 \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf

Set the context, substituting your token ID:

kubectl config set-context default \
--cluster=cka \
--user=system:bootstrap:485bd8 \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf

Set the default context:

kubectl config use-context default --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf

Copy it to the node:

scp /etc/kubernetes/bootstrap-kubelet.conf rke-node2:/etc/kubernetes/bootstrap-kubelet.conf

Copy config.yaml to the node:

scp /var/lib/kubelet/config.yaml rke-node2:/var/lib/kubelet/

Configure the kubelet on the node

Create the certificate directory:

mkdir /etc/kubernetes/pki/

Copy the CA certificate over from the master (the kubelet only needs ca.crt; avoid distributing ca.key to worker nodes unless you have a specific reason to):

scp /etc/kubernetes/pki/ca.crt rke-node2:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/ca.key rke-node2:/etc/kubernetes/pki/

Edit the kubelet configuration, setting address to the node's actual IP:

vim /etc/kubernetes/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 172.16.0.5
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.96.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

Edit kubelet.service.
Change hostname-override to the node's actual IP, and adjust the pause image address to match your registry.

vim /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
#EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/usr/bin/kubelet \
  --logtostderr=true \
  --v=4 \
  --hostname-override=172.16.0.5 \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --config=/etc/kubernetes/kubelet.config \
  --cert-dir=/etc/kubernetes/pki \
  --pod-infra-container-image=k8s.gcr.io/pause:3.1
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Start the service:

systemctl daemon-reload 
systemctl restart kubelet.service
systemctl status kubelet.service
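Once the kubelet starts, it submits a CSR that the controller-manager should auto-approve; you can watch this from the master with:

kubectl get csr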

Check:

kubectl  get node
NAME STATUS ROLES AGE VERSION
rke-node1 Ready master 37m v1.19.1
rke-node2 Ready <none> 115s v1.19.1

5. Debugging a running POD with an ephemeral container

As of Kubernetes 1.18, k8s supports four container types: standard containers, sidecar containers, init containers, and ephemeral containers.

What is an ephemeral container?

Ephemeral containers differ from other containers in that they have no resource or execution guarantees and are never restarted automatically, so they are unsuitable for building applications. An ephemeral container is described with the same ContainerSpec as a regular container, but many fields are incompatible and disallowed:

Ephemeral containers have no port configuration, so fields such as ports, livenessProbe, and readinessProbe are not allowed.
Pod resource allocations are immutable, so a resources configuration is not allowed.
See the Kubernetes documentation for the complete list of allowed fields.

What ephemeral containers are for

Ephemeral containers are useful for interactive troubleshooting when kubectl exec is no help because a container has crashed or a container image lacks debugging utilities.

In particular, distroless images let you deploy minimal container images that reduce the attack surface and the exposure to bugs and vulnerabilities. Because distroless images include no shell or debugging tools, troubleshooting them with kubectl exec alone is hard.

When using ephemeral containers, it helps to enable process namespace sharing so you can view processes in the other containers.

Using ephemeral containers

Ephemeral containers are still an alpha feature, so the corresponding feature gate must be enabled on the Kubernetes api-server, scheduler, controller-manager, and each node's kubelet.

Edit /etc/kubernetes/manifests/kube-apiserver.yaml, kube-controller-manager.yaml, and kube-scheduler.yaml,
adding the flag --feature-gates=EphemeralContainers=true to each component, as sketched below.
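For reference, the flag goes into the command list of each static pod manifest; a sketch for kube-apiserver.yaml (all existing flags left unchanged):

spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=EphemeralContainers=true
    # ...existing flags...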

Modify the kubelet parameters:

/var/lib/kubelet/config.yaml

Add the following lines at the bottom:

featureGates:
  EphemeralContainers: true

Save the file and run:

systemctl restart kubelet

Create an nginx pod to stand in for a normal workload, enabling shareProcessNamespace so different containers in the same pod can see a shared process space:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  shareProcessNamespace: true
  containers:
  - name: nginx
    image: nginx
    stdin: true
    tty: true
EOF

Currently ephemeral containers can be driven directly through kubectl. Kubernetes 1.18 added the kubectl debug feature to make troubleshooting more convenient; it is still alpha as well.

Create an ephemeral container with kubectl, attaching a busybox to the nginx pod just created:

kubectl alpha debug nginx  -it --image=busybox

Because shareProcessNamespace is on, containers within the same pod can see each other's processes:

Defaulting debug container name to debugger-mbzbp.

If you don't see a command prompt, try pressing enter.

/ # ps aux
PID USER TIME COMMAND
1 root 0:00 /pause
6 root 0:00 nginx: master process nginx -g daemon off;
33 101 0:00 nginx: worker process
46 root 0:00 sh
51 root 0:00 ps aux
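After you detach, the ephemeral container stays recorded in the pod; kubectl describe shows it under an Ephemeral Containers section:

kubectl describe pod nginx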