Tags: kubernetes, container, network
Isolation mechanism: NetworkPolicy
To use NetworkPolicy in a Kubernetes cluster, the CNI network plugin must provide a NetworkPolicy controller that supports the Kubernetes NetworkPolicy API. Plugins that implement NetworkPolicy include Weave and Calico, among others, but not Flannel. Such a controller runs a control loop that reacts to create, update, and delete events on NetworkPolicy objects, then configures the corresponding iptables rules on each host.
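What the controller reacts to is an ordinary NetworkPolicy object. As a minimal sketch (the name is illustrative and does not appear elsewhere in this article), a policy that selects every pod in a namespace and allows no inbound traffic looks like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # illustrative name
  namespace: default
spec:
  podSelector: {}              # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress                  # no ingress rules listed, so all inbound traffic is denied
```

Once a pod is selected by any NetworkPolicy, it becomes isolated: only traffic matching some policy's whitelist is allowed through.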
controlplane $ cat /opt/weave-kube.yaml
apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: weave-net
      labels:
        name: weave-net
      namespace: kube-system
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: weave-net
      labels:
        name: weave-net
    rules:
      - apiGroups:
          - ''
        resources:
          - pods
          - namespaces
          - nodes
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - networking.k8s.io
        resources:
          - networkpolicies
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ''
        resources:
          - nodes/status
        verbs:
          - patch
          - update
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: weave-net
      labels:
        name: weave-net
    roleRef:
      kind: ClusterRole
      name: weave-net
      apiGroup: rbac.authorization.k8s.io
    subjects:
      - kind: ServiceAccount
        name: weave-net
        namespace: kube-system
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: weave-net
      labels:
        name: weave-net
      namespace: kube-system
    rules:
      - apiGroups:
          - ''
        resourceNames:
          - weave-net
        resources:
          - configmaps
        verbs:
          - get
          - update
      - apiGroups:
          - ''
        resources:
          - configmaps
        verbs:
          - create
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: weave-net
      labels:
        name: weave-net
      namespace: kube-system
    roleRef:
      kind: Role
      name: weave-net
      apiGroup: rbac.authorization.k8s.io
    subjects:
      - kind: ServiceAccount
        name: weave-net
        namespace: kube-system
  - apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: weave-net
      labels:
        name: weave-net
      namespace: kube-system
    spec:
      minReadySeconds: 5
      selector:
        matchLabels:
          name: weave-net
      template:
        metadata:
          labels:
            name: weave-net
        spec:
          containers:
            - name: weave
              command:
                - /home/weave/launch.sh
              env:
                - name: IPALLOC_RANGE
                  value: 10.32.0.0/24
                - name: HOSTNAME
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: spec.nodeName
              image: 'docker.io/weaveworks/weave-kube:2.6.0'
              readinessProbe:
                httpGet:
                  host: 127.0.0.1
                  path: /status
                  port: 6784
              resources:
                requests:
                  cpu: 10m
              securityContext:
                privileged: true
              volumeMounts:
                - name: weavedb
                  mountPath: /weavedb
                - name: cni-bin
                  mountPath: /host/opt
                - name: cni-bin2
                  mountPath: /host/home
                - name: cni-conf
                  mountPath: /host/etc
                - name: dbus
                  mountPath: /host/var/lib/dbus
                - name: lib-modules
                  mountPath: /lib/modules
                - name: xtables-lock
                  mountPath: /run/xtables.lock
            - name: weave-npc
              env:
                - name: HOSTNAME
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: spec.nodeName
              image: 'docker.io/weaveworks/weave-npc:2.6.0'
              resources:
                requests:
                  cpu: 10m
              securityContext:
                privileged: true
              volumeMounts:
                - name: xtables-lock
                  mountPath: /run/xtables.lock
          hostNetwork: true
          hostPID: true
          restartPolicy: Always
          securityContext:
            seLinuxOptions: {}
          serviceAccountName: weave-net
          tolerations:
            - effect: NoSchedule
              operator: Exists
          volumes:
            - name: weavedb
              hostPath:
                path: /var/lib/weave
            - name: cni-bin
              hostPath:
                path: /opt
            - name: cni-bin2
              hostPath:
                path: /home
            - name: cni-conf
              hostPath:
                path: /etc
            - name: dbus
              hostPath:
                path: /var/lib/dbus
            - name: lib-modules
              hostPath:
                path: /lib/modules
            - name: xtables-lock
              hostPath:
                path: /run/xtables.lock
                type: FileOrCreate
      updateStrategy:
        type: RollingUpdate
controlplane $ kubectl apply -f /opt/weave-kube.yaml
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
controlplane $ kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-9l9jz 1/1 Running 0 109s
coredns-fb8b8dccf-fzlhr 1/1 Running 0 109s
etcd-controlplane 1/1 Running 0 52s
kube-apiserver-controlplane 1/1 Running 0 61s
kube-controller-manager-controlplane 1/1 Running 0 53s
kube-proxy-xkpmr 1/1 Running 0 109s
kube-scheduler-controlplane 1/1 Running 1 51s
weave-net-mpg84 2/2 Running 1 26s
controlplane $ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mongodb-standalone-0 1/1 Running 0 8m51s 10.32.0.194 node01 <none> <none>
mongodb-test-0 1/1 Running 0 8s 10.32.0.195 node01 <none> <none>
# Connect from mongodb-test-0 to mongodb-standalone-0
controlplane $ kubectl exec -it mongodb-test-0 /bin/sh
sh-4.4$ bin/mongo --host 10.32.0.194:27017
Percona Server for MongoDB shell version v4.0.23-18
connecting to: mongodb://10.32.0.194:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("5020ce6f-04db-4294-b3eb-eeaffcfc930a") }
Percona Server for MongoDB server version: v4.0.23-18
...
2021-03-17T10:12:58.227+0000 I CONTROL [initandlisten]
> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
>
[ceph@k8s-master network]$ vim network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.32.1.0/24
        - namespaceSelector:
            matchLabels:
              name: holmes
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 27017
  egress:
    - to:
        - ipBlock:
            cidr: 10.32.0.0/24
      ports:
        - protocol: TCP
          port: 27017
[ceph@k8s-master network]$ kubectl apply -f network-policy.yaml
networkpolicy.networking.k8s.io/test-network-policy configured
[ceph@k8s-master network]$ kubectl get networkpolicy
NAME POD-SELECTOR AGE
test-network-policy app=database 23s
The rules a NetworkPolicy defines are essentially a whitelist. The policyTypes field above declares both Ingress (inbound) and Egress (outbound) rules.
The ingress field defines from and ports, i.e. the whitelist of allowed sources and the allowed ports. The inbound whitelist lists three parallel cases: ipBlock, namespaceSelector, and podSelector. The egress field defines to and ports, i.e. the whitelist of allowed destinations and the allowed ports.
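A detail worth calling out: because ipBlock, namespaceSelector, and podSelector appear as three separate items in the from list, they are OR'd together. Merging namespaceSelector and podSelector into a single item changes the meaning to AND. A sketch of the two forms (reusing the labels from the policy above):

```yaml
# Form 1: two list items -> traffic is allowed from any pod in a namespace
# labeled name=holmes, OR from any pod labeled app=database in this namespace.
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            name: holmes
      - podSelector:
          matchLabels:
            app: database

# Form 2: one list item -> traffic is allowed only from pods labeled
# app=database that also live in a namespace labeled name=holmes.
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            name: holmes
        podSelector:
          matchLabels:
            app: database
```

The indentation of the `- ` dash is the only difference, but it decides whether the two selectors are independent whitelist entries or one combined condition.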
Putting this together, the NetworkPolicy object specifies the following isolation rules:
- Pods labeled app=database in the default namespace are isolated; only whitelisted traffic may reach or leave them.
- Inbound: TCP port 27017 is reachable from the 10.32.1.0/24 CIDR, from any pod in a namespace labeled name=holmes, or from any pod labeled app=database in the default namespace.
- Outbound: the selected pods may only initiate TCP connections on port 27017 to addresses within 10.32.0.0/24.
# mongodb-test-0 does not match the NetworkPolicy whitelist
controlplane $ kubectl describe po mongodb-test-0
Name: mongodb-test-0
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: node01/172.17.0.72
Start Time: Wed, 17 Mar 2021 10:21:26 +0000
Labels: app=database-1
controller-revision-hash=mongodb-test-5fbb49574f
selector=mongodb-test
statefulset.kubernetes.io/pod-name=mongodb-test-0
Annotations: <none>
Status: Running
IP: 10.32.0.195
Controlled By: StatefulSet/mongodb-test
...
controlplane $ kubectl exec -it mongodb-test-0 /bin/sh
sh-4.4$ bin/mongo --host 10.32.0.194:27017
Percona Server for MongoDB shell version v4.0.23-18
connecting to: mongodb://10.32.0.194:27017/?gssapiServiceName=mongodb
2021-03-17T10:30:59.566+0000 E QUERY [js] Error: couldn't connect to server 10.32.0.194:27017, connection attempt failed: SocketException: Error connecting to 10.32.0.194:27017 :: caused by :: Connection timed out :
connect@src/mongo/shell/mongo.js:356:17
@(connect):2:6
exception: connect failed
sh-4.4$
# mongodb-standalone-0 can still connect to mongodb-test-0
controlplane $ kubectl exec -it mongodb-standalone-0 /bin/sh
sh-4.4$ bin/mongo --host 10.32.0.195:27017
Percona Server for MongoDB shell version v4.0.23-18
connecting to: mongodb://10.32.0.195:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("ceb6eafa-43c1-4028-987f-0b4a3856a9ef") }
Percona Server for MongoDB server version: v4.0.23-18
...
2021-03-17T10:21:28.796+0000 I CONTROL [initandlisten]
> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
The experiment above shows that network isolation works well within a single cluster. In a multi-tenant scenario, you only need to configure one set of NetworkPolicies per tenant, so that a tenant's application pods can reach only the instance pods belonging to that tenant.
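A per-tenant policy could be sketched like this. Note that the namespace name and the tenant label key/value are assumptions made for illustration; they do not appear in this article's manifests:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-a-isolation     # hypothetical name
  namespace: tenant-a          # one namespace per tenant (assumption)
spec:
  podSelector: {}              # select every pod belonging to this tenant
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              tenant: a        # hypothetical per-tenant namespace label
```

Applied once per tenant namespace, each tenant's pods can then only be reached from namespaces carrying that tenant's label, while traffic from every other tenant is dropped.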