Deploying a Kubernetes Cluster with Ansible

Preparation

  • Prepare four machines
ansible: 10.3.23.191
K8STest0001: 10.3.23.207
K8STest0002: 10.3.23.208
K8STest0003: 10.3.23.209
  • Versions
docker: 20.10.9-3.el7
k8s: 1.23.13-0

Kubernetes dropped built-in Docker support (dockershim) starting with 1.24, so 1.23 is used here.

[root@ansible ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.3.23.207 K8STest0001
10.3.23.208 K8STest0002
10.3.23.209 K8STest0003

1. Install Ansible

See http://08643.cn/p/f5ba99305c0d

2. Update the k8s hosts file

  • Edit the Ansible inventory: vi /etc/ansible/hosts
[k8s:children]
k8s_master
k8s_node


[k8s_master]
10.3.23.207


[k8s_node]
10.3.23.208
10.3.23.209

Test the inventory file:

[root@KSSYSDEV ansible]# ansible k8s --list-hosts
  hosts (3):
    10.3.23.207
    10.3.23.208
    10.3.23.209
  • Create a playbook that syncs /etc/hosts to every node in the k8s cluster
cat <<EOF >  ./playbook-k8s-hosts.yml
---
- hosts: k8s
  remote_user: root
 
  tasks:
    - name: backup /etc/hosts
      shell: mv /etc/hosts /etc/host_bak
    - name: copy localhosts file to remote
      copy: src=/etc/hosts dest=/etc/ owner=root group=root mode=0644
EOF
ansible-playbook playbook-k8s-hosts.yml
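After the playbook runs, an ad-hoc command can confirm the file reached every node (a verification sketch added here, not part of the original steps):

```shell
# Each node's /etc/hosts should now contain the three K8STest entries
# pushed from the control machine; expect "3" from every host.
ansible k8s -m shell -a "grep -c 'K8STest' /etc/hosts"
```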

3. Install Docker

Install Docker on all nodes:

cat <<EOF > ./playbook-k8s-install-docker.yml 
---
- hosts: k8s
  remote_user: root
  vars: 
    docker_version: 20.10.9-3.el7

  tasks:
    - name: install dependencies
      shell:  yum install -y yum-utils device-mapper-persistent-data lvm2
    - name: docker-repo
      shell: yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    - name: install docker
      yum: name=docker-ce-{{docker_version}} state=present
    - name: start docker
      shell: systemctl start docker && systemctl enable docker
      
EOF
ansible-playbook playbook-k8s-install-docker.yml 
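Before moving on, it may help to check that the pinned Docker version is installed and running everywhere (a verification sketch, not in the original flow):

```shell
# Confirm the Docker version and service state on every node.
ansible k8s -m shell -a "docker --version && systemctl is-active docker"
```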

4. Deploy the k8s master

  • Write the initialization script
    Before deployment, every k8s machine needs some OS-level initialization: disable the firewall, disable SELinux, disable swap, configure the Aliyun k8s yum repo, and so on. All of it goes into the script k8s-os-init.sh, which the playbooks below run via the script module.
cat <<EEE > ./k8s-os-init.sh
#!/bin/bash
# disable the firewall; set SELinux to permissive
systemctl disable firewalld
systemctl stop firewalld
setenforce 0

# disable swap
swapoff -a

# set kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# reload sysctl configuration
sysctl --system

# configure the Aliyun k8s yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# refresh the yum cache
yum clean all -y && yum makecache -y && yum repolist -y


# kubelet's cgroup driver is systemd while docker defaults to cgroupfs;
# the mismatch makes kubeadm init fail, so switch docker to systemd
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# restart docker to pick up daemon.json
systemctl restart docker

EEE
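Once the script has run on a node, the key settings can be spot-checked from the control machine (an added verification sketch; expected values are in the comments):

```shell
ansible k8s -m shell -a "getenforce"          # Permissive or Disabled
ansible k8s -m shell -a "swapon --show"       # empty output means swap is off
ansible k8s -m shell -a "sysctl net.bridge.bridge-nf-call-iptables"  # = 1
```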

repo_gpgcheck is set to 0 here. If it is changed to 1, kubeadm init fails with: Failure talking to yum: failure: repodata/repomd.xml from kubernetes: [Errno 256] No more mirrors to try.\nhttps://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for kubernetes

  • Create the master playbook file (EOF is quoted below so that $HOME and $(id -u) are written into the playbook literally instead of being expanded on the control machine)
cat << 'EOF' > ./playbook-k8s-install-master.yml 
---
- hosts: k8s_master
  remote_user: root
  vars:
    kube_version: 1.23.13-0
    k8s_version: v1.23.13
    k8s_master: 10.3.23.207
  tasks: 
    - name: k8s-os-init
      script: ./k8s-os-init.sh
    - name: install kube***
      yum: 
        name:
          - kubectl-{{kube_version}}
          - kubeadm-{{kube_version}}
          - kubelet-{{kube_version}}
        state: present
    - name: start k8s
      shell: systemctl enable kubelet && systemctl start kubelet
    - name: init k8s
      shell: kubeadm reset -f && kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version {{k8s_version}} --apiserver-advertise-address {{k8s_master}}  --pod-network-cidr=10.244.0.0/16 --token-ttl 0
    - name: config kube
      shell: rm -rf $HOME/.kube && mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config
    - name: copy flannel yaml file
      copy: src=./kube-flannel.yml dest=/tmp/kube-flannel.yml
    - name: install flannel
      shell: kubectl apply -f /tmp/kube-flannel.yml
    - name: get join command
      shell: kubeadm token create --print-join-command 
      register: join_command
    - name: show join command
      debug: var=join_command verbosity=0

EOF

This step uses the kube-flannel.yml file downloaded manually during the preparation phase (its full contents are in the appendix).
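For reference, one way to fetch the v0.20.0 manifest on the control node (the URL is an assumption based on the flannel repository layout at that release; verify before relying on it):

```shell
curl -fsSLo ./kube-flannel.yml \
  https://raw.githubusercontent.com/flannel-io/flannel/v0.20.0/Documentation/kube-flannel.yml
```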

Run the master installation:

[root@KSSYSDEV ansible]# ansible-playbook playbook-k8s-install-master.yml

PLAY [k8s_master] ********************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************
ok: [10.3.23.207]

TASK [start k8s] *********************************************************************************************************************************
changed: [10.3.23.207]

TASK [init k8s] **********************************************************************************************************************************
changed: [10.3.23.207]

TASK [config kube] *******************************************************************************************************************************
[WARNING]: Consider using the file module with state=absent rather than running 'rm'.  If you need to use command because file is insufficient
you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [10.3.23.207]

TASK [copy flannel yaml file] ********************************************************************************************************************
ok: [10.3.23.207]

TASK [install flannel] ***************************************************************************************************************************
changed: [10.3.23.207]

TASK [get join command] **************************************************************************************************************************
changed: [10.3.23.207]

TASK [show join command] *************************************************************************************************************************
ok: [10.3.23.207] => {
    "join_command": {
        "changed": true,
        "cmd": "kubeadm token create --print-join-command",
        "delta": "0:00:00.048605",
        "end": "2022-11-02 10:35:46.974724",
        "failed": false,
        "rc": 0,
        "start": "2022-11-02 10:35:46.926119",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "kubeadm join 10.3.23.207:6443 --token nioc2c.ycnrz4gj54vmxnl5 --discovery-token-ca-cert-hash sha256:52f8ebbe8926cfb8b17459e5b1fb4fcdd50283e870af6f61cf9b43c880b638b8 ",
        "stdout_lines": [
            "kubeadm join 10.3.23.207:6443 --token nioc2c.ycnrz4gj54vmxnl5 --discovery-token-ca-cert-hash sha256:52f8ebbe8926cfb8b17459e5b1fb4fcdd50283e870af6f61cf9b43c880b638b8 "
        ]
    }
}

PLAY RECAP ***************************************************************************************************************************************
10.3.23.207                : ok=8    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

  • The log above is incomplete because the playbook was run in two passes
  • Note the join_command in the log; it is needed in the next step
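A side note, not part of the original flow: the --discovery-token-ca-cert-hash value in join_command is simply "sha256:" plus the SHA-256 digest of the cluster CA's public key in DER form, so it can be recomputed on the master if the log is lost. CA_CRT below defaults to kubeadm's standard CA path:

```shell
# Recompute the discovery hash from the cluster CA certificate.
# CA_CRT can be overridden; kubeadm writes the CA to this path by default.
CA_CRT="${CA_CRT:-/etc/kubernetes/pki/ca.crt}"
openssl x509 -pubkey -in "$CA_CRT" \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -hex \
  | awk '{print "sha256:" $NF}'
```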

5. Deploy the k8s nodes

cat <<EOF > ./playbook-k8s-install-node.yml 
- hosts: k8s_node
  remote_user: root
  vars:
    kube_version: 1.23.13-0
  tasks:
    - name: k8s-os-init
      script: ./k8s-os-init.sh
    - name: install kube***
      yum: 
        name:
          - kubeadm-{{kube_version}}
          - kubelet-{{kube_version}}
        state: present
    - name: start kubelet
      shell: systemctl enable kubelet && systemctl start kubelet
    - name: join cluster
      shell: kubeadm join 10.3.23.207:6443 --token nioc2c.ycnrz4gj54vmxnl5 --discovery-token-ca-cert-hash sha256:52f8ebbe8926cfb8b17459e5b1fb4fcdd50283e870af6f61cf9b43c880b638b8
EOF

The kubeadm join ...... command comes from the output of the master installation in the previous step.
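A hedged alternative sketch (file name and task layout are illustrative, not from the original): instead of pasting the token by hand, the node play can generate a fresh join command on the master at run time via delegate_to, so nothing needs to be copied between steps:

```shell
cat << 'EOF' > ./playbook-k8s-join-node.yml
- hosts: k8s_node
  remote_user: root
  tasks:
    - name: generate a join command on the master
      shell: kubeadm token create --print-join-command
      delegate_to: "{{ groups['k8s_master'][0] }}"
      register: join_cmd
    - name: join cluster
      shell: "{{ join_cmd.stdout }}"
EOF
```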

ansible-playbook playbook-k8s-install-node.yml

[root@KSSYSDEV ansible]# ansible-playbook playbook-k8s-install-node.yml

PLAY [k8s_node] **********************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************
ok: [10.3.23.209]
ok: [10.3.23.208]

TASK [k8s-os-init] *******************************************************************************************************************************
changed: [10.3.23.209]
changed: [10.3.23.208]

TASK [install kube***] ***************************************************************************************************************************
changed: [10.3.23.208]
changed: [10.3.23.209]

TASK [start kubelet] *****************************************************************************************************************************
changed: [10.3.23.208]
changed: [10.3.23.209]

TASK [join cluster] ******************************************************************************************************************************
changed: [10.3.23.208]
changed: [10.3.23.209]

PLAY RECAP ***************************************************************************************************************************************
10.3.23.208                : ok=5    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
10.3.23.209                : ok=5    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Check the nodes on the master:

[root@K8STest0001 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
k8stest0001   Ready      control-plane,master   63m     v1.23.13
k8stest0002   NotReady   <none>                 4m30s   v1.23.13
k8stest0003   NotReady   <none>                 4m30s   v1.23.13
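While the workers sit in NotReady, checking the flannel pods usually shows why, typically images still pulling (an added troubleshooting sketch; the node name is taken from the output above):

```shell
# Workers stay NotReady until their flannel CNI pods are pulled and running.
kubectl get pods -n kube-flannel -o wide
# Node conditions state the direct reason (e.g. "cni plugin not initialized").
kubectl describe node k8stest0002 | grep -A 6 'Conditions:'
```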

Both workers are still NotReady; they stay that way until the flannel images finish pulling and the CNI comes up, which can take tens of minutes. Checking again later:

[root@K8STest0001 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE    VERSION
k8stest0001   Ready    control-plane,master   138m   v1.23.13
k8stest0002   Ready    <none>                 80m    v1.23.13
k8stest0003   Ready    <none>                 80m    v1.23.13
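Once every node reports Ready, a throwaway deployment makes a quick smoke test (an added example; the deployment name is arbitrary):

```shell
kubectl create deployment smoke-test --image=nginx --replicas=2
kubectl get pods -o wide          # pods should be scheduled onto the workers
kubectl delete deployment smoke-test
```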

Appendix

  • kube-flannel.yml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.20.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.20.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
