[Ansible] Building a Kubernetes Cluster

2026. 4. 24. 22:47·IaC | DevOps/IaC

Previous post: 2025.07.12 - [Container/Kubernetes] - [k8s] Building a cluster with kubeadm (1) (lucy-devblog.tistory.com)

Previously I initialized the servers with a shell script; this time I want to replace that with Ansible. And since Kubernetes can also be operated with Ansible, I will use Ansible to join the master node and the worker nodes.


1️⃣ Defining ansible.cfg and the inventory

# ansible.cfg
[defaults]
remote_user = {remote username}
inventory = /home/{remote username}/{inventory filename} # location of the inventory file
ask_pass = false

[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false

# inventory
[master]
master1

[workers]
worker1
worker2

[nodes:children]
master
workers

Host names are mapped to IP addresses in advance in /etc/hosts, and the inventory is written as above. The :children suffix is used to combine the groups into one.
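For reference, the /etc/hosts entries behind those host names might look like this — the addresses below are placeholders for your own environment, not the ones actually used:

```
# /etc/hosts (example addresses)
192.168.56.10  master1
192.168.56.11  worker1
192.168.56.12  worker2
```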

2️⃣ Server initialization

- name: Prepare all Kubernetes nodes
  hosts: nodes # the nodes group, so this runs on both master and worker nodes
  become: yes

  # kubernetes_repo_version, kube_version, and pod_network_cidr are defined
  # separately as variables (e.g. in group_vars or via --extra-vars)

  tasks:
    - name: Disable swap immediately
      command: swapoff -a
      when: ansible_swaptotal_mb > 0

    - name: Disable swap permanently
      replace:
        path: /etc/fstab
        regexp: '^(.*\sswap\s.*)$'
        replace: '# \1'

    - name: Set SELinux to permissive immediately
      command: setenforce 0
      failed_when: false
      changed_when: false

    - name: Set SELinux to permissive permanently
      replace:
        path: /etc/selinux/config
        regexp: '^SELINUX=enforcing'
        replace: 'SELINUX=permissive'

    - name: Install base packages
      dnf:
        name:
          - dnf-plugins-core
          - curl
          - iproute-tc
          - conntrack
          - socat
          - ebtables
          - ethtool
        state: present

    - name: Add Docker repository for containerd.io
      command: dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
      args:
        creates: /etc/yum.repos.d/docker-ce.repo

    - name: Install containerd
      dnf:
        name: containerd.io
        state: present

    - name: Create containerd config directory
      file:
        path: /etc/containerd
        state: directory
        mode: '0755'

    - name: Generate default containerd config
      command: containerd config default
      register: containerd_default_config
      changed_when: false

    - name: Write containerd config
      copy:
        dest: /etc/containerd/config.toml
        content: "{{ containerd_default_config.stdout }}"
        mode: '0644'

    - name: Enable systemd cgroup driver for containerd
      replace:
        path: /etc/containerd/config.toml
        regexp: 'SystemdCgroup = false'
        replace: 'SystemdCgroup = true'

    - name: Enable and restart containerd
      systemd:
        name: containerd
        enabled: yes
        state: restarted

    - name: Load kernel modules
      modprobe:
        name: "{{ item }}"
        state: present
      loop:
        - overlay
        - br_netfilter

    - name: Persist kernel modules
      copy:
        dest: /etc/modules-load.d/k8s.conf
        content: |
          overlay
          br_netfilter
        mode: '0644'

    - name: Configure Kubernetes sysctl params
      copy:
        dest: /etc/sysctl.d/k8s.conf
        content: |
          net.bridge.bridge-nf-call-iptables = 1
          net.bridge.bridge-nf-call-ip6tables = 1
          net.ipv4.ip_forward = 1
        mode: '0644'

    - name: Apply sysctl params
      command: sysctl --system

    - name: Add Kubernetes repository
      copy:
        dest: /etc/yum.repos.d/kubernetes.repo
        content: |
          [kubernetes]
          name=Kubernetes
          baseurl=https://pkgs.k8s.io/core:/stable:/{{ kubernetes_repo_version }}/rpm/
          enabled=1
          gpgcheck=1
          gpgkey=https://pkgs.k8s.io/core:/stable:/{{ kubernetes_repo_version }}/rpm/repodata/repomd.xml.key
          exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
        mode: '0644'

    - name: Install Kubernetes components
      dnf:
        name:
          - kubelet-{{ kube_version }}
          - kubeadm-{{ kube_version }}
          - kubectl-{{ kube_version }}
        state: present
        disable_excludes: kubernetes

    - name: Enable kubelet
      systemd:
        name: kubelet
        enabled: yes
        state: started

    - name: Open Kubernetes common firewall ports
      firewalld:
        port: "{{ item }}"
        permanent: yes
        state: enabled
        immediate: yes
      loop:
        - 10250/tcp

Reference: Creating a cluster with kubeadm - https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

The playbook above is that process written out as an Ansible playbook.

3️⃣ Configuring the master node

- name: Initialize master node and install Calico
  hosts: master
  become: yes

  # pod_network_cidr and calico_version are defined separately as variables

  tasks:
    - name: Open control-plane firewall ports
      firewalld:
        port: "{{ item }}"
        permanent: yes
        state: enabled
        immediate: yes
      loop:
        - 6443/tcp
        - 2379-2380/tcp
        - 10257/tcp
        - 10259/tcp

    - name: Initialize Kubernetes control plane
      command: >
        kubeadm init
        --pod-network-cidr={{ pod_network_cidr }}
        --cri-socket=unix:///run/containerd/containerd.sock
      args:
        creates: /etc/kubernetes/admin.conf

    - name: Create kubeconfig directory for user
      file:
        path: "/home/{{ ansible_user }}/.kube"
        state: directory
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
        mode: '0755'

    - name: Copy admin kubeconfig
      copy:
        src: /etc/kubernetes/admin.conf
        dest: "/home/{{ ansible_user }}/.kube/config"
        remote_src: yes
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
        mode: '0600'

    - name: Install Calico operator CRDs
      command: >
        kubectl --kubeconfig=/etc/kubernetes/admin.conf
        apply -f https://raw.githubusercontent.com/projectcalico/calico/{{ calico_version }}/manifests/operator-crds.yaml

    - name: Install Tigera operator
      command: >
        kubectl --kubeconfig=/etc/kubernetes/admin.conf
        apply -f https://raw.githubusercontent.com/projectcalico/calico/{{ calico_version }}/manifests/tigera-operator.yaml

    - name: Download Calico custom resources
      get_url:
        url: "https://raw.githubusercontent.com/projectcalico/calico/{{ calico_version }}/manifests/custom-resources.yaml"
        dest: /tmp/calico-custom-resources.yaml
        mode: '0644'

    - name: Replace Calico pod CIDR
      replace:
        path: /tmp/calico-custom-resources.yaml
        regexp: 'cidr: 192\.168\.0\.0/16'
        replace: 'cidr: {{ pod_network_cidr }}'

    - name: Apply Calico custom resources
      command: >
        kubectl --kubeconfig=/etc/kubernetes/admin.conf
        apply -f /tmp/calico-custom-resources.yaml

    - name: Create worker join command
      command: kubeadm token create --print-join-command
      register: join_command
      changed_when: false

    - name: Save join command on master # saved separately because it is needed when joining the worker nodes
      copy:
        dest: /tmp/kubeadm_join.sh
        content: "{{ join_command.stdout }} --cri-socket=unix:///run/containerd/containerd.sock\n"
        mode: '0755'

Previously I built the network with flannel, but this time I used Calico.
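For context, the block that the CIDR replacement task targets looks roughly like this inside custom-resources.yaml — this is an excerpt of Calico's Installation custom resource, and the exact fields may differ between Calico versions:

```yaml
# Excerpt from Calico's custom-resources.yaml (shape may vary by version)
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16   # the replace task rewrites this to pod_network_cidr
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
```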

4️⃣ Configuring the worker nodes

- name: Join worker nodes
  hosts: workers
  become: yes

  tasks:
    - name: Open worker firewall ports
      firewalld:
        port: "{{ item }}"
        permanent: yes
        state: enabled
        immediate: yes
      loop:
        - 30000-32767/tcp

    - name: Get join command from master
      command: cat /tmp/kubeadm_join.sh
      delegate_to: "{{ groups['master'][0] }}"
      register: join_command
      changed_when: false

    - name: Join worker node to cluster
      command: "{{ join_command.stdout }}" # join using the command saved on the master node
      args:
        creates: /etc/kubernetes/kubelet.conf

5️⃣ Checking cluster and network status

- name: Verify cluster
  hosts: master
  become: yes

  tasks:
    - name: Check nodes
      command: kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes -o wide
      register: node_result
      changed_when: false

    - name: Show node status
      debug:
        var: node_result.stdout_lines

    - name: Check Calico pods
      command: kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n calico-system
      register: calico_result
      changed_when: false

    - name: Show Calico status
      debug:
        var: calico_result.stdout_lines

6️⃣ Full playbook

---
- name: Prepare all Kubernetes nodes
  hosts: nodes
  become: yes

  # kubernetes_repo_version, kube_version, and pod_network_cidr are defined
  # separately as variables (e.g. in group_vars or via --extra-vars)

  tasks:
    - name: Disable swap immediately
      command: swapoff -a
      when: ansible_swaptotal_mb > 0

    - name: Disable swap permanently
      replace:
        path: /etc/fstab
        regexp: '^(.*\sswap\s.*)$'
        replace: '# \1'

    - name: Set SELinux to permissive immediately
      command: setenforce 0
      failed_when: false
      changed_when: false

    - name: Set SELinux to permissive permanently
      replace:
        path: /etc/selinux/config
        regexp: '^SELINUX=enforcing'
        replace: 'SELINUX=permissive'

    - name: Install base packages
      dnf:
        name:
          - dnf-plugins-core
          - curl
          - iproute-tc
          - conntrack
          - socat
          - ebtables
          - ethtool
        state: present

    - name: Add Docker repository for containerd.io
      command: dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
      args:
        creates: /etc/yum.repos.d/docker-ce.repo

    - name: Install containerd
      dnf:
        name: containerd.io
        state: present

    - name: Create containerd config directory
      file:
        path: /etc/containerd
        state: directory
        mode: '0755'

    - name: Generate default containerd config
      command: containerd config default
      register: containerd_default_config
      changed_when: false

    - name: Write containerd config
      copy:
        dest: /etc/containerd/config.toml
        content: "{{ containerd_default_config.stdout }}"
        mode: '0644'

    - name: Enable systemd cgroup driver for containerd
      replace:
        path: /etc/containerd/config.toml
        regexp: 'SystemdCgroup = false'
        replace: 'SystemdCgroup = true'

    - name: Enable and restart containerd
      systemd:
        name: containerd
        enabled: yes
        state: restarted

    - name: Load kernel modules
      modprobe:
        name: "{{ item }}"
        state: present
      loop:
        - overlay
        - br_netfilter

    - name: Persist kernel modules
      copy:
        dest: /etc/modules-load.d/k8s.conf
        content: |
          overlay
          br_netfilter
        mode: '0644'

    - name: Configure Kubernetes sysctl params
      copy:
        dest: /etc/sysctl.d/k8s.conf
        content: |
          net.bridge.bridge-nf-call-iptables = 1
          net.bridge.bridge-nf-call-ip6tables = 1
          net.ipv4.ip_forward = 1
        mode: '0644'

    - name: Apply sysctl params
      command: sysctl --system

    - name: Add Kubernetes repository
      copy:
        dest: /etc/yum.repos.d/kubernetes.repo
        content: |
          [kubernetes]
          name=Kubernetes
          baseurl=https://pkgs.k8s.io/core:/stable:/{{ kubernetes_repo_version }}/rpm/
          enabled=1
          gpgcheck=1
          gpgkey=https://pkgs.k8s.io/core:/stable:/{{ kubernetes_repo_version }}/rpm/repodata/repomd.xml.key
          exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
        mode: '0644'

    - name: Install Kubernetes components
      dnf:
        name:
          - kubelet-{{ kube_version }}
          - kubeadm-{{ kube_version }}
          - kubectl-{{ kube_version }}
        state: present
        disable_excludes: kubernetes

    - name: Enable kubelet
      systemd:
        name: kubelet
        enabled: yes
        state: started

    - name: Open Kubernetes common firewall ports
      firewalld:
        port: "{{ item }}"
        permanent: yes
        state: enabled
        immediate: yes
      loop:
        - 10250/tcp

- name: Initialize master node and install Calico
  hosts: master
  become: yes

  # pod_network_cidr and calico_version are defined separately as variables

  tasks:
    - name: Open control-plane firewall ports
      firewalld:
        port: "{{ item }}"
        permanent: yes
        state: enabled
        immediate: yes
      loop:
        - 6443/tcp
        - 2379-2380/tcp
        - 10257/tcp
        - 10259/tcp

    - name: Initialize Kubernetes control plane
      command: >
        kubeadm init
        --pod-network-cidr={{ pod_network_cidr }}
        --cri-socket=unix:///run/containerd/containerd.sock
      args:
        creates: /etc/kubernetes/admin.conf

    - name: Create kubeconfig directory for user
      file:
        path: "/home/{{ ansible_user }}/.kube"
        state: directory
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
        mode: '0755'

    - name: Copy admin kubeconfig
      copy:
        src: /etc/kubernetes/admin.conf
        dest: "/home/{{ ansible_user }}/.kube/config"
        remote_src: yes
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
        mode: '0600'

    - name: Install Calico operator CRDs
      command: >
        kubectl --kubeconfig=/etc/kubernetes/admin.conf
        apply -f https://raw.githubusercontent.com/projectcalico/calico/{{ calico_version }}/manifests/operator-crds.yaml

    - name: Install Tigera operator
      command: >
        kubectl --kubeconfig=/etc/kubernetes/admin.conf
        apply -f https://raw.githubusercontent.com/projectcalico/calico/{{ calico_version }}/manifests/tigera-operator.yaml

    - name: Download Calico custom resources
      get_url:
        url: "https://raw.githubusercontent.com/projectcalico/calico/{{ calico_version }}/manifests/custom-resources.yaml"
        dest: /tmp/calico-custom-resources.yaml
        mode: '0644'

    - name: Replace Calico pod CIDR
      replace:
        path: /tmp/calico-custom-resources.yaml
        regexp: 'cidr: 192\.168\.0\.0/16'
        replace: 'cidr: {{ pod_network_cidr }}'

    - name: Apply Calico custom resources
      command: >
        kubectl --kubeconfig=/etc/kubernetes/admin.conf
        apply -f /tmp/calico-custom-resources.yaml

    - name: Create worker join command
      command: kubeadm token create --print-join-command
      register: join_command
      changed_when: false

    - name: Save join command on master
      copy:
        dest: /tmp/kubeadm_join.sh
        content: "{{ join_command.stdout }} --cri-socket=unix:///run/containerd/containerd.sock\n"
        mode: '0755'

- name: Join worker nodes
  hosts: workers
  become: yes

  tasks:
    - name: Open worker firewall ports
      firewalld:
        port: "{{ item }}"
        permanent: yes
        state: enabled
        immediate: yes
      loop:
        - 30000-32767/tcp

    - name: Get join command from master
      command: cat /tmp/kubeadm_join.sh
      delegate_to: "{{ groups['master'][0] }}"
      register: join_command
      changed_when: false

    - name: Join worker node to cluster
      command: "{{ join_command.stdout }}"
      args:
        creates: /etc/kubernetes/kubelet.conf

- name: Verify cluster
  hosts: master
  become: yes

  tasks:
    - name: Check nodes
      command: kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes -o wide
      register: node_result
      changed_when: false

    - name: Show node status
      debug:
        var: node_result.stdout_lines

    - name: Check Calico pods
      command: kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n calico-system
      register: calico_result
      changed_when: false

    - name: Show Calico status
      debug:
        var: calico_result.stdout_lines

Combining each of the steps above gives the full playbook shown here. Everything written as {{ }} is configured separately as a variable.
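As a sketch, those externally-defined variables could live in a group_vars file like the one below — the version strings and CIDR here are illustrative examples, not the values actually used in this post:

```yaml
# group_vars/all.yml — example values only
kubernetes_repo_version: v1.31   # minor-version path used in the pkgs.k8s.io repo URLs
kube_version: 1.31.0             # package version pin for kubelet/kubeadm/kubectl
pod_network_cidr: 10.244.0.0/16  # pod network range passed to kubeadm and Calico
calico_version: v3.28.0          # Calico manifest tag
```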


With a shell script, after copying the script to another server, running commands with sudo requires typing the password in the terminal, and in my case the password input was not masked. So I felt shell scripts were a poor fit for configuring multiple servers at once. Ansible, on the other hand, escalates privileges to root, so it can run without typing the password directly. For that reason I think multiple servers are better configured with Ansible.
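Note that become_ask_pass = false only works if the remote user can run sudo without a password. One way to set that up — shown here as an example, with {remote username} as a placeholder — is a sudoers drop-in:

```
# /etc/sudoers.d/ansible — example only; always edit sudoers files with visudo
{remote username} ALL=(ALL) NOPASSWD:ALL
```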

