Guto Carvalho # 2022-21-08 @ BSB

DROPS: Installing Rancher 2.6 in HA

Learn how to install the brand-new Rancher 2.6 in HA mode.


What are drops?

They're quick and dirty mental DUMPs, simple and objective, that actually work.

Usually about something I just finished doing.

I almost always come back later to flesh out each step.

Treat them with the same expectations as a draft or a quick note.

Either way, feel free to comment on anything, comments are enabled on DROPS ;)

The task!

The idea is to install Rancher in HA on a 3-node cluster, using AWS EC2 instances.

HowTo?

before you start

Check these off :) (there's a quick sanity check for the tools right after the list)

  1. Have kubectl installed
  2. Have RKE installed
  3. Have Helm installed
  4. Have an AWS account
  5. Internet access is good form :)
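
A quick way to confirm the first three are in place, using each tool's standard version flag (the aws line is optional, only needed if you want to script the AWS steps later):

kubectl version --client
rke --version
helm version
aws --version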

ec2 instances

Let's go! (if you'd rather script the load balancer part, there's an AWS CLI sketch right after the list)

  1. create an SSH key in EC2
  2. create 3 EC2 instances in your AWS account using that SSH key
  3. create 2 target groups, rancher-80 and rancher-443, pointing to the cluster machines
  4. create an NLB load balancer with two listeners pointing to the rancher-80 and rancher-443 target groups
  5. create a DNS entry rancher.seudominio.tld pointing to the load balancer's CNAME
  6. create a security group allowing access to ports 80 and 443 on the cluster machines
  7. create a security group so you can install the cluster from your own IP via rke (all ports)
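
If you prefer to script steps 3 and 4, a rough AWS CLI sketch is below; vpc-xxxx, subnet-a, subnet-b, the instance IDs and the <...-arn> values are placeholders you must replace with your own:

aws elbv2 create-target-group --name rancher-80 --protocol TCP --port 80 --vpc-id vpc-xxxx --target-type instance
aws elbv2 create-target-group --name rancher-443 --protocol TCP --port 443 --vpc-id vpc-xxxx --target-type instance
aws elbv2 register-targets --target-group-arn <rancher-80-arn> --targets Id=i-aaa Id=i-bbb Id=i-ccc
aws elbv2 register-targets --target-group-arn <rancher-443-arn> --targets Id=i-aaa Id=i-bbb Id=i-ccc
aws elbv2 create-load-balancer --name rancher-nlb --type network --subnets subnet-a subnet-b
aws elbv2 create-listener --load-balancer-arn <nlb-arn> --protocol TCP --port 80 --default-actions Type=forward,TargetGroupArn=<rancher-80-arn>
aws elbv2 create-listener --load-balancer-arn <nlb-arn> --protocol TCP --port 443 --default-actions Type=forward,TargetGroupArn=<rancher-443-arn>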

preparing the instances

install docker on all 3 machines

curl https://releases.rancher.com/install-docker/20.10.sh | sh

enable and start it

systemctl enable docker && systemctl start docker

add the ubuntu user to the docker group

gpasswd -a ubuntu docker
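
To avoid logging into each machine, you can run the three steps above from your workstation. A minimal sketch, assuming the default ubuntu user of the Ubuntu AMI, the EC2 key created earlier, and that the install script escalates itself with sudo:

for host in 1.1.1.1 2.2.2.2 3.3.3.3; do
  ssh -i ~/.ssh/id_rsa ubuntu@$host 'curl https://releases.rancher.com/install-docker/20.10.sh | sh'
  ssh -i ~/.ssh/id_rsa ubuntu@$host 'sudo systemctl enable docker && sudo systemctl start docker'
  ssh -i ~/.ssh/id_rsa ubuntu@$host 'sudo gpasswd -a ubuntu docker'
done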

preparing and installing the cluster (from your machine)

creating the configuration

rke config

it will ask you a few questions; register only 1 node to keep things simple (I used 1.1.1.1 as an example) and point it at your SSH key, the same one you used for the EC2 instances.

[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]:
[+] Number of Hosts [1]:
[+] SSH Address of host (1) [none]: 1.1.1.1
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (1.1.1.1) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (1.1.1.1) [none]:
[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~/.ssh/id_rsa
[+] SSH User of host (1.1.1.1) [ubuntu]:
[+] Is host (1.1.1.1) a Control Plane host (y/n)? [y]: y
[+] Is host (1.1.1.1) a Worker host (y/n)? [n]: y
[+] Is host (1.1.1.1) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (1.1.1.1) [none]:
[+] Internal IP of host (1.1.1.1) [none]:
[+] Docker socket path on host (1.1.1.1) [/var/run/docker.sock]:
[+] Network Plugin Type (flannel, calico, weave, canal, aci) [canal]:
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.21.5-rancher1]:
[+] Cluster domain [cluster.local]:
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]:

open the file and add the remaining nodes, and don't forget to register each node's private IP as well. once the nodes are done, customize whatever your k8s environment needs; rke is very flexible about this, besides being the easiest k8s installer I know.

nodes:
- address: 1.1.1.1
  port: "22"
  internal_address: "172.31.1.1"
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: "rancher_a"
  user: ubuntu
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 2.2.2.2
  port: "22"
  internal_address: "172.31.1.2"
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: "rancher_b"
  user: ubuntu
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 3.3.3.3
  port: "22"
  internal_address: "172.31.1.3"
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: "rancher_c"
  user: ubuntu
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
network:
  plugin: canal
  options: {}
  mtu: 0
  node_selector: {}
  update_strategy: null
  tolerations: []
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/mirrored-coreos-etcd:v3.4.16-rancher1
  alpine: rancher/rke-tools:v0.1.78
  nginx_proxy: rancher/rke-tools:v0.1.78
  cert_downloader: rancher/rke-tools:v0.1.78
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.78
  kubedns: rancher/mirrored-k8s-dns-kube-dns:1.17.4
  dnsmasq: rancher/mirrored-k8s-dns-dnsmasq-nanny:1.17.4
  kubedns_sidecar: rancher/mirrored-k8s-dns-sidecar:1.17.4
  kubedns_autoscaler: rancher/mirrored-cluster-proportional-autoscaler:1.8.3
  coredns: rancher/mirrored-coredns-coredns:1.8.4
  coredns_autoscaler: rancher/mirrored-cluster-proportional-autoscaler:1.8.3
  nodelocal: rancher/mirrored-k8s-dns-node-cache:1.18.0
  kubernetes: rancher/hyperkube:v1.21.5-rancher1
  flannel: rancher/mirrored-coreos-flannel:v0.14.0
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher6
  calico_node: rancher/mirrored-calico-node:v3.19.2
  calico_cni: rancher/mirrored-calico-cni:v3.19.2
  calico_controllers: rancher/mirrored-calico-kube-controllers:v3.19.2
  calico_ctl: rancher/mirrored-calico-ctl:v3.19.2
  calico_flexvol: rancher/mirrored-calico-pod2daemon-flexvol:v3.19.2
  canal_node: rancher/mirrored-calico-node:v3.19.2
  canal_cni: rancher/mirrored-calico-cni:v3.19.2
  canal_controllers: rancher/mirrored-calico-kube-controllers:v3.19.2
  canal_flannel: rancher/mirrored-coreos-flannel:v0.14.0
  canal_flexvol: rancher/mirrored-calico-pod2daemon-flexvol:v3.19.2
  weave_node: weaveworks/weave-kube:2.8.1
  weave_cni: weaveworks/weave-npc:2.8.1
  pod_infra_container: rancher/mirrored-pause:3.4.1
  ingress: rancher/nginx-ingress-controller:nginx-0.48.1-rancher1
  ingress_backend: rancher/mirrored-nginx-ingress-controller-defaultbackend:1.5-rancher1
  ingress_webhook: rancher/mirrored-jettech-kube-webhook-certgen:v1.5.1
  metrics_server: rancher/mirrored-metrics-server:v0.5.0
  windows_pod_infra_container: rancher/kubelet-pause:v0.1.6
  aci_cni_deploy_container: noiro/cnideploy:5.1.1.0.1ae238a
  aci_host_container: noiro/aci-containers-host:5.1.1.0.1ae238a
  aci_opflex_container: noiro/opflex:5.1.1.0.1ae238a
  aci_mcast_container: noiro/opflex:5.1.1.0.1ae238a
  aci_ovs_container: noiro/openvswitch:5.1.1.0.1ae238a
  aci_controller_container: noiro/aci-containers-controller:5.1.1.0.1ae238a
  aci_gbp_server_container: noiro/gbp-server:5.1.1.0.1ae238a
  aci_opflex_server_container: noiro/opflex-server:5.1.1.0.1ae238a
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: null
enable_cri_dockerd: null
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
  update_strategy: null
  http_port: 0
  https_port: 0
  network_mode: ""
  tolerations: []
  default_backend: null
  default_http_backend_priority_class_name: ""
  nginx_ingress_controller_priority_class_name: ""
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
win_prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
  ignore_proxy_env_vars: false
monitoring:
  provider: ""
  options: {}
  node_selector: {}
  update_strategy: null
  replicas: null
  tolerations: []
  metrics_server_priority_class_name: ""
restore:
  restore: false
  snapshot_name: ""
rotate_encryption_key: false
dns: null

once the nodes are registered, let's start provisioning

rke up --config cluster.yml

after the install, you will see the following message (if everything goes well)

INFO[0255] Finished building Kubernetes cluster successfully

configure kubectl

export KUBECONFIG=kube_config_cluster.yml
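
if you prefer not to export the variable in every new shell, you can make it your default kubeconfig (back up any existing ~/.kube/config first):

mkdir -p ~/.kube
cp kube_config_cluster.yml ~/.kube/config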

check that the cluster is up

kubectl get nodes

example output

NAME             STATUS   ROLES                      AGE   VERSION
54.225.184.xxx   Ready    controlplane,etcd,worker   12m   v1.21.5
54.243.65.xxx    Ready    controlplane,etcd,worker   12m   v1.21.5
72.44.32.xxx     Ready    controlplane,etcd,worker   12m   v1.21.5

cluster installed; now check its health

kubectl get pods --all-namespaces

example output

NAMESPACE       NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-6977475d9b-z64cj      1/1     Running     0          17m
ingress-nginx   nginx-ingress-controller-h4c67             1/1     Running     0          17m
ingress-nginx   nginx-ingress-controller-kjd8r             1/1     Running     0          17m
ingress-nginx   nginx-ingress-controller-q4lp2             1/1     Running     0          17m
kube-system     calico-kube-controllers-7d5d95c8c9-mcfj6   1/1     Running     0          18m
kube-system     canal-lsbkv                                2/2     Running     0          18m
kube-system     canal-lwtq9                                2/2     Running     0          18m
kube-system     canal-wm7c8                                2/2     Running     0          18m
kube-system     coredns-55b58f978-jnfsq                    1/1     Running     0          17m
kube-system     coredns-55b58f978-wr4w7                    1/1     Running     0          18m
kube-system     coredns-autoscaler-76f8869cc9-lz44p        1/1     Running     0          18m
kube-system     metrics-server-55fdd84cd4-rjk7w            1/1     Running     0          18m
kube-system     rke-coredns-addon-deploy-job-th472         0/1     Completed   0          18m
kube-system     rke-ingress-controller-deploy-job-59cm4    0/1     Completed   0          17m
kube-system     rke-metrics-addon-deploy-job-94b8t         0/1     Completed   0          18m
kube-system     rke-network-plugin-deploy-job-q5nts        0/1     Completed   0          18m

everything looks fine so far :)

preparing and installing cert-manager on this cluster (from your machine)

install the cert-manager CRDs

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.1/cert-manager.crds.yaml
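
to confirm the CRDs landed:

kubectl get crd | grep cert-manager.io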

add the repo and update the indexes

helm repo add jetstack https://charts.jetstack.io
helm repo update

create the namespace

kubectl create namespace cert-manager

install cert-manager

helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.5.1
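
before moving on, check that the cert-manager pods are up; the cert-manager, cainjector and webhook pods should all reach Running:

kubectl get pods -n cert-manager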

create the cluster issuer and save it as issuer.yml; without it, certificates won't be issued via Let's Encrypt

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: certmanager@nativetrail.io
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

apply the issuer

kubectl apply -f issuer.yml

expected output

 clusterissuer.cert-manager.io/letsencrypt-prod created

check that it's ok

kubectl get clusterissuer 

expected output

NAME               READY   AGE
letsencrypt-prod   True    48s

if it shows "True", it worked!
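
if it stays on "False", describing the resource usually shows why the ACME registration failed:

kubectl describe clusterissuer letsencrypt-prod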

preparing and installing rancher on this cluster (from your machine)

add the repo and update the indexes

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update

create the cattle-system namespace

kubectl create namespace cattle-system

install rancher, pointing hostname at the DNS entry you created earlier (I'm using my real hostname below)

helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=kloud.gr1d.io \
  --set replicas=3 \
  --set ingress.tls.source=letsEncrypt \
  --set letsEncrypt.email=certmanager@nativetrail.io

check it

kubectl get pods -n cattle-system

expected output; the pods will still be creating the first time you run the command

NAME                       READY   STATUS              RESTARTS   AGE
rancher-58b56d54df-7mv7d   0/1     ContainerCreating   0          29s
rancher-58b56d54df-csmpl   0/1     ContainerCreating   0          29s
rancher-58b56d54df-pkpz6   0/1     ContainerCreating   0          29s

wait and check again with the same command

kubectl get pods -n cattle-system

output

NAME                       READY   STATUS    RESTARTS   AGE
rancher-58b56d54df-7mv7d   0/1     Running   0          41s
rancher-58b56d54df-csmpl   0/1     Running   0          41s
rancher-58b56d54df-pkpz6   0/1     Running   0          41s

rancher installed and running!
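
tip: instead of polling with get pods, you can block until the deployment finishes rolling out:

kubectl -n cattle-system rollout status deploy/rancher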

post install

watch the output of the helm command: it tells you how to retrieve the password generated for the first login. after that, open rancher in your browser at the URL you defined and follow the steps to change the password and start using your rancher.
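
for reference, with Rancher 2.6 the chart notes usually point to something like the command below to read that bootstrap password; check your actual helm output, since the secret name may vary:

kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'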
