VerveBuds115OnUbuntu

Steps for configuring the VerveBuds 115 Bluetooth headset on Ubuntu 18.04:

Blueman

Use blueman to configure the headset connection.

# sudo apt-get install -y blueman

Add blueman to awesome’s startup functions:

# cat ~/.config/awesome/rc.lua | grep blueman
run_once("blueman-applet &")

Configure blueman:

/images/2019_12_30_09_22_31_969x154.jpg

Choose A2DP.

Sound

Ubuntu 18.04 uses PulseAudio as the default sound backend, so we use the following tools to configure sound:

# sudo apt-get install -y pasystray
# sudo apt-get install -y pnmixer

Also add them to awesome’s startup functions, so that after the system boots up the volume controls appear in the systray:

# vim ~/.config/awesome/rc.lua
.....
run_once("blueman-applet &")
run_once("pnmixer &")
run_once("pasystray &")

Bug: with pnmixer we can only control the Intel PCH device, but not the Bluetooth sink?
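
As a workaround, the Bluetooth sink can be selected and controlled directly through pactl (a sketch; the bluez sink name depends on the headset's MAC address):

$ pactl list short sinks
$ pactl set-default-sink bluez_sink.XX_XX_XX_XX_XX_XX.a2dp_sink
$ pactl set-sink-volume @DEFAULT_SINK@ 80%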

Kubespray 2.12.0 Offline Deployment Notes

Steps

Download the source code:

# wget https://github.com/kubernetes-sigs/kubespray/archive/v2.12.0.tar.gz

Install ansible via the old Rong/ directory:

$ scp -r Rong test@192.168.121.104:/home/test/
$ cd ~/Rong
$ sudo mv /etc/apt/sources.list /home/test/
$ sudo ./bootstrap.sh
$ sudo mv /home/test/sources.list /etc/apt/

Change options:

$ cd ~/kubespray-2.12.0
$ cp ../deploy.key .
$ ssh -i deploy.key root@192.168.121.104
$ exit
$ cp -rfp inventory/sample/ inventory/rong
$ vim inventory/rong/hosts.ini
[all]
kubespray ansible_host=192.168.121.104 ansible_ssh_user=root ansible_ssh_private_key_file=./deploy.key  ip=192.168.121.104

[kube-master]
kubespray

[etcd]
kubespray

[kube-node]
kubespray

[k8s-cluster:children]
kube-master
kube-node
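
Before going further, SSH connectivity to the node can be verified with ansible's ping module (an optional sanity check using the same inventory):

$ ansible -i inventory/rong/hosts.ini all -m ping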

Add some configuration:

$ vim group_vars/k8s-cluster/addons.yml 
dashboard_enabled: true
helm_enabled: true
metrics_server_enabled: true

Speedup

To get across the GFW, on the host machine side:

$ sudo iptables -t nat -A PREROUTING -p tcp -s 192.168.121.0/24 -j DNAT --to-destination 127.0.0.1:12345
$ sudo sysctl -w net.ipv4.conf.all.route_localnet=1
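
The DNAT target only works for forwarded traffic because route_localnet=1 allows 127.0.0.1 to be used as a routing destination; it also assumes a transparent proxy is already listening on local port 12345, which can be checked with ss (a quick sanity check, not part of the original notes):

$ ss -ltnp | grep 12345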

On the VM side:

$ sudo vim /etc/resolv.conf
nameserver 223.5.5.5
nameserver 8.8.8.8

Setup Cluster

Via:

$ ansible-playbook -i inventory/rong/hosts.ini cluster.yml

Fetch things

Get all of the images:

# docker pull xueshanf/install-socat:latest
# docker images | sed -n '1!p' | awk '{print $1":"$2}' | tr '\n' ' '
nginx:1.17 gcr.io/google-containers/k8s-dns-node-cache:1.15.8 gcr.io/google-containers/kube-proxy:v1.16.3 gcr.io/google-containers/kube-apiserver:v1.16.3 gcr.io/google-containers/kube-controller-manager:v1.16.3 gcr.io/google-containers/kube-scheduler:v1.16.3 lachlanevenson/k8s-helm:v2.16.1 gcr.io/kubernetes-helm/tiller:v2.16.1 coredns/coredns:1.6.0 calico/node:v3.7.3 calico/cni:v3.7.3 calico/kube-controllers:v3.7.3 gcr.io/google_containers/metrics-server-amd64:v0.3.3 gcr.io/google-containers/cluster-proportional-autoscaler-amd64:1.6.0 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1 quay.io/coreos/etcd:v3.3.10 gcr.io/google-containers/addon-resizer:1.8.3 gcr.io/google-containers/pause:3.1 gcr.io/google_containers/pause-amd64:3.1 xueshanf/install-socat:latest
# docker save -o k8simages.tar nginx:1.17 gcr.io/google-containers/k8s-dns-node-cache:1.15.8 gcr.io/google-containers/kube-proxy:v1.16.3 gcr.io/google-containers/kube-apiserver:v1.16.3 gcr.io/google-containers/kube-controller-manager:v1.16.3 gcr.io/google-containers/kube-scheduler:v1.16.3 lachlanevenson/k8s-helm:v2.16.1 gcr.io/kubernetes-helm/tiller:v2.16.1 coredns/coredns:1.6.0 calico/node:v3.7.3 calico/cni:v3.7.3 calico/kube-controllers:v3.7.3 gcr.io/google_containers/metrics-server-amd64:v0.3.3 gcr.io/google-containers/cluster-proportional-autoscaler-amd64:1.6.0 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1 quay.io/coreos/etcd:v3.3.10 gcr.io/google-containers/addon-resizer:1.8.3 gcr.io/google-containers/pause:3.1 gcr.io/google_containers/pause-amd64:3.1 xueshanf/install-socat:latest; xz -T4 k8simages.tar
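
An equivalent way to produce the same image list is docker's built-in Go-template formatting, which avoids the sed/awk pipeline (the output order may differ):

# docker images --format '{{.Repository}}:{{.Tag}}' | tr '\n' ' '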

Get debs:

# mkdir /home/test/debs
# find . | grep deb$ | xargs -I % cp % /home/test/debs/

Get temp files:

# ls /tmp/releases/
calicoctl                           images/                             kubectl-v1.16.3-amd64               
cni-plugins-linux-amd64-v0.8.1.tgz  kubeadm-v1.16.3-amd64               kubelet-v1.16.3-amd64     
# cp /tmp/releases/* /home/test/file/

More pkgs

Use the old deb repository for installing ansible:

$ cp old_1804debs.tar.xz ~/YourWebServer
$ tar xJvf old_1804debs.tar.xz
$ sudo vim /etc/apt/sources.list
deb [trusted=yes]  http://192.168.122.1/ansible_bionic ./
$ sudo apt-get update -y && sudo DEBIAN_FRONTEND=noninteractive apt-get install -y ansible python-netaddr

More packages should be installed manually and copied to /root/debs:

# apt-get install -y iputils-ping nethogs python-netaddr build-essential bind9 bind9utils nfs-common nfs-kernel-server ntpdate ntp tcpdump iotop unzip wget apt-transport-https socat rpcbind arping fping python-apt ipset ipvsadm pigz nginx docker-registry
# cd /root/debs
# wget http://209.141.35.192/netdata_1.18.1_amd64_bionic.deb
# apt-get install  ./netdata_1.18.1_amd64_bionic.deb
# find /var/cache | grep deb$ | xargs -I % cp % ./
# dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
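
Once Packages.gz is in place, the /root/debs directory can be served over HTTP and used as a flat repository on the offline nodes, in the same way as the ansible repo above (a sketch; the URL path is illustrative):

# vim /etc/apt/sources.list
deb [trusted=yes]  http://192.168.122.1/debs ./
# apt-get update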

Offline registry setup

On a running secureregistryserver host, do the following:

# systemctl stop secureregistryserver
# cd /opt/local/secureregistryserver/
# mv data data.back
# docker-compose up
# docker push xxxxx

The docker push commands are listed as follows (for v1.16.3):

docker push gcr.io/google-containers/k8s-dns-node-cache:1.15.8
docker push gcr.io/google-containers/kube-proxy:v1.16.3
docker push gcr.io/google-containers/kube-apiserver:v1.16.3
docker push gcr.io/google-containers/kube-controller-manager:v1.16.3
docker push gcr.io/google-containers/kube-scheduler:v1.16.3
docker push lachlanevenson/k8s-helm:v2.16.1
docker push gcr.io/kubernetes-helm/tiller:v2.16.1
docker push coredns/coredns:1.6.0
docker push calico/node:v3.7.3
docker push calico/cni:v3.7.3
docker push calico/kube-controllers:v3.7.3
docker push gcr.io/google_containers/metrics-server-amd64:v0.3.3
docker push gcr.io/google-containers/cluster-proportional-autoscaler-amd64:1.6.0
docker push gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker push quay.io/coreos/etcd:v3.3.10
docker push gcr.io/google-containers/addon-resizer:1.8.3
docker push gcr.io/google-containers/pause:3.1
docker push gcr.io/google_containers/pause-amd64:3.1
docker push xueshanf/install-socat:latest
docker push nginx:1.17
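
These pushes assume the images from the earlier k8simages.tar.xz archive are already present locally; if not, load them first (a sketch):

# xz -d k8simages.tar.xz
# docker load -i k8simages.tar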

Create docker.tar.gz:

# cd /opt/local/secureregistryserver/data
# tar czvf docker.tar.gz docker/
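
On the offline registry host, this archive is presumably unpacked back into the same data directory before the registry service is started again (a sketch, assuming the identical /opt/local/secureregistryserver layout):

# cd /opt/local/secureregistryserver/data
# tar xzvf docker.tar.gz
# systemctl start secureregistryserver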

Upgrade

Steps for upgrading from v1.15.3 to v1.16.3:

$ pwd
0_preinstall/roles/kube-deploy/files
$ ls
1604debs.tar.xz  1804debs.tar.xz  calicoctl-linux-amd64  cni-plugins-linux-amd64-v0.8.1.tgz  dns  docker-compose  docker.tar.gz  dockerDebs.tar.gz  gpg  hyperkube  kubeadm  nginx  ntp.conf

Generate 1804debs.tar.xz and replace:

# cp -r /root/debs ./Rong
# tar cJvf 1804debs.tar.xz Rong

Check the md5 of calicoctl/; it is the same, so no replacement is needed.

docker.tar.gz should be replaced with the newer one.

The Docker version is upgraded to 19.03.5, so we need to replace the old packages.

# tar xzvf dockerDebs.tar.gz  -C tmp/
ubuntu/dists/bionic/pool/stable/amd64/containerd.io_1.2.10-2_amd64.deb
ubuntu/dists/bionic/pool/stable/amd64/docker-ce-cli_19.03.3~3-0~ubuntu-bionic_amd64.deb
ubuntu/dists/bionic/pool/stable/amd64/docker-ce_18.09.7~3-0~ubuntu-bionic_amd64.deb

Use apt-mirror to sync the repository on an Internet-connected machine:

$ sudo vim /etc/apt/mirror.list
set base_path    /media/sda/tmp/apt-mirror
set nthreads     20
set _tilde 0
deb https://download.docker.com/linux/ubuntu bionic stable
deb https://download.docker.com/linux/ubuntu xenial stable
$ sudo apt-mirror

Too slow for the fucking gfw!!!

After apt-mirror finishes, we have to rsync the result using the following command:

$ pwd
/media/sda/tmp/apt-mirror/mirror/download.docker.com/linux/ubuntu
$ ls
dists
$ rsync -a -e 'ssh -p 2345 ' --progress dists/ root@192.168.111.11:/destination/ubuntu/dists/

wget the gpg file:

$ wget https://download.docker.com/linux/ubuntu/gpg
$ tar czvf dockerDebs.tar.gz gpg ubuntu/
$ ls -l -h dockerDebs.tar.gz
-rw-r--r-- 1 root root 144M Dec 23 17:41 dockerDebs.tar.gz
$ cp dockerDebs.tar.gz ~/0_preinstall/roles/kube-deploy/files

Binary replacement:

previous:
 hyperkube  kubeadm
current:
kubeadm-v1.16.3-amd64 kubectl-v1.16.3-amd64 kubelet-v1.16.3-amd64

Edit the file, since v1.16.3 no longer uses hyperkube:

$ vim deploy-ubuntu/tasks/main.yml
  - name: "upload static files to /usr/local/static"
    copy:
      src: "{{ item }}"
      dest: /usr/local/static/
      owner: root
      group: root
      mode: 0777
    with_items:
      #- files/hyperkube
      - files/calicoctl-linux-amd64
      - files/kubeadm-v1.16.3-amd64
      - files/kubectl-v1.16.3-amd64
      - files/kubelet-v1.16.3-amd64
      #- files/kubeadm
      - files/cni-plugins-linux-amd64-v0.8.1.tgz
      #- files/dockerDebs.tar.gz
      - files/gpg

Add sysctl items:

# vim ./roles/kubernetes/preinstall/tasks/0080-system-configurations.yml
- name: set fs inotify.max_user_watches to 1048576
  sysctl:
    sysctl_file: "{{ sysctl_file_path }}"
    name: fs.inotify.max_user_watches
    value: 1048576
    state: present
    reload: yes
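
After the playbook runs, the value can be verified on any node (plain sysctl usage):

# sysctl fs.inotify.max_user_watches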

Some files such as ./roles/kubernetes/preinstall/tasks/0000-xxx-ubuntu.yml were added; the modifications to the kubespray source code are minimal, and you can use bcompare to view the differences.

WorkingTipsOnKubesprayKongFuZi

Purpose

A Kubespray distribution for offline environments that completely ignores package management and Docker upgrades.

Key technical points

  1. Preparing the package repositories for offline use.
  2. Running ansible in a completely offline environment.

Environment preparation (unrelated to this article)

Ubuntu 16.04.2: after a minimal installation, package it into a vagrant box:

$ sudo vim /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
$ sudo useradd -m vagrant
$ sudo passwd vagrant
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
$ sudo mkdir -p /home/vagrant/.ssh
$ sudo chmod 0700 /home/vagrant/.ssh/
$ sudo vim /home/vagrant/.ssh/authorized_keys
$ sudo cat /home/vagrant/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key
$ sudo chown -R vagrant /home/vagrant/.ssh
$ sudo cp /home/test/.bashrc /home/vagrant/.bashrc 
$ sudo cp /home/test/.bash_logout /home/vagrant/.bash_logout
$ sudo cp /home/test/.profile /home/vagrant/.profile
$ sudo vim /home/vagrant/.profile 
add
[ -z "$BASH_VERSION" ] && exec /bin/bash -l
$ sudo chsh -s /bin/bash vagrant
$ sudo  vim /etc/ssh/sshd_config 
AuthorizedKeysFile .ssh/authorized_keys
$ sudo visudo -f /etc/sudoers.d/vagrant
vagrant ALL=(ALL) NOPASSWD:ALL
Defaults:vagrant !requiretty
$ sudo vim /etc/network/interfaces
change from ens3 to eth0
auto eth0
inet .....

After shutting down the machine, shrink the disk image, edit the Vagrantfile, and finally create the box:

$ sudo qemu-img convert -c -O qcow2  ubuntu160402.qcow2 ubuntu160402Shrink.qcow2
$ sudo vim metadata.json
{
"provider"     : "libvirt",
"format"       : "qcow2",
"virtual_size" : 80
}
$ sudo vim Vagrantfile
Vagrant.configure("2") do |config|
       config.vm.provider :libvirt do |libvirt|
       libvirt.driver = "kvm"
       libvirt.host = 'localhost'
       libvirt.uri = 'qemu:///system'
       end
config.vm.define "new" do |custombox|
       custombox.vm.box = "custombox"
       custombox.vm.provider :libvirt do |test|
       test.memory = 1024
       test.cpus = 1
       end
       end
end
$ sudo tar cvzf custom_box.box ./metadata.json ./Vagrantfile ./box.img

Add the box and check whether it is usable:

$ vagrant box add custom_box.box --name "ubuntu160402old"
$ vagrant init ubuntu160402old
$ vagrant up --provider=libvirt
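
To confirm the vagrant user and SSH key were set up correctly, log in and out once (standard vagrant usage):

$ vagrant ssh
$ exit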

Server implementation

Following the CoreOS mechanism, docker/docker-compose are installed as binaries. After installation, almost all services are started in containers:

ntp
harbor
ansible
dnsmasq
fileserver

QuickStartOfVagrantAndAnsible

Setup Environment

Use vagrant box list to get all of the boxes, then initialize the environment via (taking the rhel74 box as an example):

$ vagrant init rhel74

Add the cpus/memory customization values:

  config.vm.provider "libvirt" do |vb|
     vb.memory = "4096"
     vb.cpus = "4"
  end

Disable the rsync folder:

  config.vm.synced_folder ".", "/vagrant", disabled: true, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z'] , rsync__exclude: ['.git','venv']

Add the ansible provisioner:

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.become = true
  end

Your playbook.yml should look like the following:

---
- hosts: all
  gather_facts: false
  become: True
  tasks:
    - name: "Run shell for provision"
      shell: mkdir -p /root/tttt
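
The playbook runs automatically on vagrant up; to re-run it against an already running VM, use the standard provision command:

$ vagrant provision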

Manually Run ansible playbook

Vagrant creates the inventory file under the .vagrant folder:

$ cat .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
# Generated by Vagrant

default ansible_host=192.168.121.215 ansible_port=22 ansible_user='vagrant' ansible_ssh_private_key_file='/media/sda/Code/vagrant/dockerOnrhel74/.vagrant/machines/default/libvirt/private_key'

Then you can run the provision task like:

$ ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml

Managing the Cluster Build Environment with Terraform - 2

In the previous post, terraform could already create the base environment in batches, but there is still some work to do before reaching the actual cluster deployment step. So I later combined terraform with my own adaptation of Rong. With a pre-built qcow2 image, a Kubernetes cluster with an arbitrary number of nodes can be brought up quickly.

Prerequisites

The pre-built qcow2 image needs two packages installed: cloud-init and qemu-guest-agent. After installation, cloud-init must be enabled manually; later, when terraform creates VM instances, we can inject some information via cloud-init.

# systemctl enable cloud-init

On Debian 9.0, mkisofs needs to be installed; since mkisofs has been replaced by genisoimage, run the following:

# apt-get install -y genisoimage
# ln -s /usr/bin/genisoimage /usr/bin/mkisofs

Terraform needs the following plugins; among them, terraform-provider-libvirt has to be compiled manually on Debian 9.0:

# ls ~/.terraform.d/plugins/
terraform-provider-ansible  terraform-provider-libvirt  terraform-provider-template_v2.1.2_x4
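
A minimal sketch of building terraform-provider-libvirt from source, assuming a Go toolchain is installed (the exact build steps may differ between releases):

$ git clone https://github.com/dmacvicar/terraform-provider-libvirt.git
$ cd terraform-provider-libvirt
$ go build
$ cp terraform-provider-libvirt ~/.terraform.d/plugins/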

cloud-init file

The content of the cloud-init.cfg file is as follows:

#cloud-config
# https://cloudinit.readthedocs.io/en/latest/topics/modules.html
hostname: ${HOSTNAME}
users:
  - name: xxxxx
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/xxxxx
    shell: /bin/bash
    ssh-authorized-keys:
      - ssh-rsa xxxxxxxxxxxxxxxxxxxx
ssh_pwauth: True
disable_root: false
chpasswd:
  list: |
     xxxxx:linux
  expire: False

The only variable actually used is hostname: ${HOSTNAME}; the remaining steps create a user named xxxxx and change its password. These steps can be used later when the operating system needs deeper customization.

main.tf definition

main.tf is the core file that orchestrates the whole underlying infrastructure; its content is as follows:

################################################################################
#  vars definition
################################################################################
variable "VM_COUNT" {
  default = 10
  type = number
}

variable "VM_USER" {
  default = "developer"
  type = string
}

variable "VM_HOSTNAME" {
  default = "newnode"
  type = string
}

variable "VM_IMG_URL" {
  default = "http://1xx.xxx.xxx.xxx/xxxx180403cloudinit.img"
  type = string
}

variable "VM_IMG_FORMAT" {
  default = "qcow2"
  type = string
}

# https://www.ipaddressguide.com/cidr
variable "VM_CIDR_RANGE" {
  default = "10.10.10.0/24"
  type = string
}

variable "LIBVIRT_POOL_DIR" {
  default = "./.local/.docker-libvirt"
  type = string
}

#variable libvirt_host {
#  type = string
#  description = "IP address of host running libvirt"
#}
#
#variable instance_name {
#  type = string
#  description = "name of VM instance"
#}

variable pool_name {
  type = string
  default = "default"
  description = "name of pool to store disk and iso image"
}

#variable source_path {
#  type = string
#  description = "path to qcow2 base image, can be remote url or local disk path"
#}

variable disk_format {
  type = string
  default = "qcow2"
}

variable default_password {
  type = string
  default = "passw0rd"
  description = "default password to login to VM when running, it's recommended to disable this manually"
}

variable memory_size {
  type = string
  default = "5120"
  description = "memory size of VM"
}

variable num_cpu {
  default = 2
  description = "number of vCPU which VM has"
}

variable num_network_interface {
  default = 1
  description = "number of network interfaces which VM has"
}

variable private_network_bridge {
  type = string
  default = "virbr0"
  description = "existing network bridge on host that VM needs to connect to private network"
}

variable public_network_bridge {
  type = string
  default = "virbr1"
  description = "existing network bridge on host that VM needs to connect to public network"
}

#variable user_data {
#  type = string
#}

variable autostart {
  default = "true"
  type = string
}

################################################################################
# PROVIDERS
################################################################################

# instance the provider
provider "libvirt" {
  uri = "qemu:///system"
}

# If you want to call remote libvirt provider. 
#provider "libvirt" {
#  uri = "qemu+tcp://${var.libvirt_host}/system"
#}

################################################################################
# DATA TEMPLATES
################################################################################

# https://www.terraform.io/docs/providers/template/d/file.html

# https://www.terraform.io/docs/providers/template/d/cloudinit_config.html
data "template_file" "user_data" {
  count = var.VM_COUNT
  template = file("${path.module}/cloud_init.cfg")
  vars = {
    HOSTNAME = "${var.VM_HOSTNAME}-${count.index + 1}"
  }
}

#data "template_file" "network_config" {
#  template = file("${path.module}/network_config.cfg")
#}


################################################################################
# ANSIBLE ITEMS
################################################################################
resource "ansible_group" "kube-deploy" {
  inventory_group_name = "kube-deploy"
}

resource "ansible_group" "kube-master" {
  inventory_group_name = "kube-master"
}

resource "ansible_group" "kube-node" {
  inventory_group_name = "kube-node"
}

resource "ansible_group" "etcd" {
  inventory_group_name = "etcd"
}

resource "ansible_group" "k8s-cluster" {
  inventory_group_name = "k8s-cluster"
  children = ["kube-master", "kube-node"]
}

# if count > 3, then we have 3 etcds, 3 kube-masters, count kube-nodes

# The first node should be kube-deploy/kube-master/kube-node/etcd. 
resource "ansible_host" "deploynode" {
    groups = ["kube-master", "etcd", "kube-node", "kube-deploy"]
    inventory_hostname = "${var.VM_HOSTNAME}-1"
    vars = {
        ansible_user = "root"
        ansible_ssh_private_key_file = "./deploy.key"
        ansible_host = element(libvirt_domain.vm.*.network_interface.0.addresses.0, 0)
        ip = element(libvirt_domain.vm.*.network_interface.0.addresses.0, 0)
    }
    #provisioner "local-exec" {
    #  command = "sleep 40 && ansible-playbook -i  /etc/ansible/terraform.py cluster.yml --extra-vars @rong-vars.yml"
    #}
}

# Create 2(kube-master, etcd, kube-node) nodes, node2, node3
resource "ansible_host" "master" {
    count = var.VM_COUNT >= 3 ? 2 : var.VM_COUNT -1
    groups = var.VM_COUNT >= 3 ? ["kube-master", "etcd", "kube-node"] : ["kube-master", "kube-node"]
    #inventory_hostname = format("%s-%d", "node", count.index + 2)
    inventory_hostname = format("%s-%d", var.VM_HOSTNAME, count.index + 2)
    vars = {
        ansible_user = "root"
        ansible_ssh_private_key_file = "./deploy.key"
        ansible_host = element(libvirt_domain.vm.*.network_interface.0.addresses.0, count.index+1)
        ip = element(libvirt_domain.vm.*.network_interface.0.addresses.0, count.index+1)
    }
}

# others should be kube-nodes
resource "ansible_host" "worker" {
    count = var.VM_COUNT > 3 ? var.VM_COUNT - 3 : 0
    groups = ["kube-node"]
    #inventory_hostname = "node${count.index + 4}"
    #inventory_hostname = format("%s-%d", "node", count.index + 4)
    inventory_hostname = format("%s-%d", var.VM_HOSTNAME, count.index + 4)
    vars = {
        ansible_user = "root"
        ansible_ssh_private_key_file = "./deploy.key"
        ansible_host = element(libvirt_domain.vm.*.network_interface.0.addresses.0, count.index+3)
        ip = element(libvirt_domain.vm.*.network_interface.0.addresses.0, count.index+3)
    }
}

################################################################################
# RESOURCES
################################################################################
resource "libvirt_pool" "vm" {
  name = "${var.VM_HOSTNAME}_pool"
  type = "dir"
  path = abspath("${var.LIBVIRT_POOL_DIR}")
}

# We fetch the operating system disk image from the given url, as the base image.
resource "libvirt_volume" "vm_disk_image" {
  name   = "${var.VM_HOSTNAME}_disk_image.${var.VM_IMG_FORMAT}"
  # Or you could specify like `pool = "transfer"`
  pool   = libvirt_pool.vm.name
  source = var.VM_IMG_URL
  format = var.VM_IMG_FORMAT
}

// It will use the disk image fetched at `libvirt_volume.vm_disk_image` as the
//  base one to build the worker VM.
resource "libvirt_volume" "vm_worker" {
  count  = var.VM_COUNT
  name   = "worker_${var.VM_HOSTNAME}-${count.index + 1}.${var.VM_IMG_FORMAT}"
  base_volume_id = libvirt_volume.vm_disk_image.id
  pool   = libvirt_volume.vm_disk_image.pool
}

#*# Create a public network for the VMs
#*# https://www.ipaddressguide.com/cidrv
#*resource "libvirt_network" "vm_public_network" {
#*   name = "${var.VM_HOSTNAME}_network"
#*   autostart = true
#*   mode = "nat"
#*   domain = "${var.VM_HOSTNAME}.local"
#*
#*   # TODO: FIX CIDR ADDRESSES RANGE?
#*   # With `wait_for_lease` enabled, we get an error in the end of the VMs
#*   #  creation:
#*   #   - 'Requested operation is not valid: the address family of a host entry IP must match the address family of the dhcp element's parent'
#*   # But the VMs will be running and accessible via ssh.
#*   addresses = ["${var.VM_CIDR_RANGE}"]
#*
#*   dhcp {
#*    enabled = true
#*   }
#*   dns {
#*    enabled = true
#*   }
#*}

# for more info about parameters check this out
# https://github.com/dmacvicar/terraform-provider-libvirt/blob/master/website/docs/r/cloudinit.html.markdown
# Use CloudInit to add our ssh-key to the instance
# you can add also meta_data field
resource "libvirt_cloudinit_disk" "cloudinit" {
  count = var.VM_COUNT
  name           = "${var.VM_HOSTNAME}-${count.index + 1}_cloudinit.iso"
  #user_data      = data.template_file.user_data.rendered 
  user_data      = data.template_file.user_data[count.index].rendered
  pool           = libvirt_pool.vm.name
}



resource "libvirt_domain" "vm" {
  count  = var.VM_COUNT
  name   = "${var.VM_HOSTNAME}-${count.index + 1}"
  #memory      = "${var.memory_size}"
  memory      = var.memory_size
  #vcpu        = "${var.num_cpu}"
  vcpu        = var.num_cpu
  #autostart   = "${var.autostart}"
  autostart   = var.autostart

  # TODO: FIX qemu-ga?
  # qemu-ga needs to be installed and working inside the VM, and currently is
  #  not working. Maybe it needs some configuration.
  qemu_agent = true
  #cloudinit = "${libvirt_cloudinit_disk.cloudinit.id}"
  cloudinit = element(libvirt_cloudinit_disk.cloudinit.*.id, count.index)


  # attach network interface to default network(192.168.122.0/24)
  # Or we could specify a new networking created in resource and attached to it. 
  network_interface {
    network_name   = "default"
    hostname   = "${var.VM_HOSTNAME}-${count.index + 1}"
    wait_for_lease = true
  }

  #* Attached to our created network.
  #*network_interface {
  #*  #hostname = "${var.VM_HOSTNAME}-${count.index + 1}"
  #*  network_id = "${libvirt_network.vm_public_network.id}"
  #*  #network_name = "${libvirt_network.vm_public_network.name}"

  #*  #addresses = ["${cidrhost(libvirt_network.vm_public_network.addresses, count.index + 1)}"]
  #*  addresses = ["${cidrhost(var.VM_CIDR_RANGE, count.index + 1)}"]

  #*  # TODO: Fix wait for lease?
  #*  # qemu-ga must be running inside the VM. See notes above in `qemu_agent`.
  #*  wait_for_lease = true
  #*}

  graphics {
    type = "vnc"
    listen_type = "address"
    autoport = true
  }

  # IMPORTANT
  # Ubuntu can hang if an isa-serial device is not present at boot time.
  # If you find your CPU at 100% and the VM never becomes available, this is why.
  #
  # This is a known bug on cloud images, since they expect a console
  # we need to pass it:
  # https://bugs.launchpad.net/cloud-images/+bug/1573095
  console {
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }

  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

  disk {
    volume_id = element(libvirt_volume.vm_worker.*.id, count.index)
  }

}
################################################################################
# TERRAFORM CONFIG
################################################################################

terraform {
  required_version = ">= 0.12"
}

################################################################################
# TERRAFORM OUTPUT
################################################################################
#
output "ip" {
  value = "${libvirt_domain.vm.*.network_interface.0.addresses.0}"
}

The local-exec command, added to the deploy node resource:

    provisioner "local-exec" {
      command = "sleep 40 && ansible-playbook -i  /etc/ansible/terraform.py cluster.yml --extra-vars @rong-vars.yml"
    }
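
With main.tf and cloud_init.cfg in place, the cluster is brought up with the standard terraform workflow; the local-exec above then drives ansible against the generated inventory:

$ terraform init
$ terraform plan
$ terraform apply -auto-approve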

A line-by-line explanation follows: