Buildingterraform-provider-libvirt

Build terraform-provider-libvirt for Debian 9.0.

Steps:

Get system info:

root@debian:~# cat /etc/issue
Debian GNU/Linux 9 \n \l

root@debian:~# cat /etc/debian_version 
9.0

First wget a Terraform release and mv the binary to /usr/bin. A minimal sketch follows (the 0.11.14 version here is an assumption; pick whichever release you need), then we start building the plugin:
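
# wget https://releases.hashicorp.com/terraform/0.11.14/terraform_0.11.14_linux_amd64.zip
# unzip terraform_0.11.14_linux_amd64.zip
# mv terraform /usr/bin

With Terraform in place, configure the apt sources (Go 1.11 comes from stretch-backports) and build: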

# vim /etc/apt/sources.list
deb http://mirrors.163.com/debian/ stretch main non-free contrib
deb http://mirrors.163.com/debian/ stretch-updates main non-free contrib
deb http://mirrors.163.com/debian/ stretch-backports main non-free contrib
deb http://mirrors.163.com/debian-security/ stretch/updates main non-free contrib
# apt-get update -y
# apt-get install libvirt-dev git build-essential golang=2:1.11~1~bpo9+1 golang-doc=2:1.11~1~bpo9+1 golang-go=2:1.11~1~bpo9+1 golang-src=2:1.11~1~bpo9+1
# mkdir /root/go
# vim /root/.bashrc
export GOPATH=/root/go
export PATH=$PATH:$GOPATH/bin
# export CGO_ENABLED="1"
# mkdir -p $GOPATH/src/github.com/dmacvicar; cd $GOPATH/src/github.com/dmacvicar
# git clone https://github.com/dmacvicar/terraform-provider-libvirt.git
# cd $GOPATH/src/github.com/dmacvicar/terraform-provider-libvirt
# make install

After building, go to /root/go/bin and examine the built plugin:

root@debian:~/go/bin# ./terraform-provider-libvirt --version
./terraform-provider-libvirt e9ff32f1ec5825dcf05481cb7ef6a3b645696a4f-dirty
Compiled against library: libvirt 3.0.0
Using library: libvirt 3.0.0

The plugin is now compiled and ready to use on Debian 9.0.

Using Terraform to Manage the Cluster Build Environment

Environment

OS: Ubuntu 18.04.3
libvirtd (libvirt) 4.0.0

Quick Setup

Download terraform and move it into a system directory:

$ wget https://releases.hashicorp.com/terraform/0.12.17/terraform_0.12.17_linux_amd64.zip
$ unzip terraform_0.12.17_linux_amd64.zip
$ sudo mv terraform /usr/bin
$ terraform version
Terraform v0.12.17

Download terraform-provider-libvirt and complete the initialization (https://github.com/dmacvicar/terraform-provider-libvirt/releases):

$ wget https://github.com/dmacvicar/terraform-provider-libvirt/releases/download/v0.6.0/terraform-provider-libvirt-0.6.0+git.1569597268.1c8597df.Ubuntu_18.04.amd64.tar.gz
$ tar xzvf terraform-provider-libvirt-0.6.0+git.1569597268.1c8597df.Ubuntu_18.04.amd64.tar.gz
$ terraform init
Terraform initialized in an empty directory!

The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
$ mkdir -p ~/.terraform.d/plugins
$ cp terraform-provider-libvirt ~/.terraform.d/plugins/
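
To confirm Terraform can find a runnable plugin binary, invoke it directly with the same --version check used in the Debian build above:

$ ~/.terraform.d/plugins/terraform-provider-libvirt --version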

Creating the First Environment

Create a working directory:

$ mkdir -p ~/projects/terraform
$ cd ~/projects/terraform

Create a definition file named libvirt.tf describing the virtual machine to create on KVM:

provider "libvirt" {
  uri = "qemu:///system"
}

resource "libvirt_volume" "node1-qcow2" {
  name = "node1-qcow2"
  pool = "default"
  source = "/media/sda/rong_ubuntu_180403.qcow2"
  format = "qcow2"
}

# Define KVM domain to create
resource "libvirt_domain" "node1" {
  name   = "node1"
  memory = "10240"
  vcpu   = 2

  network_interface {
    network_name = "default"
  }

  disk {
    volume_id = libvirt_volume.node1-qcow2.id
  }

  console {
    type = "pty"
    target_type = "serial"
    target_port = "0"
  }

  graphics {
    type = "spice"
    listen_type = "address"
    autoport = true
  }
}

Initialize the terraform working directory, then generate and review the execution plan, then create the defined infrastructure:

$ terraform init
$ terraform plan
$ terraform apply
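
After apply finishes, the domain can be sanity-checked directly with virsh (assuming the default qemu:///system connection used by the provider):

$ sudo virsh list --all
$ sudo virsh domifaddr node1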

Destroy the infrastructure:

$ terraform destroy 

Both apply and destroy prompt you to answer yes; to skip the confirmation step, use:

$ terraform apply -auto-approve
$ terraform destroy -auto-approve

cloud-init

For this, refer to the Ubuntu example under the provider's examples directory.
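
For orientation, a minimal sketch of the core idea (the resource label commoninit and the cloud_init.cfg filename are illustrative, not taken from that example): a libvirt_cloudinit_disk resource renders your user-data into an ISO that can be attached to the domain. Appended to the .tf file via a heredoc:

$ cat >> libvirt.tf <<'EOF'
resource "libvirt_cloudinit_disk" "commoninit" {
  name      = "commoninit.iso"
  pool      = "default"
  user_data = file("${path.module}/cloud_init.cfg")
}
EOF

The disk is attached by setting cloudinit = libvirt_cloudinit_disk.commoninit.id inside the libvirt_domain block.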

multiple vms

A reference example is as follows:

provider "libvirt" {
  uri = "qemu:///system"
}

variable "hosts" {
  default = 2
}

variable "hostname_format" {
  type    = string
  default = "node%02d"
}

resource "libvirt_volume" "node-disk" {
  name             = "node-${format(var.hostname_format, count.index + 1)}.qcow2"
  count            = var.hosts
  base_volume_name = "xxxxx180403_vagrant_box_image_0.img"
  pool             = "default"
  format           = "qcow2"
}

resource "libvirt_domain" "node" {
  count  = var.hosts
  name   = format(var.hostname_format, count.index + 1)
  vcpu   = 1
  memory = 2048

  disk {
    volume_id = element(libvirt_volume.node-disk.*.id, count.index)
  }

  network_interface {
    network_name   = "default"
    mac            = "52:54:00:00:00:a${count.index + 1}"
    wait_for_lease = true
  }

  graphics {
    type = "spice"
    listen_type = "address"
    autoport = true
  }
}

terraform {
  required_version = ">= 0.12"
}
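
The host count can then be changed at apply time without editing the file, using Terraform's standard variable override (keep it at nine or fewer hosts, so the interpolated MAC suffix a${count.index + 1} stays a valid hex digit):

$ terraform apply -var="hosts=4" -auto-approve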

Note that this definition file relies on DHCP address reservations, so we need to define DHCP rules such as the following on the default network:

$ sudo virsh net-dumpxml --network default
<network>
  <name>default</name>
  <uuid>c71715ac-90b5-483a-bb1c-6a40a5af1b56</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:92:5c:47'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
      <host mac='52:54:00:00:00:a1' name='node01' ip='192.168.122.171'/>
      <host mac='52:54:00:00:00:a2' name='node02' ip='192.168.122.172'/>
      <host mac='52:54:00:00:00:a3' name='node03' ip='192.168.122.173'/>
      <host mac='52:54:00:00:00:a4' name='node04' ip='192.168.122.174'/>
      <host mac='52:54:00:00:00:a5' name='node05' ip='192.168.122.175'/>
      <host mac='52:54:00:00:00:a6' name='node06' ip='192.168.122.176'/>
      <host mac='52:54:00:00:00:a7' name='node07' ip='192.168.122.177'/>
      <host mac='52:54:00:00:00:a8' name='node08' ip='192.168.122.178'/>
      <host mac='52:54:00:00:00:a9' name='node09' ip='192.168.122.179'/>
      <host mac='52:54:00:00:00:aa' name='node10' ip='192.168.122.180'/>
    </dhcp>
  </ip>
</network>

Redefine the rules as follows:

$ sudo virsh net-dumpxml --network default>default.xml
(edit default.xml, adding the <host> reservations shown above)
$ sudo virsh net-define ./default.xml
Re-check the rules:
$ sudo virsh net-dumpxml --network default

Once this is defined, the VMs declared in the tf file will obtain their reserved IP addresses from the default network via DHCP, which makes the subsequent cluster deployment easier.

To check whether the IPs have been assigned:

$ sudo virsh net-dhcp-leases default
 Expiry Time          MAC address        Protocol  IP address                Hostname        Client ID or DUID
-------------------------------------------------------------------------------------------------------------------
 2019-12-03 15:53:49  52:54:00:00:00:a1  ipv4      192.168.122.171/24        node01          01:52:54:00:00:00:a1
 2019-12-03 15:53:49  52:54:00:00:00:a2  ipv4      192.168.122.172/24        node02          01:52:54:00:00:00:a2

After redefining the network, it must be restarted manually before the changes take effect.
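
A restart sketch (note this briefly tears down virbr0, so do it while no guest depends on the network):

$ sudo virsh net-destroy default
$ sudo virsh net-start default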

CreateVagrantBoxFromQCOW2

Machine Preparation

Create a libvirt machine and install the operating system.

Add user vagrant:

# adduser vagrant
# visudo -f /etc/sudoers.d/vagrant
vagrant ALL=(ALL) NOPASSWD:ALL
# visudo
vagrant ALL=(ALL) NOPASSWD:ALL
Defaults:vagrant	!requiretty
# mkdir -p /home/vagrant/.ssh
# chmod 0700 /home/vagrant/.ssh
# wget --no-check-certificate \
https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub \
-O /home/vagrant/.ssh/authorized_keys
# chmod 0600 /home/vagrant/.ssh/authorized_keys
# chown -R vagrant /home/vagrant/.ssh
# vim /home/vagrant/.profile
add the line:
[ -z "$BASH_VERSION" ] && exec /bin/bash -l
# chsh -s /bin/bash vagrant

Change the ethernet card from ens* to eth0:

# vim /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
# grub-mkconfig -o /boot/grub/grub.cfg

Change the netplan rules:

# vim /etc/netplan/01-netcfg.yaml 
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: yes
      dhcp-identifier: mac

Finally change the sshd configuration:

# vim /etc/ssh/sshd_config 
AuthorizedKeysFile .ssh/authorized_keys

For Ubuntu 20.04, you have to manually install ifupdown:

# apt-get install -y ifupdown

Now shut down the machine and continue with packaging.

Packaging

Shrinking the qcow2 file:

# qemu-img convert -c -O qcow2 test180403.qcow2 test180403shrunk.qcow2
# mv test180403shrunk.qcow2 box.img
# vim metadata.json
{
  "provider"     : "libvirt",
  "format"       : "qcow2",
  "virtual_size" : 40
}
# vim Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    libvirt.driver = "kvm"
    libvirt.host = 'localhost'
    libvirt.uri = 'qemu:///system'
  end
  config.vm.define "new" do |custombox|
    custombox.vm.box = "custombox"
    custombox.vm.provider :libvirt do |test|
      test.memory = 1024
      test.cpus = 1
    end
  end
end
# tar cvzf custom_box.box ./metadata.json ./Vagrantfile ./box.img
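
Before adding the box, you can list the archive to make sure all three files made it in:

# tar tzf custom_box.box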

Testing

Add vagrant box via:

# vagrant box add custom_box.box --name "chuobi"
# vagrant init chuobi
# vagrant up --provider=libvirt
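
To confirm the box boots and the vagrant insecure key logs in, run a command over ssh:

# vagrant ssh -c 'hostname'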

ThinkingOnDev

  1. Node data reporting: a monitoring client solution.
  2. Data aggregation: what kind of database and architecture to use for storing the data.
  3. Data presentation: what kind of front end and control interface to use for displaying and working with the data.

WorkingTipsOnGravitee

AIM

Deploy Gravitee on a Kubernetes cluster and use it as the cluster's API gateway.

Ingress-Controller

The kubespray configuration for deploying the nginx-ingress-controller is listed as follows:

ingress_nginx_enabled: true
ingress_nginx_host_network: true
ingress_nginx_nodeselector:
  kubernetes.io/hostname: "tsts-2"

Specify tsts-2 as the ingress entry machine, because ports 80 and 443 are already occupied on some of the nodes.
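
One way to check which nodes already have 80/443 bound before picking the entry node (run on each candidate node):

# ss -ltn | awk '$4 ~ /:(80|443)$/'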

Run the task:

# ansible-playbook -i inventory/kkkk/hosts.ini cluster.yml --extra-vars @kkkk-vars.yml --tags ingress-controller

Verify that the ingress controller is deployed:

# kubectl get pods -n ingress-nginx
NAME                             READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-b959g   1/1     Running   0          4d16h

Helm/Charts installation

Use the helm charts for deploying the Gravitee APIM.

# git clone https://github.com/gravitee-io/helm-charts.git
# cd helm-charts/apim
# helm repo update
# helm dependency update .

After the dependencies are updated, the folder structure looks like this:

➜  apim tree 
.
├── charts
│   ├── elasticsearch-1.32.0.tgz
│   └── mongodb-replicaset-3.10.1.tgz
├── Chart.yaml
├── NOTES.txt
├── README.md
├── requirements.lock
├── requirements.yaml
├── templates
│   ├── api-autoscaler.yaml
│   ├── api-configmap.yaml
│   ├── api-deployment.yaml
│   ├── api-ingress.yaml
│   ├── api-service.yaml
│   ├── gateway-autoscaler.yaml
│   ├── gateway-configmap.yaml
│   ├── gateway-deployment.yaml
│   ├── gateway-ingress.yaml
│   ├── gateway-service.yaml
│   ├── _helpers.tpl
│   ├── ui-autoscaler.yaml
│   ├── ui-configmap.yaml
│   ├── ui-deployment.yaml
│   ├── ui-ingress.yaml
│   └── ui-service.yaml
└── values.yaml

Configure the helm/charts values:

# vim values.yaml
//.................
mongo:
  rs: rs0
  rsEnabled: true
  dbhost: gravitee45-mongodb-replicaset
//.................
mongodb-replicaset:
  enabled: true
  replicas: 1
//.................
  persistentVolume:
    enabled: false
//.................

es:
//.................
  endpoints:
    - http://gravitee45-elasticsearch-client.default.svc.cluster.local:9200

//.................
elasticsearch:
  enabled: true
  cluster:
    name: "elasticsearch"

//.................
  master: 
//.................
    persistence:
      enabled: false
//.................
  data:
//.................
    persistence:
      enabled: false

//.................

api:
  enabled: true
  name: api
  logging:
    debug: false
  restartPolicy: OnFailure
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  replicaCount: 1
  image:
    repository: graviteeio/management-api
    tag: 1.29.5
    pullPolicy: IfNotPresent
  service:
    type: ClusterIP
    externalPort: 83
    internalPort: 8083
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 1

//.....................

gateway:
  enabled: true
  type: Deployment
  name: gateway
  logging:
    debug: false
  replicaCount: 2
  # sharding_tags: 
  # tenant:
  websocket: false
  image:
    repository: graviteeio/gateway
    tag: 1.29.5
    pullPolicy: IfNotPresent
  service:
    type: ClusterIP
    externalPort: 82
    internalPort: 8082
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 1

//.......................
ui:
  enabled: true
  name: ui
  title: API Portal
  managementTitle: API Management
  documentationLink: http://docs.gravitee.io/
  scheduler:
    tasks: 10
  theme:
    name: "default"
    logo: "themes/assets/GRAVITEE_LOGO1-01.png"
    loader: "assets/gravitee_logo_anim.gif"
  portal:
    apikeyHeader: "X-Gravitee-Api-Key"
    devMode:
      enabled: false
    userCreation:
      enabled: false
    support:
      enabled: true
    rating:
      enabled: false
    analytics:
      enabled: false
      trackingId: ""
  replicaCount: 1
  image:
    repository: graviteeio/management-ui
    tag: 1.29.5
    pullPolicy: IfNotPresent
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 1
//............

Also replace every occurrence of apim.example.com with apim.company.com.
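
A quick way to do that replacement across the chart, assuming the hostname appears only in plain-text chart files:

# grep -rl 'apim.example.com' . | xargs sed -i 's/apim.example.com/apim.company.com/g'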

Then install the charts via:

# helm install --name gravitee45 .
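
The release can then be inspected with Helm 2 syntax (matching the --name flag above):

# helm status gravitee45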

Examine the ingress via:

root@tsts-1:~/apim# kubectl get ingress
NAME                       HOSTS              ADDRESS          PORTS     AGE
gravitee45-apim-api        apim.company.com   10.147.191.192   80, 443   19h
gravitee45-apim-firstapi   apim.company.com   10.147.191.192   80, 443   17h
gravitee45-apim-gateway    apim.company.com   10.147.191.192   80, 443   19h
gravitee45-apim-ui         apim.company.com   10.147.191.192   80, 443   19h

Check the pods via:

root@tsts-1:~/apim# kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
gravitee45-apim-api-7bfd555fbb-95cqz               1/1     Running   0          19h
gravitee45-apim-gateway-5757b5d6bf-gzstz           1/1     Running   0          19h
gravitee45-apim-ui-66ddddfd7f-ssl9z                1/1     Running   0          19h
gravitee45-elasticsearch-client-77cb95bc9f-8bdt8   1/1     Running   0          19h
gravitee45-elasticsearch-client-77cb95bc9f-xjxvs   1/1     Running   0          19h
gravitee45-elasticsearch-data-0                    1/1     Running   0          19h
gravitee45-elasticsearch-data-1                    1/1     Running   0          19h
gravitee45-elasticsearch-master-0                  1/1     Running   0          19h
gravitee45-elasticsearch-master-1                  1/1     Running   0          19h
gravitee45-elasticsearch-master-2                  1/1     Running   0          19h
gravitee45-mongodb-replicaset-0                    1/1     Running   0          19h

Test api

Run a local test API, such as the echo API from the Gravitee website:

# docker run -d --name echo -p 18080:8080 graviteeio/gravitee-echo-api:latest

Test via:

# curl http://xxx.xxx.xxx.xxx:18080/
{
  "headers" : {
    "Host" : "xxx.xxx.xxx.xxx:18080",
    "User-Agent" : "curl/7.52.1",
    "Accept" : "*/*"
  }
}

API management

Open your browser and visit https://apim.company.com:

/images/2019_11_13_10_45_50_1020x399.jpg

Click login and login with admin/admin:

/images/2019_11_13_10_46_13_434x376.jpg

Click Administration:

/images/2019_11_13_10_46_42_205x303.jpg

Click +:

/images/2019_11_13_10_47_35_353x306.jpg

Click `->` and create a new API:

/images/2019_11_13_10_48_21_462x302.jpg

Set the name to firstapi, the version to 1.0, write some description, set the context-path to /firstapi, then click NEXT:

/images/2019_11_13_10_49_31_583x441.jpg

Set the gateway backend to our test API, then click NEXT:

/images/2019_11_13_10_50_28_557x278.jpg

Write some description for the plan. Note that the security type should be API Key; you can also specify the rate limit and quota here. After configuration, click NEXT:

/images/2019_11_13_10_52_05_564x435.jpg

You can add API documentation here; we skip it for now and click SKIP:

/images/2019_11_13_10_53_25_556x256.jpg

Here you can adjust the parameters; if everything is OK, click CREATE AND START THE API:

/images/2019_11_13_10_54_25_862x648.jpg

Confirm with CREATE:

/images/2019_11_13_10_55_38_424x156.jpg

The API will be created and shown like this:

/images/2019_11_13_10_56_08_644x727.jpg

Click PUBLISH THE API and MAKE PUBLIC to publish this API:

/images/2019_11_13_10_56_54_646x255.jpg

Next we will create an application that consumes this API; click Applications:

/images/2019_11_13_10_58_33_211x270.jpg

Click + to add a new application:

/images/2019_11_13_10_59_37_655x233.jpg

Write some description for this new app, and click NEXT:

/images/2019_11_13_11_00_12_421x320.jpg

Specify web for the application type, then click NEXT:

/images/2019_11_13_11_00_44_519x360.jpg

Now we subscribe to our newly created API on this screen:

/images/2019_11_13_11_01_24_674x343.jpg

Click firstapi 1.0:

/images/2019_11_13_11_01_43_597x312.jpg

Click REQUEST FOR SUBSCRIPTION to subscribe to this API:

/images/2019_11_13_11_02_22_392x532.jpg

Check the SUBSCRIBED button and click NEXT:

/images/2019_11_13_11_03_38_626x259.jpg

Click CREATE THE APPLICATION to finish creating the app:

/images/2019_11_13_11_04_36_452x385.jpg

Click CREATE:

/images/2019_11_13_11_04_49_361x174.jpg

You should approve the subscription:

/images/2019_11_13_11_05_18_223x289.jpg

View the task:

/images/2019_11_13_11_08_16_756x256.jpg

Click ACCEPT to approve the subscription:

/images/2019_11_13_11_09_16_884x403.jpg

If you don't need to set a validity period, just click CREATE:

/images/2019_11_13_11_09_32_399x346.jpg

A new API key will be generated:

/images/2019_11_13_11_10_16_643x669.jpg

Now the API has been created and the app can consume it. Record this API key: db811f84-8717-4766-b2f5-a2b09574bc80; we will use it later.

Add ingress item

Since we use an ingress controller to control how services are exposed, we have to add an ingress item for accessing /firstapi:

# kubectl get ingress gravitee45-apim-gateway -oyaml>firstapi.yaml

/images/2019_11_13_11_17_04_629x424.jpg

Modify the ingress path and name:

line 18: change the name to gravitee45-apim-firstapi
line 22: delete the uid line
line 31: change the path to /firstapi

Create the ingress:

# kubectl apply -f firstapi.yaml
ingress.extensions/gravitee45-apim-firstapi created

Consuming API

On a node outside of the k8s cluster, do the following:

# curl -ki -H "X-Gravitee-Api-Key: db811f84-8717-4766-b2f5-a2b09574bc80" https://apim.company.com/firstapi
HTTP/2 200
server: openresty/1.15.8.1
date: Wed, 13 Nov 2019 03:14:12 GMT
content-type: application/json
content-length: 536
vary: Accept-Encoding
x-gravitee-transaction-id: fc46603c-f4d8-4c60-8660-3cf4d8cc608d
strict-transport-security: max-age=15724800; includeSubDomains

{
  "headers" : {
    "Host" : "xxx.xxx.xxx.xxx:18080",
    "X-Request-ID" : "156ec51c42f84b52ae5d9e36b3efeeef",
    "X-Real-IP" : "10.147.191.1",
    "X-Forwarded-For" : "10.147.191.1",
    "X-Forwarded-Host" : "apim.company.com",
    "X-Forwarded-Port" : "443",
    "X-Forwarded-Proto" : "https",
    "X-Original-URI" : "/firstapi",
    "X-Scheme" : "https",
    "user-agent" : "curl/7.52.1",
    "accept" : "*/*",
    "X-Gravitee-Transaction-Id" : "fc46603c-f4d8-4c60-8660-3cf4d8cc608d",
    "accept-encoding" : "deflate, gzip"
  }
}

Write a script to generate continuous traffic:

while true
do
curl -ki -H "X-Gravitee-Api-Key: db811f84-8717-4766-b2f5-a2b09574bc80" https://apim.company.com/firstapi
sleep 0.1
done

dashboard

View the dashboard:

/images/2019_11_13_11_26_45_990x566.jpg

View the detailed statistics in dashboard:

/images/2019_11_13_11_28_13_811x439.jpg

From the statistics page we can easily see which applications consume which APIs and how heavily; the status of the service is also visible on this page.