TipsOnGreenInstallationMonitoring

AIM

Monitoring using a green (portable, self-contained) installation of Prometheus node_exporter and Netdata.

Netdata

Download the .gz.run package from the official release:

# curl -L https://github.com/netdata/netdata/releases/download/v1.19.0/netdata-v1.19.0.gz.run -o netdata-v1.19.0.gz.run
# chmod +x *.run
# ./netdata-v1.19.0.gz.run --accept

Todo:

Why can it not be installed via curl xxx/xxx.gz.run | bash? (Most likely because the makeself stub extracts the archive appended to its own file via $0, and there is no such file when the script is streamed into bash through a pipe.)

Tip: to pass parameters to a script piped into bash:

# curl xxxx/xxx.gz.run | bash -s -- --accept

Prometheus

Install makeself via apt-get install -y makeself; later we will use it to create the install .run package.

Folder structure:

$ tree node_exporter 
node_exporter
├── install_node_exporter.sh
└── node_exporter

Edit the install_node_exporter.sh file:

#!/bin/sh -e

_check_root () {
    if [ $(id -u) -ne 0 ]; then
        echo "Please run as root" >&2;
        exit 1;
    fi
}

_check_root

mkdir -p /opt/node_exporter
cp node_exporter /opt/node_exporter/

if [ -x "$(command -v systemctl)" ]; then
    cat << EOF > /lib/systemd/system/node-exporter.service
[Unit]
Description=Prometheus agent
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
ExecStart=/opt/node_exporter/node_exporter

[Install]
WantedBy=multi-user.target
EOF

    systemctl enable node-exporter
    systemctl start node-exporter
elif [ -x "$(command -v chckconfig)" ]; then
    cat << EOF >> /etc/inittab
::respawn:/opt/node_exporter/node_exporter
EOF
elif [ -x "$(command -v initctl)" ]; then
    cat << EOF > /etc/init/node-exporter.conf
start on runlevel [23456]
stop on runlevel [016]
exec /opt/node_exporter/node_exporter
respawn
EOF

    initctl reload-configuration
    stop node-exporter || true && start node-exporter
else
    echo "No known service management found" >&2;
    exit 1;
fi

The node_exporter binary itself is downloaded from the GitHub releases page.
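
For example (assuming the linux-amd64 build of node_exporter v0.18.1 from the official GitHub releases; adjust the version as needed):

# wget https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-amd64.tar.gz
# tar xzvf node_exporter-0.18.1.linux-amd64.tar.gz
# cp node_exporter-0.18.1.linux-amd64/node_exporter ./node_exporter/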

Make install.run:

# makeself ./node_exporter ./node_exporter_0.18.1.run "SFX installer for node_exporter(0.18.1)" ./install_node_exporter.sh

This gives us the run file for installation:

$ ls
node_exporter/  node_exporter_0.18.1.run

We can install it via ./node_exporter_0.18.1.run.
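
Before running it on a target host, the self-extracting archive can be inspected with the options provided by the makeself stub:

# ./node_exporter_0.18.1.run --info    # show label and compression
# ./node_exporter_0.18.1.run --list    # list the embedded files
# ./node_exporter_0.18.1.run --check   # verify the embedded checksum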

Post-installation

Be sure to open the ports blocked by the firewall; take CentOS 6 for example:

# iptables -I INPUT -p tcp --dport 9100 -j ACCEPT
# iptables -I INPUT -p tcp --dport 19999 -j ACCEPT
# service iptables save
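
A quick check that both services answer on their ports (node_exporter serves /metrics on 9100, Netdata its web dashboard on 19999):

# curl -s http://localhost:9100/metrics | head -n 3
# curl -sI http://localhost:19999/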

On CentOS 6, Netdata makes the system hang for about 1 minute during restart.

VerveBuds115OnUbuntu

Steps for configuring the VerveBuds 115 Bluetooth headset on Ubuntu 18.04:

Blueman

Use blueman for configuring the headset connection.

# sudo apt-get install -y blueman

Add blueman to awesome’s autostart:

# cat ~/.config/awesome/rc.lua | grep blueman
run_once("blueman-applet &")

Configure blueman:

/images/2019_12_30_09_22_31_969x154.jpg

Choose the A2DP profile.
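
The same profile switch can also be done from the command line with pactl; the card name below is only a placeholder, find the real one with pactl list cards short:

$ pactl list cards short
$ pactl set-card-profile bluez_card.XX_XX_XX_XX_XX_XX a2dp_sink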

Sound

Ubuntu 18.04 uses PulseAudio as the default sound backend, so we use the following tools for configuring the sound:

# sudo apt-get install -y pasystray
# sudo apt-get install -y pnmixer

Also add them to awesome’s startup functions, so that after system bootup we can find the volume controls in the systray:

# vim ~/.config/awesome/rc.lua
.....
run_once("blueman-applet &")
run_once("pnmixer &")
run_once("pasystray &")

Bug: with pnmixer we can only control the Intel PCH output, not the Bluetooth device?
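
As a workaround, the Bluetooth sink can be controlled directly with pactl (the sink name is a placeholder; list the real one with pactl list sinks short):

$ pactl list sinks short
$ pactl set-default-sink bluez_sink.XX_XX_XX_XX_XX_XX
$ pactl set-sink-volume @DEFAULT_SINK@ 70%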

Kubespray2.12.0OfflineNotes

Steps

Download the source code:

# wget https://github.com/kubernetes-sigs/kubespray/archive/v2.12.0.tar.gz

Install ansible via the old Rong/ repository:

$ scp -r Rong test@192.168.121.104:/home/test/
$ cd ~/Rong
$ sudo mv /etc/apt/sources.list /home/test/
$ sudo ./bootstrap.sh
$ sudo mv /home/test/sources.list /etc/apt/

Change options:

$ cd ~/kubespray-2.12.0
$ cp ../deploy.key .
$ ssh -i deploy.key root@192.168.121.104
$ exit
$ cp -rfp inventory/sample/ inventory/rong
$ vim inventory/rong/hosts.ini
[all]
kubespray ansible_host=192.168.121.104 ansible_ssh_user=root ansible_ssh_private_key_file=./deploy.key  ip=192.168.121.104

[kube-master]
kubespray

[etcd]
kubespray

[kube-node]
kubespray

[k8s-cluster:children]
kube-master
kube-node

Add some configuration:

$ vim inventory/rong/group_vars/k8s-cluster/addons.yml
dashboard_enabled: true
helm_enabled: true
metrics_server_enabled: true

Speedup

To cross the GFW, on the host machine side:

$ sudo iptables -t nat -A PREROUTING -p tcp -s 192.168.121.0/24 -j DNAT --to-destination 127.0.0.1:12345
$ sudo sysctl -w net.ipv4.conf.all.route_localnet=1

On the VM side:

$ sudo vim /etc/resolv.conf
nameserver 223.5.5.5
nameserver 8.8.8.8
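
A quick sanity check from inside the VM that name resolution and the redirected traffic both work (the host queried here is arbitrary; this is just a sketch):

$ getent hosts gcr.io
$ curl -sI https://gcr.io | head -n 1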

Setup Cluster

Via:

$ ansible-playbook -i inventory/rong/hosts.ini cluster.yml

Fetch things

Get all of the images:

# docker pull xueshanf/install-socat:latest
# docker images | sed -n '1!p' | awk '{print $1":"$2}' | tr '\n' ' '
nginx:1.17 gcr.io/google-containers/k8s-dns-node-cache:1.15.8 gcr.io/google-containers/kube-proxy:v1.16.3 gcr.io/google-containers/kube-apiserver:v1.16.3 gcr.io/google-containers/kube-controller-manager:v1.16.3 gcr.io/google-containers/kube-scheduler:v1.16.3 lachlanevenson/k8s-helm:v2.16.1 gcr.io/kubernetes-helm/tiller:v2.16.1 coredns/coredns:1.6.0 calico/node:v3.7.3 calico/cni:v3.7.3 calico/kube-controllers:v3.7.3 gcr.io/google_containers/metrics-server-amd64:v0.3.3 gcr.io/google-containers/cluster-proportional-autoscaler-amd64:1.6.0 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1 quay.io/coreos/etcd:v3.3.10 gcr.io/google-containers/addon-resizer:1.8.3 gcr.io/google-containers/pause:3.1 gcr.io/google_containers/pause-amd64:3.1 xueshanf/install-socat:latest
# docker save -o k8simages.tar nginx:1.17 gcr.io/google-containers/k8s-dns-node-cache:1.15.8 gcr.io/google-containers/kube-proxy:v1.16.3 gcr.io/google-containers/kube-apiserver:v1.16.3 gcr.io/google-containers/kube-controller-manager:v1.16.3 gcr.io/google-containers/kube-scheduler:v1.16.3 lachlanevenson/k8s-helm:v2.16.1 gcr.io/kubernetes-helm/tiller:v2.16.1 coredns/coredns:1.6.0 calico/node:v3.7.3 calico/cni:v3.7.3 calico/kube-controllers:v3.7.3 gcr.io/google_containers/metrics-server-amd64:v0.3.3 gcr.io/google-containers/cluster-proportional-autoscaler-amd64:1.6.0 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1 quay.io/coreos/etcd:v3.3.10 gcr.io/google-containers/addon-resizer:1.8.3 gcr.io/google-containers/pause:3.1 gcr.io/google_containers/pause-amd64:3.1 xueshanf/install-socat:latest; xz -T4 k8simages.tar
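
On the offline side the images can later be restored from this archive (a sketch, assuming k8simages.tar.xz has been copied to the offline machine):

# xz -d k8simages.tar.xz
# docker load -i k8simages.tar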

Get debs:

# mkdir /home/test/debs
# find . | grep deb$ | xargs -I % cp % /home/test/debs/

Get temp files:

# ls /tmp/releases/
calicoctl                           images/                             kubectl-v1.16.3-amd64               
cni-plugins-linux-amd64-v0.8.1.tgz  kubeadm-v1.16.3-amd64               kubelet-v1.16.3-amd64     
# cp -r /tmp/releases/* /home/test/file/

More pkgs

Use the old deb repository for installing ansible:

$ cp old_1804debs.tar.xz ~/YourWebServer
$ tar xJvf old_1804debs.tar.xz
$ sudo vim /etc/apt/sources.list
deb [trusted=yes]  http://192.168.122.1/ansible_bionic ./
$ sudo apt-get update -y && sudo DEBIAN_FRONTEND=noninteractive apt-get install -y ansible python-netaddr

More packages should be installed manually and copied to /root/debs:

# apt-get install -y iputils-ping nethogs python-netaddr build-essential bind9 bind9utils nfs-common nfs-kernel-server ntpdate ntp tcpdump iotop unzip wget apt-transport-https socat rpcbind arping fping python-apt ipset ipvsadm pigz nginx docker-registry
# cd /root/debs
# wget http://209.141.35.192/netdata_1.18.1_amd64_bionic.deb
# apt-get install  ./netdata_1.18.1_amd64_bionic.deb
# find /var/cache | grep deb$ | xargs -I % cp % ./
# dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
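
A quick way to sanity-check the generated index is to count the packages it lists; the directory can then be served and consumed just like the ansible_bionic repository above:

# zcat Packages.gz | grep -c '^Package:'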

Offline registry setup

On a running secure registry server, do the following:

# systemctl stop secureregistryserver
# cd /opt/local/secureregistryserver/
# mv data data.back
# docker-compose up
# docker push xxxxx

The docker push commands are listed below (for v1.16.3):

docker push gcr.io/google-containers/k8s-dns-node-cache:1.15.8
docker push gcr.io/google-containers/kube-proxy:v1.16.3
docker push gcr.io/google-containers/kube-apiserver:v1.16.3
docker push gcr.io/google-containers/kube-controller-manager:v1.16.3
docker push gcr.io/google-containers/kube-scheduler:v1.16.3
docker push lachlanevenson/k8s-helm:v2.16.1
docker push gcr.io/kubernetes-helm/tiller:v2.16.1
docker push coredns/coredns:1.6.0
docker push calico/node:v3.7.3
docker push calico/cni:v3.7.3
docker push calico/kube-controllers:v3.7.3
docker push gcr.io/google_containers/metrics-server-amd64:v0.3.3
docker push gcr.io/google-containers/cluster-proportional-autoscaler-amd64:1.6.0
docker push gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker push quay.io/coreos/etcd:v3.3.10
docker push gcr.io/google-containers/addon-resizer:1.8.3
docker push gcr.io/google-containers/pause:3.1
docker push gcr.io/google_containers/pause-amd64:3.1
docker push xueshanf/install-socat:latest
docker push nginx:1.17

Tar up docker.tar.gz:

# cd /opt/local/secureregistryserver/data
# tar czvf docker.tar.gz docker/
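
To seed a fresh offline registry from this archive, the reverse operation is roughly (assuming the same directory layout as above):

# systemctl stop secureregistryserver
# tar xzvf docker.tar.gz -C /opt/local/secureregistryserver/data/
# systemctl start secureregistryserver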

Upgrade

From v1.15.3 to v1.16.3, steps:

$ pwd
0_preinstall/roles/kube-deploy/files
$ ls
1604debs.tar.xz  1804debs.tar.xz  calicoctl-linux-amd64  cni-plugins-linux-amd64-v0.8.1.tgz  dns  docker-compose  docker.tar.gz  dockerDebs.tar.gz  gpg  hyperkube  kubeadm  nginx  ntp.conf

Generate 1804debs.tar.xz and replace:

# cp -r /root/debs ./Rong
# tar cJvf 1804debs.tar.xz Rong

Calculate the md5 of calicoctl; it is the same, so no replacement is needed.
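
For example (the paths are illustrative, comparing the copy shipped in files/ with the one fetched into /tmp/releases/):

# md5sum files/calicoctl-linux-amd64 /tmp/releases/calicoctl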

docker.tar.gz should be replaced with the newer one.

The Docker version is upgraded to 19.03.5, so we need to replace the old debs.

# tar xzvf dockerDebs.tar.gz  -C tmp/
ubuntu/dists/bionic/pool/stable/amd64/containerd.io_1.2.10-2_amd64.deb
ubuntu/dists/bionic/pool/stable/amd64/docker-ce-cli_19.03.3~3-0~ubuntu-bionic_amd64.deb
ubuntu/dists/bionic/pool/stable/amd64/docker-ce_18.09.7~3-0~ubuntu-bionic_amd64.deb

Use apt-mirror on an internet-connected machine to sync the repository:

$ sudo vim /etc/apt/mirror.list
set base_path    /media/sda/tmp/apt-mirror
set nthreads     20
set _tilde 0
deb https://download.docker.com/linux/ubuntu bionic stable
deb https://download.docker.com/linux/ubuntu xenial stable
$ sudo apt-mirror

Painfully slow because of the fucking GFW!!!

After apt-mirror finishes, we have to rsync using the following command:

$ pwd
/media/sda/tmp/apt-mirror/mirror/download.docker.com/linux/ubuntu
$ ls
dists
$ rsync -a -e 'ssh -p 2345 ' --progress dists/ root@192.168.111.11:/destination/ubuntu/dists/

wget the gpg file:

$ wget https://download.docker.com/linux/ubuntu/gpg
$ tar czvf dockerDebs.tar.gz gpg ubuntu/
$ ls -l -h dockerDebs.tar.gz
-rw-r--r-- 1 root root 144M Dec 23 17:41 dockerDebs.tar.gz
$ cp dockerDebs.tar.gz ~/0_preinstall/roles/kube-deploy/files

Binary replacement:

Previous:
hyperkube  kubeadm
Current:
kubeadm-v1.16.3-amd64  kubectl-v1.16.3-amd64  kubelet-v1.16.3-amd64

Edit the file, since in v1.16.3 we no longer use hyperkube:

$ vim deploy-ubuntu/tasks/main.yml
  - name: "upload static files to /usr/local/static"
    copy:
      src: "{{ item }}"
      dest: /usr/local/static/
      owner: root
      group: root
      mode: 0777
    with_items:
      #- files/hyperkube
      - files/calicoctl-linux-amd64
      - files/kubeadm-v1.16.3-amd64
      - files/kubectl-v1.16.3-amd64
      - files/kubelet-v1.16.3-amd64
      #- files/kubeadm
      - files/cni-plugins-linux-amd64-v0.8.1.tgz
      #- files/dockerDebs.tar.gz
      - files/gpg

Add sysctl items:

# vim ./roles/kubernetes/preinstall/tasks/0080-system-configurations.yml
- name: set fs inotify.max_user_watches to 1048576
  sysctl:
    sysctl_file: "{{ sysctl_file_path }}"
    name: fs.inotify.max_user_watches
    value: 1048576
    state: present
    reload: yes
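
After the playbook has run, the value can be verified on any node:

$ sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 1048576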

Some files such as ./roles/kubernetes/preinstall/tasks/0000-xxx-ubuntu.yml were added; the modifications to the kubespray source code are minimal, and you can use bcompare to view the differences.

WorkingTipsOnKubesprayKongFuZi

Aim

A Kubespray-based distribution for offline environments that completely avoids worrying about package management or Docker upgrades.

Technical points

  1. Preparing the package repositories for the offline environment.
  2. Running ansible in a completely offline environment.

Environment preparation (not directly related to this article)

Ubuntu 16.04.2: after a minimal installation, package it into a vagrant box:

$ sudo vim /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
$ sudo useradd -m vagrant
$ sudo passwd vagrant
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
$ sudo mkdir -p /home/vagrant/.ssh
$ sudo chmod 0700 /home/vagrant/.ssh/
$ sudo vim /home/vagrant/.ssh/authorized_keys
$ sudo cat /home/vagrant/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key
$ sudo chown -R vagrant /home/vagrant/.ssh
$ sudo cp /home/test/.bashrc /home/vagrant/.bashrc 
$ sudo cp /home/test/.bash_logout /home/vagrant/.bash_logout
$ sudo cp /home/test/.profile /home/vagrant/.profile
$ sudo vim /home/vagrant/.profile 
Add the following line:
[ -z "$BASH_VERSION" ] && exec /bin/bash -l
$ sudo chsh -s /bin/bash vagrant
$ sudo  vim /etc/ssh/sshd_config 
AuthorizedKeysFile .ssh/authorized_keys
$ sudo visudo -f /etc/sudoers.d/vagrant
vagrant ALL=(ALL) NOPASSWD:ALL
Defaults:vagrant !requiretty
$ sudo vim /etc/network/interfaces
change from ens3 to eth0
auto eth0
inet .....

After shutting down the machine, shrink the disk image, edit the Vagrantfile, and finally create the box:

$ sudo qemu-img convert -c -O qcow2  ubuntu160402.qcow2 ubuntu160402Shrink.qcow2
$ sudo vim metadata.json
{
"provider"     : "libvirt",
"format"       : "qcow2",
"virtual_size" : 80
}
$ sudo vim Vagrantfile
Vagrant.configure("2") do |config|
       config.vm.provider :libvirt do |libvirt|
       libvirt.driver = "kvm"
       libvirt.host = 'localhost'
       libvirt.uri = 'qemu:///system'
       end
config.vm.define "new" do |custombox|
       custombox.vm.box = "custombox"
       custombox.vm.provider :libvirt do |test|
       test.memory = 1024
       test.cpus = 1
       end
       end
end
$ sudo tar cvzf custom_box.box ./metadata.json ./Vagrantfile ./box.img

Add the box and check that it is usable:

$ vagrant box add custom_box.box --name "ubuntu160402old"
$ vagrant init ubuntu160402old
$ vagrant up --provider=libvirt

Server implementation

Following the CoreOS approach, docker/docker-compose are installed as binaries. After installation, nearly all services are started as containers:

ntp
harbor
ansible
dnsmasq
fileserver

QuickStartOfVagrantAndAnsible

Setup Environment

Use vagrant box list to list all of the boxes, then initialize the environment via (taking the rhel74 box for example):

$ vagrant init rhel74

Add the cpus/memory customization values:

  config.vm.provider "libvirt" do |vb|
     vb.memory = "4096"
     vb.cpus = "4"
  end

Disable the rsync folder:

  config.vm.synced_folder ".", "/vagrant", disabled: true, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z'] , rsync__exclude: ['.git','venv']

Add ansible deployment:

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.become = true
  end

Your playbook.yml should look like the following:

---
- hosts: all
  gather_facts: false
  become: True
  tasks:
    - name: "Run shell for provision"
      shell: mkdir -p /root/tttt

Manually Run the Ansible Playbook

Vagrant creates the inventory file under the .vagrant folder:

$ cat .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
# Generated by Vagrant

default ansible_host=192.168.121.215 ansible_port=22 ansible_user='vagrant' ansible_ssh_private_key_file='/media/sda/Code/vagrant/dockerOnrhel74/.vagrant/machines/default/libvirt/private_key'

Then you can run the provision task manually, like:

$ ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml