WorkingTipsOnRongRobot

Building

In Azure DevOps, create a new project:

/images/2020_10_29_12_39_14_651x545.jpg

Create pipeline:

/images/2020_10_29_12_39_37_449x459.jpg

Select GitHub as the code source:

/images/2020_10_29_12_40_07_665x532.jpg

Authorize Azure Pipelines:

/images/2020_10_29_12_40_42_519x188.jpg

Select Repository:

/images/2020_10_29_12_41_19_706x269.jpg

Click Run:

/images/2020_10_29_12_41_49_894x394.jpg

View Status:

/images/2020_10_29_12_43_19_859x544.jpg

Running Status:

/images/2020_10_29_12_43_39_835x460.jpg

Check Result:

/images/2020_10_29_13_23_16_752x254.jpg

/images/2020_10_29_13_23_44_839x546.jpg

Check Artifacts:

/images/2020_10_29_13_24_05_754x257.jpg

Download Artifacts:

/images/2020_10_29_13_24_25_850x299.jpg

Patching

Static File Patching

After downloading:

 $ ls *
RobotSon.tar.gz

data:
docker 

release:
calicoctl  cni-plugins-linux-amd64-v0.8.7.tgz  kubeadm-v1.19.3-amd64  kubectl-v1.19.3-amd64  kubelet-v1.19.3-amd64

Create docker.tar.gz (place it at pre-rong/rong_static/for_master0/docker.tar.gz):

$ cd data
$ tar czf docker.tar.gz docker/
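
To sanity-check the archive before placing it:

$ tar tzf docker.tar.gz | head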

Copy the release folder into pre-rong/rong_static/for_cluster/:

$  ls pre-rong/rong_static/for_cluster/
calicoctl  cni-plugins-linux-amd64-v0.8.7.tgz  docker  gpg  kubeadm-v1.18.8-amd64  kubectl-v1.18.8-amd64  kubelet-v1.18.8-amd64  netdata-v1.22.1.gz.run

Code Patching

Download the patch file:

# git clone https://github.com/kubernetes-sigs/kubespray.git
# cd kubespray
# git checkout tags/v2.xx.0 -b xxxx
# git apply --check ../patch 
Check whether any errors are reported. For v1.19 (master), the following two files need to be excluded:
# git apply /root/patch --exclude=roles/kubernetes-apps/helm/templates/tiller-clusterrolebinding.yml.j2 --exclude=roles/remove-node/remove-etcd-node/tasks/main.yml
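
If this pair of commands is run often, a small wrapper keeps the fallback repeatable; a sketch, assuming the patch sits at /root/patch:

#!/bin/sh
# dry-run the patch first; fall back to applying with the two known excludes
PATCH=/root/patch
if git apply --check "$PATCH" 2>/dev/null; then
    git apply "$PATCH"
else
    git apply "$PATCH" \
        --exclude=roles/kubernetes-apps/helm/templates/tiller-clusterrolebinding.yml.j2 \
        --exclude=roles/remove-node/remove-etcd-node/tasks/main.yml
fi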

Minor changes inside the deployment framework

rong-vars.yml:

/images/2020_10_29_14_12_23_871x347.jpg

/images/2020_10_29_14_12_36_425x97.jpg

rong/1_preinstall/role/preinstall/task/main.yml:

/images/2020_10_29_14_11_39_875x477.jpg

WorkingTipsOnGitDiffPatch

Current state of the RONG code architecture

Before creating a patch, make sure the symlinks in the Kubespray source tree actually still exist and have not been replaced by regular files through a cp.

Creating the patch on v2.14.0

# git clone https://github.com/kubernetes-sigs/kubespray.git
# git checkout tags/v2.14.0 -b 2140

At this point the checkout is the unmodified v2.14.0 code.

Replace the code in this directory with the code from 3_k8s, taking care to remove the intermediate files generated during deployment, then commit the changes.

/images/2020_10_29_11_03_04_638x297.jpg

Create the patch file:

git diff a1f04e f0c9b1 > patch1
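
Here a1f04e is the commit at the clean v2.14.0 state and f0c9b1 is the commit containing the RONG modifications; the pair can be read off the history:

# list recent commits to pick the before/after pair
git log --oneline -n 5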

Apply patch

Switch back to the master branch, or check out a fresh working tree in a new directory:

# git apply --check ../patch 
error: patch failed: roles/kubernetes-apps/helm/templates/tiller-clusterrolebinding.yml.j2:3
error: roles/kubernetes-apps/helm/templates/tiller-clusterrolebinding.yml.j2: patch does not apply
error: patch failed: roles/remove-node/remove-etcd-node/tasks/main.yml:21
error: roles/remove-node/remove-etcd-node/tasks/main.yml: patch does not apply

This is because the new branch (master) has changes to the files listed above compared with v2.14.0, so we need to exclude them when applying:

git apply  /root/patch --exclude=roles/kubernetes-apps/helm/templates/tiller-clusterrolebinding.yml.j2 --exclude=roles/remove-node/remove-etcd-node/tasks/main.yml

After this, the new version of the code contains the changes we made on the old branch.

Files with conflicts need to be resolved manually.

Example

Applying a git patch across branches:

# git clone https://github.com/kubernetes-sigs/kubespray.git
# git checkout tags/v2.14.0 -b 2140
 (2140) $ git apply ../../patch 
 (2140 !*%) $ vim roles/container-engine/docker/tasks/main.yml
 (2140 !*%) $ git checkout master
error: Your local changes to the following files would be overwritten by checkout:
	roles/kubernetes-apps/helm/templates/tiller-clusterrolebinding.yml.j2
	roles/kubernetes/preinstall/tasks/0020-verify-settings.yml
	roles/remove-node/remove-etcd-node/tasks/main.yml
Please commit your changes or stash them before you switch branches.
Aborting
(2140 !*%) $ git add .
(2140 !+) $ git commit -m "modified in 2.14.0"
[2140 2d87573d] modified in 2.14.0
 19 files changed, 504 insertions(+), 427 deletions(-)
 delete mode 100644 contrib/packaging/rpm/kubespray.spec
 create mode 100644 inventory/sample/hosts.ini
 rewrite roles/bootstrap-os/tasks/main.yml (99%)
 create mode 100644 roles/bootstrap-os/tasks/main_kfz.yml
 copy roles/bootstrap-os/tasks/{main.yml => main_main.yml} (99%)
 rewrite roles/container-engine/docker/tasks/main.yml (99%)
 create mode 100644 roles/container-engine/docker/tasks/main_kfz.yml
 copy roles/container-engine/docker/tasks/{main.yml => main_main.yml} (92%)
dash@archnvme:/media/sda/git/pure/kubespray (2140) $ git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
 (master) $ ls
ansible.cfg          code-of-conduct.md  Dockerfile       index.html  logo         OWNERS_ALIASES             remove-node.yml   scale.yml          setup.py             Vagrantfile
ansible_version.yml  _config.yml         docs             inventory   Makefile     README.md                  requirements.txt  scripts            test-infra
cluster.yml          contrib             extra_playbooks  library     mitogen.yml  recover-control-plane.yml  reset.yml         SECURITY_CONTACTS  tests
CNAME                CONTRIBUTING.md     facts.yml        LICENSE     OWNERS       RELEASE.md                 roles             setup.cfg          upgrade-cluster.yml
(master) $ git apply ../../patch --exclude=roles/kubernetes-apps/helm/templates/tiller-clusterrolebinding.yml.j2 --exclude=roles/remove-node/remove-etcd-node/tasks/main.yml
(master !*%) $ git checkout 2140
error: Your local changes to the following files would be overwritten by checkout:
	cluster.yml
	roles/bootstrap-os/tasks/main.yml
	roles/container-engine/docker/meta/main.yml
	roles/container-engine/docker/tasks/main.yml
	roles/container-engine/docker/tasks/pre-upgrade.yml
	roles/container-engine/docker/templates/docker-options.conf.j2
	roles/container-engine/docker/templates/docker.service.j2
	roles/kubernetes/node/tasks/kubelet.yml
	roles/kubernetes/preinstall/tasks/0020-verify-settings.yml
	roles/kubernetes/preinstall/tasks/0080-system-configurations.yml
	roles/kubernetes/preinstall/tasks/main.yml
Please commit your changes or stash them before you switch branches.
error: The following untracked working tree files would be overwritten by checkout:
	inventory/sample/hosts.ini
	roles/bootstrap-os/tasks/main_kfz.yml
	roles/bootstrap-os/tasks/main_main.yml
	roles/container-engine/docker/tasks/main_kfz.yml
	roles/container-engine/docker/tasks/main_main.yml
Please move or remove them before you switch branches.
Aborting
(master !*%) $ git add .
(master !+) $ git commit -m "apply in master"
[master a5941286] apply in master
 17 files changed, 502 insertions(+), 426 deletions(-)
 delete mode 100644 contrib/packaging/rpm/kubespray.spec
 create mode 100644 inventory/sample/hosts.ini
 rewrite roles/bootstrap-os/tasks/main.yml (99%)
 create mode 100644 roles/bootstrap-os/tasks/main_kfz.yml
 copy roles/bootstrap-os/tasks/{main.yml => main_main.yml} (99%)
 rewrite roles/container-engine/docker/tasks/main.yml (99%)
 create mode 100644 roles/container-engine/docker/tasks/main_kfz.yml
 copy roles/container-engine/docker/tasks/{main.yml => main_main.yml} (92%)
 (master) $ git checkout 2140              
Switched to branch '2140'
 (2140) $ git checkout master
Switched to branch 'master'
Your branch is ahead of 'origin/master' by 1 commit.
  (use "git push" to publish your local commits)
(master) $ pwd
/media/sda/git/pure/kubespray

WorkingTipsOnRongRobot

Azure DevOps

Create a new project:

/images/2020_10_28_08_37_20_634x542.jpg

Add the SSH key to the project:

/images/2020_10_28_08_31_38_457x200.jpg

Configure the time/locale:

/images/2020_10_28_08_32_41_584x548.jpg

Repos

Create a new repository and set the remote branch:

# mkdir RongRobot
# cd RongRobot
# vim README.md
# git init
# git add .
# git commit -m "First Commit"
# git remote add origin git@ssh.dev.azure.com:v3/purplepalm/RongRobot/RongRobot
# git push -u origin --all

View the status on Azure DevOps:

/images/2020_10_28_08_41_24_1023x408.jpg

Click Set up build to set up the pipeline:

/images/2020_10_28_08_42_18_317x141.jpg

Starter pipeline:

/images/2020_10_28_08_42_48_518x262.jpg

Edit something:

/images/2020_10_28_08_43_34_791x555.jpg

Codes

Write your own azure-pipelines.yml to do the actual work.
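
As a starting point, a minimal azure-pipelines.yml can be written from the repository root; this is a sketch only, where the build script name build.sh and the artifact name drop are assumptions, not the project's actual setup:

cat > azure-pipelines.yml <<'EOF'
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
# assumed build entry point; replace with the real build command
- script: ./build.sh
  displayName: 'Build'
# publish the build output as a downloadable artifact
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: 'drop'
EOF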

WorkingTipsInRonggraphInLXD

lxd environment

Install lxd (offline):

snap download core
snap download core18
snap download lxd
snap ack core18_1885.assert; snap ack core_10185.assert; snap ack lxd_17936.assert
snap install core18_1885.snap ; snap install core_10185.snap ; snap install lxd_17936.snap
dpkg -i ./lxd_1%3a0.9_all.deb
which lxc
which lxd

Show lxc images:

root@rong320-1:~/lxd# lxc image list
If this is your first time running LXD on this machine, you should also run: lxd init
To start your first instance, try: lxc launch ubuntu:18.04

+-------+-------------+--------+-------------+--------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+--------------+------+------+-------------+

Download rootfs.squashfs and lxd.tar.xz from:

https://us.images.linuxcontainers.org/images/alpine/3.12/amd64/default/20201021_13:00/
root@rong320-1:~/lxdimages# ls
lxd.tar.xz  rootfs.squashfs
root@rong320-1:~/lxdimages# lxc image import lxd.tar.xz rootfs.squashfs --alias alpine312
Image imported with fingerprint: 76560d125792d7710d70f41b060e81f0bd4d83f1cc4e8dbd43fc371e5dea27bf
root@rong320-1:~/lxdimages# lxc image list
+-----------+--------------+--------+------------------------------------------+--------------+-----------+--------+------------------------------+
|   ALIAS   | FINGERPRINT  | PUBLIC |               DESCRIPTION                | ARCHITECTURE |   TYPE    |  SIZE  |         UPLOAD DATE          |
+-----------+--------------+--------+------------------------------------------+--------------+-----------+--------+------------------------------+
| alpine312 | 76560d125792 | no     | Alpinelinux 3.12 x86_64 (20201021_13:00) | x86_64       | CONTAINER | 2.40MB | Oct 22, 2020 at 3:48am (UTC) |
+-----------+--------------+--------+------------------------------------------+--------------+-----------+--------+------------------------------+
Auto-configure lxd with a preseed file (see https://discuss.linuxcontainers.org/t/usage-of-lxd-init-preseed/1069/3 and https://lxd.readthedocs.io/en/latest/preseed/):

cat <<EOF | lxd init --preseed
config:
  core.https_address: 10.137.149.161:9199
  images.auto_update_interval: 15
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: none
EOF
root@rong320-1:~/lxdimages# cat storages.yml 
storage_pools:
- name: default
  driver: dir
  config:
    source: ""
root@rong320-1:~/lxdimages# lxd init --preseed<./storages.yml

root@rong320-1:~/lxdimages# cat profiles.yml 
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      nictype: bridged
      parent: lxdbr0
      type: nic
root@rong320-1:~/lxdimages# lxd init --preseed<profiles.yml

Now we can check the default lxd bridge (lxdbr0):
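
# lxc network show lxdbr0    # confirm the bridge exists and has an ipv4.address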

Docker/docker-compose in Alpine

To launch an instance with a custom profile named k8s:

# lxc launch alpine312 firstalpine -p k8s
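
If the k8s profile does not exist yet, it can be defined by hand; a minimal sketch (security.nesting lets Docker run inside the container; the exact values are assumptions, and the project's own k8s.yaml is imported later in the arm64 section):

# lxc profile create k8s
# lxc profile set k8s security.nesting true
# lxc profile set k8s security.privileged true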

Create the first alpine instance:

# lxc launch alpine312 firstalpine
Creating firstalpine
Starting firstalpine           
# lxc ls
+-------------+---------+---------------------+------+-----------+-----------+
|    NAME     |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+-------------+---------+---------------------+------+-----------+-----------+
| firstalpine | RUNNING | 10.31.47.210 (eth0) |      | CONTAINER | 0         |
+-------------+---------+---------------------+------+-----------+-----------+
root@rong320-1:~/lxdimages# lxc exec firstalpine /bin/sh
~ # cat /etc/issue
Welcome to Alpine Linux 3.12
Kernel \r on an \m (\l)

Configure the apk repositories:

 # echo "https://mirrors.aliyun.com/alpine/v3.12/main/" > /etc/apk/repositories
 # echo "https://mirrors.aliyun.com/alpine/v3.12/community/" >> /etc/apk/repositories
# apk update
# apk add docker-engine docker-compose docker-cli

Create the cgroups-patch file under /etc/init.d:

#!/sbin/openrc-run

description="Mount the control groups for Docker"

depend()
{
    keyword -docker
    need sysfs cgroups
}

start()
{
    if [ -d /sys/fs/cgroup ]; then
        mkdir -p /sys/fs/cgroup/cpu,cpuacct
        mkdir -p /sys/fs/cgroup/net_cls,net_prio

        mount -n -t cgroup cgroup /sys/fs/cgroup/cpu,cpuacct -o rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
        mount -n -t cgroup cgroup /sys/fs/cgroup/net_cls,net_prio -o rw,nosuid,nodev,noexec,relatime,net_cls,net_prio

        if ! mountinfo -q /sys/fs/cgroup/openrc; then
            local agent="${RC_LIBEXECDIR}/sh/cgroup-release-agent.sh"
            mkdir -p /sys/fs/cgroup/openrc
            mount -n -t cgroup -o none,nodev,noexec,nosuid,name=systemd,release_agent="$agent" openrc /sys/fs/cgroup/openrc
        fi
    fi

    return 0
}
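
OpenRC will only run the script if it is executable:

# chmod +x /etc/init.d/cgroups-patch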

Add the auto-start entries and reboot:

# rc-update add cgroups-patch boot
# vim /etc/init.d/docker
.....
start_pre() {
        #checkpath -f -m 0644 -o root:docker "$DOCKER_ERRFILE" "$DOCKER_OUTFILE"
        echo "fucku"
}
.....
# rc-service docker start
# rc-update add docker default
# reboot
After the reboot, check the Docker version.

Push files into the lxc instance:

# lxc file push -r podmanitems/ firstalpine/root/
Load all the images, then list them:

~ # docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
rong/ui             master              66ad16eb15c5        20 minutes ago      28.9MB
rong/server         master              8150777ead18        23 hours ago        301MB
rong/kobe           master              2d0a03d6cedb        2 days ago          231MB
rong/nginx          1.19.2-amd64        7e4d58f0e5f3        5 weeks ago         133MB
rong/webkubectl     v2.6.0-amd64        4aa634837fea        2 months ago        349MB
rong/mysql-server   8.0.21-amd64        8a3a24ad33be        3 months ago        366MB
# lxc file push ronggraph.tar firstalpine/root/
# tar xzvf /root/ronggraph.tar
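
The image-loading step above can also be scripted; a minimal sketch, assuming podmanitems/ holds one docker-save tarball per image:

for f in /root/podmanitems/*.tar; do
    docker load -i "$f"
done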

Write an OpenRC service definition for ronggraph:

#!/sbin/openrc-run
#
# author: Yusuke Kawatsu

workspace="/root/ronggraph"
cmdpath="/usr/bin/docker-compose"
prog="ronggraph"
lockfile="/var/lock/ronggraph"
pidfile="/var/run/ronggraph.pid"
PATH="$PATH:/usr/local/bin"


start() {
    [ -x $cmdpath ] || exit 5
    echo -n $"Starting $prog: "

    cd $workspace
    # bring the compose stack up in detached mode
    $cmdpath up -d
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile

    return $retval
}

stop() {
    [ -x $cmdpath ] || exit 5
    echo -n $"Stopping $prog: "

    cd $workspace
    $cmdpath down
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile && rm -f $pidfile

    return $retval
}

restart() {
    stop
    sleep 3
    start
}

depend() {
    need docker
}
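
Before wiring it into the runlevel, make the script executable and exercise it by hand:

# chmod +x /etc/init.d/ronggraph
# rc-service ronggraph start
# rc-service ronggraph status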

Now add ronggraph to the default runlevel:

# rc-update add ronggraph default
# halt

Save the current status:

root@rong320-1:~/lxdimages# lxc stop firstalpine
root@rong320-1:~/lxdimages# lxc publish --public firstalpine --alias=ronggraph
root@rong320-1:/mnt# lxc image ls
+-----------+--------------+--------+------------------------------------------+--------------+-----------+----------+------------------------------+
|   ALIAS   | FINGERPRINT  | PUBLIC |               DESCRIPTION                | ARCHITECTURE |   TYPE    |   SIZE   |         UPLOAD DATE          |
+-----------+--------------+--------+------------------------------------------+--------------+-----------+----------+------------------------------+
| alpine312 | 76560d125792 | no     | Alpinelinux 3.12 x86_64 (20201021_13:00) | x86_64       | CONTAINER | 2.40MB   | Oct 22, 2020 at 6:18am (UTC) |
+-----------+--------------+--------+------------------------------------------+--------------+-----------+----------+------------------------------+
| ronggraph | b31788790460 | yes    | Alpinelinux 3.12 x86_64 (20201021_13:00) | x86_64       | CONTAINER | 619.40MB | Oct 22, 2020 at 8:05am (UTC) |
+-----------+--------------+--------+------------------------------------------+--------------+-----------+----------+------------------------------+

Launch a new instance:

# lxc launch ronggraph ronggraph -p k8s

Add forward rules:

lxc config device add ronggraph myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:0.0.0.0:80
lxc config device add ronggraph myport443 proxy listen=tcp:0.0.0.0:443 connect=tcp:0.0.0.0:443
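
Verify that both devices were added:

lxc config device show ronggraph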

arm64 workingtips

On a Raspberry Pi running ArchLinux ARM (aarch64), install:

# pacman -Sy
# pacman -S lxc lxd
# systemctl enable lxd
# systemctl start lxd

Download rootfs.squashfs and lxd.tar.xz from:

https://us.images.linuxcontainers.org/images/alpine/3.12/arm64/default/20201022_13:00/

# lxc image import lxd.tar.xz rootfs.squashfs --alias alpine312
# lxd init --preseed<pre-rong/lxditems/lxd_snap/init.yaml
# lxc profile create k8s
# lxc profile edit k8s<pre-rong/lxditems/lxdimages/k8s.yaml

/images/2020_10_23_09_55_43_629x378.jpg

Configure lxc for running:

https://wiki.archlinux.org/index.php/Linux_Containers

lxc Installation:

~ # sed -i 's/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/g' /etc/apk/repositories
~ # cat /etc/apk/repositories 
http://mirrors.ustc.edu.cn/alpine/v3.12/main
http://mirrors.ustc.edu.cn/alpine/v3.12/community

Install docker/docker-compose and modify their startup scripts as described above.

lxc publish will take a very long time!

# lxc ls
+------+---------+------------------------------+------+-----------+-----------+
| NAME |  STATE  |             IPV4             | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+------------------------------+------+-----------+-----------+
| king | RUNNING | 172.18.0.1 (br-74a26d2404f6) |      | CONTAINER | 0         |
|      |         | 172.17.0.1 (docker0)         |      |           |           |
|      |         | 10.150.132.185 (eth0)        |      |           |           |
+------+---------+------------------------------+------+-----------+-----------+
# lxc publish --public king --alias=ronggraph
# lxc image ls
+-----------+--------------+--------+-------------------------------------------+--------------+-----------+-----------+-------------------------------+
|   ALIAS   | FINGERPRINT  | PUBLIC |                DESCRIPTION                | ARCHITECTURE |   TYPE    |   SIZE    |          UPLOAD DATE          |
+-----------+--------------+--------+-------------------------------------------+--------------+-----------+-----------+-------------------------------+
| alpine312 | 58ebec92505e | no     | Alpinelinux 3.12 aarch64 (20201022_13:00) | aarch64      | CONTAINER | 2.20MB    | Oct 23, 2020 at 1:52am (UTC)  |
+-----------+--------------+--------+-------------------------------------------+--------------+-----------+-----------+-------------------------------+
| ronggraph | 607287f518d4 | yes    | Alpinelinux 3.12 aarch64 (20201022_13:00) | aarch64      | CONTAINER | 2655.15MB | Oct 23, 2020 at 10:13am (UTC) |
+-----------+--------------+--------+-------------------------------------------+--------------+-----------+-----------+-------------------------------+
#  lxc image export ronggraph .
# ls -l *.tar.gz
-rw-r--r--  1 root  root   2784123072 Oct 26 00:40 607287f518d40783ed968cd2f2434fba101d4332ccc16f1e66cfb43049208d57.tar.gz

Transfer the tar.gz to the arm64 server and import it:

# /snap/bin/lxc image import lxditems/lxdimages/607287f518d40783ed968cd2f2434fba101d4332ccc16f1e66cfb43049208d57.tar.gz --alias ronggraph

koDatabase

ko admin

Use the following commands to recover the admin user's privileges:

# podman exec -it rong_mysql /bin/bash

SQL operations:

bash-4.2# mysql -uroot -p
Enter password: 
mysql> use ko
mysql> update ko_user set is_active=1 where name='admin';
mysql> update ko_user set is_admin=1 where name='admin';

Now go back to the login page; you can log in as the admin user.
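
The same fix can also be issued in one shot from the host (the password is still prompted for):

# podman exec -it rong_mysql mysql -uroot -p -e "update ko.ko_user set is_active=1, is_admin=1 where name='admin';"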

ko cluster import

Import the cluster into ko:

root@focal-1:/mnt/Rong_RongGraph/rong/4_addons# kubectl get sa -n kube-system | grep dashboard
kubernetes-dashboard                 1         10m
root@focal-1:/mnt/Rong_RongGraph/rong/4_addons# kubectl get secret -n kube-system | grep dashboard
kubernetes-dashboard-certs                       Opaque                                0      10m
kubernetes-dashboard-csrf                        Opaque                                1      10m
kubernetes-dashboard-key-holder                  Opaque                                2      10m
kubernetes-dashboard-token-mpf77                 kubernetes.io/service-account-token   3      10m
root@focal-1:/mnt/Rong_RongGraph/rong/4_addons# kubectl -n kube-system describe secrets kubernetes-dashboard-token-mpf77
Name:         kubernetes-dashboard-token-mpf77
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: ff6cac3e-d90c-4990-bb90-e245ac762696

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      xxxxxxx
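
The token can also be pulled out in one shot, which is handy when pasting it into the ko import dialog:

# kubectl -n kube-system get secret kubernetes-dashboard-token-mpf77 -o jsonpath='{.data.token}' | base64 -d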