WorkingTipsOnNeoK

Installation notes for a certain domestic operating system. Don't ask why I'm writing something with so little technical content: it only exists because policy says it must, and certain companies need something to spend the funding on. "Secure and controllable" is a business, nothing more.

Installation

Nothing special: in virt-manager, install from the ISO, choose the minimal install, and reboot when it finishes.

A quick look after installation confirms the suspicion:

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)

Configuration

Mount the ISO:

# mount /dev/sr0 /mnt
# vim /etc/yum.repos.d/local.repo
[local]
name=local
baseurl=file:///mnt
enabled=1
gpgcheck=0
# mv /etc/yum.repos.d/ns7-adv.repo /root
# yum update
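The repo file above can be written in a single heredoc; a minimal sketch (written to the current directory here, then copy it to /etc/yum.repos.d/ on the target):

```shell
# Write the same local.repo in one step
cat > local.repo <<'EOF'
[local]
name=local
baseurl=file:///mnt
enabled=1
gpgcheck=0
EOF
grep baseurl local.repo    # confirm the baseurl line landed
```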

Install the necessary packages (identical to RHEL 7.6):

# yum groupinstall "Server with GUI"
# yum install -y tigervnc-server git gcc gcc-c++ java-11-openjdk java-11-openjdk-devel iotop
# vncserver
# systemctl stop firewalld && systemctl disable firewalld

Connecting with vncviewer afterwards, the desktop is identical to RHEL's.
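Instead of launching vncserver by hand after every boot, tigervnc on EL7 ships a systemd unit template that can be copied and edited; a minimal sketch assuming a root session on display :1 (the runuser line and PID path follow the shipped template, adjusted for root):

```ini
# /etc/systemd/system/vncserver@:1.service
# (copied from /lib/systemd/system/vncserver@.service)
[Unit]
Description=Remote desktop service (VNC)
After=syslog.target network.target

[Service]
Type=forking
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/sbin/runuser -l root -c "/usr/bin/vncserver %i"
PIDFile=/root/.vnc/%H%i.pid

[Install]
WantedBy=multi-user.target
```

Then systemctl enable vncserver@:1 replaces starting vncserver manually.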

Pull an xrdp package in from the public network (built inside a CentOS 7 container):

# apt-get install -y docker.io
# docker pull centos:7
# docker run -it centos:7 /bin/bash
# vim /etc/yum.conf
keepcache=1
# yum install -y epel-release
# yum update 
# yum install -y xrdp

Install/start xrdp

# yum install -y xrdp
# systemctl enable xrdp

Remove the license check:

# mv /etc/xdg/autostart/licmanager /root

Now you can use it.
Next, transform it into an LXC image and run it under LXC.

K8sInLXD

Build the image

Convert the image:

# qemu-img convert a.qcow2 a.img
# kpartx -av a.img
# vgscan
# lvscan
# mount /dev/vg-root/xougowueg /mnt
# tar -cvzf rootfs.tar.gz -C /mnt .

Create metadata.tar.gz and import the image:

# vim metadata.yaml
architecture: "aarch64"
creation_date: 1592803465
properties:
  architecture: "aarch64"
  description: "Rong-node"
  os: "ubuntu"
  release: "focal"
# tar czvf metadata.tar.gz metadata.yaml
# lxc image import metadata.tar.gz rootfs.tar.gz --alias "gowuogu"
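The packaging step above can be scripted end to end; a minimal sketch that writes the same metadata.yaml (with the current timestamp) and verifies the tarball before import:

```shell
# Build metadata.yaml and pack it for `lxc image import`
cat > metadata.yaml <<EOF
architecture: "aarch64"
creation_date: $(date +%s)
properties:
  architecture: "aarch64"
  description: "Rong-node"
  os: "ubuntu"
  release: "focal"
EOF
tar czf metadata.tar.gz metadata.yaml
tar tzf metadata.tar.gz    # lists metadata.yaml
```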

k8s in lxc

Check whether the profile has already been created on the host:

# lxc profile list

+---------+---------+
|  NAME   | USED BY |
+---------+---------+
| default | 0       |
+---------+---------+
| ourk8s  | 4       |
+---------+---------+

If it does not exist, create it:

# lxc profile create ourk8s
# cat > kubernetes.profile <<EOF
config:
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw"
  security.nesting: "true"
  security.privileged: "true"
description: Kubernetes LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: ourk8s
EOF

# lxc profile edit ourk8s < kubernetes.profile

Check that rong-2004 exists in the image store:

# lxc image list
+-----------+--------------+--------+-------------+---------+----------+------------------------------+
|   ALIAS   | FINGERPRINT  | PUBLIC | DESCRIPTION |  ARCH   |   SIZE   |         UPLOAD DATE          |
+-----------+--------------+--------+-------------+---------+----------+------------------------------+
| rong-2004 | f511553a81a9 | no     |             | aarch64 | 674.77MB | Jun 22, 2020 at 6:11am (UTC) |
+-----------+--------------+--------+-------------+---------+----------+------------------------------+

Create three LXC containers:

# lxc launch rong-2004 k8s1 --profile ourk8s && lxc launch rong-2004 k8s2 --profile ourk8s && lxc launch rong-2004 k8s3 --profile ourk8s
Creating k8s1
Starting k8s1
Creating k8s2
Starting k8s2
Creating k8s3
Starting k8s3
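The three identical launches can also be written as a loop; a dry-run sketch (remove the echo to actually create the containers):

```shell
# Print the launch commands for k8s1..k8s3; drop `echo` to execute them
for n in 1 2 3; do
  echo lxc launch rong-2004 "k8s$n" --profile ourk8s
done
```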

Wait about two minutes for the containers to finish starting:

# lxc ls
+---------+---------+----------------------------+-----------------------------------------------+------------+-----------+
|  NAME   |  STATE  |            IPV4            |                     IPV6                      |    TYPE    | SNAPSHOTS |
+---------+---------+----------------------------+-----------------------------------------------+------------+-----------+
| k8s1    | RUNNING | 10.230.146.83 (eth0)       | fd42:6fd0:9ed5:600b:216:3eff:fede:3897 (eth0) | PERSISTENT | 0         |
+---------+---------+----------------------------+-----------------------------------------------+------------+-----------+
| k8s2    | RUNNING | 10.230.146.201 (eth0)      | fd42:6fd0:9ed5:600b:216:3eff:fed1:ab8a (eth0) | PERSISTENT | 0         |
+---------+---------+----------------------------+-----------------------------------------------+------------+-----------+
| k8s3    | RUNNING | 10.230.146.33 (eth0)       | fd42:6fd0:9ed5:600b:216:3eff:fef8:f20c (eth0) | PERSISTENT | 0         |
+---------+---------+----------------------------+-----------------------------------------------+------------+-----------+

SSH into k8s1 and run the installation:

# scp -r root@192.192.189.128:/media/sdd/20200617/Rong-v2006-arm .

Note: only the deployment files under 20200617/Rong-v2006-arm can be deployed inside LXC.
Note: the related changes have been synced to the external network.

Change the IP configuration and run install.sh basic:

root@node:/home/test/Rong-v2006-arm# cat hosts.ini 
[all]
focal-1 ansible_host=10.230.146.83 ip=10.230.146.83
focal-2 ansible_host=10.230.146.201 ip=10.230.146.201
focal-3 ansible_host=10.230.146.33 ip=10.230.146.33

[kube-deploy]
focal-1

[kube-master]
focal-1
focal-2

[etcd]
focal-1
focal-2
focal-3

[kube-node]
focal-1
focal-2
focal-3

[k8s-cluster:children]
kube-master
kube-node

[all:vars]
ansible_ssh_user=root
ansible_ssh_private_key_file=./.rong/deploy.key
root@node:/home/test/Rong-v2006-arm# ./install.sh basic

After the installation finishes, check that everything is running properly:

root@node:/home/test/Rong-v2006-arm# kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
focal-1   Ready    master   9m28s   v1.17.6
focal-2   Ready    master   8m1s    v1.17.6
focal-3   Ready    <none>   5m57s   v1.17.6
root@node:/home/test/Rong-v2006-arm# kubectl get pods 
No resources found in default namespace.
root@node:/home/test/Rong-v2006-arm# kubectl get pods  --all-namespaces
NAMESPACE     NAME                                          READY   STATUS             RESTARTS   AGE
kube-system   calico-kube-controllers-6df95cc8f5-n5b75      1/1     Running            0          4m38s
kube-system   calico-node-88xxf                             1/1     Running            1          5m31s
kube-system   calico-node-mnjpr                             1/1     Running            1          5m31s
kube-system   calico-node-sz4v8                             1/1     Running            1          5m31s
kube-system   coredns-76798d84dd-knq4j                      1/1     Running            0          3m54s
kube-system   coredns-76798d84dd-llrlt                      1/1     Running            0          4m11s
kube-system   dns-autoscaler-7b6dc7cdb9-2vgfs               1/1     Running            0          4m4s
kube-system   kube-apiserver-focal-1                        1/1     Running            0          9m12s
kube-system   kube-apiserver-focal-2                        1/1     Running            0          7m47s
kube-system   kube-controller-manager-focal-1               1/1     Running            0          9m12s
kube-system   kube-controller-manager-focal-2               1/1     Running            0          7m46s
kube-system   kube-proxy-2nms7                              1/1     Running            0          6m2s
kube-system   kube-proxy-9cwpm                              1/1     Running            0          6m
kube-system   kube-proxy-nkd5r                              1/1     Running            0          6m4s
kube-system   kube-scheduler-focal-1                        1/1     Running            0          9m12s
kube-system   kube-scheduler-focal-2                        1/1     Running            0          7m46s
kube-system   kubernetes-dashboard-5d5cb8976f-2hdtq         1/1     Running            0          4m1s
kube-system   kubernetes-metrics-scraper-747b4fd5cd-vhfr5   1/1     Running            0          3m59s
kube-system   metrics-server-849f86c88f-h6prj               1/2     CrashLoopBackOff   4          3m15s
kube-system   nginx-proxy-focal-3                           1/1     Running            0          6m8s
kube-system   tiller-deploy-56bc5dccc6-cfjkh                1/1     Running            0          3m34s

Cleanup

After verification, stop and delete the containers you no longer need:

# lxc stop XXXXX 
# lxc rm XXXXX

If the removal fails, clear the immutable attribute on the offending file inside the container, then delete again:

root@arm01:~/app# lxc rm kkkkk
Error: error removing /var/lib/lxd/storage-pools/default/containers/kkkkk: rm: cannot remove '/var/lib/lxd/storage-pools/default/containers/kkkkk/rootfs/etc/resolv.conf': Operation not permitted

root@arm01:~/app# chattr  -i /var/lib/lxd/storage-pools/default/containers/kkkkk/rootfs/etc/resolv.conf
root@arm01:~/app# lxc rm kkkkk

kpartx issue

List the device mappings that would be created:

# kpartx -l sgoeuog.img

Create the mappings/device nodes for an image:

# kpartx -av gowgowu.img
/dev/loop0

/dev/mapper/loop0p1
...
/dev/mapper/loop0pn
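When finished with the image, the mappings should be torn down again with -d; a dry-run sketch of the full cycle (image and partition names are placeholders; remove the echo to execute):

```shell
IMG=gowgowu.img
echo "kpartx -av $IMG"                 # create /dev/mapper/loop0p1 ...
echo "mount /dev/mapper/loop0p1 /mnt"
echo "umount /mnt"
echo "kpartx -dv $IMG"                 # remove the mappings, free the loop device
```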

WorkingTipsHarbor2.0

Download Installation files

From the following URL:

https://github.com/goharbor/harbor/releases/download/v2.0.0/harbor-offline-installer-v2.0.0.tgz

Config, Install

Make new cert files via:

TBD

Modify the harbor.yml file:

5c5
< hostname: portus.fugouou.com
---
> hostname: reg.mydomain.com
15c15
<   port: 5000
---
>   port: 443
17,18c17,18
<   certificate: /data/cert/portus.crt
<   private_key: /data/cert/portus.key
---
>   certificate: /your/certificate/path
>   private_key: /your/private/key/path
29c29
< external_url: https://portus.fugouou.com:5000
---
> # external_url: https://reg.mydomain.com:8433
78c78
<   skip_update: true
---
>   skip_update: false
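Put together, the relevant harbor.yml fragment after these edits looks like this (hostname and cert paths are this setup's values):

```yaml
hostname: portus.fugouou.com
https:
  port: 5000
  certificate: /data/cert/portus.crt
  private_key: /data/cert/portus.key
external_url: https://portus.fugouou.com:5000
```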

Use the newly generated cert files:

# mkdir -p /data/cert
# cp ***.crt ***.key /data/cert
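If the cert files still need to be generated, a minimal self-signed sketch (the CN is this setup's hostname; a real deployment may want a proper CA chain and SANs):

```shell
# Generate a self-signed key/cert pair; the CN must match the harbor.yml hostname
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=portus.fugouou.com" \
  -keyout portus.key -out portus.crt
openssl x509 -in portus.crt -noout -subject    # confirm the subject
```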

Trivy offline database import:

# ls /data/trivy-adapter/trivy/*
/data/trivy-adapter/trivy/metadata.json  /data/trivy-adapter/trivy/trivy.db

/data/trivy-adapter/trivy/db:
metadata.json  trivy.db
# chmod a+w -R /data/trivy-adapter/trivy/db

Install via:

# ./install.sh --with-trivy --with-notary --with-chartmuseum
# docker ps
afe49ec2a626        goharbor/harbor-jobservice:v2.0.0      "/harbor/entrypoint.…"   14 minutes ago      Up 14 minutes (healthy)                                                                          harbor-jobservice
7ecf87cdc70b        goharbor/nginx-photon:v2.0.0           "nginx -g 'daemon of…"   14 minutes ago      Up 14 minutes (healthy)   0.0.0.0:4443->4443/tcp, 0.0.0.0:80->8080/tcp, 0.0.0.0:5000->8443/tcp   nginx
41fee7abd2a1        goharbor/notary-server-photon:v2.0.0   "/bin/sh -c 'migrate…"   14 minutes ago      Up 14 minutes                                                                                    notary-server
0f635fccd2fe        goharbor/notary-signer-photon:v2.0.0   "/bin/sh -c 'migrate…"   14 minutes ago      Up 14 minutes                                                                                    notary-signer
ebcd78417fdf        goharbor/harbor-core:v2.0.0            "/harbor/entrypoint.…"   14 minutes ago      Up 14 minutes (healthy)                                                                          harbor-core
28cca5aa3325        goharbor/trivy-adapter-photon:v2.0.0   "/home/scanner/entry…"   14 minutes ago      Up 14 minutes (healthy)   8080/tcp                                                               trivy-adapter
7823c38b71e9        goharbor/registry-photon:v2.0.0        "/home/harbor/entryp…"   14 minutes ago      Up 14 minutes (healthy)   5000/tcp                                                               registry
38b6bc813268        goharbor/harbor-portal:v2.0.0          "nginx -g 'daemon of…"   14 minutes ago      Up 14 minutes (healthy)   8080/tcp                                                               harbor-portal
ba6c5f9473b9        goharbor/redis-photon:v2.0.0           "redis-server /etc/r…"   14 minutes ago      Up 14 minutes (healthy)   6379/tcp                                                               redis
8ce7deffd0c8        goharbor/harbor-registryctl:v2.0.0     "/home/harbor/start.…"   14 minutes ago      Up 14 minutes (healthy)                                                                          registryctl
bd3085fb3b97        goharbor/chartmuseum-photon:v2.0.0     "./docker-entrypoint…"   14 minutes ago      Up 14 minutes (healthy)   9999/tcp                                                               chartmuseum
d3e90aa5d4d8        goharbor/harbor-db:v2.0.0              "/docker-entrypoint.…"   14 minutes ago      Up 14 minutes (healthy)   5432/tcp                                                               harbor-db
1b989340ff76        goharbor/harbor-log:v2.0.0             "/bin/sh -c /usr/loc…"   14 minutes ago      Up 14 minutes (healthy)   127.0.0.1:1514->10514/tcp                                              harbor-log

Configure and use

On every node running Docker, do the following (Ubuntu):

# cp gwougou.crt  /usr/local/share/ca-certificates
# update-ca-certificates
# systemctl restart docker
# docker login -uadmin -pHarbor12345 xagowu.gowugoe.com
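If installing the CA is not an option, Docker can be told to trust the registry anyway via /etc/docker/daemon.json (registry name is this doc's placeholder; this disables TLS verification, so prefer the CA route):

```json
{
  "insecure-registries": ["xagowu.gowugoe.com"]
}
```

Restart Docker afterwards as above.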

ConvertQcow2ToDockerImage

Create vm

Create vm disks:

# qemu-img create -f qcow2 2004.qcow2 6G

Install a system into this qcow2 file with your customized settings.

Convert

Convert into img file:

$ qemu-img convert -f qcow2 -O raw 2004.qcow2 2004.img

Use guestfish to convert it into a Docker image:

$ sudo guestfish -a 2004.img --ro
><fs> run
><fs> list-filesystems
/dev/sda1: ext4
/dev/sda2: unknown
/dev/sda5: swap
><fs> mount /dev/sda1 /
><fs> tar-out / - | xz --best >> my2004.xz
><fs> exit
$ cat my2004.xz | docker import - YourImagesName
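The tar-through-xz pipeline can be sanity-checked locally before pointing it at a real root filesystem; a sketch on a scratch directory:

```shell
# Simulate the export: tar a directory through xz, then integrity-check the archive
mkdir -p rootfs-demo/etc
echo hello > rootfs-demo/etc/demo.conf
tar -C rootfs-demo -cf - . | xz --best > demo.xz
xz -t demo.xz && echo "archive OK"
```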

Now push it to Docker Hub; next time you can use it directly.

WorkingTipsOnGreenMonitor

Netdata

Download the Netdata binary:

https://github.com/netdata/netdata/releases

Choose netdata-v1.22.1.gz.run and install it:

# chmod +x netdata-v1.22.1.gz.run
# ./netdata-v1.22.1.gz.run --accept
# chkconfig netdata on

node_exporter

Download the binary:

https://github.com/prometheus/node_exporter/releases

Choose node_exporter-1.0.0.linux-amd64.tar.gz and install:

# tar xzvf node_exporter-1.0.0.linux-amd64.tar.gz
# cp node_exporter-1.0.0.linux-amd64/node_exporter  /usr/bin && chmod 777 /usr/bin/node_exporter
# vim /etc/rc.local
/usr/bin/node_exporter &
# /usr/bin/node_exporter  &
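On systemd machines, a unit file is more robust than rc.local; a minimal sketch using the same binary path as above:

```ini
# /etc/systemd/system/node_exporter.service
[Unit]
Description=Prometheus node_exporter

[Service]
ExecStart=/usr/bin/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now node_exporter.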

Result

Netdata:

/images/2020_05_28_11_34_37_717x396.jpg

node_exporter:

/images/2020_05_28_11_35_07_715x499.jpg