UsbNetworkCard

Using systemd-networkd to configure the USB network card:

# vim /etc/systemd/network/10-ethusb1.link
[Match]
MACAddress=00:xx:xx:.....

[Link]
Description=USB to Ethernet Adapter
Name=ethusb1

Then configure the ethusb1 IP address:

# vim /etc/systemd/network/10-ethusb1.network 
[Match]
Name=ethusb1

[Network]
Address=192.168.0.33

Reboot the computer and you should see ethusb1 available.
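
A reboot can usually be avoided by re-triggering udev and restarting systemd-networkd; a minimal sketch (exact steps may vary by distribution):

# udevadm control --reload
# udevadm trigger --action=add --subsystem-match=net
# systemctl restart systemd-networkd
# networkctl status ethusb1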

MegaCliForPartition

Purpose

Partition the disks for virtualization scenarios.

Hardware: 24 disks; the first two form the system disk, and the rest are configured individually:

/images/2019_03_11_09_13_41_447x495.jpg

View the existing partitions

Use the following command to view the current drive configuration:

# ./MegaCli64 -PDList -aAll | more

Take note of the relevant disk details, e.g.:

/images/2019_03_11_09_20_11_473x516.jpg

The Slot Numbers should be in ascending order: 0-23 are SATA disks and 38/39 are SAS disks. Record this set of values, because we will partition against them later. Slot Numbers start at 0.

View the RAID information:

# ./MegaCli64 -LDInfo -Lall -aAll

/images/2019_03_11_09_47_56_721x783.jpg

Except for virtual drive 0, all the others need to be deleted.

This seems problematic; install Proxmox first and then proceed.

RAID card configuration

Press F2 to bring up the configuration:

/images/2019_03_11_10_01_08_684x418.jpg

Delete Drive Group:

/images/2019_03_11_10_01_32_466x309.jpg

Final state:

/images/2019_03_11_10_03_06_597x376.jpg

Partitioning

The script is as follows: four RAID 5 groups, then four hot spares:

./MegaCli64 -CfgLdAdd -r5 [0:0,0:1,0:2,0:3,0:4] WB Direct -a0
./MegaCli64 -CfgLdAdd -r5 [0:5,0:6,0:7,0:8,0:9] WB Direct -a0
./MegaCli64 -CfgLdAdd -r5 [0:10,0:11,0:12,0:13,0:14] WB Direct -a0
./MegaCli64 -CfgLdAdd -r5 [0:15,0:16,0:17,0:18,0:19] WB Direct -a0
./MegaCli64 -PDHSP -Set -EnclAffinity -nonRevertible -PhysDrv[0:20] -a0
./MegaCli64 -PDHSP -Set -EnclAffinity -nonRevertible -PhysDrv[0:21] -a0
./MegaCli64 -PDHSP -Set -EnclAffinity -nonRevertible -PhysDrv[0:22] -a0
./MegaCli64 -PDHSP -Set -EnclAffinity -nonRevertible -PhysDrv[0:23] -a0
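
To verify the result, list the virtual drives and physical disks again (output will vary):

# ./MegaCli64 -LDInfo -Lall -aAll | grep -E 'Virtual Drive|RAID Level|Size'
# ./MegaCli64 -PDList -aAll | grep -E 'Slot|Firmware state'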

BootUpSequence

Server configuration:

/images/2019_03_08_10_46_27_657x334.jpg

Press the Del key to enter the BIOS and change the boot order:

/images/2019_03_08_10_46_52_404x161.jpg

Insert the ISO disc:

/images/2019_03_08_10_47_11_600x196.jpg

Select English to enter the installation screen:

/images/2019_03_08_10_47_37_437x464.jpg

Manual partitioning:

/images/2019_03_08_10_48_04_651x401.jpg

Existing partitions on sda:

/images/2019_03_08_10_48_24_498x336.jpg

After deleting them, create logical volumes:

/images/2019_03_08_10_48_52_527x249.jpg

Mount it to the root partition:

/images/2019_03_08_10_49_53_625x256.jpg

Final check of the disk layout:

/images/2019_03_08_10_50_08_528x210.jpg

Installation continues...

GRUB configuration; manually select /dev/sda:

/images/2019_03_08_10_50_41_565x297.jpg

Reboot and eject the disc:

/images/2019_03_08_10_51_09_450x162.jpg

System boot:

/images/2019_03_08_10_51_36_583x410.jpg

BuildingPWKCD

MakeISO Server

Configure the server via:

# apt-get update -y 
# apt-get install -y vim openssh-server

Install cubic via:

# apt-add-repository ppa:cubic-wizard/release
# apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6494C6D6997C215E
# apt-get update -y && apt-get install -y cubic

Make the ISO with Cubic

Start cubic via:

/images/2019_02_20_16_20_41_429x333.jpg

Create the ISO project folder:

# mkdir ~/isoproject

Select the original disk image to customize:

/images/2019_02_20_16_28_24_516x413.jpg

Cubic will copy the content from the original image into the project folder; this takes some time:

/images/2019_02_20_16_29_11_495x357.jpg

In the chroot terminal you can customize the CD:

/images/2019_02_20_16_32_14_516x207.jpg

CD customization

Install docker/docker-compose

# vim /etc/apt/sources.list    (change the mirror to 163.com)
# apt-get install -y python-pip && pip install docker-compose
# ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
# apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# apt-key fingerprint 0EBFCD88
# add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
# apt-get install docker-ce docker-ce-cli containerd.io
# docker version

The current version is:

/images/2019_02_20_16_52_47_505x133.jpg

Because we are inside a chroot, there is no Docker daemon running.

We fetch /var/lib/docker from the PWK-ready machine and transfer it into our chroot environment:

/images/2019_02_20_17_06_27_549x122.jpg
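
A minimal sketch of that transfer, assuming the PWK-ready machine is reachable at 192.168.0.10 (a hypothetical address) and Docker is stopped there first so the files are consistent:

# ssh root@192.168.0.10 'systemctl stop docker && tar czf - -C /var/lib docker' > docker-lib.tar.gz    (192.168.0.10 is a placeholder)
# tar xzf docker-lib.tar.gz -C /var/lib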

We also need Go to run the PWK environment:

# apt-get install -y golang
# vim /root/.bashrc
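
The .bashrc edit presumably sets the Go environment variables; a sketch consistent with the GOPATH used in the unit files below:

export GOPATH=/root/go
export PATH=$PATH:$GOPATH/bin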

Transfer the Go environment from the PWK-ready machine:

# ls /root
 go/ Code/

The systemd unit files:

root@test-Standard-PC-Q35-ICH9-2009:/etc/systemd/system# cat mynginx.service 
[Unit]
Description=mynginx
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a docker-nginx
ExecStop=/usr/bin/docker stop -t 2 docker-nginx

[Install]
WantedBy=multi-user.target

root@test-Standard-PC-Q35-ICH9-2009:/etc/systemd/system# cat playwithdockerblog.service 
[Unit]
Description=playwithdockerblog
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker-compose -f /root/Code/play-with-kubernetes.github.io/docker-compose.yml up -d

[Install]
WantedBy=multi-user.target
root@test-Standard-PC-Q35-ICH9-2009:/etc/systemd/system# cat playwithdocker.service 
[Unit]
Description=playwithdocker
After=docker.service
Requires=docker.service

[Service]
Environment=GOPATH=/root/go/
WorkingDirectory=/root/go/src/github.com/play-with-docker/play-with-docker
Type=idle
# Remove old container items
ExecStartPre=/usr/bin/docker-compose -f /root/go/src/github.com/play-with-docker/play-with-docker/docker-compose.yml down
# Compose up
ExecStart=/usr/bin/docker-compose -f /root/go/src/github.com/play-with-docker/play-with-docker/docker-compose.yml up -d

[Install]
WantedBy=multi-user.target
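
After placing the unit files, enable them in the usual way so they start on boot:

# systemctl daemon-reload
# systemctl enable mynginx playwithdockerblog playwithdocker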

To be continued; later I will change the static website content, make the k8s images available offline, and then do the ISO build.

NotesOnPlayWithK8s

Back in 2018 I built an offline version of play-with-k8s, based mainly on:

https://labs.play-with-k8s.com/

and

https://training.play-with-kubernetes.com/kubernetes-workshop/

At the time I also wrote a series of tutorials and produced an ISO for offline deployment. A few days ago a colleague used that ISO and reported that it would not run. I looked into the problems and summarize them below.

DNS problem

On the installed system dnsmasq does not work; it needs to be fixed with the following steps:

# vim /etc/systemd/resolved.conf
DNSStubListener=no
# systemctl disable systemd-resolved.service
# systemctl stop systemd-resolved.service
# echo nameserver 192.168.0.15 > /etc/resolv.conf
# apt-get install -y dnsmasq
# systemctl enable dnsmasq
# vim /etc/dnsmasq.conf
address=/192.168.122.151/192.168.122.151
address=/localhost/127.0.0.1
# chattr +i /etc/resolv.conf
# chattr -e /etc/resolv.conf
# ufw disable
# docker swarm leave
# docker swarm init
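
A quick check that dnsmasq is answering locally (nslookup is part of dnsutils; an assumption that it is installed):

# systemctl status dnsmasq
# nslookup localhost 127.0.0.1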

After this, the UI is reachable:

/images/2019_02_19_15_05_45_1065x630.jpg

kubeadm hangs

It gets stuck during init; see the screenshot above.

Analysis: it runs normally on a system with Docker 17.12.1-ce.

The Docker version installed from the ISO is 18.06.0-ce.

Is the Docker version incompatible with kubeadm?

Files generated by kubeadm (/etc/kubernetes):

/images/2019_02_19_15_08_49_565x229.jpg

Comparing kube-apiserver.yaml reveals a difference in imagePullPolicy:

/images/2019_02_19_15_10_09_1106x395.jpg

This is quite puzzling: why would the same container image version, franela/k8s:latest, behave so differently? Instances derived from the same container image carry the same built-in kubeadm and should generate identical YAML manifests, yet on our system, under the newer Docker version, the generated YAML lacks the IfNotPresent option, so kubeadm concludes the image does not exist and reports a timeout error.
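
The generated manifests can be inspected directly; in a standard kubeadm setup they live under /etc/kubernetes/manifests:

# grep imagePullPolicy /etc/kubernetes/manifests/*.yaml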

Possible solutions

I have not had time to work on this recently, so for now I will just write down my ideas.

  1. Use a registry to cache all the images (removing the docker load step) and fetch the gcr.io images straight from the cache; Docker pulls from the cache at startup. However, I am not sure whether gcr.io can be cached the way registry-1.docker.io can.

  2. Fake the gcr.io endpoint so it points to an internal registry, following the approach in my kubespray setup to fool dind.

Playing with Docker offline is really asking for trouble; one setback after another!

Update

Recently I could not settle into anything else, so I went ahead and finished the offline PWK. Here are the steps:

Update to the new play-with-kubernetes:

# git clone https://github.com/play-with-docker/play-with-kubernetes.github.io

Of course, we need to adapt it here to allow it to work offline.

The franela/k8s version used is roughly from the end of 2018:

franela/k8s              latest              c7038cbdbc5d        2 months ago        733MB

Because the kubeadm inside this container image has been upgraded to a fairly recent version, the following images need to be downloaded anew:

k8s.gcr.io/kube-proxy-amd64                v1.11.7             e9a1134ab5aa        4 weeks ago         98.1MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.7             d82b2643a56a        4 weeks ago         187MB
k8s.gcr.io/kube-controller-manager-amd64   v1.11.7             93fb4304c50c        4 weeks ago         155MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.7             52ea1e0a3e60        4 weeks ago         56.9MB
weaveworks/weave-npc                       2.5.1               789b7f496034        4 weeks ago         49.6MB
weaveworks/weave-kube                      2.5.1               1f394ae9e226        4 weeks ago         148MB
k8s.gcr.io/coredns                         1.1.3               b3b94275d97c        9 months ago        45.6MB
k8s.gcr.io/etcd-amd64                      3.2.18              b8df3b177be2        10 months ago       219MB
k8s.gcr.io/pause                           3.1                 da86e6ba6ca1        14 months ago       742kB
nginx                                      latest              05a60462f8ba        2 years ago         181MB
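
For offline use these images can be exported on a connected machine and imported on the target with the standard docker save/load pair; a sketch abbreviated to two images, with a hypothetical archive name:

# docker save k8s.gcr.io/kube-proxy-amd64:v1.11.7 k8s.gcr.io/kube-apiserver-amd64:v1.11.7 > k8s-images.tar    (k8s-images.tar is a placeholder; include every image from the table above)
# docker load < k8s-images.tar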

At the same time, the kubeadm init parameters need to be changed to:

# kubeadm init --apiserver-advertise-address $(hostname -i) --kubernetes-version=v1.11.7

In the multi-node chapter, kubeadm 1.11.7 conflicts with the original Calico 3.1, so we updated to the newer 3.5. Because docker-in-docker comes with two networks, we also need to specify the interface in calico.yaml to ensure the BGP tunnel is established on the correct network interface:

            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            - name: IP_AUTODETECTION_METHOD
              value: "interface=eth0.*"

With that, the new version of play-with-k8s is fully updated. It took five days in total; a bit grinding, but in retrospect those five days were worth it.