AlpineOffline

Background

I wanted to run play-with-docker offline, so I had to set up the whole offline environment. Chapter 2 of play-with-docker requires building an image based on alpine, and its apk add nodejs step needs an internet connection. To make the whole tutorial work offline, I had to set up an Alpine offline repository and get it working.

Vagrant Env

The libvirt vagrant box is easy to download and run via the following commands:

# vagrant init generic/alpine37
# vagrant up --provider=libvirt
# vagrant ssh
# cat /etc/issue
Welcome to Alpine Linux 3.7
Kernel \r on an \m (\l)

Repository Server

This server can reach the internet, so it can fetch packages and generate the installation repository.

Take nodejs for example; I will create its offline installation repository on this machine:

#  apk add nodejs

This command downloads all of the nodejs-related packages into the apk cache under /var/cache/apk (which /etc/apk/cache points to on this box); we can then build an installation index from these packages:

# apk index -vU --allow-untrusted -o /etc/apk/cache/APKINDEX.tar.gz /etc/apk/cache/*.apk

Use the following script to generate "cheating" symbolic links, so that apk can resolve package names without their checksum suffixes:

#!/bin/bash
cd /etc/apk/cache
for f in *.apk; do
    ln -s "$f" "${f:0:-13}.apk"
done

Your /etc/apk/cache directory will then look like the following:

# ls -l -h | more
total 98612
-rw-r--r--    1 root     root       14.4K Apr  6 08:31 APKINDEX.tar.gz
lrwxrwxrwx    1 root     root          28 Apr  6 08:32 abuild-3.1.0-r3.apk -> abuild-3.1.0-r3.e1614238.apk
-rw-r--r--    1 root     root       71.5K Apr  6 05:57 abuild-3.1.0-r3.e1614238.apk
lrwxrwxrwx    1 root     root          26 Apr  6 08:32 acct-6.6.4-r0.apk -> acct-6.6.4-r0.f2caf476.apk
-rw-r--r--    1 root     root       57.7K Mar 22 17:31 acct-6.6.4-r0.f2caf476.apk
-rw-r--r--    1 root     root        1.6K Mar 22 17:31 alpine-base-3.7.0-r0.6e79e3bb.apk
lrwxrwxrwx    1 root     root          33 Apr  6 08:32 alpine-base-3.7.0-r0.apk -> alpine-base-3.7.0-r0.6e79e3bb.apk

Use the following command to upload the repository to the HTTP server:

# scp -r /var/cache/apk/ xxx@192.168.122.1:/var/download/myapk/

On the server 192.168.122.1, in the folder /var/download/myapk, do the following (apk expects a <repo>/<arch>/ layout, so the uploaded packages should end up under /var/download/myapk/x86_64; rename the copied directory if necessary):

# ln -s x86_64 noarch
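
The repository should now be reachable over HTTP. Assuming a web server already serves /var/download as its document root (an assumption about this particular setup), a quick check from any machine:

$ curl -I http://192.168.122.1/myapk/x86_64/APKINDEX.tar.gz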

Client

On the client, configure the repository list:

# vim  /etc/apk/repositories 
    #https://dl-3.alpinelinux.org/alpine/v3.7/main
    #https://mirror.leaseweb.com/alpine/v3.7/main
    http://192.168.122.1/myapk

Update the package index via the following command:

# apk update --allow-untrusted
fetch http://192.168.122.1/myapk/x86_64/APKINDEX.tar.gz
OK: 110 distinct packages available

Install nodejs via the following command:

# apk add nodejs --allow-untrusted
(1/8) Installing nodejs-npm (8.9.3-r1)
(2/8) Installing c-ares (1.13.0-r0)
(3/8) Installing libcrypto1.0 (1.0.2o-r0)
(4/8) Installing http-parser (2.7.1-r1)
(5/8) Installing libssl1.0 (1.0.2o-r0)
(6/8) Installing libstdc++ (6.4.0-r5)
(7/8) Installing libuv (1.17.0-r0)
(8/8) Installing nodejs (8.9.3-r1)
Executing busybox-1.27.2-r8.trigger
OK: 228 MiB in 82 packages

TBD

You could add a trusted signature to the repository, but that requires obtaining a signing key first.
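
A minimal sketch of how this could be done with Alpine's standard abuild tooling (untested in this setup; the key file name is whatever abuild-keygen generates):

# apk add abuild
# abuild-keygen -a -i
# apk index -o /etc/apk/cache/APKINDEX.tar.gz /etc/apk/cache/*.apk
# abuild-sign -k ~/.abuild/<you>.rsa /etc/apk/cache/APKINDEX.tar.gz

abuild-keygen -a -i generates a key pair and installs the public key into /etc/apk/keys; clients that carry the matching public key can then drop --allow-untrusted.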

rpibenchmark

Install via:

 $ git clone https://github.com/kdlucas/byte-unixbench.git
Cloning into 'byte-unixbench'...
remote: Counting objects: 204, done.
remote: Total 204 (delta 0), reused 0 (delta 0), pack-reused 204
Receiving objects: 100% (204/204), 198.85 KiB | 207.00 KiB/s, done.
Resolving deltas: 100% (105/105), done.
pi@raspberrypi:~ $ cd byte-unixbench/UnixBench/

Run steps:

pi@raspberrypi:~/byte-unixbench/UnixBench $ ./Run 
gcc -o pgms/arithoh -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Darithoh src/arith.c 
gcc -o pgms/register -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum='register int' src/arith.c 
gcc -o pgms/short -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=short src/arith.c 
gcc -o pgms/int -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=int src/arith.c 
gcc -o pgms/long -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=long src/arith.c 
gcc -o pgms/float -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=float src/arith.c 
gcc -o pgms/double -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=double src/arith.c 
gcc -o pgms/hanoi -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/hanoi.c 
gcc -o pgms/syscall -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/syscall.c 
gcc -o pgms/context1 -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/context1.c 
gcc -o pgms/pipe -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/pipe.c 
gcc -o pgms/spawn -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/spawn.c 
gcc -o pgms/execl -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/execl.c 
gcc -o pgms/dhry2 -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DHZ= ./src/dhry_1.c ./src/dhry_2.c
gcc -o pgms/dhry2reg -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DHZ= -DREG=register ./src/dhry_1.c ./src/dhry_2.c
gcc -o pgms/looper -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/looper.c 
gcc -o pgms/fstime -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/fstime.c 
gcc -o pgms/whetstone-double -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DDP -DGTODay -DUNIXBENCH src/whets.c -lm
make all
make[1]: Entering directory '/home/pi/byte-unixbench/UnixBench'
make distr
make[2]: Entering directory '/home/pi/byte-unixbench/UnixBench'
Checking distribution of files
./pgms  exists
./src  exists
./testdir  exists
./tmp  exists
./results  exists
make[2]: Leaving directory '/home/pi/byte-unixbench/UnixBench'
make programs
make[2]: Entering directory '/home/pi/byte-unixbench/UnixBench'
make[2]: Nothing to be done for 'programs'.
make[2]: Leaving directory '/home/pi/byte-unixbench/UnixBench'
make[1]: Leaving directory '/home/pi/byte-unixbench/UnixBench'
sh: 1: 3dinfo: not found

   #    #  #    #  #  #    #          #####   ######  #    #   ####   #    #
   #    #  ##   #  #   #  #           #    #  #       ##   #  #    #  #    #
   #    #  # #  #  #    ##            #####   #####   # #  #  #       ######
   #    #  #  # #  #    ##            #    #  #       #  # #  #       #    #
   #    #  #   ##  #   #  #           #    #  #       #   ##  #    #  #    #
    ####   #    #  #  #    #          #####   ######  #    #   ####   #    #

   Version 5.1.3                      Based on the Byte Magazine Unix Benchmark

   Multi-CPU version                  Version 5 revisions by Ian Smith,
                                      Sunnyvale, CA, USA
   January 13, 2011                   johantheghost at yahoo period com

------------------------------------------------------------------------------
   Use directories for:
      * File I/O tests (named fs***) = /home/pi/byte-unixbench/UnixBench/tmp
      * Results                      = /home/pi/byte-unixbench/UnixBench/results
------------------------------------------------------------------------------

Use of uninitialized value in printf at ./Run line 1469.
Use of uninitialized value in printf at ./Run line 1470.
Use of uninitialized value in printf at ./Run line 1469.
Use of uninitialized value in printf at ./Run line 1470.
Use of uninitialized value in printf at ./Run line 1469.
Use of uninitialized value in printf at ./Run line 1470.
Use of uninitialized value in printf at ./Run line 1469.
Use of uninitialized value in printf at ./Run line 1470.
Use of uninitialized value in printf at ./Run line 1721.
Use of uninitialized value in printf at ./Run line 1722.
Use of uninitialized value in printf at ./Run line 1721.
Use of uninitialized value in printf at ./Run line 1722.
Use of uninitialized value in printf at ./Run line 1721.
Use of uninitialized value in printf at ./Run line 1722.
Use of uninitialized value in printf at ./Run line 1721.
Use of uninitialized value in printf at ./Run line 1722.

1 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

1 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

1 x Execl Throughput  1 2 3

1 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

1 x File Copy 256 bufsize 500 maxblocks  1 2 3

1 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

1 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

1 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

1 x Process Creation  1 2 3

1 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

1 x Shell Scripts (1 concurrent)  1 2 3

1 x Shell Scripts (8 concurrent)  1 2 3

4 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

4 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

4 x Execl Throughput  1 2 3

4 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

4 x File Copy 256 bufsize 500 maxblocks  1 2 3

4 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

4 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

4 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

4 x Process Creation  1 2 3

4 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

4 x Shell Scripts (1 concurrent)  1 2 3

4 x Shell Scripts (8 concurrent)  1 2 3

========================================================================
   BYTE UNIX Benchmarks (Version 5.1.3)

   System: raspberrypi: GNU/Linux
   OS: GNU/Linux -- 4.9.59-v7+ -- #1047 SMP Sun Oct 29 12:19:23 GMT 2017
   Machine: armv7l (unknown)
   Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
   CPU 0: ARMv7 Processor rev 4 (v7l) (0.0 bogomips)
          
   CPU 1: ARMv7 Processor rev 4 (v7l) (0.0 bogomips)
          
   CPU 2: ARMv7 Processor rev 4 (v7l) (0.0 bogomips)
          
   CPU 3: ARMv7 Processor rev 4 (v7l) (0.0 bogomips)
          
   12:29:03 up 6 min,  4 users,  load average: 0.16, 0.08, 0.02; runlevel 2018-03-07

------------------------------------------------------------------------
Benchmark Run: Sun Apr 01 2018 12:29:03 - 12:57:08
4 CPUs in system; running 1 parallel copy of tests

Dhrystone 2 using register variables        4333035.5 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     1060.4 MWIPS (9.9 s, 7 samples)
Execl Throughput                                505.3 lps   (29.9 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        142961.6 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks           41135.5 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        365027.4 KBps  (30.0 s, 2 samples)
Pipe Throughput                              282493.9 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                  58563.1 lps   (10.0 s, 7 samples)
Process Creation                               2040.7 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   1910.4 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    545.6 lpm   (60.1 s, 2 samples)
System Call Overhead                         572609.5 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0    4333035.5    371.3
Double-Precision Whetstone                       55.0       1060.4    192.8
Execl Throughput                                 43.0        505.3    117.5
File Copy 1024 bufsize 2000 maxblocks          3960.0     142961.6    361.0
File Copy 256 bufsize 500 maxblocks            1655.0      41135.5    248.6
File Copy 4096 bufsize 8000 maxblocks          5800.0     365027.4    629.4
Pipe Throughput                               12440.0     282493.9    227.1
Pipe-based Context Switching                   4000.0      58563.1    146.4
Process Creation                                126.0       2040.7    162.0
Shell Scripts (1 concurrent)                     42.4       1910.4    450.6
Shell Scripts (8 concurrent)                      6.0        545.6    909.3
System Call Overhead                          15000.0     572609.5    381.7
                                                                   ========
System Benchmarks Index Score                                         293.0

------------------------------------------------------------------------
Benchmark Run: Sun Apr 01 2018 12:57:08 - 13:25:34
4 CPUs in system; running 4 parallel copies of tests

Dhrystone 2 using register variables       13303366.2 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     3615.2 MWIPS (11.6 s, 7 samples)
Execl Throughput                               1843.0 lps   (30.0 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        177911.7 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks           49727.3 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        479258.8 KBps  (30.0 s, 2 samples)
Pipe Throughput                              931248.6 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                 163404.7 lps   (10.0 s, 7 samples)
Process Creation                               4630.4 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   4356.6 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    529.4 lpm   (60.3 s, 2 samples)
System Call Overhead                        2204195.4 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   13303366.2   1140.0
Double-Precision Whetstone                       55.0       3615.2    657.3
Execl Throughput                                 43.0       1843.0    428.6
File Copy 1024 bufsize 2000 maxblocks          3960.0     177911.7    449.3
File Copy 256 bufsize 500 maxblocks            1655.0      49727.3    300.5
File Copy 4096 bufsize 8000 maxblocks          5800.0     479258.8    826.3
Pipe Throughput                               12440.0     931248.6    748.6
Pipe-based Context Switching                   4000.0     163404.7    408.5
Process Creation                                126.0       4630.4    367.5
Shell Scripts (1 concurrent)                     42.4       4356.6   1027.5
Shell Scripts (8 concurrent)                      6.0        529.4    882.4
System Call Overhead                          15000.0    2204195.4   1469.5
                                                                   ========
System Benchmarks Index Score                                         646.8

Interpreting the results

The result of each test is an index value (e.g., 520), obtained by comparing the tested system's result against that of a baseline system; this makes the score easier to interpret than a raw number. The index values of all the tests in the suite are then combined into an overall index for the whole system.

UnixBench also supports multi-CPU systems. The default behavior is to run the tests twice: first with a single copy, then with N copies in parallel, where N is the number of CPUs. This is designed to measure the following (a worked example follows the list):

  • the system's single-task performance
  • the system's multi-task performance
  • the system's ability to exploit parallelism
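
Each index is simply the measured result divided by the baseline value, times 10. For instance, the single-copy Dhrystone index from the first run above can be reproduced with:

$ echo "4333035.5 / 116700.0 * 10" | bc -l
371.2969...

which matches the 371.3 in the table. The overall System Benchmarks Index Score is the geometric mean of the individual index values.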

gitlabciandgitbook

Environment

GitLab is set up and GitLab CI is configured.
Goal: an integrated, one-stop pipeline from code to release.

Code push -> generate an intermediate PDF artifact.
Code push -> update the static website.

Steps

Create a new project:

/images/2018_03_27_16_19_37_576x192.jpg

Name the project:

/images/2018_03_27_16_20_02_605x466.jpg

Push the existing code (the book) to the repository:

# git init
# git remote add origin http://192.168.109.2/root/mygitbook.git
# git add .
# git commit -m "Initial commit"
# git push -u origin master

At the same time, register a runner, and make sure the container image the runner depends on is ready; a sketch of how such an image could be built follows the screenshot below:

Note: the token to fill in below comes from Settings -> CI/CD -> General pipelines settings -> runner token; copy its value:

/images/2018_06_08_17_08_56_785x475.jpg
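
The mygitbook image used below was prepared in advance. A minimal sketch of how such an image could be built (the base image and package names are assumptions, not taken from this setup; gitbook pdf needs calibre's ebook-convert, and the CI script below uses sshpass and ssh-keyscan):

# cat Dockerfile
FROM node:8
RUN apt-get update && apt-get install -y calibre sshpass openssh-client \
 && npm install -g gitbook-cli && gitbook fetch
# docker build -t mygitbook:latest .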

[root@csnode1 ~]# gitlab-ci-multi-runner register
Running in system-mode.                            
                                                   
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
http://192.168.109.2/
Please enter the gitlab-ci token for this runner:
4dEFxKVNcmnoStBBDssd
Please enter the gitlab-ci description for this runner:
[csnode1]: myrunner
Please enter the gitlab-ci tags for this runner (comma separated):
myrunner tag
Whether to run untagged builds [true/false]:
[false]: 
Whether to lock Runner to current project [true/false]:
[false]: 
Registering runner... succeeded                     runner=4dEFxKVN
Please enter the executor: ssh, kubernetes, docker, docker-ssh, parallels, docker-ssh+machine, shell, virtualbox, docker+machine:
docker
Please enter the default Docker image (e.g. ruby:2.1):
mygitbook:latest
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

The runner fails:

/images/2018_03_28_14_01_12_468x180.jpg

Configure the image pull policy so the runner can use the local image:

# vim /etc/gitlab-runner/config.toml 
[[runners]]
  name = "myrunner"
  url = "http://192.168.109.2/"
  token = "8996297cd5d5d80bd23c0acade3b95"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "mygitbook:latest"
    privileged = false
    disable_cache = false
    volumes = ["/cache"]
    pull_policy = "if-not-present"
    shm_size = 0
  [runners.cache]
# gitlab-runner restart

The corresponding GitLab CI definition file is as follows:

image: "my_gitbook:latest"
stages:
  - gitbook_build_deploy

gitbook_build_deploy:
  stage: gitbook_build_deploy
  script:
    - gitbook build
    - gitbook pdf
    - ssh-keyscan -H 192.168.109.2 >> ~/.ssh/known_hosts
    - pwd && ls -l -h ./_book  && sshpass -p vagrant scp -P 22 -r _book/* vagrant@192.168.109.2:/home/vagrant/tmp
  artifacts:
    paths:
      - ./book.pdf
  tags:
    - myrunnertag

Each build now automatically generates a PDF available for download.
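
Besides the web UI, the artifact archive can also be fetched through GitLab's jobs API (a sketch; the project ID and token are placeholders, and the endpoint assumes a GitLab version shipping API v4):

$ curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "http://192.168.109.2/api/v4/projects/<project_id>/jobs/artifacts/master/download?job=gitbook_build_deploy"

This returns an archive containing the paths listed under artifacts (here book.pdf).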

Running buildiso with gitlabci

Options chosen when registering the runner for the CI/CD pipeline:

# gitlab-ci-multi-runner register
Running in system-mode.                            
                                                   
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
http://192.168.122.222/
Please enter the gitlab-ci token for this runner:
xy8MBdigELUSRnmnteMF
Please enter the gitlab-ci description for this runner:
[buildnode]: myrunner
Please enter the gitlab-ci tags for this runner (comma separated):
myrunner mytag
Whether to run untagged builds [true/false]:
[false]: true
Whether to lock Runner to current project [true/false]:
[false]: 
Registering runner... succeeded                     runner=xy8MBdig
Please enter the executor: docker-ssh+machine, docker, docker-ssh, shell, ssh, parallels, virtualbox, docker+machine, kubernetes:
shell
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded! 

WorkingTipsOnDockerComposeOfflineInstall

Steps

The steps to fetch the required packages are as follows (removing docker-clean makes apt keep the downloaded .deb files in its cache):

$ sudo docker run -it ubuntu:16.04 /bin/bash
# the following commands run inside the container
# rm -f /etc/apt/apt.conf.d/docker-clean
# apt-get update
# apt-get install docker-compose dpkg-dev

Collect all the .deb packages into one directory and generate the apt package index:

# mkdir -p /root/pkgs
# cd /var/cache
# find . | grep deb$ | xargs -I % cp % /root/pkgs
# cd /root/pkgs
# dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

Copy the whole directory out to the host and pack it up:

# sudo docker cp d1023d1b0a1f:/root/pkgs /xxxxxx/xxxxx
# tar czvf /xxxx/xxxx.tar.gz /xxxxxx/xxxx

Usage

Download URL:

http://192.192.189.127/big.tar.gz

Download the big tarball; extracting it yields the following four items:

# tar xzvf big.tar.gz
# ls big
images/ pkgs.tar.gz  docker-registry.tar.gz devdockerCA.crt

Take pkgs.tar.gz and place it under your own web server's document root, e.g.:
/images/2018_03_20_09_22_07_724x495.jpg

Configure apt on the node:

# vim /etc/apt/sources.list
deb http://192.192.189.127/pkgs/	/
# apt-get update
# apt-get install docker-compose

This installs docker-compose and docker; once installation finishes, docker-compose is ready to use.
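
A quick sanity check that the offline installation worked (prints the installed versions):

# docker --version
# docker-compose --version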

Load the images

Load the registry:2 image and the nginx image needed by the registry:

# docker load<images/1.tar
# docker load<images/2.tar

Extract the docker-registry directory and add the service

Extract it to the target directory (any directory works; /docker-registry is used as the example here):

# tar xzvf docker-registry.tar.gz -C /
# ls /docker-registry
data	docker-compose.yml nginx

Create the systemd service unit:

# vim /etc/systemd/system/docker-compose.service
[Unit]
Description=DockerCompose
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker-compose -f /docker-registry/docker-compose.yml up -d

[Install]
WantedBy=multi-user.target

Enable and start the docker-compose service:

# sudo systemctl enable docker-compose.service
# sudo systemctl start docker-compose.service
# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                          NAMES
50d2ec00e967        nginx:1.9           "nginx -g 'daemon ..."   26 hours ago        Up 3 seconds        80/tcp, 0.0.0.0:443->443/tcp   dockerregistry_nginx_1
e4e2cee1bf21        registry:2          "/entrypoint.sh /e..."   26 hours ago        Up 5 seconds        127.0.0.1:5000->5000/tcp       dockerregistry_registry_1

With that, the whole registry service is set up. It can be accessed via https://YourIP, and the CA certificate (devdockerCA.crt) can be found in the directory.

Using the image registry

Redhat rh74:

# mkdir -p /etc/docker/certs.d/mirror.xxxx.com
# wget -O /etc/docker/certs.d/mirror.xxxx.com/ca.crt xxxxxx.xxxx.xxx.com/ca.crt
# docker login -u xxxx -p xxxx mirror.xxxx.com
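
Once logged in, images can be tagged and pushed to the mirror as usual (busybox here is only an illustration):

# docker tag busybox:latest mirror.xxxx.com/busybox:latest
# docker push mirror.xxxx.com/busybox:latest
# docker pull mirror.xxxx.com/busybox:latest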

KismaticDisconnectedInstallationRHEL74

Purpose

Set up a Kismatic-based automated Kubernetes deployment environment on Red Hat 7.4.

Environment preparation

Software:

rhel-server-7.4-x86_64-dvd.iso
virt-manager
network 10.172.173.0/24, no DHCP

Hardware:

a 4-core desktop with 32 GB of RAM and about 200 GB of disk

Deployment node preparation

Build the rhel74 base image; the same VM build process also applies to deploying physical machines.

/images/2018_03_18_18_04_16_815x679.jpg

Click Installation Destination to configure partitioning:

/images/2018_03_18_18_04_35_371x196.jpg

As shown below, select I will configure partitioning, then click Done to enter the partitioning screen:

/images/2018_03_18_18_05_08_471x566.jpg

Click 'Click here to create them automatically':

/images/2018_03_18_18_05_44_471x279.jpg

Here we delete the swap partition and the home partition, and manually grow the root partition to take all available space:

/images/2018_03_18_18_06_45_508x309.jpg

Resize the root partition as follows:

/images/2018_03_18_18_07_12_657x299.jpg

Click the Done button twice; a warning appears:

/images/2018_03_18_18_07_39_725x398.jpg

Click Accept to accept the changes and move on, then configure Network & Host Name:

/images/2018_03_18_18_08_21_567x271.jpg

This brings you to the installation screen; set the root password (and create a user, if you want one) to complete the installation.

Configure the base system

Configure SELinux and the firewall, disable the subscription plugin, and set up a local yum repo from the install DVD:

# vi /etc/selinux/config
SELINUX=disabled
# systemctl disable firewalld
# vim /etc/yum/pluginconf.d/subscription-manager.conf
[main]
enabled=0
# mount /dev/sr0 /mnt
# vim /etc/yum.repos.d/local.repo
[local]
name=local
baseurl=file:///mnt
enabled=1
gpgcheck=0
# yum install -y vim httpd

Now save this base system; it serves as the base image for later use.

http server and docker registry server

Create the disk image:

# qemu-img create -f qcow2  -b rhel74base/rhel74base.qcow2 rheldeployserver.qcow2

From this image, build a rhel7 VM with 1 vCPU and 1 GB of RAM.

/images/2018_03_18_21_15_52_398x415.jpg

After the system boots successfully, sync the package mirror and the registry contents (following the official kismatic guide); the detailed steps are:

TBD

Configure the package mirror:

# systemctl enable httpd
# systemctl start httpd

Configure docker-registry:

# tar xzvf docker-registry.tar.gz
# mv docker-registry /

Install the necessary packages:

# yum install -y net-tools createrepo wget

Create the repos:

# cd /var/www/html/
# for i in *; do createrepo "$i"; done
# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# yum makecache
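
Client nodes can later consume these repos with a repo file along these lines (the baseurl path "base" is an assumption about how /var/www/html is laid out; use whichever directories were createrepo'd above):

# cat /etc/yum.repos.d/mirror.repo
[mirror]
name=mirror
baseurl=http://10.172.173.2/base/
enabled=1
gpgcheck=0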

Install docker-compose:

# yum install -y python-pip
# pip install docker-compose

Install docker:

# yum install -y --setopt=obsoletes=0  docker-ce-17.03.0.ce-1.el7.centos
# systemctl enable docker
# systemctl start docker

Load the images required by the registry:

# docker load<nginx.tar
# docker load<registry.tar
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
registry            <none>              d1fd7d86a825        2 months ago        33.3 MB
registry            2                   177391bcf802        3 months ago        33.3 MB
nginx               1.9                 c8c29d842c09        22 months ago       183 MB

Configure the system-level service for docker-compose:

# vim  /etc/systemd/system/docker-compose.service
[Unit]
Description=DockerCompose
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker-compose -f /docker-registry/docker-compose.yml up -d

[Install]
WantedBy=multi-user.target
# systemctl start docker-compose
# systemctl enable docker-compose

To log in to and use the registry mirror:

# vim /etc/hosts
192.168.205.13	mirror.xxxxx.com
# docker login mirror.xxxx.com
Username (clouder): clouder
Password: 
Login Succeeded

Now, whenever the network changes, just point this hosts entry at the new address and the VM can be used for deployment. The whole image is about 40 GB.

3-node kubernetes

Create the disk images:

# qemu-img create -f qcow2 -b rhel74base/rhel74base.qcow2 node1.qcow2
# qemu-img create -f qcow2 -b rhel74base/rhel74base.qcow2 node2.qcow2
# qemu-img create -f qcow2 -b rhel74base/rhel74base.qcow2 node3.qcow2

CPU/memory configuration:

/images/2018_03_19_06_27_16_367x176.jpg

Network configuration:

/images/2018_03_19_06_27_33_371x264.jpg

node1, node2, node3:

node1, 10.172.173.11
node2, 10.172.173.12
node3, 10.172.173.13

Configuration

On each of the three machines, add the following entry to /etc/hosts:

10.172.173.2	mirror.xxxx.com

Here 10.172.173.2 is the address of the registry mirror server we configured.
Then fill in the corresponding entries in kismatic-cluster.yaml and start the deployment (see the command sketch below); once it finishes we have a k8s cluster with three masters.
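
The usual kismatic workflow is roughly the following (a sketch; the exact plan-file fields for the registry mirror and the disconnected options depend on the kismatic version):

# ./kismatic install plan        # generates kismatic-cluster.yaml
# vim kismatic-cluster.yaml      # node IPs, registry mirror, disconnected options
# ./kismatic install validate
# ./kismatic install apply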

[root@node1 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     master    2h        v1.9.0
node2     Ready     master    2h        v1.9.0
node3     Ready     master    2h        v1.9.0

On top of this cluster we can do general development. First, let's set up high availability and an ingress.

Edge nodes

The edge nodes we define are as follows:

node1, 10.172.173.11
node2, 10.172.173.12
node3, 10.172.173.13

Install keepalived and ipvsadm on each of the three nodes:

# yum install -y keepalived ipvsadm
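
keepalived will float a virtual IP across the edge nodes. A minimal sketch of /etc/keepalived/keepalived.conf (the interface name and priorities are assumptions; the VIP 10.172.173.100 matches the /etc/hosts entry used below):

# cat /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER              # use state BACKUP and a lower priority on the other two nodes
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.172.173.100
    }
}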

Configuration files:

[root@node1 mytraefik]# cat traefik.yaml 
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-ingress-lb
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      restartPolicy: Always
      serviceAccountName: ingress
      containers:
      - image: mirror.xxxxx.com/traefik:latest
        imagePullPolicy: IfNotPresent
        name: traefik-ingress-lb
        resources:
          limits:
            cpu: 200m
            memory: 30Mi
          requests:
            cpu: 100m
            memory: 20Mi
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8580
          hostPort: 8580
        args:
        - --web
        - --web.address=:8580
        - --kubernetes
      nodeSelector:
        edgenode: "true"
[root@node1 mytraefik]# cat ui.yaml 
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8580
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik-ui.local
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
[root@node1 mytraefik]# cat ingress-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ingress
subjects:
  - kind: ServiceAccount
    name: ingress
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Create the services:

# kubectl create -f ingress-rbac.yaml -f traefik.yaml -f ui.yaml

Now just add one line to /etc/hosts and the traefik UI becomes accessible:

10.172.173.100	traefik-ui.local

/images/2018_03_19_11_28_34_967x774.jpg

nginx service

The definition files are as follows:

[root@node1 mytraefik]# cat nginx.yaml 
apiVersion: extensions/v1beta1 
kind: Deployment 
metadata: 
  name: nginx-dm
spec: 
  replicas: 2
  template: 
    metadata: 
      labels: 
        name: nginx 
    spec: 
      containers: 
        - name: nginx 
          image: mirror.xxxx.com/nginx:latest 
          imagePullPolicy: IfNotPresent
          ports: 
            - containerPort: 80

---
apiVersion: v1 
kind: Service
metadata: 
  name: nginx-dm 
spec: 
  ports: 
    - port: 80
      targetPort: 80
      protocol: TCP 
  selector: 
    name: nginx
[root@node1 mytraefik]# cat traefik-ingress.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress
spec:
  rules:
  - host: nginx.xxxx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-dm
          servicePort: 80

Likewise, adding the corresponding entry to /etc/hosts on the external machine makes the nginx service reachable through the ingress.
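
A quick check from a machine with that hosts entry (or by passing the Host header directly; 10.172.173.100 is the keepalived VIP from above):

# curl -H "Host: nginx.xxxx.com" http://10.172.173.100/

This should return the default nginx welcome page, served through traefik.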