kubeadmssllifetime

Reason

The SSL certificate lifetime is only 1 year; we need to change it to 100 years.

Steps

Check out the specific version:

# git clone  https://github.com/kubernetes/kubernetes
# git checkout tags/v1.12.3 -b 1.12.3_local

Now edit the cert.go file:

# vim vendor/k8s.io/client-go/util/cert/cert.go

		NotAfter:              now.Add(duration365d * 100).UTC(),    // line 66
		NotAfter:     time.Now().Add(duration365d * 100).UTC(),  // line 111
	maxAge := time.Hour * 24 * 365 * 100         // 100-year self-signed certs  // line 96
		maxAge = 100 * time.Hour * 24 * 365 // 100 years fixtures  // line 110
		NotAfter:    validFrom.Add(100 * maxAge), // line 152, 124

Then build using the following commands:

# make all WHAT=cmd/kubeadm GOFLAGS=-v
# ls  _output/bin/kubeadm

Now use the newly built kubeadm to replace kubespray's kubeadm.

You also have to update the sha256 checksum of kubeadm in roles/download/defaults/main.yml:

kubeadm_checksums:
  v1.12.4: bc7988ee60b91ffc5921942338ce1d103cd2f006c7297dd53919f4f6d16079fa
  #v1.12.4: 674ad5892ff2403f492c9042c3cea3fa0bfa3acf95bc7d1777c3645f0ddf64d7

Deploy a cluster again; this time you will get certificates valid for 100 years:

root@k8s-1:/etc/kubernetes/ssl# pwd
/etc/kubernetes/ssl
root@k8s-1:/etc/kubernetes/ssl# for i in `ls *.crt`; do openssl x509 -in $i -noout -dates; done | grep notAfter
notAfter=Dec 11 05:34:10 2118 GMT
notAfter=Dec 11 05:34:11 2118 GMT
notAfter=Dec 11 05:34:10 2118 GMT
notAfter=Dec 11 05:34:11 2118 GMT
notAfter=Dec 11 05:34:12 2118 GMT

v1.12.5

Update to v1.12.5:

#  git remote -v
#  git fetch origin
#  git checkout tags/v1.12.5 -b 1.12.5_local
# git branch
  1.12.3_local
  1.12.4_local
* 1.12.5_local
  master
......make the same cert.go changes as above.....
# make all WHAT=cmd/kubeadm GOFLAGS=-v
# ls  _output/bin/kubeadm

kubeadm git tree state

Modify the file hack/lib/version.sh so both branches report a clean tree:

  if [[ -n ${KUBE_GIT_COMMIT-} ]] || KUBE_GIT_COMMIT=$("${git[@]}" rev-parse "HEAD^{commit}" 2>/dev/null); then
    if [[ -z ${KUBE_GIT_TREE_STATE-} ]]; then
      # Check if the tree is dirty.  default to dirty
      if git_status=$("${git[@]}" status --porcelain 2>/dev/null) && [[ -z ${git_status} ]]; then
        KUBE_GIT_TREE_STATE="clean"
      else
        KUBE_GIT_TREE_STATE="clean"
      fi
    fi

golang issue

Building kubeadm 1.14.1 requires golang 1.12 or newer.

# wget https://dl.google.com/go/go1.12.2.linux-amd64.tar.gz
# tar -xvf go1.12.2.linux-amd64.tar.gz
# sudo mv go /usr/local
# vim ~/.bashrc
export GOROOT=/usr/local/go
export GOPATH=/root/go/
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
# source ~/.bashrc
# go version
go version go1.12.2 linux/amd64

Now you can use the newer golang toolchain to build kubeadm v1.14.1.

1.14.1 kubeadm timestamp

Before:

# pwd
/etc/kubernetes/ssl
# for i in `ls *.crt`; do openssl x509 -in $i -noout -dates; done | grep notAfter
notAfter=May  4 07:20:04 2020 GMT
notAfter=May  4 07:20:03 2020 GMT
notAfter=May  2 07:20:03 2029 GMT
notAfter=May  2 07:20:04 2029 GMT
notAfter=May  4 07:20:05 2020 GMT

After replacement:

notAfter=May  4 08:13:02 2020 GMT
notAfter=May  4 08:13:02 2020 GMT
notAfter=Apr 11 08:13:01 2119 GMT
notAfter=Apr 11 08:13:02 2119 GMT
notAfter=May  4 08:13:03 2020 GMT

Some certificates still expire in 2020, so the cert.go change alone was not enough; one more modification is needed:

./cmd/kubeadm/app/util/pkiutil/pki_helpers.go
                NotAfter:     time.Now().Add(duration365d * 100).UTC(),  // line 578

arm64 (kubernetes 1.14.3)

Download the arm64 version of golang 1.12.2:

# wget https://dl.google.com/go/go1.12.2.linux-arm64.tar.gz
# tar xzvf go1.12.2.linux-arm64.tar.gz
# sudo mv go /usr/local
# vim ~/.bashrc
export GOROOT=/usr/local/go
export GOPATH=/root/go/
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
# source ~/.bashrc
# go version
go version go1.12.2 linux/arm64

Download the k8s 1.14.3 source code and unzip it:

# unzip kubernetes-1.14.3.zip
# cd kubernetes-1.14.3

Modify hack/lib/version.sh so that KUBE_GIT_TREE_STATE is always set to clean.

Also change the following two files, then build:

root@arm02:~/Code/kubernetes-1.14.3# vim cmd/kubeadm/app/util/pkiutil/pki_helpers.go
root@arm02:~/Code/kubernetes-1.14.3# vim vendor/k8s.io/client-go/util/cert/cert.go
root@arm02:~/Code/kubernetes-1.14.3#  make all WHAT=cmd/kubeadm GOFLAGS=-v

/images/2019_07_03_15_25_16_903x178.jpg

v1.15.3

Follow these steps:

# cd YOURKUBERNETES_FOLDER
# git fetch origin
# git checkout tags/v1.15.3 -b 1.15.3_local
# vim hack/lib/version.sh
      if git_status=$("${git[@]}" status --porcelain 2>/dev/null) && [[ -z ${git_status} ]]; then
        KUBE_GIT_TREE_STATE="clean"
      else
        KUBE_GIT_TREE_STATE="clean"
# vim cmd/kubeadm/app/constants/constants.go
        CertificateValidity = time.Hour * 24 * 365 * 100
# vim vendor/k8s.io/client-go/util/cert/cert.go
Edit the same lines as in v1.12.5:
		NotAfter:              now.Add(duration365d * 100).UTC(),    // line 66
		NotAfter:     time.Now().Add(duration365d * 100).UTC(),  // line 111
	maxAge := time.Hour * 24 * 365 * 100         // 100-year self-signed certs  // line 96
		maxAge = 100 * time.Hour * 24 * 365 // 100 years fixtures  // line 110
		NotAfter:    validFrom.Add(100 * maxAge), // line 152, 124
# make all WHAT=cmd/kubeadm GOFLAGS=-v
# ls  _output/bin/kubeadm

Note: in v1.15.3, once the CertificateValidity constant is set to 100 years, there is no need to modify pki_helpers.go.

v1.16.3

Compile it locally:

# wget https://codeload.github.com/kubernetes/kubernetes/zip/v1.16.3 -O kubernetes-1.16.3.zip
# unzip kubernetes-1.16.3.zip
# cd kubernetes-1.16.3
########################
### Make source code changes
# Note: the git tree state must be changed from "archived" to "clean"
########################
#####  Install golang
# sudo add-apt-repository ppa:longsleep/golang-backports
# sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys  F6BC817356A3D45E
# sudo apt-get update
# sudo apt-get install golang-1.12
# sudo apt-get purge golang-go
# vim ~/.profile
Add:
PATH="$PATH:/usr/lib/go-1.12/bin"
# source ~/.profile
# make all WHAT=cmd/kubeadm GOFLAGS=-v

This is because kubeadm currently (2019.12) must be compiled with golang 1.12.

Output:

➜  kubernetes-1.16.3 cd _output/bin 
➜  bin ls
conversion-gen  deepcopy-gen  defaulter-gen  go2make  go-bindata  kubeadm  openapi-gen
➜  bin ./kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-12-24T07:07:11Z", GoVersion:"go1.12.8", Compiler:"gc", Platform:"linux/amd64"}

WorkingTipsOnKubespray281

Changes

1. download items

In kubespray-2.8.1/roles/download/defaults/main.yml, the download info comes from the following definitions:

kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
hyperkube_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/amd64/hyperkube"
cni_download_url: "https://github.com/containernetworking/plugins/releases/download/{{ cni_version }}/cni-plugins-{{ image_arch }}-{{ cni_version }}.tgz"

The cni_version is defined in the following file:

./roles/download/defaults/main.yml:cni_version: "v0.6.0"

Download from the following locations:

https://storage.googleapis.com/kubernetes-release/release/v1.12.4/bin/linux/amd64/kubeadm
https://storage.googleapis.com/kubernetes-release/release/v1.12.4/bin/linux/amd64/hyperkube
https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz

Change them to:

#kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
#hyperkube_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/amd64/hyperkube"
etcd_download_url: "https://github.com/coreos/etcd/releases/download/{{ etcd_version }}/etcd-{{ etcd_version }}-linux-amd64.tar.gz"
#cni_download_url: "https://github.com/containernetworking/plugins/releases/download/{{ cni_version }}/cni-plugins-{{ image_arch }}-{{ cni_version }}.tgz"
kubeadm_download_url: "http://portus.xxxx.com:8888/kubeadm"
hyperkube_download_url: "http://portus.xxxx.com:8888/hyperkube"
cni_download_url: "http://portus.xxxx.com:8888/cni-plugins-{{ image_arch }}-{{ cni_version }}.tgz"

2. dashboard

In kubespray-2.8.1/roles/kubernetes-apps/ansible/templates/dashboard.yml.j2, add a NodePort definition:

spec:
+  type: NodePort
  ports:
    - port: 443
      targetPort: 8443

3. bootstrap-os

The following files were added:

portus.crt
server.crt
ntp.conf

Modify kubespray-2.8.1/roles/bootstrap-os/tasks/bootstrap-ubuntu.yml according to the previous version.

4. kube-deploy

TBD, changes later

5. reset

kubespray-2.8.1/roles/reset/tasks/main.yml

    - /etc/cni
    - "{{ nginx_config_dir }}"
#    - /etc/dnsmasq.d
#    - /etc/dnsmasq.conf
#    - /etc/dnsmasq.d-available

6. inventory definition

/kubespray-2.8.1/inventory/sample/group_vars/k8s-cluster/addons.yml

Enable helm and metrics-server.

Edit kubespray-2.8.1/inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml file:

helm_stable_repo_url: "https://portus.xxxx.com:5000/chartrepo/kubesprayns"

Also note the version of kubeadm, for example v1.12.4.

Remove the hosts.ini file.

7. kubeadm images

Use an official vagrant definition for downloading kubeadm images.

Vagrant temp

Use Vagrant to create the temporary machines.

Stop the service:

sudo systemctl stop secureregistryserver.service

Remove the old registry data, and start a new instance:

sudo rm -rf /usr/local/secureregistryserver/data/*
sudo systemctl start secureregistryserver.service

Load:

scp ./all.tar.bz2 vagrant@172.17.129.101:/home/vagrant
sudo docker load<all.tar.bz2

Then docker push all of the loaded images, and compress the folder:

sudo systemctl stop secureregistryserver.service
tar cvf /usr/local/secureregistryserver.tar /usr/local/secureregistryserver/
xz /usr/local/secureregistryserver.tar

The resulting tar.xz contains all of the offline images.

TipsOnSJ

Steps

Rook:

# docker save rook/ceph:master>ceph.tar; xz ceph.tar
# docker load<ceph.tar
# docker tag rook/ceph:master docker.registry/library/rook/ceph:master
# kubectl -n rook-ceph-system get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP                NODE       NOMINATED NODE
rook-ceph-agent-lf7zm                1/1     Running   0          11s   192.192.189.124   allinone   <none>
rook-ceph-operator-d88b68dd9-rfqws   1/1     Running   0          19m   10.233.81.146     allinone   <none>
rook-discover-rtghr                  1/1     Running   0          11s   10.233.81.149     allinone   <none>

Label the nodes:

# kubectl label nodes allinone ceph-mon=enabled
# kubectl label nodes allinone ceph-osd=enabled
# kubectl label nodes allinone ceph-mgr=enabled

Add one disk:

/images/2018_12_12_15_17_08_639x545.jpg

Examine via:

# fdisk -l /dev/vdb

Disk /dev/vdb: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Modify the cluster.yaml file:

    config:
      # The default and recommended storeType is dynamically set to bluestore for devices and filestore for directories.
      # Set the storeType explicitly only if it is required not to use the default.
      # storeType: bluestore
      databaseSizeMB: "1024" # this value can be removed for environments with normal sized disks (100 GB or larger)
      journalSizeMB: "1024"  # this value can be removed for environments with normal sized disks (20 GB or larger)
    nodes:
    - name: "allinone"
      devices: # specific devices to use for storage can be specified for each node
      - name: "vdb"
# kubectl apply -f cluster.yaml 

Modify to the newest version, because the master tag is older than our pulled images.

It depends on the ceph image:

#  sudo docker pull ceph/ceph:v13
# kubectl -n rook-ceph get pod -o wide  -w
# kubectl -n rook-ceph get pod -o wide
NAME                                   READY   STATUS      RESTARTS   AGE   IP              NODE       NOMINATED NODE
rook-ceph-mgr-a-588c74548f-wb4db       1/1     Running     0          74s   10.233.81.156   allinone   <none>
rook-ceph-mon-a-6cf75949cd-vqbfb       1/1     Running     0          89s   10.233.81.155   allinone   <none>
rook-ceph-osd-0-88d6dd79d-r9cxc        1/1     Running     0          49s   10.233.81.158   allinone   <none>
rook-ceph-osd-prepare-allinone-zgcjz   0/2     Completed   0          59s   10.233.81.157   allinone   <none>
# lsblk
# lsblk |grep vdb
vdb         252:16   0   80G  0 disk 

Get the password:

# kubectl edit svc rook-ceph-mgr-dashboard -n rook-ceph
type: NodePort
service/rook-ceph-mgr-dashboard edited
# MGR_POD=`kubectl get pod -n rook-ceph | grep mgr | awk '{print $1}'`
# kubectl -n rook-ceph logs $MGR_POD | grep password
2018-12-12 08:10:16.478 7f7062038700  0 log_channel(audit) log [DBG] : from='client.4114 10.233.81.152:0/963276398' entity='client.admin' cmd=[{"username": "admin", "prefix": "dashboard set-login-credentials", "password": "8XOWZALcFO", "target": ["mgr", ""], "format": "json"}]: dispatch

View dashboard via:

/images/2018_12_12_16_15_24_775x651.jpg

Create the pool and storage class:

# kubectl apply -f pool.yaml 
cephblockpool.ceph.rook.io/replicapool created
# kubectl  apply -f storageclass.yaml 
cephblockpool.ceph.rook.io/replicapool configured
storageclass.storage.k8s.io/rook-ceph-block created
# kubectl get sc
NAME              PROVISIONER          AGE
rook-ceph-block   ceph.rook.io/block   3s

Changes to the other parts are omitted.

ToDo

  1. busybox needs to be uploaded to the central server.
  2. Each node server needs to load the busybox image.
  3. Ansible needs to be integrated for the installation.

TipsOnKubespray28Upgrading

Initial configuration

Following the AI team's plan, this is based on Ubuntu 16.04; later it could also be based on Ubuntu 16.05, which should be the same. Create a virtual machine, 192.168.122.177/24, with redsocks installed.

Run visudo (NOPASSWD), then apt-get install -y nethogs build-essential libevent-dev.

After making the base image, shut down and undefine this virtual machine.

TipsOnKubesprayUpgrading

Purpose

The Rong deployment framework follows the kubespray community's upgrade strategy.

Prerequisites

Download the kubespray upgrade package, e.g. v2.8.0:

/images/2018_12_10_12_09_29_628x576.jpg

Extract this package into a directory, here:

# cd /var1/myimages/RongUpgrade/280
# ls
kubespray-2.8.0

Building a baseline image for fast image syncing

Set up a base environment installed from the Rong CentOS 7.5 ISO (Base/Rong_Base.qcow2). Build it as follows (if it has already been built, the steps below can be skipped):

Enable caching of downloaded rpm packages:

# vim /etc/yum.conf
keepcache=1

Restore the original repository configuration:

cd /etc/yum.repos.d
mv back/* .
mv kubespray_centos7.repo  back/

Install the packages needed to get through the GFW:

yum install -y gcc  libevent-devel
The redsocks configuration is not covered here.

Save it as /var1/myimages/RongUpgrade/Base/Rong_Base.qcow2; every later upgrade uses this source image. Now create an image for the 2.8.0 upgrade:

$ qemu-img create -f qcow2 -b Base/Rong_Base.qcow2 v280.qcow2

Configure the VM with 6 cores and 7 GB RAM, select the default network, and be sure to change the MAC address to match the one defined above:

/images/2018_12_10_12_27_06_497x283.jpg

Obtaining packages/images

Set up an environment for deployment:

# cp deploy.key .
# cp -r inventory/sample inventory/rong
# vim inventory/rong/hosts.ini
[all]
allinone ansible_host=192.168.122.166 ansible_ssh_user=root ansible_ssh_private_key_file=./deploy.key

[kube-deploy]
allinone

[kube-master]
allinone

[etcd]
allinone

[kube-node]
allinone

[k8s-cluster:children]
kube-master
kube-node

Deploy once:

# ansible-playbook -i inventory/rong/hosts.ini cluster.yml

In a fully proxied environment the deployment should succeed; at this point, back up the container images:

# docker images
.....
# docker images | sed -n '1!p' | awk {'print $1":"$2'}
# docker save -o docker.tar gcr.io/google-containers/kube-proxy:v1.12.3 gcr.io/google-containers/kube-apiserver:v1.12.3 gcr.io/google-containers/kube-controller-manager:v1.12.3 gcr.io/google-containers/kube-scheduler:v1.12.3 coredns/coredns:1.2.6 gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.3.0 gcr.io/google-containers/coredns:1.2.2 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.0 quay.io/coreos/etcd:v3.2.24 quay.io/calico/node:v3.1.3 quay.io/calico/ctl:v3.1.3 quay.io/calico/kube-controllers:v3.1.3 quay.io/calico/cni:v3.1.3 nginx:1.13 gcr.io/google-containers/pause:3.1 gcr.io/google_containers/pause-amd64:3.1
# cp docker.tar /mnt

The copied docker.tar contains all of the container images.

Back up the installation rpm packages:

# yum install -y nethogs
# cd /var/cache/
# mkdir -p /root/rpms
# find . | grep rpm$ | xargs -I % cp % /root/rpms/
# cd /root/rpms/
# createrepo .
# cp rpms.tar.gz /mnt

kubespray source changes

Change the configuration scripts:

# cp ~/roles/kube-deploy ./roles/kube-deploy
# Replace the rpms files and the docker.tar file
# cd kubespray-2.8.0/roles/kube-deploy/files
# rm -rf kubespray_centos7_rpms
# cp /mnt/rpms ./kubespray_centos7_rpms
# rm -f kubespray_images.tar.xz
# cp /mnt/docker.tar ./kubespray_images.tar
# xz kubespray_images.tar
# vim tag_and_push.sh
Change this to map our docker image names to the private registry names.

Change the download role:

# vim ./roles/download/defaults/main.yml
    # Download URLs
    #kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
    #hyperkube_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/amd64/hyperkube"
    #etcd_download_url: "https://github.com/coreos/etcd/releases/download/{{ etcd_version }}/etcd-{{ etcd_version }}-linux-amd64.tar.gz"
    #cni_download_url: "https://github.com/containernetworking/plugins/releases/download/{{ cni_version }}/cni-plugins-{{ image_arch }}-{{ cni_version }}.tgz"
    kubeadm_download_url: "http://portus.gggg.com:8888/kubeadm"
    hyperkube_download_url: "http://portus.gggg.com:8888/hyperkube"
    etcd_download_url: "https://github.com/coreos/etcd/releases/download/{{ etcd_version }}/etcd-{{ etcd_version }}-linux-amd64.tar.gz"
    cni_download_url: "http://portus.gggg.com:8888/cni-plugins-{{ image_arch }}-{{ cni_version }}.tgz"

Change the private image settings:

# vim inventory/rong/group_vars/k8s-cluster/k8s-cluster.yml
Get the image names:

# cat ./roles/download/defaults/main.yml | grep _image_repo:|grep -v kube_image_repo
e.g.:
etcd_image_repo: "quay.io/coreos/etcd"
flannel_image_repo: "quay.io/coreos/flannel"
flannel_cni_image_repo: "quay.io/coreos/flannel-cni"
calicoctl_image_repo: "quay.io/calico/ctl"
calico_node_image_repo: "quay.io/calico/node"
Add the prefix:
etcd_image_repo: "xxxxx/quay.io/coreos/etcd"
....

Change cluster.yml to skip the docker-ce installation:

- hosts: k8s-cluster:etcd:calico-rr:!kube-deploy
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  gather_facts: false


    - { role: kubernetes/preinstall, tags: preinstall }
    #- { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
    - { role: download, tags: download, when: "not skip_downloads" }
  environment: "{{proxy_env}}"


Change the deploy-centos configuration:

# roles/bootstrap-os/tasks/bootstrap-centos.yml
- name: Configure intranet repository
  shell: mkdir -p /etc/yum.repos.d/back && mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/back; curl http://portus.teligen.com:8888/kubespray_centos7.repo>/etc/yum.repos.d/kubespray_centos7.repo && yum makecache && systemctl stop firewalld ; systemctl disable firewalld


- name: Install packages requirements for bootstrap
  yum:
.....

Verification:

/images/2018_12_10_16_37_24_599x283.jpg