KismaticDisconnectedInstallationRHEL74

Purpose

Set up an environment on Red Hat 7.4 for automated Kubernetes deployment with Kismatic.

Environment Preparation

Software:

rhel-server-7.4-x86_64-dvd.iso
virt-manager
Network: 10.172.173.0/24, no DHCP.

Hardware:

4-core desktop, 32 GB RAM, roughly 200 GB of disk.

Deployment Node Preparation

Build a base rhel74 image. The VM build process below applies equally well to deploying on physical machines.

/images/2018_03_18_18_04_16_815x679.jpg

Click Installation Destination to set up partitioning:

/images/2018_03_18_18_04_35_371x196.jpg

As shown below, select "I will configure partitioning", then click Done to enter the partitioning screen:

/images/2018_03_18_18_05_08_471x566.jpg

Click "Click here to create them automatically":

/images/2018_03_18_18_05_44_471x279.jpg

Here we delete the swap and home partitions and manually resize the root partition to take all available space:

/images/2018_03_18_18_06_45_508x309.jpg

Resize the root partition as follows:

/images/2018_03_18_18_07_12_657x299.jpg

Click the Done button twice; a warning appears:

/images/2018_03_18_18_07_39_725x398.jpg

Click Accept Changes to accept the changes and move on to configuring Network & Host Name:

/images/2018_03_18_18_08_21_567x271.jpg

This leads to the installation screen; set the root password (and a user name/password, if you want an extra user) to finish the installation.

Basic System Configuration

Configure SELinux and the firewall, and disable subscription-manager:

# vi /etc/selinux/config
SELINUX=disabled
# systemctl disable firewalld
# vim /etc/yum/pluginconf.d/subscription-manager.conf
[main]
enabled=0
# mount /dev/sr0 /mnt
# vim /etc/yum.repos.d/local.repo
[local]
name=local
baseurl=file:///mnt
enabled=1
gpgcheck=0
# yum install -y vim httpd

Now save this basic system (see the sketch below); it will serve as the base image for later use.
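
How the base image is kept around is up to you; one sketch, assuming the freshly installed disk is /var/lib/libvirt/images/rhel74.qcow2 (a hypothetical name) so that the backing-file paths used below resolve from the images directory:

# virsh shutdown rhel74base-vm        # hypothetical VM name; or power it off from inside
# cd /var/lib/libvirt/images
# mkdir rhel74base
# mv rhel74.qcow2 rhel74base/rhel74base.qcow2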

http server and docker registry server

Create the image file:

# qemu-img create -f qcow2  -b rhel74base/rhel74base.qcow2 rheldeployserver.qcow2

From this image file, build a rhel7 VM with 1 CPU core and 1 GB of RAM.

/images/2018_03_18_21_15_52_398x415.jpg

After the system boots successfully, sync the package mirror repositories and the docker registry content (following the official Kismatic disconnected-installation guide). The detailed steps are:

TBD

Configure the package mirror repository:

# systemctl enable httpd
# systemctl start httpd

Configure docker-registry:

# tar xzvf docker-registry.tar.gz
# mv docker-registry /

Install the required packages:

# yum install -y net-tools createrepo wget

Create the repos:

# cd /var/www/html/
# for i in $(ls); do createrepo "$i"; done
# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# yum makecache

Install docker-compose:

# yum install -y python-pip
# pip install docker-compose

Install docker:

# yum install -y --setopt=obsoletes=0  docker-ce-17.03.0.ce-1.el7.centos
# systemctl enable docker
# systemctl start docker

Load the images needed by the registry:

# docker load<nginx.tar
# docker load<registry.tar
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
registry            <none>              d1fd7d86a825        2 months ago        33.3 MB
registry            2                   177391bcf802        3 months ago        33.3 MB
nginx               1.9                 c8c29d842c09        22 months ago       183 MB

Configure a system-level (systemd) service for docker-compose:

# vim  /etc/systemd/system/docker-compose.service
[Unit]
Description=DockerCompose
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker-compose -f /docker-registry/docker-compose.yml up -d

[Install]
WantedBy=multi-user.target
# systemctl start docker-compose
# systemctl enable docker-compose
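
The /docker-registry/docker-compose.yml referenced by this unit ships inside docker-registry.tar.gz and is not reproduced in these notes. For orientation only, a minimal sketch of what such a compose file could look like, assuming the registry:2 and nginx:1.9 images loaded above and a hypothetical ./nginx and ./data layout:

# cat > /docker-registry/docker-compose.yml <<'EOF'
# hypothetical sketch -- the real file comes from docker-registry.tar.gz
nginx:
  image: nginx:1.9
  ports:
    - 443:443
  links:
    - registry:registry
  volumes:
    # nginx vhost config and TLS certificates (assumed layout)
    - ./nginx/:/etc/nginx/conf.d
registry:
  image: registry:2
  environment:
    # store image data under /data inside the container
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - ./data:/data
EOF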

How to log in to and use the registry mirror:

# vim /etc/hosts
192.168.205.13	mirror.xxxxx.com
# docker login mirror.xxxx.com
Username (clouder): clouder
Password: 
Login Succeeded
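
Once logged in, images can be re-tagged and pushed into the mirror; an illustrative example using the nginx image loaded earlier (tag names are arbitrary):

# docker tag nginx:1.9 mirror.xxxx.com/nginx:latest
# docker push mirror.xxxx.com/nginx:latest
# docker pull mirror.xxxx.com/nginx:latest    # verify, e.g. from another host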

Now, whenever the network changes, just reconfigure the corresponding addresses and this VM can be used for deployments. The whole image is roughly 40 GB.

3-node kubernetes

Create the image files:

# qemu-img create -f qcow2 -b rhel74base/rhel74base.qcow2 node1.qcow2
# qemu-img create -f qcow2 -b rhel74base/rhel74base.qcow2 node2.qcow2
# qemu-img create -f qcow2 -b rhel74base/rhel74base.qcow2 node3.qcow2

CPU/memory configuration:

/images/2018_03_19_06_27_16_367x176.jpg

Network configuration:

/images/2018_03_19_06_27_33_371x264.jpg

node1, node2, node3:

node1, 10.172.173.11
node2, 10.172.173.12
node3, 10.172.173.13

Configuration

On each of the three machines, add the following entry to /etc/hosts:

10.172.173.2	mirror.xxxx.com

Here 10.172.173.2 is the address of the registry mirror server we configured.
Then define the corresponding entries in kismatic-cluster.yaml and run the deployment (a typical invocation is sketched below); once it finishes we end up with a k8s cluster that has three masters:
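
A rough sketch of driving the install with the kismatic CLI, assuming the kismatic binary sits in the working directory and the plan file keeps its default name kismatic-cluster.yaml:

# ./kismatic install plan        # generates kismatic-cluster.yaml to be filled in
# ./kismatic install validate    # preflight checks against the plan and the nodes
# ./kismatic install apply       # run the deployment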

[root@node1 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     master    2h        v1.9.0
node2     Ready     master    2h        v1.9.0
node3     Ready     master    2h        v1.9.0

This cluster can now be used for general development. First, let's set up high availability, ingress, and the like.

Edge Nodes

The edge nodes are defined as follows:

node1, 10.172.173.11
node2, 10.172.173.12
node3, 10.172.173.13

Install keepalived and ipvsadm on each of the three nodes:

# yum install -y keepalived ipvsadm
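
The keepalived configuration itself is not recorded in these notes. A minimal sketch for node1 acting as MASTER of the VIP 10.172.173.100 used below (the interface name, virtual_router_id and priorities are assumptions; node2/node3 would use state BACKUP with lower priorities):

# cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER              # BACKUP on node2 and node3
    interface eth0            # adjust to the actual NIC name
    virtual_router_id 51
    priority 100              # lower value on the BACKUP nodes
    advert_int 1
    virtual_ipaddress {
        10.172.173.100
    }
}
EOF
# systemctl enable keepalived
# systemctl start keepalived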

Traefik configuration files:

[root@node1 mytraefik]# cat traefik.yaml 
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-ingress-lb
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      restartPolicy: Always
      serviceAccountName: ingress
      containers:
      - image: mirror.xxxxx.com/traefik:latest
        imagePullPolicy: IfNotPresent
        name: traefik-ingress-lb
        resources:
          limits:
            cpu: 200m
            memory: 30Mi
          requests:
            cpu: 100m
            memory: 20Mi
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8580
          hostPort: 8580
        args:
        - --web
        - --web.address=:8580
        - --kubernetes
      nodeSelector:
        edgenode: "true"
[root@node1 mytraefik]# cat ui.yaml 
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8580
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik-ui.local
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
[root@node1 mytraefik]# cat ingress-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ingress
subjects:
  - kind: ServiceAccount
    name: ingress
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
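
The traefik DaemonSet above only schedules onto nodes carrying the edgenode=true label, so label the edge nodes first, for example:

# kubectl label node node1 edgenode=true
# kubectl label node node2 edgenode=true
# kubectl label node node3 edgenode=true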

Create the services:

# kubectl create -f ingress-rbac.yaml -f traefik.yaml -f ui.yaml

Now just add one line to /etc/hosts and the traefik UI becomes reachable:

10.172.173.100	traefik-ui.local

/images/2018_03_19_11_28_34_967x774.jpg

nginx service

The definition files are as follows:

[root@node1 mytraefik]# cat nginx.yaml 
apiVersion: extensions/v1beta1 
kind: Deployment 
metadata: 
  name: nginx-dm
spec: 
  replicas: 2
  template: 
    metadata: 
      labels: 
        name: nginx 
    spec: 
      containers: 
        - name: nginx 
          image: mirror.xxxx.com/nginx:latest 
          imagePullPolicy: IfNotPresent
          ports: 
            - containerPort: 80

---
apiVersion: v1 
kind: Service
metadata: 
  name: nginx-dm 
spec: 
  ports: 
    - port: 80
      targetPort: 80
      protocol: TCP 
  selector: 
    name: nginx
[root@node1 mytraefik]# cat traefik-ingress.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress
spec:
  rules:
  - host: nginx.xxxx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-dm
          servicePort: 80

Likewise, just add the corresponding entry to /etc/hosts on the external machine.
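
A quick external check, assuming the VIP 10.172.173.100 configured above and the nginx.xxxx.com host name from the ingress rule:

# echo "10.172.173.100  nginx.xxxx.com" >> /etc/hosts
# curl -I http://nginx.xxxx.com/    # should return the nginx default page headers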

iperfInk8s

yaml

Like the following:

# tcpprobe https://wiki.linuxfoundation.org/networking/tcpprobe
# use: apt install module-init-tools
# to install modprobe
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 1
  labels:
    run: ipref
  name: ipref
  namespace: default
spec:
  replicas: 12 # roughly one pod per node; adjust to the size of your cluster
  selector:
    matchLabels:
      run: ipref
  template:
    metadata:
      labels:
        run: ipref
    spec:
      containers:
      - command:
        - sleep
        - "infinity"
        image: networkstatic/iperf3
        imagePullPolicy: Always
        name: ipref
        resources: {}
        securityContext:
          capabilities:
            add:
            - ALL  # Kubernetes capability names omit the CAP_ prefix
          privileged: true
        volumeMounts:
          - mountPath: /dev
            name: dev
          - mountPath: /lib/modules
            name: modules
      volumes:
      - name: dev
        hostPath:
          # directory location on host
          path: /dev
      - name: modules
        hostPath:
          # directory location on host
          path: /lib/modules

You should change the image and imagePullPolicy to match your environment.
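
With the pods running, bandwidth between two nodes can be measured by starting the iperf3 server in one pod and the client in another; the pod names below are placeholders taken from kubectl get pods:

# kubectl get pods -o wide | grep ipref
# kubectl exec -it <server-pod-name> -- iperf3 -s                        # note this pod's IP
# kubectl exec -it <client-pod-name> -- iperf3 -c <server-pod-ip> -t 30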

ntpoffline

Server

Server-side configuration:

# yum install -y ntp
# vim /etc/ntp.conf

The configuration file is listed below:

driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1 
restrict ::1
restrict 192.168.0.0 mask 255.255.0.0 nomodify notrap
server 127.127.1.0  # local clock
fudge 127.127.1.0  stratum 10
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor

Disable the chronyd service so that ntpd can work properly:

# systemctl disable chronyd
# systemctl enable ntpd
# systemctl start ntpd
# systemctl disable firewalld

Client

Install via:

# yum install -y ntp

Configuration file:

driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1 
restrict ::1
server 192.168.122.200
# allow the upstream time server to actively adjust the local clock
restrict 192.168.122.200 nomodify notrap noquery
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor

Also disable the chronyd service and enable the ntpd service. The client will automatically sync with the server 192.168.122.200.
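
In commands, mirroring the server-side steps, plus a quick sync check:

# systemctl disable chronyd
# systemctl enable ntpd
# systemctl start ntpd
# ntpq -p    # the 192.168.122.200 peer should eventually be marked with '*'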

fabric8

What is fabric8

fabric8 is an open-source integrated development platform that provides continuous delivery for microservices based on Kubernetes and Jenkins. You can think of it as a Java-friendly open-source microservice management platform.

fabric8 can also be seen as a microservice DevOps platform. It provides a fully integrated open-source microservice platform that works out of the box on any Kubernetes or OpenShift environment.

/images/2018_03_12_09_03_57_1027x410.jpg

Reference:

https://jimmysong.io/posts/fabric8-introduction/

Setup (ArchLinux)

Install the required packages:

$ sudo pacman -S libvirt qemu dnsmasq ebtables

Add your user to the kvm and libvirt groups:

$ sudo usermod -a -G kvm,libvirt <username>

Update the libvirt-related configuration in /etc/libvirt/qemu.conf:

$ sudo sed -i -r 's/group=".+"/group="kvm"/' /etc/libvirt/qemu.conf

Refresh the current session so that the group change takes effect:

$ newgrp libvirt

In addition, we need packages from the yaourt (AUR) repository to get the docker-machine KVM driver:

$ sudo pacman -S docker-machine-kvm2 docker-machine
$ yaourt docker-machine-kvm

Install minishift:

$ yaourt minishift
$ minishift start --memory=7000 --cpus=4 --disk-size=50g

Once it has started, you can check the corresponding CPU/memory/disk allocation.
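
For example, with minishift's own subcommands:

$ minishift status    # shows whether the VM is running
$ minishift ip        # the VM's IPv4 address, used in the keycloak URL below
$ minishift ssh       # log into the VM and inspect with free -h / df -h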

Install fabric8 on minishift (I use oh-my-zsh):

$ echo 'export PATH=$PATH:~/.fabric8/bin' >> ~/.zshrc
$ source ~/.zshrc

Configure a GitHub client ID/secret; see:

https://developer.github.com/apps/building-integrations/setting-up-and-registering-oauth-apps/registering-oauth-apps/

The URL can be set to:

http://keycloak-fabric8.{minishift ipv4 value}.nip.io/auth/realms/fabric8/broker/github/endpoint

The homepage URL can be set to https://fabric8.io.

The client ID and client secret obtained above can then be exported as environment variables:

$ export GITHUB_OAUTH_CLIENT_ID=123
$ export GITHUB_OAUTH_CLIENT_SECRET=123abc

Then:

$ gofabric8 start --minishift --package=system  --namespace fabric8

After a long wait (you need a way around the GFW), the fabric8 environment will be ready; log in with the username/password "developer/developer".

fabric8 playing

Log in as system:admin and look at the projects:

$ oc login -u system:admin -n default
Logged into "https://192.168.42.131:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    developer
    developer-che
    developer-jenkins
    developer-run
    developer-stage
    fabric8
    kube-public
    kube-system
    myproject
    openshift
    openshift-infra
    openshift-node

Using project "default".

As you can see, the fabric8 namespaces have already been created.

WorkingTipsForArchLinuxSpring

Installation

Install the following packages:

$ sudo pacman -S maven community/intellij-idea-community-edition
$ sudo pacman -S jdk8-openjdk jdk9-openjdk
$ sudo archlinux-java set java-9-openjdk
$ archlinux-java status

Since IntelliJ requires JDK 8 or newer, you have to install a newer JDK implementation.

Correction: the community edition does not have Spring Boot support; use the ultimate edition instead:

$ yaourt intellij-idea-ultimate-edition

Primer

What is Spring Boot

Spring Boot focuses on simplicity: developers write less configuration, and applications start and run faster. It is the next-generation Java web framework and the foundation of Spring Cloud (microservices).

spring boot

Create a new project:

/images/2018_03_10_10_34_56_496x427.jpg

Plugins:

/images/2018_03_10_10_38_10_469x284.jpg

/images/2018_03_10_13_32_55_802x411.jpg

/images/2018_03_10_13_33_53_774x425.jpg

Import project:

/images/2018_03_10_15_36_18_639x257.jpg

mvn aliyun configuration

/images/2018_03_10_15_54_52_682x377.jpg

The Maven settings for the aliyun mirror are configured in /opt/maven/conf.