WorkingTipsOnPrivateHelmRepo

Steps

Create the first chart, named nginxfirst, as follows:

# mkdir nginxfirst
# cd nginxfirst/
# ls
# helm create nginxfirst
Creating nginxfirst
# tree
.
└── nginxfirst
    ├── charts
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress.yaml
    │   ├── NOTES.txt
    │   └── service.yaml
    └── values.yaml

3 directories, 7 files

Edit the values.yaml file:

replicaCount: 1
image:
  repository: mirror.teligen.com/nginx
  tag: 1.7.9
  pullPolicy: IfNotPresent
service:
  name: nginx
  type: ClusterIP
  externalPort: 80
  internalPort: 80

Keep the other values unchanged.

Pass --dry-run when you want to verify the configuration without actually installing the release.
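
A quick sketch of such a dry run, executed from the chart directory (--debug additionally prints the rendered manifests):

# helm install --dry-run --debug . --set service.type=NodePort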

Install this chart via:

# helm install --name firstnginx . --set service.type=NodePort

Get the URL via the commands shown in the chart's NOTES output:

NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services firstnginx-nginxfirst)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT

Finally you will see a running nginx instance.

Package and Serve

Package the modified chart via the following command:

[root@DashSSD nginxfirst]# helm package .
Successfully packaged chart and saved it to: /home/dash/Code/tmp/nginxfirst/nginxfirst/nginxfirst-0.1.0.tgz
[root@DashSSD nginxfirst]# ls
charts  Chart.yaml  nginxfirst-0.1.0.tgz  templates  values.yaml  values.yaml~
[root@DashSSD nginxfirst]# helm lint
==> Linting .
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
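
Since the goal here is a private Helm repo, a minimal sketch of serving the packaged chart follows (the directory, URL, and repo name are placeholders; any static HTTP server that exposes the directory will do):

# mkdir -p /var/www/html/charts
# cp nginxfirst-0.1.0.tgz /var/www/html/charts/
# helm repo index /var/www/html/charts --url http://10.15.205.2/charts
# helm repo add myrepo http://10.15.205.2/charts
# helm search nginxfirst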

TroubleShootingOnhelm

I have been trying out helm these past few days. It is a very interesting package manager that greatly lowers the barrier to rolling out container-based solutions. I was then building a front-end, web-based repository solution using monocular, and what puzzled me was that monocular deployed successfully on minikube but simply would not work on the Ubuntu-based k8s cluster I had built myself.

Troubleshooting: kubectl get pods showed that it was always the mongodb pod that failed to deploy.

The kubernetes dashboard showed that the pod failed because the persistent volume mount did not succeed.

On minikube, kubectl get pv and kubectl get pvc return complete results, and they show that hostPath volumes are being used.

Download the monocular chart locally and inspect the directory structure:

# helm fetch monocular/monocular
# tree
.
├── charts
│   └── mongodb
│       ├── Chart.yaml
│       ├── README.md
│       ├── templates
│       │   ├── deployment.yaml
│       │   ├── _helpers.tpl
│       │   ├── NOTES.txt
│       │   ├── pvc.yaml
│       │   ├── secrets.yaml
│       │   └── svc.yaml
│       └── values.yaml
├── Chart.yaml
├── README.md
├── requirements.lock

Then we look at the persistence settings in the chart and find that they are configured under charts/mongodb:

# cat values.yaml  | grep -i persistence -A5
    ## Enable persistence using Persistent Volume Claims
    ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    persistence:
      enabled: true
      ## mongodb data Persistent Volume Storage Cla

So we reinstall the chart with the following commands:

# helm ls
.... // panda stands for the installed helm instance
# helm delete panda
# helm install --name=monkey --set "persistence.enabled=false,mongodb.persistence.enabled=false"  monocular/monocular

Refreshing now shows that the installation has succeeded:

# kubectl get pods
NAME                                                        READY     STATUS    RESTARTS   AGE
monkey-mongodb-66fd888d4-k66tg                              1/1       Running   0          19m
monkey-monocular-api-5fd987957-rtmqq                        1/1       Running   6          19m
monkey-monocular-api-5fd987957-wqxds                        1/1       Running   6          19m
monkey-monocular-prerender-6b7cb5cc98-gxs8b                 1/1       Running   0          19m
monkey-monocular-ui-8c776fd89-5hbcg                         1/1       Running   0          19m
monkey-monocular-ui-8c776fd89-gz8jm                         1/1       Running   0          19m
my-release-nginx-ingress-controller-74c748b9fb-9xtfv        1/1       Running   7          17h
my-release-nginx-ingress-default-backend-64f764b667-gxkht   1/1       Running   4          17h
# kubectl get ingress
NAME               HOSTS     ADDRESS         PORTS     AGE
monkey-monocular   *         10.15.205.200   80        20m

Opening the web page, monocular is reachable, but its charts list is not displayed yet. Why?

To deploy the application quickly and avoid pulling images again every time, set the pull policies to IfNotPresent:

# helm install --name=tiger --set "persistence.enabled=false,mongodb.persistence.enabled=false,pullPolicy=IfNotPresent,api.image.pullPolicy=IfNotPresent,ui.image.pullPolicy=IfNotPresent,prerender.image.pullPolicy=IfNotPresent" monocular/monocular

helmWorkingtips

minikube

Installation and initialization:

$ sudo cp /media/sda5/kismatic/allinone/helm /usr/bin
$ sudo chmod 777 /usr/bin/helm
$ helm version
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Error: cannot connect to Tiller
$ helm init
$HELM_HOME has been configured at /home/xxxx/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈ 
$ helm ls
$ helm search
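
If helm version keeps reporting "cannot connect to Tiller" after helm init, a quick check that the Tiller deployment actually came up (tiller-deploy is the deployment name helm init creates by default):

$ kubectl get pods -n kube-system | grep tiller
$ kubectl get deploy tiller-deploy -n kube-system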

monocular

In minikube, we should use hostNetwork mode:

Prerequisites:

$ helm install stable/nginx-ingress --set controller.hostNetwork=true

If on kismatic, run the following:

$ helm install stable/nginx-ingress --set controller.hostNetwork=true,rbac.create=true

Then install monocular via the following commands:

$ helm repo add monocular https://kubernetes-helm.github.io/monocular
$ helm install monocular/monocular

Check the installed packages and their running status:

$ helm ls
NAME             	REVISION	UPDATED                 	STATUS  	CHART               	NAMESPACE
fallacious-jaguar	1       	Thu Jan 11 11:56:21 2018	DEPLOYED	nginx-ingress-0.8.23	default  
incindiary-prawn 	1       	Thu Jan 11 11:58:42 2018	DEPLOYED	monocular-0.5.0     	default  
$ kubectl get pods
NAME                                                              READY     STATUS              RESTARTS   AGE
fallacious-jaguar-nginx-ingress-controller-55cd4578cb-vpn2q       1/1       Running             0          3m
fallacious-jaguar-nginx-ingress-default-backend-5b7d684c6fdzk2m   1/1       Running             0          3m
hello-minikube-7844bdb9c6-596f9                                   1/1       Running             4          11d
incindiary-prawn-mongodb-5d96bdcbc5-47js2                         0/1       ContainerCreating   0          37s
incindiary-prawn-monocular-api-7758c78d8f-j64qx                   0/1       ContainerCreating   0          37s
incindiary-prawn-monocular-api-7758c78d8f-kb7nq                   0/1       ContainerCreating   0          37s
incindiary-prawn-monocular-prerender-65b576dd76-jwvmc             0/1       ContainerCreating   0          37s
incindiary-prawn-monocular-ui-5545f44ffb-7557l                    0/1       ContainerCreating   0          37s
incindiary-prawn-monocular-ui-5545f44ffb-bltmc                    0/1       ContainerCreating   0          37s

Watch the deployment and get the ingress address:

# kubectl get pods --watch
# kubectl get ingress
NAME                         HOSTS     ADDRESS          PORTS     AGE
incindiary-prawn-monocular   *         192.168.99.100   80        2h
# firefox 192.168.99.100

Displayed image:

Deploy Wordpress

Deploy with the following command:

# helm install --name=wordpress-test1 --set "persistence.enabled=false,mariadb.persistence.enabled=false,serviceType=ClusterIP" stable/wordpress

Examine the deployment:

# kubectl get pods | grep wordpress
wordpress-test1-mariadb-56c66786cc-2nj8c                          0/1       PodInitializing     0          25s
wordpress-test1-wordpress-6c949bdcb4-22fk4                        0/1       ContainerCreating   0          25s
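
Because serviceType=ClusterIP was used, WordPress is only reachable from inside the cluster; a quick way to inspect the services (the names are assumed to follow the release name above):

# kubectl get svc | grep wordpress-test1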

BuildKismaticAllInOne

Start

Build a virtual machine with 4 CPU cores and 8 GB of RAM, change its IP address to 10.15.205.100, and set its hostname to allinone.

# cd /etc/yum.repos.d/
# mkdir back
# mv *.repo back
# curl http://10.15.205.2/base.repo>base.repo
# yum makecache

Note that base.repo points at the repositories we mirrored from the internet.

Edit the hosts file to add the registry entry (domain name to IP address):

# vim /etc/hosts
10.15.205.2	mirror.xxxx.com

Make sure your public key has been added to /root/.ssh/authorized_keys, then create the directory for deployment:

$ sudo cp -r kismatic_for_1015205 allinone
$ cd allinone
$ sudo rm -rf generated*

Configuration

The example configuration file is listed as follows:

cluster:
  name: kubernetes

  # Set to true if the nodes have the required packages installed.
  disable_package_installation: false

  # Set to true if you are performing a disconnected installation.
  disconnected_installation: true

  # Networking configuration of your cluster.
  networking:

    # Kubernetes will assign pods IPs in this range. Do not use a range that is
    # already in use on your local network!
    pod_cidr_block: 172.16.0.0/16

    # Kubernetes will assign services IPs in this range. Do not use a range
    # that is already in use by your local network or pod network!
    service_cidr_block: 172.20.0.0/16

    # Set to true if your nodes cannot resolve each others' names using DNS.
    update_hosts_files: true

    # Set the proxy server to use for HTTP connections.
    http_proxy: ""

    # Set the proxy server to use for HTTPs connections.
    https_proxy: ""

    # List of host names and/or IPs that shouldn't go through any proxy.
    # All nodes' 'host' and 'IPs' are always set.
    no_proxy: ""

  # Generated certs configuration.
  certificates:

    # Self-signed certificate expiration period in hours; default is 2 years.
    expiry: 17520h

    # CA certificate expiration period in hours; default is 2 years.
    ca_expiry: 17520h

  # SSH configuration for cluster nodes.
  ssh:

    # This user must be able to sudo without password.
    #user: kismaticuser
    user: root

    # Absolute path to the ssh private key we should use to manage nodes.
    ssh_key: /media/sda5/kismatic/allinone/kismaticuser.key
    ssh_port: 22

  # Override configuration of Kubernetes components.
  kube_apiserver:
    option_overrides: {}

  kube_controller_manager:
    option_overrides: {}

  kube_scheduler:
    option_overrides: {}

  kube_proxy:
    option_overrides: {}

  kubelet:
    option_overrides: {}

  # Kubernetes cloud provider integration
  cloud_provider:

    # Options: 'aws','azure','cloudstack','fake','gce','mesos','openstack',
    # 'ovirt','photon','rackspace','vsphere'.
    # Leave empty for bare metal setups or other unsupported providers.
    provider: ""

    # Path to the config file, leave empty if provider does not require it.
    config: ""

# Docker daemon configuration of all cluster nodes
docker:
  logs:
    driver: json-file
    opts:
      max-file: "1"
      max-size: 50m

  storage:

    # Configure devicemapper in direct-lvm mode (RHEL/CentOS only).
    direct_lvm:
      enabled: false

      # Path to the block device that will be used for direct-lvm mode. This
      # device will be wiped and used exclusively by docker.
      block_device: ""

      # Set to true if you want to enable deferred deletion when using
      # direct-lvm mode.
      enable_deferred_deletion: false

# If you want to use an internal registry for the installation or upgrade, you
# must provide its information here. You must seed this registry before the
# installation or upgrade of your cluster. This registry must be accessible from
# all nodes on the cluster.
docker_registry:

  # IP or hostname and port for your registry.
  server: "mirror.teligen.com"

  # Absolute path to the certificate authority that should be trusted when
  # connecting to your registry.
  CA: "/home/dash/devdockerCA.crt"

  # Leave blank for unauthenticated access.
  username: "clouder"

  # Leave blank for unauthenticated access.
  password: "engine"

# Add-ons are additional components that KET installs on the cluster.
add_ons:
  cni:
    disable: false

    # Selecting 'custom' will result in a CNI ready cluster, however it is up to
    # you to configure a plugin after the install.
    # Options: 'calico','weave','contiv','custom'.
    provider: calico
    options:
      calico:

        # Options: 'overlay','routed'.
        mode: overlay

        # Options: 'warning','info','debug'.
        log_level: info

        # MTU for the workload interface, configures the CNI config.
        workload_mtu: 1500

        # MTU for the tunnel device used if IPIP is enabled.
        felix_input_mtu: 1440

  dns:
    disable: false

  heapster:
    disable: false
    options:
      heapster:
        replicas: 2

        # Specify kubernetes ServiceType. Defaults to 'ClusterIP'.
        # Options: 'ClusterIP','NodePort','LoadBalancer','ExternalName'.
        service_type: ClusterIP

        # Specify the sink to store heapster data. Defaults to an influxdb pod
        # running on the cluster.
        sink: influxdb:http://heapster-influxdb.kube-system.svc:8086

      influxdb:

        # Provide the name of the persistent volume claim that you will create
        # after installation. If not specified, the data will be stored in
        # ephemeral storage.
        pvc_name: ""

  dashboard:
    disable: false

  package_manager:
    disable: false

    # Options: 'helm'
    provider: helm

  # The rescheduler ensures that critical add-ons remain running on the cluster.
  rescheduler:
    disable: false

# Etcd nodes are the ones that run the etcd distributed key-value database.
etcd:
  expected_count: 1

  # Provide the hostname and IP of each node. If the node has an IP for internal
  # traffic, provide it in the internalip field. Otherwise, that field can be
  # left blank.
  nodes:
  - host: "allinone"
    ip: "10.15.205.100"
    internalip: ""
    labels: {}

# Master nodes are the ones that run the Kubernetes control plane components.
master:
  expected_count: 1

  # If you have set up load balancing for master nodes, enter the FQDN name here.
  # Otherwise, use the IP address of a single master node.
  load_balanced_fqdn: "10.15.205.100"

  # If you have set up load balancing for master nodes, enter the short name here.
  # Otherwise, use the IP address of a single master node.
  load_balanced_short_name: "10.15.205.100"
  nodes:
  - host: "allinone"
    ip: "10.15.205.100"
    internalip: ""
    labels: {}

# Worker nodes are the ones that will run your workloads on the cluster.
worker:
  expected_count: 1
  nodes:
  - host: "allinone"
    ip: "10.15.205.100"
    internalip: ""
    labels: {}

# Ingress nodes will run the ingress controllers.
ingress:
  expected_count: 0 
  nodes: []
#  - host: ""
#    ip: ""
#    internalip: ""
#    labels: {}
#
# Storage nodes will be used to create a distributed storage cluster that can
# be consumed by your workloads.
storage:
  expected_count: 0
  nodes: []

# A set of NFS volumes for use by on-cluster persistent workloads
nfs:
  nfs_volume: []
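
Before applying, the plan file can also be checked on its own; a minimal sketch, assuming the usual kismatic validate subcommand:

# ./kismatic install validate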

Deploy the whole cluster:

sudo bash
[root@xxxxxx allinone]# ./kismatic install apply

Validating==========================================================================
Reading installation plan file "kismatic-cluster.yaml"                          [OK]
Validating installation plan file                                               [OK]
Validating SSH connectivity to nodes                                            [OK]
Configure Cluster Prerequisites                                                 [OK]
Gather Node Facts

Then you will have a running Kubernetes cluster.

helm


Get started with helm:

# curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
# chmod 777 get_helm.sh 
# ./get_helm.sh 
[root@allinone ~]# which helm
/usr/local/bin/helm
[root@allinone ~]# helm version
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Error: cannot connect to Tiller

This error message appears because my k8s cluster is not stable, so I re-installed a new one.

Ubuntu ways

After syncing the packages from the internet, create a gpg key for publishing the repository:

# apt-get install -y haveged
# gpg --gen-key
gpg (GnuPG) 1.4.20; Copyright (C) 2015 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 
Key does not expire at all
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

Real name: dashyang
Email address: xxxx@gmail.com
Comment: somecommentshere
You selected this USER-ID:
    "dashyang (somecommentshere) <xxxx@gmail.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

gpg: gpg-agent is not available in this session
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
..+++++
..+++++
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
+++++
..+++++
gpg: key F5510098 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
pub   2048R/F5510098 2018-01-10
      Key fingerprint = 7F4C 405A F6EB B25D DEDF  10C9 9CAC DC20 F551 0098
uid                  dashyang (somecommentshere) <xxxx@gmail.com>
sub   2048R/7FE934CA 2018-01-10
# gpg --list-keys
/home/vagrant/.gnupg/pubring.gpg
--------------------------------
pub   2048R/F5510098 2018-01-10
uid                  dashyang (somecommentshere) <feipyang@gmail.com>
sub   2048R/7FE934CA 2018-01-10
# aptly serve
Serving published repositories, recommended apt sources list:

# ./xenial [amd64, arm64, armhf, i386] publishes {main: [xenial-repo]: Merged from sources: 'ubuntu-main', 'gluster', 'docker'}
deb http://vagrant:8080/ xenial main

Starting web server at: :8080 (press Ctrl+C to quit)...

Added it to a systemd unit file:

# cat /etc/systemd/system/aptly.service 
[Service]
Type=simple
ExecStart=/usr/bin/aptly -config /home/vagrant/.aptly.conf serve -listen=:80
User=root
# systemctl daemon-reload
# systemctl enable aptly
# systemctl start aptly

This failed; the aptly service would not run.
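
One likely cause is that the unit above lacks an [Install] section (with a WantedBy= target), which systemctl enable requires; in any case, the standard systemd diagnostics show why the unit fails:

# systemctl status aptly
# journalctl -u aptly -e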

Client usage:

# sudo vim /etc/ssh/sshd_config
PermitRootLogin yes
# vim /etc/network/interfaces
(change the IP address)

First, on the server, export the public key:

    # gpg --export --armor >mypublic.pub
    # cat mypublic.pub
    # scp mypublic.pub  root@10.15.205.200:/root/

Then, on the clients, import the key:

    # cat mypublic.pub | apt-key add -
    OK
    root@ubuntu:/root# apt-key list
    /etc/apt/trusted.gpg

After the aptly repository is added to the client's sources list (see the sketch below), sudo apt-get update runs without errors.
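
A minimal sketch of that client-side sources entry, assuming the aptly server publishes on port 8080 as shown in the aptly serve output above (the host placeholder is hypothetical; use your server's address):

# echo "deb http://<aptly-server>:8080/ xenial main" > /etc/apt/sources.list.d/aptly.list
# apt-get update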

Now the repository server runs inside this VM; convert the VM disk from VirtualBox format to qcow2 via the following command.

# qemu-img convert -f vmdk -O qcow2 box-disk001.vmdk aptly_ubuntu.qcow2

Start the registry server.

Create a new Ubuntu server and enable sshd root login. Add the repository, and add the following hosts definition for the registry:

# vim /etc/hosts
10.15.205.2	mirror.xxxxx.com

Then add the kismaticuser.key.pub into the server’s /root/.ssh/authorized_keys.

helm

# helm search
# helm list
# helm install --name wordpress-test --set "persistence.enabled=false,mariadb.persistence.enabled=false" stable/wordpress
# helm list
[root@DashSSD ubuntuone]# ./helm list
NAME          	REVISION	UPDATED                 	STATUS  	CHART          	NAMESPACE
wordpress-test	1       	Wed Jan 10 17:30:29 2018	DEPLOYED	wordpress-0.7.9	default  

WorkingTipsOnKismatic

Network planning

To simulate a fully offline kismatic installation, we create a completely isolated network as follows:

/images/2018_01_07_15_01_42_420x391.jpg

/images/2018_01_07_15_02_13_418x482.jpg

The details are as follows:

10.15.205.1/24
dhcp: 10.15.205.128 ~ 10.15.205.254
Deployment node:  10.15.205.2
etcd01: 
master01:
worker01:

Note: DHCP is configured so that the virtual machines automatically get an address at boot; in practice, during deployment we manually change each node's IP address to match the kismatic configuration.

Preparation

CentOS 7 base image

  • CentOS 7: CentOS-7-x86_64-Minimal-1708.iso, minimal installation.
    Installation notes: do not create a swap partition, otherwise the default kismatic deployment will report a failure.
    After installation, disable selinux and the firewalld service.
    Inject kismaticuser.key.pub into the system, i.e. into /root/.ssh/authorized_keys, or into your own user's /home/xxxxxx/.ssh/authorized_keys.

Once prepared, shut this virtual machine down and use its virtual disk as the base image for creating the other nodes.

Deployment node (repositories + Registry)

The mirror node is the key to whether the deployment succeeds. On this node we will create mirrors of all the CentOS repositories needed to deploy kismatic, and set up a private registry based on Docker Registry.

This node is configured with 1 CPU core and 1 GB of RAM, and its IP is 10.15.205.2.

$ qemu-img create -f qcow2 -b CentOS7_Base/CentOS7Base.qcows2 Deployment.qcow2
Formatting 'Deployment.qcow2', fmt=qcow2 size=214748364800 backing_file=CentOS7_Base/CentOS7Base.qcows2 cluster_size=65536 lazy_refcounts=off refcount_bits=16
$ ls
CentOS7_Base  Deployment.qcow2

You can use nmtui to change its IP address, gateway, and so on; note that the address should be entered as 10.15.205.2/24.

/images/2018_01_07_15_58_01_690x383.jpg

On the host (10.15.205.1, the machine running libvirt/kvm), in the directory holding the mirrored repositories, start a simple HTTP server with python for the initial installation:

$ ls
base  docker  gluster  kubernetes  updates
$ python2 -m SimpleHTTPServer 8666

Log into the virtual machine and change the repo configuration:

[root@deployment yum.repos.d]# mkdir back
[root@deployment yum.repos.d]# mv * back
mv: cannot move ‘back’ to a subdirectory of itself, ‘back/back’
[root@deployment yum.repos.d]# vim base.repo
[base]
name=Base
baseurl=http://10.15.205.1:8666/base
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

[updates]
name=Updates
baseurl=http://10.15.205.1:8666/updates
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

[docker]
name=Docker
baseurl=http://10.15.205.1:8666/docker
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

[kubernetes]
name=Kubernetes
baseurl=http://10.15.205.1:8666/kubernetes
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

[gluster]
name=gluster
baseurl=http://10.15.205.1:8666/gluster
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

[root@deployment yum.repos.d]# yum makecache

This server has to play two roles: image registry and package repository server. First, configure the repository server:

# yum install yum-utils httpd createrepo
# systemctl enable httpd
# systemctl start httpd

On the host machine, open http://10.15.205.2; seeing the following page means the repository server has been set up successfully:

/images/2018_01_07_16_08_22_727x424.jpg

Setting up the repositories is straightforward; see:

https://github.com/apprenda/kismatic/blob/master/docs/disconnected_install.md

Just use reposync to mirror the contents of the remote repositories locally.

For example:

[root@deployment html]# ls
base  docker  gluster  kubernetes  updates
[root@deployment html]# ls base/
Packages  repodata

The resulting repository looks like this:

/images/2018_01_07_16_11_31_472x291.jpg
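
A minimal reposync/createrepo sketch for mirroring one repository into the httpd document root (the repo id and paths are taken from the base.repo example above; repeat for each repository):

# yum install -y yum-utils createrepo
# reposync -p /var/www/html/ --repoid=base
# createrepo /var/www/html/base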

Next, start creating the registry. There is a gotcha here: the container-selinux-2.21-1.el7.noarch.rpm package is required. Download it manually from the web, then install it:

# yum install -y container-selinux-2.21-1.el7.noarch.rpm
# yum install -y docker-ce

Because our service needs docker-compose, connect to the network briefly and install docker-compose:

# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# yum install -y python-pip
# pip install docker-compose

Pre-load the local images (the commands below cannot actually be run as-is; this is my own bulk-import script).

for i in `ls *.tar`
do 
	docker load<$i
	docker tag.....
done

We will follow this article to set up the docker-registry mirror:

https://www.digitalocean.com/community/tutorials/how-to-set-up-a-private-docker-registry-on-ubuntu-14-04

Once configured, the folder can be migrated directly to another machine; in fact, the directory on my CentOS 7 server was migrated from an Ubuntu machine. Note that a domain name must be specified when configuring the certificate, and afterwards, on every machine that needs to use this docker registry, a corresponding entry must be added to /etc/hosts:

# vim /etc/hosts
10.15.205.2 mirror.xxxx.com
# docker login mirror.xxxx.com
Username (clouder): clouder
Password: 
Login Succeeded

Once the service is confirmed to work, we can use systemd to register the docker-compose-managed service as a system service, so that the registry service starts automatically whenever the machine is rebooted:

# vim /etc/systemd/system/docker-compose.service 
[Unit]
Description=DockerCompose
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker-compose -f /docker-registry/docker-compose.yml up -d

[Install]
WantedBy=multi-user.target
# systemctl enable docker-compose.service
# systemctl start docker-compose.service
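
A quick check that the registry still answers after a reboot (the /v2/_catalog endpoint is part of the standard Docker Registry v2 API; add --cacert devdockerCA.crt or -k if the self-signed CA is not trusted system-wide):

# curl -u clouder https://mirror.xxxx.com/v2/_catalog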

Preparing the node machines

Three node machines are needed; create them as follows:

# qemu-img create -f qcow2 -b CentOS7_Base/CentOS7Base.qcows2 etcd01.qcow2
# qemu-img create -f qcow2 -b CentOS7_Base/CentOS7Base.qcows2 master01.qcow2
# qemu-img create -f qcow2 -b CentOS7_Base/CentOS7Base.qcows2 worker01.qcow2

With small modifications to the existing virtual machine definition, we get the three new machines:

# sudo virsh dumpxml kismatic_deployment>template.xml
# cp template.xml etcd01.xml
# vim etcd01.xml
# sudo virsh define etcd01.xml
Domain kismatic_etcd01 defined from etcd01.xml

After logging into each system, configure the repositories and the entries in /etc/hosts; the node machines are then ready.

Deployment

The configuration process will be written up later.

Using the cluster

After the cluster is deployed, use ./kismatic dashboard to access the Kubernetes dashboard:

/images/2018_01_08_16_11_55_550x654.jpg

Configure the kubeconfig file as prompted and you are done.

Using the image registry

To use the registry as the cluster's central image registry, external machines (those used to upload and manage images) need the following setup (using Debian as an example):

# mkdir -p /usr/local/share/ca-certificates/docker-dev-cert/
# cp KISMATIC_FOLDER/devdockerCA.crt /usr/local/share/ca-certificates/docker-dev-cert/
# update-ca-certificates
# systemctl restart docker
# echo "10.15.205.113 mirror.xxxx.com">>/etc/hosts
# docker login mirror.xxxxx.com
Username: clouder
Password: 
Login Succeeded

You can then push images directly to the registry mirror.
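
For example (the image name and tag are only placeholders):

# docker tag nginx:1.7.9 mirror.xxxx.com/nginx:1.7.9
# docker push mirror.xxxx.com/nginx:1.7.9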