Mar 16, 2018
Technology
YAML
Like the following:
# tcpprobe https://wiki.linuxfoundation.org/networking/tcpprobe
# use: apt install module-init-tools
# to install modprobe
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 1
  labels:
    run: ipref
  name: ipref
  namespace: default
spec:
  replicas: 12 # we have 10 nodes in the cluster, hence 12 replicas
  selector:
    matchLabels:
      run: ipref
  template:
    metadata:
      labels:
        run: ipref
    spec:
      containers:
      - command:
        - sleep
        - "infinity"
        image: networkstatic/iperf3
        imagePullPolicy: Always
        name: ipref
        resources: {}
        securityContext:
          capabilities:
            add:
            - ALL # Kubernetes capability names omit the CAP_ prefix
          privileged: true
        volumeMounts:
        - mountPath: /dev
          name: dev
        - mountPath: /lib/modules
          name: modules
      volumes:
      - name: dev
        hostPath:
          # directory location on host
          path: /dev
      - name: modules
        hostPath:
          # directory location on host
          path: /lib/modules
You should change the image and imagePullPolicy fields as needed.
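Once the pods are running, you can, for example, load the tcp_probe module and run an iperf3 test between two pods (pod names and the port are illustrative; 5201 is iperf3's default):
$ kubectl get pods -l run=ipref -o wide
$ kubectl exec -it <server-pod> -- sh -c "modprobe tcp_probe port=5201 && iperf3 -s"
$ kubectl exec -it <client-pod> -- iperf3 -c <server-pod-ip>
The tcp_probe output can then be read from /proc/net/tcpprobe on the server pod.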
Mar 12, 2018
Technology
Server
Server-side configuration:
# yum install -y ntp
# vim /etc/ntp.conf
The configuration file is as follows:
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
restrict 192.168.0.0 mask 255.255.0.0 nomodify notrap
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
Disable the chronyd service so that ntpd can work properly:
# systemctl disable chronyd
# systemctl enable ntpd
# systemctl start ntpd
# systemctl disable firewalld
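As a quick sanity check, you can verify that ntpd is running and serving time:
# systemctl status ntpd
# ntpq -p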
Client
Install via:
# yum install -y ntp
Configuration file:
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
server 192.168.122.200
# allow the upstream time server to actively adjust this machine's clock
restrict 192.168.122.200 nomodify notrap noquery
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
Also disable the chronyd service and enable the ntpd service. The client will
automatically sync with the server 192.168.122.200.
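On the client, ntpq -p can confirm the sync status; once the client has synced, the selected server line is prefixed with an asterisk:
# ntpq -p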
Mar 12, 2018
Technology
What is fabric8
fabric8 is an open-source integrated development platform that provides continuous delivery for microservices based on Kubernetes and Jenkins. You can think of it as a Java-friendly open-source microservice management platform.
fabric8 can also be seen as a microservice DevOps platform. It provides a fully integrated open-source microservice platform that works out of the box in any Kubernetes or OpenShift environment.
Reference:
https://jimmysong.io/posts/fabric8-introduction/
Setup (ArchLinux)
Install the necessary packages:
$ sudo pacman -S libvirt qemu dnsmasq ebtables
Add your user to the kvm and libvirt groups:
$ sudo usermod -a -G kvm,libvirt <username>
Update the libvirt configuration in /etc/libvirt/qemu.conf:
$ sudo sed -i -r 's/group=".+"/group="kvm"/' /etc/libvirt/qemu.conf
Refresh the current session so the group change takes effect:
$ newgrp libvirt
In addition, we need to install the corresponding AUR packages to get the docker-machine KVM driver:
$ sudo pacman -S docker-machine-kvm2 docker-machine
$ yaourt docker-machine-kvm
Install minishift:
$ yaourt minishift
$ minishift start --memory=7000 --cpus=4 --disk-size=50g
After startup completes, you can check the resulting CPU/memory/disk settings.
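For example (output varies by machine):
$ minishift status
$ minishift ip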
Install fabric8 on minishift (I use oh-my-zsh):
$ echo 'export PATH=$PATH:~/.fabric8/bin' >> ~/.zshrc
$ source ~/.zshrc
Configure the GitHub Client ID/secret; see:
https://developer.github.com/apps/building-integrations/setting-up-and-registering-oauth-apps/registering-oauth-apps/
The authorization callback URL can be set to:
http://keycloak-fabric8.{minishift ipv4 value}.nip.io/auth/realms/fabric8/broker/github/endpoint
The homepage URL can be set to https://fabric8.io.
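For instance, if minishift ip reports 192.168.42.131 (the address that appears in the oc output below), the callback URL becomes:
http://keycloak-fabric8.192.168.42.131.nip.io/auth/realms/fabric8/broker/github/endpoint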
The client ID and client secret obtained above can then be exported as environment variables:
$ export GITHUB_OAUTH_CLIENT_ID=123
$ export GITHUB_OAUTH_CLIENT_SECRET=123abc
Then:
$ gofabric8 start --minishift --package=system --namespace fabric8
After a long wait (a proxy is needed in mainland China), the fabric8 environment
will be ready. The username/password for login is "developer/developer".
Playing with fabric8
Log in as system:admin and view the workspaces:
$ oc login -u system:admin -n default
Logged into "https://192.168.42.131:8443" as "system:admin" using existing credentials.
You have access to the following projects and can switch between them with 'oc project <projectname>':
* default
developer
developer-che
developer-jenkins
developer-run
developer-stage
fabric8
kube-public
kube-system
myproject
openshift
openshift-infra
openshift-node
Using project "default".
As shown above, the fabric8 namespaces have been created.
Mar 10, 2018
Technology
Installation
Install the following packages:
$ sudo pacman -S maven community/intellij-idea-community-edition
$ sudo pacman -S jdk8-openjdk jdk9-openjdk
$ sudo archlinux-java set java-9-openjdk
$ archlinux-java status
Since IntelliJ requires JDK 8 or newer, you have to install a newer JDK
implementation.
Correction: the Community Edition does not have Spring Boot support; use the
Ultimate Edition:
$ yaourt intellij-idea-ultimate-edition
Primer
What is Spring Boot
Spring Boot is dedicated to simplicity, letting developers write less configuration so that programs start and run faster. It is the next-generation Java web framework, and it is the foundation of Spring Cloud (microservices).
Spring Boot
Create new project:
Plugins:
Import project:
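For reference, the entry point of a freshly created Spring Boot project looks roughly like this (package and class names are illustrative):
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Minimal Spring Boot application: enables auto-configuration and component scanning
@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}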
mvn aliyun configuration
In /opt/maven/conf.
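To route Maven Central through the Aliyun mirror, a mirror entry can be added to settings.xml in that directory (a sketch; the URL is Aliyun's public Nexus group):
<mirror>
  <id>aliyun</id>
  <mirrorOf>central</mirrorOf>
  <name>Aliyun Maven Mirror</name>
  <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
</mirror>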
Mar 8, 2018
Technology
Configuration
The configuration file is listed as follows:
# Worker nodes are the ones that will run your workloads on the cluster.
worker:
  expected_count: 1
  nodes:
  - host: "allinone"
    ip: "10.15.205.93"
    internalip: ""
    labels: {}
storage:
  expected_count: 3
  nodes:
  - host: "gluster1"
    ip: "10.15.205.90"
    internalip: ""
    labels: {}
  - host: "gluster2"
    ip: "10.15.205.91"
    internalip: ""
    labels: {}
  - host: "gluster3"
    ip: "10.15.205.92"
    internalip: ""
    labels: {}
# A set of NFS volumes for use by on-cluster persistent workloads
nfs:
  nfs_volume: []
But it won't start up; the reason is that Ubuntu has a bug in rpcbind, which is
solved by:
# systemctl add-wants multi-user.target rpcbind.service
# systemctl enable rpcbind.service
# ufw disable
Then you should reboot all of the nodes.
Verification
Create a new glusterfs volume and expose it in k8s as a PV using:
# kismatic volume add 10 storage01 -r 2 -d 1 -c="durable" -a *.*.*.*
New PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-app-frontend-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "durable"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
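After applying the claim (assuming it is saved as pvc.yaml), check that it binds to a volume:
$ kubectl create -f pvc.yaml
$ kubectl get pvc my-app-frontend-claim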
Use the PVC in a pod volume:
kind: Pod
apiVersion: v1
metadata:
  name: my-app-frontend
spec:
  containers:
    - name: my-app-frontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: html
  volumes:
    - name: html
      persistentVolumeClaim:
        claimName: my-app-frontend-claim
When you scale the pod out, each instance of the pod should have access to that
directory.
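A quick way to confirm this (pod names are illustrative) is to write a file through one replica and read it from another:
$ kubectl exec my-app-frontend -- sh -c 'echo hello > /var/www/html/index.html'
$ kubectl exec <another-replica> -- cat /var/www/html/index.html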