Nov 1, 2019
Technology: Environment
2 SAS disks in RAID1 as the system partition.
24 SATA disks, 2 TB each.
CPU: Intel Xeon E5-2650 v3 @ 2.30GHz.
256 GB memory.
Disk configuration
Use MegaCli to configure the disk parameters.
Get the current parameters:
# ./MegaCli64 -LDInfo -LALL -aAll
Virtual Drive: 24 (Target Id: 24)
Name :
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Size : 1.817 TB
State : Optimal
Strip Size : 256 KB
Number Of Drives : 1
Span Depth : 1
Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAhead, Direct, No Write Cache if Bad BBU
Access Policy : Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Exit Code: 0x00
Notice that ReadAhead is enabled in the Current Cache Policy. This parameter needs to be turned off so that ZFS runs fast, since ZFS does its own prefetching:
# ./MegaCli64 -LDSetProp -NORA -Immediate -Lall -aAll
.....
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
.....
Create the raidz2 vmpool with the following commands:
# zpool create -f -o ashift=12 vmpool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
# zpool add -f -o ashift=12 vmpool raidz2 /dev/sdj ~ /dev/sdq
# zpool add -f -o ashift=12 vmpool raidz2 /dev/sdr ~ /dev/sdy
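The "~" in the last two commands is shorthand for a range of eight device names. As a sketch, bash brace expansion can spell those ranges out explicitly (assuming the device names match the ones above):

```shell
# Brace expansion generates the eight device names in each raidz2 vdev.
echo /dev/sd{b..i}   # first vdev
echo /dev/sd{j..q}   # second vdev
echo /dev/sd{r..y}   # third vdev
```

So, for example, the second vdev could be added with `zpool add -f -o ashift=12 vmpool raidz2 /dev/sd{j..q}`.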
zpool info
Get the pool and dataset information with the following commands:
# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
vmpool 43.5T 132G 43.4T - 0% 0% 1.00x ONLINE -
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
vmpool 93.6G 29.9T 205K /vmpool
vmpool/base-100-disk-1 8.17G 29.9T 8.17G -
vmpool/vm-101-disk-1 45.3G 29.9T 45.3G -
vmpool/vm-102-disk-1 40.1G 29.9T 40.1G -
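As a rough sanity check of those numbers: each raidz2 vdev of eight ~1.817 TB disks yields six disks' worth of data capacity, and the pool has three such vdevs. The gap between this estimate and the ~29.9T AVAIL shown above is raidz2/metadata overhead and TB-versus-TiB accounting:

```shell
# 3 vdevs x (8 disks - 2 parity) x ~1.817 TB per disk
awk 'BEGIN { printf "%.1f TB usable (approx.)\n", 3 * (8 - 2) * 1.817 }'
```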
Adding the ZFS pool to Proxmox through the GUI is straightforward, so those steps are omitted here.
Sep 16, 2019
Technology: Steps
Download the ISO and install Debian:
# axel http://mirrors.163.com/debian-cd/10.1.0/amd64/iso-cd/debian-10.1.0-amd64-netinst.iso
# Create a QEMU virtual machine: 8 cores, 9 GB memory.
After installation:
$ su root
# apt-get install -y vim sudo net-tools usermode
# sudo apt-get install -y gnupg2 gnupg1 gnupg
# vim /etc/apt/sources.list
deb http://mirrors.163.com/debian/ buster main
deb http://security.debian.org/debian-security buster/updates main
deb http://mirrors.163.com/debian/ buster-updates main
# apt-get update -y
# su -
# usermod -aG sudo dash
Gitlab
Configure the repository file as follows:
# vim /etc/apt/sources.list
deb http://mirrors.163.com/debian/ buster main non-free contrib
deb http://security.debian.org/debian-security buster/updates main non-free contrib
deb http://mirrors.163.com/debian/ buster-updates main non-free contrib
deb http://mirrors.163.com/debian/ buster-backports main non-free contrib
### GitLab 12.0.8
deb http://fasttrack.debian.net/debian/ buster-fasttrack main contrib
deb http://fasttrack.debian.net/debian/ buster-backports main contrib
deb https://deb.debian.org/debian buster-backports contrib main
# Eventually the packages in this repo will be moved to one of the previous two repos
deb https://people.debian.org/~praveen/gitlab buster-backports contrib main
Add the repository signing keys:
# wget https://people.debian.org/~praveen/gitlab/praveen.key.asc
# wget http://fasttrack.debian.net/fasttrack-archive-key.asc
# apt-key add praveen.key.asc
# apt-key add fasttrack-archive-key.asc
Install via:
# apt -t buster-backports install gitlab
Installation failed because gitlab requires gitlab-shell 9.3.0, which the repository does not provide. Install gitlab-ce instead:
# sudo apt-get purge gitlab
# sudo apt-get purge gitlab-common
# wget https://packages.gitlab.com/gpg.key
# sudo apt-key add gpg.key
# sudo vim /etc/apt/sources.list
deb http://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/debian buster main
# sudo apt-get update -y
# sudo apt-get install -y gitlab-ce
Configure gitlab-ce:
# sudo vim /etc/gitlab/gitlab.rb
external_url 'http://cicd.cicdforrong.ai'
# export LC_ALL=en_US.UTF-8
# export LANG=en_US.utf8
# sudo gitlab-ctl reconfigure
Configure the ports (nginx/unicorn):
# vi /etc/gitlab/gitlab.rb
nginx['listen_port'] = 82    # default: nginx['listen_port'] = nil (port 80)
# vi /var/opt/gitlab/nginx/conf/gitlab-http.conf
listen *:82;    # default: listen *:80;
# vi /etc/gitlab/gitlab.rb
unicorn['port'] = 8082    # original: unicorn['port'] = 8080
# vim /var/opt/gitlab/gitlab-rails/etc/unicorn.rb
listen "127.0.0.1:8082", :tcp_nopush => true
# original: listen "127.0.0.1:8080", :tcp_nopush => true
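The same port edit can be scripted rather than done by hand in vi. This is a sketch shown on a sample line rather than the live file; the hypothetical in-place variant would be `sed -i 's/:8080/:8082/'` against the unicorn.rb path above:

```shell
# Swap unicorn's default port 8080 for 8082 on a sample config line.
line='listen "127.0.0.1:8080", :tcp_nopush => true'
echo "$line" | sed 's/:8080/:8082/'
```

Keep in mind that gitlab-ctl reconfigure can regenerate files under /var/opt/gitlab, so the settings in /etc/gitlab/gitlab.rb are the ones that persist.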
Reconfigure and restart the gitlab service:
# gitlab-ctl reconfigure
# gitlab-ctl restart
Visit GitLab
Edit /etc/hosts to add the following entry:
# vim /etc/hosts
192.168.122.90 cicd.cicdforrong.ai
Now visit http://cicd.cicdforrong.ai and you will get the page for setting the username and password.
Install Docker (for gitlab-runner)
Steps:
# sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
# curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
# apt-key fingerprint 0EBFCD88
# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
# apt-get update
# apt-get install docker-ce docker-ce-cli containerd.io
Sep 5, 2019
Technology: Vagrant machine
Create a vagrant machine with 8 cores and 10 GB of memory:
vagrant init generic/ubuntu1604
vagrant up
vagrant ssh
Steps
Prepare the environment:
sudo apt-get update -y
sudo apt-get install -y python-pip git python3-pip
git clone xxxxxxx/kubespray
cd kubespray
export LC_ALL="en_US.UTF-8"
pip install -r requirements.txt
pip3 install ruamel.yaml
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update -y && sudo apt-get install docker-ce docker-ce-cli containerd.io -y
cd contrib/dind
pip install -r requirements.txt
Deploy the dind cluster:
sudo /home/vagrant/.local/bin/ansible-playbook -i hosts dind-cluster.yaml
rm -f inventory/local-dind/hosts.yml
sudo CONFIG_FILE=${INVENTORY_DIR}/hosts.yml /tmp/kubespray.dind.inventory_builder.sh
sudo chown -R vagrant /home/vagrant/.ansible/
sudo docker exec kube-node1 apt-get install -y iputils-ping
/home/vagrant/.local/bin/ansible-playbook --become -e ansible_ssh_user=ubuntu -i ${INVENTORY_DIR}/hosts.yml cluster.yml --extra-vars @contrib/dind/kubespray-dind.yaml
Sep 2, 2019
Technology: Add ISO repository
Use the installation ISO as a repository:
# mount -t iso9660 -o loop ubuntu180402_arm64.iso /mnt
# vim /etc/apt/sources.list
deb [trusted=yes] file:///mnt bionic main contrib
# apt-get update -y
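The loop mount above does not survive a reboot. A hypothetical /etc/fstab entry would make it persistent (the ISO path here is assumed; adjust it to where the file actually lives):

```
# /etc/fstab -- assumed ISO location
/root/ubuntu180402_arm64.iso  /mnt  iso9660  loop,ro  0  0
```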
# apt-cache search ipmi
freeipmi-common - GNU implementation of the IPMI protocol - common files
freeipmi-tools - GNU implementation of the IPMI protocol - tools
libfreeipmi16 - GNU IPMI - libraries
libipmiconsole2 - GNU IPMI - Serial-over-Lan library
libipmidetect0 - GNU IPMI - IPMI node detection library
maas - "Metal as a Service" is a physical cloud and IPAM
libopenipmi0 - Intelligent Platform Management Interface - runtime
openipmi - Intelligent Platform Management Interface (for servers)
IPMI
Install two packages:
# apt-get install -y openipmi freeipmi-tools
Build netdata
Use a Docker instance on a VPS to build netdata:
# docker run -it ubuntu:bionic-20190424 /bin/bash
# cat /etc/issue
# apt-get update -y
# apt-get install -y vim build-essential
Aug 27, 2019
Technology: Steps
The docker-compose.yml is as follows:
version: '2.2'
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.1.3
container_name: elasticsearch
environment:
- cluster.name=docker-cluster
- node.name=coreos-1
- node.master=true
- node.data=true
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- xpack.security.enabled=false
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- esdata1:/usr/share/elasticsearch/data
- ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
ports:
- 192.168.122.31:9200:9200
- 192.168.122.31:9300:9300
volumes:
esdata1:
driver: local
The elasticsearch.yml file is as follows:
network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["192.168.122.31"]
network.publish_host: 192.168.122.31
You can then bring up the cluster with docker-compose.
Tips
Master and worker nodes:
For a master node, set node.master=true.
For a worker node, set node.master=false.
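For example, a second compose service for a data-only worker would differ from the master service above only in these environment lines (the node name here is hypothetical):

```yaml
# environment fragment for a hypothetical worker node
- cluster.name=docker-cluster    # must match the master's cluster name
- node.name=coreos-2
- node.master=false
- node.data=true
```

As in the elasticsearch.yml above, discovery.zen.ping.unicast.hosts on the worker should point at the master's address.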