InternetSharingForRPI

Diagram

/images/2018_09_04_16_41_05_678x430.jpg

Command

RPI: 192.168.0.16/24
Laptop USB Ethernet Adapter: 192.168.0.33/24

Laptop:

# sudo iptables -t nat -A POSTROUTING -s 192.168.0.16/24 ! -d 192.168.0.16/24 -j MASQUERADE
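
The MASQUERADE rule only works if IP forwarding is enabled on the laptop. A quick check/enable (this is the standard Linux sysctl key; persist it in /etc/sysctl.conf if needed):

# sudo sysctl -w net.ipv4.ip_forward=1
# cat /proc/sys/net/ipv4/ip_forward
1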

RPI:

# sudo route delete default gw 192.168.0.1
# sudo route add default gw 192.168.0.33
# sudo vim /etc/resolv.conf
nameserver 192.168.42.129

The RPI can now reach the Internet directly through the laptop.
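
A quick connectivity check from the RPI (example targets only):

# ping -c 3 8.8.8.8
# ping -c 3 www.baidu.com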

WorkingTipsOnRPIRF

Wiring diagram:

RFID-RC522 board to Raspberry Pi 1 (first generation):

SDA connects to Pin 24.
SCK connects to Pin 23.
MOSI connects to Pin 19.
MISO connects to Pin 21.
GND connects to Pin 6.
RST connects to Pin 22.
3.3v connects to Pin 1.

OR:

(RC522) --- (GPIO RaspPi)
3.3v --- 1 (3V3)
SCK --- 23 (GPIO11)
MOSI --- 19 (GPIO10)
MISO --- 21 (GPIO09)
GND --- 25 (Ground)
RST --- 22 (GPIO25)

RPI SPI Configuration

Open the configuration tool:

# sudo raspi-config

/images/2018_09_04_16_45_35_866x313.jpg

Click 5 Interfacing Options, then select P4 SPI:

/images/2018_09_04_16_46_07_839x233.jpg

Select Yes and confirm to enable SPI on the RPI.

After rebooting, check that the SPI modules are loaded correctly:

# lsmod | grep spi
spidev                  7034  0
spi_bcm2835             7424  0
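
As an extra check, the SPI device nodes should also show up under /dev after the reboot (default naming on Raspbian):

# ls /dev/spidev*
/dev/spidev0.0  /dev/spidev0.1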

RPI Configuration

The /bin/pdnsd/sh file referenced in crontab was modified, and the redsocks and pdnsd entries were removed from /etc/rc.local. Then reboot.

Python Example

Refer to:

https://pimylifeup.com/raspberry-pi-rfid-rc522/

NodeJS Example

# mkdir RFID
# cd RFID
# wget http://node-arm.herokuapp.com/node_latest_armhf.deb
# dpkg -i node_latest_armhf.deb
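
A quick sanity check that the runtime was installed (the version will vary with the .deb that was downloaded):

# node -v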

Kubespray Fully Offline Deployment

The offline deployment approach is simple to describe but fairly tedious to carry out: connect to the Internet and deploy successfully once, then disconnect and deploy successfully once more; after that it can be reused directly.

Online Phase

Prerequisite: a network with unrestricted (proxied) Internet access. In the Vagrantfile, change the OS image to centos and the network plugin to calico:

...
$os = "centos"
...
$network_plugin = "calico"
...

CentOS does not cache installed packages by default, so package caching has to be enabled manually in /etc/yum.conf:

# vim /etc/yum.conf
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=1

Deploy once online; as long as it succeeds, all the RPM packages will be cached under /var/cache/yum/.
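
A rough way to confirm the cache is populated before going offline (path as configured above):

# find /var/cache/yum -name '*.rpm' | wc -l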

Offline Deployment

Copy the tree into a new offline deployment directory and delete the .vagrant directory inside it. Also change the vagrant instance name prefix, otherwise the same hostnames as before would be used; and to avoid network conflicts, move the offline environment onto a new subnet:

# cp -r kubespray kubespray_centos_offline
# vim Vagrantfile
$instance_name_prefix = "k8s-offline-centos"
$subnet = "172.17.89"

Disconnect from the Internet and run vagrant up to bring up the environment. As expected, it gets stuck at the first step, the yum repository update:

/images/2018_08_29_09_16_22_765x736.jpg

Offline yum Repository

On a machine that has already completed a successful online deployment, run the following to collect the packages:

# mkdir -p /home/vagrant/kubespray_pkgs_ubuntu/
# find . | grep rpm$ | xargs -I % cp % /home/vagrant/kubespray_pkgs_ubuntu/
# createrepo_c .
# scp -r kubespray_pkgs_ubuntu root@172.17.89.1:/web-server-folder
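
The playbook change below fetches a kubespray.repo file from the web server, so one has to exist next to the packages. A minimal sketch of what it could look like (the repo id, name and gpgcheck=0 are assumptions; the baseurl must match wherever the directory is actually served from, noting that the scp above and the curl below use different host addresses):

[kubespray]
name=Kubespray offline packages
baseurl=http://172.17.88.1/kubespray_pkgs_ubuntu/
enabled=1
gpgcheck=0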

Modify the ansible playbook:

# vim ./roles/kubernetes/preinstall/tasks/main.yml
- name: Update package management repo address (YUM)
  shell: mkdir -p /root/repoback && mv /etc/yum.repos.d/*.repo /root/repoback && curl http://172.17.88.1/kubespray_pkgs_ubuntu/kubespray.repo>/etc/yum.repos.d/kubespray.repo

- name: Update package management cache (YUM)

Continue the installation; it now fails at the docker installation step.

Docker Installation

By default the playbook adds a docker.repo definition. Since the docker packages were already cached offline, comment this task out:

# vim ./roles/kubernetes/preinstall/tasks/main.yml
    #- name: Configure docker repository on RedHat/CentOS
    #  template:
    #    src: "rh_docker.repo.j2"
    #    dest: "{{ yum_repo_dir }}/docker.repo"
    #  when: ansible_distribution in ["CentOS","RedHat"] and not is_atomic

Continue the installation again; it fails at the "Download containers if pull is required or told to always pull (all nodes)" step.

Docker Images

A fully offline flow for this step has not been worked out yet, so disable the automatic downloads for now and upload the images to the nodes manually.

# vim roles/download/defaults/main.yml
# Used to only evaluate vars from download role
skip_downloads: True

On an online node, save the images for offline use with the following script:

docker save gcr.io/google-containers/hyperkube-amd64:v1.11.2>1.tar
docker save quay.io/calico/node:v3.1.3>2.tar
docker save quay.io/calico/ctl:v3.1.3>3.tar
docker save quay.io/calico/kube-controllers:v3.1.3>4.tar
docker save quay.io/calico/cni:v3.1.3>5.tar
docker save nginx:1.13>6.tar
docker save gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.10>7.tar
docker save gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.10>8.tar
docker save gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.10>9.tar
docker save quay.io/coreos/etcd:v3.2.18>10.tar
docker save gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2>11.tar
docker save gcr.io/google_containers/pause-amd64:3.0>12.tar

Portus Registry Configuration

Reference:
https://purplepalmdash.github.io/blog/2018/05/30/synckismaticimages/

Create a team:

/images/2018_08_29_11_35_00_375x296.jpg

Admin -> User -> Create new user, create a user named kubespray:

/images/2018_08_29_11_35_51_453x319.jpg

Team -> kubespray, Add member:

/images/2018_08_29_11_36_48_506x296.jpg

Create a new namespace kubesprayns and bind it to the kubespray team:

/images/2018_08_29_11_38_16_435x349.jpg

Check the log:

/images/2018_08_29_11_38_43_588x302.jpg

Syncing Images to the Registry

First log in to the registry we just created:

# docker login portus.xxxx.com:5000/kubesprayns
Username: kubespray
Password: xxxxxxx
Login Succeeded

Load the previously saved offline images, then tag and push them:

# for i in `ls *.tar`; do docker load<$i; done
# ./tag_and_push.sh

The script is shown below:

/images/2018_08_29_11_54_22_948x451.jpg
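
Since the script is only shown as a screenshot above, here is a rough sketch of what a tag_and_push.sh could look like (the registry address and the flattening of image names into the kubesprayns namespace are assumptions, not the exact original script):

#!/bin/bash
REGISTRY=portus.xxxx.com:5000/kubesprayns
# retag every loaded image into the Portus namespace and push it
for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep -v "^$REGISTRY"); do
    new=$REGISTRY/${img##*/}    # keep only name:tag, drop the original registry path
    docker tag "$img" "$new"
    docker push "$new"
done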

Afterwards we have a clean /var/lib/portus directory that can be used for the kubespray deployment.

Next, replace the original build script and build a new ISO.

TODO: ansible still needs to be installed, from rpm packages.

Ansible Deployment

The one-click deployment was originally built with vagrant; now a cluster inventory file has to be written by hand.

BridgedNetworkIssue

Problem

br0 is bridged onto eth0, and the KVM guest is attached to br0.

br0: 192.192.189.128
kvm vm address: 192.192.189.109
vm -> ping -> br0: OK
vm -> ping -> other hosts in 192.192.189.0/24: Failed

Investigation

Check IP forwarding and the ebtables rules:

# cat /proc/sys/net/ipv4/ip_forward
1
# ebtables -L
Should be ACCEPT

Use the following command to examine the dropped packets:

# iptables -x -v --line-numbers -L FORWARD 

The counters show the packets being handled in the DOCKER-ISOLATION chain.

The drops are not caused by the docker chains themselves, but a br0 -> br0 ACCEPT rule has to be added to the FORWARD chain.
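
A likely explanation: with the br_netfilter module loaded, bridged traffic also traverses the iptables FORWARD chain (whose policy docker sets to DROP), which can be checked with:

# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1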

Solution

Add the rule and make it persistent:

# iptables -A FORWARD -i br0 -o br0 -j ACCEPT
# apt-get install iptables-persistent
# vim /etc/iptables/rules.v4
*filter
-A FORWARD -i br0 -o br0 -j ACCEPT
COMMIT
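
To apply the saved rules without a reboot (the service name comes with the iptables-persistent package on Debian/Ubuntu):

# systemctl restart netfilter-persistent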

Further (Multicast)

Add an rc.local systemd unit:

# vim /etc/systemd/system/rc-local.service
[Unit]
Description=/etc/rc.local
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
SysVStartPriority=99

[Install]
WantedBy=multi-user.target

The /etc/rc.local file should look like the following:

#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
 
exit 0

Use chmod 777 /etc/rc.local to make it executable.

Enable and start the unit with systemd:

# systemctl enable rc-local
# systemctl start rc-local

To enable multicast, add one line to /etc/rc.local (before the exit 0):

# vim /etc/rc.local
...
echo "0">/sys/class/net/br0/bridge/multicast_snooping

exit 0
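
After a reboot (or after running the script once), the snooping flag can be verified with:

# cat /sys/class/net/br0/bridge/multicast_snooping
0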

Setting Up Proxmox on an Internal Network

Environment Preparation

The ISO is the officially downloaded proxmox-ve_5.2-1.iso; each node has 16 CPU cores and 64 GB of RAM.
Disk layout: system disk 200 GB, Ceph storage 600 GB.
There are three machines in total, all of them virtual machines on different physical hosts. This point is very important: if they sit on the same physical host, live migration of VMs tends to fail; concretely, after a migration completes, the node the VM was migrated away from stops responding and can no longer be logged in to.

The host CPU is passed through to the VMs via host-passthrough.
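
For reference, in the libvirt domain XML this corresponds to the following cpu element (a sketch; it can be set via virsh edit or the virt-manager CPU configuration page):

<cpu mode='host-passthrough'/>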

IP Address Configuration

Node 1 (zzz_proxmox_127), on server 127, IP 10.33.34.27, hostname promox127.
Node 2 (zzz_proxmox_128), on server 128, IP 10.33.34.28, hostname promox128.
Node 3 (zzz_proxmox_129), on server 129, IP 10.33.34.29, hostname promox129.

The addresses to be used for Ceph are left unconfigured for now.

Enabling Multicast

Proxmox needs multicast to be working on every node, but virt-manager disables it by default. Use the following commands on the physical hosts to enable multicast on the VMs' macvtap interfaces:

for dev in `ls /sys/class/net/ | grep macvtap`; do
    ip link set dev $dev allmulticast on
done
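
Multicast between the nodes can then be verified with omping, which the Proxmox cluster documentation recommends (omping must be installed; run the same command on all three nodes at roughly the same time and stop it with Ctrl-C):

# omping 10.33.34.27 10.33.34.28 10.33.34.29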

Creating the Cluster

Open https://10.33.34.27:8006 in a browser; after selecting the language, the page looks like this:

/images/2018_08_20_17_37_10_734x555.jpg

There is only one node for now:

/images/2018_08_20_17_37_31_576x326.jpg

Run the following on 27; create builds a new cluster and status checks its state:

# pvecm create firstcluster
# pvecm status

Then run the following on 28 and 29 respectively:

# pvecm add 10.33.34.27
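
Membership can be double-checked from any node with:

# pvecm nodes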

The cluster after the nodes have been added:

/images/2018_08_20_17_40_17_476x330.jpg

Ceph

Configure the IP addresses for the Ceph network:

# from /etc/network/interfaces
auto eth2
iface eth2 inet static
  address  10.10.10.1
  netmask  255.255.255.0

Modify the pveceph package source:

# vi /usr/share/perl5/PVE/CLI/pveceph.pm
deb .......
# pveceph install --version luminous
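
The steps between installing Ceph and adding storage in the GUI are not shown here; on PVE 5.x they would roughly be the following (a sketch; /dev/sdb stands for whatever disk holds the 600G Ceph storage, run the mon/osd steps on each node as needed):

# pveceph init --network 10.10.10.0/24
# pveceph createmon
# pveceph createosd /dev/sdb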

Add storage:

/images/2018_08_21_12_13_12_666x301.jpg

Create a pool:

/images/2018_08_21_12_13_39_301x275.jpg

After creation:

/images/2018_08_21_12_14_03_915x372.jpg

Virtual Machines

Copy the installation file ubuntu-16.04.2-server-amd64.iso to /var/lib/vz/template/iso on machine 27, then create a virtual machine.

/images/2018_08_21_12_17_24_460x214.jpg

Select the ISO:

/images/2018_08_21_12_17_42_668x184.jpg

Select the hard disk:

/images/2018_08_21_12_18_07_638x212.jpg

Select the newly created VM and click Start:

/images/2018_08_21_12_18_51_852x563.jpg

Issue

Nested virtualization depends on the kernel version; since the machines on the internal network run rather old kernels, problems are expected. This will be redone later on freshly installed servers.

The newly installed physical servers will get addresses in the same subnet via DHCP, after which the Proxmox testing will continue.