May 9, 2021
Technology
Goal
Run ccse on LXD.
Prerequisites
A server with lxd installed, a centos7 image prepared for initialization, and the ccse installation media.
Steps
Create a profile used to launch LXD instances for the deployment verification:
lxc profile show default > ccse
vim ccse
lxc profile create ccse
lxc profile edit ccse < ccse
The contents of the file are as follows:
config:
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter,xt_conntrack
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw"
  security.nesting: "true"
  security.privileged: "true"
description: CCSE Running profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  hashsize:
    path: /sys/module/nf_conntrack/parameters/hashsize
    source: /dev/null
    type: disk
  kmsg:
    path: /dev/kmsg
    source: /dev/kmsg
    type: unix-char
  root:
    path: /
    pool: ssd
    type: disk
name: ccse
Verify that the profile works:
# lxc launch centos7 kkk --profile ccse
Creating kkk
Starting kkk
# lxc exec kkk bash
[root@kkk ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
Notes:
- The image version (7.9) is slightly newer than the recommended CentOS 7.6.
- Launching with the privileged profile above resolves the disk-permission problem the teledb team ran into.
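A couple of quick checks can confirm the profile's devices and security options actually landed in the instance (a sketch; the container name kkk and device names come from the profile and launch above):

```shell
# The kmsg unix-char device from the profile should exist inside the container
lxc exec kkk -- ls -l /dev/kmsg
# The expanded config (instance config merged with profiles) should show the
# nesting/privileged settings
lxc config show kkk --expanded | grep -E 'security.(nesting|privileged)'
```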
Preparing the deployment media
Initialize the container:
cd /etc/yum.repos.d/
mkdir back
mv * back
vi ccse.repo
yum makecache
vi /etc/yum.conf
yum install -y which vim net-tools lsof sudo
Since the LXD instance will be used like a physical machine, install openssh-server and reboot:
yum install -y openssh-server
systemctl enable sshd
systemctl start sshd
passwd
reboot
After entering the container again, download the installation files:
scp docker@xxx.xxx.xxx.xx:/home/docker/shrink280/ccse-installer-2.8.0-rc-linux-amd64-offline-20210409204619-shrink.tar.xz .
tar xJf ccse-installer-2.8.0-rc-linux-amd64-offline-20210409204619-shrink.tar.xz
Deploying the console node
Record the IP address 10.222.125.68. After configuring the correct IP address, install the console node following the original procedure; once installation finishes, upload the images.
Building the base node
Package the dependencies the nodes need:
# lxc launch centos7 base
# lxc exec base bash
yum install -y which lsof vim net-tools sudo selinux-policy libseccomp libselinux-python selinux-policy-targeted openssh-server ebtables ethtool
systemctl enable sshd
passwd
shutdown -h now
# lxc publish base --alias ccsenode
hashsize:
sudo su
echo "262144" > /sys/module/nf_conntrack/parameters/hashsize
cat /sys/module/nf_conntrack/parameters/hashsize
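The echo above only lasts until reboot, since hashsize is an nf_conntrack module parameter. One way to persist it (an assumption, not from the original procedure; the file name is hypothetical, the value 262144 is the one used above) is a modprobe options file:

```shell
# Write the module option to a local file first (hypothetical file name)
cat > nf_conntrack.conf <<'EOF'
options nf_conntrack hashsize=262144
EOF
# Then, as root, install it where modprobe reads options at module load:
#   install -m 644 nf_conntrack.conf /etc/modprobe.d/nf_conntrack.conf
```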
May 8, 2021
Technology
Working Environment
CentOS 7.9, VM, 8 cores, 16 GB RAM.
Installation
Install dnsmasq:
# sudo yum install -y dnsmasq
Install dnscrypt-proxy:
# sudo yum install -y dnscrypt-proxy
Fetch the dnsmasq-china-list configuration file:
# wget https://raw.githubusercontent.com/felixonmars/dnsmasq-china-list/master/accelerated-domains.china.conf
# mv accelerated-domains.china.conf /etc/dnsmasq.d/accelerated-domains.china.conf
You can replace 114.114.114.114 with your own DNS server (a China intranet DNS).
Configuration
Configure dnsmasq:
# vim /etc/dnsmasq.conf
listen-address=127.0.0.1
no-resolv
conf-dir=/etc/dnsmasq.d
server=127.0.0.1#5300
interface=lo
bind-interfaces
Configure dnscrypt-proxy:
# vim /etc/dnscrypt-proxy/dnscrypt-proxy.toml
# Listen on port 5300
listen_addresses = ['127.0.0.1:5300', '[::1]:5300']
# Use the three public DNS services below
server_names = ['google', 'cloudflare', 'cloudflare-ipv6']
# If no suitable public DNS service is found, fall back to these resolvers
fallback_resolvers = ['9.9.9.9:53', '8.8.8.8:53']
# In case these DNS queries get blocked, send them through a proxy
force_tcp = true
proxy = 'socks5://127.0.0.1:1086'
Configure /etc/resolv.conf to use 127.0.0.1:
nameserver 127.0.0.1
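With resolv.conf in place, the whole chain can be spot-checked (a sketch; assumes dig from bind-utils is installed and both daemons are running):

```shell
# Query dnscrypt-proxy directly on its 5300 listener
dig @127.0.0.1 -p 5300 example.com +short
# Query through dnsmasq on the default port 53
dig @127.0.0.1 example.com +short
```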
privoxy
On CentOS 7.9, don't install this package from EPEL; download the source code from the internet and compile it:
$ privoxy --version
Privoxy version 3.0.28 (https://www.privoxy.org/)
Make sure you have specified the gfwlist.
May 2, 2021
Technology
In a setup that runs entirely from RAM, data must be written back to disk at shutdown. Below is a method for writing the data in RAM back to disk.
# vim /bin/writeback.sh
#!/bin/sh
# Only write back when the root filesystem is a tmpfs (i.e. running from RAM).
kkk=$(mount | grep "none on / type tmpfs")
if [ -n "$kkk" ]
then
    mkdir -p /writeback
    mount /dev/mapper/ubuntu--vg-root /writeback
    # Sync the RAM root to disk, skipping virtual and transient trees.
    rsync -a --delete --exclude 'tmp' --exclude 'proc' --exclude 'writeback' --exclude 'sys' / /writeback/
fi
Make the script executable so systemd can run it:
# chmod +x /bin/writeback.sh
Create a write-back service:
# vim /etc/systemd/system/run-before-shutdown.service
[Unit]
Description=Run my custom task at shutdown
DefaultDependencies=no
Before=shutdown.target reboot.target halt.target
[Service]
Type=oneshot
ExecStart=/bin/writeback.sh
TimeoutStartSec=0
[Install]
WantedBy=shutdown.target
Enable the service:
# systemctl enable run-before-shutdown
At shutdown, the system will then invoke the write-back script to flush the data in RAM to disk.
Apr 30, 2021
Technology
Environment
Create three virtual machines:
192.168.100.13/14/15, 4 cores / 8 GB each, in a virtualized environment.
The subnet these VMs sit on is 192.168.100.0/24, with a DHCP range of 192.168.100.128~192.168.100.254 and a gateway of 192.168.100.1.
OS initialization
We want the lxc instances to obtain IP addresses in the same range as the hosts (192.168.100.13/14/15) through a bridge, so first configure the bridge br0 on each node.
After the manual connection is deleted, NetworkManager automatically brings up another one:
Delete this automatically created connection as well, repeating until only lxdbr0 remains:
Create br0 and make eth0 a slave device of br0:
Configure the other two machines the same way.
The relevant configuration script (illustrated here with 192.168.100.14) is as follows; adjust it to the actual configuration of your environment:
nmcli con show | grep eth0 | awk '{print $2}' | xargs -I % nmcli con delete uuid %
nmcli con show | grep eth0 | awk '{print $4}' | xargs -I % nmcli con delete uuid %
nmcli con show
nmcli conn add type bridge ifname br0 ipv4.method manual ipv4.address "192.168.100.14" ipv4.gateway "192.168.100.1" ipv4.dns "223.5.5.5"
nmcli conn add type bridge-slave ifname eth0 master br0
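The result of the nmcli commands can be verified before moving on (a sketch):

```shell
# The bridge and its slave should both show as active connections
nmcli -f NAME,TYPE,DEVICE con show --active
# eth0 should now be enslaved to br0
ip link show master br0
# br0 should carry the static address configured above
ip addr show br0
```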
lxc使用br0网络
lxc可以通过使用不同的profile定义出实例所在的网络,我们通过以下操作新建出一个可以通过网桥br0
获取到192.168.100.0/24
段地址的profile:
[root@node13 ~]# lxc profile list
+---------+---------------------+---------+
| NAME | DESCRIPTION | USED BY |
+---------+---------------------+---------+
| default | Default LXD profile | 0 |
+---------+---------------------+---------+
[root@node13 ~]# lxc profile show default > br0
[root@node13 ~]# vim br0
config: {}
description: Default LXD profile modified for using br0
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: br0
used_by: []
[root@node13 ~]# lxc profile create br0
Profile br0 created
[root@node13 ~]# lxc profile edit br0 < br0
[root@node13 ~]# lxc profile list
+---------+--------------------------------------------+---------+
| NAME | DESCRIPTION | USED BY |
+---------+--------------------------------------------+---------+
| br0 | Default LXD profile modified for using br0 | 0 |
+---------+--------------------------------------------+---------+
| default | Default LXD profile | 0 |
+---------+--------------------------------------------+---------+
Now a container can be instantiated using the newly created br0 profile:
# lxc launch centos7 node1 --profile br0
# lxc ls
+-------+---------+------------------------+----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------+---------+------------------------+----------------------------------------------+-----------+-----------+
| node1 | RUNNING | 192.168.100.130 (eth0) | | CONTAINER | 0 |
+-------+---------+------------------------+----------------------------------------------+-----------+-----------+
To pin a static IP:
[root@node13 ~]# lxc exec node1 bash
[root@node1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.100.20
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
ONBOOT=yes
HOSTNAME=node1
NM_CONTROLLED=no
TYPE=Ethernet
MTU=
DHCP_HOSTNAME=node1
[root@node1 ~]# reboot
After the reboot, the lxc instance indeed uses the 192.168.100.20 address we configured.
[root@node13 ~]# lxc ls
+-------+---------+------------------------+----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------+---------+------------------------+----------------------------------------------+-----------+-----------+
| node1 | RUNNING | 192.168.100.20 (eth0)  |                                              | CONTAINER | 0         |
+-------+---------+------------------------+----------------------------------------------+-----------+-----------+
Finally, verify connectivity to external networks:
[root@node13 ~]# lxc exec node1 bash
[root@node1 ~]# ping 192.168.100.10
PING 192.168.100.10 (192.168.100.10) 56(84) bytes of data.
64 bytes from 192.168.100.10: icmp_seq=1 ttl=64 time=0.742 ms
64 bytes from 192.168.100.10: icmp_seq=2 ttl=64 time=0.287 ms
[root@node1 ~]# ping 10.50.208.145
PING 10.50.208.145 (10.50.208.145) 56(84) bytes of data.
64 bytes from 10.50.208.145: icmp_seq=1 ttl=63 time=0.410 ms
64 bytes from 10.50.208.145: icmp_seq=2 ttl=63 time=0.214 ms
^C
--- 10.50.208.145 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.214/0.312/0.410/0.098 ms
[root@node1 ~]# ping 10.50.208.147
PING 10.50.208.147 (10.50.208.147) 56(84) bytes of data.
64 bytes from 10.50.208.147: icmp_seq=1 ttl=64 time=0.146 ms
64 bytes from 10.50.208.147: icmp_seq=2 ttl=64 time=0.153 ms
^C
--- 10.50.208.147 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
Apr 29, 2021
Technology
1. Purpose
This document provides best practices for installing, configuring, and running LXD on CentOS 7.
2. Environment
For quick validation, this document is based on a virtual machine with the following configuration:
- OS: CentOS Linux release 7.6.1810 (Core), minimal install
- Hardware: 36 cores, 246 GB RAM, 500 GB disk partition
- Software:
- Network: 192.168.100.10/24, gateway 192.168.100.1
Access (how to reach the validation machine directly from the office network)
3. Setup
Offline: after configuring an intranet yum repository, run the following to install:
# yum install -y snapd net-tools vim
# systemctl enable --now snapd.socket
Extract the offline installation files:
# tar xzvf lxcimages.tar.gz ; tar xzvf snap.tar.gz
Enter the snap directory and install the snaps:
# snap ack core_10958.assert ; snap ack core18_1997.assert; snap ack lxd_20211.assert
# snap install core_10958.snap; snap install core18_1997.snap; snap install lxd_20211.snap
Change the kernel parameters, then reboot the machine:
$ grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
$ grubby --args="namespace.unpriv_enable=1" --update-kernel="$(grubby --default-kernel)"
$ sudo sh -c 'echo "user.max_user_namespaces=3883" > /etc/sysctl.d/99-userns.conf'
# reboot
Create the snap symlink and grant run permissions:
# ln -s /var/lib/snapd/snap /snap
# usermod -a -G lxd root
# newgrp lxd
# id
uid=0(root) gid=994(lxd) groups=994(lxd),0(root)
At this point, log out of the terminal and log back in before the lxc commands can be used.
Initialize the lxd environment:
[root@lxdpaas ~]# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm, ceph) [default=btrfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
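The interactive dialog above can also be replayed non-interactively with a preseed file (a sketch mirroring the answers given above; the pool and bridge names are the dialog defaults):

```shell
# Write a preseed equivalent to the interactive answers above
cat > lxd-preseed.yaml <<'EOF'
storage_pools:
- name: default
  driver: dir
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
EOF
# Feed it to lxd on a fresh host:
#   lxd init --preseed < lxd-preseed.yaml
```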
At this point there should be no images; next, import an image manually:
# cd lxcimages
# lxc image import meta-50030de846c046680faf34f7dc3e60284e31f5aab38dfd19c94a2fd1bf895d0c.tar.xz 50030de846c046680faf34f7dc3e60284e31f5aab38dfd19c94a2fd1bf895d0c.squashfs --alias centos7
Image imported with fingerprint: 50030de846c046680faf34f7dc3e60284e31f5aab38dfd19c94a2fd1bf895d0c
# lxc image list
+---------+--------------+--------+----------------------------------+--------------+-----------+---------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+---------+--------------+--------+----------------------------------+--------------+-----------+---------+------------------------------+
| centos7 | 50030de846c0 | no | Centos 7 x86_64 (20210428_07:08) | x86_64 | CONTAINER | 83.46MB | Apr 29, 2021 at 4:53am (UTC) |
+---------+--------------+--------+----------------------------------+--------------+-----------+---------+------------------------------+
4. Hands-on lxc operations
Launch an lxc instance:
# lxc launch centos7 db1
Creating db1
Starting db1
Enter the running instance:
# lxc exec db1 bash
[root@db1 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
Launch a second instance named db2:
[root@lxdpaas lxcimages]# lxc launch centos7 db2
Creating db2
Starting db2
[root@lxdpaas lxcimages]# lxc exec db2 bash
[root@db2 ~]#
List the running container instances:
# lxc ls
+------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| db1 | RUNNING | 10.159.107.72 (eth0) | fd42:45a:636c:6e69:216:3eff:fe81:347e (eth0) | CONTAINER | 0 |
+------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| db2 | RUNNING | 10.159.107.125 (eth0) | fd42:45a:636c:6e69:216:3eff:fe53:754 (eth0) | CONTAINER | 0 |
+------+---------+-----------------------+----------------------------------------------+-----------+-----------+
Stop and delete the running containers:
[root@lxdpaas lxcimages]# lxc stop db1
[root@lxdpaas lxcimages]# lxc stop db2
[root@lxdpaas lxcimages]# lxc delete db1
[root@lxdpaas lxcimages]# lxc delete db2
[root@lxdpaas lxcimages]# lxc ls
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
Customization:
# lxc launch c75dhclient k2
# lxc exec k2 /bin/bash
dhclient eth0
vi /etc/yum.repos.d/kkk.repo
yum makecache
yum install -y vim net-tools
exit
# lxc ls | grep k2
| k2 | RUNNING | 10.159.107.248 (eth0) | fd42:45a:636c:6e69:216:3eff:fea0:2c33 (eth0) | CONTAINER | 0 |
Export the current image:
[root@lxdpaas ~]# mkdir export
[root@lxdpaas ~]# cd export/
[root@lxdpaas export]# lxc stop k2
[root@lxdpaas export]# lxc publish k2 --alias centos75withvim
Instance published with fingerprint: 7301c7d85d4d56ebcae117aa79cf88868c4821dedb22e641fe66d05cab6599f2
[root@lxdpaas export]# lxc image list
+-----------------+--------------+--------+----------------------------------+--------------+-----------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+-----------------+--------------+--------+----------------------------------+--------------+-----------+----------+------------------------------+
| c75dhclient | 3a063c11b987 | no | | x86_64 | CONTAINER | 381.84MB | Apr 29, 2021 at 8:06am (UTC) |
+-----------------+--------------+--------+----------------------------------+--------------+-----------+----------+------------------------------+
| centos7 | 50030de846c0 | no | Centos 7 x86_64 (20210428_07:08) | x86_64 | CONTAINER | 83.46MB | Apr 29, 2021 at 4:53am (UTC) |
+-----------------+--------------+--------+----------------------------------+--------------+-----------+----------+------------------------------+
| centos75withvim | 7301c7d85d4d | no | | x86_64 | CONTAINER | 420.72MB | Apr 29, 2021 at 8:23am (UTC) |
+-----------------+--------------+--------+----------------------------------+--------------+-----------+----------+------------------------------+
[root@lxdpaas export]# lxc image export centos75withvim .
Image exported successfully!
[root@lxdpaas export]# ls
7301c7d85d4d56ebcae117aa79cf88868c4821dedb22e641fe66d05cab6599f2.tar.gz
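The exported tarball can then be carried to another LXD host and imported there (a sketch; the file name is the fingerprint produced by the export above):

```shell
# On the target host:
lxc image import 7301c7d85d4d56ebcae117aa79cf88868c4821dedb22e641fe66d05cab6599f2.tar.gz --alias centos75withvim
lxc launch centos75withvim test2
```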
Test:
# lxc launch centos75withvim test1
Creating test1
Starting test1
[root@lxdpaas export]# lxc exec test1 /bin/bash
[root@base ~]# dhclient eth0
[root@base ~]# which vim
/usr/bin/vim
[root@base ~]# which ifconfig
/usr/sbin/ifconfig
Database-related changes:
yum install -y mariadb-server
systemctl enable mariadb
5. Resource isolation
Build a benchmark container:
$ lxc launch centos7 bench -c security.privileged=true
# yum install -y epel-release; yum install -y stress
# yum install which
# which stress
# shutdown -h now
$ lxc publish bench --alias bench
$ lxc launch bench k1
$ lxc exec k1 /bin/bash
stress --cpu 5
At this point you can see five CPUs fully loaded on the host:
Set a CPU limit (on the running instance k1):
# lxc config set k1 limits.cpu 2
Even though the processes inside the container are unchanged, on the host only two CPUs are now fully loaded:
The same rules apply to memory usage.
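A memory cap can be sketched the same way (the 1GB value is only an example; k1 is the instance launched above):

```shell
# Cap the instance at 1 GB of RAM
lxc config set k1 limits.memory 1GB
# Inside the container, free should now report the reduced total
lxc exec k1 -- free -m
```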
z. Customization
To match user habits, the following changes were made:
# yum install -y mate-desktop xrdp mate* gnome-terminal firefox wqy* evince
# echo mate-session>/root/.Xclients
# chmod 777 /root/.Xclients
# systemctl start xrdp
# systemctl enable xrdp
Externally, iptables forwarding is needed:
$ sudo iptables -D FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
$ sudo iptables -D FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 13389 -j DNAT --to-destination 192.168.100.10:3389
$ sudo iptables -t nat -A POSTROUTING -p tcp -d 192.168.100.10 --dport 3389 -j SNAT --to-source 10.50.208.147
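These rules live only in memory; one way to keep them across reboots on CentOS 7 (an assumption, not part of the original procedure; it requires the iptables-services package) is:

```shell
sudo yum install -y iptables-services
sudo systemctl enable iptables
# Save the current rule set so it is restored at boot
sudo sh -c 'iptables-save > /etc/sysconfig/iptables'
```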
On the external centos7 machine, because the kernel has been upgraded, the container must be started with the following command:
lxc launch images:centos/7 blah -c security.privileged=true
The centos7.5 container built so far does not seem to fully support lxc's functionality?