WorkingTipsOndnscryption

Working Environment

CentOS 7.9 VM, 8 cores, 16 GB RAM.

Installation

Install dnsmasq:

# sudo yum install -y dnsmasq

Install dnscrypt-proxy:

# sudo yum install -y dnscrypt-proxy

Download the dnsmasq-china-list configuration file:

# wget https://raw.githubusercontent.com/felixonmars/dnsmasq-china-list/master/accelerated-domains.china.conf
# mv accelerated-domains.china.conf /etc/dnsmasq.d/accelerated-domains.china.conf

You can replace 114.114.114.114 in this file with your own DNS server (a China intranet DNS).
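
For example, a quick way to swap in your own resolver (a sketch; 192.168.1.53 is only a placeholder for your actual intranet DNS, assuming the list still points at 114.114.114.114):

# sed -i 's/114\.114\.114\.114/192.168.1.53/g' /etc/dnsmasq.d/accelerated-domains.china.conf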

Configuration

Configure dnsmasq:

# vim /etc/dnsmasq.conf
# Listen only on the loopback interface
listen-address=127.0.0.1
no-resolv
# Directory holding accelerated-domains.china.conf
conf-dir=/etc/dnsmasq.d
# Forward all other queries to dnscrypt-proxy listening on port 5300
server=127.0.0.1#5300
interface=lo
bind-interfaces

Configure dnscrypt-proxy:

# vim /etc/dnscrypt-proxy/dnscrypt-proxy.toml
     # Listen on port 5300
     listen_addresses = ['127.0.0.1:5300', '[::1]:5300']
     # Use the following three public DNS services
     server_names = ['google', 'cloudflare', 'cloudflare-ipv6']
     # If no suitable public DNS service can be found, fall back to the DNS servers below
     fallback_resolvers = ['9.9.9.9:53', '8.8.8.8:53']
     # To keep these DNS queries from being blocked, send them through a proxy
     force_tcp = true
     proxy = 'socks5://127.0.0.1:1086'

Configure /etc/resolv.conf to use 127.0.0.1:

nameserver 127.0.0.1
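
To quickly verify the chain, query dnsmasq and dnscrypt-proxy directly (a minimal sanity check, assuming dig from the bind-utils package is installed):

# dig @127.0.0.1 www.baidu.com +short           # answered by dnsmasq via the china list
# dig @127.0.0.1 -p 5300 www.google.com +short  # answered directly by dnscrypt-proxy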

Privoxy

On CentOS 7.9, don't install this package from EPEL; download the source code from the internet and compile it:

$ privoxy  --version
Privoxy version 3.0.28 (https://www.privoxy.org/)

Make sure you have specified the gfwlist.
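
A minimal sketch of the relevant lines in /etc/privoxy/config, assuming the gfwlist has already been converted into a Privoxy action file named gfwlist.action (for example with a gfwlist-to-Privoxy converter) whose rules forward matched domains to the local SOCKS5 proxy; the file name and proxy address are assumptions, not part of the original setup:

listen-address  127.0.0.1:8118
# Rules generated from the gfwlist; matched domains are forwarded to the
# SOCKS5 proxy (e.g. 127.0.0.1:1086) via forward-override actions
actionsfile     gfwlist.action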

Running Ubuntu Entirely from RAM (2)

When the system runs entirely from RAM, the data needs to be written back to disk at shutdown. The following is a method for writing the data in RAM back to disk.

# vim /bin/writeback.sh
    #!/bin/sh
    # Only write back when the root filesystem is actually running from tmpfs
    kkk=$(mount | grep "none on / type tmpfs")
    if [ -n "$kkk" ]
    then
        mkdir -p /writeback
        # Mount the real root LV and sync the in-RAM root back onto it
        mount /dev/mapper/ubuntu--vg-root /writeback
        rsync -a --delete --exclude 'tmp' --exclude 'proc' --exclude 'writeback' --exclude 'sys' / /writeback/
        umount /writeback
    fi
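
Make the script executable so that systemd can run it:

# chmod +x /bin/writeback.sh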

Create a write-back service:

# vim  /etc/systemd/system/run-before-shutdown.service 
[Unit]
Description=Run my custom task at shutdown
DefaultDependencies=no
Before=shutdown.target reboot.target halt.target

[Service]
Type=oneshot
ExecStart=/bin/writeback.sh
TimeoutStartSec=0

[Install]
WantedBy=shutdown.target

Enable the service:

# systemctl enable run-before-shutdown

At shutdown, the system will then invoke the write-back script to write the data in RAM to disk.

WorkingTipsOnLXD20210430

Environment

Create three new virtual machines:

192.168.100.13/14/15, 4 cores and 8 GB RAM each, virtual machine environment

The subnet these VMs sit on is 192.168.100.0/24, with a DHCP range of 192.168.100.128~192.168.100.254 and a gateway of 192.168.100.1.

OS initialization

We want the lxc instances to obtain addresses from the same IP range as the hosts (192.168.100.13/14/15) through a bridge, so first configure the br0 bridge on each node.

After the manually configured connection is deleted, NetworkManager automatically brings up another one:

./images/2021_04_30_23_15_54_819x240.jpg

Delete this automatically created connection again, until only lxdbr0 remains:

./images/2021_04_30_23_17_44_820x162.jpg

Create br0 and make eth0 a slave device of br0:

/images/2021_04_30_23_21_27_1093x243.jpg

Configure the other two machines in the same way.

The corresponding configuration script (using 192.168.100.14 as an example) is shown below; adjust it to the actual configuration of your environment:

# Delete the existing connection(s) bound to eth0 (the UUID may appear in column 2
# or column 4 depending on whether the connection name contains spaces)
nmcli con show | grep eth0 | awk '{print $2}' | xargs -I % nmcli con delete uuid %
nmcli con show | grep eth0 | awk '{print $4}' | xargs -I % nmcli con delete uuid %
nmcli con show
# Create the br0 bridge with a static address, then attach eth0 as its slave
nmcli conn add type bridge ifname br0 ipv4.method manual ipv4.address "192.168.100.14/24" ipv4.gateway "192.168.100.1" ipv4.dns "223.5.5.5"
nmcli conn add type bridge-slave ifname eth0 master br0
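
After the bridge comes up, a quick check that br0 holds the expected address and that eth0 is enslaved (plain iproute2 commands):

# ip addr show br0
# bridge link show | grep eth0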

Using the br0 network with lxc

lxc can define which network an instance lives on by using different profiles. With the following steps we create a profile whose instances obtain 192.168.100.0/24 addresses through the br0 bridge:

[root@node13 ~]# lxc profile list
+---------+---------------------+---------+
|  NAME   |     DESCRIPTION     | USED BY |
+---------+---------------------+---------+
| default | Default LXD profile | 0       |
+---------+---------------------+---------+
[root@node13 ~]# lxc profile show default>br0
[root@node13 ~]# vim br0
config: {}
description: Default LXD profile modified for using br0
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: br0
used_by: []
[root@node13 ~]# lxc profile create br0
Profile br0 created
[root@node13 ~]# lxc profile edit br0<br0
[root@node13 ~]# lxc profile list
+---------+--------------------------------------------+---------+
|  NAME   |                DESCRIPTION                 | USED BY |
+---------+--------------------------------------------+---------+
| br0     | Default LXD profile modified for using br0 | 0       |
+---------+--------------------------------------------+---------+
| default | Default LXD profile                        | 0       |
+---------+--------------------------------------------+---------+

Now a container can be instantiated with the newly created br0 profile:

# lxc launch centos7 node1 --profile br0
# lxc ls
+-------+---------+------------------------+----------------------------------------------+-----------+-----------+
| NAME  |  STATE  |          IPV4          |                     IPV6                     |   TYPE    | SNAPSHOTS |
+-------+---------+------------------------+----------------------------------------------+-----------+-----------+
| node1 | RUNNING | 192.168.100.130 (eth0) |                                              | CONTAINER | 0         |
+-------+---------+------------------------+----------------------------------------------+-----------+-----------+

How to assign a fixed IP:

[root@node13 ~]# lxc exec node1 bash
[root@node1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.100.20
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
ONBOOT=yes
HOSTNAME=node1
NM_CONTROLLED=no
TYPE=Ethernet
MTU=
DHCP_HOSTNAME=node1
[root@node1 ~]#  reboot

After rebooting, we can see that the container is indeed using the 192.168.100.20 address we configured.

[root@node13 ~]# lxc ls
+-------+---------+------------------------+----------------------------------------------+-----------+-----------+
| NAME  |  STATE  |          IPV4          |                     IPV6                     |   TYPE    | SNAPSHOTS |
+-------+---------+------------------------+----------------------------------------------+-----------+-----------+
| node1 | RUNNING | 192.168.100.20 (eth0)  |                                              | CONTAINER | 0         |

Finally, verify connectivity with the external network:

[root@node13 ~]# lxc exec node1 bash
[root@node1 ~]# ping 192.168.100.10
PING 192.168.100.10 (192.168.100.10) 56(84) bytes of data.
64 bytes from 192.168.100.10: icmp_seq=1 ttl=64 time=0.742 ms
64 bytes from 192.168.100.10: icmp_seq=2 ttl=64 time=0.287 ms
[root@node1 ~]# ping 10.50.208.145
PING 10.50.208.145 (10.50.208.145) 56(84) bytes of data.
64 bytes from 10.50.208.145: icmp_seq=1 ttl=63 time=0.410 ms
64 bytes from 10.50.208.145: icmp_seq=2 ttl=63 time=0.214 ms
^C
--- 10.50.208.145 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.214/0.312/0.410/0.098 ms
[root@node1 ~]# ping 10.50.208.147
PING 10.50.208.147 (10.50.208.147) 56(84) bytes of data.
64 bytes from 10.50.208.147: icmp_seq=1 ttl=64 time=0.146 ms
64 bytes from 10.50.208.147: icmp_seq=2 ttl=64 time=0.153 ms
^C
--- 10.50.208.147 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms

BestPracticeOfCentOSLXD

1. Purpose

This document provides best practices for installing, configuring, and running LXD on CentOS 7.

2. Environment preparation

For quick verification purposes, this document is based on a virtual machine. The verification machine is configured as follows:

  • Operating system: CentOS Linux release 7.6.1810 (Core), minimal installation
  • Hardware: 36 cores, 246 GB RAM, 500 GB disk partition
  • Software:
  • Network: 192.168.100.10/24, gateway 192.168.100.1

Access method (how to reach the verification machine directly from the office network)

3. Environment setup

In an offline environment, after configuring an internal yum repository, run the following commands to install:

# yum install -y snapd net-tools vim
# systemctl enable --now snapd.socket

Extract the offline installation files:

# tar xzvf lxcimages.tar.gz ; tar xzvf snap.tar.gz

Enter the snap directory and install the snaps:

# snap ack core_10958.assert ; snap ack core18_1997.assert; snap ack lxd_20211.assert
# snap install core_10958.snap; snap install core18_1997.snap; snap install lxd_20211.snap

Change the kernel parameters, then reboot the machine:

$ grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
$ grubby --args="namespace.unpriv_enable=1" --update-kernel="$(grubby --default-kernel)"
$ sudo sh -c 'echo "user.max_user_namespaces=3883" > /etc/sysctl.d/99-userns.conf'
# reboot
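
After the reboot, the new settings can be verified with a simple check:

# cat /proc/cmdline | grep namespace
# sysctl user.max_user_namespaces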

Create the snap directory symlink and add run permissions:

# ln -s /var/lib/snapd/snap /snap
# usermod -a -G lxd root
# newgrp lxd
# id
uid=0(root) gid=994(lxd) groups=994(lxd),0(root)

At this point you must log out of the terminal and log back in before the lxc commands can be used.

Initialize the lxd environment:

[root@lxdpaas ~]# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, dir, lvm, ceph) [default=btrfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like the LXD server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 

At this point there should be no images; next, import the image manually:

# cd lxcimages
# lxc image  import meta-50030de846c046680faf34f7dc3e60284e31f5aab38dfd19c94a2fd1bf895d0c.tar.xz 50030de846c046680faf34f7dc3e60284e31f5aab38dfd19c94a2fd1bf895d0c.squashfs --alias centos7
Image imported with fingerprint: 50030de846c046680faf34f7dc3e60284e31f5aab38dfd19c94a2fd1bf895d0c
# lxc image list
+---------+--------------+--------+----------------------------------+--------------+-----------+---------+------------------------------+
|  ALIAS  | FINGERPRINT  | PUBLIC |           DESCRIPTION            | ARCHITECTURE |   TYPE    |  SIZE   |         UPLOAD DATE          |
+---------+--------------+--------+----------------------------------+--------------+-----------+---------+------------------------------+
| centos7 | 50030de846c0 | no     | Centos 7 x86_64 (20210428_07:08) | x86_64       | CONTAINER | 83.46MB | Apr 29, 2021 at 4:53am (UTC) |
+---------+--------------+--------+----------------------------------+--------------+-----------+---------+------------------------------+

4. Hands-on lxc operations

Launch an lxc instance:

# lxc launch centos7 db1
Creating db1
Starting db1              

Enter the running instance:

# lxc exec db1 bash
[root@db1 ~]# cat /etc/redhat-release 
CentOS Linux release 7.9.2009 (Core)

Launch a second instance named db2:

[root@lxdpaas lxcimages]# lxc launch centos7 db2
Creating db2
Starting db2                              
[root@lxdpaas lxcimages]# lxc exec db2 bash
[root@db2 ~]#

List the running container instances:

# lxc ls
+------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| NAME |  STATE  |         IPV4          |                     IPV6                     |   TYPE    | SNAPSHOTS |
+------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| db1  | RUNNING | 10.159.107.72 (eth0)  | fd42:45a:636c:6e69:216:3eff:fe81:347e (eth0) | CONTAINER | 0         |
+------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| db2  | RUNNING | 10.159.107.125 (eth0) | fd42:45a:636c:6e69:216:3eff:fe53:754 (eth0)  | CONTAINER | 0         |
+------+---------+-----------------------+----------------------------------------------+-----------+-----------+

Stop/delete the running containers:

[root@lxdpaas lxcimages]# lxc stop db1
[root@lxdpaas lxcimages]# lxc stop db2
[root@lxdpaas lxcimages]# lxc delete db1
[root@lxdpaas lxcimages]# lxc delete db2
[root@lxdpaas lxcimages]# lxc ls
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

Customization:

# lxc launch c75dhclient k2
# lxc exec k2 /bin/bash
dhclient eth0
vi /etc/yum.repos.d/kkk.repo
yum makecache
yum install -y vim net-tools
exit
# lxc ls | grep k2
| k2   | RUNNING | 10.159.107.248 (eth0) | fd42:45a:636c:6e69:216:3eff:fea0:2c33 (eth0) | CONTAINER | 0         |

Export the current image:

[root@lxdpaas ~]# mkdir export
[root@lxdpaas ~]# cd export/
[root@lxdpaas export]# lxc stop k2
[root@lxdpaas export]# lxc publish k2 --alias centos75withvim
Instance published with fingerprint: 7301c7d85d4d56ebcae117aa79cf88868c4821dedb22e641fe66d05cab6599f2
[root@lxdpaas export]# lxc image list
+-----------------+--------------+--------+----------------------------------+--------------+-----------+----------+------------------------------+
|      ALIAS      | FINGERPRINT  | PUBLIC |           DESCRIPTION            | ARCHITECTURE |   TYPE    |   SIZE   |         UPLOAD DATE          |
+-----------------+--------------+--------+----------------------------------+--------------+-----------+----------+------------------------------+
| c75dhclient     | 3a063c11b987 | no     |                                  | x86_64       | CONTAINER | 381.84MB | Apr 29, 2021 at 8:06am (UTC) |
+-----------------+--------------+--------+----------------------------------+--------------+-----------+----------+------------------------------+
| centos7         | 50030de846c0 | no     | Centos 7 x86_64 (20210428_07:08) | x86_64       | CONTAINER | 83.46MB  | Apr 29, 2021 at 4:53am (UTC) |
+-----------------+--------------+--------+----------------------------------+--------------+-----------+----------+------------------------------+
| centos75withvim | 7301c7d85d4d | no     |                                  | x86_64       | CONTAINER | 420.72MB | Apr 29, 2021 at 8:23am (UTC) |
+-----------------+--------------+--------+----------------------------------+--------------+-----------+----------+------------------------------+
[root@lxdpaas export]# lxc image export centos75withvim .
Image exported successfully!           
[root@lxdpaas export]# ls
7301c7d85d4d56ebcae117aa79cf88868c4821dedb22e641fe66d05cab6599f2.tar.gz

Test:

# lxc launch centos75withvim test1
Creating test1
Starting test1                             
[root@lxdpaas export]# lxc exec test1 /bin/bash
[root@base ~]# dhclient eth0
[root@base ~]# which vim
/usr/bin/vim
[root@base ~]# which ifconfig
/usr/sbin/ifconfig

Database-related changes:

 yum install -y mariadb-server
 systemctl enable mariadb

5. Resource isolation

Build a benchmark container:

$ lxc launch centos7 bench -c security.privileged=true
$ lxc exec bench /bin/bash
    # yum install -y epel-release; yum install -y stress
    # yum install -y which
    # which stress
    # shutdown -h now
$ lxc publish bench --alias bench
$ lxc launch bench k1
$ lxc exec k1 /bin/bash
    stress --cpu 5

At this point you can see that 5 CPUs on the host are fully loaded:

/images/2021_04_29_17_55_21_884x154.jpg

Set a CPU limit:

# lxc config set k1 limits.cpu 2

Even though the processes inside the container are unchanged, the host now shows only two CPUs fully loaded:

/images/2021_04_29_17_56_10_898x161.jpg

The same rule applies to memory usage.
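
For example (a sketch using the same k1 container; the 4GB value is arbitrary):

# lxc config set k1 limits.memory 4GB
# lxc exec k1 -- free -m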

z. Customization

To match users' habits, the following changes were made:

# yum install -y mate-desktop xrdp mate* gnome-terminal firefox wqy* evince
# echo mate-session>/root/.Xclients
# chmod 777 /root/.Xclients
# systemctl start xrdp
# systemctl enable xrdp

Externally, iptables forwarding is required:

$ sudo iptables -D FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
$ sudo iptables -D FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 13389 -j DNAT --to-destination  192.168.100.10:3389
$ sudo iptables -t nat -A POSTROUTING -p tcp -d 192.168.100.10 --dport 3389 -j SNAT --to-source 10.50.208.147
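
To keep these NAT rules across reboots, one option on CentOS 7 is the iptables-services package (an assumption about the host setup, not part of the original steps):

# yum install -y iptables-services
# service iptables save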

On the external CentOS 7 machine, because its kernel has been upgraded, the container needs to be started with the following command:

lxc launch images:centos/7 blah -c security.privileged=true

The CentOS 7.5 container built so far seems unable to fully support lxc's functionality?

Running Ubuntu Entirely from RAM

1. Purpose

Run the Ubuntu 18.04.1 (arm64) operating system entirely in memory.

2. Prerequisites

Ubuntu 18.04.1 arm64 installation ISO.
An arm64 server, or libvirtd/virt-manager (without a physical server, a virtual machine can be used to simulate the test).

3. Steps

Do a minimal installation of Ubuntu 18.04.1; the root partition should preferably contain everything (all in one).
After the OS is installed, install the packages you need and prepare the environment, then delete all temporary files and slim the system down as much as possible. This matters because, once the system is customized for RAM boot, every file will be loaded into memory at startup! A fresh Ubuntu installation takes roughly 1.5 GB of disk space.
The procedure for customizing the system to boot from RAM is as follows:

Step 1:
Modify /etc/fstab. First back up the file:

# cp /etc/fstab /etc/fstab.bak

Edit /etc/fstab, find the line identifying the root partition (/), and change it to the following (example below):

#/dev/mapper/ubuntu--vg-root /               ext4    errors=remount-ro 0       1
none / tmpfs defaults 0 0

Step 2:
Modify the local script in the initramfs. The tools and scripts in the initramfs are mounted and perform their initialization before the init script of the real root filesystem starts. We need to copy the contents of the disk root partition into tmpfs ahead of time, so that the correct partition is found when /etc/fstab is processed.

First back up the /usr/share/initramfs-tools/scripts/local file:

# cp /usr/share/initramfs-tools/scripts/local /usr/share/initramfs-tools/scripts/local.bak   

Edit the local file and change the handling logic of its Mount root section (around line 204):

        # FIXME This has no error checking
        # Mount root
        #mount ${roflag} ${FSTYPE:+-t ${FSTYPE} }${ROOTFLAGS} ${ROOT} ${rootmnt}
        # Start of ramboottmp
        mkdir /ramboottmp
        mount ${roflag} -t ${FSTYPE} ${ROOTFLAGS} ${ROOT} /ramboottmp
        mount -t tmpfs -o size=100% none ${rootmnt}
        cd ${rootmnt}
        cp -rfa /ramboottmp/* ${rootmnt}
        umount /ramboottmp
        ### End of ramboottmp

After saving the file, rebuild the initramfs:

# mkinitramfs -o /boot/initrd.img-ramboot

After the build succeeds, restore the local file to its original version:

# cp -f /usr/share/initramfs-tools/scripts/local.bak /usr/share/initramfs-tools/scripts/local

Step 3:
Modify grub so that the operating system boots with the initrd.img-ramboot built above.

Change the initrd line of the first boot entry, replacing it with the following:

# chmod +w /boot/grub/grub.cfg
# vim /boot/grub/grub.cfg
.....
.....
        linux	/boot/vmlinuz-4.15.0-29-generic root=/dev/mapper/ubuntu--vg-root ro  
	initrd	/boot/initrd.img-ramboot
......
......
# chmod -w /boot/grub/grub.cfg

Step 4:
Reboot and select the first boot entry; the root partition will then be loaded into tmpfs in its entirety.
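
After booting, you can confirm that the root filesystem is indeed tmpfs (a quick check):

# df -hT /
# mount | grep " on / "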

4. Performance comparison tests

Test environment:

  • aarch64, 4 cores
  • 64 GB RAM
  • 100 GB disk partition
  • Ubuntu 18.04.1 LTS
  • Kernel version: 4.15.0-29-generic
  • fio version: fio-3.1

All test cases were run on both the RAM-disk host and the traditional host, and the results were compared.

4.1 fio 4K random read/write

The test command is as follows:

# fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randrw --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --numjobs=1 --runtime=60 --group_reporting

Metric       RAM-based host           Traditional host
READ bw      bw=513MiB/s (538MB/s)    bw=85.0KiB/s (87.0kB/s)
READ io      io=5133MiB (5382MB)      io=5104KiB (5226kB)
READ iops    IOPS=131k                IOPS=21
WRITE bw     bw=510MiB/s (535MB/s)    bw=88.1KiB/s (90.2kB/s)
WRITE io     io=5107MiB (5355MB)      io=5288KiB (5415kB)
WRITE iops   IOPS=131k                IOPS=22

The tests show that for 4K random read/write, the RAM-based host delivers roughly 6000 times the bandwidth of the traditional host, and roughly 6000 times the read and write IOPS.

4.2 fio 4K sequential read/write

The test command is as follows:

# fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=rw --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --numjobs=1 --runtime=60 --group_reporting

Metric       RAM-based host           Traditional host
READ bw      bw=640MiB/s (671MB/s)    bw=73.2KiB/s (75.0kB/s)
READ io      io=5133MiB (5382MB)      io=4396KiB (4502kB)
READ iops    IOPS=164k                IOPS=18
WRITE bw     bw=637MiB/s (668MB/s)    bw=76.8KiB/s (78.6kB/s)
WRITE io     io=5107MiB (5355MB)      io=4608KiB (4719kB)
WRITE iops   IOPS=163k                IOPS=19

The tests show that for 4K sequential read/write, the RAM-based host delivers roughly 9000 times the bandwidth of the traditional host, and roughly 9000 times the read and write IOPS.