InstallationOfBlissOSInVirtManager
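
The screenshots below walk through the BlissOS installer inside a virt-manager VM. For reference, a roughly equivalent VM could be created from the command line with virt-install; the ISO path, disk size and other values here are placeholders, not the settings used originally:

$ virt-install --name blissos --memory 4096 --vcpus 2 \
    --cdrom /path/to/Bliss-OS.iso \
    --disk size=16,format=qcow2 \
    --os-variant generic --graphics spice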

Create the partition:

/images/2022_05_06_11_45_13_693x190.jpg

Use GPT? (no):

/images/2022_05_06_11_46_38_848x444.jpg

Choose partition:

/images/2022_05_06_11_47_44_717x180.jpg

ext4 format:

/images/2022_05_06_11_47_58_542x153.jpg

Confirm the format operation:

/images/2022_05_06_11_48_12_636x197.jpg

Install bootloader:

/images/2022_05_06_11_48_29_488x154.jpg

Install /system as read/write:

/images/2022_05_06_11_48_46_570x160.jpg

Reboot:

/images/2022_05_06_11_50_04_481x249.jpg

AutoPingOpenWRT

I am based in Guangzhou. The public IP address assigned to my home connection gets reclaimed if it sits unused for a while, and every time that happens I have to phone the ISP to get it back. To keep the public IP from being reclaimed, something has to periodically ping or fetch a service on it from the outside. The steps below implement this.

Prerequisites

Call 10000, ask the human operator to bind a public IP to the optical modem; the reason given is that home surveillance cameras need it.

Optical modem setup

Set up port forwarding on the optical modem, e.g. map external port 12222 to port 22 of an OpenWRT device.

OpenWRT device setup

Set up an SSH key for dropbear on the OpenWRT device:

cd /root/.ssh
dropbearkey -t rsa -f ~/.ssh/id_rsa
dropbearkey -y -f ~/.ssh/id_rsa | grep "^ssh-rsa " >> authorized_keys

Copy the contents of authorized_keys into the authorized_keys file on the remote VPS, so that passwordless login works.
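
One way to do this (a sketch; the address and port placeholders match those used in the script below) is to append the key over SSH:

cat /root/.ssh/authorized_keys | ssh -p 2xxxx root@1x.xx.xx.xx 'cat >> /root/.ssh/authorized_keys'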

Write the passwordless check script

Create the /overlay/remote.sh file as follows:

#!/bin/sh
pIP=`wget -qO - http://icanhazip.com`
ssh -i /root/.ssh/id_rsa -p2xxxx root@1x.xx.xx.xx "ssh -p12222 -o StrictHostKeyChecking=no root@$pIP 'date'| tee /root/log.txt"

Here 1x.xx.xx.xx is the remote VPS address and 2xxxx is the remote VPS's SSH port.

# chmod 777 /overlay/remote.sh
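
Before wiring it into cron, run the script once by hand; if the keys are set up correctly, it prints the current date fetched back through the public IP:

/overlay/remote.sh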

OpenWRT crontab

Edit the crontab as follows:

# crontab -e
@hourly /overlay/remote.sh
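
Whether the hourly job actually fires can be confirmed in the system log (assuming crond logs at its default level):

logread | grep crond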

Conclusion

With this in place, every hour the OpenWRT router makes the remote VPS connect back to the port exposed on its own public IP address, which ensures the public IP never gets reclaimed.

WorkingTipsOnx11spice

Installation Steps

Install some necessary packages for building:

$ sudo apt-get install -y build-essential autoconf xutils-dev libtool libgtk-3-dev libspice-server-dev
$ apt-cache search xcb | awk '{print $1}' | xargs -I % sudo apt-get install -y %
$ git clone https://gitlab.freedesktop.org/spice/x11spice.git && cd x11spice    # fetch the x11spice source first (repository URL assumed)
$ ./autogen.sh
$ ./configure --prefix=/usr
$ make && sudo make install

Configuration

Copy the configuration file into the system configuration folder:

$ sudo cp -r /usr/etc/xdg/x11spice /etc/xdg/
$ sudo vim /etc/xdg/x11spice/x11spice.conf 
...
listen=5900
disable-ticketing=true
allow-control=true
hide=true
display=:0
...

Log in to the X session and run:

$ x11spice
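
If x11spice should come up automatically with every X session, an XDG autostart entry is one option (a sketch using the standard autostart mechanism; not part of the original setup):

$ mkdir -p ~/.config/autostart
$ cat > ~/.config/autostart/x11spice.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=x11spice
Exec=x11spice
EOF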

View via:

remote-viewer spice://192.168.xx.xx:5900 --spice-debug

xserver-xspice

Install via:

# apt install xserver-xspice ubuntu-desktop

Start an Xspice virtual display via:

$ sudo Xspice --password 123456 :0

In another session:

$ DISPLAY=:0 gnome-session
OR
$ DISPLAY=:0 mate-session

View via:

remote-viewer spice://192.168.xx.xx:5900 --spice-debug

RunAVDInCentOS76Docker

Steps:

# docker pull centos:7.6.1810

Mount the ISO and map it into a new container instance; first change the container's repository configuration so that only the offline repository is used:

dash@lucky:~$ sudo mount CentOS-7-x86_64-DVD-1810.iso  /mnt
mount: /mnt: WARNING: device write-protected, mounted read-only.
dash@lucky:~$ sudo docker run -it --name buildavd  -v /mnt:/mnt centos:7.6.1810 /bin/bash
[root@3c49cf47c327 /]# ls /mnt
CentOS_BuildTag  EULA  LiveOS    RPM-GPG-KEY-CentOS-7          TRANS.TBL  isolinux
EFI              GPL   Packages  RPM-GPG-KEY-CentOS-Testing-7  images     repodata
[root@3c49cf47c327 /]# mkdir /etc/yum.repos.d/back
[root@3c49cf47c327 /]# mv /etc/yum.repos.d/* /etc/yum.repos.d/back/
mv: cannot move '/etc/yum.repos.d/back' to a subdirectory of itself, '/etc/yum.repos.d/back/back'

Copy the local repository definition file into the container:

# vim local.repo
[LocalRepo]
name=LocalRepository
baseurl=file:///mnt
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
# sudo docker cp local.repo buildavd:/etc/yum.repos.d/
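
Back inside the container, confirm (an optional sanity check, not part of the original steps) that only the local repository is active:

[root@3c49cf47c327 /]# yum clean all
[root@3c49cf47c327 /]# yum repolist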

Enter the container and install the necessary packages:

[root@3c49cf47c327 /]#  yum groupinstall "X Window System" -y
[root@3c49cf47c327 /]# yum install -y gnome-terminal net-tools java-1.8.0-openjdk tmux celt051 librdkafka
[root@3c49cf47c327 /]# yum -y install vim sudo wget which net-tools bzip2 numpy mailcap firefox
[root@3c49cf47c327 /]# yum -y install xorg-x11-fonts* xulrunner
[root@3c49cf47c327 /]# yum -y groups install "Fonts"

Install icewm using the EPEL repository:

[root@3c49cf47c327 /]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@3c49cf47c327 /]# yum install -y icewm
[root@3c49cf47c327 /]# yum -y install nss_wrapper gettext
[root@3c49cf47c327 /]# yum erase -y *power* *screensaver*
[root@3c49cf47c327 /]# mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/back/

Install the openssh-related packages:

[root@3c49cf47c327 /]# yum install -y openssh-server openssh-clients

Download the tigervnc packages from SourceForge (https://sourceforge.net/projects/tigervnc/files/stable/1.10.1/el7/RPMS/). Version 1.10.1 is chosen here so that parameters can be passed through directly; tigervnc changed significantly from 1.11 onwards and those versions are not recommended:

[root@3c49cf47c327 tigervnc110]# ls
tigervnc-1.10.1-4.el7.x86_64.rpm        tigervnc-license-1.10.1-4.el7.noarch.rpm  tigervnc-server-minimal-1.10.1-4.el7.x86_64.rpm
tigervnc-icons-1.10.1-4.el7.noarch.rpm  tigervnc-server-1.10.1-4.el7.x86_64.rpm
[root@3c49cf47c327 tigervnc110]# yum install *.rpm

Install the noVNC-related packages:

[root@3c49cf47c327 ]# NO_VNC_HOME=/headless/noVNC
[root@3c49cf47c327 ]# mkdir -p $NO_VNC_HOME/utils/websockify
[root@3c49cf47c327 ]# wget -qO- https://github.com/novnc/noVNC/archive/v1.0.0.tar.gz | tar xz --strip 1 -C $NO_VNC_HOME
[root@3c49cf47c327 ]# ls /headless/noVNC/
LICENSE.txt  README.md  app  core  docs  karma.conf.js  package.json  po  tests  utils  vendor  vnc.html  vnc_lite.html
[root@3c49cf47c327 ]# wget -qO- https://github.com/novnc/websockify/archive/v0.6.1.tar.gz | tar xz --strip 1 -C $NO_VNC_HOME/utils/websockify
[root@3c49cf47c327 ]# wget -qO- http://209.141.35.192/v0.6.1.tar.gz | tar xz --strip 1 -C $NO_VNC_HOME/utils/websockify
[root@3c49cf47c327 ]# chmod +x -v $NO_VNC_HOME/utils/*.sh
mode of '/headless/noVNC/utils/launch.sh' retained as 0775 (rwxrwxr-x)
[root@3c49cf47c327 ]# ln -s $NO_VNC_HOME/vnc_lite.html $NO_VNC_HOME/index.html
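
For a quick manual test of the noVNC setup (a sketch; in the final image the /dockerstartup scripts take care of this), launch.sh can proxy a local VNC server to a browser-reachable port:

[root@3c49cf47c327 ]# $NO_VNC_HOME/utils/launch.sh --vnc localhost:5901 --listen 6901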

(Optional) Replace the /usr/local/ directory; this is done because the libraries our framework depends on are all placed in this directory:

[root@3c49cf47c327 ]# mv /usr/local/ /usr/local.back

On the host:

sudo docker cp /workspace/local/ buildavd:/usr/

Then go back into the container and check:

[root@3c49cf47c327 ]# du -hs /usr/local
486M	/usr/local
[root@3c49cf47c327 ]# ls /usr/local
bin  etc  games  include  lib  lib64  libexec  sbin  share  src

On the host, commit our changes to the container, producing an intermediate artifact:

$ sudo docker commit buildavd runavd:latest
sha256:ad3c78f39bce2e8ff7492c09cacde9c9f8f5041de6878e92f4423b3d1ba943d4
$ sudo docker images | grep runavd
runavd                           latest                                       ad3c78f39bce   28 seconds ago   2.25GB

Clone the repository and build the final artifact:

# git clone  https://github.com/purplepalmdash/runavd.git
# cd runavd
# sudo docker build -t runemu .
# sudo docker images | grep runemu
runemu                           latest                                       1b7b0f283bf0   About a minute ago   2.25GB

Map the AVD image directory into the container and run it:

$ sudo docker run -d --privileged -p 5903:5901 -p 6903:6901 -e VNC_PW=xxxxxx  --user 0  -v /home/dash/Code/android-9:/home/avd runemu:latest
$ sudo docker ps | grep 5903
4fcd1fd2bccc   runemu:latest           "/dockerstartup/vnc_…"   7 minutes ago       Up 7 minutes       0.0.0.0:5903->5901/tcp, :::5903->5901/tcp, 0.0.0.0:6903->6901/tcp, :::6903->6901/tcp   elated_curie
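
Given the port mappings above, the VNC session inside the container should be reachable from the host, e.g. with a VNC client on port 5903 (or noVNC in a browser on port 6903):

$ vncviewer <host-ip>:5903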

Migration/Running

Export the image and transfer it to another machine:

# sudo docker save -o runemu.tar runemu:latest
# scp ./runemu.tar xxx@xxx.xxx.xx.xxx:~
# scp -r /home/dash/Code/android-9/ xxx@xxx.xxx.xxx.xxx:~
On the destination machine:
$ sudo docker load<runemu.tar

At runtime icewm fails, so we need to switch to another lightweight desktop:

[root@3c49cf47c327 ~]# mv /etc/yum.repos.d/back/epel.repo /etc/yum.repos.d
[root@3c49cf47c327 ~]# yum -y -x gnome-keyring --skip-broken groups install "Xfce"
[root@3c49cf47c327 ~]# mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/back/

Commit and rebuild:

$ sudo docker commit buildavd runavdxfce:latest
$ sudo docker build -f Dockerfile.centos.xfce.vnc -t runemuxfce4:latest .

After replacing the image with the xfce4 one, it can be accessed.

Run the image:

$ docker run -d --privileged -p 15901:5901 -p 16901:6901 -e VNC_PW=yiersansi --user 0 -v /root/android-9:/home/avd runemuxfce4:latest

The window looks like this:

/images/2022_03_02_16_41_17_855x639.jpg

In the opened terminal, start the instance and enable network access:

$ cd /home/avd
$ source env_setup.sh
$ android create avd --name test_liutao_9 --target android-28 --abi x86_64 --device "Nexus 4" --skin 720x1280
$ LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib emulator -avd  test_liutao_9  -verbose -show-kernel -no-snapshot -no-window -cores 4 -memory 4096 -writable-system  -partition-size 65536 -port 5654 -gpu swiftshader_indirect -qemu -cpu host -vnc :50
$ vncviewer localhost:5950
$ adb shell
ip route add default via 192.168.232.1 dev wlan0
ip rule add pref 2 from all lookup default
ip rule add from all lookup main pref 30000
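
With the routes in place, connectivity from inside the AVD can be sanity-checked by pinging the gateway used above:

ping -c 3 192.168.232.1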

Result:

/images/2022_03_02_16_44_32_1170x957.jpg

Android10Redroid

AOSP Preparation

Prepare the 10.0.0_r33 aosp source via:

repo init -u https://mirrors.tuna.tsinghua.edu.cn/git/AOSP/platform/manifest -b android-10.0.0_r33
repo sync -j8

Build via:

# vim build/target/product/AndroidProducts.mk
.....
COMMON_LUNCH_CHOICES := \
    aosp_arm64-eng \
    aosp_arm-eng \
    aosp_x86_64-eng \
    aosp_x86-eng \
    sdk_phone_x86_64-userdebug \
# source build/envsetup.sh
# lunch sdk_phone_x86_64-userdebug
# m -j128

We can then use the emulator to run the Android 10 VM.

Kernel Preparation

Sync the 4.14.112 kernel via:

git clone https://android.googlesource.com/kernel/goldfish.git
cd goldfish/
git checkout -b android-goldfish-4.14-gchips remotes/origin/android-goldfish-4.14-gchips
vim security/selinux/include/classmap.h 
vim scripts/selinux/mdp/mdp.c 
vim scripts/selinux/genheaders/genheaders.c 
cp arch/x86/configs/x86_64_ranchu_defconfig  arch/x86/configs/x86_64_emu_defconfig
export PATH=$PATH:/root/Code/android10_redroid/prebuilts/gcc/linux-x86/x86/x86_64-linux-android-4.9/bin
export ARCH=x86_64
export CROSS_COMPILE=x86_64-linux-android-
export REAL_CROSS_COMPILE=x86_64-linux-android-
/root/Code/android10_redroid/prebuilts/qemu-kernel/build-kernel.sh --arch=x86_64
cp /tmp/kernel-qemu/x86_64-4.14.112/kernel-qemu  ~/

Kernel Customization

Customize via:

# cd goldfish
# make x86_64_emu_defconfig
# make menuconfig

This brings up the kernel configuration menu; make the changes there, save the result, and use it to replace the x86_64_emu_defconfig configuration file (see the sketch after the list of changes below).

/images/2022_02_15_14_19_00_973x628.jpg

Detailed changes:

General setup -> POSIX Message Queues
General setup -> Control Group support -> PIDs controller
General setup -> Control Group support -> Device controller
General setup -> Control Group support -> CPU controller -> Group scheduling for SCHED_OTHER
General setup -> Control Group support -> CPU controller -> CPU bandwidth provisioning for FAIR_GROUP_SCHED
General setup -> Control Group support -> CPU controller -> Group scheduling for SCHED_RR/FIFO
General setup -> Control Group support -> Perf controller
General setup -> Namespaces support -> User namespace
General setup -> Namespaces support -> PID namespace
Networking support -> Networking options -> Network packet filtering framework (Netfilter) -> Bridged IP/ARP packets filtering
Networking support -> Networking options -> Network packet filtering framework (Netfilter) -> IP virtual server support
Networking support -> Networking options -> Network packet filtering framework (Netfilter) -> Core Netfilter configuration -> "addrtype" address type match support
Networking support -> Networking options -> Network packet filtering framework (Netfilter) -> Core Netfilter configuration -> "control group" match support
File Systems -> Overlay filesystem support
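
After enabling these options in menuconfig, the resulting configuration can be written back over the defconfig, for example (one common way; not necessarily how it was saved originally):

# make savedefconfig
# cp defconfig arch/x86/configs/x86_64_emu_defconfig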

However, binderfs support is missing in 4.14.112, so we either have to switch to another kernel version or backport it.

binderfs was backported to kernel 4.14.112; the detailed steps remain to be written.

Refer to:

https://github.com/purplepalmdash/binderfs_backport.git

Start the emulator via:

# emulator -show-kernel -kernel /root/kernel-qemu -no-snapshot-load -selinux disabled

Replace the kernel in the AOSP prebuilt kernel directory:

cd /root/Code/android10_redroid/prebuilts/qemu-kernel/x86_64
cp -r 4.14/ 4.14.back
cp /root/kernel-qemu 4.14/kernel-qemu2 

binderfs enable

The aosp source code needs to be modified to enable binderfs.
Modify the rootdir init.rc as follows; the aim is to mount binderfs and symlink the binder devices:

# vim ./system/core/rootdir/init.rc
    mount configfs none /config nodev noexec nosuid
    chmod 0770 /config/sdcardfs
    chown system package_info /config/sdcardfs

+    # Mount binderfs
+    mkdir /dev/binderfs
+    mount binder binder /dev/binderfs stats=global
+    chmod 0755 /dev/binderfs
+ 
+    # Mount fusectl
+    mount fusectl none /sys/fs/fuse/connections
+ 
+    symlink /dev/binderfs/binder /dev/binder                                            
+    symlink /dev/binderfs/hwbinder /dev/hwbinder
+    symlink /dev/binderfs/vndbinder /dev/vndbinder
+ 
+    chmod 0666 /dev/binderfs/hwbinder
+    chmod 0666 /dev/binderfs/binder
+    chmod 0666 /dev/binderfs/vndbinder

Recompile the aosp source code to get the newly generated image.
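
Once the rebuilt image boots, the binderfs mount and the symlinked device nodes can be verified from adb (a quick sanity check):

adb shell mount | grep binder
adb shell ls -l /dev/binderfs /dev/binder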

Docker Integration

Download the docker binary files and extract them into the prebuilts folder:

$ wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.8.tgz
// Switch to aosp source tree
$ cd prebuilts
$ tar xzvf ~/docker-20.10.8.tgz -C .

Add the docker binary files into system.img, placing them under /system/bin so that we can use them directly:

$ vim  ./build/make/target/board/generic_x86_64/device.mk
// At the end of the file
PRODUCT_COPY_FILES += \
    prebuilts/docker/containerd:system/bin/containerd \
    prebuilts/docker/containerd-shim:system/bin/containerd-shim \
    prebuilts/docker/containerd-shim-runc-v2:system/bin/containerd-shim-runc-v2 \
    prebuilts/docker/ctr:system/bin/ctr \
    prebuilts/docker/docker:system/bin/docker \
    prebuilts/docker/dockerd:system/bin/dockerd \
    prebuilts/docker/docker-init:system/bin/docker-init \
    prebuilts/docker/docker-proxy:system/bin/docker-proxy \
    prebuilts/docker/runc:system/bin/runc \
$ vim build/target/product/sdk_phone_x86_64.mk
// At the end of the file
PRODUCT_ARTIFACT_PATH_REQUIREMENT_ALLOWED_LIST := \
    system/bin/containerd \
    system/bin/containerd-shim \
    system/bin/containerd-shim-runc-v2 \
    system/bin/ctr \
    system/bin/docker \
    system/bin/dockerd \
    system/bin/docker-init \
    system/bin/docker-proxy \
    system/bin/runc \

Change the sepolicy for creating the docker runtime:

$ vim system/sepolicy/prebuilts/api/29.0/private/file_contexts

// Add the /var, /run and /system/etc/docker definitions under the # Symlinks section
# Symlinks
/bin                u:object_r:rootfs:s0
/bugreports         u:object_r:rootfs:s0
/charger            u:object_r:rootfs:s0
/d                  u:object_r:rootfs:s0
/etc                u:object_r:rootfs:s0
/sdcard             u:object_r:rootfs:s0
/var                u:object_r:rootfs:s0
/run                u:object_r:rootfs:s0
/system/etc/docker                u:object_r:system_file:s0

$ vim system/sepolicy/private/file_contexts
 /sdcard             u:object_r:rootfs:s0
 /var             u:object_r:rootfs:s0
 /run             u:object_r:rootfs:s0
 /system/etc/docker             u:object_r:system_file:s0
 
   # SELinux policy files

$ vim system/core/rootdir/Android.mk

     ln -sf /system/etc $(TARGET_ROOT_OUT)/etc; \
     ln -sf /data/var $(TARGET_ROOT_OUT)/var; \
     ln -sf /data/run $(TARGET_ROOT_OUT)/run; \
     ln -sf /data/user_de/0/com.android.shell/files/bugreports $(TARGET_ROOT_OUT)/bugreports; 


 # Since init.environ.rc is required for init and satisfies that requirement, we hijack it to create the symlink.
 LOCAL_POST_INSTALL_CMD += ; ln -sf /system/bin/init $(TARGET_ROOT_OUT)/init
 LOCAL_POST_INSTALL_CMD += ; ln -sf /data/docker $(TARGET_OUT)/etc/
 LOCAL_POST_INSTALL_CMD += ; ln -sf /data/resolv.conf $(TARGET_OUT)/etc/resolv.conf

Manually create the folders and build the image again:

$ mkdir -p out/target/product/generic_x86_64/data/run
$ mkdir -p out/target/product/generic_x86_64/data/var
$ mkdir -p out/target/product/generic_x86_64/data/docker
$ echo "nameserver 223.5.5.5" > out/target/product/generic_x86_64/data/resolv.conf
$ make userdataimage -j50

Restart the emulator; docker is now available for use.
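
A quick way to confirm the binaries made it into the image (an optional check):

$ adb shell ls -l /system/bin/docker /system/bin/dockerd /system/bin/runc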

Create emulator

Start the emulator via:

# sudo tunctl
# brctl addif virbr0 tap0
# ip link set dev tap0 up
# emulator -show-kernel -no-snapshot-load -selinux disabled  -qemu -cpu host -device virtio-net-pci,netdev=hn0,mac=52:55:00:d1:55:51   -netdev tap,id=hn0,ifname=tap0,script=no,downscript=no

The added eth1 has no IP address; use dhcpclient to obtain one from virbr0's DHCP server:

adb root
adb shell "dhcpclient -i eth1 &"

Check the IP address of eth1:

adb shell
generic_x86_64:/ # ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 52:55:00:d1:55:51  Driver virtio_net
          inet addr:192.168.122.124  Bcast:192.168.122.255  Mask:255.255.255.0 
          inet6 addr: fe80::5055:ff:fed1:5551/64 Scope: Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:27 errors:0 dropped:0 overruns:0 frame:0 
          TX packets:58 errors:0 dropped:0 overruns:0 carrier:0 
          collisions:0 txqueuelen:1000 
          RX bytes:2800 TX bytes:15341 

Set up /etc/resolv.conf and the cgroup filesystems, then start dockerd manually:

echo "nameserver 223.5.5.5">/etc/resolv.conf
mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
cd /sys/fs/cgroup/
mkdir -p cpu cpuacct blkio memory devices pids
mount -n -t cgroup -o cpu cgroup cpu
mount -n -t cgroup -o  cpuacct cgroup cpuacct
mount -n -t cgroup -o  blkio cgroup blkio
mount -n -t cgroup -o  memory cgroup memory
mount -n -t cgroup -o  devices cgroup devices
mount -n -t cgroup -o  pids cgroup pids

ip rule add from all lookup main pref 30000
dockerd --dns=223.5.5.5 --data-root=/data/var/ --ip=192.168.122.124 >/data/dockerd-logfile 2>&1 &
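
Once dockerd is up, verify it before launching any containers:

docker version
docker info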

Start the redroid instance via:

docker run -d --privileged -p 8888:5555 redroid/redroid:8.1.0-latest
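
The redroid instance exposes adb on the mapped port, so from the host it can be reached through the emulator's eth1 address obtained earlier (the address depends on the DHCP lease):

adb connect 192.168.122.124:8888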