Jan 15, 2024
Tips
crontab entry:
@reboot /usr/bin/execpipe.sh
execpipe.sh content:
$ cat /usr/bin/execpipe.sh
#!/bin/bash
# Read a command line from the named pipe, execute it, and capture all output.
while true; do eval "$(cat /mypipe)" &> /mypipeoutput.txt; done
# Variant without capturing output:
#while true; do eval "$(cat /mypipe)"; done
Create the pipe with mkfifo, then verify it exists:
$ mkfifo /mypipe
$ ls / | grep mypipe
mypipe
mypipeoutput.txt
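To use it, write a command line into the pipe and read back the captured output (a minimal sketch using the paths above):
$ echo 'uname -r' > /mypipe
$ cat /mypipeoutput.txt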
Kernel Building (VB)
Build the kernel via:
apt install -y git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison rsync kmod cpio unzip
unzip kernel-config.zip
cp kernel-config/x86_64_defconfig .config
./scripts/config --disable DEBUG_INFO
echo "" | make ARCH=x86_64 olddefconfig
make ARCH=x86_64 -j16 LOCALVERSION=-lts2021-iotg bindeb-pkg
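The resulting .deb packages land in the parent directory; a minimal sketch of installing them (names assumed from the bindeb-pkg output above):
sudo dpkg -i ../linux-image-*lts2021-iotg*.deb ../linux-headers-*lts2021-iotg*.deb
sudo reboot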
Kernel patch backport:
drivers/gpu/drm/i915/display/intel_fbc.c, line 1029: does not match tc's implementation
/drivers/gpu/drm/i915# vim i915_driver.c: differs significantly
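To locate the divergence, comparing the file history against a reference tree can help (a sketch; <upstream-ref> is a placeholder for the reference kernel tag):
git log --oneline -5 -- drivers/gpu/drm/i915/display/intel_fbc.c   # recent changes to the FBC code
git diff <upstream-ref> -- drivers/gpu/drm/i915/i915_driver.c      # diff against the reference tree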
Dec 13, 2023
Environment
A Vagrant VM; from this VM we can reach the internet (through the GFW).
Steps
Get the source code and prepare the code changes:
# apt install -y git build-essential
# git clone https://github.com/intel/xpumanager.git
# cd xpumanager
# vim ./core/src/vgpu/precheck.cpp +72
    } else if (cmdRes.output().find("vmx") != std::string::npos) {
        /*
         * VMX flag detected by lscpu
         */
        result->vmxFlag = true;
    } else {
        // Force the precheck to pass even when lscpu exposes no vmx flag (VM case):
        result->vmxFlag = true;
        //result->vmxFlag = false;
        //std::string msg = "No VMX flag, Please ensure Intel VT enabled in BIOS";
        //strncpy(result->vmxMessage, msg.c_str(), msg.size() + 1);
    }
# vim builder/Dockerfile.builder-ubuntu
make -j && make install && \
---->
make -j8 && make install && \
Build the builder Docker image and save the iidfile (BASE_VERSION is 22.04, matching the iid filename below):
$ export BASE_VERSION=22.04
$ sudo docker build --build-arg BASE_VERSION=$BASE_VERSION --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy --iidfile /tmp/xpum_builder_ubuntu_$BASE_VERSION.iid -f builder/Dockerfile.builder-ubuntu .
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 8beea6fc722f 9 minutes ago 1.92GB
ubuntu 22.04 b6548eacb063 11 days ago 77.8MB
$ cp /tmp/xpum_builder_ubuntu_22.04.iid ~
Use this Docker image to build the deb file (xpumanager):
sudo docker run --rm \
-v $PWD:$PWD \
-u $UID \
-e CCACHE_DIR=$PWD/.ccache \
-e CCACHE_BASEDIR=$PWD \
$(cat /tmp/xpum_builder_ubuntu_$BASE_VERSION.iid) $PWD/build.sh
cp /home/vagrant/xpumanager/build/xpumanager_1.2.25_20231213.023315.251edc28~u22.04_amd64.deb ~
Build the xpu-smi:
rm -fr build
sudo docker run --rm \
-v $PWD:$PWD \
-u $UID \
-e CCACHE_DIR=$PWD/.ccache \
-e CCACHE_BASEDIR=$PWD \
$(cat /tmp/xpum_builder_ubuntu_$BASE_VERSION.iid) $PWD/build.sh -DDAEMONLESS=ON
cp /home/vagrant/xpumanager/build/xpu-smi_1.2.25_20231213.023748.251edc28~u22.04_amd64.deb ~
Verification
Install:
# sudo apt-get install -y ./xpu-smi_1.2.25_20231213.023748.251edc28~u22.04_amd64.deb
# ls /dev/dri/
by-path card0 card1 renderD128 renderD129
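A quick functional check (assuming the discovery subcommand documented in the xpumanager README):
# xpu-smi discovery   # should list the Intel GPU devices backing /dev/dri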
Dec 5, 2023
1. Partition Preparation
Shrink the home partition (XFS cannot be shrunk in place, so back up, recreate smaller, and restore):
tar -czvf /root/home.tgz -C /home .
tar -tvf /root/home.tgz
umount /dev/mapper/centos-home
lvremove /dev/mapper/centos-home
lvcreate -L 40GB -n home centos
mkfs.xfs /dev/centos/home
mount /dev/mapper/centos-home /home
lvextend -r -l +100%FREE /dev/mapper/centos-root
tar -xzvf /root/home.tgz -C /home
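A quick check of the resized layout:
df -h / /home   # root grown into the freed space, home back at 40G
lvs centos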
Create the gpt partition on nvme disk:
gdisk /dev/nvme0n1
gdisk /dev/nvme1n1
gdisk /dev/nvme2n1
gdisk /dev/nvme3n1
o     Enter: new empty GUID partition table (GPT)
y     Enter: confirm your decision
n     Enter: new partition
      Enter: accept the default partition number
      Enter: accept the default first sector
      Enter: accept the default last sector
fd00  Enter: Linux RAID partition type
w     Enter: write changes to disk
y     Enter: confirm your decision
Create the raid1 using mdadm:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1p1 /dev/nvme1n1p1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme2n1p1 /dev/nvme3n1p1
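The initial mirror resync can be watched via /proc/mdstat:
watch cat /proc/mdstat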
Examine the resulting layout via:
lsblk
......
nvme2n1 259:2 0 7T 0 disk
└─nvme2n1p1 259:5 0 7T 0 part
└─md1 9:1 0 7T 0 raid1
nvme1n1 259:1 0 7T 0 disk
└─nvme1n1p1 259:4 0 7T 0 part
└─md0 9:0 0 7T 0 raid1
......
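To have the arrays assembled on boot, persist the scan output (a sketch; on CentOS the file is /etc/mdadm.conf and the initramfs is dracut-based):
mdadm --detail --scan >> /etc/mdadm.conf
dracut -f   # rebuild the initramfs so the arrays come up early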
Create the pv:
pvcreate /dev/md0
pvcreate /dev/md1
Create the vg:
# vgcreate vmvolume /dev/md0
Volume group "vmvolume" successfully created
# vgextend vmvolume /dev/md1
Volume group "vmvolume" successfully extended
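A quick check that both PVs are in the VG:
# vgs vmvolume
# pvs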
2. Create the
Dec 1, 2023
By default, there is no desktop experience:
The cause was picking the wrong installation option; the Desktop Experience edition must be selected.
Nov 29, 2023
Modification steps:
1. Qemu modification
Rebuild QEMU with RBD support:
sudo apt install -y librbd-dev
cd qemu-7.1.0/
./configure --target-list=x86_64-softmmu --enable-debug --disable-docs --disable-virglrenderer --prefix=/usr --enable-virtfs --enable-libusb --disable-debug-tcg --audio-drv-list=pa,alsa --enable-spice --enable-rbd
make -j8 && make install
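A quick check that the rebuilt binary picked up RBD support (assumes qemu-img lists its supported formats in its help output):
$ qemu-img --help | grep -o rbd   # 'rbd' should appear among the supported formats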
Install ceph-common:
$ apt-cache policy ceph-common
ceph-common:
Installed: (none)
Candidate: 17.2.6-0ubuntu0.22.04.2
Version table:
17.2.6-0ubuntu0.22.04.2 500
500 http://mirrors.ustc.edu.cn/ubuntu jammy-updates/main amd64 Packages
17.2.5-0ubuntu0.22.04.3 500
500 http://mirrors.ustc.edu.cn/ubuntu jammy-security/main amd64 Packages
17.1.0-0ubuntu3 500
500 http://mirrors.ustc.edu.cn/ubuntu jammy/main amd64 Packages
$ sudo apt install -y ceph-common
Define virsh’s secret:
$ cat secret.txt
<secret ephemeral='no' private='no'>
<uuid>xxxxxxxxxxxxxxxxxxxxxxxx</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
$ virsh secret-define secret.txt
Secret xxxxxxxxxxxxxxxxxx created
Set the secret:
# virsh
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # secret-set-value de12b241-6087-47e1-9d4f-c8baf5ff4968 aofuowguoewogowaugowogwe
error: Passing secret value as command-line argument is insecure!
Secret value set
virsh # secret-get-value de12b241-6087-47e1-9d4f-c8baf5ff4968
aofuowguoewogowaugowogwe
Get the rbd for the vdi instance:
$ sudo virsh dumpxml privatedefaulttenant-default_ebc6fef5-3447-4788-8980-f780ad336399 | grep rbd
<source protocol='rbd' name='ceph-vm-pool-1/volume-d1cb2b42-fe78-41ef-beb3-a6fc12d6e761'>
Get the info on the local machine via the rbd command:
# rbd --id cinder info ceph-vm-pool-1/volume-d1cb2b42-fe78-41ef-beb3-a6fc12d6e761
2023-11-30T10:36:43.676+0800 7f0e170e64c0 -1 asok(0x55b5039f6090) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/guests/ceph-client.cinder.3827.94235938232352.asok': (2) No such file or directory
rbd image 'volume-d1cb2b42-fe78-41ef-beb3-a6fc12d6e761':
size 80 GiB in 20480 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 827dd9323d6248
block_name_prefix: rbd_data.827dd9323d6248
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, operations
op_features: clone-child
flags:
create_timestamp: Wed Nov 8 10:09:37 2023
access_timestamp: Thu Nov 30 10:35:31 2023
modify_timestamp: Thu Nov 30 10:36:33 2023
parent: ceph-vm-pool-1/volume-d8a795bd-48c4-425e-82ba-a22a244778ad@snap-ed0919b6-a4a7-4e6d-a447-dbe89f27bbb8
overlap: 80 GiB
Mount the remote rbd to local:
# rbd --id cinder map ceph-vm-pool-1/volume-d1cb2b42-fe78-41ef-beb3-a6fc12d6e761 -p testpool
2023-11-30T10:37:26.447+0800 7fcdaaef84c0 -1 asok(0x5576928dc090) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/guests/ceph-client.cinder.3846.93967753279520.asok': (2) No such file or directory
/dev/rbd0
# lsblk | grep rbd0
rbd0 252:0 0 80G 0 disk
├─rbd0p1 252:1 0 500M 0 part
└─rbd0p2 252:2 0 79.5G 0 part
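The mapped device can then be mounted and released when done (a sketch; /mnt is an arbitrary mountpoint):
# mount /dev/rbd0p2 /mnt
# umount /mnt
# rbd unmap /dev/rbd0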
ZC driver
Version 2019, error:
UHD 730 could be usable, but ZC copy is not ready.
Win2022 way:
Get the rbd(vdi node):
# sudo virsh dumpxml privatedefaulttenant-default_bbdd760f-6709-4a0b-ad96-4903c6ea1e2e | grep rbd
<source protocol='rbd' name='ceph-vm-pool-1/volume-b492ad15-e646-4cf3-9fc1-103d30756151'>
Map the rbd (IDV node):
# rbd --id cinder map ceph-vm-pool-1/volume-b492ad15-e646-4cf3-9fc1-103d30756151 -p testpool2
/dev/rbd2
Start the machine.