ReadingTipsOnLinuxSystemArchitecture

On This Book

Borrowed from the lab, written by a Japanese author.
/images/2017_07_31_09_20_33_1054x739.jpg

This article records reading tips on Chapter 2 (libvirtd related).

Network Configuration

Edit the network definition XML files:

$ cat internal.xml
<network>
	<name>internal</name>
	<bridge name='virbr8'/>
</network>
$ cat external.xml
<network>
	<name>external</name>
	<bridge name='virbr9'/>
</network>
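
Without a <forward> element these definitions create isolated networks with no outside connectivity. If a guest network should also reach the outside through host NAT, a <forward> element and an address range can be added (a sketch; the file name, bridge name, and subnet are arbitrary examples, not part of the book's setup):

$ cat natnet.xml
<network>
	<name>natnet</name>
	<forward mode='nat'/>
	<bridge name='virbr10'/>
	<ip address='192.168.100.1' netmask='255.255.255.0'>
		<dhcp>
			<range start='192.168.100.2' end='192.168.100.254'/>
		</dhcp>
	</ip>
</network>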

Define the networks with the following commands (shown for external; repeat the same steps for internal):

$ sudo virsh net-define external.xml
Network external defined from external.xml

$ sudo virsh net-autostart external
Network external marked as autostarted

$ sudo virsh net-start external
Network external started

$ sudo virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     no            yes
 external             active     yes           yes
 internal             active     yes           yes
 kubernetes           active     yes           yes
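
To connect a guest to one of these networks, virsh can attach an interface to a domain (a sketch; the domain name vm1 is a placeholder):

$ sudo virsh attach-interface --domain vm1 --type network \
      --source internal --model virtio --config --live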

View the configuration in virt-manager:

/images/2017_07_31_09_34_07_495x298.jpg

CorrectHugoDate

Problem

/images/2017_07_27_16_08_05_1361x260.jpg

Reason

This is because Hugo was upgraded to version 0.25.1, and the new version no longer fills in a default date value in newly created markdown files.

Solution

Edit themes/hyde-a/archetypes/default.md and add the following front matter:

+++
title = ""
date = "{{ .Date }}"
description = ""
keywords = ["Linux"]
categories = ["Technology"]
+++

Now regenerate your content from this archetype, and your blog will behave correctly again.
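
A quick check (a sketch; the section name post is an assumption about this site's layout, and the date value shown is illustrative):

$ hugo new post/test-date.md
$ head -n 3 content/post/test-date.md
+++
title = ""
date = "2017-07-27T16:08:05+08:00"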

CreateRHEL6CustomizedISO

Purpose

Automatically install a complete system from an ISO, according to a user-defined configuration.

Materials

  • RHEL 6.6 installation DVD, x86_64 edition.
  • A custom kickstart file, used to customize partitions/users/passwords/packages, etc.
  • A Red Hat family OS for building the ISO image (verified on RedHat 7.3).

Steps

  1. Create the directories used to mount the installation DVD and to hold the customized image; /media/bootiso mounts the installation DVD, /media/bootisoks holds the customized content:
$ mkdir -p /media/bootiso /media/bootisoks
  2. Copy the installation content into the customized directory:
$ sudo mount -t iso9660 -o loop DVD.iso /media/bootiso
$ cp -r /media/bootiso/* /media/bootisoks/
$ chmod -R u+w /media/bootisoks
$ cp /media/bootiso/.discinfo /media/bootisoks
$ cp /media/bootiso/.discinfo /media/bootisoks/isolinux
  3. Copy your customized kickstart file into the isolinux directory, as ks.cfg, the name the boot option below refers to:
$ cp YourKickStartFile.ks /media/bootisoks/isolinux/ks.cfg
  4. Configure the boot option by adding the ks= parameter to the kernel append line:
$ vim /media/bootisoks/isolinux/isolinux.cfg
  append initrd=initrd.img ks=cdrom:/isolinux/ks.cfg
  5. Create the ISO file from within the customized directory:
# cd /media/bootisoks
# mkisofs -r -T -V "MYISONAME" -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table -o ../boot.iso .

After these five steps we have our customized ISO; installing from it yields the system exactly as we configured it.
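
Before using the image, a quick sanity check that the kickstart file and boot option made it in (a sketch; /mnt/check is an arbitrary mount point):

$ sudo mkdir -p /mnt/check
$ sudo mount -o loop boot.iso /mnt/check
$ ls /mnt/check/isolinux/ks.cfg
$ grep 'ks=cdrom' /mnt/check/isolinux/isolinux.cfg
$ sudo umount /mnt/check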

Sample kickstart file:

It installs a basic desktop, Chinese support, etc.

#platform=x86, AMD64, or Intel EM64T
#version=DEVEL
# Firewall configuration
firewall --disabled
# Install OS instead of upgrade
install
# Use network installation
#url --url="http://10.7.7.2/CentOS"
cdrom
# Root password
rootpw --iscrypted xxxxxxxxxxxxxxxxxxxx
# System authorization information
auth  --useshadow  --passalgo=sha512
# Use graphical install
graphical
firstboot --disable
# System keyboard
keyboard us
# System language
lang en_US
# SELinux configuration
selinux --disabled
# Installation logging level
logging --level=info

# System timezone
timezone  Asia/Hong_Kong
# System bootloader configuration
bootloader --location=mbr
# Clear the Master Boot Record
zerombr
# Partition clearing information
clearpart --all  
# Disk partitioning information
part swap --fstype="swap" --size=1024
part / --asprimary --fstype="ext4" --grow --size=1

%packages
@basic-desktop
@chinese-support
@internet-browser
@x11
-ibus-table-cangjie
-ibus-table-erbi
-ibus-table-wubi

%end
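
If the pykickstart package is installed on the build host, the kickstart file can be syntax-checked before the ISO is built (a sketch):

$ ksvalidator YourKickStartFile.ks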

The encrypted string after rootpw can be generated with the following command:

$ openssl passwd -1 "Your_Password_Here"
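
Note that openssl passwd -1 produces an MD5-crypt string (prefix $1$); rootpw --iscrypted still accepts it, because the prefix identifies the algorithm. To generate a SHA-512 hash consistent with --passalgo=sha512, one option is glibc's crypt via Python (a sketch; the salt shown is an arbitrary example):

$ python -c 'import crypt; print(crypt.crypt("Your_Password_Here", "$6$randomsalt"))'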

An Alternative Way to Build ks.cfg

On every machine that has finished installation you can find the /root/ana…ks file; editing this file yields our customized kickstart configuration.

WorkingTipsOnOracleDatabaseDeployment

Items

Working notes on one-click deployment of an Oracle database.

Ansible-Playbooks

Based on:

https://github.com/nkadbi/oracle-db-12c-vagrant-ansible

Refers to:

https://blog.dbi-services.com/vagrant-up-get-your-oracle-infrastructure-up-and-running/
https://blog.dbi-services.com/part2-vagrant-up-get-your-oracle-infrastructure-up-an-running/

Username/Password:
System: oracle/welcome1
Database: sys/oracle
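
With these credentials, a connection from any machine with an Oracle client can use an EZConnect string (a sketch; the host name dbserver1, port 1521, and service name db1.private are assumptions based on this setup):

$ sqlplus sys/oracle@//dbserver1:1521/db1.private as sysdba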

Linux Client

Yaourt provides a Linux client for accessing Oracle DB:

https://aur.archlinux.org/packages/oracle-sqldeveloper/

Installation method: download the required file from oracle.com first, then build the AUR package.

Create Database

Create the database using the following commands:

[vagrant@dbserver1 ~]$ su - oracle
Password: 
-bash-4.2$ sqlplus "/as sysdba"

You now have a SQL> prompt, where you can enter SQL statements:

Run `1_create_user_and_tablespace_dash.sql`
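
A minimal sketch of what a create-user-and-tablespace script typically does (all names here are hypothetical, not the actual contents of 1_create_user_and_tablespace_dash.sql):

SQL> CREATE TABLESPACE dash_data DATAFILE 'dash_data01.dbf' SIZE 100M AUTOEXTEND ON;
SQL> CREATE USER dash IDENTIFIED BY dash_pwd DEFAULT TABLESPACE dash_data;
SQL> GRANT CONNECT, RESOURCE TO dash;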

Create tables/metadata

The first step creates the database user; you can then log in to the database as this user via SQL Developer and execute the scripts:

/images/2017_07_23_13_47_44_745x382.jpg

Execute the following scripts:

msp_XXX.sql (two scripts in total)

/images/2017_07_23_13_50_23_506x466.jpg

Tip for getting the DB service configuration:

 SQL> show parameter service_names;
.....
service_names			     string	 db1.private

Your client configuration should then use the same service_names value as shown.
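
For example, an EZConnect string carrying this service name (a sketch; host and port are assumptions):

$ sqlplus your_user/your_password@//dbserver1:1521/db1.private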

DockerNetworkPerformanceTest

Test Environment

Docker's two commonly used network modes are bridge and host. To test the performance of these two modes, we set up the following test environment:

  • 192.192.192.89 - server running the Docker containers, CentOS 7.3.
  • 192.192.192.88 - server running the client, CentOS 7.3.

The physical network between the two servers is 10 Gigabit Ethernet.

We use iperf (http://software.es.net/iperf/) to measure network bandwidth; iperf is very simple, yet has enough features to test the basic performance metrics. On the server side we need a Docker container running iperf3. The Docker version is 17.05-ce.
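
On CentOS 7, iperf3 for the bare-metal tests can be installed from the EPEL repository (a sketch):

[root@192.192.192.88 ~]# yum install -y epel-release
[root@192.192.192.88 ~]# yum install -y iperf3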

The tests cover the following scenarios:

  • Raw network throughput
  • Cross-host physical machine to Docker (host mode)
  • Cross-host physical machine to Docker (bridge mode)
  • Same-host physical machine to Docker (bridge mode)
  • Same-host Docker to Docker (bridge mode, external)
  • Same-host Docker to Docker (bridge mode, internal)

Raw Network Throughput

First, we need the raw network throughput with no Docker containers running. On the server, run:

[root@192.192.192.89 ~]# iperf3 -s -p 5202

On the client, run:

[root@192.192.192.88 ~]# iperf3 -c 192.192.192.89 -p 5202

After the test runs, both the server and the client print diagnostics. For now we only care about the throughput:

-----------------------------------------------------------
Server listening on 5202
-----------------------------------------------------------
Accepted connection from 192.192.192.88, port 39682
[  5] local 192.192.192.89 port 5202 connected to 192.192.192.88 port 39684
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  1.05 GBytes  9.05 Gbits/sec                  
[  5]   1.00-2.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  5]   2.00-3.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  5]   3.00-4.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  5]   4.00-5.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  5]   5.00-6.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  5]   6.00-7.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  5]   7.00-8.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  5]   8.00-9.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  5]   9.00-10.00  sec  1.10 GBytes  9.41 Gbits/sec                  
[  5]  10.00-10.04  sec  42.0 MBytes  9.39 Gbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.04  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-10.04  sec  11.0 GBytes  9.38 Gbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5202
-----------------------------------------------------------

As you can see, on this 10GbE network the physical-to-physical bandwidth saturates the switch's 10 Gigabit limit.

Cross-Host Physical Machine to Docker (host mode)

Running iperf3 in Docker is quite simple; hub.docker.com hosts plenty of images with iperf3 packaged. We use:

# sudo docker pull networkstatic/iperf3

On the server, start a Docker instance listening on port 5203:

[root@192.192.192.89 ~]# docker run --net=host  -it --rm --name=iperf3-server networkstatic/iperf3 -s -p 5203

On the client, change the port accordingly; the result is:

[root@192.192.192.88 ~]# iperf3 -c 192.192.192.89 -p 5203
Connecting to host 192.192.192.89, port 5203
[  4] local 192.192.192.88 port 40326 connected to 192.192.192.89 port 5203
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.10 GBytes  9.43 Gbits/sec   20    625 KBytes       
[  4]   1.00-2.00   sec  1.10 GBytes  9.42 Gbits/sec    0    625 KBytes       
//.....

The result is almost identical: 9.40 Gbits/sec.

Cross-Host Physical Machine to Docker (bridge mode)

Switch to port 5204; this time the network mode is bridge:

[root@192.192.192.89 ~]# docker run  -it --rm -p 5204:5204 --name=iperf3-server networkstatic/iperf3 -s -p 5204

On the client, change nothing except the remote port, now 5204:

[root@192.192.192.88 ~]# iperf3 -c 192.192.192.89 -p 5204
Connecting to host 192.192.192.89, port 5204
[  4] local 192.192.192.88 port 53936 connected to 192.192.192.89 port 5204
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.10 GBytes  9.44 Gbits/sec   15    669 KBytes       
[  4]   1.00-2.00   sec  1.10 GBytes  9.42 Gbits/sec    0    682 KBytes       
[  4]   2.00-3.00   sec  1.10 GBytes  9.42 Gbits/sec    0    691 KBytes 

As you can see, in bridge mode the throughput also saturates the 10GbE limit.

Same-Host Physical Machine to Docker (bridge mode)

Run iperf on the same host (192.192.192.89) to test throughput to Docker, keeping the container listening on port 5204 unchanged:

[root@192.192.192.89 ~]# iperf3 -c 192.192.192.89 -p 5204
Connecting to host 192.192.192.89, port 5204
[  4] local 192.192.192.89 port 46720 connected to 192.192.192.89 port 5204
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  2.77 GBytes  23.8 Gbits/sec    0    274 KBytes       
[  4]   1.00-2.00   sec  2.75 GBytes  23.6 Gbits/sec    0    274 KBytes       
[  4]   2.00-3.00   sec  2.75 GBytes  23.6 Gbits/sec    0    277 KBytes       

In this mode the throughput is roughly 2.5 times the 10GbE figure, because traffic from the host to the Docker instance travels over the local loopback (lo) interface rather than the physical NIC.

Same-Host Docker to Docker (bridge mode, external)

Keep the container listening on port 5204 and start a new container to run iperf in.
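
One way to start the client container is to pass the client flags straight to the image entrypoint (a sketch; the server invocation above already relies on the entrypoint being iperf3):

[root@192.192.192.89 ~]# docker run -it --rm --name=iperf3-client networkstatic/iperf3 -c 192.192.192.89 -p 5204

The client-side output: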

# iperf3 -c 192.192.192.89 -p 5204
Connecting to host 192.192.192.89, port 5204
[  4] local 172.17.0.5 port 59574 connected to 192.192.192.89 port 5204
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.03 GBytes  8.84 Gbits/sec   91    228 KBytes       
[  4]   1.00-2.00   sec   955 MBytes  8.01 Gbits/sec    0    229 KBytes       
[  4]   2.00-3.00   sec  1.02 GBytes  8.80 Gbits/sec    0    230 KBytes       
[  4]   3.00-4.00   sec   767 MBytes  6.43 Gbits/sec    0    230 KBytes       
[  4]   4.00-5.00   sec   851 MBytes  7.14 Gbits/sec    0    230 KBytes       

As you can see, when the physical host's IP address and port are used directly, the traffic also has to take the bridge-mode path through the host's networking stack, and the effective throughput drops noticeably.

Same-Host Docker to Docker (bridge mode, internal)

To avoid the performance penalty of going through the host's IP address, run the iperf test directly against the container's internal bridge address.
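
The server container's bridge address (172.17.0.4 here) can be looked up with docker inspect (a sketch, assuming the server container is named iperf3-server):

[root@192.192.192.89 ~]# docker inspect -f '{{ .NetworkSettings.IPAddress }}' iperf3-server
172.17.0.4

Then point iperf3 at that address: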

# iperf3 -c 172.17.0.4 -p 5204
Accepted connection from 172.17.0.5, port 39516
[  5] local 172.17.0.4 port 5204 connected to 172.17.0.5 port 39518
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  2.39 GBytes  20.5 Gbits/sec                  
[  5]   1.00-2.00   sec  2.50 GBytes  21.5 Gbits/sec                  
[  5]   2.00-3.00   sec  2.50 GBytes  21.5 Gbits/sec 

As you can see, in this mode the communication between containers still stays on the local software path, reaching more than twice the 10GbE switch's peak speed.

Conclusion

The comparative results of all tests are summarized below:

| Scenario                                      | Throughput      | Relative |
|-----------------------------------------------|-----------------|----------|
| Physical to physical                          | 9.40 Gbit/sec   | 100%     |
| Cross-host physical to Docker (host mode)     | 9.40 Gbit/sec   | 100%     |
| Cross-host physical to Docker (bridge mode)   | 9.40 Gbit/sec   | 100%     |
| Same-host physical to Docker (bridge mode)    | 23.8 Gbit/sec   | 250%     |
| Same-host Docker to Docker (bridge, external) | 8.00 Gbit/sec   | 85%      |
| Same-host Docker to Docker (bridge, internal) | 21.00 Gbit/sec  | 220%     |

Conclusion: in a Docker environment, network throughput is close to local network I/O, with essentially no performance loss. One thing demands special attention: Docker instances on the same host must avoid talking to each other via the physical host's IP/port, since that causes a noticeable performance drop.