Aug 27, 2014
Network Configuration
Add persistent interface-naming rules for udev:
linux-:~ # cd /etc/udev/rules.d/
linux-:/etc/udev/rules.d # cat 10-network.rules
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:22:22:22:22", NAME="eth1"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:22:22:22:22", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:22:22:22:22", NAME="eth2"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:22:22:22:22", NAME="eth3"
Add the following network configuration:
linux-:/etc/sysconfig # cd network/
linux-:/etc/sysconfig/network # cat ifcfg-eth0
# Static configuration for eth0
IPADDR=1xx.xx.xx.xxx
NETMASK=255.255.255.0
BROADCAST=1xx.xx.xx.xxx
STARTMODE=auto
USERCONTROL=yes
FIREWALL=no
Default Gateway Setup:
linux-:~ # cat /etc/sysconfig/network/ifroute-br0
# Destination Dummy/Gateway Netmask Device
#
default xxx.xxx.xx.1 - br0
Restart the machine and eth0 will come up with the fixed IP address.
Add the default route so traffic can reach outside networks.
vim routes
default xxx.xxx.xx.1 - eth0
or manually:
route add default gw xxx.xxx.xx.1 eth0
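To confirm the gateway is actually in place, the routing table can be checked; this is only a sanity check, not part of the original steps:
ip route show default    # should print: default via xxx.xxx.xx.1 dev eth0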
Bridge Networking Configuration:
linux-:/etc/sysconfig/network # cat ifcfg-br0
STARTMODE='auto'
BOOTPROTO='static'
DNS1=xxx.xxx.xx.1
GATEWAY=xxx.xxx.xx.1
IPADDR=xxx.xxx.xx.59
NETMASK=255.255.255.0
ONBOOT=yes
USERCONTROL='no'
BRIDGE='yes'
BRIDGE_PORTS='eth0'
BRIDGE_AGEINGTIME='20'
BRIDGE_FORWARDDELAY='0'
BRIDGE_HELLOTIME='2'
BRIDGE_MAXAGE='20'
BRIDGE_PATHCOSTS='3'
BRIDGE_STP='on'
linux-:/etc/sysconfig/network # cat ifcfg-eth0
BOOTPROTO='static'
STARTMODE='ifplugd'
IFPLUGD_PRIORITY='1'
NAME='1000 MBit Ethernet'
USERCTL=no
The default route should then be changed to:
route add default gw xxx.xxx.xx.1 br0
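To verify the bridge setup, the following commands can be used (assuming the bridge-utils package provides brctl):
brctl show br0      # eth0 should be listed as a bridge port
ip addr show br0    # the static IP should now sit on br0, not on eth0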
LXC Install
Set up an SSH tunnel for outside access:
ssh -C -L 127.0.0.1:9001:1xx.xxx.2xx:2xxxx root@1xx.xx.1xx.xxx
Use zypper to install the LXC packages:
zypper search lxc
# zypper install lxc lxc-devel yast2-lxc libvirt-daemon-lxc libvirt-daemon-driver-lxc
# lxc-checkconfig
# ls /usr/share/lxc/templates/
Yes, there is an openSUSE-specific template.
Create the first container:
lxc-create -n ixxxxxSimulator1 -t /usr/share/lxc/templates/lxc-opensuse
List the installed containers:
linux-:~ # lxc-ls
xxxxhxxSimulator1
The default username and password are both root.
Start the LXC container via:
lxc-start -n xxxxxSimulator1
LXC Configuration
No network yet, so add it.
First remove the desktop kernel and switch to the default kernel:
# uname -a
Linux XXXXSimulator1 3.11.6-4-desktop
# zypper in kernel-default
# zypper rm kernel-desktop
# uname -a
Linux linux- 3.11.10-21-default
Install the Xfce pattern as the desktop for the default VNC server:
zypper in -t pattern xfce
Change the container's default network configuration:
$ vim /var/lib/lxc/XXXXSimulator1/config
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
Now when you start the container, eth0 will be brought up automatically.
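As a quick check (not part of the original steps), lxc-attach can run a command inside the running container without logging in:
lxc-attach -n XXXXSimulator1 -- ip addr show eth0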
LXC Expand
Duplicate an LXC machine.
Strangely, calling lxc-clone directly fails silently:
# lxc-clone -o XXXXSimulator1 -n XXXXSimulator2
linux-:~ # echo $?
1
Running it explicitly through bash works instead:
# bash /usr/bin/lxc-clone -o XXXXSimulator1 -n XXXXSimulator2
Tweaking configuration
Copying rootfs...
Updating rootfs...
'XXXXSimulator2' created
linux-:~ # lxc-ls
XXXXSimulator1 XXXXSimulator2
Change XXXXSimulator2's configuration:
$ vim /var/lib/lxc/XXXXSimulator2/config
lxc.network.ipv4 = xxx.xxx.xx.67
Now start the two LXC via:
# lxc-start -n XXXXSimulator2
# lxc-start -n XXXXSimulator1
[Trusty@Linux01 ~]$ ping -c 1 xxx.xxx.xx.66
PING xxx.xxx.xx.66 (xxx.xxx.xx.66) 56(84) bytes of data.
64 bytes from xxx.xxx.xx.66: icmp_seq=1 ttl=64 time=1.50 ms
--- xxx.xxx.xx.66 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 1ms
rtt min/avg/max/mdev = 1.506/1.506/1.506/0.000 ms
[Trusty@Linux01 ~]$ ping -c 1 xxx.xxx.xx.67
PING xxx.xxx.xx.67 (xxx.xxx.xx.67) 56(84) bytes of data.
64 bytes from xxx.xxx.xx.67: icmp_seq=1 ttl=64 time=1.56 ms
--- xxx.xxx.xx.67 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 1ms
rtt min/avg/max/mdev = 1.567/1.567/1.567/0.000 ms
Later we could configure LXC to start the container at boot, or otherwise control its behavior.
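As a rough sketch of the boot-time option, assuming LXC 1.0 or later, the container's config supports auto-start keys (the delay value here is only illustrative):
# /var/lib/lxc/XXXXSimulator1/config
lxc.start.auto = 1     # start this container when LXC autostart runs at boot
lxc.start.delay = 5    # wait 5 seconds before starting it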
LXC Computer Configuration
IP address and default gateway configuration:
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.ipv4 = xxx.xxx.xx.59/24
lxc.network.ipv4.gateway = xxx.xxx.xx.1
Then start the container; the IP address and netmask will already be configured.
LXC Destroy
Destroy the unused second container:
linux-:~ # lxc-ls
XXXXSimulator1 XXXXSimulator2
linux-:~ # lxc-destroy -n XXXXSimulator2
linux-:~ # lxc-ls
XXXXSimulator1
Aug 13, 2014
Add Arch
Step 1: add the hosts to /etc/hosts:
# Puppet
10.0.0.88 puppet
10.0.0.89 client
Step 2: edit /etc/puppet/puppet.conf:
[agent]
# add the server entry
server = puppet
Restart and enable the puppet service:
systemctl restart puppet.service
systemctl enable puppet.service
Step 3: on 10.0.0.88 (the server), sign the Arch Linux client's SSL certificate request:
root@Ubuntu88:/home/Trusty# puppet cert --list
Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations
(at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1095:in `block in issue_deprecations')
"XXXyyy.lan" (SHA256) 8XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
root@Ubuntu88:/home/Trusty# puppet cert --sign XXXyyy.lan
Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations
(at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1095:in `block in issue_deprecations')
Notice: Signed certificate request for XXXyyy.lan
Notice: Removing file Puppet::SSL::CertificateRequest XXXyyy.lan at '/var/lib/puppet/ssl/ca/requests/XXXyyy.lan.pem'
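If the client does not pick up the new catalog right away, a manual run on the Arch Linux client can force it; this extra step is an assumption, not part of the original notes:
[root@TrustyArch ~]# puppet agent --test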
Now check /tmp on the client; the test file from the previous chapter will appear there.
For testing purposes, disable the Arch Linux client's puppet service:
[root@TrustyArch tmp]# systemctl stop puppet.service
[root@TrustyArch tmp]# systemctl disable puppet.service
Removed symlink /etc/systemd/system/multi-user.target.wants/puppet.service.
Install package
Add the following lines to /etc/puppet/manifests/site.pp on 10.0.0.88:
package { 'xplot':
  ensure => installed,
}
Then restart the puppetmaster; on 10.0.0.89, the package xplot will be installed.
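To confirm on the client, the local package database can be queried after the next agent run (assuming pacman is the package manager on the client):
pacman -Q xplot    # prints the installed version once the agent has applied the manifest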
Aug 13, 2014
After upgrading the Linux kernel, VirtualBox could not automatically load its kernel modules. The following are the steps for finding and solving the problem.
Locating the Problem
I could manually modprobe the VirtualBox driver, but it failed to load at boot, so I first checked systemd's output.
Check the status of systemd's module-load service:
# systemctl status systemd-modules-load.service
● systemd-modules-load.service - Load Kernel Modules
Loaded: loaded (/usr/lib/systemd/system/systemd-modules-load.service; static)
Active: failed (Result: exit-code) since Wed 2014-08-13 13:32:34 CST; 1h 24min ago
Docs: man:systemd-modules-load.service(8)
man:modules-load.d(5)
Process: 142 ExecStart=/usr/lib/systemd/systemd-modules-load (code=exited, status=1/FAILURE)
Main PID: 142 (code=exited, status=1/FAILURE)
Aug 13 13:32:34 XXXyyy systemd[1]: systemd-modules-load.service: main process exited, code=exited, status=1/FAILURE
Aug 13 13:32:34 XXXyyy systemd[1]: Failed to start Load Kernel Modules.
Aug 13 13:32:34 XXXyyy systemd[1]: Unit systemd-modules-load.service entered failed state.
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
Manually restart this service and check the status:
[root@XXXyyy Trusty]# systemctl restart systemd-modules-load
Job for systemd-modules-load.service failed. See 'systemctl status systemd-modules-load.service' and 'journalctl -xn' for details.
[root@XXXyyy Trusty]# systemctl status systemd-modules-load
● systemd-modules-load.service - Load Kernel Modules
Loaded: loaded (/usr/lib/systemd/system/systemd-modules-load.service; static)
Active: failed (Result: exit-code) since Wed 2014-08-13 14:59:31 CST; 13s ago
Docs: man:systemd-modules-load.service(8)
man:modules-load.d(5)
Process: 21364 ExecStart=/usr/lib/systemd/systemd-modules-load (code=exited, status=1/FAILURE)
Main PID: 21364 (code=exited, status=1/FAILURE)
Aug 13 14:59:31 XXXyyy systemd[1]: systemd-modules-load.service: main process exited, code=exited, status=1/FAILURE
Aug 13 14:59:31 XXXyyy systemd[1]: Failed to start Load Kernel Modules.
Aug 13 14:59:31 XXXyyy systemd[1]: Unit systemd-modules-load.service entered failed state.
Use journalctl to view the PID’s logs:
[root@XXXyyy Trusty]# journalctl -b _PID=21364
-- Logs begin at Thu 2014-07-31 16:07:13 CST, end at Wed 2014-08-13 15:00:02 CST. --
Aug 13 14:59:31 XXXyyy systemd-modules-load[21364]: Failed to find module 'vboxdrv vboxnetflt vboxnetadp'
[root@XXXyyy Trusty]# systemctl status dkms.service
● dkms.service - Dynamic Kernel Modules System
Loaded: loaded (/usr/lib/systemd/system/dkms.service; disabled)
Active: inactive (dead)
So the problem is quite clear: the modules could not be found, and the dkms service is not enabled.
Solving the Problem
First enable the dkms.service via:
# systemctl enable dkms.service
Created symlink from /etc/systemd/system/multi-user.target.wants/dkms.service to /usr/lib/systemd/system/dkms.service.
Install vboxhost-hook, which adds the hook to compile the VirtualBox host modules:
# yaourt -S vboxhost-hook
Add vboxhost to the HOOKS line in /etc/mkinitcpio.conf:
HOOKS="base udev autodetect modconf block filesystems keyboard fsck vboxhost"
Now rebuild the initramfs via:
mkinitcpio -p linux
The DKMS packages should also be installed:
pacman -S linux-headers virtualbox-host-dkms virtualbox-guest-dkms
dkms install vboxhost/4.3.14
dkms install vboxguest/4.3.14
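As a quick check (not part of the original notes), dkms status shows whether the modules were built for the running kernel:
dkms status    # vboxhost and vboxguest should be listed as installed for the current kernel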
Finally I found the reason:
# cat /etc/modules-load.d/virtualbox.conf
# Load virtualbox related modules at startup
vboxdrv
vboxnetflt
vboxnetadp
But previously I had put all three module names on one line, which is why systemd-modules-load reported 'Failed to find module' for the whole string; modules-load.d expects one module name per line.
Reboot and examine the result via lsmod | grep vbox.
Aug 11, 2014
Installation
Install via:
sudo pacman -S puppet
Configure this machine as the server.
Install a New Virtual Machine
Install a new Ubuntu 14.04 machine using QEMU, and install Puppet in it.
Generate the mirror list configuration for Ubuntu.
Finally, use the vdi file for the Ubuntu machine.
Install puppet in Ubuntu14.04:
http://linuxconfig.org/puppet-installation-on-linux-ubuntu-14-04-trusty-tahr
Make Ubuntu use a fixed IP.
$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 10.0.0.88
netmask 255.255.255.0
gateway 10.0.0.1
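After editing, the interface can be bounced to apply the static address; a minimal sketch assuming the standard ifupdown tools on Ubuntu 14.04:
sudo ifdown eth0 && sudo ifup eth0
ip addr show eth0    # should now show 10.0.0.88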
Then copy the virtual disk and change the UUID of the disk:
$ VBoxManage internalcommands sethduuid ./Ubuntu.vdi
UUID changed to: d1xxxxxxxxxxxxxxxxxxxxxxxxxx
Be sure to change the second machine's IP address to 10.0.0.89.
Now we have two machines.
Set up passwordless SSH login:
$ cat ~/.ssh/id_rsa.pub| ssh Trusty@10.0.0.88 'cat>>~/.ssh/authorized_keys'
$ cat ~/.ssh/id_rsa.pub| ssh Trusty@10.0.0.89 'cat>>~/.ssh/authorized_keys'
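If no key pair exists yet, one can be generated first; this preparatory step is assumed, not shown in the original notes:
ssh-keygen -t rsa    # accept the defaults to create ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub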
Server and Client
Install the server side on 10.0.0.88:
sudo apt-get install puppetmaster
On 10.0.0.88, edit /etc/hosts:
10.0.0.89 client
On 10.0.0.89, edit /etc/hosts:
10.0.0.88 puppet
On the client (10.0.0.89), start the puppet service:
$ sudo service puppet start
* Starting puppet agent
puppet not configured to start, please edit /etc/default/puppet to enable
[ OK ]
On the server (10.0.0.88), set up the puppetmaster service.
Add the following line to /etc/puppet/puppet.conf:
dns_alt_names = puppet, master.local, puppet.terokarvinen.com
Then remove all of the generated SSL certificates:
rm -rf /var/lib/puppet/ssl
Now restart the puppetmaster via:
# service puppetmaster restart
Change the hostname of 10.0.0.88 to Ubuntu88 and of 10.0.0.89 to Ubuntu89, then restart both machines.
Now change Ubuntu88's configuration.
On 10.0.0.88 (server), add the following line to /etc/puppet/puppet.conf under the [master] heading:
dns_alt_names = puppet, master.local, puppet.terokarvinen.com
On 10.0.0.89 (client), change the following line in /etc/default/puppet:
START=yes
Then in /etc/puppet/puppet.conf, add the following:
[agent]
server = puppet
Restart the puppet service.
Now on the server, use the following commands to list and sign the client's certificate request:
Trusty@Ubuntu88:~$ sudo puppet cert --list
sudo: unable to resolve host Ubuntu88
Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations
(at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1095:in `block in issue_deprecations')
"ubuntu89" (SHA256) xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Trusty@Ubuntu88:~$ sudo puppet cert --sign ubuntu89
sudo: unable to resolve host Ubuntu88
Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations
(at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1095:in `block in issue_deprecations')
Notice: Signed certificate request for ubuntu89
Notice: Removing file Puppet::SSL::CertificateRequest ubuntu89 at '/var/lib/puppet/ssl/ca/requests/ubuntu89.pem'
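The 'unable to resolve host Ubuntu88' warning from sudo is unrelated to Puppet; it usually just means the new hostname is missing from /etc/hosts. A possible fix, using Ubuntu's usual 127.0.1.1 convention for the local hostname:
127.0.1.1   Ubuntu88    # append to /etc/hosts on Ubuntu88 (and likewise Ubuntu89 on the client)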
Create the Site Manifest and a Module
Go to /etc/puppet and run the following command:
Trusty@Ubuntu88:/etc/puppet$ sudo mkdir -p manifests/ modules/helloworld/manifests
Edit the following file:
Trusty@Ubuntu88:/etc/puppet$ cat manifests/site.pp
include helloworld
Create the file:
Trusty@Ubuntu88:/etc/puppet$ sudo cat modules/helloworld/manifests/init.pp
class helloworld {
file { '/tmp/helloFromMaster':
content => "See you at http://terokarvinen.com/tag/puppet\n"
}
}
Now on the client, restart the puppet service:
Trusty@Ubuntu89:~$ sudo service puppet restart
sudo: unable to resolve host Ubuntu89
[sudo] password for Trusty:
no talloc stackframe at ../source3/param/loadparm.c:4864, leaking memory
* Restarting puppet agent [ OK ]
Trusty@Ubuntu89:~$ cat /tmp/helloFromMaster
See you at http://terokarvinen.com/tag/puppet
Now the basic configuration is OK.
Aug 10, 2014
This chapter introduces PostgreSQL.
Installation on Arch Linux
Install it via:
# pacman -S postgresql
Then do the initial configuration:
# su - postgres
[postgres]$ initdb --locale en_US.UTF-8 -E UTF8 -D '/var/lib/postgres/data'
# systemctl enable postgresql
Create the user:
[root@Arch_Container ~]# su - postgres
[postgres@Arch_Container ~]$ createuser --interactive
Enter name of role to add: root
Shall the new role be a superuser? (y/n) y
[postgres@Arch_Container ~]$ exit
logout
Start the service (systemctl start postgresql), then run a test command to verify that PostgreSQL works:
# createdb myDatabaseName
Create the Database
Create a database named book:
# createdb book
Install the cube extension into the book database:
[root@Arch_Container postgresql]# psql -d book
psql (9.3.5)
Type "help" for help.
book=# CREATE EXTENSION cube;
CREATE EXTENSION
book=# \q
[root@Arch_Container postgresql]# psql book -c "SELECT '1'::cube;"
cube
------
(1)
(1 row)
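As a quick illustration of what the cube extension provides (an example query, not from the original notes), cube_distance computes the Euclidean distance between two points:
book=# SELECT cube_distance('(0,0)', '(3,4)');
 cube_distance
---------------
             5
(1 row)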