Tips on 5050RGB(2)

After the previous 5050 RGB module broke, I bought another one from Taobao. It was cheap, less than 4 RMB. The seller's documentation reads:

/images/2016_03_17_13_08_03_532x306.jpg

It looked simple enough: feed it 5V from the Arduino board and drive it with three GPIO pins. After wiring it up, however, I could not get it to light at all.

Debugging with a multimeter showed that the common pin should be connected to GND, and each of the three control pins must be driven high to light the corresponding color.

So the correct wiring and example code are as follows:

// V (common) -> GND   R -> pin 9   B -> pin 10   G -> pin 11
#define LEDR 9
#define LEDB 10
#define LEDG 11

void clear()
{
  analogWrite(LEDR,0);
  analogWrite(LEDB,0);
  analogWrite(LEDG,0);  //off
}


void setup()
{
  pinMode(LEDG,OUTPUT);
  pinMode(LEDB,OUTPUT);
  pinMode(LEDR,OUTPUT);
}

void loop()
{
  clear();
  // Red
  analogWrite(LEDR,255);
  delay(1000);
  clear();
  // Green
  analogWrite(LEDG,255);
  delay(1000);
  clear();
  // Blue
  analogWrite(LEDB,255);
  delay(1000);
  // White
  analogWrite(LEDB,255);
  analogWrite(LEDG,255);
  analogWrite(LEDR,255);
  delay(2000);
}

Compile and upload the code to the Arduino board, and the LED will cycle through red -> green -> blue -> white.

Playing with Vagrant-libvirt

The end goal is to use Vagrant to automate the deployment of CloudStack + XenServer.

Creating a CentOS 6.7 box

Use packer to build a CentOS 6.7 amd64 image. The image is VirtualBox-compatible by default, so use the vagrant-mutate plugin to convert it into a box that libvirt can use:

# vagrant mutate centos-6.7.virtualbox.box libvirt
# cd /root/.vagrant.d/boxes
# ls
centos-6.7.virtualbox  trusty64
# mv centos-6.7.virtualbox/ centos6764
# vagrant box list
centos6764 (libvirt, 0)
trusty64   (libvirt, 0)

Create a Vagrantfile and start an experimental virtual machine:

# pwd
/media/opensusue/dash/Code/Vagrant/CentOS2New
# ls
Vagrantfile  Vagrantfile~
# cat Vagrantfile
    # -*- mode: ruby -*-
    # vi: set ft=ruby :
    Vagrant.configure(2) do |config|
      # The most common configuration options are documented and commented below.
      # For a complete reference, please see the online documentation at
      # https://docs.vagrantup.com.
    
      config.vm.box = "centos6764"
      # vagrant issues #1673..fixes hang with configure_networks
      config.ssh.shell = "bash -c 'BASH_ENV=/etc/profile exec bash'"
      config.vm.provider :libvirt do |domain|
        domain.memory = 512
        domain.nested = true
      end
    
      config.vm.define :centosnew do |centosnew|
        centosnew.vm.network :private_network, :ip => "192.168.88.2"
      end
    
    end
# vagrant up

After vagrant up, a virtual machine named CentOS2New_centosnew is created; the naming convention is the current directory name plus the defined VM name.
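As a quick sanity check on the host (a sketch, assuming virsh is installed alongside libvirt), the new domain should show up under that name:

# virsh list --all

The output should include a running domain called CentOS2New_centosnew.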

Once the virtual machine has booted, check its status and log in:

# vagrant status
Current machine states:

centosnew                 running (libvirt)

The Libvirt domain is running. To stop this machine, you can run
`vagrant halt`. To destroy the machine, you can run `vagrant destroy`.
# vagrant ssh centosnew
Last login: Wed Mar 16 02:31:18 2016 from 192.168.121.1
[vagrant@localhost ~]$

We can check the network interfaces to confirm that the address we requested, 192.168.88.2, has been assigned.
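A minimal way to do that from inside the guest (the private interface typically shows up as eth1 on this box, but the device name may differ):

[vagrant@localhost ~]$ ip addr show | grep 192.168.88.2

If the address was applied, this prints an inet line for the private interface.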

More customization options

Nested virtualization

For reasons I have not identified, checking nested virtualization on CentOS 6 always reported problems, so the following verification was done on Ubuntu.

In the configuration above we set the nested option to true. Now log in and check whether nested virtualization is actually enabled:

vagrant@vagrant:~$ lsmod | grep kvm
kvm_intel             143590  0 
kvm                   452043  1 kvm_intel
vagrant@vagrant:~$ modinfo kvm_intel | grep nested
parm:           nested:bool
vagrant@vagrant:~$ cat /sys/module/kvm_intel/parameters/nested
N
vagrant@vagrant:~$ sudo modprobe -r kvm_intel
vagrant@vagrant:~$ sudo modprobe kvm_intel nested=1
vagrant@vagrant:~$ cat /sys/module/kvm_intel/parameters/nested
Y

After changing the nested option to false, the verification looks like this:

$ cat /sys/module/kvm_intel/parameters/nested
N

Note that inside the virtual machine you can still turn nesting on with modprobe kvm_intel nested=1.
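If you want the option to survive module reloads, the usual approach (a sketch; the file name kvm-nested.conf is arbitrary) is a modprobe.d entry:

$ echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
$ sudo modprobe -r kvm_intel && sudo modprobe kvm_intel
$ cat /sys/module/kvm_intel/parameters/nested
Y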

CPU Passthrough

Specify the parameter domain.cpu_mode = 'host-passthrough' in the libvirt provider block:

  config.vm.provider :libvirt do |domain|
    domain.memory = 512
    domain.nested = false
    domain.cpu_mode = 'host-passthrough'
  end

Without the option:

vagrant@vagrant:~$ cat /proc/cpuinfo  | grep -i "model name"
model name	: Intel Core i7 9xx (Nehalem Class Core i7)

With the option set:

[vagrant@localhost ~]$ cat /proc/cpuinfo | grep -i "model name"
model name	: Intel(R) Core(TM) i3 CPU         540  @ 3.07GHz

Specifying the hostname

A hostname is one of the prerequisites for installing CloudStack, and the VM's hostname can be specified in the Vagrantfile:

  config.vm.define "centosnew" do |centosnew|
    centosnew.vm.hostname = "centosnew.example.com"
  end

After the virtual machine boots, the result can be checked with hostname and hostname --fqdn.
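For example, with the definition above (a sketch; vagrant ssh -c runs a single command in the guest):

$ vagrant ssh centosnew -c "hostname --fqdn"
centosnew.example.com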

Snapshots

Snapshots of libvirt machines are provided by the sahara plugin:

$ vagrant plugin install sahara

When verifying a system you can enter Vagrant's sandbox mode and commit the changes only after the verification succeeds.
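The sandbox workflow with sahara looks roughly like this (a sketch based on sahara's standard subcommands; the VM name is the one defined above):

$ vagrant sandbox on centosnew        # snapshot the VM and enter sandbox mode
$ vagrant sandbox rollback centosnew  # throw the changes away, or
$ vagrant sandbox commit centosnew    # keep them
$ vagrant sandbox off centosnew       # leave sandbox mode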

Playing with ebuddy (4)

This sums up how I use ebuddy. Recently I added a new trick: using ebuddy as a notifier after a Bash command finishes. For example, when a compile job completes, ebuddy signals that the task is done.

$ some_long_task ; notifyebuddy

/bin/ebuddy

Create a file /bin/ebuddy with the following content:

#!/bin/bash
FILE=/tmp/ebuddy
while true
do
  if [ -f $FILE ]; then
    # The flag file exists: light up the ebuddy.
    echo 07 > /dev/udp/127.0.0.1/8888
  else
    # No flag file: clear the ebuddy's status.
    echo 17 > /dev/udp/127.0.0.1/8888
  fi
  sleep 3
done

In other words: if the file /tmp/ebuddy exists, the ebuddy's head lights up; otherwise its status is cleared.

notifyebuddy && clearebuddy

These two commands are aliases defined in .zshrc:

$ vim ~/.zshrc
# For Using ebuddy
alias notifyebuddy='touch /tmp/ebuddy'
alias clearebuddy='rm -f /tmp/ebuddy'

This way we can notify ebuddy of the completion event after running a command.
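For example, assuming the /bin/ebuddy loop above is already running in the background (make here stands in for any long-running task):

$ make; notifyebuddy   # ebuddy starts lighting up within a few seconds of make finishing
$ clearebuddy          # acknowledge the notification and turn it off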

Limitations

It cannot indicate the completion status of more than one command at a time.

Caching all RPM/DEB packages with squid

Automated deployment means installing systems over and over. Given the limited bandwidth at work, I need a proxy server that caches all RPM/DEB packages so that automated deployments can finish almost instantly.

The following examples run on Arch Linux.

Setting up Squid

A brief introduction to Squid:

Squid is a caching web proxy supporting HTTP, HTTPS, FTP, and more. It reduces bandwidth usage and improves response times by caching and reusing frequently requested web pages. Squid has extensive access controls and can also act as a server accelerator. It runs on Unix and Windows and is released under the GNU GPL.

Install squid:

$ sudo pacman -S squid

We need to configure squid to fit our environment. In my case there are three main changes:

  1. Move the squid cache directory to the /home partition.
  2. Increase the cache directory size to more than 30 GB.
  3. Increase the maximum cached object size so that large RPM/DEB packages can be cached.

To change the cache directory, find the following lines and add our custom cache_dir below them:

$ sudo vim /etc/squid/squid.conf
# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/cache/squid 100 16 256
cache_dir ufs /home/dash/squid 30000 16 256

Here 30000 is the cache size in megabytes (about 30 GB); squid will create 16 first-level subdirectories under the given directory, each containing up to 256 second-level subdirectories.

Add the following line at the end of the configuration file so that larger objects can be cached:

$ sudo vim /etc/squid/squid.conf
maximum_object_size 200 MB

Now create the cache directories:

$ squid -z
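If this succeeds, the cache directory should now contain the 16 first-level subdirectories configured above (a sketch of the expected layout):

$ ls /home/dash/squid
00  01  02  03  04  05  06  07  08  09  0A  0B  0C  0D  0E  0F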

Start the service, enable it at boot, and check the configuration:

$ sudo systemctl restart squid
$ sudo systemctl enable squid
$ sudo squid -k check

You can also verify that squid is listening with netstat -anp | grep 3128.

Using the squid proxy
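On the RPM side, point yum at the squid proxy. A minimal sketch, assuming the squid host is reachable as 192.168.0.121 (the same host used for apt-cacher-ng below) and listens on the default port 3128:

$ sudo vim /etc/yum.conf
proxy=http://192.168.0.121:3128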

apt-cacher

The proxy above only covers RPM packages; for DEB packages we use apt-cacher-ng. Install and configure it on Arch Linux:

$ yaourt apt-cacher
$ sudo vim /etc/apt-cacher-ng/acng.conf
CacheDir: /home/nomodify/apt-cacher
Port: 3142

Configuration on the apt client:

$ sudo vim /etc/apt/apt.conf.d/01proxy 
Acquire::http::Proxy "http://192.168.0.121:3142";

On the Arch Linux host, enable the service at boot:

$ sudo systemctl enable apt-cacher-ng

Docker way

Run an instance via:

# docker run --name apt-cacher-ng -d --restart=always --publish 3142:3142 \
    --volume /var1/aptcacher:/var/cache/apt-cacher-ng \
    sameersbn/apt-cacher-ng:latest

Then add a systemd service:

# vim /usr/lib/systemd/system/aptcache.service
[Unit]
Description=aptcache container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a apt-cacher-ng
ExecStop=/usr/bin/docker stop -t 2 apt-cacher-ng

[Install]
WantedBy=multi-user.target

Enable the service via:

$ sudo systemctl enable aptcache
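To confirm the cache is reachable from a client, fetching the report page that apt-cacher-ng serves by default is a quick check (a sketch, assuming default settings):

$ curl -s http://192.168.0.121:3142/acng-report.html | head -n 3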

Managing libvirt with Vagrant

Prerequisites

Vagrant 1.8.1 is used.

References:
http://linuxsimba.com/vagrant.html
http://linuxsimba.com/vagrant-libvirt-install/

Ubuntu setup

Because of the Great Firewall, the following commands are needed to install the vagrant-libvirt plugin:

$ sudo apt-get install -y libvirt-dev ruby-dev
$ gem sources -r https://rubygems.org/
$ gem sources -a http://mirrors.aliyun.com/rubygems/
$ gem install ruby-libvirt -v '0.6.0'
$ gem install vagrant-libvirt -v '0.0.32'
$ vagrant plugin install vagrant-libvirt
$ vagrant plugin list
vagrant-libvirt (0.0.32)
$ axel http://linuxsimba.com/vagrantbox/ubuntu-trusty.box
$ vagrant box add ./ubuntu-trusty.box --name "trusty64"

Arch Linux setup

Following the Arch Linux wiki, install the vagrant-libvirt plugin:

 # in case it's already installed
 vagrant plugin uninstall vagrant-libvirt
 
 # vagrant's copy of curl prevents the proper installation of ruby-libvirt
 sudo mv /opt/vagrant/embedded/lib/libcurl.so{,.backup}
 sudo mv /opt/vagrant/embedded/lib/libcurl.so.4{,.backup}
 sudo mv /opt/vagrant/embedded/lib/libcurl.so.4.4.0{,.backup}
 sudo mv /opt/vagrant/embedded/lib/pkgconfig/libcurl.pc{,.backup}
 
 CONFIGURE_ARGS="with-libvirt-include=/usr/include/libvirt with-libvirt-lib=/usr/lib" vagrant plugin install vagrant-libvirt
 
 # https://github.com/pradels/vagrant-libvirt/issues/541
 export PATH=/opt/vagrant/embedded/bin:$PATH
 export GEM_HOME=~/.vagrant.d/gems
 export GEM_PATH=$GEM_HOME:/opt/vagrant/embedded/gems
 gem uninstall ruby-libvirt
 gem install ruby-libvirt
 
 # put vagrant's copy of curl back
 sudo mv /opt/vagrant/embedded/lib/libcurl.so{.backup,}
 sudo mv /opt/vagrant/embedded/lib/libcurl.so.4{.backup,}
 sudo mv /opt/vagrant/embedded/lib/libcurl.so.4.4.0{.backup,}
 sudo mv /opt/vagrant/embedded/lib/pkgconfig/libcurl.pc{.backup,}
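After the curl libraries are restored, the plugin should show up just as in the Ubuntu setup (the version may differ):

$ vagrant plugin list
vagrant-libvirt (0.0.32)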

Importing the box

A box built with packer targets VirtualBox by default; we need a plugin to convert it into a box that libvirt can use:

#  vagrant plugin install vagrant-mutate
# vagrant mutate ubuntu-14.04.virtualbox.box libvirt
Extracting box file to a temporary directory.
Converting ubuntu-14.04.virtualbox from virtualbox to libvirt.
    (100.00/100%)
Cleaning up temporary files.
The box ubuntu-14.04.virtualbox (libvirt) is now ready to use.
# cd /root/.vagrant.d/boxes/
# mv ubuntu-14.04.virtualbox/ trusty64
# vagrant box list
trusty64 (libvirt, 0)

Checking installed boxes

The installed boxes can be listed with the following command:

$ vagrant box list
trusty64 	(libvirt, 0)
ubuntu1404	(virtualbox, 0)

Configuring the Vagrantfile

Here is an example:

# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|

  config.vm.box = "trusty64"
  # vagrant issues #1673..fixes hang with configure_networks
  config.ssh.shell = "bash -c 'BASH_ENV=/etc/profile exec bash'"
  config.vm.provider :libvirt do |domain|
    domain.memory = 256
    domain.nested = true
  end

  # Private network using virtual network switching
  config.vm.define :vm1 do |vm1|
    vm1.vm.network :private_network, :ip => "192.168.56.11"
  end

  config.vm.define :vm2 do |vm2|
    vm2.vm.network :private_network, :ip => "192.168.56.12"
  end

  # Private network. Point to Point between 2 Guest OS using a TCP tunnel
  # Guest 1
  #config.vm.define :test_vm1 do |test_vm1|
  #  test_vm1.vm.network :private_network,
  #    :libvirt__tunnel_type => 'server',
  #    # default is 127.0.0.1 if omitted
  #    # :libvirt__tunnel_ip => '127.0.0.1',
  #    :libvirt__tunnel_port => '11111'
  #end

  # Guest 2
  #config.vm.define :test_vm2 do |test_vm2|
  #  test_vm2.vm.network :private_network,
  #    :libvirt__tunnel_type => 'client',
  #    # default is 127.0.0.1 if omitted
  #    # :libvirt__tunnel_ip => '127.0.0.1',
  #    :libvirt__tunnel_port => '11111'
  #end


  # Public Network
  config.vm.define :vm1 do |vm1|
    vm1.vm.network :public_network,
      :dev => "virbr0",
      :mode => "bridge",
      :type => "bridge"
  end
end

Starting the virtual machine

# vagrant up --provider=libvirt

The following problem shows up at startup; the fix is:

$ vagrant up --provider=libvirt
....
Missing required arguments: libvirt_uri
.....
$ vagrant plugin install --plugin-version 0.0.3 fog-libvirt
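With fog-libvirt pinned to 0.0.3 (a sketch; the versions are the ones used above), the plugin list should now include it, and the same command should proceed past the libvirt_uri error:

$ vagrant plugin list
fog-libvirt (0.0.3)
vagrant-libvirt (0.0.32)
$ vagrant up --provider=libvirt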