Clarity and Angular CLI

Today we'll look at how to integrate Clarity into an Angular CLI project.

What is Clarity? The official introduction reads:

Project Clarity is an open source design system that brings together UX guidelines, an HTML/CSS framework and Angular 2 components. Clarity is for both designers and developers.

1. Prerequisites

This guide requires the Angular CLI to be installed globally. Install the latest version with:

$ npm install -g  @angular/cli@latest

2. Create a new project

The ng command should now be available. Use ng new to create a new Angular CLI project:

$ ng new myclarity

Press Enter at each prompt to accept the default options. When the command finishes we get a myclarity directory; change into it and run ng serve to see the standard Angular starter page.

$ cd myclarity
$ ng serve

/images/2020_11_09_11_50_01_739x487.jpg

3. Install the Clarity dependencies

To use Clarity we need to install the packages and their dependencies with npm:

$ npm install @clr/core @clr/icons @clr/angular @clr/ui @webcomponents/webcomponentsjs --save
$ npm install --save-dev clarity-ui
$ npm install --save-dev clarity-icons

4. Add scripts and styles

Add the following entries to the scripts and styles sections of angular.json:

"styles": [
      "node_modules/@clr/icons/clr-icons.min.css",
      "node_modules/@clr/ui/clr-ui.min.css",
      ... any other styles
],
"scripts": [
  ... any existing scripts
  "node_modules/@webcomponents/webcomponentsjs/custom-elements-es5-adapter.js",
  "node_modules/@webcomponents/webcomponentsjs/webcomponents-bundle.js",
  "node_modules/@clr/icons/clr-icons.min.js"
]

After the changes, angular.json should look like the screenshot below (there are four places to update in total, since the styles and scripts arrays appear under both the build and the test targets):

/images/2020_11_09_11_58_11_947x310.jpg

5. Add the Angular module

At this point the package dependencies are installed and configured, so we can move on to the Angular AppModule and set up the Clarity module. Once that is done we can use Clarity throughout the application.

Open src/app/app.module.ts and add the following imports at the top of the file:

import { NgModule } from '@angular/core';
import { BrowserModule } from "@angular/platform-browser";
import { BrowserAnimationsModule } from "@angular/platform-browser/animations";
import { ClarityModule } from "@clr/angular";
import { AppComponent } from './app.component';

This tells TypeScript to load the module from the @clr/angular package. To use it, add the following entries to the imports array of @NgModule:

  imports: [
    BrowserModule,
    BrowserAnimationsModule,
    ClarityModule
  ],

At this point the basic configuration is complete. Run ng serve and you should see the app up and running.

6. Using Clarity

6.1 Generate the UI module and components

Clarity is a fairly high-level framework that lets you build a complete user interface quickly. Let's set up some UI elements.

To scaffold the UI directory structure quickly we use the ng generate command, or its shorthand ng g:

# create the ui module
$ ng g m ui
CREATE src/app/ui/ui.module.ts (188 bytes)

# create the layout component
$  ng g c ui/layout -is -it  --skipTests=true
CREATE src/app/ui/layout/layout.component.ts (265 bytes)
UPDATE src/app/ui/ui.module.ts (264 bytes)

# create the header, sidebar and main view components
$ ng g c ui/layout/header -is -it --skipTests=true
CREATE src/app/ui/layout/header/header.component.ts (265 bytes)
UPDATE src/app/ui/ui.module.ts (349 bytes)
$ ng g c ui/layout/sidebar -is -it --skipTests=true
CREATE src/app/ui/layout/sidebar/sidebar.component.ts (268 bytes)
UPDATE src/app/ui/ui.module.ts (438 bytes)
$ ng g c ui/layout/main -is -it --skipTests=true
CREATE src/app/ui/layout/main/main.component.ts (259 bytes)
UPDATE src/app/ui/ui.module.ts (515 bytes)

The flags passed to ng g above mean the following:

-is uses inline styles instead of a separate CSS file.
-it uses an inline template instead of a separate HTML file.
--skipTests=true skips generating spec files for testing.

The resulting directory structure looks like this:

$ tree src/app/ui
src/app/ui
├── layout
│   ├── header
│   │   └── header.component.ts
│   ├── layout.component.ts
│   ├── main
│   │   └── main.component.ts
│   └── sidebar
│       └── sidebar.component.ts
└── ui.module.ts

4 directories, 5 files

To use the pieces we just created in the app, we need to import the UiModule into the AppModule. Edit src/app/app.module.ts and add the following import statement:

import { UiModule } from './ui/ui.module';

Then add UiModule to the imports array of the @NgModule decorator, as in the sketch below.
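
A minimal sketch of the resulting src/app/app.module.ts, assuming the default CLI scaffold:

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { ClarityModule } from '@clr/angular';
import { AppComponent } from './app.component';
import { UiModule } from './ui/ui.module';

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    BrowserAnimationsModule,
    ClarityModule,
    UiModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }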

Because we want to use Clarity inside the UiModule as well, we need to add it there too. Open src/app/ui/ui.module.ts and add the following import statements:

import { ClarityModule } from "@clr/angular";
import { BrowserAnimationsModule } from "@angular/platform-browser/animations";

Then add ClarityModule to the imports array of the @NgModule decorator.

Finally, add an exports array to the UiModule and export the LayoutComponent, because we need to use it from the AppComponent.

  imports: [
    CommonModule,
    ClarityModule
  ],
  exports: [
    LayoutComponent,
  ]

The UiModule now looks like this:

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { LayoutComponent } from './layout/layout.component';
import { HeaderComponent } from './layout/header/header.component';
import { SidebarComponent } from './layout/sidebar/sidebar.component';
import { MainComponent } from './layout/main/main.component';

import { ClarityModule } from "@clr/angular";
import { BrowserAnimationsModule } from "@angular/platform-browser/animations";

@NgModule({
  declarations: [LayoutComponent, HeaderComponent, SidebarComponent, MainComponent],
  imports: [
    CommonModule,
    ClarityModule
  ],
  exports: [
    LayoutComponent,
  ]
})
export class UiModule { }

6.2 Write the UI components

6.2.1 AppComponent

Let's start by updating the AppComponent template. Open src/app/app.component.html and replace its contents with:

<app-layout>
  <h1>{{title}}</h1>
</app-layout>

Here we reference the LayoutComponent through its app-layout selector.
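
The {{title}} binding relies on the title property that the CLI scaffold already defines; a minimal sketch of src/app/app.component.ts, assuming the default scaffold:

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  // set to the project name by ng new
  title = 'myclarity';
}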

6.2.2 LayoutComponent

Open src/app/ui/layout/layout.component.ts and replace the template section with the following:

  <div class="main-container">
    <app-header></app-header>
    <app-main>
      <ng-content></ng-content>
    </app-main>
  </div>

We wrap the content in a div with the main-container class and reference the HeaderComponent and the MainComponent; inside the MainComponent slot we project the app content using the built-in ng-content element.

6.2.3 HeaderComponent

Open src/app/ui/layout/header/header.component.ts and update the template:

   <header class="header-1">
    <div class="branding">
      <a class="nav-link">
        <clr-icon shape="shield"></clr-icon>
        <span class="title">Angular CLI</span>
      </a>
    </div>
    <div class="header-nav">
      <a class="active nav-link nav-icon">
        <clr-icon shape="home"></clr-icon>
      </a>
      <a class=" nav-link nav-icon">
        <clr-icon shape="cog"></clr-icon>
      </a>
    </div>
    <form class="search">
      <label for="search_input">
        <input id="search_input" type="text" placeholder="Search for keywords...">
      </label>
    </form>
    <div class="header-actions">
      <clr-dropdown class="dropdown bottom-right">
        <button class="nav-icon" clrDropdownToggle>
          <clr-icon shape="user"></clr-icon>
          <clr-icon shape="caret down"></clr-icon>
        </button>
        <div class="dropdown-menu">
          <a clrDropdownItem>About</a>
          <a clrDropdownItem>Preferences</a>
          <a clrDropdownItem>Log out</a>
        </div>
      </clr-dropdown>
    </div>
  </header>
  <nav class="subnav">
    <ul class="nav">
      <li class="nav-item">
        <a class="nav-link active" href="#">Dashboard</a>
      </li>
      <li class="nav-item">
        <a class="nav-link" href="#">Projects</a>
      </li>
      <li class="nav-item">
        <a class="nav-link" href="#">Reports</a>
      </li>
      <li class="nav-item">
        <a class="nav-link" href="#">Users</a>
      </li>
    </ul>
  </nav> 

The markup is a bit long, so here is a breakdown:

  • A header with the header-1 class.
  • A branding section made up of an icon and a title.
  • Two header icons, Home and Settings.
  • A search box with placeholder text.
  • A user icon with a three-item dropdown menu.
  • A sub navigation bar with four links.

6.2.4 MainComponent

Almost done. Open src/app/ui/layout/main/main.component.ts and update the template with the following:

  <div class="content-container">
    <div class="content-area">
      <ng-content></ng-content>
    </div>
    <app-sidebar class="sidenav"></app-sidebar>
  </div>  

In this component we wrap the sidebar and the content area in a div with the content-container class.

Inside it, the content-area div is where the app's main content is rendered, projected through the built-in ng-content element, and the app-sidebar selector pulls in the SidebarComponent.

6.2.5 SidebarComponent

Last step! Open src/app/ui/layout/sidebar/sidebar.component.ts and update the template:

    <nav>
    <section class="sidenav-content">
      <a class="nav-link active">Overview</a>
      <section class="nav-group collapsible">
        <input id="tabexample1" type="checkbox">
        <label for="tabexample1">Content</label>
        <ul class="nav-list">
          <li><a class="nav-link">Projects</a></li>
          <li><a class="nav-link">Reports</a></li>
        </ul>
      </section>
      <section class="nav-group collapsible">
        <input id="tabexample2" type="checkbox">
        <label for="tabexample2">System</label>
        <ul class="nav-list">
          <li><a class="nav-link">Users</a></li>
          <li><a class="nav-link">Settings</a></li>
        </ul>
      </section>
    </section>
  </nav>

At this point the Clarity-powered UI should be ready; it looks like the screenshot below:

/images/2020_11_09_14_12_03_551x393.jpg

7. Playing around

7.1 Add navigation pages

Add a pages module and the navigation page components (a note on wiring the new module into the app follows the commands):

$ ng g m pages
$ ng g c pages/dashboard -is -it --skipTests=true
$ ng g c pages/posts -is -it --skipTests=true
$ ng g c pages/settings -is -it --skipTests=true
$ ng g c pages/todos -is -it --skipTests=true
$ ng g c pages/users -is -it --skipTests=true
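
These page components are declared in the generated PagesModule. For the router (configured in the next step) to be able to render them, PagesModule also has to be imported into the AppModule; a short excerpt of src/app/app.module.ts, assuming the setup built so far:

import { PagesModule } from './pages/pages.module';

  imports: [
    BrowserModule,
    BrowserAnimationsModule,
    ClarityModule,
    UiModule,
    PagesModule
  ],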

Check the current directory structure:

$ tree src/app
src/app
├── app.component.css
├── app.component.html
├── app.component.spec.ts
├── app.component.ts
├── app.module.ts
├── pages
│   ├── dashboard
│   │   └── dashboard.component.ts
│   ├── pages.module.ts
│   ├── posts
│   │   └── posts.component.ts
│   ├── settings
│   │   └── settings.component.ts
│   ├── todos
│   │   └── todos.component.ts
│   └── users
│       └── users.component.ts
└── ui
    ├── layout
    │   ├── header
    │   │   └── header.component.ts
    │   ├── layout.component.ts
    │   ├── main
    │   │   └── main.component.ts
    │   └── sidebar
    │       └── sidebar.component.ts
    └── ui.module.ts

7.2 Routing

Create the routing module:

$ ng generate module app-routing --flat --module=app

Edit the generated file and add the routes (a note about router-outlet follows the listing):

$ vim src/app/app-routing.module.ts
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

import { DashboardComponent } from './pages/dashboard/dashboard.component';
import { PostsComponent } from './pages/posts/posts.component';
import { SettingsComponent } from './pages/settings/settings.component';
import { TodosComponent } from './pages/todos/todos.component';
import { UsersComponent } from './pages/users/users.component';

const routes: Routes = [
  {
    path: '',
    children: [
      { path: '', redirectTo: '/dashboard', pathMatch: 'full' },
      { path: 'dashboard', component: DashboardComponent },
      { path: 'posts', component: PostsComponent },
      { path: 'settings', component: SettingsComponent },
      { path: 'todos', component: TodosComponent },
      { path: 'users', component: UsersComponent },
    ]
  }
];

@NgModule({
  imports: [
    RouterModule.forRoot(routes),
  ],
  exports: [
    RouterModule,
  ],
})
export class AppRoutingModule { }
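
The --flat --module=app flags already register AppRoutingModule in the AppModule imports. The routed pages still need somewhere to render, so the AppComponent template needs a router-outlet; one option (a sketch of src/app/app.component.html) is to project it through the layout:

<app-layout>
  <h1>{{title}}</h1>
  <router-outlet></router-outlet>
</app-layout>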

7.3 Wire up the navigation (UI)

Import the RouterModule in src/app/ui/ui.module.ts so that directives such as routerLink become available in the UI templates (see the sketch after the snippet):

import { RouterModule } from '@angular/router';

  imports: [
    CommonModule,
    RouterModule,
    ClarityModule,
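
With RouterModule imported, the navigation links can use routerLink. A hedged sketch of the sub navigation in header.component.ts, matching the routes defined above (the exact link targets are assumptions):

  <nav class="subnav">
    <ul class="nav">
      <li class="nav-item">
        <a class="nav-link" routerLink="/dashboard" routerLinkActive="active">Dashboard</a>
      </li>
      <li class="nav-item">
        <a class="nav-link" routerLink="/posts" routerLinkActive="active">Posts</a>
      </li>
      <li class="nav-item">
        <a class="nav-link" routerLink="/settings" routerLinkActive="active">Settings</a>
      </li>
      <li class="nav-item">
        <a class="nav-link" routerLink="/users" routerLinkActive="active">Users</a>
      </li>
    </ul>
  </nav>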

7.4 Update the individual pages

Update src/app/pages/dashboard/dashboard.component.ts; an illustrative sketch follows below.
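
As an illustration only, a minimal dashboard component using a Clarity card (the content itself is made up):

import { Component } from '@angular/core';

@Component({
  selector: 'app-dashboard',
  template: `
    <div class="card">
      <div class="card-header">Dashboard</div>
      <div class="card-block">
        <p class="card-text">Overview widgets go here.</p>
      </div>
    </div>
  `,
  styles: []
})
export class DashboardComponent { }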

8. Add echarts

Install echarts support for Angular:

$  cnpm install echarts -S
$  cnpm install ngx-echarts -S
$  cnpm install resize-observer-polyfill -S

Import echarts:

$ vim src/app/app.module.ts
import * as echarts from 'echarts';
import { NgxEchartsModule } from 'ngx-echarts';


  imports: [
    BrowserModule,
    NgxEchartsModule.forRoot({
      echarts
    }),

Add a service (an example that ties the service and ngx-echarts together follows):

$ ng generate service app

CREATE src/app/app.service.spec.ts (342 bytes)
CREATE src/app/app.service.ts (132 bytes)
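
A hedged sketch of how the pieces can fit together: the generated AppService returns some dummy chart data, and the dashboard component from section 7.4 renders it with the echarts directive provided by ngx-echarts (the data and the getWeeklyVisits method are made up for illustration):

// src/app/app.service.ts
import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class AppService {
  // dummy data; a real app would fetch this over HTTP
  getWeeklyVisits(): number[] {
    return [120, 200, 150, 80, 70, 110, 130];
  }
}

// src/app/pages/dashboard/dashboard.component.ts
import { Component, OnInit } from '@angular/core';
import { AppService } from '../../app.service';

@Component({
  selector: 'app-dashboard',
  template: `
    <h2>Dashboard</h2>
    <div echarts [options]="chartOption" style="height: 320px;"></div>
  `,
  styles: []
})
export class DashboardComponent implements OnInit {
  chartOption: any = {};

  constructor(private appService: AppService) { }

  ngOnInit(): void {
    this.chartOption = {
      xAxis: { type: 'category', data: ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'] },
      yAxis: { type: 'value' },
      series: [{ type: 'bar', data: this.appService.getWeeklyVisits() }]
    };
  }
}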

WorkingTipsOnRongRobot

Building

In Azure DevOps, create a new project:

/images/2020_10_29_12_39_14_651x545.jpg

Create pipeline:

/images/2020_10_29_12_39_37_449x459.jpg

Select GitHub as the code location:

/images/2020_10_29_12_40_07_665x532.jpg

Authorize Azure Pipelines:

/images/2020_10_29_12_40_42_519x188.jpg

Select Repository:

/images/2020_10_29_12_41_19_706x269.jpg

Click Run:

/images/2020_10_29_12_41_49_894x394.jpg

View Status:

/images/2020_10_29_12_43_19_859x544.jpg

Running Status:

/images/2020_10_29_12_43_39_835x460.jpg

Check Result:

/images/2020_10_29_13_23_16_752x254.jpg

/images/2020_10_29_13_23_44_839x546.jpg

Check Artifacts:

/images/2020_10_29_13_24_05_754x257.jpg

Download Artifacts:

/images/2020_10_29_13_24_25_850x299.jpg

Patching

Static file Patching

After download:

 $ ls *
RobotSon.tar.gz

data:
docker 

release:
calicoctl  cni-plugins-linux-amd64-v0.8.7.tgz  kubeadm-v1.19.3-amd64  kubectl-v1.19.3-amd64  kubelet-v1.19.3-amd64

Create docker.tar.gz (place it at pre-rong/rong_static/for_master0/docker.tar.gz):

$ cd data
$ tar czf docker.tar.gz docker/

Copy the release folder to pre-rong/rong_static/for_cluster/:

$  ls pre-rong/rong_static/for_cluster/
calicoctl  cni-plugins-linux-amd64-v0.8.7.tgz  docker  gpg  kubeadm-v1.18.8-amd64  kubectl-v1.18.8-amd64  kubelet-v1.18.8-amd64  netdata-v1.22.1.gz.run

Code Patching

Fetch the code and apply the patch file:

# git clone https://github.com/kubernetes-sigs/kubespray.git
# cd kubespray
# git checkout tags/v2.xx.0 -b xxxx
# git apply --check ../patch 
# check whether the patch applies cleanly
# on v1.19 (master), the following two files need to be excluded:
# git apply  /root/patch --exclude=roles/kubernetes-apps/helm/templates/tiller-clusterrolebinding.yml.j2 --exclude=roles/remove-node/remove-etcd-node/tasks/main.yml

Minor changes inside the deployment framework

rong-vars.yml:

/images/2020_10_29_14_12_23_871x347.jpg

/images/2020_10_29_14_12_36_425x97.jpg

rong/1_preinstall/role/preinstall/task/main.yml:

/images/2020_10_29_14_11_39_875x477.jpg

WorkingTipsOnGitDiffPatch

Current state of the RONG code layout

Before creating the patch, make sure the symlinks in the Kubespray source tree really are symlinks and have not been replaced by regular files by a cp.

Create the patch against v2.14.0

# git clone https://github.com/kubernetes-sigs/kubespray.git
# git checkout tags/v2.14.0 -b 2140

This checks out the unmodified v2.14.0 code.

Replace the contents of this directory with the code from 3_k8s, taking care to remove any intermediate files generated during deployment, then commit the changes.

/images/2020_10_29_11_03_04_638x297.jpg

Create the patch file:

git diff a1f04e f0c9b1 > patch1

Apply patch

Switch back to the master branch, or simply check out a fresh working copy in a new directory:

# git apply --check ../patch 
error: patch failed: roles/kubernetes-apps/helm/templates/tiller-clusterrolebinding.yml.j2:3
error: roles/kubernetes-apps/helm/templates/tiller-clusterrolebinding.yml.j2: patch does not apply
error: patch failed: roles/remove-node/remove-etcd-node/tasks/main.yml:21
error: roles/remove-node/remove-etcd-node/tasks/main.yml: patch does not apply

This happens because the new branch (master) differs from v2.14.0 in the files listed above, so we need to exclude those files when applying the patch:

git apply  /root/patch --exclude=roles/kubernetes-apps/helm/templates/tiller-clusterrolebinding.yml.j2 --exclude=roles/remove-node/remove-etcd-node/tasks/main.yml

Once this completes, the new version of the code contains the changes we made on the old branch.

Files with conflicts have to be resolved manually.

Example

Applying the git patch on a different branch:

# git clone https://github.com/kubernetes-sigs/kubespray.git
# git checkout tags/v2.14.0 -b 2140
 (2140) $ git apply ../../patch 
 (2140 !*%) $ vim roles/container-engine/docker/tasks/main.yml
 (2140 !*%) $ git checkout master
error: Your local changes to the following files would be overwritten by checkout:
	roles/kubernetes-apps/helm/templates/tiller-clusterrolebinding.yml.j2
	roles/kubernetes/preinstall/tasks/0020-verify-settings.yml
	roles/remove-node/remove-etcd-node/tasks/main.yml
Please commit your changes or stash them before you switch branches.
Aborting
(2140 !*%) $ git add .                                                                                                                        1 ↵
(2140 !+) $ git commit -m "modified in 2.14.0"
[2140 2d87573d] modified in 2.14.0
 19 files changed, 504 insertions(+), 427 deletions(-)
 delete mode 100644 contrib/packaging/rpm/kubespray.spec
 create mode 100644 inventory/sample/hosts.ini
 rewrite roles/bootstrap-os/tasks/main.yml (99%)
 create mode 100644 roles/bootstrap-os/tasks/main_kfz.yml
 copy roles/bootstrap-os/tasks/{main.yml => main_main.yml} (99%)
 rewrite roles/container-engine/docker/tasks/main.yml (99%)
 create mode 100644 roles/container-engine/docker/tasks/main_kfz.yml
 copy roles/container-engine/docker/tasks/{main.yml => main_main.yml} (92%)
dash@archnvme:/media/sda/git/pure/kubespray (2140) $ git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
 (master) $ ls
ansible.cfg          code-of-conduct.md  Dockerfile       index.html  logo         OWNERS_ALIASES             remove-node.yml   scale.yml          setup.py             Vagrantfile
ansible_version.yml  _config.yml         docs             inventory   Makefile     README.md                  requirements.txt  scripts            test-infra
cluster.yml          contrib             extra_playbooks  library     mitogen.yml  recover-control-plane.yml  reset.yml         SECURITY_CONTACTS  tests
CNAME                CONTRIBUTING.md     facts.yml        LICENSE     OWNERS       RELEASE.md                 roles             setup.cfg          upgrade-cluster.yml
(master) $ git apply ../../patch --exclude=roles/kubernetes-apps/helm/templates/tiller-clusterrolebinding.yml.j2 --exclude=roles/remove-node/remove-etcd-node/tasks/main.yml
(master !*%) $ git checkout 2140
error: Your local changes to the following files would be overwritten by checkout:
	cluster.yml
	roles/bootstrap-os/tasks/main.yml
	roles/container-engine/docker/meta/main.yml
	roles/container-engine/docker/tasks/main.yml
	roles/container-engine/docker/tasks/pre-upgrade.yml
	roles/container-engine/docker/templates/docker-options.conf.j2
	roles/container-engine/docker/templates/docker.service.j2
	roles/kubernetes/node/tasks/kubelet.yml
	roles/kubernetes/preinstall/tasks/0020-verify-settings.yml
	roles/kubernetes/preinstall/tasks/0080-system-configurations.yml
	roles/kubernetes/preinstall/tasks/main.yml
Please commit your changes or stash them before you switch branches.
error: The following untracked working tree files would be overwritten by checkout:
	inventory/sample/hosts.ini
	roles/bootstrap-os/tasks/main_kfz.yml
	roles/bootstrap-os/tasks/main_main.yml
	roles/container-engine/docker/tasks/main_kfz.yml
	roles/container-engine/docker/tasks/main_main.yml
Please move or remove them before you switch branches.
Aborting
(master !*%) $ git add .                                                                                                                      1 ↵
(master !+) $ git commit -m "apply in master"
[master a5941286] apply in master
 17 files changed, 502 insertions(+), 426 deletions(-)
 delete mode 100644 contrib/packaging/rpm/kubespray.spec
 create mode 100644 inventory/sample/hosts.ini
 rewrite roles/bootstrap-os/tasks/main.yml (99%)
 create mode 100644 roles/bootstrap-os/tasks/main_kfz.yml
 copy roles/bootstrap-os/tasks/{main.yml => main_main.yml} (99%)
 rewrite roles/container-engine/docker/tasks/main.yml (99%)
 create mode 100644 roles/container-engine/docker/tasks/main_kfz.yml
 copy roles/container-engine/docker/tasks/{main.yml => main_main.yml} (92%)
 (master) $ git checkout 2140              
Switched to branch '2140'
 (2140) $ git checkout master
Switched to branch 'master'
Your branch is ahead of 'origin/master' by 1 commit.
  (use "git push" to publish your local commits)
(master) $ pwd
/media/sda/git/pure/kubespray

WorkingTipsOnRongRobot

Azure DevOps

Create a new project:

/images/2020_10_28_08_37_20_634x542.jpg

Add the ssh-key into project:

/images/2020_10_28_08_31_38_457x200.jpg

Configure the time/locale:

/images/2020_10_28_08_32_41_584x548.jpg

Repos

Create a new repository and set the remote branch:

# mkdir RongRobot
# cd RongRobot
# vim README.md
# git init
# git add .
# git commit -m "First Commit"
# git remote add origin git@ssh.dev.azure.com:v3/purplepalm/RongRobot/RongRobot
# git push -u origin --all

View status on azure devops:

/images/2020_10_28_08_41_24_1023x408.jpg

Click Set up build to set up the pipeline:

/images/2020_10_28_08_42_18_317x141.jpg

Starter pipeline:

/images/2020_10_28_08_42_48_518x262.jpg

Edit something:

/images/2020_10_28_08_43_34_791x555.jpg

Codes

Write your own azure-pipelines.yml to do the build and packaging; a rough sketch follows below.
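
A minimal azure-pipelines.yml sketch with a packaging step and an artifact publish step (the packaging commands depend on the RongRobot repo and are only placeholders here):

# minimal sketch; replace the script step with the real packaging commands
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: |
    echo "package the static files here"
    tar czf RobotSon.tar.gz data/ release/
  displayName: 'Package files'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: 'RobotSon.tar.gz'
    ArtifactName: 'drop'
  displayName: 'Publish artifact'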

WorkingTipsInRonggraphInLXD

lxd environment

Install lxd(Offline):

snap download core
snap download core18
snap download lxd
snap ack core18_1885.assert; snap ack core_10185.assert; snap ack lxd_17936.assert
snap install core18_1885.snap ; snap install core_10185.snap ; snap install lxd_17936.snap
dpkg -i ./lxd_1%3a0.9_all.deb
which lxc
which lxd

Show lxc images:

root@rong320-1:~/lxd# lxc image list
If this is your first time running LXD on this machine, you should also run: lxd init
To start your first instance, try: lxc launch ubuntu:18.04

+-------+-------------+--------+-------------+--------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+--------------+------+------+-------------+

Download lxd images:

https://us.images.linuxcontainers.org/images/alpine/3.12/amd64/default/20201021_13:00/

Download rootfs.squashfs and lxd.tar.xz, then import them:

root@rong320-1:~/lxdimages# ls
lxd.tar.xz  rootfs.squashfs
root@rong320-1:~/lxdimages# lxc image import lxd.tar.xz rootfs.squashfs --alias alpine312
Image imported with fingerprint: 76560d125792d7710d70f41b060e81f0bd4d83f1cc4e8dbd43fc371e5dea27bf
root@rong320-1:~/lxdimages# lxc image list
+-----------+--------------+--------+------------------------------------------+--------------+-----------+--------+------------------------------+
|   ALIAS   | FINGERPRINT  | PUBLIC |               DESCRIPTION                | ARCHITECTURE |   TYPE    |  SIZE  |         UPLOAD DATE          |
+-----------+--------------+--------+------------------------------------------+--------------+-----------+--------+------------------------------+
| alpine312 | 76560d125792 | no     | Alpinelinux 3.12 x86_64 (20201021_13:00) | x86_64       | CONTAINER | 2.40MB | Oct 22, 2020 at 3:48am (UTC) |
+-----------+--------------+--------+------------------------------------------+--------------+-----------+--------+------------------------------+

Auto-configure lxd with a preseed (see https://discuss.linuxcontainers.org/t/usage-of-lxd-init-preseed/1069/3 and https://lxd.readthedocs.io/en/latest/preseed/):

cat <<EOF | lxd init --preseed
config:
  core.https_address: 10.137.149.161:9199
  images.auto_update_interval: 15
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: none
EOF
root@rong320-1:~/lxdimages# cat storages.yml 
storage_pools:
- name: default
  driver: dir
  config:
    source: ""
root@rong320-1:~/lxdimages# lxd init --preseed<./storages.yml

root@rong320-1:~/lxdimages# cat profiles.yml 
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      nictype: bridged
      parent: lxdbr0
      type: nic
root@rong320-1:~/lxdimages# lxd init --preseed<profiles.yml

Now we can check the default lxd bridge (lxdbr0).

Docker/Docker-compose in alpine

To launch with a custom profile named k8s (created beforehand with lxc profile create k8s):

# lxc launch alpine312 firstalpine -p k8s

Create the first alpine instance:

# lxc launch alpine312 firstalpine
Creating firstalpine
Starting firstalpine           
# lxc ls
+-------------+---------+---------------------+------+-----------+-----------+
|    NAME     |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+-------------+---------+---------------------+------+-----------+-----------+
| firstalpine | RUNNING | 10.31.47.210 (eth0) |      | CONTAINER | 0         |
+-------------+---------+---------------------+------+-----------+-----------+
root@rong320-1:~/lxdimages# lxc exec firstalpine /bin/sh
~ # cat /etc/issue
Welcome to Alpine Linux 3.12
Kernel \r on an \m (\l)

Configure repository:

 # echo "https://mirrors.aliyun.com/alpine/v3.12/main/" > /etc/apk/repositories
 # echo "https://mirrors.aliyun.com/alpine/v3.12/community/" >> /etc/apk/repositories
# apk update
# apk add docker-engine docker-compose docker-cli

Create the cgroups-patch file under /etc/init.d:

#!/sbin/openrc-run

description="Mount the control groups for Docker"

depend()
{
    keyword -docker
    need sysfs cgroups
}

start()
{
    if [ -d /sys/fs/cgroup ]; then
        mkdir -p /sys/fs/cgroup/cpu,cpuacct
        mkdir -p /sys/fs/cgroup/net_cls,net_prio

        mount -n -t cgroup cgroup /sys/fs/cgroup/cpu,cpuacct -o rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
        mount -n -t cgroup cgroup /sys/fs/cgroup/net_cls,net_prio -o rw,nosuid,nodev,noexec,relatime,net_cls,net_prio

        if ! mountinfo -q /sys/fs/cgroup/openrc; then
            local agent="${RC_LIBEXECDIR}/sh/cgroup-release-agent.sh"
            mkdir -p /sys/fs/cgroup/openrc
            mount -n -t cgroup -o none,nodev,noexec,nosuid,name=systemd,release_agent="$agent" openrc /sys/fs/cgroup/openrc
        fi
    fi

    return 0
}

Enable auto-start, tweak the docker init script, and reboot:

# rc-update add cgroups-patch boot
# vim /etc/init.d/docker
.....
start_pre() {
        #checkpath -f -m 0644 -o root:docker "$DOCKER_ERRFILE" "$DOCKER_OUTFILE"
        echo "fucku"
}
.....
# rc-service docker start
# rc-update add docker default
# reboot

After the reboot, check the docker version.

Push files into the lxc instance:

# lxc file push -r podmanitems/ firstalpine/root/

Load all the images inside the instance, then list them:

~ # docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
rong/ui             master              66ad16eb15c5        20 minutes ago      28.9MB
rong/server         master              8150777ead18        23 hours ago        301MB
rong/kobe           master              2d0a03d6cedb        2 days ago          231MB
rong/nginx          1.19.2-amd64        7e4d58f0e5f3        5 weeks ago         133MB
rong/webkubectl     v2.6.0-amd64        4aa634837fea        2 months ago        349MB
rong/mysql-server   8.0.21-amd64        8a3a24ad33be        3 months ago        366MB
# lxc file push ronggraph.tar firstalpine/root/
# tar xzvf /root/ronggraph.tar

Write an openrc start script for ronggraph:

#!/sbin/openrc-run
#
# author: Yusuke Kawatsu

workspace="/root/ronggraph"
cmdpath="/usr/bin/docker-compose"
prog="ronggraph"
lockfile="/var/lock/ronggraph"
pidfile="/var/run/ronggraph.pid"
PATH="$PATH:/usr/local/bin"


start() {
    [ -x $cmdpath ] || exit 5
    echo -n $"Starting $prog: "

    cd $workspace
    $cmdpath up -d
    retval=$?
    pid=$!
    echo
    [ $retval -eq 0 ] && touch $lockfile && echo $pid > $pidfile

    return $retval
}

stop() {
    [ -x $cmdpath ] || exit 5
    echo -n $"Stopping $prog: "

    cd $workspace
    $cmdpath down
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile && rm -f $pidfile

    return $retval
}

restart() {
    stop
    sleep 3
    start
}

depend() {
    need docker
}

Now add ronggraph to the default runlevel:

# rc-update add ronggraph default
# halt

Save the current status:

root@rong320-1:~/lxdimages# lxc stop firstalpine
root@rong320-1:~/lxdimages# lxc publish --public firstalpine --alias=ronggraph
root@rong320-1:/mnt# lxc image ls
+-----------+--------------+--------+------------------------------------------+--------------+-----------+----------+------------------------------+
|   ALIAS   | FINGERPRINT  | PUBLIC |               DESCRIPTION                | ARCHITECTURE |   TYPE    |   SIZE   |         UPLOAD DATE          |
+-----------+--------------+--------+------------------------------------------+--------------+-----------+----------+------------------------------+
| alpine312 | 76560d125792 | no     | Alpinelinux 3.12 x86_64 (20201021_13:00) | x86_64       | CONTAINER | 2.40MB   | Oct 22, 2020 at 6:18am (UTC) |
+-----------+--------------+--------+------------------------------------------+--------------+-----------+----------+------------------------------+
| ronggraph | b31788790460 | yes    | Alpinelinux 3.12 x86_64 (20201021_13:00) | x86_64       | CONTAINER | 619.40MB | Oct 22, 2020 at 8:05am (UTC) |
+-----------+--------------+--------+------------------------------------------+--------------+-----------+----------+------------------------------+

Launch a new instance:

# lxc launch ronggraph ronggraph -p k8s
# 

Add forward rules:

lxc config device add ronggraph myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:0.0.0.0:80
lxc config device add ronggraph myport443 proxy listen=tcp:0.0.0.0:443 connect=tcp:0.0.0.0:443

arm64 working tips

On a Raspberry Pi running Arch Linux (arm64), install:

# pacman -Sy
# pacman -S lxc lxd
# systemctl enable lxd
# systemctl start lxd

Download images from:

https://us.images.linuxcontainers.org/images/alpine/3.12/arm64/default/20201022_13:00/

Download rootfs.squashfs and lxd.tar.xz, then:

# lxc image import lxd.tar.xz rootfs.squashfs --alias alpine312
# lxd init --preseed<pre-rong/lxditems/lxd_snap/init.yaml
# lxc profile create k8s
# lxc profile edit k8s<pre-rong/lxditems/lxdimages/k8s.yaml

/images/2020_10_23_09_55_43_629x378.jpg

Configure lxc for running:

https://wiki.archlinux.org/index.php/Linux_Containers

lxc Installation:

~ # sed -i 's/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/g' /etc/apk/repositories
~ # cat /etc/apk/repositories 
http://mirrors.ustc.edu.cn/alpine/v3.12/main
http://mirrors.ustc.edu.cn/alpine/v3.12/community

Install docker/docker-compose and modify its startup as described earlier.

lxc publish will take a very long time!

# lxc ls
+------+---------+------------------------------+------+-----------+-----------+
| NAME |  STATE  |             IPV4             | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+------------------------------+------+-----------+-----------+
| king | RUNNING | 172.18.0.1 (br-74a26d2404f6) |      | CONTAINER | 0         |
|      |         | 172.17.0.1 (docker0)         |      |           |           |
|      |         | 10.150.132.185 (eth0)        |      |           |           |
+------+---------+------------------------------+------+-----------+-----------+
# lxc publish --public king --alias=ronggraph
# lxc image ls
+-----------+--------------+--------+-------------------------------------------+--------------+-----------+-----------+-------------------------------+
|   ALIAS   | FINGERPRINT  | PUBLIC |                DESCRIPTION                | ARCHITECTURE |   TYPE    |   SIZE    |          UPLOAD DATE          |
+-----------+--------------+--------+-------------------------------------------+--------------+-----------+-----------+-------------------------------+
| alpine312 | 58ebec92505e | no     | Alpinelinux 3.12 aarch64 (20201022_13:00) | aarch64      | CONTAINER | 2.20MB    | Oct 23, 2020 at 1:52am (UTC)  |
+-----------+--------------+--------+-------------------------------------------+--------------+-----------+-----------+-------------------------------+
| ronggraph | 607287f518d4 | yes    | Alpinelinux 3.12 aarch64 (20201022_13:00) | aarch64      | CONTAINER | 2655.15MB | Oct 23, 2020 at 10:13am (UTC) |
+-----------+--------------+--------+-------------------------------------------+--------------+-----------+-----------+-------------------------------+
#  lxc image export ronggraph .
# ls *.tar.gz
-rw-r--r--  1 root  root   2784123072 Oct 26 00:40 607287f518d40783ed968cd2f2434fba101d4332ccc16f1e66cfb43049208d57.tar.gz

Transfer the tar.gz to the arm64 server and import it there:

# /snap/bin/lxc image import lxditems/lxdimages/607287f518d40783ed968cd2f2434fba101d4332ccc16f1e66cfb43049208d57.tar.gz --alias ronggraph