November 28, 2016
 
  1. Check the wireless adapter model
# /sbin/lspci | grep Network
04:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8723BE PCIe Wireless Network Adapter
  2. Find a driver
    There is a project on GitHub that provides Realtek wireless drivers, https://github.com/lwfinger/rtlwifi_new , but it fails to compile: it requires kernel 3.12 or later, while CentOS 7 ships with kernel 3.10.

  3. While looking into upgrading the kernel, I found that elrepo.org provides a driver for the RTL8723BE. After downloading and installing it and rebooting, the wireless adapter was recognized.

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
# yum install kmod-rtl8723be.x86_64
# reboot
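
After the reboot, you can check that the driver module is loaded and that the wireless interface shows up (a quick sketch; the interface name will vary):

# lsmod | grep rtl8723be
# ip link show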
September 9, 2016
 

The Kolla project containerizes OpenStack. Its goal is an out-of-the-box deployment that scales to around 100 nodes, with HA for every component. Kolla is a revolutionary project: the installation and deployment experience we accumulated before is largely obsolete. With Kolla you can quickly deploy a scalable, reliable, production-ready OpenStack environment.

Basic environment

OS: CentOS Linux release 7.2.1511 (Core)
Kernel: 3.10.0-327.28.3.el7.x86_64
Docker: Docker version 1.12.1, build 23cf638

Deploying Kolla

1.  Install dependencies

yum install epel-release python-pip
yum install -y python-devel libffi-devel openssl-devel gcc
pip install -U pip

2.  Modify the Docker service unit file

# Create the drop-in unit directory for docker.service
mkdir -p /etc/systemd/system/docker.service.d

# Create the drop-in unit file
tee /etc/systemd/system/docker.service.d/kolla.conf <<-'EOF'
[Service]
MountFlags=shared
EOF

Restart Docker

systemctl daemon-reload
systemctl restart docker
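
To confirm that the drop-in took effect, the loaded unit and the mount propagation setting can be inspected (a quick check, assuming systemd on CentOS 7):

systemctl cat docker | grep -A2 kolla.conf
systemctl show docker -p MountFlags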

3.  Install the Docker Python library

yum install python-docker-py
or
pip install -U docker-py

4.  Configure time synchronization (omitted here)
5.  Disable libvirt

systemctl stop libvirtd.service
systemctl disable libvirtd.service

6.  Install Ansible
Note that the stable branch of Kolla requires Ansible < 2.0, while the master branch requires Ansible > 2.0. By default yum installs an Ansible version > 2.0; since I am installing stable/mitaka, I pin the version explicitly.

pip install -U ansible==1.9.4

7.  Install the stable branch of Kolla

Download the source

git clone https://git.openstack.org/openstack/kolla -b stable/mitaka

Install the dependencies

pip install -r kolla/requirements.txt -r kolla/test-requirements.txt

Install from source

pip install kolla/

8.  Install tox and generate the configuration files

pip install -U tox
cd kolla/
tox -e genconfig
cp -rv etc/kolla /etc/

9.  Install the Python clients

yum install python-openstackclient python-neutronclient

10.  Local Docker registry
A local registry is not required for an all-in-one environment, so it is not configured here.

Building images

kolla-build
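
By default kolla-build builds every image, which can take a long time. A subset of images can be built by passing name patterns and basic options on the command line (a sketch; adjust to your environment):

kolla-build --base centos --type binary keystone nova- neutron-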

For more build options, see: Building Container Images
If individual images fail to build, just rerun the command; thanks to Docker's build cache, rebuilding is fast.
The images produced by a successful build are listed below:

# docker images
REPOSITORY                                      TAG                 IMAGE ID            CREATED             SIZE
kolla/centos-binary-heat-engine                 2.0.3               28956cc878d3        20 hours ago        571.4 MB
kolla/centos-binary-heat-api-cfn                2.0.3               d69858fd13fa        20 hours ago        571.4 MB
kolla/centos-binary-heat-api                    2.0.3               90a92ca6b71a        20 hours ago        571.4 MB
kolla/centos-binary-heat-base                   2.0.3               8f1cf8a1f536        21 hours ago        551.6 MB
kolla/centos-binary-neutron-openvswitch-agent   2.0.3               e7d0233ca541        21 hours ago        822.3 MB
kolla/centos-binary-neutron-base                2.0.3               8767569ca9b3        21 hours ago        796.7 MB
kolla/centos-binary-openvswitch-vswitchd        2.0.3               6867586ae335        21 hours ago        330.6 MB
kolla/centos-binary-openvswitch-db-server       2.0.3               3c692f316662        21 hours ago        330.6 MB
kolla/centos-binary-openvswitch-base            2.0.3               c3a263463f8f        21 hours ago        330.6 MB
kolla/centos-binary-cron                        2.0.3               d16d53e85ed9        26 hours ago        317.5 MB
kolla/centos-binary-kolla-toolbox               2.0.3               1fd9634b88ee        26 hours ago        568.4 MB
kolla/centos-binary-heka                        2.0.3               627a3de5e91c        26 hours ago        371.1 MB
kolla/centos-binary-neutron-metadata-agent      2.0.3               aad43ed7a5a1        42 hours ago        796.7 MB
kolla/centos-binary-neutron-server              2.0.3               bc1a7c0ec402        42 hours ago        796.7 MB
kolla/centos-binary-nova-compute                2.0.3               619344ac721b        42 hours ago        1.055 GB
kolla/centos-binary-nova-libvirt                2.0.3               6144729fff5f        42 hours ago        1.106 GB
kolla/centos-binary-neutron-linuxbridge-agent   2.0.3               720c9c5fa63d        42 hours ago        822 MB
kolla/centos-binary-neutron-l3-agent            2.0.3               3a82df7cb9c2        42 hours ago        796.7 MB
kolla/centos-binary-glance-api                  2.0.3               fb67115357d5        42 hours ago        673.8 MB
kolla/centos-binary-neutron-dhcp-agent          2.0.3               8c6fa56497ca        42 hours ago        796.7 MB
kolla/centos-binary-nova-compute-ironic         2.0.3               6f235dc430e5        43 hours ago        1.019 GB
kolla/centos-binary-glance-registry             2.0.3               f4cf7bc1536f        43 hours ago        673.8 MB
kolla/centos-binary-cinder-volume               2.0.3               0197cc13468d        43 hours ago        788.4 MB
kolla/centos-binary-cinder-api                  2.0.3               ed7c623e7364        43 hours ago        800.4 MB
kolla/centos-binary-cinder-rpcbind              2.0.3               75466dc5a3ba        43 hours ago        790.2 MB
kolla/centos-binary-horizon                     2.0.3               92c7ea9fc493        43 hours ago        703.1 MB
kolla/centos-binary-cinder-backup               2.0.3               e3ee19440831        43 hours ago        761.3 MB
kolla/centos-binary-cinder-scheduler            2.0.3               e3ee19440831        43 hours ago        761.3 MB
kolla/centos-binary-nova-consoleauth            2.0.3               96a9638801cd        43 hours ago        609.6 MB
kolla/centos-binary-nova-api                    2.0.3               eff73f704a90        43 hours ago        609.4 MB
kolla/centos-binary-nova-conductor              2.0.3               6016ae01a60d        43 hours ago        609.4 MB
kolla/centos-binary-nova-scheduler              2.0.3               726f100a5533        43 hours ago        609.4 MB
kolla/centos-binary-nova-spicehtml5proxy        2.0.3               c6a1a49e4226        43 hours ago        609.9 MB
kolla/centos-binary-glance-base                 2.0.3               1e4efa0f6701        43 hours ago        673.8 MB
kolla/centos-binary-nova-network                2.0.3               87f6389dd11a        43 hours ago        610.4 MB
kolla/centos-binary-ironic-pxe                  2.0.3               82f25f73c28f        43 hours ago        574.2 MB
kolla/centos-binary-nova-novncproxy             2.0.3               4726875ed228        43 hours ago        610.1 MB
kolla/centos-binary-nova-ssh                    2.0.3               51c70b9e9c47        43 hours ago        610.4 MB
kolla/centos-binary-cinder-base                 2.0.3               7c2d031be713        43 hours ago        761.3 MB
kolla/centos-binary-keystone                    2.0.3               c51a93cc9e2e        43 hours ago        585.2 MB
kolla/centos-binary-ironic-api                  2.0.3               b1771f5cc27f        43 hours ago        570.6 MB
kolla/centos-binary-ironic-inspector            2.0.3               32f4e33e1037        43 hours ago        576.2 MB
kolla/centos-binary-ironic-conductor            2.0.3               d552c64f3a08        43 hours ago        599 MB
kolla/centos-binary-nova-base                   2.0.3               8f077fafc5d8        43 hours ago        588.7 MB
kolla/centos-binary-rabbitmq                    2.0.3               d9e543e4f179        43 hours ago        370.3 MB
kolla/centos-binary-ironic-base                 2.0.3               6c4c453ddbce        43 hours ago        550.8 MB
kolla/centos-binary-openstack-base              2.0.3               cf48d5b3f3ee        43 hours ago        518.2 MB
kolla/centos-binary-mariadb                     2.0.3               cd9b363fe034        43 hours ago        630.5 MB
kolla/centos-binary-memcached                   2.0.3               49c536466427        43 hours ago        354.6 MB
kolla/centos-binary-base                        2.0.3               d04ac1ecd01a        43 hours ago        300 MB
centos                                          latest              980e0e4c79ec        2 days ago          196.7 MB

Deploying containers

1.  Generate passwords
Passwords and other variables for the OpenStack environment can be set in /etc/kolla/passwords.yml. For convenience, the kolla-genpwd tool can generate strong random passwords automatically.

kolla-genpwd

For convenience, we change the admin login password in it:

vim /etc/kolla/passwords.yml
keystone_admin_password: admin

2.  Edit the deployment configuration
Edit /etc/kolla/globals.yml to specify the deployment details:

vim /etc/kolla/globals.yml
kolla_base_distro: "centos"
kolla_install_type: "binary"
enable_haproxy: "no"
#kolla_internal_vip_address: "10.10.10.254"
kolla_internal_address: "192.168.2.120"
network_interface: "ens160"
neutron_external_interface: "ens192"
neutron_plugin_agent: "openvswitch"
openstack_logging_debug: "True"

3.  Check the configuration

kolla-ansible prechecks

4.  Start the deployment

kolla-ansible deploy

After the deployment succeeds, check the containers

# docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
3938136934cf        kolla/centos-binary-horizon:2.0.3                     "kolla_start"            17 hours ago        Up 17 hours                             horizon
cc68cb8d96e4        kolla/centos-binary-heat-engine:2.0.3                 "kolla_start"            17 hours ago        Up 17 hours                             heat_engine
96c94995ef7c        kolla/centos-binary-heat-api-cfn:2.0.3                "kolla_start"            17 hours ago        Up 17 hours                             heat_api_cfn
cb8ae3afb767        kolla/centos-binary-heat-api:2.0.3                    "kolla_start"            17 hours ago        Up 17 hours                             heat_api
e8f98659e03f        kolla/centos-binary-neutron-metadata-agent:2.0.3      "kolla_start"            17 hours ago        Up 17 hours                             neutron_metadata_agent
d326fa732c2b        kolla/centos-binary-neutron-l3-agent:2.0.3            "kolla_start"            17 hours ago        Up 17 hours                             neutron_l3_agent
4b1bbbe4fe5b        kolla/centos-binary-neutron-dhcp-agent:2.0.3          "kolla_start"            17 hours ago        Up 17 hours                             neutron_dhcp_agent
88b2afbba5d9        kolla/centos-binary-neutron-openvswitch-agent:2.0.3   "kolla_start"            17 hours ago        Up 17 hours                             neutron_openvswitch_agent
b73d52de75b2        kolla/centos-binary-neutron-server:2.0.3              "kolla_start"            17 hours ago        Up 17 hours                             neutron_server
1c716402d95f        kolla/centos-binary-openvswitch-vswitchd:2.0.3        "kolla_start"            17 hours ago        Up 17 hours                             openvswitch_vswitchd
176e7ee659f1        kolla/centos-binary-openvswitch-db-server:2.0.3       "kolla_start"            17 hours ago        Up 17 hours                             openvswitch_db
457e0921c61a        kolla/centos-binary-nova-ssh:2.0.3                    "kolla_start"            17 hours ago        Up 17 hours                             nova_ssh
b02acebb3dc3        kolla/centos-binary-nova-compute:2.0.3                "kolla_start"            17 hours ago        Up 17 hours                             nova_compute
59be78a597d8        kolla/centos-binary-nova-libvirt:2.0.3                "kolla_start"            17 hours ago        Up 17 hours                             nova_libvirt
668ad8f91920        kolla/centos-binary-nova-conductor:2.0.3              "kolla_start"            17 hours ago        Up 17 hours                             nova_conductor
34f81b4bc18b        kolla/centos-binary-nova-scheduler:2.0.3              "kolla_start"            17 hours ago        Up 17 hours                             nova_scheduler
eb47844e6547        kolla/centos-binary-nova-novncproxy:2.0.3             "kolla_start"            17 hours ago        Up 17 hours                             nova_novncproxy
93563016cf21        kolla/centos-binary-nova-consoleauth:2.0.3            "kolla_start"            17 hours ago        Up 17 hours                             nova_consoleauth
cc8a1cca2e98        kolla/centos-binary-nova-api:2.0.3                    "kolla_start"            17 hours ago        Up 17 hours                             nova_api
40db89e89758        kolla/centos-binary-glance-api:2.0.3                  "kolla_start"            17 hours ago        Up 17 hours                             glance_api
4fa5f0f38f0d        kolla/centos-binary-glance-registry:2.0.3             "kolla_start"            17 hours ago        Up 17 hours                             glance_registry
f05120c95a9f        kolla/centos-binary-keystone:2.0.3                    "kolla_start"            17 hours ago        Up 17 hours                             keystone
149a49d57aa6        kolla/centos-binary-rabbitmq:2.0.3                    "kolla_start"            17 hours ago        Up 17 hours                             rabbitmq
5f4298c3821e        kolla/centos-binary-mariadb:2.0.3                     "kolla_start"            17 hours ago        Up 17 hours                             mariadb
64f6fbb19892        kolla/centos-binary-cron:2.0.3                        "kolla_start"            17 hours ago        Up 17 hours                             cron
4cab0e756b61        kolla/centos-binary-kolla-toolbox:2.0.3               "/usr/local/bin/dumb-"   17 hours ago        Up 17 hours                             kolla_toolbox
293a7ccaab52        kolla/centos-binary-heka:2.0.3                        "kolla_start"            17 hours ago        Up 17 hours                             heka
6dcf3a2c12cc        kolla/centos-binary-memcached:2.0.3                   "kolla_start"            17 hours ago        Up 17 hours                             memcached

5.  Change the virtualization type
Because the installation runs inside a virtual machine, KVM is not available, so the virtualization type must be changed to qemu:

vim /etc/kolla/nova-compute/nova.conf
[libvirt]
...
virt_type=qemu
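
For the change to take effect, restart the nova_compute container (a sketch; the container name matches the docker ps output above):

docker restart nova_compute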

After that, the OpenStack environment can be accessed via kolla_internal_address.

Useful tools

1.  After the deployment finishes, the following command generates an openrc file (the environment variables needed to run the OpenStack CLI):

kolla-ansible post-deploy

2.  Once the openrc file has been generated, the following commands do some basic OpenStack initialization for you, including uploading a Glance image and creating a few virtual networks:

source /etc/kolla/admin-openrc.sh
kolla/tools/init-runonce

3.  Because of errors, several deployment attempts may be needed, and some errors are not corrected by simply redeploying, so the whole environment may need to be cleaned up:

tools/cleanup-containers                # remove the deployed containers from the system
tools/cleanup-host                      # clean up host-side leftovers, e.g. residual network changes made by the Docker-started neutron agents
tools/cleanup-images                    # remove all Docker images from the local cache

Viewing logs

Kolla collects the logs of all containers through the heka container

docker exec -it heka bash

Inside it, the service logs of every container are available under /var/log/kolla/SERVICE_NAME.
To dump a container's own output, run:

docker logs <container_name>

Most containers do not write to stdout, so the command above will usually show nothing.

Troubleshooting

The following error occurred during deploy:

TASK: [rabbitmq | fail msg="Hostname has to resolve to IP address of api_interface"] ***
failed: [localhost] => (item={'cmd': ['getent', 'ahostsv4', 'localhost'], 'end': '2016-06-24 04:51:39.738725', 'stderr': u'', 'stdout': '127.0.0.1       STREAM localhost\n127.0.0.1       DGRAM  \n127.0.0.1       RAW    \n127.0.0.1       STREAM \n127.0.0.1       DGRAM  \n127.0.0.1       RAW    ', 'changed': False, 'rc': 0, 'item': 'localhost', 'warnings': [], 'delta': '0:00:00.033351', 'invocation': {'module_name': u'command', 'module_complex_args': {}, 'module_args': u'getent ahostsv4 localhost'}, 'stdout_lines': ['127.0.0.1       STREAM localhost', '127.0.0.1       DGRAM  ', '127.0.0.1       RAW    ', '127.0.0.1       STREAM ', '127.0.0.1       DGRAM  ', '127.0.0.1       RAW    '], 'start': '2016-06-24 04:51:39.705374'}) => {"failed": true, "item": {"changed": false, "cmd": ["getent", "ahostsv4", "localhost"], "delta": "0:00:00.033351", "end": "2016-06-24 04:51:39.738725", "invocation": {"module_args": "getent ahostsv4 localhost", "module_complex_args": {}, "module_name": "command"}, "item": "localhost", "rc": 0, "start": "2016-06-24 04:51:39.705374", "stderr": "", "stdout": "127.0.0.1       STREAM localhost\n127.0.0.1       DGRAM  \n127.0.0.1       RAW    \n127.0.0.1       STREAM \n127.0.0.1       DGRAM  \n127.0.0.1       RAW    ", "stdout_lines": ["127.0.0.1       STREAM localhost", "127.0.0.1       DGRAM  ", "127.0.0.1       RAW    ", "127.0.0.1       STREAM ", "127.0.0.1       DGRAM  ", "127.0.0.1       RAW    "], "warnings": []}}
msg: Hostname has to resolve to IP address of api_interface

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
to retry, use: --limit @/root/site.retry

localhost                  : ok=87   changed=24   unreachable=0    failed=1

Solution:

vim /etc/hosts
127.0.0.1     localhost
192.168.2.120 localhost
August 8, 2016
 

1. Install Docker and use the Aliyun registry mirror

Perform the following steps on both the controller node and the compute nodes (Docker is installed on the controller so that Docker images can be downloaded there and imported directly into Glance).
1. Create the yum repo file (using the Aliyun mirror here)

# tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/docker-engine/yum/gpg
EOF

2. Install Docker

# yum install docker-engine

3. Configure Docker to use the Aliyun registry mirror

# cp -n /lib/systemd/system/docker.service /etc/systemd/system/docker.service
# sed -i "s|ExecStart=/usr/bin/dockerd|ExecStart=/usr/bin/dockerd --registry-mirror=https://dhxb****.mirror.aliyuncs.com|g" /etc/systemd/system/docker.service
# systemctl daemon-reload

https://dhxb****.mirror.aliyuncs.com above is my accelerator address; to get your own, see Aliyun: https://cr.console.aliyun.com/#/accelerator
4. Start Docker and enable it at boot

# systemctl enable docker
# systemctl start docker

2. Install and configure nova-docker on the compute node

1. Install nova-docker

# usermod -aG docker nova
# yum -y install git python-pip
# pip install -e git+https://github.com/openstack/nova-docker#egg=novadocker
# cd src/novadocker/
# python setup.py install

2. Configure /etc/nova/nova.conf to use the Docker driver

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

[docker]
# Commented out. Uncomment these if you'd like to customize:
## vif_driver=novadocker.virt.docker.vifs.DockerGenericVIFDriver
## snapshots_directory=/var/tmp/my-snapshot-tempdir

Copy /src/novadocker/etc/nova/rootwrap.d/docker.filters to /etc/nova/rootwrap.d/docker.filters, fix the ownership of rootwrap.d, and then restart the nova-compute service:

# cp -R /src/novadocker/etc/nova/rootwrap.d /etc/nova/
# chown -R root:nova /etc/nova/rootwrap.d
# systemctl restart openstack-nova-compute

3. Upload an image to Glance

1. Enable the driver in the Glance configuration file

# vim /etc/glance/glance-api.conf
[image_format]
container_formats = ami,ari,aki,bare,ovf,docker

2. Restart the glance-api service

# openstack-service restart glance

3. Pull a Docker image and upload it to Glance

# docker pull cirros
# docker save cirros | glance image-create --container-format=docker --disk-format=raw --name cirros
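
The image should now appear in Glance; a quick check (sketch):

# glance image-list | grep cirros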

4. Create a Docker instance

Create an instance

# nova boot --image cirros --flavor m1.tiny --nic net-id=59cc6a1d-0cc1-44c7-8b0a-9dc071fde397 cirros-docker

Use the docker command to check the container

# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dc6e1c21887d cirros "/sbin/init" 47 minutes ago Up 47 minutes nova-bfeeb788-7fdf-476f-904a-8cc8ee3eb81c

Note: the instance console in the dashboard does not work.

Problems encountered

After switching to the Docker driver, the nova-compute log can be viewed in /var/log/messages.
1. Restarting the nova-compute service fails

……
Aug 08 12:14:51 compute2 nova-compute[21233]: 2016-08-08 12:14:51.388 21233 ERROR nova.virt.driver File "/usr/lib/python2.7/site-packages/oslo_config
Aug 08 12:14:51 compute2 nova-compute[21233]: 2016-08-08 12:14:51.388 21233 ERROR nova.virt.driver __import__(module_str)
Aug 08 12:14:51 compute2 nova-compute[21233]: 2016-08-08 12:14:51.388 21233 ERROR nova.virt.driver ImportError: No module named conf.netconf

Solution:

# cd src/novadocker/
# git checkout -b stable/liberty origin/stable/liberty
# python setup.py install

After that, the nova-compute service starts normally.

2. An error is reported when creating an instance

404 Client Error: Not Found ("No such image: cirros-docker")]

Solution: the image name used when uploading to Glance must match the Docker image name exactly, otherwise the error above occurs when creating the instance.

3. A namespace permission error is reported when starting the instance

Aug 8 14:12:59 compute2 nova-compute: 2016-08-08 14:12:59.200 12444 ERROR nova.compute.manager [instance: 3608b187-fe0c-4554-aa96-d5ed630042bc] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ip netns exec ee27f11ab9dc265ad864dbcb8b9a800693fd9517f0bcfa166e3ccae66c300843 ip link set lo up
Aug 8 14:12:59 compute2 nova-compute: 2016-08-08 14:12:59.200 12444 ERROR nova.compute.manager [instance: 3608b187-fe0c-4554-aa96-d5ed630042bc] Exit code: 1
Aug 8 14:12:59 compute2 nova-compute: 2016-08-08 14:12:59.200 12444 ERROR nova.compute.manager [instance: 3608b187-fe0c-4554-aa96-d5ed630042bc] Stdout: u''
Aug 8 14:12:59 compute2 nova-compute: 2016-08-08 14:12:59.200 12444 ERROR nova.compute.manager [instance: 3608b187-fe0c-4554-aa96-d5ed630042bc] Stderr: u'Cannot open network namespace "ee27f11ab9dc265ad864dbcb8b9a800693fd9517f0bcfa166e3ccae66c300843": Permission denied\n'

Solution: disable SELinux

# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# reboot
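
Editing /etc/selinux/config only takes effect after a reboot; to switch SELinux to permissive mode immediately without rebooting, setenforce can be used as well (a sketch):

# setenforce 0
# getenforce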

References:
http://blog.csdn.net/zhangli_perdue/article/details/50155705
https://github.com/openstack/nova-docker
http://heavenkong.blogspot.com/2016/07/resolved-mitaka-novadocker-error.html

July 27, 2016
 

Test environment

controller1 192.168.2.240
controller2 192.168.2.241
compute1 192.168.2.242
compute2 192.168.2.243
compute3 192.168.2.248
compute4 192.168.2.249

Use different storage backends on different compute nodes.
[diagram]

Compute node configuration

1. Scheduler

To make the Nova scheduler support the filtering used below, it needs to support AggregateInstanceExtraSpecsFilter. Edit /etc/nova/nova.conf on the controller node, add or modify the following option, and then restart the nova-scheduler service.

# vim /etc/nova/nova.conf
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,AggregateInstanceExtraSpecsFilter

# systemctl restart openstack-nova-scheduler.service
2. Local storage configuration

Nova supports local storage by default; no configuration is needed. To support migration, shared storage (NFS, etc.) can be configured.

3. Ceph storage configuration

Edit /etc/nova/nova.conf on the compute nodes, add or modify the following options, and then restart the nova-compute service (not every step is written out here; operations such as importing the secret UUID need to be done yourself — a sketch follows the restart command below).

# vim /etc/nova/nova.conf
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid =20c3fd98-2bab-457a-b1e2-12e50dc6c98e
disk_cachemodes="network=writeback"
inject_partition=-2
inject_key=False
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST

# systemctl restart openstack-nova-compute.service
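
As noted above, the Ceph client key also has to be registered as a libvirt secret on each compute node; a minimal sketch, assuming the client.cinder key and the UUID from nova.conf above (adjust to your own Ceph setup):

# cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>20c3fd98-2bab-457a-b1e2-12e50dc6c98e</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
# virsh secret-define --file secret.xml
# virsh secret-set-value --secret 20c3fd98-2bab-457a-b1e2-12e50dc6c98e --base64 $(ceph auth get-key client.cinder)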

OpenStack configuration

Create host aggregates for the Ceph compute nodes and the local-storage compute nodes

# nova aggregate-create ephemeral-compute-storage
+----+---------------------------+-------------------+-------+----------+
| Id | Name                      | Availability Zone | Hosts | Metadata |
+----+---------------------------+-------------------+-------+----------+
| 8  | ephemeral-compute-storage | -                 |       |          |
+----+---------------------------+-------------------+-------+----------+

# nova aggregate-create ceph-compute-storage
+----+----------------------+-------------------+-------+----------+
| Id | Name                 | Availability Zone | Hosts | Metadata |
+----+----------------------+-------------------+-------+----------+
| 9  | ceph-compute-storage | -                 |       |          |
+----+----------------------+-------------------+-------+----------+

You can use the nova hypervisor-list command to check your hypervisor names

# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | compute1            | up    | enabled |
| 2  | compute2            | up    | enabled |
| 4  | compute4            | up    | enabled |
| 7  | compute3            | up    | enabled |
+----+---------------------+-------+---------+

In this example, the hosts are grouped as follows:
Local storage: compute1, compute2
Ceph storage: compute3, compute4
Add the hosts to the aggregates

# nova aggregate-add-host ephemeral-compute-storage compute1
# nova aggregate-add-host ephemeral-compute-storage compute2
# nova aggregate-add-host ceph-compute-storage compute3
# nova aggregate-add-host ceph-compute-storage compute4

Create new metadata for the host aggregates

# nova aggregate-set-metadata ephemeral-compute-storage ephemeralcomputestorage=true
# nova aggregate-set-metadata ceph-compute-storage cephcomputestorage=true

Create flavors for instances using local storage and Ceph storage

# nova flavor-create m1.ephemeral-compute-storage 8 128 1 1
# nova flavor-create m1.ceph-compute-storage 9 128 1 1

Bind the matching properties to the flavors

# nova flavor-key m1.ceph-compute-storage set aggregate_instance_extra_specs:cephcomputestorage=true
# nova flavor-key m1.ephemeral-compute-storage set aggregate_instance_extra_specs:ephemeralcomputestorage=true
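
Instances scheduled with these flavors will then land on the matching hosts; for example (a sketch — the image and network names are placeholders):

# nova boot --flavor m1.ceph-compute-storage --image cirros --nic net-id=<private-net-id> ceph-1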

Verification

Boot 4 instances with the m1.ceph-compute-storage flavor; all of the instance disk files end up in the Ceph pool

[root@controller1 ~]# nova list
+--------------------------------------+--------+--------+------------+-------------+---------------------+
| ID                                   | Name   | Status | Task State | Power State | Networks            |
+--------------------------------------+--------+--------+------------+-------------+---------------------+
| 5d6bd85e-9b75-4035-876c-30e997ea0a98 | ceph-1 | BUILD  | spawning   | NOSTATE     | private=172.16.1.49 |
| aa666bd9-e370-4c53-8af3-f1bf7ba77900 | ceph-2 | BUILD  | spawning   | NOSTATE     | private=172.16.1.48 |
| 56d6a3a8-e6c4-4860-bd72-2e0aa0fa55f2 | ceph-3 | BUILD  | spawning   | NOSTATE     | private=172.16.1.47 |
| 2b9577d8-2448-4d8a-ba98-253b0f597b12 | ceph-4 | BUILD  | spawning   | NOSTATE     | private=172.16.1.50 |
+--------------------------------------+--------+--------+------------+-------------+---------------------+

[root@node1 ~]# rbd ls vms
2b9577d8-2448-4d8a-ba98-253b0f597b12_disk
56d6a3a8-e6c4-4860-bd72-2e0aa0fa55f2_disk
5d6bd85e-9b75-4035-876c-30e997ea0a98_disk
aa666bd9-e370-4c53-8af3-f1bf7ba77900_disk

Delete all the instances (to make verification easier) and boot four instances with the m1.ephemeral-compute-storage flavor; the instance disk files are spread across the local storage of compute1 and compute2 (no shared storage such as NFS is configured)

[root@controller1 ~]# nova list
+--------------------------------------+---------+--------+------------+-------------+---------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks            |
+--------------------------------------+---------+--------+------------+-------------+---------------------+
| 1c1ce5f3-b5aa-47dd-806c-e2eba60b9eb0 | local-1 | ACTIVE | -          | Running     | private=172.16.1.51 |
| 5a3e4074-619e-423a-a649-e24771f9fbd1 | local-2 | ACTIVE | -          | Running     | private=172.16.1.54 |
| 5b838406-b9cf-4943-89f3-79866f8e6e19 | local-3 | ACTIVE | -          | Running     | private=172.16.1.52 |
| 30e7289f-bc80-4374-aabb-906897b8141c | local-4 | ACTIVE | -          | Running     | private=172.16.1.53 |
+--------------------------------------+---------+--------+------------+-------------+---------------------+

[root@compute1 ~]# ll /var/lib/nova/instances/
total 4
drwxr-xr-x 2 nova nova  69 Jul 27 10:40 1c1ce5f3-b5aa-47dd-806c-e2eba60b9eb0
drwxr-xr-x 2 nova nova  69 Jul 27 10:40 5b838406-b9cf-4943-89f3-79866f8e6e19
drwxr-xr-x 2 nova nova  53 Jul 25 16:01 _base
-rw-r--r-- 1 nova nova  31 Jul 27 10:33 compute_nodes
drwxr-xr-x 2 nova nova 143 Jul 25 16:01 locks
drwxr-xr-x 2 nova nova   6 Jul  6 15:51 snapshots

[root@compute2 ~]# ll /var/lib/nova/instances/
total 4
drwxr-xr-x 2 nova nova  69 Jul 27 10:40 30e7289f-bc80-4374-aabb-906897b8141c
drwxr-xr-x 2 nova nova  69 Jul 27 10:40 5a3e4074-619e-423a-a649-e24771f9fbd1
drwxr-xr-x 2 nova nova  53 Jul 25 16:02 _base
-rw-r--r-- 1 nova nova  62 Jul 27 10:33 compute_nodes
drwxr-xr-x 2 nova nova 143 Jul 25 16:01 locks

Additional notes

When live-migrating an instance, hosts outside its aggregate can still be selected as targets, but the migration fails; a restriction allowing migration only within the instance's own aggregate still needs to be added.

 

Reference: https://www.sebastien-han.fr/blog/2014/09/01/openstack-use-ephemeral-and-persistent-root-storage-for-different-hypervisors/

March 24, 2016
 

In production you should run at least three RabbitMQ servers; in a test environment two are enough. We configure two nodes here, controller1 and controller2.

 

Configuring RabbitMQ for HA queues

  1. On controller1, start RabbitMQ with the following command
    # systemctl start rabbitmq-server
  2. Copy the Erlang cookie from controller1 to each of the other nodes
    # scp root@NODE:/var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie

    Replace NODE with controller1 or its IP address.

  3. On each target node, fix the owner, group and permissions of the erlang.cookie file
    # chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
    # chmod 400 /var/lib/rabbitmq/.erlang.cookie
  4. Enable rabbitmq-server at boot and start it on the other nodes
    # systemctl enable rabbitmq-server
    # systemctl start rabbitmq-server
  5. Use the following command to confirm that rabbitmq-server is running correctly on each node
    # rabbitmqctl cluster_status
    Cluster status of node rabbit@controller1...
    [{nodes,[{disc,[rabbit@controller1]}]},
    {running_nodes,[rabbit@controller1]},
    {partitions,[]}]
    ...done.
  6. On every node except the first (controller1), run the following commands to join the cluster
    # rabbitmqctl stop_app
    Stopping node rabbit@controller2...
    ...done.
    # rabbitmqctl join_cluster --ram rabbit@controller1
    # rabbitmqctl start_app
    Starting node rabbit@controller2...
    ...done.
  7. Confirm the cluster status
    # rabbitmqctl cluster_status
    Cluster status of node rabbit@controller1...
    [{nodes,[{disc,[rabbit@controller1]},{ram,[rabbit@controller2]}]}, \
        {running_nodes,[rabbit@controller2,rabbit@controller1]}]
  8. To ensure that all queues except auto-named ones are mirrored across all running nodes, set the ha-mode policy; run this on any node
    # rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'

 

Configuring OpenStack services to use RabbitMQ HA queues

  1. List all RabbitMQ hosts
    rabbit_hosts=controller1:5672,controller2:5672
  2. How long to wait before retrying a connection to RabbitMQ (in seconds)
    rabbit_retry_interval=1
  3. How long to back off between retries when connecting to RabbitMQ (also in seconds)
    rabbit_retry_backoff=2
  4. Maximum number of RabbitMQ connection retries (0 means retry forever)
    rabbit_max_retries=0
  5. Use durable queues in RabbitMQ
    rabbit_durable_queues=true
  6. Use HA queues in RabbitMQ
    rabbit_ha_queues=true
NOTE: If you want to switch an old configuration that did not use HA queues over to HA queues, you need to restart the service and reset RabbitMQ
# rabbitmqctl stop_app
# rabbitmqctl reset
# rabbitmqctl start_app
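
Taken together, the options above go into the messaging section of each OpenStack service's configuration file; a minimal sketch (assuming an oslo.messaging based service such as nova, where the section is [DEFAULT] or [oslo_messaging_rabbit] depending on the release):

[oslo_messaging_rabbit]
rabbit_hosts=controller1:5672,controller2:5672
rabbit_retry_interval=1
rabbit_retry_backoff=2
rabbit_max_retries=0
rabbit_durable_queues=true
rabbit_ha_queues=true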

 

 

December 29, 2015
 
  1. Local environment
    OS: CentOS Linux release 7.2.1511 (Core)
    Host IP: 172.16.33.201
    Gateway: 172.16.33.254
  2. Download devstack and prepare
    This differs slightly from other write-ups: when cloning devstack you need to specify the branch, otherwise the OpenStack installation complains that a script does not exist.

    # cd /opt
    # git clone https://git.openstack.org/openstack-dev/devstack -b stable/liberty

    Create the stack user and change the owner of the devstack directory

    # cd /opt/devstack/tools/
    # ./create-stack-user.sh
    # chown -R stack:stack /opt/devstack
  3. Create the local.conf file; an example follows, adjust as needed
    # vim /opt/devstack/local.conf
    
    [[local|localrc]]
    # Define images to be automatically downloaded during the DevStack built process.
    IMAGE_URLS="http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
    # Credentials
    DATABASE_PASSWORD=123456
    ADMIN_PASSWORD=123456
    SERVICE_PASSWORD=123456
    SERVICE_TOKEN=pass
    RABBIT_PASSWORD=123456
    #FLAT_INTERFACE=eth0
    
    HOST_IP=172.16.33.201
    SERVICE_HOST=172.16.33.201
    MYSQL_HOST=172.16.33.201
    RABBIT_HOST=172.16.33.201
    GLANCE_HOSTPORT=172.16.33.201:9292
    
    
    ## Neutron options
    Q_USE_SECGROUP=True
    FLOATING_RANGE=172.16.33.0/24
    FIXED_RANGE=10.0.0.0/24
    Q_FLOATING_ALLOCATION_POOL=start=172.16.33.202,end=172.16.33.210
    PUBLIC_NETWORK_GATEWAY=172.16.33.254
    Q_L3_ENABLED=True
    PUBLIC_INTERFACE=eth0
    Q_USE_PROVIDERNET_FOR_PUBLIC=True
    OVS_PHYSICAL_BRIDGE=br-ex
    PUBLIC_BRIDGE=br-ex
    OVS_BRIDGE_MAPPINGS=public:br-ex
    
    
    # Work offline
    #OFFLINE=True
    # Reclone each time
    RECLONE=False
    
    
    # Logging
    # -------
    # By default ``stack.sh`` output only goes to the terminal where it runs. It can
    # be configured to additionally log to a file by setting ``LOGFILE`` to the full
    # path of the destination log file. A timestamp will be appended to the given name.
    LOGFILE=/opt/stack/logs/stack.sh.log
    VERBOSE=True
    LOG_COLOR=True
    SCREEN_LOGDIR=/opt/stack/logs
    
    # the number of days by setting ``LOGDAYS``.
    LOGDAYS=1
    # Database Backend MySQL
    enable_service mysql
    # RPC Backend RabbitMQ
    enable_service rabbit
    
    
    # Enable Keystone - OpenStack Identity Service
    enable_service key
    # Horizon - OpenStack Dashboard Service
    enable_service horizon
    # Enable Swift - Object Storage Service without replication.
    enable_service s-proxy s-object s-container s-account
    SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
    SWIFT_REPLICAS=1
    # Enable Glance - OpenStack Image service
    enable_service g-api g-reg
    
    # Enable Cinder - Block Storage service for OpenStack
    VOLUME_GROUP="cinder-volumes"
    enable_service cinder c-api c-vol c-sch c-bak
    # Enable Heat (orchestration) Service
    enable_service heat h-api h-api-cfn h-api-cw h-eng
    # Enable Trove (database) Service
    enable_service trove tr-api tr-tmgr tr-cond
    # Enable Sahara (data_processing) Service
    enable_service sahara
    
    # Enable Tempest - The OpenStack Integration Test Suite
    enable_service tempest
    
    # Enabling Neutron (network) Service
    disable_service n-net
    enable_service q-svc
    enable_service q-agt
    enable_service q-dhcp
    enable_service q-l3
    enable_service q-meta
    enable_service q-metering
    enable_service neutron
    
    
    ## Neutron - Load Balancing
    enable_service q-lbaas
    ## Neutron - Firewall as a Service
    enable_service q-fwaas
    ## Neutron - VPN as a Service
    enable_service q-vpn
    # VLAN configuration.
    #Q_PLUGIN=ml2
    #ENABLE_TENANT_VLANS=True
    
    
    # GRE tunnel configuration
    #Q_PLUGIN=ml2
    #ENABLE_TENANT_TUNNELS=True
    # VXLAN tunnel configuration
    Q_PLUGIN=ml2
    Q_ML2_TENANT_NETWORK_TYPE=vxlan
    
    # Enable Ceilometer - Metering Service (metering + alarming)
    enable_service ceilometer-acompute ceilometer-acentral ceilometer-collector ceilometer-api
    enable_service ceilometer-alarm-notify ceilometer-alarm-eval
    enable_service ceilometer-anotification
    ## Enable NoVNC
    enable_service n-novnc n-cauth
    
    # Enable the Ceilometer devstack plugin
    enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer.git
    
    # Branches
    KEYSTONE_BRANCH=stable/liberty
    NOVA_BRANCH=stable/liberty
    NEUTRON_BRANCH=stable/liberty
    SWIFT_BRANCH=stable/liberty
    GLANCE_BRANCH=stable/liberty
    CINDER_BRANCH=stable/liberty
    HEAT_BRANCH=stable/liberty
    TROVE_BRANCH=stable/liberty
    HORIZON_BRANCH=stable/liberty
    SAHARA_BRANCH=stable/liberty
    CEILOMETER_BRANCH=stable/liberty
    TROVE_BRANCH=stable/liberty
    
    # Select Keystone's token format
    # Choose from 'UUID', 'PKI', or 'PKIZ'
    # INSERT THIS LINE...
    KEYSTONE_TOKEN_FORMAT=${KEYSTONE_TOKEN_FORMAT:-UUID}
    KEYSTONE_TOKEN_FORMAT=$(echo ${KEYSTONE_TOKEN_FORMAT} | tr '[:upper:]' '[:lower:]')
    
    
    [[post-config|$NOVA_CONF]]
    [DEFAULT]
    # Ceilometer notification driver
    instance_usage_audit=True
    instance_usage_audit_period=hour
    notify_on_state_change=vm_and_task_state
    notification_driver=nova.openstack.common.notifier.rpc_notifier
    notification_driver=ceilometer.compute.nova_notifier
    
  4. Install OpenStack
    # cd /opt/devstack
    # su stack
    # ./stack.sh

    When the installation completes, it looks like the screenshot below:
    [screenshot]

    Access the dashboard:
    [screenshot]

  5. Command-line usage
    admin user
    # source /opt/devstack/openrc admin admin # load the environment variables
    demo user
    # source /opt/devstack/openrc demo demo # load the environment variables
    

     

November 27, 2015
 

Continued from the previous post: Installing and configuring Kubernetes on CentOS 7

  1. Download the kube-ui image and import it
    The Google registry is blocked, so the image cannot be pulled; it has to be downloaded manually (attached download: kube-ui_v3.tar). Import the image on every minion:

    docker load < kube-ui_v3.tar

     

  2. Create the kube-system namespace
    Create kube-system.json with the following content:

    {
      "kind": "Namespace",
      "apiVersion": "v1",
      "metadata": {
        "name": "kube-system"
      }
    }

    Run the following commands to create the namespace

    # kubectl create -f kube-system.json
    # kubectl get namespace
    NAME          LABELS    STATUS
    default       <none>    Active
    kube-system   <none>    Active
    

     

  3. Create the RC
    Create the file kube-ui-rc.yaml with the following content

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: kube-ui-v3
      namespace: kube-system
      labels:
        k8s-app: kube-ui
        version: v3
        kubernetes.io/cluster-service: "true"
    spec:
      replicas: 3
      selector:
        k8s-app: kube-ui
        version: v3
      template:
        metadata:
          labels:
            k8s-app: kube-ui
            version: v3
            kubernetes.io/cluster-service: "true"
        spec:
          containers:
          - name: kube-ui
            image: gcr.io/google_containers/kube-ui:v3
            resources:
              limits:
                cpu: 100m
                memory: 50Mi
            ports:
            - containerPort: 8080
            livenessProbe:
              httpGet:
                path: /
                port: 8080
              initialDelaySeconds: 30
              timeoutSeconds: 5
    

    Run the following commands to create the RC and check it

    # kubectl create -f kube-ui-rc.yaml
    
    #kubectl get rc --all-namespaces
    NAMESPACE     CONTROLLER   CONTAINER(S)   IMAGE(S)                              SELECTOR                     REPLICAS
    kube-system   kube-ui-v3   kube-ui        gcr.io/google_containers/kube-ui:v3   k8s-app=kube-ui,version=v3   3
    

     

  4. Create the service
    Create the file kube-ui-svc.yaml with the following content

    apiVersion: v1
    kind: Service
    metadata:
      name: kube-ui
      namespace: kube-system
      labels:
        k8s-app: kube-ui
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "KubeUI"
    spec:
      selector:
        k8s-app: kube-ui
      ports:
      - port: 80
        targetPort: 8080

    Run the following commands to create the service, then check the services and pods

    # kubectl create -f kube-ui-svc.yaml
    # kubectl get rc,pods --all-namespaces
    NAMESPACE     CONTROLLER   CONTAINER(S)   IMAGE(S)                              SELECTOR                     REPLICAS
    kube-system   kube-ui-v3   kube-ui        gcr.io/google_containers/kube-ui:v3   k8s-app=kube-ui,version=v3   3
    NAMESPACE     NAME               READY     STATUS    RESTARTS   AGE
    kube-system   kube-ui-v3-0zyjp   1/1       Running   0          21h
    kube-system   kube-ui-v3-6s1d0   1/1       Running   0          21h
    kube-system   kube-ui-v3-i0uqs   1/1       Running   0          21h
    

    You can see that the kube-ui service was created successfully and is running 3 replicas.

  5. Configure the flannel network on the master so it can reach the minions
    Install flannel on the master and start it

    # yum install flannel -y
    # systemctl enable flanneld
    # systemctl start flanneld
  6. Access kube-ui
    Visit http://master_ip:8080/ui/ ; it automatically redirects to http://kube-ui:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui/#/dashboard/ , the kube-ui dashboard page, as shown below:
    [screenshot: kube-ui dashboard]

    It shows the minions' system information, pods, RCs, services, and so on.

October 27, 2015
 

1. Preparation

1. Operating system details

Three hosts are needed, each with a minimal install of CentOS 7.1 updated to the latest packages; details are in the table below

Role Hostname IP
Master master 192.168.0.79
Minion1 minion-1 192.168.0.80
Minion2 minion-2 192.168.0.81

2. Disable firewalld on every host and use iptables instead

Run the following commands to stop and disable firewalld

# systemctl stop firewalld.service    # stop firewalld
# systemctl disable firewalld.service # disable firewalld at boot

Then install iptables and enable it

# yum install -y iptables-services     # install
# systemctl start iptables.service  # start the iptables service
# systemctl enable iptables.service # enable the firewall at boot

3. Install the NTP service

# yum install -y ntp
# systemctl start ntpd
# systemctl enable ntpd

2. Installation and configuration

Note: kubernetes, etcd and friends are already in the CentOS EPEL repository and can be installed directly with yum (epel-release is required)

1. Install the Kubernetes master

•  Install kubernetes and etcd with the following command

# yum install -y kubernetes etcd

•  Edit /etc/etcd/etcd.conf so that etcd listens on all IP addresses; make sure the following lines are uncommented and set to the values below

# vim /etc/etcd/etcd.conf
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"

•  Edit the Kubernetes API server configuration file /etc/kubernetes/apiserver; make sure the following lines are uncommented and set to the values below

#  vim /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet_port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

•  Start the etcd, kube-apiserver, kube-controller-manager and kube-scheduler services, and enable them at boot

# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do 
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES 
done

•  Define the flannel network configuration in etcd; this configuration is distributed by the flannel service to the minions:

# etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
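
The stored value can be verified with (a quick check):

# etcdctl get /coreos.com/network/config
{"Network":"172.17.0.0/16"}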

• Add iptables rules to allow the required ports

iptables -I INPUT -p tcp --dport 2379 -j ACCEPT
iptables -I INPUT -p tcp --dport 10250 -j ACCEPT
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
iptables-save
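
Note that iptables-save by itself only prints the current rules to stdout; with the iptables-services package installed above, the rules can be persisted across reboots like this (a sketch):

service iptables save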

•  Check the node information (no nodes have been configured yet, so the list should be empty)

# kubectl get nodes
NAME             LABELS              STATUS

2. Install the Kubernetes minions (nodes)

Note: the following steps should be performed on minion1 and minion2 (more minions can be added the same way)

•  Install kubernetes and flannel with yum

# yum install -y flannel kubernetes

•  Configure the etcd server for the flannel service; edit the following line in /etc/sysconfig/flanneld so it connects to the master

# vim /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.0.79:2379"        # change to the IP of the etcd server

•  Edit the default Kubernetes configuration in /etc/kubernetes/config and make sure KUBE_MASTER points to the Kubernetes master API server:

# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.0.79:8080"

•  Edit the following lines in /etc/kubernetes/kubelet:

minion1:

# vim /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=192.168.0.80"
KUBELET_API_SERVER="--api_servers=http://192.168.0.79:8080"
KUBELET_ARGS=""

minion2:

# vim /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=192.168.0.81"
KUBELET_API_SERVER="--api_servers=http://192.168.0.79:8080"
KUBELET_ARGS=""

•  Start the kube-proxy, kubelet, docker and flanneld services, and enable them at boot

# for SERVICES in kube-proxy kubelet docker flanneld; do 
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES 
done

•  On each minion node you should see two new interfaces, docker0 and flannel0. Each minion should get a different IP range on flannel0, like this:

minion1:

# ip a | grep flannel | grep inet
    inet 172.17.29.0/16 scope global flannel0

minion2:

# ip a | grep flannel | grep inet
    inet 172.17.37.0/16 scope global flannel0

•   Add the iptables rules:

iptables -I INPUT -p tcp --dport 2379 -j ACCEPT
iptables -I INPUT -p tcp --dport 10250 -j ACCEPT
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT

•  Now log in to the Kubernetes master node and verify the status of the minions:

# kubectl get nodes
NAME           LABELS                                STATUS
192.168.0.80   kubernetes.io/hostname=192.168.0.80   Ready
192.168.0.81   kubernetes.io/hostname=192.168.0.81   Ready

At this point the Kubernetes cluster is configured and running, and we can move on to the next steps.

3. Creating Pods (Containers)

To create a pod, we define a yaml or json configuration file on the Kubernetes master and then create the pod with the kubectl command

# mkdir -p k8s/pods
# cd k8s/pods/
# vim nginx.yaml

Add the following content to nginx.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

Create the pod:

# kubectl create -f nginx.yaml

This fails with the following error:

Error from server: error when creating "nginx.yaml": Pod "nginx" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account

The fix is to edit /etc/kubernetes/apiserver, remove SecurityContextDeny,ServiceAccount from KUBE_ADMISSION_CONTROL, and restart the kube-apiserver.service:

#vim /etc/kubernetes/apiserver
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

#systemctl restart kube-apiserver.service

Then create the pod again:

# kubectl create -f nginx.yaml
pods/nginx

Check the pod:

# kubectl get pod nginx
NAME      READY     STATUS                                            RESTARTS   AGE
nginx     0/1       Image: nginx is not ready on the node   0          34s

The STATUS stays like this and the pod is never created, so let's troubleshoot. Looking at the pod description reveals the following errors:

# kubectl describe pod nginx 
Wed, 28 Oct 2015 10:25:30 +0800       Wed, 28 Oct 2015 10:25:30 +0800 1       {kubelet 192.168.0.81}  implicitly required container POD       pulled          Successfully pulled Pod container image "gcr.io/google_containers/pause:0.8.0"
  Wed, 28 Oct 2015 10:25:30 +0800       Wed, 28 Oct 2015 10:25:30 +0800 1       {kubelet 192.168.0.81}  implicitly required container POD       failed          Failed to create docker container with error: no such image
  Wed, 28 Oct 2015 10:25:30 +0800       Wed, 28 Oct 2015 10:25:30 +0800 1       {kubelet 192.168.0.81}                                          failedSync      Error syncing pod, skipping: no such image
  Wed, 28 Oct 2015 10:27:30 +0800       Wed, 28 Oct 2015 10:29:30 +0800 2       {kubelet 192.168.0.81}  implicitly required container POD       failed          Failed to pull image "gcr.io/google_containers/pause:0.8.0": image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request.  details: (API error (500): invalid registry endpoint "http://gcr.io/v0/". HTTPS attempt: unable to ping registry endpoint https://gcr.io/v0/
v2 ping attempt failed with error: Get https://gcr.io/v2/: dial tcp 173.194.72.82:443: i/o timeout

Pinging gcr.io manually shows that it is unreachable (probably blocked).

I found the pause:0.8.0 image online and imported it on every minion:

# docker load --input pause-0.8.0.tar

Attached download: pause-0.8.0.tar

Running the following command again now creates the pod successfully

#kubectl create -f nginx.yaml
pods/nginx

Check the pod

# kubectl get pod nginx
NAME      READY     STATUS                                            RESTARTS   AGE
nginx      1/1             Running                                            0               2min

 

September 2, 2015
 
        OFBiz is a well-known open-source e-commerce platform. It provides a framework for building large and medium-sized, enterprise-grade, cross-platform, cross-database, cross-application-server, multi-tier, distributed e-commerce web applications based on the latest J2EE/XML specifications and standards. Its main feature is a complete set of components and tools for developing Java-based web applications, including an entity engine, service engine, messaging engine, workflow engine, rule engine, and more. OFBiz is now an official Apache top-level project: Apache OFBiz.
        OFBiz ships with the Derby database, a small database suitable for test systems but not for production, so we usually need to migrate the OFBiz database to something else. The steps below cover migrating to MySQL; migrating to other databases is similar.
  1. Install MySQL and create the OFBiz databases
    Use the following commands to create the ofbiz user (password ofbiz) and the three databases ofbiz, ofbizolap and ofbiztenant

    mysql -u root 
    >create user 'ofbiz'@'localhost' identified by 'ofbiz';   
    >create database ofbiz DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_general_ci;  
    >create database ofbizolap DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_general_ci;  
    >create database ofbiztenant DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_general_ci;  
    >grant all on *.* to 'ofbiz'@'localhost';
    >flush privileges;
    >quit;
    
  2. Edit the OFBiz configuration file
    Edit entityengine.xml to change the default database engine and the username, password and other connection details

    vim ofbiz_HOME/framework/entity/config/entityengine.xml

    Change the delegator name tags to the following (i.e. comment out Derby and enable MySQL)

    <delegator name="default" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main" distributed-cache-clear-enabled="false">
            <!-- <group-map group-name="org.ofbiz" datasource-name="localderby"/>
            <group-map group-name="org.ofbiz.olap" datasource-name="localderbyolap"/>
            <group-map group-name="org.ofbiz.tenant" datasource-name="localderbytenant"/> -->
            <group-map group-name="org.ofbiz" datasource-name="localmysql"/>
            <group-map group-name="org.ofbiz.olap" datasource-name="localmysqlolap"/>
            <group-map group-name="org.ofbiz.tenant" datasource-name="localmysqltenant"/>
            <!-- <group-map group-name="org.ofbiz" datasource-name="localpostnew"/>
            <group-map group-name="org.ofbiz.olap" datasource-name="localpostolap"/>
            <group-map group-name="org.ofbiz.tenant" datasource-name="localposttenant"/> -->
        </delegator>
        <delegator name="default-no-eca" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main" entity-eca-enabled="false" distributed-cache-clear-enabled="false">
            <!-- <group-map group-name="org.ofbiz" datasource-name="localderby"/>
            <group-map group-name="org.ofbiz.olap" datasource-name="localderbyolap"/>
            <group-map group-name="org.ofbiz.tenant" datasource-name="localderbytenant"/> -->
            <group-map group-name="org.ofbiz" datasource-name="localmysql"/>
            <group-map group-name="org.ofbiz.olap" datasource-name="localmysqlolap"/>
            <group-map group-name="org.ofbiz.tenant" datasource-name="localmysqltenant"/>
            <!-- <group-map group-name="org.ofbiz" datasource-name="localpostnew"/>
            <group-map group-name="org.ofbiz.olap" datasource-name="localpostolap"/>
            <group-map group-name="org.ofbiz.tenant" datasource-name="localposttenant"/>  -->
        </delegator>
    
        <!-- be sure that your default delegator (or the one you use) uses the same datasource for test. You must run "ant load-demo" before running "ant run-tests" -->
        <delegator name="test" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main">
            <!-- <group-map group-name="org.ofbiz" datasource-name="localderby"/>
            <group-map group-name="org.ofbiz.olap" datasource-name="localderbyolap"/>
            <group-map group-name="org.ofbiz.tenant" datasource-name="localderbytenant"/> -->
            <group-map group-name="org.ofbiz" datasource-name="localmysql"/>
            <group-map group-name="org.ofbiz.olap" datasource-name="localmysqlolap"/>
            <group-map group-name="org.ofbiz.tenant" datasource-name="localmysqltenant"/>
            <!-- <group-map group-name="org.ofbiz" datasource-name="localpostnew"/>
            <group-map group-name="org.ofbiz.olap" datasource-name="localpostolap"/>
            <group-map group-name="org.ofbiz.tenant" datasource-name="localposttenant"/>  -->
        </delegator>
    

    In the datasource name sections, be sure to update the database login details as well as the character set and collation

    <datasource name="localmysql"
                helper-class="org.ofbiz.entity.datasource.GenericHelperDAO"
                field-type-name="mysql"
                check-on-start="true"
                add-missing-on-start="true"
                check-pks-on-start="false"
                use-foreign-keys="true"
                join-style="ansi-no-parenthesis"
                alias-view-columns="false"
                drop-fk-use-foreign-key-keyword="true"
                table-type="InnoDB"
                character-set="utf8"
                collate="utf8_general_ci">
            <read-data reader-name="tenant"/>
            <read-data reader-name="seed"/>
            <read-data reader-name="seed-initial"/>
            <read-data reader-name="demo"/>
            <read-data reader-name="ext"/>
            <read-data reader-name="ext-test"/>
            <read-data reader-name="ext-demo"/>
            <inline-jdbc
                    jdbc-driver="com.mysql.jdbc.Driver"
                    jdbc-uri="jdbc:mysql://127.0.0.1:3306/ofbiz?autoReconnect=true"
                    jdbc-username="ofbiz"
                    jdbc-password="ofbiz"
                    isolation-level="ReadCommitted"
                    pool-minsize="2"
                    pool-maxsize="250"
                    time-between-eviction-runs-millis="600000"/><!-- Please note that at least one person has experienced a problem with this value with MySQL
                    and had to set it to -1 in order to avoid this issue.
                    For more look at http://markmail.org/thread/5sivpykv7xkl66px and http://commons.apache.org/dbcp/configuration.html-->
            <!-- <jndi-jdbc jndi-server-name="localjndi" jndi-name="java:/MySqlDataSource" isolation-level="Serializable"/> -->
        </datasource>
     <datasource name="localmysqlolap"
                helper-class="org.ofbiz.entity.datasource.GenericHelperDAO"
                field-type-name="mysql"
                check-on-start="true"
                add-missing-on-start="true"
                check-pks-on-start="false"
                use-foreign-keys="true"
                join-style="ansi-no-parenthesis"
                alias-view-columns="false"
                drop-fk-use-foreign-key-keyword="true"
                table-type="InnoDB"
                character-set="utf8"
                collate="utf8_general_ci">
            <read-data reader-name="tenant"/>
            <read-data reader-name="seed"/>
            <read-data reader-name="seed-initial"/>
            <read-data reader-name="demo"/>
            <read-data reader-name="ext"/>
            <read-data reader-name="ext-test"/>
            <read-data reader-name="ext-demo"/>
            <inline-jdbc
                    jdbc-driver="com.mysql.jdbc.Driver"
                    jdbc-uri="jdbc:mysql://127.0.0.1:3306/ofbizolap?autoReconnect=true"
                    jdbc-username="ofbiz"
                    jdbc-password="ofbiz"
                    isolation-level="ReadCommitted"
                    pool-minsize="2"
                    pool-maxsize="250"
                    time-between-eviction-runs-millis="600000"/><!-- Please note that at least one person has experienced a problem with this value with MySQL
                    and had to set it to -1 in order to avoid this issue.
                    For more look at http://markmail.org/thread/5sivpykv7xkl66px and http://commons.apache.org/dbcp/configuration.html-->
            <!-- <jndi-jdbc jndi-server-name="localjndi" jndi-name="java:/MySqlDataSource" isolation-level="Serializable"/> -->
        </datasource>
        <datasource name="localmysqltenant"
                helper-class="org.ofbiz.entity.datasource.GenericHelperDAO"
                field-type-name="mysql"
                check-on-start="true"
                add-missing-on-start="true"
                check-pks-on-start="false"
                use-foreign-keys="true"
                join-style="ansi-no-parenthesis"
                alias-view-columns="false"
                drop-fk-use-foreign-key-keyword="true"
                table-type="InnoDB"
                character-set="utf8"
                collate="utf8_general_ci">
            <read-data reader-name="tenant"/>
            <read-data reader-name="seed"/>
            <read-data reader-name="seed-initial"/>
            <read-data reader-name="demo"/>
            <read-data reader-name="ext"/>
            <read-data reader-name="ext-test"/>
            <read-data reader-name="ext-demo"/>
            <inline-jdbc
                    jdbc-driver="com.mysql.jdbc.Driver"
                    jdbc-uri="jdbc:mysql://127.0.0.1:3306/ofbiztenant?autoReconnect=true"
                    jdbc-username="ofbiz"
                    jdbc-password="ofbiz"
                    isolation-level="ReadCommitted"
                    pool-minsize="2"
                    pool-maxsize="250"
                    time-between-eviction-runs-millis="600000"/><!-- Please note that at least one person has experienced a problem with this value with MySQL
                    and had to set it to -1 in order to avoid this issue.
                    For more look at http://markmail.org/thread/5sivpykv7xkl66px and http://commons.apache.org/dbcp/configuration.html-->
            <!-- <jndi-jdbc jndi-server-name="localjndi" jndi-name="java:/MySqlDataSource" isolation-level="Serializable"/> -->
        </datasource>
  3. Copy the MySQL JDBC driver to the lib directory

    Copy mysql.jar to the lib directory. The connector can be downloaded from http://dev.mysql.com/downloads/connector/j/ ; the version used here is mysql-connector-java-5.1.36-bin:

    cp mysql-connector-java-5.1.36-bin.jar ofbiz_HOME/framework/base/lib/
  4. Load the data and start OFBiz
    cd ofbiz_HOME
    ./ant load-demo           # load the demo data
    ./ant start               # start OFBiz

    This completes configuring OFBiz to use MySQL; for anything else, see the README file in the OFBiz directory
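
    To confirm that the data really went into MySQL rather than Derby, the ofbiz database can be inspected (a quick sketch):

    mysql -u ofbiz -pofbiz ofbiz -e "SHOW TABLES;" | head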

July 20, 2015
 

For how to set up a local Docker registry, see: http://dockerpool.com/static/books/docker_practice/repository/local_repo.html

A few problems came up while running the following command; they are recorded here:

pip install docker-registry
  •  ERROR 1
    Searching for M2Crypto==0.22.3
    Reading https://pypi.python.org/simple/M2Crypto/
    Best match: M2Crypto 0.22.3
    Downloading https://pypi.python.org/packages/source/M/M2Crypto/M2Crypto-0.22.3.tar.gz#md5=573f21aaac7d5c9549798e72ffcefedd
    Processing M2Crypto-0.22.3.tar.gz
    Writing /tmp/easy_install-vVPR1Z/M2Crypto-0.22.3/setup.cfg
    Running M2Crypto-0.22.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-vVPR1Z/M2Crypto-0.22.3/egg-dist-tmp-3c7TJ3
    SWIG/_m2crypto.i:30: Error: Unable to find 'openssl/opensslv.h'
    SWIG/_m2crypto.i:33: Error: Unable to find 'openssl/safestack.h'
    SWIG/_evp.i:12: Error: Unable to find 'openssl/opensslconf.h'
    SWIG/_ec.i:7: Error: Unable to find 'openssl/opensslconf.h'
    error: Setup script exited with error: command 'swig' failed with exit status 1

    The fix is to install openssl-devel:

    yum install -y openssl-devel.x86_64

    Rerunning pip install docker-registry then fails again with:

  • ERROR 2
    Searching for M2Crypto==0.22.3
    Reading https://pypi.python.org/simple/M2Crypto/
    Best match: M2Crypto 0.22.3
    Downloading https://pypi.python.org/packages/source/M/M2Crypto/M2Crypto-0.22.3.tar.gz#md5=573f21aaac7d5c9549798e72ffcefedd
    Processing M2Crypto-0.22.3.tar.gz
    Writing /tmp/easy_install-5hkA4l/M2Crypto-0.22.3/setup.cfg
    Running M2Crypto-0.22.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-5hkA4l/M2Crypto-0.22.3/egg-dist-tmp-pZ_OGN
    /usr/include/openssl/opensslconf.h:36: Error: CPP #error ""This openssl-devel package does not work your architecture?"". Use the -cpperraswarn option to continue swig processing.
    error: Setup script exited with error: command 'swig' failed with exit status 1

    The fix is to install M2Crypto 0.22.3 manually (installing M2Crypto 0.22.3 on CentOS 7 has some issues and needs a helper script)

    wget https://pypi.python.org/packages/source/M/M2Crypto/M2Crypto-0.22.3.tar.gz   # download the source
    tar zxvf M2Crypto-0.22.3.tar.gz                                                  # extract
    cd M2Crypto-0.22.3

    Then create the installation script with the following content:

    vim fedora_setup.sh
    #!/bin/sh
    # This script is meant to work around the differences on Fedora Core-based
    # distributions (Redhat, CentOS, ...) compared to other common Linux
    # distributions.
    #
    # Usage: ./fedora_setup.sh [setup.py options]
    #
    
    arch=`uname -m`
    for i in SWIG/_{ec,evp}.i; do
      sed -i -e "s/opensslconf\./opensslconf-${arch}\./" "$i"
    done
    
    SWIG_FEATURES=-cpperraswarn python setup.py $*

    Then make the script executable, run it, and install M2Crypto 0.22.3

    chmod +x fedora_setup.sh
    ./fedora_setup.sh build
    python setup.py install

    This completes the installation. Note that the private registry's sample configuration file config_sample.yml is located at the following path

    /usr/lib/python2.7/site-packages/docker_registry-1.0.0_dev-py2.7.egg/config

    After finishing the configuration and starting the service, the following error occurred when pushing/pulling an image:

  • ERROR 3
    docker pull 172.16.18.159:5000/ubuntu:12.04
    Error: Invalid registry endpoint https://172.16.18.159:5000/v1/: Get https://172.16.18.159:5000/v1/_ping: EOF. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry http://172.16.18.159:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/http://172.16.18.159:5000/ca.crt

    The fix is to add the --insecure-registry 172.16.18.159:5000 option to OPTIONS in the Docker configuration file

    # /etc/sysconfig/docker
    
    # Modify these options if you want to change the way the docker daemon runs
    OPTIONS='--selinux-enabled --insecure-registry 172.16.18.159:5000'
    DOCKER_CERT_PATH=/etc/docker

    Then restart the docker service:

    systemctl restart docker

    With that, all the errors are resolved and the local registry is configured successfully
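
    As a final check, an image can be tagged and pushed to the local registry (a sketch, assuming the ubuntu:12.04 image is already available locally):

    docker tag ubuntu:12.04 172.16.18.159:5000/ubuntu:12.04
    docker push 172.16.18.159:5000/ubuntu:12.04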
