
Reference:
  https://docs.openstack.org/train/index.html

Note: the deployment steps below carry no annotations. It is strongly recommended to consult the official documentation, which explains every step in detail.

I. Environment

OpenStack version: Train
Linux distribution: CentOS 7.6.1810
Node information (the controller node can also double as a compute node):

Role        NIC 1 (internal)    NIC 2 (external)
controller  192.168.100.101     10.0.0.101
compute1    192.168.100.102     10.0.0.102

If you are simulating this with VMware, set NIC 1 to host-only mode and NIC 2 to NAT mode.

II. Initial Environment Preparation

Run on every node.
Script download link: openstack-scripts

1. Unified NIC Naming

  In CentOS 7, NIC names are by default derived from the NIC model and are effectively random, which makes them inconvenient to manage. Rename them uniformly to the eth0/eth1 form.
  In /etc/default/grub, append the parameters net.ifnames=0 biosdevname=0 to the GRUB_CMDLINE_LINUX= line to disable the predictable NIC naming scheme.

[root@localhost ~]# cat /etc/default/grub 
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet net.ifnames=0 biosdevname=0"
GRUB_DISABLE_RECOVERY="true"

  Regenerate the grub configuration, then reboot. Note: the system may also read its grub configuration through /etc/grub.cfg (BIOS), /etc/grub-efi.cfg (UEFI), /etc/grub2.cfg (BIOS), or /etc/grub2-efi.cfg (UEFI), so make sure you regenerate the file your system actually uses, otherwise the change will not take effect.

[root@localhost ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-957.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-957.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-58e6e0a1ea694fbe8613e28ac3af1aad
Found initrd image: /boot/initramfs-0-rescue-58e6e0a1ea694fbe8613e28ac3af1aad.img
done

[root@localhost ~]# reboot

Edit the original NIC configuration files, changing NAME and DEVICE to the new interface names.
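
A minimal sketch of that change, assuming the original interface was named ens33 (adjust file names and values to your own environment):

[root@localhost ~]# mv /etc/sysconfig/network-scripts/ifcfg-ens33 /etc/sysconfig/network-scripts/ifcfg-eth0
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
NAME=eth0      # was NAME=ens33
DEVICE=eth0    # was DEVICE=ens33
[root@localhost ~]# systemctl restart network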

[root@localhost ~]# ip -4 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.100.101/24 brd 192.168.100.255 scope global eth0
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.0.0.101/24 brd 10.0.0.255 scope global eth1
       valid_lft forever preferred_lft forever

  

2. Global Variables

Fill these in according to your own environment.

[root@localhost openstack-scripts]# vim /etc/openrc.sh

Manually edit the following variables (the rest are filled in in batch below):
  HOST_IP, HOST_NAME, HOST_IP_NODE, HOST_NAME_NODE, DOMAIN_NAME, RABBIT_USER, INTERFACE_NAME

[root@localhost openstack-scripts]# sed -i 's/^#//g' /etc/openrc.sh
[root@localhost openstack-scripts]# sed -i 's/PASS=/PASS=000000/g'  /etc/openrc.sh
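
For reference, the manually edited block might look like this for the node table above; DOMAIN_NAME, RABBIT_USER, and INTERFACE_NAME are hypothetical values here, so use whatever your environment and the scripts expect:

HOST_IP=192.168.100.101
HOST_NAME=controller
HOST_IP_NODE=192.168.100.102
HOST_NAME_NODE=compute1
DOMAIN_NAME=example.com
RABBIT_USER=openstack
INTERFACE_NAME=eth1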

Copy the finished global variable file to the compute node.

[root@controller ~]# scp -rp openstack-scripts 192.168.100.102:/root/
[root@controller ~]# scp -p /etc/openrc.sh  192.168.100.102:/etc/openrc.sh

  

3. Base Environment

base.sh

Log out of the terminal and log back in afterwards.
  

III. Installation

1. Database

Run on the controller node.

openstack-install-mysql.sh

Verification:

[root@ct ~]# netstat -ntlp | grep -E '3306|25672|5672|11211|2379|2380'
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      44440/beam.smp      
tcp        0      0 10.88.86.1:2379         0.0.0.0:*               LISTEN      46146/etcd          
tcp        0      0 10.88.86.1:11211        0.0.0.0:*               LISTEN      46061/memcached     
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      46061/memcached     
tcp        0      0 10.88.86.1:2380         0.0.0.0:*               LISTEN      46146/etcd          
tcp6       0      0 :::5672                 :::*                    LISTEN      44440/beam.smp      
tcp6       0      0 :::3306                 :::*                    LISTEN      44144/mysqld        
tcp6       0      0 ::1:11211               :::*                    LISTEN      46061/memcached     
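
As an extra check, you can confirm that MariaDB accepts logins (assuming the password 000000 filled into /etc/openrc.sh earlier):

[root@ct ~]# mysql -uroot -p000000 -e 'show databases;'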

  

2. keystone

Run on the controller node.

openstack-install-keystone.sh

Verification:

[root@ct ~]# source /etc/keystone/admin-openrc.sh 
[root@ct ~]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| aac9d96d307b42fda162e68b56848e2e | admin |
+----------------------------------+-------+
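
As an additional check from the official verification steps, request a token:

[root@ct ~]# openstack token issue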

  

3. glance

Run on the controller node.

openstack-install-glance.sh

Verification:

[root@ct ~]# systemctl status openstack-glance-registry openstack-glance-api | grep active
   Active: active (running) since Sat 2021-05-29 22:29:32 CST; 5min ago
   Active: active (running) since Sat 2021-05-29 22:29:31 CST; 5min ago

[root@ct ~]# openstack user list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| aac9d96d307b42fda162e68b56848e2e | admin  |
| d740e80560754f2f8885e59df633d549 | glance |
+----------------------------------+--------+
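
The command below assumes the CentOS cloud image has already been downloaded to /root, for example:

[root@ct ~]# wget -P /root http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2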

[root@ct ~]# openstack image create --file /root/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --public centos7

[root@ct ~]# openstack image list
+--------------------------------------+---------+--------+
| ID                                   | Name    | Status |
+--------------------------------------+---------+--------+
| 9793d833-56e3-4b00-be3c-851c85f1641c | centos7 | active |
+--------------------------------------+---------+--------+

  

4. placement

Run on the controller node.

openstack-install-placement.sh

Verification:

# Check the port
[root@ct ~]# netstat -ntlp |grep 8778
tcp6       0      0 :::8778                 :::*                    LISTEN      28954/httpd

# curl the endpoint to see whether it returns JSON
[root@ct ~]# curl http://ct:8778
{"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0", "links": [{"href": "", "rel": "self"}]}]}[root@ct ~]# 

# Check placement health
[root@ct ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+

  

5. nova

(1) Controller node

openstack-install-nova-controller.sh

Verification:

# Check the ports
[root@ct ~]# netstat -tnlup|egrep '8774|8775'
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      29409/python2       
tcp        0      0 0.0.0.0:8774            0.0.0.0:*               LISTEN      29409/python2       

# curl the endpoint to see whether it returns JSON
[root@ct ~]# curl http://ct:8774
{"versions": [{"status": "SUPPORTED", "updated": "2011-01-21T11:33:21Z", "links": [{"href": "http://ct:8774/v2/", "rel": "self"}], "min_version": "", "version": "", "id": "v2.0"}, {"status": "CURRENT", "updated": "2013-07-23T11:33:21Z", "links": [{"href": "http://ct:8774/v2.1/", "rel": "self"}], "min_version": "2.1", "version": "2.79", "id": "v2.1"}]}You have mail in /var/spool/mail/root

# Query nova service status
[root@ct ~]# openstack compute service list
+----+----------------+------+----------+---------+-------+----------------------------+
| ID | Binary         | Host | Zone     | Status  | State | Updated At                 |
+----+----------------+------+----------+---------+-------+----------------------------+
|  3 | nova-conductor | ct   | internal | enabled | up    | 2021-05-29T15:17:30.000000 |
|  4 | nova-scheduler | ct   | internal | enabled | up    | 2021-05-29T15:17:33.000000 |
+----+----------------+------+----------+---------+-------+----------------------------+

Note: the same information is also available via:
  nova service-list

  

(2) Compute node

openstack-install-nova-compute.sh

Verification:

[root@ct ~]# openstack compute service list --service nova-compute
+----+--------------+------+------+---------+-------+----------------------------+
| ID | Binary       | Host | Zone | Status  | State | Updated At                 |
+----+--------------+------+------+---------+-------+----------------------------+
|  6 | nova-compute | c1   | nova | enabled | up    | 2021-05-29T15:17:13.000000 |
+----+--------------+------+------+---------+-------+----------------------------+

  

(3) Add compute nodes to the cell database
Note: run this on the controller node, and only after the compute node installation has finished.
Whenever new compute nodes are added later, you must run the following command on the controller node to register them.

# Discover compute nodes
[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
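
Alternatively, per the official documentation, new compute nodes can be discovered automatically by setting a periodic interval in the controller's nova.conf:

[root@ct ~]# vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300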

To verify, check that cells and the placement API are functioning properly:

[root@ct ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+

# If the compute nodes have not been added to the cell database, you get the following instead:
[root@ct ~]# nova-status upgrade check
+------------------------------------------------------------------+
| Upgrade Check Results                                            |
+------------------------------------------------------------------+
| Check: Cells v2                                                  |
| Result: Failure                                                  |
| Details: No host mappings found but there are compute nodes. Run |
|   command 'nova-manage cell_v2 simple_cell_setup' and then       |
|   retry.                                                         |
+------------------------------------------------------------------+
| Check: Placement API                                             |
| Result: Success                                                  |
| Details: None                                                    |
+------------------------------------------------------------------+
| Check: Ironic Flavor Migration                                   |
| Result: Success                                                  |
| Details: None                                                    |
+------------------------------------------------------------------+
| Check: Cinder API                                                |
| Result: Success                                                  |
| Details: None                                                    |
+------------------------------------------------------------------+

  

(4) Using the controller node as a compute node too (optional)
  Edit the global variable file /etc/openrc.sh, setting HOST_IP_NODE and HOST_NAME_NODE to the controller node's IP and hostname. Be sure to change them back once the installation is finished.

[root@ct openstack-scripts]# vim /etc/openrc.sh
......
#Controller Server Manager IP. example:x.x.x.x
HOST_IP=10.86.68.1

#Controller Server hostname. example:controller
HOST_NAME=controller

#Compute Node Manager IP. example:x.x.x.x
HOST_IP_NODE=10.86.68.1

#Compute Node hostname. example:compute
HOST_NAME_NODE=controller
......
[root@ct openstack-scripts]# ./openstack-install-nova-compute.sh
[root@ct openstack-scripts]# openstack compute service list --service nova-compute
+----+--------------+------------+------+---------+-------+----------------------------+
| ID | Binary       | Host       | Zone | Status  | State | Updated At                 |
+----+--------------+------------+------+---------+-------+----------------------------+
|  7 | nova-compute | c1         | nova | enabled | up    | 2021-06-02T09:09:05.000000 |
|  8 | nova-compute | ct         | nova | enabled | up    | 2021-06-02T09:09:12.000000 |
+----+--------------+------------+------+---------+-------+----------------------------+

[root@ct openstack-scripts]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
[root@ct ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+

When finished, be sure to change the global variables back, so that later installation steps are not affected.
  

6. neutron

(1) Controller node

openstack-install-neutron-controller.sh

Verification:

# Check neutron service status (the services take a moment to come up; if you cannot see them all yet, wait a bit and retry)
[root@ct ~]# openstack network agent list
+--------------------------------------+--------------------+------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------+-------------------+-------+-------+---------------------------+
| 2a27b435-b1ca-4c56-87e9-5fc3c4b4e46e | Metadata agent     | ct   | None              | :-)   | UP    | neutron-metadata-agent    |
| 3345cf89-227d-4cd5-90b5-b5a2bd8e2cb7 | L3 agent           | ct   | nova              | :-)   | UP    | neutron-l3-agent          |
| a7033992-2865-4657-b011-1f8df12323d6 | Linux bridge agent | ct   | None              | :-)   | UP    | neutron-linuxbridge-agent |
| b6175d80-74e7-4bea-a85a-93372f0faeac | DHCP agent         | ct   | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------+-------------------+-------+-------+---------------------------+

Note: the same information is also available via:
  neutron agent-list

(2) Compute node

openstack-install-neutron-compute.sh

Verification:

# Check neutron service status
[root@ct ~]# openstack network agent list
+--------------------------------------+--------------------+------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------+-------------------+-------+-------+---------------------------+
| 2153c419-698d-490b-b534-768f45bcd98d | Linux bridge agent | c1   | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 2a27b435-b1ca-4c56-87e9-5fc3c4b4e46e | Metadata agent     | ct   | None              | :-)   | UP    | neutron-metadata-agent    |
| 3345cf89-227d-4cd5-90b5-b5a2bd8e2cb7 | L3 agent           | ct   | nova              | :-)   | UP    | neutron-l3-agent          |
| a7033992-2865-4657-b011-1f8df12323d6 | Linux bridge agent | ct   | None              | :-)   | UP    | neutron-linuxbridge-agent |
| b6175d80-74e7-4bea-a85a-93372f0faeac | DHCP agent         | ct   | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------+-------------------+-------+-------+---------------------------+

7. horizon

Install and configure the dashboard on the controller node.

# Install
[root@ct ~]# yum -y install openstack-dashboard httpd

# Edit the dashboard configuration file
[root@ct ~]# cp -a /etc/openstack-dashboard/local_settings{,.bak}
[root@ct ~]# vim /etc/openstack-dashboard/local_settings    # find and change the following
ALLOWED_HOSTS = ['*']     # line 39, accept all hosts

# lines 94-100, configure the memcached session storage service
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 
        'LOCATION': 'ct:11211', 
    }
}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'       # new

OPENSTACK_HOST = "ct"     # line 118, point the dashboard at the OpenStack services on the controller node
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST   # line 119, enable Identity API version 3
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True                   # new, enable multi-domain support
OPENSTACK_API_VERSIONS = {                                      # new, configure API versions
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"       # new, make Default the default domain for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"            # make user the default role for users created via the dashboard

TIME_ZONE = "Asia/Shanghai"           # line 154, set the time zone

Regenerate openstack-dashboard.conf:

[root@ct ~]# cd /usr/share/openstack-dashboard
[root@ct openstack-dashboard]# python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf

# Restart the web server and the memcached session storage service:
[root@ct ~]# systemctl restart httpd.service memcached.service

  To verify, open a browser and enter " http://192.168.100.101/ " in the address bar to reach the login page. Fill in the domain (default), username (admin), and password (000000), then click Log In.
  

8. Web UI Operations

(1) Create a network

(2) Create a router

(3) Create a security group

(4) Create a flavor

(5) Create an instance (rough CLI equivalents of all five steps are sketched below)
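
These web UI steps can equally be done from the command line. The sketch below is illustrative only: the network names, CIDRs, gateway, and flavor sizes are assumptions, and the external network presumes your neutron configuration maps a flat provider network named provider to the external NIC.

[root@ct ~]# source /etc/keystone/admin-openrc.sh
# (1) networks: an external provider network plus an internal tenant network
[root@ct ~]# openstack network create --external --provider-network-type flat --provider-physical-network provider ext-net
[root@ct ~]# openstack subnet create --network ext-net --subnet-range 10.0.0.0/24 --gateway 10.0.0.2 --no-dhcp ext-subnet
[root@ct ~]# openstack network create int-net
[root@ct ~]# openstack subnet create --network int-net --subnet-range 172.16.1.0/24 int-subnet
# (2) router: gateway on the external network, interface on the internal subnet
[root@ct ~]# openstack router create router1
[root@ct ~]# openstack router set --external-gateway ext-net router1
[root@ct ~]# openstack router add subnet router1 int-subnet
# (3) security group: allow ICMP and SSH in the default group
[root@ct ~]# openstack security group rule create --proto icmp default
[root@ct ~]# openstack security group rule create --proto tcp --dst-port 22 default
# (4) flavor
[root@ct ~]# openstack flavor create --vcpus 1 --ram 1024 --disk 10 m1.small
# (5) instance, using the centos7 image uploaded earlier
[root@ct ~]# openstack server create --flavor m1.small --image centos7 --network int-net test-vm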

  

9. cinder

(1) Controller node

openstack-install-cinder-controller.sh

Verification:

[root@ct openstack-scripts]# cinder service-list
+------------------+------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| Binary           | Host | Zone | Status  | State | Updated_at                 | Cluster | Disabled Reason | Backend State |
+------------------+------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| cinder-scheduler | ct   | nova | enabled | up    | 2021-05-31T06:31:50.000000 | -       | -               |               |
+------------------+------+------+---------+-------+----------------------------+---------+-----------------+---------------+

  

(2) Compute node

openstack-install-cinder-compute.sh

Verification:
At this point the storage backend is ceph, but the ceph services have not yet been brought up and integrated with cinder-volume, so the cinder-volume service state is down.

[root@ct openstack-scripts]# cinder service-list
+------------------+---------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| Binary           | Host    | Zone | Status  | State | Updated_at                 | Cluster | Disabled Reason | Backend State |
+------------------+---------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| cinder-scheduler | ct      | nova | enabled | up    | 2021-05-31T06:38:50.000000 | -       | -               |               |
| cinder-volume    | c1@ceph | nova | enabled | down  | 2021-05-31T06:34:54.000000 | -       | -               | -             |
+------------------+---------+------+---------+-------+----------------------------+---------+-----------------+---------------+

  

IV. Ceph Integration

Ceph version: Nautilus
The controller node and the compute node each have one spare SATA disk (sdb) for the Ceph cluster.
Before integrating, it is recommended to delete any previously created images, volumes, and instances.

1. Ceph Installation

Switch the yum repo (do this on every node):

[root@ct ~]# gzip /etc/yum.repos.d/CentOS-Ceph-Nautilus.repo
[root@ct ~]# rpm -vih https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/ceph-release-1-1.el7.noarch.rpm

Deploy the Ceph cluster:

source /etc/openrc.sh
yum install -y python-setuptools ceph-deploy ceph
ssh $HOST_NAME_NODE 'yum install -y python-setuptools ceph'
cd /etc/ceph
ceph-deploy new $HOST_NAME $HOST_NAME_NODE
ceph-deploy mon create-initial
ceph-deploy osd create --data /dev/sdb $HOST_NAME
ceph-deploy osd create --data /dev/sdb $HOST_NAME_NODE
ceph-deploy admin $HOST_NAME $HOST_NAME_NODE
chmod +r /etc/ceph/ceph.client.admin.keyring
ssh $HOST_NAME_NODE 'chmod +r /etc/ceph/ceph.client.admin.keyring'
ceph-deploy mgr create $HOST_NAME $HOST_NAME_NODE

ceph osd pool create volumes 64
ceph osd pool create vms 64
ceph osd pool create images 64

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children,allow rwx pool=volumes,allow rwx pool=vms,allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children,allow rwx pool=images'
ceph auth get-or-create client.glance |tee /etc/ceph/ceph.client.glance.keyring
chown glance.glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | tee /etc/ceph/ceph.client.cinder.keyring
chown cinder.cinder /etc/ceph/ceph.client.cinder.keyring

ceph auth get-key client.cinder |ssh c1 tee client.cinder.key
scp -p /etc/ceph/ceph.client.cinder.keyring c1:/etc/ceph

Run on the compute node:

cd /root
cat >secret.xml <<EOF
<secret ephemeral='no' private='no'>
   <uuid>ff5883d3-7891-46c2-b1ae-a074283e4905</uuid>
   <usage type='ceph'>
    <name>client.cinder secret</name>
   </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret ff5883d3-7891-46c2-b1ae-a074283e4905 --base64 $(cat client.cinder.key) && rm -rf client.cinder.key secret.xml
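
To confirm the secret was registered with libvirt:

virsh secret-list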

ceph osd pool application enable vms rbd
ceph osd pool application enable images rbd
ceph osd pool application enable volumes rbd

  

2. Integrating glance

Run on the controller node.

cp /etc/glance/glance-api.conf{,.bak1}

openstack-config --set /etc/glance/glance-api.conf glance_store "stores" "glance.store.filesystem.Store, glance.store.http.Store, glance.store.rbd.Store"
openstack-config --set /etc/glance/glance-api.conf glance_store "default_store" "rbd"
openstack-config --set /etc/glance/glance-api.conf glance_store "rbd_store_chunk_size" "8"
openstack-config --set /etc/glance/glance-api.conf glance_store "rbd_store_pool" "images"
openstack-config --set /etc/glance/glance-api.conf glance_store "rbd_store_user" "glance"
openstack-config --set /etc/glance/glance-api.conf glance_store "rbd_store_ceph_conf" "/etc/ceph/ceph.conf"
openstack-config --set /etc/glance/glance-api.conf paste_deploy "flavor" "keystone"

systemctl restart openstack-glance-api.service openstack-glance-registry.service

Verification:

[root@ct ~]# openstack image create --file /root/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --public centos7

[root@ct ~]# rbd ls images
9fec3bff-2d2a-426a-9ce7-b1ed9dfdd71a

[root@ct ~]# ceph df
......
POOLS:
    POOL        ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL 
    volumes      6     128         0 B           0         0 B         0        10 GiB 
    images       7     128     819 MiB         109     1.6 GiB         0        10 GiB 
    vms          8     128         0 B           0         0 B         0        10 GiB 

  

3. Integrating cinder

Run on the compute node.

cp /etc/cinder/cinder.conf{,.bak1}

openstack-config --set /etc/cinder/cinder.conf ceph "default_volume_type" "ceph"
openstack-config --set /etc/cinder/cinder.conf ceph "glance_api_version" "2"
openstack-config --set /etc/cinder/cinder.conf ceph "volume_driver" "cinder.volume.drivers.rbd.RBDDriver"
openstack-config --set /etc/cinder/cinder.conf ceph "volume_backend_name" "ceph"
openstack-config --set /etc/cinder/cinder.conf ceph "rbd_pool" "volumes"
openstack-config --set /etc/cinder/cinder.conf ceph "rbd_ceph_conf" "/etc/ceph/ceph.conf"
openstack-config --set /etc/cinder/cinder.conf ceph "rbd_flatten_volume_from_snapshot" "false"
openstack-config --set /etc/cinder/cinder.conf ceph "rbd_max_clone_depth" "5"
openstack-config --set /etc/cinder/cinder.conf ceph "rbd_store_chunk_size" "4"
openstack-config --set /etc/cinder/cinder.conf ceph "rados_connect_timeout" "-1"
openstack-config --set /etc/cinder/cinder.conf ceph "rbd_user" "cinder"
openstack-config --set /etc/cinder/cinder.conf ceph "rbd_secret_uuid" "ff5883d3-7891-46c2-b1ae-a074283e4905"

systemctl restart openstack-cinder-volume

Run on the controller node:

cinder type-create ceph
cinder type-key ceph set volume_backend_name=ceph

Verification:

[root@ct ~]# cinder service-list
+------------------+--------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| Binary           | Host         | Zone | Status  | State | Updated_at                 | Cluster | Disabled Reason | Backend State |
+------------------+--------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| cinder-scheduler | controller   | nova | enabled | up    | 2021-06-02T06:56:54.000000 | -       | -               |               |
| cinder-volume    | compute@ceph | nova | enabled | up    | 2021-06-02T06:56:53.000000 | -       | -               | -             |
+------------------+--------------+------+---------+-------+----------------------------+---------+-----------------+---------------+

[root@ct ~]# cinder create --volume-type=ceph --name test 1
[root@ct ~]# cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 6ee28f57-ced1-4c21-bd22-7e92e914e106 | available | test | 1    | ceph        | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

  

4. Integrating nova

Run on the compute node (if the controller node also acts as a compute node, integrate it there as well):

cp /etc/nova/nova.conf{,.bak1}

openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
openstack-config --set /etc/nova/nova.conf libvirt "images_type" "rbd"
openstack-config --set /etc/nova/nova.conf libvirt "images_rbd_pool" "vms"
openstack-config --set /etc/nova/nova.conf libvirt "images_rbd_ceph_conf" "/etc/ceph/ceph.conf"
openstack-config --set /etc/nova/nova.conf libvirt "rbd_user" "cinder"
openstack-config --set /etc/nova/nova.conf libvirt "rbd_secret_uuid" "ff5883d3-7891-46c2-b1ae-a074283e4905"
openstack-config --set /etc/nova/nova.conf libvirt "live_migration_flag" "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"

yum install -y libvirt
systemctl restart libvirtd
systemctl enable libvirtd
systemctl restart openstack-nova-compute

To verify, create an instance yourself.
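
A quick sketch of such a check, reusing the flavor, image, and network names assumed in the CLI sketch of section 8 (with images_type=rbd, the new instance's ephemeral disk should appear in the vms pool):

[root@ct ~]# openstack server create --flavor m1.small --image centos7 --network int-net ceph-vm
[root@ct ~]# rbd ls vms
# expect an entry like <instance-uuid>_disk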
  

V. Extended Components

1. heat

Run on the controller node.

openstack-install-heat.sh

Verification:

[root@controller openstack-scripts]# openstack orchestration service list
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| Hostname   | Binary      | Engine ID                            | Host       | Topic  | Updated At                 | Status |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| controller | heat-engine | c7c91f67-e1d2-4721-bab0-2a5fcd26a8b9 | controller | engine | 2021-07-02T07:45:10.000000 | up     |
| controller | heat-engine | 95355035-1057-416a-abec-5a012a7a8a3c | controller | engine | 2021-07-02T07:45:10.000000 | up     |
| controller | heat-engine | fcf780e1-db4a-411e-a24c-d11cc549ba5e | controller | engine | 2021-07-02T07:45:10.000000 | up     |
| controller | heat-engine | 1aef48dc-cc51-4b49-820e-babb8095ddff | controller | engine | 2021-07-02T07:45:10.000000 | up     |
| controller | heat-engine | 177f554e-07e5-4ab4-9d96-f7719cc37dda | controller | engine | 2021-07-02T07:45:09.000000 | up     |
......
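
You can also exercise heat end to end with a throwaway stack; a minimal sketch using the built-in OS::Heat::RandomString resource type (the template and stack names are arbitrary):

[root@controller ~]# cat > test-stack.yaml <<EOF
heat_template_version: 2018-08-31
description: minimal heat smoke test
resources:
  test:
    type: OS::Heat::RandomString
EOF
[root@controller ~]# openstack stack create -t test-stack.yaml teststack
[root@controller ~]# openstack stack list
[root@controller ~]# openstack stack delete -y teststack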

  

2. swift

(1) Controller node

(2) Compute node
  

3. ceilometer

(1) Controller node

(2) Compute node
  

4. octavia

(1) Controller node

(2) Compute node
