
OpenStack Deployment Manual

This is not something I wrote myself.
It is OpenStack Study material I referred to when building OpenStack a while back, recorded here for future reference.
 
Source 👇🏻

 

openstack-kr/openstack-study: 2017-fall-basic/2017.11.30_class_3.md (github.com)
The repository for study activities in the OpenStack Korea User Group.

 

2017.11.30 (Thu) Class 3
========================

Manual installation

  • OS version: Ubuntu 16.04.3 LTS
  • OpenStack version: Newton

VirtualBox hardware configuration

### Controller node
 - network device 1: host-only network (enp0s3)
 - network device 2: bridged mode (enp0s8)
 - network device 3: NAT (enp0s9)
 - disk : 20G (for the OS)
 - HOST NAME : controller

### Compute node
 - network device 1: host-only network (enp0s3)
 - network device 2: bridged mode (enp0s8)
 - network device 3: NAT (enp0s9)
 - disk : 20G (for the OS)
 - disk 1 : 8G (Block Storage service)
 - disk 2 : 8G × 3 (Object Storage service)
 - HOST NAME : compute

### Network setup
 #### Controller node network settings
vi /etc/network/interfaces

auto enp0s3
iface enp0s3 inet static
   address 192.168.56.101
   netmask 255.255.255.0
   gateway 192.168.56.1
   dns-nameservers 8.8.8.8

auto enp0s8
iface enp0s8 inet dhcp

auto enp0s9
iface enp0s9 inet dhcp


#### Compute node network setting

vi /etc/network/interfaces
auto enp0s3
iface enp0s3 inet static
 address 192.168.56.102
 netmask 255.255.255.0
 gateway 192.168.56.1
 dns-nameservers 8.8.8.8

auto enp0s8
iface enp0s8 inet dhcp

auto enp0s9
iface enp0s9 inet dhcp


Configure the hosts file on each node

vi /etc/hosts
192.168.56.101    controller
192.168.56.102     compute

Ping test from the controller node

ping compute

Ping test from the compute node

ping controller

Install the chrony package on each node

apt install chrony

Verify on the controller

chronyc sources

Compute node configuration (comment out the default pool/server entries in chrony.conf and point it at the controller)

vi /etc/chrony/chrony.conf

server controller iburst

service chrony restart

chronyc sources
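
As an extra check (with the chrony version shipped in Ubuntu 16.04), chronyc tracking reports whether the node is actually synchronized and which server it follows:

chronyc tracking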

Install the OpenStack packages on each node

 apt install software-properties-common

 add-apt-repository cloud-archive:newton

 apt update && apt dist-upgrade

 apt install python-openstackclient

Install the database on the controller node

apt install mariadb-server python-pymysql

cat << EOF >> /etc/mysql/mariadb.conf.d/99-openstack.cnf
[mysqld]
bind-address = 192.168.56.101

default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF

service mysql restart

mysql_secure_installation

cat << EOF >> /root/mysql
openstack
EOF
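
The /root/mysql file simply stores the MariaDB root password (openstack, as set during mysql_secure_installation) so that later commands can read it with cat /root/mysql. A minimal check that the stored password works:

mysql -uroot -p`cat /root/mysql` -e "SELECT VERSION();"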

Install the RabbitMQ message queue on the controller node

apt install rabbitmq-server

rabbitmqctl add_user openstack openstack

rabbitmqctl set_permissions openstack ".*" ".*" ".*"
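
To confirm the openstack account and its permissions exist (standard rabbitmqctl queries, not part of the original notes):

rabbitmqctl list_users
rabbitmqctl list_permissions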

Install memcached on the controller node

apt install memcached python-memcache

vi /etc/memcached.conf

-l 192.168.56.101

service memcached restart

Set up the keystone database and account on the controller

mysql -uroot -p`cat /root/mysql` -e "CREATE DATABASE keystone;"
mysql -uroot -p`cat /root/mysql` -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'openstack';"
mysql -uroot -p`cat /root/mysql` -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'openstack';"

Install keystone on the controller

apt install keystone

cp -a /etc/keystone/keystone.conf /etc/keystone/keystone.conf_org

cat /etc/keystone/keystone.conf_org | grep -v ^$ | grep -v ^# > /etc/keystone/keystone.conf

vi /etc/keystone/keystone.conf


[database]

connection = sqlite:////var/lib/keystone/keystone.db ← remove this line
connection = mysql+pymysql://keystone:openstack@controller/keystone

[token]
...
provider = fernet

Populate the database

/bin/sh -c "keystone-manage db_sync" keystone

Initialize the Fernet key repositories

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
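
Assuming the default key repository locations, the two commands above create populated key directories, which can be inspected directly:

ls -l /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/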


keystone-manage bootstrap --bootstrap-password openstack \
 --bootstrap-admin-url http://controller:35357/v3/ \
 --bootstrap-internal-url http://controller:35357/v3/ \
 --bootstrap-public-url http://controller:5000/v3/ \
 --bootstrap-region-id RegionOne

Configure Apache on the controller

vi /etc/apache2/apache2.conf
ServerName controller
service apache2 restart
rm -f /var/lib/keystone/keystone.db

### Configure the administrative account
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

export

Create domains, projects, users, and roles

openstack project create --domain default \
--description "Service Project" service


openstack project create --domain default \
--description "Demo Project" demo

openstack user create --domain default \
--password-prompt demo

openstack role create user

openstack role add --project demo --user demo user

Edit the /etc/keystone/keystone-paste.ini file and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.

vi /etc/keystone/keystone-paste.ini
# remove admin_token_auth from the pipeline lines in [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3]
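
One way to make the same edit non-interactively is a simple substitution (a sketch; the pipeline entries have a leading space before admin_token_auth, so the [filter:admin_token_auth] section itself is untouched):

sed -i 's/ admin_token_auth//g' /etc/keystone/keystone-paste.ini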

$ unset OS_AUTH_URL OS_PASSWORD

Request an authentication token as the admin user

$ openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue

Request an authentication token as the demo user

$ openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name demo --os-username demo token issue


cat <<EOF>>  /root/admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

cat <<EOF>> /root/demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=openstack   # the password entered when creating the demo user
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

. admin-openrc

openstack token issue

Install the Image service (glance) on the controller node

Controller DB setup and creation of the domain, project, user, and role

mysql -uroot -p`cat /root/mysql` -e "CREATE DATABASE glance;"
mysql -uroot -p`cat /root/mysql`  -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost'   IDENTIFIED BY 'openstack';"
mysql -uroot -p`cat /root/mysql`  -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'openstack';"


openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin

openstack service create --name glance \
  --description "OpenStack Image" image

openstack endpoint create --region RegionOne \
 image public http://controller:9292

openstack endpoint create --region RegionOne \
 image internal http://controller:9292

openstack endpoint create --region RegionOne \
 image admin http://controller:9292
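
Optionally, confirm the service and its three endpoints were registered (an extra verification step, not in the original notes):

openstack service list
openstack endpoint list | grep image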

Install and configure the Image service components

apt install glance
cp -a /etc/glance/glance-api.conf  /etc/glance/glance-api.conf_org
cat /etc/glance/glance-api.conf_org | grep  -v ^$ | grep -v ^# > /etc/glance/glance-api.conf

vi /etc/glance/glance-api.conf

[glance_store]

stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

[database]
sqlite_db = /var/lib/glance/glance.sqlite ← remove this line

connection = mysql+pymysql://glance:openstack@controller/glance

[keystone_authtoken]

auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = openstack

[paste_deploy]

flavor = keystone


cp -a /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf_org

cat /etc/glance/glance-registry.conf_org | grep -v ^$ | grep -v ^# > /etc/glance/glance-registry.conf

vi /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:openstack@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = openstack

[paste_deploy]

flavor = keystone

/bin/sh -c "glance-manage db_sync" glance

service glance-registry restart
service glance-api restart

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

openstack image create "cirros" \
 --file cirros-0.3.4-x86_64-disk.img \
 --disk-format qcow2 --container-format bare \
 --public

openstack image list

 
######################## Compute service installation ########################

Controller installation and configuration

### Controller DB setup and creation of the domain, project, user, and role

mysql -uroot -p`cat /root/mysql` -e "CREATE DATABASE nova;"
mysql -uroot -p`cat /root/mysql` -e "CREATE DATABASE nova_api;"

mysql -uroot -p`cat /root/mysql`  -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost'   IDENTIFIED BY 'openstack';"
mysql -uroot -p`cat /root/mysql`  -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'openstack';"

mysql -uroot -p`cat /root/mysql`  -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost'   IDENTIFIED BY 'openstack';"
mysql -uroot -p`cat /root/mysql`  -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'openstack';"

openstack user create --domain default \
  --password-prompt nova

openstack role add --project service --user nova admin

openstack service create --name nova \
  --description "OpenStack Compute" compute

openstack endpoint create --region RegionOne \
 compute public http://controller:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
 compute internal http://controller:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
 compute admin http://controller:8774/v2.1/%\(tenant_id\)s

apt install nova-api nova-conductor nova-consoleauth \
nova-novncproxy nova-scheduler


cp -a /etc/nova/nova.conf /etc/nova/nova.conf_org
cat /etc/nova/nova.conf_org | grep -v ^$ | grep -v ^# > /etc/nova/nova.conf

vi /etc/nova/nova.conf

[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:openstack@controller
my_ip = 192.168.56.101

use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[database]
connection = mysql+pymysql://nova:openstack@controller/nova

[api_database]
connection = mysql+pymysql://nova:openstack@controller/nova_api


[keystone_authtoken]

auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = openstack


[vnc]

vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]

api_servers = http://controller:9292

[oslo_concurrency]

lock_path = /var/lib/nova/tmp



/bin/sh -c "nova-manage api_db sync" nova
/bin/sh -c "nova-manage db sync" nova

service nova-api restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart

Compute node installation and configuration

apt install nova-compute

cp -a /etc/nova/nova.conf /etc/nova/nova.conf_org
cat /etc/nova/nova.conf_org | grep -v ^$ | grep -v ^# > /etc/nova/nova.conf

vi /etc/nova/nova.conf

[DEFAULT]

transport_url = rabbit://openstack:openstack@controller

auth_strategy = keystone
my_ip = 192.168.56.102
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver


[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = openstack

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html


[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[libvirt]
virt_type = qemu
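
virt_type = qemu is used here because the compute node is itself a VirtualBox guest without nested hardware virtualization. The usual check from the upstream install guide counts the CPU virtualization flags; a result of 0 means hardware acceleration is unavailable and qemu should be used instead of kvm:

egrep -c '(vmx|svm)' /proc/cpuinfo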

Restart the process

service nova-compute restart

controller node compute service check

openstack compute service list

Networking service

Controller node installation and configuration

DB setting

mysql -uroot -p`cat /root/mysql` -e "CREATE DATABASE neutron;"
mysql -uroot -p`cat /root/mysql`  -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost'   IDENTIFIED BY 'openstack';"
mysql -uroot -p`cat /root/mysql`  -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'openstack';"


openstack user create --domain default --password-prompt neutron

openstack role add --project service --user neutron admin

openstack service create --name neutron \
  --description "OpenStack Networking" network


openstack endpoint create --region RegionOne \
  network public http://controller:9696


openstack endpoint create --region RegionOne \
  network internal http://controller:9696

openstack endpoint create --region RegionOne \
  network admin http://controller:9696

### Install the networking components on the controller node
apt install neutron-server neutron-plugin-ml2 \
 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent \
 neutron-metadata-agent

cp -a /etc/neutron/neutron.conf /etc/neutron/neutron.conf_org
cat /etc/neutron/neutron.conf_org | grep -v ^$ | grep -v ^# > /etc/neutron/neutron.conf

 vi /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[database]
connection = mysql+pymysql://neutron:openstack@controller/neutron

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = openstack

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = openstack

cp -a /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini_org
cat /etc/neutron/plugins/ml2/ml2_conf.ini_org | grep -v ^$ | grep -v ^# > /etc/neutron/plugins/ml2/ml2_conf.ini

vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True


cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini_org

cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini_org | grep -v ^$ | grep -v ^# > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[DEFAULT]

[agent]

[linux_bridge]
physical_interface_mappings = provider:enp0s3

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]
enable_vxlan = True
local_ip = 192.168.56.101
l2_population = True


cp -a /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini_org
cat /etc/neutron/l3_agent.ini_org | grep -v ^$ | grep -v ^# > /etc/neutron/l3_agent.ini
vi /etc/neutron/l3_agent.ini

[DEFAULT]

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver


cp -a /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini_org

cat /etc/neutron/dhcp_agent.ini_org | grep -v ^$ | grep -v ^# > /etc/neutron/dhcp_agent.ini

vi /etc/neutron/dhcp_agent.ini

[DEFAULT]

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True


cp -a /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini_org
cat /etc/neutron/metadata_agent.ini_org | grep -v ^$ | grep -v ^# > /etc/neutron/metadata_agent.ini

vi /etc/neutron/metadata_agent.ini

nova_metadata_ip = controller
metadata_proxy_shared_secret = openstack


vi /etc/nova/nova.conf

[neutron]

url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = openstack
service_metadata_proxy = True
metadata_proxy_shared_secret = openstack



/bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
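
As an optional check (assuming the same configuration files), the resulting migration state can be queried afterwards:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron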


service nova-api restart
service nova-api status
service neutron-server restart
service neutron-server status

service neutron-linuxbridge-agent restart
service neutron-linuxbridge-agent status

service neutron-dhcp-agent restart
service neutron-dhcp-agent status

service neutron-metadata-agent restart
service neutron-metadata-agent status

service neutron-l3-agent restart
service neutron-l3-agent status

Compute node networking installation and configuration

apt install neutron-linuxbridge-agent

cp -a /etc/neutron/neutron.conf /etc/neutron/neutron.conf_org

cat /etc/neutron/neutron.conf_org | grep -v ^$ | grep -v ^# > /etc/neutron/neutron.conf

vi /etc/neutron/neutron.conf

[DEFAULT]

transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone

[keystone_authtoken]

auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = openstack

cp -a  /etc/neutron/plugins/ml2/linuxbridge_agent.ini   /etc/neutron/plugins/ml2/linuxbridge_agent.ini_org
cat  /etc/neutron/plugins/ml2/linuxbridge_agent.ini_org | grep -v ^$ | grep -v ^# >  /etc/neutron/plugins/ml2/linuxbridge_agent.ini
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[DEFAULT]

[agent]

[linux_bridge]
physical_interface_mappings = provider:enp0s3

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]
enable_vxlan = True
local_ip = 192.168.56.102
l2_population = True


vi /etc/nova/nova.conf

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = openstack


service nova-compute restart
service nova-compute status

service neutron-linuxbridge-agent restart
service neutron-linuxbridge-agent status

Verification from the controller node

neutron ext-list
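
As an additional check (using the legacy neutron CLI shipped with Newton), the agent list should show the linuxbridge, DHCP, L3, and metadata agents on the controller plus the linuxbridge agent on the compute node, all alive:

neutron agent-list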

Dashboard installation

Install on the controller node

apt install openstack-dashboard

vi /etc/openstack-dashboard/local_settings.py

### Around line 162, comment out the existing settings and change them to the following
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"


ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}


OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"


OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

TIME_ZONE = "Asia/Seoul"


service apache2 reload

http://192.168.56.101/horizon

Domain: default
User: admin
Password: openstack

Block Storage service

Controller node installation and configuration

Controller DB setup

mysql -uroot -p`cat /root/mysql` -e "CREATE DATABASE cinder;"
mysql -uroot -p`cat /root/mysql`  -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'openstack';"
mysql -uroot -p`cat /root/mysql`  -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'openstack';"


openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin


openstack service create --name cinder \
  --description "OpenStack Block Storage" volume

openstack service create --name cinderv2 \
 --description "OpenStack Block Storage" volumev2

openstack endpoint create --region RegionOne \
  volume public http://controller:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
  volume internal http://controller:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
  volume admin http://controller:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
  volumev2 public http://controller:8776/v2/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
  volumev2 internal http://controller:8776/v2/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
  volumev2 admin http://controller:8776/v2/%\(tenant_id\)s

### Install cinder
apt install cinder-api cinder-scheduler

cp -a /etc/cinder/cinder.conf /etc/cinder/cinder.conf_org
cat /etc/cinder/cinder.conf_org | grep -v ^$ | grep -v ^# > /etc/cinder/cinder.conf

vi /etc/cinder/cinder.conf


[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 192.168.56.101

[database]
connection = mysql+pymysql://cinder:openstack@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = openstack


[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

/bin/sh -c "cinder-manage db sync" cinder


vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne


service nova-api restart
service nova-api status


service cinder-scheduler restart
service cinder-scheduler status

service cinder-api restart
service cinder-api status

### Setting up the storage node on the compute node

apt install lvm2
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
vi /etc/lvm/lvm.conf

devices {
filter = [ "a/sdb/", "r/.*/"]
}
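
Assuming /dev/sdb is the 8G disk added for the Block Storage service, the physical volume and volume group created above can be confirmed with the standard LVM tools:

pvs   # /dev/sdb should be listed as a physical volume
vgs   # the cinder-volumes volume group should appear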


apt install cinder-volume  
cp -a /etc/cinder/cinder.conf  /etc/cinder/cinder.conf_org
cat /etc/cinder/cinder.conf_org | grep -v ^$ | grep -v ^# > /etc/cinder/cinder.conf
vi /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 192.168.56.102
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:openstack@controller/cinder

[keystone_authtoken]

auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = openstack

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

[oslo_concurrency]

lock_path = /var/lib/cinder/tmp

service tgt restart
service cinder-volume restart

Verify from the controller node

openstack volume service list

Object Storage (swift) installation

Install on the controller node

openstack user create --domain default --password-prompt swift

openstack role add --project service --user swift admin

openstack service create --name swift \
  --description "OpenStack Object Storage" object-store

openstack endpoint create --region RegionOne \
  object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s

openstack endpoint create --region RegionOne \
  object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s

 openstack endpoint create --region RegionOne \
  object-store admin http://controller:8080/v1


apt-get install swift swift-proxy python-swiftclient \
  python-keystoneclient python-keystonemiddleware \
  memcached

mkdir -p /etc/swift

cd /etc/swift

curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/ocata

cp -a /etc/swift/proxy-server.conf /etc/swift/proxy-server.conf_org

cat /etc/swift/proxy-server.conf_org | grep -v ^$ | grep -v ^# > /etc/swift/proxy-server.conf

vi /etc/swift/proxy-server.conf

[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]
# 
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = swift
password = openstack
delay_auth_decision = True


[filter:cache]
use = egg:swift#memcache
memcache_servers = controller:11211


### compute node
apt-get install xfsprogs rsync

### Disk partitioning & mount

fdisk -l
fdisk  /dev/sdc # create a new partition and set its type to 8300 (Linux partition)
fdisk  /dev/sdd # create a new partition and set its type to 8300 (Linux partition)
fdisk  /dev/sde # create a new partition and set its type to 8300 (Linux partition)


mkfs.xfs /dev/sdc1 # create an XFS filesystem on the partition just created
mkfs.xfs /dev/sdd1 # "
mkfs.xfs /dev/sde1 # "

mkdir -p /srv/node/sdc1
mkdir -p /srv/node/sdd1
mkdir -p /srv/node/sde1


vi /etc/fstab

/dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdd1 /srv/node/sdd1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sde1 /srv/node/sde1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

mount /srv/node/sdc1
mount /srv/node/sdd1
mount /srv/node/sde1

or mount -a
df -h


### Or mount directly without an fstab entry (after a reboot the mounts must be redone by hand)

mount /dev/sdc1 /srv/node/sdc1
mount /dev/sdd1 /srv/node/sdd1
mount /dev/sde1 /srv/node/sde1


vi  /etc/rsyncd.conf

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.56.102

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock

vi /etc/default/rsync
RSYNC_ENABLE=true
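
After enabling rsync, the daemon still needs to be started (this follows the upstream storage-node guide; the sysvinit service name is assumed for Ubuntu 16.04):

service rsync start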


apt-get install swift swift-account swift-container swift-object

curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/ocata

curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/ocata

curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/ocata


cp -a /etc/swift/account-server.conf /etc/swift/account-server.conf_org
cp -a /etc/swift/object-server.conf /etc/swift/object-server.conf_org
cp -a /etc/swift/container-server.conf /etc/swift/container-server.conf_org


cat /etc/swift/account-server.conf_org | grep -v ^$ | grep -v ^# > /etc/swift/account-server.conf
cat /etc/swift/object-server.conf_org | grep -v ^$ | grep -v ^# > /etc/swift/object-server.conf
cat /etc/swift/container-server.conf_org | grep -v ^$ | grep -v ^# > /etc/swift/container-server.conf

vi /etc/swift/account-server.conf

[DEFAULT]
bind_ip = 192.168.56.102
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon account-server

[app:account-server]
use = egg:swift#account

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

[account-replicator]

[account-auditor]

[account-reaper]

[filter:xprofile]
use = egg:swift#xprofile

vi  /etc/swift/container-server.conf

[DEFAULT]
bind_ip = 192.168.56.102
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon container-server

[app:container-server]
use = egg:swift#container

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

[container-replicator]

[container-updater]

[container-auditor]

[container-sync]

[filter:xprofile]
use = egg:swift#xprofile


vi /etc/swift/object-server.conf
[DEFAULT]
bind_ip = 192.168.56.102
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon object-server

[app:object-server]
use = egg:swift#object

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

[object-replicator]

[object-reconstructor]

[object-updater]

[object-auditor]

[filter:xprofile]
use = egg:swift#xprofile


chown -R swift:swift /srv/node

mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift

Create and initialize the rings

Run on the controller node

cd  /etc/swift
swift-ring-builder account.builder create 10 3 1

swift-ring-builder account.builder \
  add --region 1 --zone 1 --ip 192.168.56.102 --port 6202 \
  --device sdc1 --weight 100

swift-ring-builder account.builder \
 add --region 1 --zone 1 --ip 192.168.56.102 --port 6202 \
 --device sdd1 --weight 100

swift-ring-builder account.builder \
 add --region 1 --zone 1 --ip 192.168.56.102 --port 6202 \
 --device sde1 --weight 100


swift-ring-builder account.builder

swift-ring-builder account.builder rebalance


swift-ring-builder container.builder create 10 3 1

swift-ring-builder container.builder add \
  --region 1 --zone 1 --ip 192.168.56.102 --port 6201 --device sdc1 --weight 100

swift-ring-builder container.builder add \
  --region 1 --zone 1 --ip 192.168.56.102 --port 6201 --device sdd1 --weight 100

swift-ring-builder container.builder add \
 --region 1 --zone 1 --ip 192.168.56.102 --port 6201 --device sde1 --weight 100

swift-ring-builder container.builder
swift-ring-builder container.builder rebalance

swift-ring-builder object.builder create 10 3 1

swift-ring-builder object.builder add \
 --region 1 --zone 1 --ip 192.168.56.102 --port 6200 --device sdc1 --weight 100

swift-ring-builder object.builder add \
 --region 1 --zone 1 --ip 192.168.56.102 --port 6200 --device sdd1 --weight 100

swift-ring-builder object.builder add \
 --region 1 --zone 1 --ip 192.168.56.102 --port 6200 --device sde1 --weight 100

swift-ring-builder object.builder
swift-ring-builder object.builder rebalance



curl -o /etc/swift/swift.conf \
  https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/ocata

cp -a /etc/swift/swift.conf /etc/swift/swift.conf_org
cat /etc/swift/swift.conf_org | grep -v ^$ | grep -v ^# > /etc/swift/swift.conf

vi /etc/swift/swift.conf

[swift-hash]
swift_hash_path_suffix = openstack_swift_test_suffix
swift_hash_path_prefix = openstack_swift_test_prefix


scp /etc/swift/swift.conf hcshin@compute:~/
scp /etc/swift/*.ring.gz hcshin@compute:~/

### compute node
mv /home/hcshin/swift.conf /etc/swift/
mv /home/hcshin/*.ring.gz /etc/swift/
chown -R root:swift /etc/swift


### Restart processes on the controller
service memcached restart
service swift-proxy restart

### Restart processes on the compute node
swift-init all start
ps -ef | grep swift

### Verify from the controller node
cd /root/
. demo-openrc

swift stat
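
For a slightly fuller test than swift stat, a test container and an object upload can be tried; FILE below is a placeholder for any small local file:

openstack container create container1
openstack object create container1 FILE
openstack object list container1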

Other notes

  • If the compute node is itself a virtual machine, virt_type must be changed in /etc/nova/nova.conf:
    • virt_type = qemu (instead of kvm)

References

 

Hands-on video for this manual