
Refactor cloud-init support and VM Salt config seeding

Added missing package dependencies.

Added a missing "config" parameter to the qemu-nbd based
seeding method.

Added a new seeding method utilising cloud-init.
The qemu-nbd based method remains the default
for backward compatibility.

To enable cloud-init, set the "seed" parameter at the
cluster or node level to "cloud-init".
To disable seeding, set this parameter to "false".
Setting it to "true" selects the default
"qemu-nbd" method.

The Salt Minion config file will be created automatically
and may be overridden via cluster- or node-level
metadata:

  salt:
    control:
      cluster:
        mycluster:
          seed: cloud-init
          cloud_init:
            user_data:
              salt_minion:
                conf:
                  master: 10.1.1.1

or, for the qemu-nbd case:

  salt:
    control:
      cluster:
        mycluster:
          seed: true
          config:
            host: 10.1.1.1

This may be useful when the Salt Master has two IPs in
different networks and one of them isn't accessible
from a VM at the moment it is created. Setting a reachable
Salt Master IP via metadata helps avoid such problems.

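For instance, with the qemu-nbd example above, the seeding
module would write a Salt Minion config into the VM image
roughly like this (an illustrative sketch; the id value is
derived from the node name and cluster domain):

  master: 10.1.1.1
  id: ubuntu1.mycluster.local
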
Also, a small optimization: the libvirt XML is now
parsed and dumped only once while being modified.

Change-Id: I091cf409cb43ba2d0a18eaf2a08c11e88d0334e2
Closes-Bug: PROD-22191
pull/78/head
Andrei Danin, 6 years ago
parent commit 996e209324
6 changed files with 223 additions and 126 deletions
  1. README.rst (+26, -1)
  2. _modules/cfgdrive.py (+66, -39)
  3. _modules/virtng.py (+108, -72)
  4. salt/control/virt.sls (+13, -14)
  5. salt/map.jinja (+2, -0)
  6. tests/pillar/control_virt_custom.sls (+8, -0)

README.rst (+26, -1)

@@ -467,6 +467,7 @@ Control VM provisioning:
# Cluster global settings
rng: false
enable_vnc: True
seed: cloud-init
cloud_init:
user_data:
disable_ec2_metadata: true
@@ -517,6 +518,31 @@ Control VM provisioning:
networks:
- <<: *private-ipv4
ip_address: 192.168.0.161
user_data:
salt_minion:
conf:
master: 10.1.1.1
ubuntu2:
seed: qemu-nbd
cloud_init:
enabled: false

There are two methods to seed an initial Salt minion configuration into
Libvirt VMs: mounting a disk and updating its filesystem, or creating a
ConfigDrive with a Cloud-init config. This is controlled by the "seed"
parameter on cluster and node levels. When set to _True_ or "qemu-nbd", the
old method of mounting a disk will be used. When set to "cloud-init", the new
method will be used. When set to _False_, no seeding will happen. The default
value is _True_, meaning the "qemu-nbd" method will be used. This is kept for
backward compatibility and may change in the future.

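For example, a minimal pillar sketch of the three modes (cluster name and
values are illustrative):

    salt:
      control:
        cluster:
          mycluster:
            seed: cloud-init   # or "qemu-nbd" (same as true), or false
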
The recommended method is to use Cloud-init.
It's controlled by the "cloud_init" dictionary on cluster and node levels.
Node level parameters are merged on top of cluster level parameters.
The Salt Minion config is populated automatically based on the VM name and the
config settings of the minion that is actually executing the state. To override
them, add the "salt_minion" section into the "user_data" section as shown above.
It is possible to disable Cloud-init by setting "cloud_init.enabled" to _False_.

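For example, a node-level master override merged on top of a cluster-level
default might look like this (hypothetical pillar; the node section follows
the cluster schema shown earlier):

    salt:
      control:
        cluster:
          mycluster:
            cloud_init:
              user_data:
                salt_minion:
                  conf:
                    master: 10.1.1.1
            node:
              ubuntu1:
                cloud_init:
                  user_data:
                    salt_minion:
                      conf:
                        master: 10.2.2.2   # the node level wins after the merge
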
To enable Redis plugin for the Salt caching subsystem, use the
below pillar structure:
@@ -932,4 +958,3 @@ Documentation and Bugs
* #salt-formulas @ irc.freenode.net
Use this IRC channel in case of any questions or feedback which is always
welcome


_modules/cfgdrive.py (+66, -39)

@@ -1,16 +1,17 @@
# -*- coding: utf-8 -*-

import errno
import json
import logging
import os
import shutil
import six
import subprocess
import tempfile
import uuid
import yaml

from oslo_utils import uuidutils
from oslo_utils import fileutils
from oslo_concurrency import processutils
LOG = logging.getLogger(__name__)

class ConfigDriveBuilder(object):
"""Build config drives, optionally as a context manager."""
@@ -20,19 +21,37 @@ class ConfigDriveBuilder(object):
self.mdfiles=[]

def __enter__(self):
fileutils.delete_if_exists(self.image_file)
self._delete_if_exists(self.image_file)
return self

def __exit__(self, exctype, excval, exctb):
self.make_drive()

@staticmethod
def _ensure_tree(path):
try:
os.makedirs(path)
except OSError as e:
if e.errno == errno.EEXIST and os.path.isdir(path):
pass
else:
raise

@staticmethod
def _delete_if_exists(path):
try:
os.unlink(path)
except OSError as e:
if e.errno != errno.ENOENT:
raise

def add_file(self, path, data):
self.mdfiles.append((path, data))

def _add_file(self, basedir, path, data):
filepath = os.path.join(basedir, path)
dirname = os.path.dirname(filepath)
fileutils.ensure_tree(dirname)
self._ensure_tree(dirname)
with open(filepath, 'wb') as f:
if isinstance(data, six.text_type):
data = data.encode('utf-8')
@@ -43,8 +62,7 @@ class ConfigDriveBuilder(object):
self._add_file(basedir, data[0], data[1])

def _make_iso9660(self, path, tmpdir):

processutils.execute('mkisofs',
cmd = ['mkisofs',
'-o', path,
'-ldots',
'-allow-lowercase',
@@ -54,13 +72,34 @@ class ConfigDriveBuilder(object):
'-r',
'-J',
'-quiet',
tmpdir,
attempts=1,
run_as_root=False)
tmpdir]
try:
LOG.info('Running cmd (subprocess): %s', cmd)
_pipe = subprocess.PIPE
obj = subprocess.Popen(cmd,
stdin=_pipe,
stdout=_pipe,
stderr=_pipe,
close_fds=True)
(stdout, stderr) = obj.communicate()
obj.stdin.close()
_returncode = obj.returncode
LOG.debug('Cmd "%s" returned: %s', cmd, _returncode)
if _returncode != 0:
output = 'Stdout: %s\nStderr: %s' % (stdout, stderr)
LOG.error('The command "%s" failed. %s',
cmd, output)
raise subprocess.CalledProcessError(cmd=cmd,
returncode=_returncode,
output=output)
except OSError as err:
LOG.error('Got an OSError in the command: "%s". Errno: %s', cmd,
err.errno)
raise

def make_drive(self):
"""Make the config drive.
:raises ProcessExecuteError if a helper process has failed.
:raises CalledProcessError if a helper process has failed.
"""
try:
tmpdir = tempfile.mkdtemp()
@@ -70,15 +109,8 @@ class ConfigDriveBuilder(object):
shutil.rmtree(tmpdir)


def generate(
dst,
hostname,
domainname,
instance_id=None,
user_data=None,
network_data=None,
saltconfig=None
):
def generate(dst, hostname, domainname, instance_id=None, user_data=None,
network_data=None):

''' Generate config drive

@@ -86,29 +118,24 @@ def generate(
:param hostname: hostname of Instance.
:param domainname: instance domain.
:param instance_id: UUID of the instance.
:param user_data: custom user data dictionary. type: json
:param network_data: custom network info dictionary. type: json
:param saltconfig: salt minion configuration. type: json
:param user_data: custom user data dictionary.
:param network_data: custom network info dictionary.

'''

instance_md = {}
instance_md['uuid'] = instance_id or uuidutils.generate_uuid()
instance_md['hostname'] = '%s.%s' % (hostname, domainname)
instance_md['name'] = hostname
instance_md = {}
instance_md['uuid'] = instance_id or str(uuid.uuid4())
instance_md['hostname'] = '%s.%s' % (hostname, domainname)
instance_md['name'] = hostname

if user_data:
user_data = '#cloud-config\n\n' + yaml.dump(yaml.load(user_data), default_flow_style=False)
if saltconfig:
user_data += yaml.dump(yaml.load(str(saltconfig)), default_flow_style=False)
user_data = '#cloud-config\n\n' + yaml.dump(user_data, default_flow_style=False)

data = json.dumps(instance_md)

with ConfigDriveBuilder(dst) as cfgdrive:
cfgdrive.add_file('openstack/latest/meta_data.json', data)
if user_data:
cfgdrive.add_file('openstack/latest/user_data', user_data)
if network_data:
cfgdrive.add_file('openstack/latest/network_data.json', network_data)
cfgdrive.add_file('openstack/latest/vendor_data.json', '{}')
cfgdrive.add_file('openstack/latest/vendor_data2.json', '{}')
cfgdrive.add_file('openstack/latest/meta_data.json', data)
if user_data:
cfgdrive.add_file('openstack/latest/user_data', user_data)
if network_data:
cfgdrive.add_file('openstack/latest/network_data.json', json.dumps(network_data))
LOG.debug('Config drive was built %s' % dst)

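A minimal usage sketch of the refactored generate() (values are illustrative;
within the formula it is reached as __salt__['cfgdrive.generate']):

    generate(
        dst='/srv/salt-images/vm01.example.com/config-2.iso',
        hostname='vm01',
        domainname='example.com',
        user_data={'salt_minion': {'conf': {'master': '10.1.1.1',
                                            'id': 'vm01.example.com'}}},
        network_data={'links': [], 'networks': [], 'services': []},
    )

The user_data dict is dumped as a '#cloud-config' YAML document and
network_data is serialized to JSON, as in the code above.
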
_modules/virtng.py (+108, -72)

@@ -9,6 +9,7 @@ Work with virtual machines managed by libvirt

# Import python libs
from __future__ import absolute_import
import collections
import copy
import os
import re
@@ -20,12 +21,13 @@ import logging

# Import third party libs
import yaml
import json
import jinja2
import jinja2.exceptions
import salt.ext.six as six
from salt.ext.six.moves import StringIO as _StringIO # pylint: disable=import-error
from salt.utils.odict import OrderedDict
from xml.dom import minidom
from xml.etree import ElementTree
try:
import libvirt # pylint: disable=import-error
HAS_ALL_IMPORTS = True
@@ -253,8 +255,12 @@ def _gen_xml(name,
context['console'] = True

context['disks'] = {}
context['cdrom'] = []
for i, disk in enumerate(diskp):
for disk_name, args in disk.items():
if args.get('device', 'disk') == 'cdrom':
context['cdrom'].append(args)
continue
context['disks'][disk_name] = {}
fn_ = '{0}.{1}'.format(disk_name, args['format'])
context['disks'][disk_name]['file_name'] = fn_
@@ -282,8 +288,23 @@ def _gen_xml(name,
log.error('Could not load template {0}'.format(fn_))
return ''

return template.render(**context)

xml = template.render(**context)

# Add cdrom devices separately because the current template doesn't support them.
if context['cdrom']:
xml_doc = ElementTree.fromstring(xml)
xml_devs = xml_doc.find('.//devices')
cdrom_xml_tmpl = """<disk type='file' device='cdrom'>
<driver name='{driver_name}' type='{driver_type}'/>
<source file='{filename}'/>
<target dev='{dev}' bus='{bus}'/>
<readonly/>
</disk>"""
for disk in context['cdrom']:
cdrom_elem = ElementTree.fromstring(cdrom_xml_tmpl.format(**disk))
xml_devs.append(cdrom_elem)
xml = ElementTree.tostring(xml_doc)
return xml

def _gen_vol_xml(vmname,
diskname,
@@ -552,6 +573,10 @@ def init(name,

rng = rng or {'backend':'/dev/urandom'}
hypervisor = __salt__['config.get']('libvirt:hypervisor', hypervisor)
if kwargs.get('seed') not in (False, True, None, 'qemu-nbd', 'cloud-init'):
log.warning(
"The seeding method '{0}' is not supported".format(kwargs.get('seed'))
)

nicp = _nic_profile(nic, hypervisor, **kwargs)

@@ -566,9 +591,6 @@ def init(name,
diskp[0][disk_name]['image'] = image

# Create multiple disks, empty or from specified images.
cloud_init = None
cfg_drive = None

for disk in diskp:
log.debug("Creating disk for VM [ {0} ]: {1}".format(name, disk))

@@ -628,40 +650,14 @@ def init(name,
except (IOError, OSError) as e:
raise CommandExecutionError('problem while copying image. {0} - {1}'.format(args['image'], e))

if kwargs.get('seed'):
seed_cmd = kwargs.get('seed_cmd', 'seedng.apply')
cloud_init = kwargs.get('cloud_init', None)
master = __salt__['config.option']('master')
cfg_drive = os.path.join(img_dir,'config-2.iso')

if cloud_init:
_tmp = name.split('.')

try:
user_data = json.dumps(cloud_init["user_data"])
except:
user_data = None

try:
network_data = json.dumps(cloud_init["network_data"])
except:
network_data = None

__salt__["cfgdrive.generate"](
dst = cfg_drive,
hostname = _tmp.pop(0),
domainname = '.'.join(_tmp),
user_data = user_data,
network_data = network_data,
saltconfig = { "salt_minion": { "conf": { "master": master, "id": name } } }
)
else:
__salt__[seed_cmd](
path = img_dest,
id_ = name,
config = kwargs.get('config'),
install = kwargs.get('install', True)
)
if kwargs.get('seed') in (True, 'qemu-nbd'):
install = kwargs.get('install', True)
seed_cmd = kwargs.get('seed_cmd', 'seedng.apply')

__salt__[seed_cmd](img_dest,
id_=name,
config=kwargs.get('config'),
install=install)
else:
# Create empty disk
try:
@@ -684,55 +680,99 @@ def init(name,
raise SaltInvocationError('Unsupported hypervisor when handling disk image: {0}'
.format(hypervisor))

xml = _gen_xml(name, cpu, mem, diskp, nicp, hypervisor, **kwargs)
cloud_init = kwargs.get('cloud_init', {})

if cloud_init and cfg_drive:
xml_doc = minidom.parseString(xml)
iso_xml = xml_doc.createElement("disk")
iso_xml.setAttribute("type", "file")
iso_xml.setAttribute("device", "cdrom")
iso_xml.appendChild(xml_doc.createElement("readonly"))
driver = xml_doc.createElement("driver")
driver.setAttribute("name", "qemu")
driver.setAttribute("type", "raw")
target = xml_doc.createElement("target")
target.setAttribute("dev", "hdc")
target.setAttribute("bus", "ide")
source = xml_doc.createElement("source")
source.setAttribute("file", cfg_drive)
iso_xml.appendChild(driver)
iso_xml.appendChild(target)
iso_xml.appendChild(source)
xml_doc.getElementsByTagName("domain")[0].getElementsByTagName("devices")[0].appendChild(iso_xml)
xml = xml_doc.toxml()
# Seed Salt Minion config via Cloud-init if required.
if kwargs.get('seed') == 'cloud-init':
# Recursive dict update.
def rec_update(d, u):
for k, v in u.iteritems():
if isinstance(v, collections.Mapping):
d[k] = rec_update(d.get(k, {}), v)
else:
d[k] = v
return d

cloud_init_seed = {
"user_data": {
"salt_minion": {
"conf": {
"master": __salt__['config.option']('master'),
"id": name
}
}
}
}
cloud_init = rec_update(cloud_init_seed, cloud_init)

# Create a cloud-init config drive if defined.
if cloud_init:
if hypervisor not in ['qemu', 'kvm']:
raise SaltInvocationError('Unsupported hypervisor when '
'handling Cloud-Init disk '
'image: {0}'.format(hypervisor))
cfg_drive = os.path.join(img_dir, 'config-2.iso')
vm_hostname, vm_domainname = name.split('.', 1)

def OrderedDict_to_dict(instance):
if isinstance(instance, basestring):
return instance
elif isinstance(instance, collections.Sequence):
return map(OrderedDict_to_dict, instance)
elif isinstance(instance, collections.Mapping):
if isinstance(instance, OrderedDict):
instance = dict(instance)
for k, v in instance.iteritems():
instance[k] = OrderedDict_to_dict(v)
return instance
else:
return instance

# yaml.dump serializes OrderedDict in a way that is incompatible with
# cloud-init, hence all OrderedDicts have to be converted to dicts first.
user_data = OrderedDict_to_dict(cloud_init.get('user_data', None))

__salt__["cfgdrive.generate"](
dst=cfg_drive,
hostname=vm_hostname,
domainname=vm_domainname,
user_data=user_data,
network_data=cloud_init.get('network_data', None),
)
diskp.append({
'config_2': {
'device': 'cdrom',
'driver_name': 'qemu',
'driver_type': 'raw',
'dev': 'hdc',
'bus': 'ide',
'filename': cfg_drive
}
})

xml = _gen_xml(name, cpu, mem, diskp, nicp, hypervisor, **kwargs)

# TODO: Remove this code and refactor the module once salt-common has updated the libvirt_domain.jinja template
xml_doc = minidom.parseString(xml)
if cpuset:
xml_doc = minidom.parseString(xml)
xml_doc.getElementsByTagName("vcpu")[0].setAttribute('cpuset', cpuset)
xml = xml_doc.toxml()

# TODO: Remove this code and refactor the module once salt-common has updated the libvirt_domain.jinja template
if cpu_mode:
xml_doc = minidom.parseString(xml)
cpu_xml = xml_doc.createElement("cpu")
cpu_xml.setAttribute('mode', cpu_mode)
xml_doc.getElementsByTagName("domain")[0].appendChild(cpu_xml)
xml = xml_doc.toxml()

# TODO: Remove this code and refactor the module once salt-common has updated the libvirt_domain.jinja template
if machine:
xml_doc = minidom.parseString(xml)
os_xml = xml_doc.getElementsByTagName("domain")[0].getElementsByTagName("os")[0]
os_xml.getElementsByTagName("type")[0].setAttribute('machine', machine)
xml = xml_doc.toxml()

# TODO: Remove this code and refactor the module once salt-common has updated the libvirt_domain.jinja template
if loader and 'path' not in loader:
log.info('`path` is a required property of `loader`, and cannot be found. Skipping loader configuration')
loader = None
elif loader:
xml_doc = minidom.parseString(xml)
loader_xml = xml_doc.createElement("loader")
for key, val in loader.items():
if key == 'path':
@@ -741,25 +781,21 @@ def init(name,
loader_path_xml = xml_doc.createTextNode(loader['path'])
loader_xml.appendChild(loader_path_xml)
xml_doc.getElementsByTagName("domain")[0].getElementsByTagName("os")[0].appendChild(loader_xml)
xml = xml_doc.toxml()

# TODO: Remove this code and refactor the module once salt-common has updated the libvirt_domain.jinja template
for _nic in nicp:
if _nic['virtualport']:
xml_doc = minidom.parseString(xml)
interfaces = xml_doc.getElementsByTagName("domain")[0].getElementsByTagName("devices")[0].getElementsByTagName("interface")
for interface in interfaces:
if interface.getElementsByTagName('mac')[0].getAttribute('address').lower() == _nic['mac'].lower():
vport_xml = xml_doc.createElement("virtualport")
vport_xml.setAttribute("type", _nic['virtualport']['type'])
interface.appendChild(vport_xml)
xml = xml_doc.toxml()

# TODO: Remove this code and refactor the module once salt-common has updated the libvirt_domain.jinja template
if rng:
rng_model = rng.get('model', 'random')
rng_backend = rng.get('backend', '/dev/urandom')
xml_doc = minidom.parseString(xml)
rng_xml = xml_doc.createElement("rng")
rng_xml.setAttribute("model", "virtio")
backend = xml_doc.createElement("backend")
@@ -774,8 +810,8 @@ def init(name,
rate.setAttribute("bytes", rng_rate_bytes)
rng_xml.appendChild(rate)
xml_doc.getElementsByTagName("domain")[0].getElementsByTagName("devices")[0].appendChild(rng_xml)
xml = xml_doc.toxml()

xml = xml_doc.toxml()
define_xml_str(xml)

if start:

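A hedged sketch of how the refactored init() accepts the new kwargs (values
are illustrative; the function is reached through the execution module
loader):

    __salt__['virtng.init'](
        'vm01.example.com',   # VM name: hostname.domain
        2,                    # cpu
        4096,                 # mem (MB)
        image='/var/lib/libvirt/images/ubuntu.qcow2',
        seed='cloud-init',    # or True / 'qemu-nbd' / False
        cloud_init={'user_data': {'salt_minion': {'conf': {'master': '10.1.1.1'}}}},
    )

The master/id defaults are merged underneath whatever is passed in cloud_init,
as implemented by rec_update() above.
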
salt/control/virt.sls (+13, -14)

@@ -49,6 +49,7 @@ salt_libvirt_service:
{%- if node.provider == grains.id %}

{%- set size = control.size.get(node.size) %}
{%- set seed = node.get('seed', cluster.get('seed', True)) %}
{%- set cluster_cloud_init = cluster.get('cloud_init', {}) %}
{%- set node_cloud_init = node.get('cloud_init', {}) %}
{%- set cloud_init = salt['grains.filter_by']({'default': cluster_cloud_init}, merge=node_cloud_init) %}
@@ -63,28 +64,33 @@ salt_control_virt_{{ cluster_name }}_{{ node_name }}:
- start: True
- disk: {{ size.disk_profile }}
- nic: {{ size.net_profile }}
{%- if node.rng is defined %}
{%- if node.rng is defined %}
- rng: {{ node.rng }}
{%- elif rng is defined %}
- rng: {{ rng }}
{%- endif %}
{%- if node.loader is defined %}
{%- if node.loader is defined %}
- loader: {{ node.loader }}
{%- endif %}
{%- if node.machine is defined %}
{%- if node.machine is defined %}
- machine: {{ node.machine }}
{%- endif %}
{%- if node.cpuset is defined %}
{%- if node.cpuset is defined %}
- cpuset: {{ node.cpuset }}
{%- endif %}
{%- if node.cpu_mode is defined %}
{%- if node.cpu_mode is defined %}
- cpu_mode: {{ node.cpu_mode }}
{%- endif %}
- kwargs:
{%- if cloud_init is defined %}
{%- if cluster.config is defined %}
config: {{ cluster.config }}
{%- endif %}
{%- if cloud_init and cloud_init.get('enabled', True) %}
cloud_init: {{ cloud_init }}
{%- endif %}
seed: True
{%- if seed %}
seed: {{ seed }}
{%- endif %}
serial_type: pty
console: True
{%- if node.enable_vnc is defined %}
@@ -104,13 +110,6 @@ salt_control_virt_{{ cluster_name }}_{{ node_name }}:
- require:
- salt_libvirt_service

#salt_control_seed_{{ cluster_name }}_{{ node_name }}:
# module.run:
# - name: seed.apply
# - path: /srv/salt-images/{{ node_name }}.{{ cluster.domain }}/system.qcow2
# - id_: {{ node_name }}.{{ cluster.domain }}
# - unless: virsh list | grep {{ node_name }}.{{ cluster.domain }}

{%- if node.get("autostart", True) %}

salt_virt_autostart_{{ cluster_name }}_{{ node_name }}:

salt/map.jinja (+2, -0)

@@ -192,6 +192,7 @@ Debian:
virt_pkgs:
- libvirt-dev
- pkg-config
- genisoimage
{% if grains.get('oscodename') == 'trusty' %}
- libguestfs-tools
{% endif %}
@@ -201,6 +202,7 @@ RedHat:
virt_pkgs:
- libvirt-dev
- pkg-config
- genisoimage
virt_service: 'libvirtd'
{%- endload %}


tests/pillar/control_virt_custom.sls (+8, -0)

@@ -67,6 +67,7 @@ salt:
config:
engine: salt
host: master.domain.com
seed: cloud-init
cloud_init:
user_data:
disable_ec2_metadata: true
@@ -104,6 +105,9 @@ salt:
image: bubuntu.qcomw
size: small
img_dest: /var/lib/libvirt/hddimages
seed: qemu-nbd
cloud_init:
enabled: false
loader:
readonly: yes
type: pflash
@@ -113,6 +117,10 @@ salt:
image: meowbuntu.qcom2
size: medium_three_disks
cloud_init:
user_data:
salt_minion:
conf:
master: master.domain.com
network_data:
networks:
- <<: *private-ipv4
