
Refactor cloud-init support and VM Salt config seeding

Missing package dependencies added.

A missing "config" parameter has been added for the
qemu-nbd based seeding method.

A new seeding method utilising Cloud-init has been added.
The qemu-nbd based method remains the default
for backward compatibility.

To enable cloud-init, set the "seed" parameter at the
cluster or node level to "cloud-init".
To disable seeding, set this parameter to "false".
Setting it to "true" selects the default
"qemu-nbd" method.

The Salt minion config file will be created automatically
and may be overridden via cluster or node level
metadata:

  salt:
    control:
      cluster:
        mycluster:
          seed: cloud-init
          cloud_init:
            user_data:
              salt_minion:
                conf:
                  master: 10.1.1.1

or, for the qemu-nbd case:

  salt:
    control:
      cluster:
        mycluster:
          seed: true
          config:
            host: 10.1.1.1

This may be useful when the Salt master has two IPs in
different networks and one of them isn't reachable
from a VM at the moment the VM is created. Setting a reachable
Salt master IP via metadata helps avoid potential problems.

Also, a small optimization has been made: the libvirt
domain XML is now parsed and dumped only once while it is modified.

Change-Id: I091cf409cb43ba2d0a18eaf2a08c11e88d0334e2
Closes-Bug: PROD-22191
Andrei Danin · 6 years ago · commit 996e209324
6 changed files with 223 additions and 126 deletions:

  1. README.rst (+26, -1)
  2. _modules/cfgdrive.py (+66, -39)
  3. _modules/virtng.py (+108, -72)
  4. salt/control/virt.sls (+13, -14)
  5. salt/map.jinja (+2, -0)
  6. tests/pillar/control_virt_custom.sls (+8, -0)

README.rst (+26, -1)

  # Cluster global settings
  rng: false
  enable_vnc: True
  seed: cloud-init
  cloud_init:
    user_data:
      disable_ec2_metadata: true

  networks:
  - <<: *private-ipv4
    ip_address: 192.168.0.161
  user_data:
    salt_minion:
      conf:
        master: 10.1.1.1
  ubuntu2:
    seed: qemu-nbd
    cloud_init:
      enabled: false

There are two methods to seed an initial Salt minion configuration to
Libvirt VMs: mount a disk and update its filesystem, or create a ConfigDrive with
a Cloud-init config. This is controlled by the "seed" parameter on the cluster and
node levels. When set to _True_ or "qemu-nbd", the old method of mounting a disk
is used. When set to "cloud-init", the new method is used. When set
to _False_, no seeding happens. The default value is _True_, meaning
the "qemu-nbd" method is used. This is done for backward compatibility
and may change in the future.
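
For example, each mode can be selected with a one-line pillar setting
(the cluster name is illustrative):

  salt:
    control:
      cluster:
        mycluster:
          seed: cloud-init  # or "true" / "qemu-nbd" / "false"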

The recommended method is Cloud-init.
It is controlled by the "cloud_init" dictionary on the cluster and node levels.
Node-level parameters are merged on top of cluster-level parameters.
The Salt minion config is populated automatically based on the VM name and the config
settings of the minion that is actually executing the state. To override them,
add a "salt_minion" section into the "user_data" section as shown above.
Cloud-init can be disabled by setting "cloud_init.enabled" to _False_.
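
To illustrate the merge order with hypothetical values: a node-level
"master" overrides the cluster default, while cluster-level keys the node
does not set are kept:

  salt:
    control:
      cluster:
        mycluster:
          cloud_init:
            user_data:
              disable_ec2_metadata: true
          node:
            ubuntu1:
              cloud_init:
                user_data:
                  salt_minion:
                    conf:
                      master: 10.1.1.1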


To enable Redis plugin for the Salt caching subsystem, use the
below pillar structure:

* #salt-formulas @ irc.freenode.net

Use this IRC channel in case of any questions or feedback which is always
welcome


_modules/cfgdrive.py (+66, -39)

  # -*- coding: utf-8 -*-

+ import errno
  import json
  import logging
  import os
  import shutil
  import six
+ import subprocess
  import tempfile
+ import uuid
  import yaml

- from oslo_utils import uuidutils
- from oslo_utils import fileutils
- from oslo_concurrency import processutils
+ LOG = logging.getLogger(__name__)


  class ConfigDriveBuilder(object):
      """Build config drives, optionally as a context manager."""
          self.mdfiles=[]

      def __enter__(self):
-         fileutils.delete_if_exists(self.image_file)
+         self._delete_if_exists(self.image_file)
          return self

      def __exit__(self, exctype, excval, exctb):
          self.make_drive()

+     @staticmethod
+     def _ensure_tree(path):
+         try:
+             os.makedirs(path)
+         except OSError as e:
+             if e.errno == errno.EEXIST and os.path.isdir(path):
+                 pass
+             else:
+                 raise
+
+     @staticmethod
+     def _delete_if_exists(path):
+         try:
+             os.unlink(path)
+         except OSError as e:
+             if e.errno != errno.ENOENT:
+                 raise

      def add_file(self, path, data):
          self.mdfiles.append((path, data))

      def _add_file(self, basedir, path, data):
          filepath = os.path.join(basedir, path)
          dirname = os.path.dirname(filepath)
-         fileutils.ensure_tree(dirname)
+         self._ensure_tree(dirname)
          with open(filepath, 'wb') as f:
              if isinstance(data, six.text_type):
                  data = data.encode('utf-8')

          self._add_file(basedir, data[0], data[1])


      def _make_iso9660(self, path, tmpdir):
-         processutils.execute('mkisofs',
+         cmd = ['mkisofs',
                '-o', path,
                '-ldots',
                '-allow-lowercase',
                '-r',
                '-J',
                '-quiet',
-               tmpdir,
-               attempts=1,
-               run_as_root=False)
+               tmpdir]
+         try:
+             LOG.info('Running cmd (subprocess): %s', cmd)
+             _pipe = subprocess.PIPE
+             obj = subprocess.Popen(cmd,
+                                    stdin=_pipe,
+                                    stdout=_pipe,
+                                    stderr=_pipe,
+                                    close_fds=True)
+             (stdout, stderr) = obj.communicate()
+             obj.stdin.close()
+             _returncode = obj.returncode
+             LOG.debug('Cmd "%s" returned: %s', cmd, _returncode)
+             if _returncode != 0:
+                 output = 'Stdout: %s\nStderr: %s' % (stdout, stderr)
+                 LOG.error('The command "%s" failed. %s',
+                           cmd, output)
+                 raise subprocess.CalledProcessError(cmd=cmd,
+                                                     returncode=_returncode,
+                                                     output=output)
+         except OSError as err:
+             LOG.error('Got an OSError in the command: "%s". Errno: %s',
+                       cmd, err.errno)
+             raise


      def make_drive(self):
          """Make the config drive.
-         :raises ProcessExecuteError if a helper process has failed.
+         :raises CalledProcessError if a helper process has failed.
          """
          try:
              tmpdir = tempfile.mkdtemp()

          shutil.rmtree(tmpdir)




- def generate(
-         dst,
-         hostname,
-         domainname,
-         instance_id=None,
-         user_data=None,
-         network_data=None,
-         saltconfig=None
- ):
+ def generate(dst, hostname, domainname, instance_id=None, user_data=None,
+              network_data=None):

      ''' Generate config drive

      :param hostname: hostname of Instance.
      :param domainname: instance domain.
      :param instance_id: UUID of the instance.
-     :param user_data: custom user data dictionary. type: json
-     :param network_data: custom network info dictionary. type: json
-     :param saltconfig: salt minion configuration. type: json
+     :param user_data: custom user data dictionary.
+     :param network_data: custom network info dictionary.

      '''

      instance_md = {}
-     instance_md['uuid'] = instance_id or uuidutils.generate_uuid()
+     instance_md['uuid'] = instance_id or str(uuid.uuid4())
      instance_md['hostname'] = '%s.%s' % (hostname, domainname)
      instance_md['name'] = hostname

      if user_data:
-         user_data = '#cloud-config\n\n' + yaml.dump(yaml.load(user_data), default_flow_style=False)
-         if saltconfig:
-             user_data += yaml.dump(yaml.load(str(saltconfig)), default_flow_style=False)
+         user_data = '#cloud-config\n\n' + yaml.dump(user_data, default_flow_style=False)

      data = json.dumps(instance_md)

      with ConfigDriveBuilder(dst) as cfgdrive:
          cfgdrive.add_file('openstack/latest/meta_data.json', data)
          if user_data:
              cfgdrive.add_file('openstack/latest/user_data', user_data)
          if network_data:
-             cfgdrive.add_file('openstack/latest/network_data.json', network_data)
-         cfgdrive.add_file('openstack/latest/vendor_data.json', '{}')
-         cfgdrive.add_file('openstack/latest/vendor_data2.json', '{}')
+             cfgdrive.add_file('openstack/latest/network_data.json', json.dumps(network_data))
+     LOG.debug('Config drive was built %s' % dst)
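
For reference, a minimal sketch of how the refactored generate() is now
invoked; the call in _modules/virtng.py below does the same, while the
destination path and all values here are illustrative:

  __salt__["cfgdrive.generate"](
      dst='/srv/salt-images/vm01.example.com/config-2.iso',
      hostname='vm01',
      domainname='example.com',
      user_data={'salt_minion': {'conf': {'master': '10.1.1.1',
                                          'id': 'vm01.example.com'}}},
      network_data={'links': [], 'networks': [], 'services': []},
  )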

_modules/virtng.py (+108, -72)



  # Import python libs
  from __future__ import absolute_import
+ import collections
  import copy
  import os
  import re

  # Import third party libs
  import yaml
+ import json
  import jinja2
  import jinja2.exceptions
  import salt.ext.six as six
  from salt.ext.six.moves import StringIO as _StringIO  # pylint: disable=import-error
+ from salt.utils.odict import OrderedDict
  from xml.dom import minidom
+ from xml.etree import ElementTree
  try:
      import libvirt  # pylint: disable=import-error
      HAS_ALL_IMPORTS = True
          context['console'] = True

      context['disks'] = {}
+     context['cdrom'] = []
      for i, disk in enumerate(diskp):
          for disk_name, args in disk.items():
+             if args.get('device', 'disk') == 'cdrom':
+                 context['cdrom'].append(args)
+                 continue
              context['disks'][disk_name] = {}
              fn_ = '{0}.{1}'.format(disk_name, args['format'])
              context['disks'][disk_name]['file_name'] = fn_

          log.error('Could not load template {0}'.format(fn_))
          return ''


-     return template.render(**context)
+     xml = template.render(**context)
+
+     # Add cdrom devices separately because the current template doesn't support them.
+     if context['cdrom']:
+         xml_doc = ElementTree.fromstring(xml)
+         xml_devs = xml_doc.find('.//devices')
+         cdrom_xml_tmpl = """<disk type='file' device='cdrom'>
+           <driver name='{driver_name}' type='{driver_type}'/>
+           <source file='{filename}'/>
+           <target dev='{dev}' bus='{bus}'/>
+           <readonly/>
+         </disk>"""
+         for disk in context['cdrom']:
+             cdrom_elem = ElementTree.fromstring(cdrom_xml_tmpl.format(**disk))
+             xml_devs.append(cdrom_elem)
+         xml = ElementTree.tostring(xml_doc)
+     return xml
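
For reference, with the config_2 disk entry that create() appends later in
this diff, the element injected into the domain's <devices> renders as
follows (the image path is illustrative):

  <disk type='file' device='cdrom'>
    <driver name='qemu' type='raw'/>
    <source file='/srv/salt-images/vm01.example.com/config-2.iso'/>
    <target dev='hdc' bus='ide'/>
    <readonly/>
  </disk>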


  def _gen_vol_xml(vmname,
                   diskname,

      rng = rng or {'backend':'/dev/urandom'}
      hypervisor = __salt__['config.get']('libvirt:hypervisor', hypervisor)
+     if kwargs.get('seed') not in (False, True, None, 'qemu-nbd', 'cloud-init'):
+         log.warning(
+             "The seeding method '{0}' is not supported".format(kwargs.get('seed'))
+         )


      nicp = _nic_profile(nic, hypervisor, **kwargs)

      diskp[0][disk_name]['image'] = image

      # Create multiple disks, empty or from specified images.
-     cloud_init = None
-     cfg_drive = None
      for disk in diskp:
          log.debug("Creating disk for VM [ {0} ]: {1}".format(name, disk))

          except (IOError, OSError) as e:
              raise CommandExecutionError('problem while copying image. {0} - {1}'.format(args['image'], e))


-     if kwargs.get('seed'):
-         seed_cmd = kwargs.get('seed_cmd', 'seedng.apply')
-         cloud_init = kwargs.get('cloud_init', None)
-         master = __salt__['config.option']('master')
-         cfg_drive = os.path.join(img_dir,'config-2.iso')
-
-         if cloud_init:
-             _tmp = name.split('.')
-
-             try:
-                 user_data = json.dumps(cloud_init["user_data"])
-             except:
-                 user_data = None
-
-             try:
-                 network_data = json.dumps(cloud_init["network_data"])
-             except:
-                 network_data = None
-
-             __salt__["cfgdrive.generate"](
-                 dst = cfg_drive,
-                 hostname = _tmp.pop(0),
-                 domainname = '.'.join(_tmp),
-                 user_data = user_data,
-                 network_data = network_data,
-                 saltconfig = { "salt_minion": { "conf": { "master": master, "id": name } } }
-             )
-         else:
-             __salt__[seed_cmd](
-                 path = img_dest,
-                 id_ = name,
-                 config = kwargs.get('config'),
-                 install = kwargs.get('install', True)
-             )
+     if kwargs.get('seed') in (True, 'qemu-nbd'):
+         install = kwargs.get('install', True)
+         seed_cmd = kwargs.get('seed_cmd', 'seedng.apply')
+
+         __salt__[seed_cmd](img_dest,
+                            id_=name,
+                            config=kwargs.get('config'),
+                            install=install)
      else:
          # Create empty disk
          try:

          raise SaltInvocationError('Unsupported hypervisor when handling disk image: {0}'
                                    .format(hypervisor))


-     xml = _gen_xml(name, cpu, mem, diskp, nicp, hypervisor, **kwargs)
-
-     if cloud_init and cfg_drive:
-         xml_doc = minidom.parseString(xml)
-         iso_xml = xml_doc.createElement("disk")
-         iso_xml.setAttribute("type", "file")
-         iso_xml.setAttribute("device", "cdrom")
-         iso_xml.appendChild(xml_doc.createElement("readonly"))
-         driver = xml_doc.createElement("driver")
-         driver.setAttribute("name", "qemu")
-         driver.setAttribute("type", "raw")
-         target = xml_doc.createElement("target")
-         target.setAttribute("dev", "hdc")
-         target.setAttribute("bus", "ide")
-         source = xml_doc.createElement("source")
-         source.setAttribute("file", cfg_drive)
-         iso_xml.appendChild(driver)
-         iso_xml.appendChild(target)
-         iso_xml.appendChild(source)
-         xml_doc.getElementsByTagName("domain")[0].getElementsByTagName("devices")[0].appendChild(iso_xml)
-         xml = xml_doc.toxml()
+     cloud_init = kwargs.get('cloud_init', {})
+
+     # Seed Salt Minion config via Cloud-init if required.
+     if kwargs.get('seed') == 'cloud-init':
+         # Recursive dict update.
+         def rec_update(d, u):
+             for k, v in u.iteritems():
+                 if isinstance(v, collections.Mapping):
+                     d[k] = rec_update(d.get(k, {}), v)
+                 else:
+                     d[k] = v
+             return d
+
+         cloud_init_seed = {
+             "user_data": {
+                 "salt_minion": {
+                     "conf": {
+                         "master": __salt__['config.option']('master'),
+                         "id": name
+                     }
+                 }
+             }
+         }
+         cloud_init = rec_update(cloud_init_seed, cloud_init)
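
To make the merge direction concrete: pillar-supplied cloud_init data is
applied on top of the generated defaults, so a user-set master wins while
the generated id survives. A minimal sketch with hypothetical values:

  # Pillar data merged over the generated seed, key by key.
  seed = {'user_data': {'salt_minion': {'conf': {'master': '10.0.0.5',
                                                 'id': 'vm01.example.com'}}}}
  override = {'user_data': {'salt_minion': {'conf': {'master': '10.1.1.1'}}}}
  # rec_update(seed, override) ->
  # {'user_data': {'salt_minion': {'conf': {'master': '10.1.1.1',
  #                                         'id': 'vm01.example.com'}}}}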

+     # Create a cloud-init config drive if defined.
+     if cloud_init:
+         if hypervisor not in ['qemu', 'kvm']:
+             raise SaltInvocationError('Unsupported hypervisor when '
+                                       'handling Cloud-Init disk '
+                                       'image: {0}'.format(hypervisor))
+         cfg_drive = os.path.join(img_dir, 'config-2.iso')
+         vm_hostname, vm_domainname = name.split('.', 1)
+
+         def OrderedDict_to_dict(instance):
+             if isinstance(instance, basestring):
+                 return instance
+             elif isinstance(instance, collections.Sequence):
+                 return map(OrderedDict_to_dict, instance)
+             elif isinstance(instance, collections.Mapping):
+                 if isinstance(instance, OrderedDict):
+                     instance = dict(instance)
+                 for k, v in instance.iteritems():
+                     instance[k] = OrderedDict_to_dict(v)
+                 return instance
+             else:
+                 return instance
+
+         # yaml.dump dumps OrderedDict in a way that is incompatible with
+         # Cloud-init, hence all OrderedDicts have to be converted to dict first.
+         user_data = OrderedDict_to_dict(cloud_init.get('user_data', None))
+
+         __salt__["cfgdrive.generate"](
+             dst=cfg_drive,
+             hostname=vm_hostname,
+             domainname=vm_domainname,
+             user_data=user_data,
+             network_data=cloud_init.get('network_data', None),
+         )
+         diskp.append({
+             'config_2': {
+                 'device': 'cdrom',
+                 'driver_name': 'qemu',
+                 'driver_type': 'raw',
+                 'dev': 'hdc',
+                 'bus': 'ide',
+                 'filename': cfg_drive
+             }
+         })
+
+     xml = _gen_xml(name, cpu, mem, diskp, nicp, hypervisor, **kwargs)


      # TODO: Remove this code and refactor module, when salt-common would have updated libvirt_domain.jinja template
+     xml_doc = minidom.parseString(xml)
      if cpuset:
-         xml_doc = minidom.parseString(xml)
          xml_doc.getElementsByTagName("vcpu")[0].setAttribute('cpuset', cpuset)
-         xml = xml_doc.toxml()

      # TODO: Remove this code and refactor module, when salt-common would have updated libvirt_domain.jinja template
      if cpu_mode:
-         xml_doc = minidom.parseString(xml)
          cpu_xml = xml_doc.createElement("cpu")
          cpu_xml.setAttribute('mode', cpu_mode)
          xml_doc.getElementsByTagName("domain")[0].appendChild(cpu_xml)
-         xml = xml_doc.toxml()

      # TODO: Remove this code and refactor module, when salt-common would have updated libvirt_domain.jinja template
      if machine:
-         xml_doc = minidom.parseString(xml)
          os_xml = xml_doc.getElementsByTagName("domain")[0].getElementsByTagName("os")[0]
          os_xml.getElementsByTagName("type")[0].setAttribute('machine', machine)
-         xml = xml_doc.toxml()

      # TODO: Remove this code and refactor module, when salt-common would have updated libvirt_domain.jinja template
      if loader and 'path' not in loader:
          log.info('`path` is a required property of `loader`, and cannot be found. Skipping loader configuration')
          loader = None
      elif loader:
-         xml_doc = minidom.parseString(xml)
          loader_xml = xml_doc.createElement("loader")
          for key, val in loader.items():
              if key == 'path':
                  loader_path_xml = xml_doc.createTextNode(loader['path'])
                  loader_xml.appendChild(loader_path_xml)
          xml_doc.getElementsByTagName("domain")[0].getElementsByTagName("os")[0].appendChild(loader_xml)
-         xml = xml_doc.toxml()

      # TODO: Remove this code and refactor module, when salt-common would have updated libvirt_domain.jinja template
      for _nic in nicp:
          if _nic['virtualport']:
-             xml_doc = minidom.parseString(xml)
              interfaces = xml_doc.getElementsByTagName("domain")[0].getElementsByTagName("devices")[0].getElementsByTagName("interface")
              for interface in interfaces:
                  if interface.getElementsByTagName('mac')[0].getAttribute('address').lower() == _nic['mac'].lower():
                      vport_xml = xml_doc.createElement("virtualport")
                      vport_xml.setAttribute("type", _nic['virtualport']['type'])
                      interface.appendChild(vport_xml)
-             xml = xml_doc.toxml()

      # TODO: Remove this code and refactor module, when salt-common would have updated libvirt_domain.jinja template
      if rng:
          rng_model = rng.get('model', 'random')
          rng_backend = rng.get('backend', '/dev/urandom')
-         xml_doc = minidom.parseString(xml)
          rng_xml = xml_doc.createElement("rng")
          rng_xml.setAttribute("model", "virtio")
          backend = xml_doc.createElement("backend")

          rate.setAttribute("bytes", rng_rate_bytes)
          rng_xml.appendChild(rate)
          xml_doc.getElementsByTagName("domain")[0].getElementsByTagName("devices")[0].appendChild(rng_xml)
-         xml = xml_doc.toxml()

+     xml = xml_doc.toxml()
      define_xml_str(xml)

      if start:

salt/control/virt.sls (+13, -14)

  {%- if node.provider == grains.id %}

  {%- set size = control.size.get(node.size) %}
+ {%- set seed = node.get('seed', cluster.get('seed', True)) %}
  {%- set cluster_cloud_init = cluster.get('cloud_init', {}) %}
  {%- set node_cloud_init = node.get('cloud_init', {}) %}
  {%- set cloud_init = salt['grains.filter_by']({'default': cluster_cloud_init}, merge=node_cloud_init) %}

      - start: True
      - disk: {{ size.disk_profile }}
      - nic: {{ size.net_profile }}
  {%- if node.rng is defined %}
      - rng: {{ node.rng }}
  {%- elif rng is defined %}
      - rng: {{ rng }}
  {%- endif %}
  {%- if node.loader is defined %}
      - loader: {{ node.loader }}
  {%- endif %}
  {%- if node.machine is defined %}
      - machine: {{ node.machine }}
  {%- endif %}
  {%- if node.cpuset is defined %}
      - cpuset: {{ node.cpuset }}
  {%- endif %}
  {%- if node.cpu_mode is defined %}
      - cpu_mode: {{ node.cpu_mode }}
  {%- endif %}
      - kwargs:
- {%- if cloud_init is defined %}
+ {%- if cluster.config is defined %}
+         config: {{ cluster.config }}
+ {%- endif %}
+ {%- if cloud_init and cloud_init.get('enabled', True) %}
          cloud_init: {{ cloud_init }}
  {%- endif %}
-         seed: True
+ {%- if seed %}
+         seed: {{ seed }}
+ {%- endif %}
          serial_type: pty
          console: True
  {%- if node.enable_vnc is defined %}

      - require:
        - salt_libvirt_service

- #salt_control_seed_{{ cluster_name }}_{{ node_name }}:
- # module.run:
- # - name: seed.apply
- # - path: /srv/salt-images/{{ node_name }}.{{ cluster.domain }}/system.qcow2
- # - id_: {{ node_name }}.{{ cluster.domain }}
- # - unless: virsh list | grep {{ node_name }}.{{ cluster.domain }}

  {%- if node.get("autostart", True) %}

  salt_virt_autostart_{{ cluster_name }}_{{ node_name }}:

salt/map.jinja (+2, -0)

  virt_pkgs:
    - libvirt-dev
    - pkg-config
+   - genisoimage
  {% if grains.get('oscodename') == 'trusty' %}
    - libguestfs-tools
  {% endif %}

  virt_pkgs:
    - libvirt-dev
    - pkg-config
+   - genisoimage
  virt_service: 'libvirtd'
  {%- endload %}



tests/pillar/control_virt_custom.sls (+8, -0)

        config:
          engine: salt
          host: master.domain.com
+       seed: cloud-init
        cloud_init:
          user_data:
            disable_ec2_metadata: true

        image: bubuntu.qcomw
        size: small
        img_dest: /var/lib/libvirt/hddimages
+       seed: qemu-nbd
+       cloud_init:
+         enabled: false
        loader:
          readonly: yes
          type: pflash

        image: meowbuntu.qcom2
        size: medium_three_disks
        cloud_init:
+         user_data:
+           salt_minion:
+             config:
+               master: master.domain.com
          network_data:
            networks:
            - <<: *private-ipv4
