Convert indentation from tabs to spaces (4)

Up until now I didn't see the point of using spaces for indentation.
However, the previous commit (a18bec3) was quite eye-opening.
Given that Python is an indentation-aware language, the number of
mistakes that went unnoticed because tabs and spaces were mixed
(tabs for indentation and spaces for alignment) was unacceptable.
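
To make the failure mode concrete, here is a hypothetical sketch (not code
from this repository): Python 2, which this codebase targets, expands a tab
to the next multiple of eight columns, so a file that mixes tabs and spaces
can parse differently from how it renders in an editor set to a tab width
of four.

    # Hypothetical example. The lines marked "tab" were indented with tab
    # characters; they are shown here at the eight-column width the
    # interpreter actually parses them at.
    def flush(queue):
            while queue:                # one tab  -> parsed at column 8
                    item = queue.pop()  # two tabs -> parsed at column 16
            print(item)                 # eight spaces -> also column 8
    # With a tab width of four the editor renders print(item) deeper than
    # 'while', so it looks like part of the loop body, yet the interpreter
    # places it at the same level as 'while' and it runs once after the
    # loop instead of once per item. E101/W191 flag exactly this mix.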

E101 and W191 have been re-enabled in the tox flake8 checker and
the documentation has been modified accordingly.
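
For reference, the flake8 rules live in the tox configuration file; a minimal
sketch of what the relevant section looks like after this change (the option
names follow standard flake8 configuration, the remaining ignores are taken
from the contribution guidelines below; the actual tox.ini is not reproduced
in this diff):

    [flake8]
    max-line-length = 110
    ignore = E221,E241,E501
    # E101 (indentation contains mixed spaces and tabs) and W191 (indentation
    # contains tabs) are no longer ignored, so flake8 reports them again.

Running flake8 over the tree (or the corresponding tox environment) now
flags any reintroduced tab indentation.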

The following files have been left as-is:
* bootstrapvz/common/assets/extlinux/extlinux.conf
* bootstrapvz/common/assets/init.d/expand-root
* bootstrapvz/common/assets/init.d/generate-ssh-hostkeys
* bootstrapvz/common/assets/init.d/squeeze/generate-ssh-hostkeys
* bootstrapvz/plugins/docker_daemon/assets/init.d/docker
* bootstrapvz/providers/ec2/assets/bin/growpart
* bootstrapvz/providers/ec2/assets/grub.d/40_custom
* bootstrapvz/providers/ec2/assets/init.d/ec2-get-credentials
* bootstrapvz/providers/ec2/assets/init.d/ec2-run-user-data
* docs/_static/taskoverview.coffee
* docs/_static/taskoverview.less
* tests/unit/subprocess.sh
Anders Ingemann, 2016-06-04 11:35:59 +02:00
commit f62c8ade99, parent 2d6a026160
186 changed files with 7284 additions and 7286 deletions

@@ -5,130 +5,130 @@ Changelog

2016-06-02
----------
Peter Wagner
    * Added ec2_publish plugin

2016-06-02
----------
Zach Marano:
    * Fix expand-root script to work with newer version of growpart (in jessie-backports and beyond).
    * Overhaul Google Compute Engine image build.
    * Add support for Google Cloud repositories.
    * Google Cloud SDK install uses a deb package from a Google Cloud repository.
    * Google Compute Engine guest software is installed from a Google Cloud repository.
    * Google Compute Engine guest software for Debian 8 is updated to new refactor.
    * Google Compute Engine wheezy and wheezy-backports manifests are deprecated.

2016-03-03
----------
Anders Ingemann:
    * Rename integration tests to system tests

2016-02-23
----------
Nicolas Braud-Santoni:
    * #282, #290: Added 'debconf' plugin
    * #290: Relaxed requirements on plugins manifests

2016-02-10
----------
Manoj Srivastava:
    * #252: Added support for password and static pubkey auth

2016-02-06
----------
Tiago Ilieve:
    * Added Oracle Compute Cloud provider
    * #280: Declared Squeeze as unsupported

2016-01-14
----------
Jesse Szwedko:
    * #269: EC2: Added growpart script extension

2016-01-10
----------
Clark Laughlin:
    * Enabled support for KVM on arm64

2015-12-19
----------
Tim Sattarov:
    * #263: Ignore loopback interface in udev rules (reduces startup of networking by a factor of 10)

2015-12-13
----------
Anders Ingemann:
    * Docker provider implemented (including integration testing harness & tests)
    * minimize_size: Added various size reduction options for dpkg and apt
    * Removed image section in manifest.
      Provider specific options have been moved to the provider section.
      The image name is now specified on the top level of the manifest with "name"
    * Provider docs have been greatly improved. All now list their special options.
    * All manifest option documentation is now accompanied by an example.
    * Added documentation for the integration test providers

2015-11-13
----------
Marcin Kulisz:
    * Exclude docs from binary package

2015-10-20
----------
Max Illfelder:
    * Remove support for the GCE Debian mirror

2015-10-14
----------
Anders Ingemann:
    * Bootstrap azure images directly to VHD

2015-09-28
----------
Rick Wright:
    * Change GRUB_HIDDEN_TIMEOUT to 0 from true and set GRUB_HIDDEN_TIMEOUT_QUIET to true.

2015-09-24
----------
Rick Wright:
    * Fix a problem with Debian 8 on GCE with >2TB disks

2015-09-04
----------
Emmanuel Kasper:
    * Set Virtualbox memory to 512 MB

2015-08-07
----------
Tiago Ilieve:
    * Change default Debian mirror

2015-08-06
----------
Stephen A. Zarkos:
    * Azure: Change default shell in /etc/default/useradd for Azure images
    * Azure: Add boot parameters to Azure config to ease local debugging
    * Azure: Add apt import for backports
    * Azure: Comment GRUB_HIDDEN_TIMEOUT so we can set GRUB_TIMEOUT
    * Azure: Wheezy images use wheezy-backports kernel by default
    * Azure: Change Wheezy image to use single partition
    * Azure: Update WALinuxAgent to use 2.0.14
    * Azure: Make sure we can override grub.ConfigureGrub for Azure images
    * Azure: Add console=tty0 to see kernel/boot messsages on local console
    * Azure: Set serial port speed to 115200
    * Azure: Fix error with applying azure/assets/udev.diff

2015-07-30
----------
James Bromberger:
    * AWS: Support multiple ENI
    * AWS: PVGRUB AKIs for Frankfurt region

2015-06-29
----------
Alex Adriaanse:
    * Fix DKMS kernel version error
    * Add support for Btrfs
    * Add EC2 Jessie HVM manifest

2015-05-08
----------
@@ -138,143 +138,143 @@ Alexandre Derumier:

2015-05-02
----------
Anders Ingemann:
    * Fix #32: Add image_commands example
    * Fix #99: rename image_commands to commands
    * Fix #139: Vagrant / Virtualbox provider should set ostype when 32 bits selected
    * Fix #204: Create a new phase where user modification tasks can run

2015-04-29
----------
Anders Ingemann:
    * Fix #104: Don't verify default target when adding packages
    * Fix #217: Implement get_version() function in common.tools

2015-04-28
----------
Jonh Wendell:
    * root_password: Enable SSH root login

2015-04-27
----------
John Kristensen:
    * Add authentication support to the apt proxy plugin

2015-04-25
----------
Anders Ingemann (work started 2014-08-31, merged on 2015-04-25):
    * Introduce `remote bootstrapping <bootstrapvz/remote>`__
    * Introduce `integration testing <tests/integration>`__ (for VirtualBox and EC2)
    * Merge the end-user documentation into the sphinx docs
      (plugin & provider docs are now located in their respective folders as READMEs)
    * Include READMEs in sphinx docs and transform their links
    * Docs for integration testing
    * Document the remote bootstrapping procedure
    * Add documentation about the documentation
    * Add list of supported builds to the docs
    * Add html output to integration tests
    * Implement PR #201 by @jszwedko (bump required euca2ools version)
    * grub now works on jessie
    * extlinux is now running on jessie
    * Issue warning when specifying pre/successors across phases (but still error out if it's a conflict)
    * Add salt dependencies in the right phase
    * extlinux now works with GPT on HVM instances
    * Take @ssgelm's advice in #155 and copy the mount table -- df warnings no more
    * Generally deny installing grub on squeeze (too much of a hassle to get working, PRs welcome)
    * Add 1 sector gap between partitions on GPT
    * Add new task: DeterminKernelVersion, this can potentially fix a lot of small problems
    * Disable getty processes on jessie through logind config
    * Partition volumes by sectors instead of bytes
      This allows for finer grained control over the partition sizes and gaps
      Add new Sectors unit, enhance Bytes unit, add unit tests for both
    * Don't require qemu for raw volumes, use `truncate` instead
    * Fix #179: Disabling getty processes task fails half the time
    * Split grub and extlinux installs into separate modules
    * Fix extlinux config for squeeze
    * Fix #136: Make extlinux output boot messages to the serial console
    * Extend sed_i to raise Exceptions when the expected amount of replacements is not met
Jonas Bergler:
    * Fixes #145: Fix installation of vbox guest additions.
Tiago Ilieve:
    * Fixes #142: msdos partition type incorrect for swap partition (Linux)

2015-04-23
----------
Tiago Ilieve:
    * Fixes #212: Sparse file is created on the current directory

2014-11-23
----------
Noah Fontes:
    * Add support for enhanced networking on EC2 images

2014-07-12
----------
Tiago Ilieve:
    * Fixes #96: AddBackports is now a common task

2014-07-09
----------
Anders Ingemann:
    * Allow passing data into the manifest
    * Refactor logging setup to be more modular
    * Convert every JSON file to YAML
    * Convert "provider" into provider specific section

2014-07-02
----------
Vladimir Vitkov:
    * Improve grub options to work better with virtual machines

2014-06-30
----------
Tomasz Rybak:
    * Return information about created image

2014-06-22
----------
Victor Marmol:
    * Enable the memory cgroup for the Docker plugin

2014-06-19
----------
Tiago Ilieve:
    * Fixes #94: allow stable/oldstable as release name on manifest
Vladimir Vitkov:
    * Improve ami listing performance

2014-06-07
----------
Tiago Ilieve:
    * Download `gsutil` tarball to workspace instead of working directory
    * Fixes #97: remove raw disk image created by GCE after build

2014-06-06
----------
Ilya Margolin:
    * pip_install plugin

2014-05-23
----------
Tiago Ilieve:
    * Fixes #95: check if the specified APT proxy server can be reached

2014-05-04
----------
Dhananjay Balan:
    * Salt minion installation & configuration plugin
    * Expose debootstrap --include-packages and --exclude-packages options to manifest

2014-05-03
----------
Anders Ingemann:
    * Require hostname setting for vagrant plugin
    * Fixes #14: S3 images can now be bootstrapped outside EC2.
    * Added enable_agent option to puppet plugin

2014-05-02
----------
Tomasz Rybak:
    * Added Google Compute Engine Provider

@@ -143,10 +143,8 @@ guidelines. There however a few exceptions:
* Max line length is 110 chars, not 80.
* Multiple assignments may be aligned with spaces so that the = match
  vertically.
-* Ignore ``E101``: Indent with tabs and align with spaces
* Ignore ``E221 & E241``: Alignment of assignments
* Ignore ``E501``: The max line length is not 80 characters
-* Ignore ``W191``: Indent with tabs not spaces

The codebase can be checked for any violations quite easily, since those rules are already specified in the
`tox <http://tox.readthedocs.org/>`__ configuration file.

@@ -1,5 +1,5 @@
#!/usr/bin/env python
if __name__ == '__main__':
    from bootstrapvz.base.main import main
    main()

@@ -1,5 +1,5 @@
#!/usr/bin/env python
if __name__ == '__main__':
    from bootstrapvz.remote.main import main
    main()

@@ -1,5 +1,5 @@
#!/usr/bin/env python
if __name__ == '__main__':
    from bootstrapvz.remote.server import main
    main()

@@ -13,15 +13,15 @@ via attributes. Here is an example:
.. code-block:: python

    class MapPartitions(Task):
        description = 'Mapping volume partitions'
        phase = phases.volume_preparation
        predecessors = [PartitionVolume]
        successors = [filesystem.Format]

        @classmethod
        def run(cls, info):
            info.volume.partition_map.map(info.volume)

In this case the attributes define that the task at hand should run
after the ``PartitionVolume`` task — i.e. after volume has been

@@ -1,160 +1,160 @@
class BootstrapInformation(object):
    """The BootstrapInformation class holds all information about the bootstrapping process.
    The nature of the attributes of this class are rather diverse.
    Tasks may set their own attributes on this class for later retrieval by another task.
    Information that becomes invalid (e.g. a path to a file that has been deleted) must be removed.
    """

    def __init__(self, manifest=None, debug=False):
        """Instantiates a new bootstrap info object.

        :param Manifest manifest: The manifest
        :param bool debug: Whether debugging is turned on
        """
        # Set the manifest attribute.
        self.manifest = manifest
        self.debug = debug

        # Create a run_id. This id may be used to uniquely identify the currrent bootstrapping process
        import random
        self.run_id = '{id:08x}'.format(id=random.randrange(16 ** 8))

        # Define the path to our workspace
        import os.path
        self.workspace = os.path.join(manifest.bootstrapper['workspace'], self.run_id)

        # Load all the volume information
        from fs import load_volume
        self.volume = load_volume(self.manifest.volume, manifest.system['bootloader'])

        # The default apt mirror
        self.apt_mirror = self.manifest.packages.get('mirror', 'http://httpredir.debian.org/debian/')

        # Create the manifest_vars dictionary
        self.manifest_vars = self.__create_manifest_vars(self.manifest, {'apt_mirror': self.apt_mirror})

        # Keep a list of apt sources,
        # so that tasks may add to that list without having to fiddle with apt source list files.
        from pkg.sourceslist import SourceLists
        self.source_lists = SourceLists(self.manifest_vars)
        # Keep a list of apt preferences
        from pkg.preferenceslist import PreferenceLists
        self.preference_lists = PreferenceLists(self.manifest_vars)
        # Keep a list of packages that should be installed, tasks can add and remove things from this list
        from pkg.packagelist import PackageList
        self.packages = PackageList(self.manifest_vars, self.source_lists)

        # These sets should rarely be used and specify which packages the debootstrap invocation
        # should be called with.
        self.include_packages = set()
        self.exclude_packages = set()

        # Dictionary to specify which commands are required on the host.
        # The keys are commands, while the values are either package names or urls
        # that hint at how a command may be made available.
        self.host_dependencies = {}

        # Path to optional bootstrapping script for modifying the behaviour of debootstrap
        # (will be used instead of e.g. /usr/share/debootstrap/scripts/jessie)
        self.bootstrap_script = None

        # Lists of startup scripts that should be installed and disabled
        self.initd = {'install': {}, 'disable': []}

        # Add a dictionary that can be accessed via info._pluginname for the provider and every plugin
        # Information specific to the module can be added to that 'namespace', this avoids clutter.
        providername = manifest.modules['provider'].__name__.split('.')[-1]
        setattr(self, '_' + providername, {})
        for plugin in manifest.modules['plugins']:
            pluginname = plugin.__name__.split('.')[-1]
            setattr(self, '_' + pluginname, {})

    def __create_manifest_vars(self, manifest, additional_vars={}):
        """Creates the manifest variables dictionary, based on the manifest contents
        and additional data.

        :param Manifest manifest: The Manifest
        :param dict additional_vars: Additional values (they will take precedence and overwrite anything else)
        :return: The manifest_vars dictionary
        :rtype: dict
        """
        def set_manifest_vars(obj, data):
            """Runs through the manifest and creates DictClasses for every key

            :param dict obj: dictionary to set the values on
            :param dict data: dictionary of values to set on the obj
            """
            for key, value in data.iteritems():
                if isinstance(value, dict):
                    obj[key] = DictClass()
                    set_manifest_vars(obj[key], value)
                    continue
                # Lists are not supported
                if not isinstance(value, list):
                    obj[key] = value

        # manifest_vars is a dictionary of all the manifest values,
        # with it users can cross-reference values in the manifest, so that they do not need to be written twice
        manifest_vars = {}
        set_manifest_vars(manifest_vars, manifest.data)

        # Populate the manifest_vars with datetime information
        # and map the datetime variables directly to the dictionary
        from datetime import datetime
        now = datetime.now()
        time_vars = ['%a', '%A', '%b', '%B', '%c', '%d', '%f', '%H',
                     '%I', '%j', '%m', '%M', '%p', '%S', '%U', '%w',
                     '%W', '%x', '%X', '%y', '%Y', '%z', '%Z']
        for key in time_vars:
            manifest_vars[key] = now.strftime(key)

        # Add any additional manifest variables
        # They are added last so that they may override previous variables
        set_manifest_vars(manifest_vars, additional_vars)
        return manifest_vars

    def __getstate__(self):
        from bootstrapvz.remote import supported_classes

        def can_serialize(obj):
            if hasattr(obj, '__class__') and hasattr(obj, '__module__'):
                class_name = obj.__module__ + '.' + obj.__class__.__name__
                return class_name in supported_classes or isinstance(obj, (BaseException, Exception))
            return True

        def filter_state(state):
            if isinstance(state, dict):
                return {key: filter_state(val) for key, val in state.items() if can_serialize(val)}
            if isinstance(state, (set, tuple, list, frozenset)):
                return type(state)(filter_state(val) for val in state if can_serialize(val))
            return state

        state = filter_state(self.__dict__)
        state['__class__'] = self.__module__ + '.' + self.__class__.__name__
        return state

    def __setstate__(self, state):
        for key in state:
            self.__dict__[key] = state[key]


class DictClass(dict):
    """Tiny extension of dict to allow setting and getting keys via attributes
    """
    def __getattr__(self, name):
        return self[name]

    def __setattr__(self, name, value):
        self[name] = value

    def __delattr__(self, name):
        del self[name]

    def __getstate__(self):
        return self.__dict__

    def __setstate__(self, state):
        for key in state:
            self[key] = state[key]

@@ -1,45 +1,45 @@
def load_volume(data, bootloader):
    """Instantiates a volume that corresponds to the data in the manifest

    :param dict data: The 'volume' section from the manifest
    :param str bootloader: Name of the bootloader the system will boot with
    :return: The volume that represents all information pertaining to the volume we bootstrap on.
    :rtype: Volume
    """
    # Map valid partition maps in the manifest and their corresponding classes
    from partitionmaps.gpt import GPTPartitionMap
    from partitionmaps.msdos import MSDOSPartitionMap
    from partitionmaps.none import NoPartitions
    partition_map = {'none': NoPartitions,
                     'gpt': GPTPartitionMap,
                     'msdos': MSDOSPartitionMap,
                     }.get(data['partitions']['type'])

    # Map valid volume backings in the manifest and their corresponding classes
    from bootstrapvz.common.fs.loopbackvolume import LoopbackVolume
    from bootstrapvz.providers.ec2.ebsvolume import EBSVolume
    from bootstrapvz.common.fs.virtualdiskimage import VirtualDiskImage
    from bootstrapvz.common.fs.virtualharddisk import VirtualHardDisk
    from bootstrapvz.common.fs.virtualmachinedisk import VirtualMachineDisk
    from bootstrapvz.common.fs.folder import Folder
    volume_backing = {'raw': LoopbackVolume,
                      's3': LoopbackVolume,
                      'vdi': VirtualDiskImage,
                      'vhd': VirtualHardDisk,
                      'vmdk': VirtualMachineDisk,
                      'ebs': EBSVolume,
                      'folder': Folder
                      }.get(data['backing'])

    # Instantiate the partition map
    from bootstrapvz.common.bytes import Bytes
    # Only operate with a physical sector size of 512 bytes for now,
    # not sure if we can change that for some of the virtual disks
    sector_size = Bytes('512B')
    partition_map = partition_map(data['partitions'], sector_size, bootloader)

    # Create the volume with the partition map as an argument
    return volume_backing(partition_map)

@@ -1,12 +1,12 @@
class VolumeError(Exception):
    """Raised when an error occurs while interacting with the volume
    """
    pass


class PartitionError(Exception):
    """Raised when an error occurs while interacting with the partitions on the volume
    """
    pass

@@ -6,117 +6,117 @@ from ..exceptions import PartitionError
class AbstractPartitionMap(FSMProxy):
    """Abstract representation of a partiton map
    This class is a finite state machine and represents the state of the real partition map
    """

    __metaclass__ = ABCMeta

    # States the partition map can be in
    events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'unmapped'},
              {'name': 'map', 'src': 'unmapped', 'dst': 'mapped'},
              {'name': 'unmap', 'src': 'mapped', 'dst': 'unmapped'},
              ]

    def __init__(self, bootloader):
        """
        :param str bootloader: Name of the bootloader we will use for bootstrapping
        """
        # Create the configuration for the state machine
        cfg = {'initial': 'nonexistent', 'events': self.events, 'callbacks': {}}
        super(AbstractPartitionMap, self).__init__(cfg)

    def is_blocking(self):
        """Returns whether the partition map is blocking volume detach operations

        :rtype: bool
        """
        return self.fsm.current == 'mapped'

    def get_total_size(self):
        """Returns the total size the partitions occupy

        :return: The size of all partitions
        :rtype: Sectors
        """
        # We just need the endpoint of the last partition
        return self.partitions[-1].get_end()

    def create(self, volume):
        """Creates the partition map

        :param Volume volume: The volume to create the partition map on
        """
        self.fsm.create(volume=volume)

    @abstractmethod
    def _before_create(self, event):
        pass

    def map(self, volume):
        """Maps the partition map to device nodes

        :param Volume volume: The volume the partition map resides on
        """
        self.fsm.map(volume=volume)

    def _before_map(self, event):
        """
        :raises PartitionError: In case a partition could not be mapped.
        """
        volume = event.volume
        try:
            # Ask kpartx how the partitions will be mapped before actually attaching them.
            mappings = log_check_call(['kpartx', '-l', volume.device_path])
            import re
            regexp = re.compile('^(?P<name>.+[^\d](?P<p_idx>\d+)) : '
                                '(?P<start_blk>\d) (?P<num_blks>\d+) '
                                '{device_path} (?P<blk_offset>\d+)$'
                                .format(device_path=volume.device_path))
            log_check_call(['kpartx', '-as', volume.device_path])

            import os.path
            # Run through the kpartx output and map the paths to the partitions
            for mapping in mappings:
                match = regexp.match(mapping)
                if match is None:
                    raise PartitionError('Unable to parse kpartx output: ' + mapping)
                partition_path = os.path.join('/dev/mapper', match.group('name'))
                p_idx = int(match.group('p_idx')) - 1
                self.partitions[p_idx].map(partition_path)

            # Check if any partition was not mapped
            for idx, partition in enumerate(self.partitions):
                if partition.fsm.current not in ['mapped', 'formatted']:
                    raise PartitionError('kpartx did not map partition #' + str(partition.get_index()))

        except PartitionError:
            # Revert any mapping and reraise the error
            for partition in self.partitions:
                if partition.fsm.can('unmap'):
                    partition.unmap()
            log_check_call(['kpartx', '-ds', volume.device_path])
            raise

    def unmap(self, volume):
        """Unmaps the partition

        :param Volume volume: The volume to unmap the partition map from
        """
        self.fsm.unmap(volume=volume)

    def _before_unmap(self, event):
        """
        :raises PartitionError: If the a partition cannot be unmapped
        """
        volume = event.volume
        # Run through all partitions before unmapping and make sure they can all be unmapped
        for partition in self.partitions:
            if partition.fsm.cannot('unmap'):
                msg = 'The partition {partition} prevents the unmap procedure'.format(partition=partition)
                raise PartitionError(msg)
        # Actually unmap the partitions
        log_check_call(['kpartx', '-ds', volume.device_path])
        # Call unmap on all partitions
        for partition in self.partitions:
            partition.unmap()

@@ -5,92 +5,92 @@ from bootstrapvz.common.tools import log_check_call
class GPTPartitionMap(AbstractPartitionMap):
    """Represents a GPT partition map
    """

    def __init__(self, data, sector_size, bootloader):
        """
        :param dict data: volume.partitions part of the manifest
        :param int sector_size: Sectorsize of the volume
        :param str bootloader: Name of the bootloader we will use for bootstrapping
        """
        from bootstrapvz.common.sectors import Sectors

        # List of partitions
        self.partitions = []

        # Returns the last partition unless there is none
        def last_partition():
            return self.partitions[-1] if len(self.partitions) > 0 else None

        if bootloader == 'grub':
            # If we are using the grub bootloader we need to create an unformatted partition
            # at the beginning of the map. Its size is 1007kb, which seems to be chosen so that
            # primary gpt + grub = 1024KiB
            # The 1 MiB will be subtracted later on, once we know what the subsequent partition is
            from ..partitions.unformatted import UnformattedPartition
            self.grub_boot = UnformattedPartition(Sectors('1MiB', sector_size), last_partition())
            self.partitions.append(self.grub_boot)

        # Offset all partitions by 1 sector.
        # parted in jessie has changed and no longer allows
        # partitions to be right next to each other.
        partition_gap = Sectors(1, sector_size)

        # The boot and swap partitions are optional
        if 'boot' in data:
            self.boot = GPTPartition(Sectors(data['boot']['size'], sector_size),
                                     data['boot']['filesystem'], data['boot'].get('format_command', None),
                                     'boot', last_partition())
            if self.boot.previous is not None:
                # No need to pad if this is the first partition
                self.boot.pad_start += partition_gap
                self.boot.size -= partition_gap
            self.partitions.append(self.boot)
        if 'swap' in data:
            self.swap = GPTSwapPartition(Sectors(data['swap']['size'], sector_size), last_partition())
            if self.swap.previous is not None:
                self.swap.pad_start += partition_gap
                self.swap.size -= partition_gap
            self.partitions.append(self.swap)
        self.root = GPTPartition(Sectors(data['root']['size'], sector_size),
                                 data['root']['filesystem'], data['root'].get('format_command', None),
                                 'root', last_partition())
        if self.root.previous is not None:
            self.root.pad_start += partition_gap
            self.root.size -= partition_gap
        self.partitions.append(self.root)

        if hasattr(self, 'grub_boot'):
            # Mark the grub partition as a bios_grub partition
            self.grub_boot.flags.append('bios_grub')
            # Subtract the grub partition size from the subsequent partition
            self.partitions[1].size -= self.grub_boot.size
        else:
            # Not using grub, mark the boot partition or root as bootable
            getattr(self, 'boot', self.root).flags.append('legacy_boot')

        # The first and last 34 sectors are reserved for the primary/secondary GPT
        primary_gpt_size = Sectors(34, sector_size)
        self.partitions[0].pad_start += primary_gpt_size
        self.partitions[0].size -= primary_gpt_size

        secondary_gpt_size = Sectors(34, sector_size)
        self.partitions[-1].pad_end += secondary_gpt_size
        self.partitions[-1].size -= secondary_gpt_size

        super(GPTPartitionMap, self).__init__(bootloader)

    def _before_create(self, event):
        """Creates the partition map
        """
        volume = event.volume
        # Disk alignment still plays a role in virtualized environment,
        # but I honestly have no clue as to what best practice is here, so we choose 'none'
        log_check_call(['parted', '--script', '--align', 'none', volume.device_path,
                        '--', 'mklabel', 'gpt'])
        # Create the partitions
        for partition in self.partitions:
            partition.create(volume)

@@ -5,82 +5,82 @@ from bootstrapvz.common.tools import log_check_call
class MSDOSPartitionMap(AbstractPartitionMap):
    """Represents a MS-DOS partition map
    Sometimes also called MBR (but that confuses the hell out of me, so ms-dos it is)
    """

    def __init__(self, data, sector_size, bootloader):
        """
        :param dict data: volume.partitions part of the manifest
        :param int sector_size: Sectorsize of the volume
        :param str bootloader: Name of the bootloader we will use for bootstrapping
        """
        from bootstrapvz.common.sectors import Sectors

        # List of partitions
        self.partitions = []

        # Returns the last partition unless there is none
        def last_partition():
            return self.partitions[-1] if len(self.partitions) > 0 else None

        # The boot and swap partitions are optional
        if 'boot' in data:
            self.boot = MSDOSPartition(Sectors(data['boot']['size'], sector_size),
                                       data['boot']['filesystem'], data['boot'].get('format_command', None),
                                       last_partition())
            self.partitions.append(self.boot)

        # Offset all partitions by 1 sector.
        # parted in jessie has changed and no longer allows
        # partitions to be right next to each other.
        partition_gap = Sectors(1, sector_size)

        if 'swap' in data:
            self.swap = MSDOSSwapPartition(Sectors(data['swap']['size'], sector_size), last_partition())
            if self.swap.previous is not None:
                # No need to pad if this is the first partition
                self.swap.pad_start += partition_gap
                self.swap.size -= partition_gap
            self.partitions.append(self.swap)

        self.root = MSDOSPartition(Sectors(data['root']['size'], sector_size),
                                   data['root']['filesystem'], data['root'].get('format_command', None),
                                   last_partition())
        if self.root.previous is not None:
            self.root.pad_start += partition_gap
            self.root.size -= partition_gap
        self.partitions.append(self.root)

        # Mark boot as the boot partition, or root, if boot does not exist
        getattr(self, 'boot', self.root).flags.append('boot')

        # If we are using the grub bootloader, we will need to add a 2 MB offset
        # at the beginning of the partitionmap and steal it from the first partition.
        # The MBR offset is included in the grub offset, so if we don't use grub
        # we should reduce the size of the first partition and move it by only 512 bytes.
        if bootloader == 'grub':
            mbr_offset = Sectors('2MiB', sector_size)
        else:
            mbr_offset = Sectors('512B', sector_size)

        self.partitions[0].pad_start += mbr_offset
        self.partitions[0].size -= mbr_offset

        # Leave the last sector unformatted
        # parted in jessie thinks that a partition 10 sectors in size
        # goes from sector 0 to sector 9 (instead of 0 to 10)
        self.partitions[-1].pad_end += 1
        self.partitions[-1].size -= 1

        super(MSDOSPartitionMap, self).__init__(bootloader)

    def _before_create(self, event):
        volume = event.volume
        # Disk alignment still plays a role in virtualized environment,
        # but I honestly have no clue as to what best practice is here, so we choose 'none'
        log_check_call(['parted', '--script', '--align', 'none', volume.device_path,
                        '--', 'mklabel', 'msdos'])
        # Create the partitions
        for partition in self.partitions:
            partition.create(volume)

@@ -2,44 +2,44 @@ from ..partitions.single import SinglePartition


class NoPartitions(object):
    """Represents a virtual 'NoPartitions' partition map.
    This virtual partition map exists because it is easier for tasks to
    simply always deal with partition maps and then let the base abstract that away.
    """

    def __init__(self, data, sector_size, bootloader):
        """
        :param dict data: volume.partitions part of the manifest
        :param int sector_size: Sector size of the volume
        :param str bootloader: Name of the bootloader we will use for bootstrapping
        """
        from bootstrapvz.common.sectors import Sectors
        # In the NoPartitions partition map we only have a single 'partition'
        self.root = SinglePartition(Sectors(data['root']['size'], sector_size),
                                    data['root']['filesystem'], data['root'].get('format_command', None))
        self.partitions = [self.root]

    def is_blocking(self):
        """Returns whether the partition map is blocking volume detach operations
        :rtype: bool
        """
        return self.root.fsm.current == 'mounted'

    def get_total_size(self):
        """Returns the total size the partitions occupy
        :return: The size of all the partitions
        :rtype: Sectors
        """
        return self.root.get_end()

    def __getstate__(self):
        state = self.__dict__.copy()
        state['__class__'] = self.__module__ + '.' + self.__class__.__name__
        return state

    def __setstate__(self, state):
        for key in state:
            self.__dict__[key] = state[key]
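For orientation, a sketch of the manifest fragment that NoPartitions expects as its data argument; only the keys 'size', 'filesystem' and the optional 'format_command' come from the code above, the concrete values and the 'type' key are assumptions:

# Hypothetical volume.partitions fragment of a manifest for an unpartitioned volume;
# values are illustrative only.
data = {
    'type': 'none',                      # assumed manifest key, not used by this class itself
    'root': {
        'size': '8GiB',
        'filesystem': 'ext4',
        # 'format_command': ['mkfs.{fs}', '-F', '{device_path}'],  # optional override
    },
}
# NoPartitions(data, sector_size, bootloader) then wraps data['root'] in a single
# SinglePartition, exposed both as .root and as the only element of .partitions.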


@@ -6,124 +6,124 @@ from bootstrapvz.common.fsm_proxy import FSMProxy


class AbstractPartition(FSMProxy):
    """Abstract representation of a partition
    This class is a finite state machine and represents the state of the real partition
    """

    __metaclass__ = ABCMeta

    # Our states
    events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'created'},
              {'name': 'format', 'src': 'created', 'dst': 'formatted'},
              {'name': 'mount', 'src': 'formatted', 'dst': 'mounted'},
              {'name': 'unmount', 'src': 'mounted', 'dst': 'formatted'},
              ]

    def __init__(self, size, filesystem, format_command):
        """
        :param Bytes size: Size of the partition
        :param str filesystem: Filesystem the partition should be formatted with
        :param list format_command: Optional format command, valid variables are fs, device_path and size
        """
        self.size = size
        self.filesystem = filesystem
        self.format_command = format_command
        # Initialize the start & end padding to 0 sectors, may be changed later
        self.pad_start = Sectors(0, size.sector_size)
        self.pad_end = Sectors(0, size.sector_size)
        # Path to the partition
        self.device_path = None
        # Dictionary with mount points as keys and Mount objects as values
        self.mounts = {}
        # Create the configuration for our state machine
        cfg = {'initial': 'nonexistent', 'events': self.events, 'callbacks': {}}
        super(AbstractPartition, self).__init__(cfg)

    def get_uuid(self):
        """Gets the UUID of the partition
        :return: The UUID of the partition
        :rtype: str
        """
        [uuid] = log_check_call(['blkid', '-s', 'UUID', '-o', 'value', self.device_path])
        return uuid

    @abstractmethod
    def get_start(self):
        pass

    def get_end(self):
        """Gets the end of the partition
        :return: The end of the partition
        :rtype: Sectors
        """
        return self.get_start() + self.pad_start + self.size + self.pad_end

    def _before_format(self, e):
        """Formats the partition
        """
        # If there is no explicit format_command defined we simply call mkfs.fstype
        if self.format_command is None:
            format_command = ['mkfs.{fs}', '{device_path}']
        else:
            format_command = self.format_command
        variables = {'fs': self.filesystem,
                     'device_path': self.device_path,
                     'size': self.size,
                     }
        command = map(lambda part: part.format(**variables), format_command)
        # Format the partition
        log_check_call(command)

    def _before_mount(self, e):
        """Mount the partition
        """
        log_check_call(['mount', '--types', self.filesystem, self.device_path, e.destination])
        self.mount_dir = e.destination

    def _after_mount(self, e):
        """Mount any mounts associated with this partition
        """
        # Make sure we mount in ascending order of mountpoint path length
        # This ensures that we don't mount /dev/pts before we mount /dev
        for destination in sorted(self.mounts.iterkeys(), key=len):
            self.mounts[destination].mount(self.mount_dir)

    def _before_unmount(self, e):
        """Unmount any mounts associated with this partition
        """
        # Unmount the mounts in descending order of mountpoint path length
        # You cannot unmount /dev before you have unmounted /dev/pts
        for destination in sorted(self.mounts.iterkeys(), key=len, reverse=True):
            self.mounts[destination].unmount()
        log_check_call(['umount', self.mount_dir])
        del self.mount_dir

    def add_mount(self, source, destination, opts=[]):
        """Associate a mount with this partition
        Automatically mounts it
        :param str,AbstractPartition source: The source of the mount
        :param str destination: The path to the mountpoint
        :param list opts: Any options that should be passed to the mount command
        """
        # Create a new mount object, mount it if the partition is mounted and put it in the mounts dict
        from mount import Mount
        mount = Mount(source, destination, opts)
        if self.fsm.current == 'mounted':
            mount.mount(self.mount_dir)
        self.mounts[destination] = mount

    def remove_mount(self, destination):
        """Remove a mount from this partition
        Automatically unmounts it
        :param str destination: The mountpoint path of the mount that should be removed
        """
        # Unmount the mount if the partition is mounted and delete it from the mounts dict
        # If the mount is already unmounted and the source is a partition, this will raise an exception
        if self.fsm.current == 'mounted':
            self.mounts[destination].unmount()
        del self.mounts[destination]
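A runnable sketch of how _before_format expands the format command template; the filesystem and device path values below are invented for illustration:

# Template expansion as done in _before_format(); values are assumptions.
format_command = ['mkfs.{fs}', '{device_path}']
variables = {'fs': 'ext4',
             'device_path': '/dev/mapper/vda1',
             'size': '8GiB',
             }
command = [part.format(**variables) for part in format_command]
assert command == ['mkfs.ext4', '/dev/mapper/vda1']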


@@ -4,135 +4,135 @@ from bootstrapvz.common.sectors import Sectors


class BasePartition(AbstractPartition):
    """Represents a partition that is actually a partition (and not a virtual one like 'Single')
    """

    # Override the states of the abstract partition
    # A real partition can be mapped and unmapped
    events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'unmapped'},
              {'name': 'map', 'src': 'unmapped', 'dst': 'mapped'},
              {'name': 'format', 'src': 'mapped', 'dst': 'formatted'},
              {'name': 'mount', 'src': 'formatted', 'dst': 'mounted'},
              {'name': 'unmount', 'src': 'mounted', 'dst': 'formatted'},
              {'name': 'unmap', 'src': 'formatted', 'dst': 'unmapped_fmt'},
              {'name': 'map', 'src': 'unmapped_fmt', 'dst': 'formatted'},
              {'name': 'unmap', 'src': 'mapped', 'dst': 'unmapped'},
              ]

    def __init__(self, size, filesystem, format_command, previous):
        """
        :param Bytes size: Size of the partition
        :param str filesystem: Filesystem the partition should be formatted with
        :param list format_command: Optional format command, valid variables are fs, device_path and size
        :param BasePartition previous: The partition that precedes this one
        """
        # By saving the previous partition we have a linked list
        # that partitions can go backwards in to find the first partition.
        self.previous = previous
        # List of flags that parted should put on the partition
        self.flags = []
        # Path to symlink in /dev/disk/by-uuid (manually maintained by this class)
        self.disk_by_uuid_path = None
        super(BasePartition, self).__init__(size, filesystem, format_command)

    def create(self, volume):
        """Creates the partition
        :param Volume volume: The volume to create the partition on
        """
        self.fsm.create(volume=volume)

    def get_index(self):
        """Gets the index of this partition in the partition map
        :return: The index of the partition in the partition map
        :rtype: int
        """
        if self.previous is None:
            # Partitions are 1 indexed
            return 1
        else:
            # Recursive call to the previous partition, walking up the chain...
            return self.previous.get_index() + 1

    def get_start(self):
        """Gets the starting byte of this partition
        :return: The starting byte of this partition
        :rtype: Sectors
        """
        if self.previous is None:
            return Sectors(0, self.size.sector_size)
        else:
            return self.previous.get_end()

    def map(self, device_path):
        """Maps the partition to a device_path
        :param str device_path: The device path this partition should be mapped to
        """
        self.fsm.map(device_path=device_path)

    def link_uuid(self):
        # /lib/udev/rules.d/60-kpartx.rules does not create symlinks in /dev/disk/by-{uuid,label}
        # This patch would fix that: http://www.redhat.com/archives/dm-devel/2013-July/msg00080.html
        # For now we just do the uuid part ourselves.
        # This is mainly to fix a problem in update-grub where /etc/grub.d/10_linux
        # checks if the $GRUB_DEVICE_UUID exists in /dev/disk/by-uuid and falls
        # back to $GRUB_DEVICE if it doesn't.
        # $GRUB_DEVICE is /dev/mapper/xvd{f,g...}# (on ec2), as opposed to /dev/xvda# when booting.
        # Creating the symlink ensures that grub consistently uses
        # $GRUB_DEVICE_UUID when creating /boot/grub/grub.cfg
        self.disk_by_uuid_path = os.path.join('/dev/disk/by-uuid', self.get_uuid())
        if not os.path.exists(self.disk_by_uuid_path):
            os.symlink(self.device_path, self.disk_by_uuid_path)

    def unlink_uuid(self):
        if os.path.isfile(self.disk_by_uuid_path):
            os.remove(self.disk_by_uuid_path)
        self.disk_by_uuid_path = None

    def _before_create(self, e):
        """Creates the partition
        """
        from bootstrapvz.common.tools import log_check_call
        # The create command is fairly simple:
        # - fs_type is the partition filesystem, as defined by parted:
        #   fs-type can be one of "fat16", "fat32", "ext2", "HFS", "linux-swap",
        #   "NTFS", "reiserfs", or "ufs".
        # - start and end are just Bytes objects coerced into strings
        if self.filesystem == 'swap':
            fs_type = 'linux-swap'
        else:
            fs_type = 'ext2'
        create_command = ('mkpart primary {fs_type} {start} {end}'
                          .format(fs_type=fs_type,
                                  start=str(self.get_start() + self.pad_start),
                                  end=str(self.get_end() - self.pad_end)))
        # Create the partition
        log_check_call(['parted', '--script', '--align', 'none', e.volume.device_path,
                        '--', create_command])
        # Set any flags on the partition
        for flag in self.flags:
            log_check_call(['parted', '--script', e.volume.device_path,
                            '--', ('set {idx} {flag} on'
                                   .format(idx=str(self.get_index()), flag=flag))])

    def _before_map(self, e):
        # Set the device path
        self.device_path = e.device_path
        if e.src == 'unmapped_fmt':
            # Only link the uuid if the partition is formatted
            self.link_uuid()

    def _after_format(self, e):
        # We do this after formatting because there otherwise would be no UUID
        self.link_uuid()

    def _before_unmap(self, e):
        # When unmapped, the device_path information becomes invalid, so we delete it
        self.device_path = None
        if e.src == 'formatted':
            self.unlink_uuid()
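A small sketch of the parted sub-command that _before_create assembles; the start and end strings stand in for the str()-coerced Sectors values and are made up for illustration:

# Hypothetical values; in the real code start/end come from Sectors arithmetic.
fs_type = 'ext2'                      # 'linux-swap' would be chosen for a swap partition
start, end = '4MiB', '8GiB'
create_command = ('mkpart primary {fs_type} {start} {end}'
                  .format(fs_type=fs_type, start=start, end=end))
assert create_command == 'mkpart primary ext2 4MiB 8GiB'
# The command is then passed to:
#   parted --script --align none <device> -- mkpart primary ext2 4MiB 8GiB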


@@ -3,24 +3,24 @@ from base import BasePartition


class GPTPartition(BasePartition):
    """Represents a GPT partition
    """

    def __init__(self, size, filesystem, format_command, name, previous):
        """
        :param Bytes size: Size of the partition
        :param str filesystem: Filesystem the partition should be formatted with
        :param list format_command: Optional format command, valid variables are fs, device_path and size
        :param str name: The name of the partition
        :param BasePartition previous: The partition that precedes this one
        """
        self.name = name
        super(GPTPartition, self).__init__(size, filesystem, format_command, previous)

    def _before_create(self, e):
        # Create the partition and then set the name of the partition afterwards
        super(GPTPartition, self)._before_create(e)
        # The partition name only works for gpt; for msdos it becomes the part-type (primary, extended, logical)
        name_command = 'name {idx} {name}'.format(idx=self.get_index(), name=self.name)
        log_check_call(['parted', '--script', e.volume.device_path,
                        '--', name_command])


@@ -3,15 +3,15 @@ from gpt import GPTPartition


class GPTSwapPartition(GPTPartition):
    """Represents a GPT swap partition
    """

    def __init__(self, size, previous):
        """
        :param Bytes size: Size of the partition
        :param BasePartition previous: The partition that precedes this one
        """
        super(GPTSwapPartition, self).__init__(size, 'swap', None, 'swap', previous)

    def _before_format(self, e):
        log_check_call(['mkswap', self.device_path])


@@ -4,46 +4,46 @@ from bootstrapvz.common.tools import log_check_call


class Mount(object):
    """Represents a mount into the partition
    """

    def __init__(self, source, destination, opts):
        """
        :param str,AbstractPartition source: The path from where we mount or a partition
        :param str destination: The path of the mountpoint
        :param list opts: List of options to pass to the mount command
        """
        self.source = source
        self.destination = destination
        self.opts = opts

    def mount(self, prefix):
        """Performs the mount operation or forwards it to another partition
        :param str prefix: Path prefix of the mountpoint
        """
        mount_dir = os.path.join(prefix, self.destination)
        # If the source is another partition, we tell that partition to mount itself
        if isinstance(self.source, AbstractPartition):
            self.source.mount(destination=mount_dir)
        else:
            log_check_call(['mount'] + self.opts + [self.source, mount_dir])
        self.mount_dir = mount_dir

    def unmount(self):
        """Performs the unmount operation or asks the partition to unmount itself
        """
        # If it's a partition, it can unmount itself
        if isinstance(self.source, AbstractPartition):
            self.source.unmount()
        else:
            log_check_call(['umount', self.mount_dir])
        del self.mount_dir

    def __getstate__(self):
        state = self.__dict__.copy()
        state['__class__'] = self.__module__ + '.' + self.__class__.__name__
        return state

    def __setstate__(self, state):
        for key in state:
            self.__dict__[key] = state[key]
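A runnable sketch of the ordering rule that AbstractPartition applies to these Mount objects in _after_mount and _before_unmount; the mountpoints are assumptions:

# Mount in ascending, unmount in descending order of mountpoint path length,
# so 'dev' is handled before 'dev/pts' on mount and after it on unmount.
mountpoints = ['dev/pts', 'proc', 'dev', 'sys']
mount_order = sorted(mountpoints, key=len)                  # ['dev', 'sys', 'proc', 'dev/pts']
unmount_order = sorted(mountpoints, key=len, reverse=True)  # ['dev/pts', 'proc', 'dev', 'sys']
assert mount_order.index('dev') < mount_order.index('dev/pts')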


@@ -2,6 +2,6 @@ from base import BasePartition


class MSDOSPartition(BasePartition):
    """Represents an MS-DOS partition
    """
    pass


@@ -3,15 +3,15 @@ from msdos import MSDOSPartition


class MSDOSSwapPartition(MSDOSPartition):
    """Represents an MS-DOS swap partition
    """

    def __init__(self, size, previous):
        """
        :param Bytes size: Size of the partition
        :param BasePartition previous: The partition that precedes this one
        """
        super(MSDOSSwapPartition, self).__init__(size, 'swap', None, previous)

    def _before_format(self, e):
        log_check_call(['mkswap', self.device_path])


@@ -2,14 +2,14 @@ from abstract import AbstractPartition


class SinglePartition(AbstractPartition):
    """Represents a single virtual partition on an unpartitioned volume
    """

    def get_start(self):
        """Gets the starting byte of this partition
        :return: The starting byte of this partition
        :rtype: Sectors
        """
        from bootstrapvz.common.sectors import Sectors
        return Sectors(0, self.size.sector_size)


@@ -2,19 +2,19 @@ from base import BasePartition


class UnformattedPartition(BasePartition):
    """Represents an unformatted partition
    It cannot be mounted
    """

    # The states for our state machine. It can only be mapped, not mounted.
    events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'unmapped'},
              {'name': 'map', 'src': 'unmapped', 'dst': 'mapped'},
              {'name': 'unmap', 'src': 'mapped', 'dst': 'unmapped'},
              ]

    def __init__(self, size, previous):
        """
        :param Bytes size: Size of the partition
        :param BasePartition previous: The partition that precedes this one
        """
        super(UnformattedPartition, self).__init__(size, None, None, previous)


@@ -6,131 +6,131 @@ from partitionmaps.none import NoPartitions


class Volume(FSMProxy):
    """Represents an abstract volume.
    This class is a finite state machine and represents the state of the real volume.
    """

    __metaclass__ = ABCMeta

    # States this volume can be in
    events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'detached'},
              {'name': 'attach', 'src': 'detached', 'dst': 'attached'},
              {'name': 'link_dm_node', 'src': 'attached', 'dst': 'linked'},
              {'name': 'unlink_dm_node', 'src': 'linked', 'dst': 'attached'},
              {'name': 'detach', 'src': 'attached', 'dst': 'detached'},
              {'name': 'delete', 'src': 'detached', 'dst': 'deleted'},
              ]

    def __init__(self, partition_map):
        """
        :param PartitionMap partition_map: The partition map for the volume
        """
        # Path to the volume
        self.device_path = None
        # The partition map
        self.partition_map = partition_map
        # The size of the volume as reported by the partition map
        self.size = self.partition_map.get_total_size()
        # Before detaching, check that nothing would block the detachment
        callbacks = {'onbeforedetach': self._check_blocking}
        if isinstance(self.partition_map, NoPartitions):
            # When the volume has no partitions, the virtual root partition path is equal to that of the volume
            # Update that path whenever the path to the volume changes
            def set_dev_path(e):
                self.partition_map.root.device_path = self.device_path
            callbacks['onafterattach'] = set_dev_path
            callbacks['onafterdetach'] = set_dev_path  # Will become None
            callbacks['onlink_dm_node'] = set_dev_path
            callbacks['onunlink_dm_node'] = set_dev_path
        # Create the configuration for our finite state machine
        cfg = {'initial': 'nonexistent', 'events': self.events, 'callbacks': callbacks}
        super(Volume, self).__init__(cfg)

    def _after_create(self, e):
        if isinstance(self.partition_map, NoPartitions):
            # When the volume has no partitions, the virtual root partition
            # is essentially created when the volume is created, forward that creation event.
            self.partition_map.root.create()

    def _check_blocking(self, e):
        """Checks whether the volume is blocked
        :raises VolumeError: When the volume is blocked from being detached
        """
        # Only the partition map can block the volume
        if self.partition_map.is_blocking():
            raise VolumeError('The partitionmap prevents the detach procedure')

    def _before_link_dm_node(self, e):
        """Links the volume using the device mapper
        This allows us to create a 'window' into the volume that acts like a volume in itself.
        Mainly it is used to fool grub into thinking that it is working with a real volume,
        rather than a loopback device or a network block device.
        :param _e_obj e: Event object containing arguments to create()
        Keyword arguments to link_dm_node() are:
        :param int logical_start_sector: The sector the volume should start at in the new volume
        :param int start_sector: The offset at which the volume should begin to be mapped in the new volume
        :param int sectors: The number of sectors that should be mapped
        Read more at: http://manpages.debian.org/cgi-bin/man.cgi?query=dmsetup&apropos=0&sektion=0&manpath=Debian+7.0+wheezy&format=html&locale=en
        :raises VolumeError: When a free block device cannot be found.
        """
        import os.path
        from bootstrapvz.common.fs import get_partitions
        # Fetch information from /proc/partitions
        proc_partitions = get_partitions()
        device_name = os.path.basename(self.device_path)
        device_partition = proc_partitions[device_name]
        # The sector the volume should start at in the new volume
        logical_start_sector = getattr(e, 'logical_start_sector', 0)
        # The offset at which the volume should begin to be mapped in the new volume
        start_sector = getattr(e, 'start_sector', 0)
        # The number of sectors that should be mapped
        sectors = getattr(e, 'sectors', int(self.size) - start_sector)
        # This is the table we send to dmsetup, so that it may create a device mapping for us.
        table = ('{log_start_sec} {sectors} linear {major}:{minor} {start_sec}'
                 .format(log_start_sec=logical_start_sector,
                         sectors=sectors,
                         major=device_partition['major'],
                         minor=device_partition['minor'],
                         start_sec=start_sector))
        import string
        import os.path
        # Figure out the device letter and path
        for letter in string.ascii_lowercase:
            dev_name = 'vd' + letter
            dev_path = os.path.join('/dev/mapper', dev_name)
            if not os.path.exists(dev_path):
                self.dm_node_name = dev_name
                self.dm_node_path = dev_path
                break
        if not hasattr(self, 'dm_node_name'):
            raise VolumeError('Unable to find a free block device path for mounting the bootstrap volume')
        # Create the device mapping
        log_check_call(['dmsetup', 'create', self.dm_node_name], table)
        # Update the device_path but remember the old one for when we unlink the volume again
        self.unlinked_device_path = self.device_path
        self.device_path = self.dm_node_path

    def _before_unlink_dm_node(self, e):
        """Unlinks the device mapping
        """
        log_check_call(['dmsetup', 'remove', self.dm_node_name])
        # Reset the device_path
        self.device_path = self.unlinked_device_path
        # Delete the no longer valid information
        del self.unlinked_device_path
        del self.dm_node_name
        del self.dm_node_path
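A sketch of the dmsetup table string that _before_link_dm_node builds; the major/minor numbers and sector count are invented, the real ones come from /proc/partitions and the volume size:

# Hypothetical values for illustration only.
table = ('{log_start_sec} {sectors} linear {major}:{minor} {start_sec}'
         .format(log_start_sec=0, sectors=16777216, major=202, minor=80, start_sec=0))
assert table == '0 16777216 linear 202:80 0'
# The table is then handed to log_check_call(['dmsetup', 'create', dm_node_name], table),
# where dm_node_name is the first free /dev/mapper/vd[a-z] name found above.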


@@ -5,100 +5,100 @@ import logging


def get_console_handler(debug, colorize):
    """Returns a log handler for the console
    The handler color codes the different log levels
    :params bool debug: Whether to set the log level to DEBUG (otherwise INFO)
    :params bool colorize: Whether to colorize console output
    :return: The console logging handler
    """
    # Create a console log handler
    import sys
    console_handler = logging.StreamHandler(sys.stderr)
    if colorize:
        # We want to colorize the output to the console, so we add a formatter
        console_handler.setFormatter(ColorFormatter())
    # Set the log level depending on the debug argument
    if debug:
        console_handler.setLevel(logging.DEBUG)
    else:
        console_handler.setLevel(logging.INFO)
    return console_handler


def get_file_handler(path, debug):
    """Returns a log handler for the given path
    If the parent directory of the logpath does not exist it will be created
    The handler outputs relative timestamps (to when it was created)
    :params str path: The full path to the logfile
    :params bool debug: Whether to set the log level to DEBUG (otherwise INFO)
    :return: The file logging handler
    """
    import os.path
    if not os.path.exists(os.path.dirname(path)):
        os.makedirs(os.path.dirname(path))
    # Create the log handler
    file_handler = logging.FileHandler(path)
    # Absolute timestamps are rather useless when bootstrapping, it's much more interesting
    # to see how long things take, so we log in a relative format instead
    file_handler.setFormatter(FileFormatter('[%(relativeCreated)s] %(levelname)s: %(message)s'))
    # The file log handler always logs everything
    file_handler.setLevel(logging.DEBUG)
    return file_handler


def get_log_filename(manifest_path):
    """Returns the filename of a logfile given a manifest
    The logfile name is constructed from the current timestamp and the basename of the manifest
    :param str manifest_path: The path to the manifest
    :return: The filename of the logfile
    :rtype: str
    """
    import os.path
    from datetime import datetime
    manifest_basename = os.path.basename(manifest_path)
    manifest_name, _ = os.path.splitext(manifest_basename)
    timestamp = datetime.now().strftime('%Y%m%d%H%M%S')
    filename = '{timestamp}_{name}.log'.format(timestamp=timestamp, name=manifest_name)
    return filename


class SourceFormatter(logging.Formatter):
    """Adds a [source] tag to the log message if it exists
    The python docs suggest using a LoggingAdapter, but that would mean we'd
    have to use it everywhere we log something (and only when called remotely),
    which is not feasible.
    """

    def format(self, record):
        extra = getattr(record, 'extra', {})
        if 'source' in extra:
            record.msg = '[{source}] {message}'.format(source=record.extra['source'],
                                                       message=record.msg)
        return super(SourceFormatter, self).format(record)


class ColorFormatter(SourceFormatter):
    """Colorizes log messages depending on the loglevel
    """
    level_colors = {logging.ERROR: 'red',
                    logging.WARNING: 'magenta',
                    logging.INFO: 'blue',
                    }

    def format(self, record):
        # Colorize the message if we have a color for it (DEBUG has no color)
        from termcolor import colored
        record.msg = colored(record.msg, self.level_colors.get(record.levelno, None))
        return super(ColorFormatter, self).format(record)


class FileFormatter(SourceFormatter):
    """Formats log statements for output to file
    Currently this is just a stub
    """

    def format(self, record):
        return super(FileFormatter, self).format(record)
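A minimal sketch of wiring these handlers onto the root logger, mirroring what setup_loggers() in main.py does below; the module path, manifest path and log directory are assumptions:

# Assumes this module lives at bootstrapvz.base.log and that bootstrap-vz is importable.
import logging
import os.path
from bootstrapvz.base import log

root = logging.getLogger()
root.setLevel(logging.NOTSET)
log_filename = log.get_log_filename('manifests/jessie.yml')   # e.g. '20160604113559_jessie.log'
root.addHandler(log.get_file_handler(os.path.join('/var/log/bootstrap-vz', log_filename), debug=True))
root.addHandler(log.get_console_handler(debug=False, colorize=True))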


@@ -3,37 +3,37 @@


def main():
    """Main function for invoking the bootstrap process
    :raises Exception: When the invoking user is not root and --dry-run isn't specified
    """
    # Get the commandline arguments
    opts = get_opts()
    # Require root privileges, except when doing a dry-run where they aren't needed
    import os
    if os.geteuid() != 0 and not opts['--dry-run']:
        raise Exception('This program requires root privileges.')
    # Set up logging
    setup_loggers(opts)
    # Load the manifest
    from manifest import Manifest
    manifest = Manifest(path=opts['MANIFEST'])
    # Everything has been set up, begin the bootstrapping process
    run(manifest,
        debug=opts['--debug'],
        pause_on_error=opts['--pause-on-error'],
        dry_run=opts['--dry-run'])


def get_opts():
    """Creates an argument parser and returns the arguments it has parsed
    """
    import docopt
    usage = """bootstrap-vz

Usage: bootstrap-vz [options] MANIFEST
@@ -46,97 +46,97 @@ Options:
                               Colorize the console output [default: auto]
  --debug                      Print debugging information
  -h, --help                   show this help
"""
    opts = docopt.docopt(usage)
    if opts['--color'] not in ('auto', 'always', 'never'):
        raise docopt.DocoptExit('Value of --color must be one of auto, always or never.')
    return opts


def setup_loggers(opts):
    """Sets up the file and console loggers
    :params dict opts: Dictionary of options from the commandline
    """
    import logging
    root = logging.getLogger()
    root.setLevel(logging.NOTSET)

    import log
    # Log to file unless --log is a single dash
    if opts['--log'] != '-':
        import os.path
        log_filename = log.get_log_filename(opts['MANIFEST'])
        logpath = os.path.join(opts['--log'], log_filename)
        file_handler = log.get_file_handler(path=logpath, debug=True)
        root.addHandler(file_handler)

    if opts['--color'] == 'never':
        colorize = False
    elif opts['--color'] == 'always':
        colorize = True
    else:
        # If --color=auto (default), decide whether to colorize by whether stderr is a tty.
        import os
        colorize = os.isatty(2)
    console_handler = log.get_console_handler(debug=opts['--debug'], colorize=colorize)
    root.addHandler(console_handler)


def run(manifest, debug=False, pause_on_error=False, dry_run=False):
    """Runs the bootstrapping process
    :params Manifest manifest: The manifest to run the bootstrapping process for
    :params bool debug: Whether to turn debugging mode on
    :params bool pause_on_error: Whether to pause on error, before rollback
    :params bool dry_run: Don't actually run the tasks
    """
    # Get the tasklist
    from tasklist import load_tasks
    from tasklist import TaskList
    # 'resolve_tasks' is the name of the function to call on the provider and plugins
    tasks = load_tasks('resolve_tasks', manifest)
    tasklist = TaskList(tasks)

    # Create the bootstrap information object that'll be used throughout the bootstrapping process
    from bootstrapinfo import BootstrapInformation
    bootstrap_info = BootstrapInformation(manifest=manifest, debug=debug)

    import logging
    log = logging.getLogger(__name__)
    try:
        # Run all the tasks the tasklist has gathered
        tasklist.run(info=bootstrap_info, dry_run=dry_run)
        # We're done! :-)
        log.info('Successfully completed bootstrapping')
    except (Exception, KeyboardInterrupt) as e:
        # When an error occurs, log it and begin rollback
        log.exception(e)
        if pause_on_error:
            # The --pause-on-error is useful when the user wants to inspect the volume before rollback
            raw_input('Press Enter to commence rollback')
        log.error('Rolling back')

        # Create a useful little function for the provider and plugins to use,
        # when figuring out what tasks should be added to the rollback list.
        def counter_task(taskset, task, counter):
            """counter_task() adds the third argument to the rollback tasklist
            if the second argument is present in the list of completed tasks
            :param set taskset: The taskset to add the rollback task to
            :param Task task: The task to look for in the completed tasks list
            :param Task counter: The task to add to the rollback tasklist
            """
            if task in tasklist.tasks_completed and counter not in tasklist.tasks_completed:
                taskset.add(counter)

        # Ask the provider and plugins for tasks they'd like to add to the rollback tasklist
        # Any additional arguments beyond the first two are passed directly to the provider and plugins
        rollback_tasks = load_tasks('resolve_rollback_tasks', manifest, tasklist.tasks_completed, counter_task)
        rollback_tasklist = TaskList(rollback_tasks)

        # Run the rollback tasklist
        rollback_tasklist.run(info=bootstrap_info, dry_run=dry_run)
        log.info('Successfully completed rollback')
        raise
    return bootstrap_info
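A runnable sketch of the counter_task() semantics, with strings standing in for the Task classes that the real tasklist uses:

# The rollback counterpart is only scheduled if the original task actually ran
# and the counterpart itself has not run yet. Task names here are illustrative.
tasks_completed = {'volume.Create', 'volume.Attach'}

def counter_task(taskset, task, counter):
    if task in tasks_completed and counter not in tasks_completed:
        taskset.add(counter)

rollback = set()
counter_task(rollback, 'volume.Attach', 'volume.Detach')
counter_task(rollback, 'volume.Snapshot', 'volume.DeleteSnapshot')  # Snapshot never ran
assert rollback == {'volume.Detach'}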


@@ -9,150 +9,150 @@ log = logging.getLogger(__name__)


class Manifest(object):
    """This class holds all the information that providers and plugins need
    to perform the bootstrapping process. All actions that are taken originate from
    here. The manifest shall not be modified after it has been loaded.
    Currently, immutability is not enforced and it would require a fair amount of code
    to enforce it, instead we just rely on tasks behaving properly.
    """

    def __init__(self, path=None, data=None):
        """Initializer: Given a path we load, validate and parse the manifest.
        To create the manifest from dynamic data instead of the contents of a file,
        provide a properly constructed dict as the data argument.
        :param str path: The path to the manifest (ignored, when `data' is provided)
        :param str data: The manifest data, if it is not None, it will be used instead of the contents of `path'
        """
        if path is None and data is None:
            raise ManifestError('`path\' or `data\' must be provided')
        self.path = path

        import os.path
        self.metaschema = load_data(os.path.normpath(os.path.join(os.path.dirname(__file__),
                                                                  'metaschema.json')))

        self.load_data(data)
        self.load_modules()
        self.validate()
        self.parse()

    def load_data(self, data=None):
        """Loads the manifest and performs a basic validation.
        This function reads the manifest and performs some basic validation of
        the manifest itself to ensure that the properties required for initialization are accessible
        (otherwise the user would be presented with some cryptic error messages).
        """
        if data is None:
            self.data = load_data(self.path)
        else:
            self.data = data

        from . import validate_manifest
        # Validate the manifest with the base validation function in __init__
        validate_manifest(self.data, self.schema_validator, self.validation_error)

    def load_modules(self):
        """Loads the provider and the plugins.
        """
        # Get the provider name from the manifest and load the corresponding module
        provider_modname = 'bootstrapvz.providers.' + self.data['provider']['name']
        log.debug('Loading provider ' + self.data['provider']['name'])
        # Create a modules dict that contains the loaded provider and plugins
        import importlib
        self.modules = {'provider': importlib.import_module(provider_modname),
                        'plugins': [],
                        }
        # Run through all the plugins mentioned in the manifest and load them
        from pkg_resources import iter_entry_points
        if 'plugins' in self.data:
            for plugin_name in self.data['plugins'].keys():
                log.debug('Loading plugin ' + plugin_name)
                try:
                    # Internal bootstrap-vz plugins take precedence wrt. plugin name
                    modname = 'bootstrapvz.plugins.' + plugin_name
                    plugin = importlib.import_module(modname)
                except ImportError:
                    entry_points = list(iter_entry_points('bootstrapvz.plugins', name=plugin_name))
                    num_entry_points = len(entry_points)
                    if num_entry_points < 1:
                        raise
                    if num_entry_points > 1:
                        msg = ('Unable to load plugin {name}, '
                               'there are {num} entry points to choose from.'
                               .format(name=plugin_name, num=num_entry_points))
                        raise ImportError(msg)
                    plugin = entry_points[0].load()
                self.modules['plugins'].append(plugin)

    def validate(self):
        """Validates the manifest using the provider and plugin validation functions.
        Plugins are not required to have a validate_manifest function
        """
        # Run the provider validation
        self.modules['provider'].validate_manifest(self.data, self.schema_validator, self.validation_error)
        # Run the validation function for any plugin that has it
        for plugin in self.modules['plugins']:
            validate = getattr(plugin, 'validate_manifest', None)
            if callable(validate):
                validate(self.data, self.schema_validator, self.validation_error)

    def parse(self):
        """Parses the manifest.
        Well... "parsing" is a big word.
        The function really just sets up some convenient attributes so that tasks
        don't have to access information with info.manifest.data['section']
        but can do it with info.manifest.section.
        """
        self.name = self.data['name']
        self.provider = self.data['provider']
        self.bootstrapper = self.data['bootstrapper']
        self.volume = self.data['volume']
        self.system = self.data['system']
        from bootstrapvz.common.releases import get_release
        self.release = get_release(self.system['release'])
        # The packages and plugins sections are not required
        self.packages = self.data['packages'] if 'packages' in self.data else {}
        self.plugins = self.data['plugins'] if 'plugins' in self.data else {}
def schema_validator(self, data, schema_path): def schema_validator(self, data, schema_path):
"""This convenience function is passed around to all the validation functions """This convenience function is passed around to all the validation functions
so that they may run a json-schema validation by giving it the data and a path to the schema. so that they may run a json-schema validation by giving it the data and a path to the schema.
:param dict data: Data to validate (normally the manifest data) :param dict data: Data to validate (normally the manifest data)
:param str schema_path: Path to the json-schema to use for validation :param str schema_path: Path to the json-schema to use for validation
""" """
import jsonschema import jsonschema
schema = load_data(schema_path) schema = load_data(schema_path)
try: try:
jsonschema.validate(schema, self.metaschema) jsonschema.validate(schema, self.metaschema)
jsonschema.validate(data, schema) jsonschema.validate(data, schema)
except jsonschema.ValidationError as e: except jsonschema.ValidationError as e:
self.validation_error(e.message, e.path) self.validation_error(e.message, e.path)
def validation_error(self, message, data_path=None): def validation_error(self, message, data_path=None):
"""This function is passed to all validation functions so that they may """This function is passed to all validation functions so that they may
raise a validation error because a custom validation of the manifest failed. raise a validation error because a custom validation of the manifest failed.
:param str message: Message to user about the error :param str message: Message to user about the error
:param list data_path: A path to the location in the manifest where the error occurred :param list data_path: A path to the location in the manifest where the error occurred
:raises ManifestError: With absolute certainty :raises ManifestError: With absolute certainty
""" """
raise ManifestError(message, self.path, data_path) raise ManifestError(message, self.path, data_path)
def __getstate__(self): def __getstate__(self):
return {'__class__': self.__module__ + '.' + self.__class__.__name__, return {'__class__': self.__module__ + '.' + self.__class__.__name__,
'path': self.path, 'path': self.path,
'metaschema': self.metaschema, 'metaschema': self.metaschema,
'data': self.data} 'data': self.data}
def __setstate__(self, state): def __setstate__(self, state):
self.path = state['path'] self.path = state['path']
self.metaschema = state['metaschema'] self.metaschema = state['metaschema']
self.load_data(state['data']) self.load_data(state['data'])
self.load_modules() self.load_modules()
self.validate() self.validate()
self.parse() self.parse()
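
For orientation, here is a minimal usage sketch (not part of this commit); the module path and the manifest file name are assumptions for the example:

# Illustrative only -- assumes bootstrap-vz is importable and 'manifest.yml' is a valid manifest.
from bootstrapvz.base.manifest import Manifest

manifest = Manifest(path='manifest.yml')
print(manifest.provider['name'])  # the provider section of the manifest
print(manifest.release)           # resolved via bootstrapvz.common.releases
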
View file
@ -1,35 +1,35 @@
class Phase(object):
    """The Phase class represents a phase a task may be in.
    It has no function other than to act as an anchor in the task graph.
    All phases are instantiated in common.phases
    """
    def __init__(self, name, description):
        # The name of the phase
        self.name = name
        # The description of the phase (currently not used anywhere)
        self.description = description

    def pos(self):
        """Gets the position of the phase
        :return: The positional index of the phase in relation to the other phases
        :rtype: int
        """
        from bootstrapvz.common.phases import order
        return next(i for i, phase in enumerate(order) if phase is self)

    def __cmp__(self, other):
        """Compares the phase order in relation to the other phases
        :return int:
        """
        return self.pos() - other.pos()

    def __str__(self):
        """
        :return: String representation of the phase
        :rtype: str
        """
        return self.name
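
A hedged illustration of how the comparison is used; the phase names below are assumed to exist in bootstrapvz.common.phases:

# Sketch only: phases compare by their index in bootstrapvz.common.phases.order.
from bootstrapvz.common import phases

assert phases.preparation < phases.cleaning  # an earlier phase sorts before a later one
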
View file
@ -1,12 +1,12 @@
class PackageError(Exception):
    """Raised when an error occurs while handling the packages list
    """
    pass


class SourceError(Exception):
    """Raised when an error occurs while handling the sources list
    """
    pass
View file
@ -1,108 +1,108 @@
class PackageList(object):
    """Represents a list of packages
    """

    class Remote(object):
        """A remote package with an optional target
        """
        def __init__(self, name, target):
            """
            :param str name: The name of the package
            :param str target: The name of the target release
            """
            self.name = name
            self.target = target

        def __str__(self):
            """Converts the package into something that apt-get install can parse
            :rtype: str
            """
            if self.target is None:
                return self.name
            else:
                return self.name + '/' + self.target

    class Local(object):
        """A local package
        """
        def __init__(self, path):
            """
            :param str path: The path to the local package
            """
            self.path = path

        def __str__(self):
            """
            :return: The path to the local package
            :rtype: str
            """
            return self.path

    def __init__(self, manifest_vars, source_lists):
        """
        :param dict manifest_vars: The manifest variables
        :param SourceLists source_lists: The sources lists for apt
        """
        self.manifest_vars = manifest_vars
        self.source_lists = source_lists
        # The default_target is the release we are bootstrapping
        self.default_target = '{system.release}'.format(**self.manifest_vars)
        # The list of packages that should be installed, this is not a set.
        # We want to preserve the order in which the packages were added so that local
        # packages may be installed in the correct order.
        self.install = []
        # A function that filters the install list and only returns remote packages
        self.remote = lambda: filter(lambda x: isinstance(x, self.Remote), self.install)

    def add(self, name, target=None):
        """Adds a package to the install list
        :param str name: The name of the package to install, may contain manifest vars references
        :param str target: The name of the target release for the package, may contain manifest vars references
        :raises PackageError: When a package of the same name but with a different target has already been added.
        :raises PackageError: When the specified target release could not be found.
        """
        from exceptions import PackageError
        name = name.format(**self.manifest_vars)
        if target is not None:
            target = target.format(**self.manifest_vars)
        # Check if the package has already been added.
        # If so, make sure it's the same target and raise a PackageError otherwise
        package = next((pkg for pkg in self.remote() if pkg.name == name), None)
        if package is not None:
            # It's the same target if the target names match or one of the targets is None
            # and the other is the default target.
            same_target = package.target == target
            same_target = same_target or package.target is None and target == self.default_target
            same_target = same_target or package.target == self.default_target and target is None
            if not same_target:
                msg = ('The package {name} was already added to the package list, '
                       'but with target release `{target}\' instead of `{add_target}\''
                       .format(name=name, target=package.target, add_target=target))
                raise PackageError(msg)
            # The package has already been added, skip the checks below
            return
        # Check if the target exists (unless it's the default target) in the sources list,
        # raise a PackageError if it does not
        if target not in (None, self.default_target) and not self.source_lists.target_exists(target):
            msg = ('The target release {target} was not found in the sources list').format(target=target)
            raise PackageError(msg)
        # Note that we maintain the target value even if it is None.
        # This allows us to preserve the semantics of the default target when calling apt-get install.
        # Why? Try installing nfs-client/wheezy, you can't. It's a virtual package for which you cannot define
        # a target release. Only `apt-get install nfs-client` works.
        self.install.append(self.Remote(name, target))

    def add_local(self, package_path):
        """Adds a local package to the installation list
        :param str package_path: Path to the local package, may contain manifest vars references
        """
        package_path = package_path.format(**self.manifest_vars)
        self.install.append(self.Local(package_path))
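
To show how the list is meant to be used, a hedged sketch follows; the manifest variables, the sources and the import paths are assumptions made up for the example:

# Illustrative sketch only -- not part of the commit.
from bootstrapvz.base.pkg.packagelist import PackageList
from bootstrapvz.base.pkg.sourceslist import SourceLists


class _Vars(object):  # stand-in for the manifest variable objects
    def __init__(self, release):
        self.release = release

manifest_vars = {'system': _Vars('jessie')}
source_lists = SourceLists(manifest_vars)
source_lists.add('backports', 'deb http://httpredir.debian.org/debian {system.release}-backports main')

packages = PackageList(manifest_vars, source_lists)
packages.add('openssh-server')                               # installed from the default target
packages.add('tzdata', target='{system.release}-backports')  # pinned to the backports source above
print(' '.join(map(str, packages.remote())))                 # openssh-server tzdata/jessie-backports
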
View file
@ -1,42 +1,42 @@
class PreferenceLists(object):
    """Represents a list of preferences lists for apt
    """
    def __init__(self, manifest_vars):
        """
        :param dict manifest_vars: The manifest variables
        """
        # A dictionary with the name of the file in preferences.d as the key.
        # The values are lists of Preference objects
        self.preferences = {}
        # Save the manifest variables, we need them later on
        self.manifest_vars = manifest_vars

    def add(self, name, preferences):
        """Adds a preference to the apt preferences list
        :param str name: Name of the file in preferences.list.d, may contain manifest vars references
        :param object preferences: The preferences
        """
        name = name.format(**self.manifest_vars)
        self.preferences[name] = [Preference(p) for p in preferences]


class Preference(object):
    """Represents a single preference
    """
    def __init__(self, preference):
        """
        :param dict preference: An apt preference dictionary
        """
        self.preference = preference

    def __str__(self):
        """Convert the object into a preference block
        :rtype: str
        """
        return "Package: {package}\nPin: {pin}\nPin-Priority: {pin-priority}\n".format(**self.preference)
View file
@ -1,95 +1,95 @@
class SourceLists(object):
    """Represents a list of sources lists for apt
    """
    def __init__(self, manifest_vars):
        """
        :param dict manifest_vars: The manifest variables
        """
        # A dictionary with the name of the file in sources.list.d as the key.
        # The values are lists of Source objects
        self.sources = {}
        # Save the manifest variables, we need them later on
        self.manifest_vars = manifest_vars

    def add(self, name, line):
        """Adds a source to the apt sources list
        :param str name: Name of the file in sources.list.d, may contain manifest vars references
        :param str line: The line for the source file, may contain manifest vars references
        """
        name = name.format(**self.manifest_vars)
        line = line.format(**self.manifest_vars)
        if name not in self.sources:
            self.sources[name] = []
        self.sources[name].append(Source(line))

    def target_exists(self, target):
        """Checks whether the target exists in the sources list
        :param str target: Name of the target to check for, may contain manifest vars references
        :return: Whether the target exists
        :rtype: bool
        """
        target = target.format(**self.manifest_vars)
        # Run through all the sources and return True if the target exists
        for lines in self.sources.itervalues():
            if target in (source.distribution for source in lines):
                return True
        return False


class Source(object):
    """Represents a single source line
    """
    def __init__(self, line):
        """
        :param str line: An apt source line
        :raises SourceError: When the source line cannot be parsed
        """
        # Parse the source line and populate the class attributes with it
        # The format is taken from `man sources.list`
        # or: http://manpages.debian.org/cgi-bin/man.cgi?sektion=5&query=sources.list&apropos=0&manpath=sid&locale=en
        import re
        regexp = re.compile('^(?P<type>deb|deb-src)\s+'
                            '(\[\s*(?P<options>.+\S)?\s*\]\s+)?'
                            '(?P<uri>\S+)\s+'
                            '(?P<distribution>\S+)'
                            '(\s+(?P<components>.+\S))?\s*$')
        match = regexp.match(line)
        if match is None:
            from exceptions import SourceError
            raise SourceError('Unable to parse source line: ' + line)
        match = match.groupdict()
        self.type = match['type']
        self.options = []
        if match['options'] is not None:
            self.options = re.sub(' +', ' ', match['options']).split(' ')
        self.uri = match['uri']
        self.distribution = match['distribution']
        self.components = []
        if match['components'] is not None:
            self.components = re.sub(' +', ' ', match['components']).split(' ')

    def __str__(self):
        """Convert the object into a source line
        This is pretty much the reverse of what we're doing in the initialization function.
        :rtype: str
        """
        options = ''
        if len(self.options) > 0:
            options = ' [{options}]'.format(options=' '.join(self.options))
        components = ''
        if len(self.components) > 0:
            components = ' {components}'.format(components=' '.join(self.components))
        return ('{type}{options} {uri} {distribution}{components}'
                .format(type=self.type, options=options,
                        uri=self.uri, distribution=self.distribution,
                        components=components))
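
A round-trip example for the parser above (the mirror URL is arbitrary):

# Illustrative only.
src = Source('deb [arch=amd64] http://httpredir.debian.org/debian jessie main contrib')
print(src.distribution)  # jessie
print(src.components)    # ['main', 'contrib']
print(str(src))          # deb [arch=amd64] http://httpredir.debian.org/debian jessie main contrib
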
View file
@ -1,36 +1,36 @@
class Task(object):
    """The task class represents a task that can be run.
    It is merely a wrapper for the run function and should never be instantiated.
    """
    # The phase this task is located in.
    phase = None
    # List of tasks that should run before this task is run
    predecessors = []
    # List of tasks that should run after this task has run
    successors = []

    class __metaclass__(type):
        """Metaclass to control how the class is coerced into a string
        """
        def __repr__(cls):
            """
            :return str: The full module path to the Task
            """
            return cls.__module__ + '.' + cls.__name__

        def __str__(cls):
            """
            :return: The full module path to the Task
            :rtype: str
            """
            return repr(cls)

    @classmethod
    def run(cls, info):
        """The run function, all work is done inside this function
        :param BootstrapInformation info: The bootstrap info object.
        """
        pass
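
For context, a hedged sketch of how a task is typically declared; the import paths, the phase name and the package added here are assumptions and not part of this commit:

# Illustrative sketch only.
from bootstrapvz.base import Task
from bootstrapvz.common import phases


class AddEditor(Task):
    description = 'Adding an editor to the package list'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('vim')
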
View file
@ -7,273 +7,273 @@ log = logging.getLogger(__name__)
class TaskList(object):
    """The tasklist class aggregates all tasks that should be run
    and orders them according to their dependencies.
    """
    def __init__(self, tasks):
        self.tasks = tasks
        self.tasks_completed = []

    def run(self, info, dry_run=False):
        """Converts the taskgraph into a list and runs all tasks in that list
        :param dict info: The bootstrap information object
        :param bool dry_run: Whether to actually run the tasks or simply step through them
        """
        # Get a hold of every task we can find, so that we can topologically sort
        # all tasks, rather than just the subset we are going to run.
        from bootstrapvz.common import tasks as common_tasks
        modules = [common_tasks, info.manifest.modules['provider']] + info.manifest.modules['plugins']
        all_tasks = set(get_all_tasks(modules))
        # Create a list for us to run
        task_list = create_list(self.tasks, all_tasks)
        # Output the tasklist
        log.debug('Tasklist:\n\t' + ('\n\t'.join(map(repr, task_list))))

        for task in task_list:
            # Tasks are not required to have a description
            if hasattr(task, 'description'):
                log.info(task.description)
            else:
                # If there is no description, simply coerce the task into a string and print its name
                log.info('Running ' + str(task))
            if not dry_run:
                # Run the task
                task.run(info)
            # Remember which tasks have been run for later use (e.g. when rolling back, because of an error)
            self.tasks_completed.append(task)


def load_tasks(function, manifest, *args):
    """Calls ``function`` on the provider and all plugins that have been loaded by the manifest.
    Any additional arguments are passed directly to ``function``.
    The function that is called shall accept the taskset as its first argument and the manifest
    as its second argument.
    :param str function: Name of the function to call
    :param Manifest manifest: The manifest
    :param list args: Additional arguments that should be passed to the function that is called
    """
    tasks = set()
    # Call 'function' on the provider
    getattr(manifest.modules['provider'], function)(tasks, manifest, *args)
    for plugin in manifest.modules['plugins']:
        # Plugins are not required to have whatever function we call
        fn = getattr(plugin, function, None)
        if callable(fn):
            fn(tasks, manifest, *args)
    return tasks


def create_list(taskset, all_tasks):
    """Creates a list of all the tasks that should be run.
    """
    from bootstrapvz.common.phases import order
    # Make sure all_tasks is a superset of the resolved taskset
    if not all_tasks >= taskset:
        msg = ('bootstrap-vz generated a list of all available tasks. '
               'That list is not a superset of the tasks required for bootstrapping. '
               'The tasks that were not found are: {tasks} '
               '(This is an error in the code and not the manifest, please report this issue.)'
               .format(tasks=', '.join(map(str, taskset - all_tasks)))
               )
        raise TaskListError(msg)
    # Create a graph over all tasks by creating a map of each task's successors
    graph = {}
    for task in all_tasks:
        # Do a sanity check first
        check_ordering(task)
        successors = set()
        # Add all successors mentioned in the task
        successors.update(task.successors)
        # Add all tasks that mention this task as a predecessor
        successors.update(filter(lambda succ: task in succ.predecessors, all_tasks))
        # Create a list of phases that succeed the phase of this task
        succeeding_phases = order[order.index(task.phase) + 1:]
        # Add all tasks that occur in the above mentioned succeeding phases
        successors.update(filter(lambda succ: succ.phase in succeeding_phases, all_tasks))
        # Map the successors to the task
        graph[task] = successors

    # Use the strongly connected components algorithm to check for cycles in our task graph
    components = strongly_connected_components(graph)
    cycles_found = 0
    for component in components:
        # A single node is also a strongly connected component, but hardly a cycle, so we filter those out
        if len(component) > 1:
            cycles_found += 1
            log.debug('Cycle: ' + ', '.join(map(repr, component)))
    if cycles_found > 0:
        msg = ('{num} cycles were found in the tasklist, '
               'consult the logfile for more information.'.format(num=cycles_found))
        raise TaskListError(msg)

    # Run a topological sort on the graph, returning an ordered list
    sorted_tasks = topological_sort(graph)
    # Filter out any tasks not in the tasklist
    # We want to maintain ordering, so we don't use set intersection
    sorted_tasks = filter(lambda task: task in taskset, sorted_tasks)
    return sorted_tasks


def get_all_tasks(modules):
    """Gets a list of all task classes in the package
    :return: A list of all tasks in the package
    :rtype: list
    """
    import os.path
    # Get generators that return all classes in a module
    generators = []
    for module in modules:
        module_path = os.path.dirname(module.__file__)
        module_prefix = module.__name__ + '.'
        generators.append(get_all_classes(module_path, module_prefix))
    import itertools
    classes = itertools.chain(*generators)

    # Function to check whether a class is a task (excluding the superclass Task)
    def is_task(obj):
        from task import Task
        return issubclass(obj, Task) and obj is not Task
    return filter(is_task, classes)  # Only return classes that are tasks


def get_all_classes(path=None, prefix='', excludes=[]):
    """Given a path to a package, this function retrieves all the classes in it
    :param str path: Path to the package
    :param str prefix: Name of the package followed by a dot
    :param list excludes: List of str matching module names that should be ignored
    :return: A generator that yields classes
    :rtype: generator
    :raises Exception: If a module cannot be inspected.
    """
    import pkgutil
    import importlib
    import inspect

    def walk_error(module_name):
        if not any(map(lambda excl: module_name.startswith(excl), excludes)):
            raise TaskListError('Unable to inspect module ' + module_name)
    walker = pkgutil.walk_packages([path], prefix, walk_error)
    for _, module_name, _ in walker:
        if any(map(lambda excl: module_name.startswith(excl), excludes)):
            continue
        module = importlib.import_module(module_name)
        classes = inspect.getmembers(module, inspect.isclass)
        for class_name, obj in classes:
            # We only want classes that are defined in the module, and not imported ones
            if obj.__module__ == module_name:
                yield obj
def check_ordering(task):
    """Checks the ordering of a task in relation to other tasks and their phases.
    This function checks for a subset of what the strongly connected components algorithm does,
    but can deliver a more precise error message, namely that there is a conflict between
    what a task has specified as its predecessors or successors and in which phase it is placed.
    :param Task task: The task to check the ordering for
    :raises TaskListError: If there is a conflict between task precedence and phase precedence
    """
    for successor in task.successors:
        # Run through all successors and throw an error if the phase of the task
        # lies after the phase of a successor, log a warning if it lies before.
        if task.phase > successor.phase:
            msg = ("The task {task} is specified as running before {other}, "
                   "but its phase '{phase}' lies after the phase '{other_phase}'"
                   .format(task=task, other=successor, phase=task.phase, other_phase=successor.phase))
            raise TaskListError(msg)
        if task.phase < successor.phase:
            log.warn("The task {task} is specified as running before {other} "
                     "although its phase '{phase}' already lies before the phase '{other_phase}' "
                     "(or the task has been placed in the wrong phase)"
                     .format(task=task, other=successor, phase=task.phase, other_phase=successor.phase))
    for predecessor in task.predecessors:
        # Run through all predecessors and throw an error if the phase of the task
        # lies before the phase of a predecessor, log a warning if it lies after.
        if task.phase < predecessor.phase:
            msg = ("The task {task} is specified as running after {other}, "
                   "but its phase '{phase}' lies before the phase '{other_phase}'"
                   .format(task=task, other=predecessor, phase=task.phase, other_phase=predecessor.phase))
            raise TaskListError(msg)
        if task.phase > predecessor.phase:
            log.warn("The task {task} is specified as running after {other} "
                     "although its phase '{phase}' already lies after the phase '{other_phase}' "
                     "(or the task has been placed in the wrong phase)"
                     .format(task=task, other=predecessor, phase=task.phase, other_phase=predecessor.phase))


def strongly_connected_components(graph):
    """Find the strongly connected components in a graph using Tarjan's algorithm.
    Source: http://www.logarithmic.net/pfh-files/blog/01208083168/sort.py
    :param dict graph: mapping of tasks to lists of successor tasks
    :return: List of tuples that are strongly connected components
    :rtype: list
    """
    result = []
    stack = []
    low = {}

    def visit(node):
        if node in low:
            return
        num = len(low)
        low[node] = num
        stack_pos = len(stack)
        stack.append(node)
        for successor in graph[node]:
            visit(successor)
            low[node] = min(low[node], low[successor])
        if num == low[node]:
            component = tuple(stack[stack_pos:])
            del stack[stack_pos:]
            result.append(component)
            for item in component:
                low[item] = len(graph)
    for node in graph:
        visit(node)
    return result


def topological_sort(graph):
    """Runs a topological sort on a graph.
    Source: http://www.logarithmic.net/pfh-files/blog/01208083168/sort.py
    :param dict graph: mapping of tasks to lists of successor tasks
    :return: A list of all tasks in the graph sorted according to their dependencies
    :rtype: list
    """
    count = {}
    for node in graph:
        count[node] = 0
    for node in graph:
        for successor in graph[node]:
            count[successor] += 1

    ready = [node for node in graph if count[node] == 0]
    result = []
    while ready:
        node = ready.pop(-1)
        result.append(node)
        for successor in graph[node]:
            count[successor] -= 1
            if count[successor] == 0:
                ready.append(successor)
    return result
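
A toy illustration of the two graph helpers above, using plain strings in place of task classes:

# Illustrative only: strings stand in for tasks.
graph = {'a': ['b', 'c'],
         'b': ['d'],
         'c': ['d'],
         'd': []}
print(topological_sort(graph))                                  # e.g. ['a', 'c', 'b', 'd']
print(strongly_connected_components({'x': ['y'], 'y': ['x']}))  # one 2-node component -- a cycle
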
View file
@ -2,158 +2,158 @@ from exceptions import UnitError
def onlybytes(msg):
    def decorator(func):
        def check_other(self, other):
            if not isinstance(other, Bytes):
                raise UnitError(msg)
            return func(self, other)
        return check_other
    return decorator


class Bytes(object):

    units = {'B': 1,
             'KiB': 1024,
             'MiB': 1024 * 1024,
             'GiB': 1024 * 1024 * 1024,
             'TiB': 1024 * 1024 * 1024 * 1024,
             }

    def __init__(self, qty):
        if isinstance(qty, (int, long)):
            self.qty = qty
        else:
            self.qty = Bytes.parse(qty)

    @staticmethod
    def parse(qty_str):
        import re
        regex = re.compile('^(?P<qty>\d+)(?P<unit>[KMGT]i?B|B)$')
        parsed = regex.match(qty_str)
        if parsed is None:
            raise UnitError('Unable to parse ' + qty_str)
        qty = int(parsed.group('qty'))
        unit = parsed.group('unit')
        if unit[0] in 'KMGT':
            unit = unit[0] + 'iB'
        byte_qty = qty * Bytes.units[unit]
        return byte_qty

    def get_qty_in(self, unit):
        if unit[0] in 'KMGT':
            unit = unit[0] + 'iB'
        if unit not in Bytes.units:
            raise UnitError('Unrecognized unit: ' + unit)
        if self.qty % Bytes.units[unit] != 0:
            msg = 'Unable to convert {qty} bytes to a whole number in {unit}'.format(qty=self.qty, unit=unit)
            raise UnitError(msg)
        return self.qty / Bytes.units[unit]

    def __repr__(self):
        converted = str(self.get_qty_in('B')) + 'B'
        if self.qty == 0:
            return converted
        for unit in ['TiB', 'GiB', 'MiB', 'KiB']:
            try:
                converted = str(self.get_qty_in(unit)) + unit
                break
            except UnitError:
                pass
        return converted

    def __str__(self):
        return self.__repr__()

    def __int__(self):
        return self.qty

    def __long__(self):
        return self.qty

    @onlybytes('Can only compare Bytes to Bytes')
    def __lt__(self, other):
        return self.qty < other.qty

    @onlybytes('Can only compare Bytes to Bytes')
    def __le__(self, other):
        return self.qty <= other.qty

    @onlybytes('Can only compare Bytes to Bytes')
    def __eq__(self, other):
        return self.qty == other.qty

    @onlybytes('Can only compare Bytes to Bytes')
    def __ne__(self, other):
        return self.qty != other.qty

    @onlybytes('Can only compare Bytes to Bytes')
    def __ge__(self, other):
        return self.qty >= other.qty

    @onlybytes('Can only compare Bytes to Bytes')
    def __gt__(self, other):
        return self.qty > other.qty

    @onlybytes('Can only add Bytes to Bytes')
    def __add__(self, other):
        return Bytes(self.qty + other.qty)

    @onlybytes('Can only add Bytes to Bytes')
    def __iadd__(self, other):
        self.qty += other.qty
        return self

    @onlybytes('Can only subtract Bytes from Bytes')
    def __sub__(self, other):
        return Bytes(self.qty - other.qty)

    @onlybytes('Can only subtract Bytes from Bytes')
    def __isub__(self, other):
        self.qty -= other.qty
        return self

    def __mul__(self, other):
        if not isinstance(other, (int, long)):
            raise UnitError('Can only multiply Bytes with integers')
        return Bytes(self.qty * other)

    def __imul__(self, other):
        if not isinstance(other, (int, long)):
            raise UnitError('Can only multiply Bytes with integers')
        self.qty *= other
        return self

    def __div__(self, other):
        if isinstance(other, Bytes):
            return self.qty / other.qty
        if not isinstance(other, (int, long)):
            raise UnitError('Can only divide Bytes with integers or Bytes')
        return Bytes(self.qty / other)

    def __idiv__(self, other):
        if isinstance(other, Bytes):
            self.qty /= other.qty
        else:
            if not isinstance(other, (int, long)):
                raise UnitError('Can only divide Bytes with integers or Bytes')
            self.qty /= other
        return self

    @onlybytes('Can only take modulus of Bytes with Bytes')
    def __mod__(self, other):
        return Bytes(self.qty % other.qty)

    @onlybytes('Can only take modulus of Bytes with Bytes')
    def __imod__(self, other):
        self.qty %= other.qty
        return self

    def __getstate__(self):
        return {'__class__': self.__module__ + '.' + self.__class__.__name__,
                'qty': self.qty,
                }

    def __setstate__(self, state):
        self.qty = state['qty']
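
A brief example of the arithmetic the class supports (Python 2, like the code above):

# Illustrative only.
size = Bytes('1GiB') + Bytes('512MiB')
print(size)                             # 1536MiB
print(size.get_qty_in('KiB'))           # 1572864
print(Bytes('1GiB') / Bytes('256MiB'))  # 4
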
View file
@ -1,38 +1,38 @@
class ManifestError(Exception):
    def __init__(self, message, manifest_path=None, data_path=None):
        super(ManifestError, self).__init__(message)
        self.message = message
        self.manifest_path = manifest_path
        self.data_path = data_path
        self.args = (self.message, self.manifest_path, self.data_path)

    def __str__(self):
        if self.data_path is not None:
            path = '.'.join(map(str, self.data_path))
            return ('{msg}\n File path: {file}\n Data path: {datapath}'
                    .format(msg=self.message, file=self.manifest_path, datapath=path))
        return '{file}: {msg}'.format(msg=self.message, file=self.manifest_path)


class TaskListError(Exception):
    def __init__(self, message):
        super(TaskListError, self).__init__(message)
        self.message = message
        self.args = (self.message,)

    def __str__(self):
        return 'Error in tasklist: ' + self.message


class TaskError(Exception):
    pass


class UnexpectedNumMatchesError(Exception):
    pass


class UnitError(Exception):
    pass
View file
@ -2,32 +2,32 @@ from contextlib import contextmanager
def get_partitions():
    import re
    regexp = re.compile('^ *(?P<major>\d+) *(?P<minor>\d+) *(?P<num_blks>\d+) (?P<dev_name>\S+)$')
    matches = {}
    path = '/proc/partitions'
    with open(path) as partitions:
        next(partitions)
        next(partitions)
        for line in partitions:
            match = regexp.match(line)
            if match is None:
                raise RuntimeError('Unable to parse {line} in {path}'.format(line=line, path=path))
            matches[match.group('dev_name')] = match.groupdict()
    return matches


@contextmanager
def unmounted(volume):
    from bootstrapvz.base.fs.partitionmaps.none import NoPartitions
    p_map = volume.partition_map
    root_dir = p_map.root.mount_dir
    p_map.root.unmount()
    if not isinstance(p_map, NoPartitions):
        p_map.unmap(volume)
        yield
        p_map.map(volume)
    else:
        yield
    p_map.root.mount(destination=root_dir)
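
A hedged sketch of how the context manager is meant to be used; `volume` is assumed to be an attached bootstrap-vz volume and the import path is inferred from this file's location:

# Illustrative only.
from bootstrapvz.common.fs import unmounted

with unmounted(volume):
    # Inside the block the root filesystem is unmounted (and the partition map
    # unmapped), so tools that need exclusive access to the device can run safely.
    pass
# On exit the partition map is re-mapped and the root filesystem re-mounted.
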
View file
@ -3,22 +3,22 @@ from bootstrapvz.base.fs.volume import Volume
class Folder(Volume):

    # Override the states this volume can be in (i.e. we can't "format" or "attach" it)
    events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'attached'},
              {'name': 'delete', 'src': 'attached', 'dst': 'deleted'},
              ]

    extension = 'chroot'

    def create(self, path):
        self.fsm.create(path=path)

    def _before_create(self, e):
        import os
        self.path = e.path
        os.mkdir(self.path)

    def _before_delete(self, e):
        from shutil import rmtree
        rmtree(self.path)
        del self.path


@@ -4,26 +4,26 @@ from ..tools import log_check_call
class LoopbackVolume(Volume):
    extension = 'raw'
    def create(self, image_path):
        self.fsm.create(image_path=image_path)
    def _before_create(self, e):
        self.image_path = e.image_path
        size_opt = '--size={mib}M'.format(mib=self.size.bytes.get_qty_in('MiB'))
        log_check_call(['truncate', size_opt, self.image_path])
    def _before_attach(self, e):
        [self.loop_device_path] = log_check_call(['losetup', '--show', '--find', self.image_path])
        self.device_path = self.loop_device_path
    def _before_detach(self, e):
        log_check_call(['losetup', '--detach', self.loop_device_path])
        del self.loop_device_path
        self.device_path = None
    def _before_delete(self, e):
        from os import remove
        remove(self.image_path)
        del self.image_path


@@ -6,78 +6,78 @@ from . import get_partitions
class QEMUVolume(LoopbackVolume):
    def _before_create(self, e):
        self.image_path = e.image_path
        vol_size = str(self.size.bytes.get_qty_in('MiB')) + 'M'
        log_check_call(['qemu-img', 'create', '-f', self.qemu_format, self.image_path, vol_size])
    def _check_nbd_module(self):
        from bootstrapvz.base.fs.partitionmaps.none import NoPartitions
        if isinstance(self.partition_map, NoPartitions):
            if not self._module_loaded('nbd'):
                msg = ('The kernel module `nbd\' must be loaded '
                       '(`modprobe nbd\') to attach .{extension} images'
                       .format(extension=self.extension))
                raise VolumeError(msg)
        else:
            num_partitions = len(self.partition_map.partitions)
            if not self._module_loaded('nbd'):
                msg = ('The kernel module `nbd\' must be loaded '
                       '(run `modprobe nbd max_part={num_partitions}\') '
                       'to attach .{extension} images'
                       .format(num_partitions=num_partitions, extension=self.extension))
                raise VolumeError(msg)
            nbd_max_part = int(self._module_param('nbd', 'max_part'))
            if nbd_max_part < num_partitions:
                # Found here: http://bethesignal.org/blog/2011/01/05/how-to-mount-virtualbox-vdi-image/
                msg = ('The kernel module `nbd\' was loaded with the max_part '
                       'parameter set to {max_part}, which is below '
                       'the amount of partitions for this volume ({num_partitions}). '
                       'Reload the nbd kernel module with max_part set to at least {num_partitions} '
                       '(`rmmod nbd; modprobe nbd max_part={num_partitions}\').'
                       .format(max_part=nbd_max_part, num_partitions=num_partitions))
                raise VolumeError(msg)
    def _before_attach(self, e):
        self._check_nbd_module()
        self.loop_device_path = self._find_free_nbd_device()
        log_check_call(['qemu-nbd', '--connect', self.loop_device_path, self.image_path])
        self.device_path = self.loop_device_path
    def _before_detach(self, e):
        log_check_call(['qemu-nbd', '--disconnect', self.loop_device_path])
        del self.loop_device_path
        self.device_path = None
    def _module_loaded(self, module):
        import re
        regexp = re.compile('^{module} +'.format(module=module))
        with open('/proc/modules') as loaded_modules:
            for line in loaded_modules:
                match = regexp.match(line)
                if match is not None:
                    return True
        return False
    def _module_param(self, module, param):
        import os.path
        param_path = os.path.join('/sys/module', module, 'parameters', param)
        with open(param_path) as param:
            return param.read().strip()
    # From http://lists.gnu.org/archive/html/qemu-devel/2011-11/msg02201.html
    # Apparently it's not in the current qemu-nbd shipped with wheezy
    def _is_nbd_used(self, device_name):
        return device_name in get_partitions()
    def _find_free_nbd_device(self):
        import os.path
        for i in xrange(0, 15):
            device_name = 'nbd' + str(i)
            if not self._is_nbd_used(device_name):
                return os.path.join('/dev', device_name)
        raise VolumeError('Unable to find free nbd device.')
    def __setstate__(self, state):
        for key in state:
            self.__dict__[key] = state[key]


@@ -3,13 +3,13 @@ from qemuvolume import QEMUVolume
class VirtualDiskImage(QEMUVolume):
    extension = 'vdi'
    qemu_format = 'vdi'
    # VDI format does not have an URI (check here: https://forums.virtualbox.org/viewtopic.php?p=275185#p275185)
    ovf_uri = None
    def get_uuid(self):
        import uuid
        with open(self.image_path) as image:
            image.seek(392)
            return uuid.UUID(bytes_le=image.read(16))


@@ -4,20 +4,20 @@ from ..tools import log_check_call
class VirtualHardDisk(QEMUVolume):
    extension = 'vhd'
    qemu_format = 'vpc'
    ovf_uri = 'http://go.microsoft.com/fwlink/?LinkId=137171'
    # Azure requires the image size to be a multiple of 1 MiB.
    # VHDs are dynamic by default, so we add the option
    # to make the image size fixed (subformat=fixed)
    def _before_create(self, e):
        self.image_path = e.image_path
        vol_size = str(self.size.bytes.get_qty_in('MiB')) + 'M'
        log_check_call(['qemu-img', 'create', '-o', 'subformat=fixed', '-f', self.qemu_format, self.image_path, vol_size])
    def get_uuid(self):
        if not hasattr(self, 'uuid'):
            import uuid
            self.uuid = uuid.uuid4()
        return self.uuid


@@ -3,25 +3,25 @@ from qemuvolume import QEMUVolume
class VirtualMachineDisk(QEMUVolume):
    extension = 'vmdk'
    qemu_format = 'vmdk'
    ovf_uri = 'http://www.vmware.com/specifications/vmdk.html#sparse'
    def get_uuid(self):
        if not hasattr(self, 'uuid'):
            import uuid
            self.uuid = uuid.uuid4()
        return self.uuid
        # import uuid
        # with open(self.image_path) as image:
        #     line = ''
        #     lines_read = 0
        #     while 'ddb.uuid.image="' not in line:
        #         line = image.read()
        #         lines_read += 1
        #         if lines_read > 100:
        #             from common.exceptions import VolumeError
        #             raise VolumeError('Unable to find UUID in VMDK file.')
        #     import re
        #     matches = re.search('ddb.uuid.image="(?P<uuid>[^"]+)"', line)
        #     return uuid.UUID(hex=matches.group('uuid'))


@@ -2,60 +2,60 @@
class FSMProxy(object):
    def __init__(self, cfg):
        from fysom import Fysom
        events = set([event['name'] for event in cfg['events']])
        cfg['callbacks'] = self.collect_event_listeners(events, cfg['callbacks'])
        self.fsm = Fysom(cfg)
        self.attach_proxy_methods(self.fsm, events)
    def collect_event_listeners(self, events, callbacks):
        callbacks = callbacks.copy()
        callback_names = []
        for event in events:
            callback_names.append(('_before_' + event, 'onbefore' + event))
            callback_names.append(('_after_' + event, 'onafter' + event))
        for fn_name, listener in callback_names:
            fn = getattr(self, fn_name, None)
            if callable(fn):
                if listener in callbacks:
                    old_fn = callbacks[listener]
                    def wrapper(e, old_fn=old_fn, fn=fn):
                        old_fn(e)
                        fn(e)
                    callbacks[listener] = wrapper
                else:
                    callbacks[listener] = fn
        return callbacks
    def attach_proxy_methods(self, fsm, events):
        def make_proxy(fsm, event):
            fn = getattr(fsm, event)
            def proxy(*args, **kwargs):
                if len(args) > 0:
                    raise FSMProxyError('FSMProxy event listeners only accept named arguments.')
                fn(**kwargs)
            return proxy
        for event in events:
            if not hasattr(self, event):
                setattr(self, event, make_proxy(fsm, event))
    def __getstate__(self):
        state = {}
        for key, value in self.__dict__.iteritems():
            if callable(value) or key == 'fsm':
                continue
            state[key] = value
        state['__class__'] = self.__module__ + '.' + self.__class__.__name__
        return state
    def __setstate__(self, state):
        for key in state:
            self.__dict__[key] = state[key]
class FSMProxyError(Exception):
    pass
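
Roughly, FSMProxy lets a subclass describe its states and events as a Fysom config; it then exposes one proxy method per event and wires any _before_<event>/_after_<event> methods into the matching callbacks. A hypothetical minimal subclass (names not taken from this commit) might look like:

class Light(FSMProxy):
    def __init__(self):
        cfg = {'initial': 'off',
               'events': [{'name': 'switch_on', 'src': 'off', 'dst': 'on'}],
               'callbacks': {}}
        super(Light, self).__init__(cfg)
    def _before_switch_on(self, e):
        # invoked through the generated 'onbeforeswitch_on' callback
        print('switching on')

# Light().switch_on() would fire the hook; positional arguments raise FSMProxyError.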


@@ -1,34 +1,34 @@
class _Release(object):
    def __init__(self, codename, version):
        self.codename = codename
        self.version = version
    def __cmp__(self, other):
        return self.version - other.version
    def __str__(self):
        return self.codename
    def __getstate__(self):
        state = self.__dict__.copy()
        state['__class__'] = self.__module__ + '.' + self.__class__.__name__
        return state
    def __setstate__(self, state):
        for key in state:
            self.__dict__[key] = state[key]
class _ReleaseAlias(_Release):
    def __init__(self, alias, release):
        self.alias = alias
        self.release = release
        super(_ReleaseAlias, self).__init__(self.release.codename, self.release.version)
    def __str__(self):
        return self.alias
sid = _Release('sid', 10)
@@ -54,15 +54,15 @@ oldstable = _ReleaseAlias('oldstable', wheezy)
def get_release(release_name):
    """Normalizes the release codenames
    This allows tasks to query for release codenames rather than 'stable', 'unstable' etc.
    """
    from . import releases
    release = getattr(releases, release_name, None)
    if release is None or not isinstance(release, _Release):
        raise UnknownReleaseException('The release `{name}\' is unknown'.format(name=release))
    return release
class UnknownReleaseException(Exception):
    pass


@@ -3,176 +3,176 @@ from bytes import Bytes
def onlysectors(msg):
    def decorator(func):
        def check_other(self, other):
            if not isinstance(other, Sectors):
                raise UnitError(msg)
            return func(self, other)
        return check_other
    return decorator
class Sectors(object):
    def __init__(self, quantity, sector_size):
        if isinstance(sector_size, Bytes):
            self.sector_size = sector_size
        else:
            self.sector_size = Bytes(sector_size)
        if isinstance(quantity, Bytes):
            self.bytes = quantity
        else:
            if isinstance(quantity, (int, long)):
                self.bytes = self.sector_size * quantity
            else:
                self.bytes = Bytes(quantity)
    def get_sectors(self):
        return self.bytes / self.sector_size
    def __repr__(self):
        return str(self.get_sectors()) + 's'
    def __str__(self):
        return self.__repr__()
    def __int__(self):
        return self.get_sectors()
    def __long__(self):
        return self.get_sectors()
    @onlysectors('Can only compare sectors with sectors')
    def __lt__(self, other):
        return self.bytes < other.bytes
    @onlysectors('Can only compare sectors with sectors')
    def __le__(self, other):
        return self.bytes <= other.bytes
    @onlysectors('Can only compare sectors with sectors')
    def __eq__(self, other):
        return self.bytes == other.bytes
    @onlysectors('Can only compare sectors with sectors')
    def __ne__(self, other):
        return self.bytes != other.bytes
    @onlysectors('Can only compare sectors with sectors')
    def __ge__(self, other):
        return self.bytes >= other.bytes
    @onlysectors('Can only compare sectors with sectors')
    def __gt__(self, other):
        return self.bytes > other.bytes
    def __add__(self, other):
        if isinstance(other, (int, long)):
            return Sectors(self.bytes + self.sector_size * other, self.sector_size)
        if isinstance(other, Bytes):
            return Sectors(self.bytes + other, self.sector_size)
        if isinstance(other, Sectors):
            if self.sector_size != other.sector_size:
                raise UnitError('Cannot sum sectors with different sector sizes')
            return Sectors(self.bytes + other.bytes, self.sector_size)
        raise UnitError('Can only add sectors, bytes or integers to sectors')
    def __iadd__(self, other):
        if isinstance(other, (int, long)):
            self.bytes += self.sector_size * other
            return self
        if isinstance(other, Bytes):
            self.bytes += other
            return self
        if isinstance(other, Sectors):
            if self.sector_size != other.sector_size:
                raise UnitError('Cannot sum sectors with different sector sizes')
            self.bytes += other.bytes
            return self
        raise UnitError('Can only add sectors, bytes or integers to sectors')
    def __sub__(self, other):
        if isinstance(other, (int, long)):
            return Sectors(self.bytes - self.sector_size * other, self.sector_size)
        if isinstance(other, Bytes):
            return Sectors(self.bytes - other, self.sector_size)
        if isinstance(other, Sectors):
            if self.sector_size != other.sector_size:
                raise UnitError('Cannot subtract sectors with different sector sizes')
            return Sectors(self.bytes - other.bytes, self.sector_size)
        raise UnitError('Can only subtract sectors, bytes or integers from sectors')
    def __isub__(self, other):
        if isinstance(other, (int, long)):
            self.bytes -= self.sector_size * other
            return self
        if isinstance(other, Bytes):
            self.bytes -= other
            return self
        if isinstance(other, Sectors):
            if self.sector_size != other.sector_size:
                raise UnitError('Cannot subtract sectors with different sector sizes')
            self.bytes -= other.bytes
            return self
        raise UnitError('Can only subtract sectors, bytes or integers from sectors')
    def __mul__(self, other):
        if isinstance(other, (int, long)):
            return Sectors(self.bytes * other, self.sector_size)
        else:
            raise UnitError('Can only multiply sectors with integers')
    def __imul__(self, other):
        if isinstance(other, (int, long)):
            self.bytes *= other
            return self
        else:
            raise UnitError('Can only multiply sectors with integers')
    def __div__(self, other):
        if isinstance(other, (int, long)):
            return Sectors(self.bytes / other, self.sector_size)
        if isinstance(other, Sectors):
            if self.sector_size == other.sector_size:
                return self.bytes / other.bytes
            else:
                raise UnitError('Cannot divide sectors with different sector sizes')
        raise UnitError('Can only divide sectors with integers or sectors')
    def __idiv__(self, other):
        if isinstance(other, (int, long)):
            self.bytes /= other
            return self
        if isinstance(other, Sectors):
            if self.sector_size == other.sector_size:
                self.bytes /= other.bytes
                return self
            else:
                raise UnitError('Cannot divide sectors with different sector sizes')
        raise UnitError('Can only divide sectors with integers or sectors')
    @onlysectors('Can only take modulus of sectors with sectors')
    def __mod__(self, other):
        if self.sector_size == other.sector_size:
            return Sectors(self.bytes % other.bytes, self.sector_size)
        else:
            raise UnitError('Cannot take modulus of sectors with different sector sizes')
    @onlysectors('Can only take modulus of sectors with sectors')
    def __imod__(self, other):
        if self.sector_size == other.sector_size:
            self.bytes %= other.bytes
            return self
        else:
            raise UnitError('Cannot take modulus of sectors with different sector sizes')
    def __getstate__(self):
        return {'__class__': self.__module__ + '.' + self.__class__.__name__,
                'sector_size': self.sector_size,
                'bytes': self.bytes,
                }
    def __setstate__(self, state):
        self.sector_size = state['sector_size']
        self.bytes = state['bytes']
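
A short usage sketch of the Sectors unit above (illustrative only; it assumes Bytes parses unit strings such as '512B' and '1GiB', and that both classes live under bootstrapvz.common):

from bootstrapvz.common.bytes import Bytes
from bootstrapvz.common.sectors import Sectors

size = Sectors('1GiB', Bytes('512B'))  # one GiB expressed in 512-byte sectors
size += Bytes('1MiB')                  # adding Bytes keeps the sector size
num_sectors = int(size)                # whole number of 512-byte sectors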


@@ -20,39 +20,39 @@ from tasks import folder
def get_standard_groups(manifest):
    group = []
    group.extend(get_base_group(manifest))
    group.extend(volume_group)
    if manifest.volume['partitions']['type'] != 'none':
        group.extend(partitioning_group)
    if 'boot' in manifest.volume['partitions']:
        group.extend(boot_partition_group)
    group.extend(mounting_group)
    group.extend(kernel_group)
    group.extend(get_fs_specific_group(manifest))
    group.extend(get_network_group(manifest))
    group.extend(get_apt_group(manifest))
    group.extend(security_group)
    group.extend(get_locale_group(manifest))
    group.extend(get_bootloader_group(manifest))
    group.extend(cleanup_group)
    return group
def get_base_group(manifest):
    group = [workspace.CreateWorkspace,
             bootstrap.AddRequiredCommands,
             host.CheckExternalCommands,
             bootstrap.Bootstrap,
             workspace.DeleteWorkspace,
             ]
    if manifest.bootstrapper.get('tarball', False):
        group.append(bootstrap.MakeTarball)
    if manifest.bootstrapper.get('include_packages', False):
        group.append(bootstrap.IncludePackagesInBootstrap)
    if manifest.bootstrapper.get('exclude_packages', False):
        group.append(bootstrap.ExcludePackagesInBootstrap)
    return group
volume_group = [volume.Attach,
@@ -95,95 +95,95 @@ ssh_group = [ssh.AddOpenSSHPackage,
def get_network_group(manifest):
    if manifest.bootstrapper.get('variant', None) == 'minbase':
        # minbase has no networking
        return []
    group = [network.ConfigureNetworkIF,
             network.RemoveDNSInfo]
    if manifest.system.get('hostname', False):
        group.append(network.SetHostname)
    else:
        group.append(network.RemoveHostname)
    return group
def get_apt_group(manifest):
    group = [apt.AddDefaultSources,
             apt.WriteSources,
             apt.DisableDaemonAutostart,
             apt.AptUpdate,
             apt.AptUpgrade,
             packages.InstallPackages,
             apt.PurgeUnusedPackages,
             apt.AptClean,
             apt.EnableDaemonAutostart,
             ]
    if 'sources' in manifest.packages:
        group.append(apt.AddManifestSources)
    if 'trusted-keys' in manifest.packages:
        group.append(apt.InstallTrustedKeys)
    if 'preferences' in manifest.packages:
        group.append(apt.AddManifestPreferences)
        group.append(apt.WritePreferences)
    if 'apt.conf.d' in manifest.packages:
        group.append(apt.WriteConfiguration)
    if 'install' in manifest.packages:
        group.append(packages.AddManifestPackages)
    if manifest.packages.get('install_standard', False):
        group.append(packages.AddTaskselStandardPackages)
    return group
security_group = [security.EnableShadowConfig]
def get_locale_group(manifest):
    from bootstrapvz.common.releases import jessie
    group = [
        locale.LocaleBootstrapPackage,
        locale.GenerateLocale,
        locale.SetTimezone,
    ]
    if manifest.release > jessie:
        group.append(locale.SetLocalTimeLink)
    else:
        group.append(locale.SetLocalTimeCopy)
    return group
def get_bootloader_group(manifest):
    from bootstrapvz.common.releases import jessie
    group = []
    if manifest.system['bootloader'] == 'grub':
        group.extend([grub.AddGrubPackage,
                      grub.ConfigureGrub])
        if manifest.release < jessie:
            group.append(grub.InstallGrub_1_99)
        else:
            group.append(grub.InstallGrub_2)
    if manifest.system['bootloader'] == 'extlinux':
        group.append(extlinux.AddExtlinuxPackage)
        if manifest.release < jessie:
            group.extend([extlinux.ConfigureExtlinux,
                          extlinux.InstallExtlinux])
        else:
            group.extend([extlinux.ConfigureExtlinuxJessie,
                          extlinux.InstallExtlinuxJessie])
    return group
def get_fs_specific_group(manifest):
    partitions = manifest.volume['partitions']
    fs_specific_tasks = {'ext2': [filesystem.TuneVolumeFS],
                         'ext3': [filesystem.TuneVolumeFS],
                         'ext4': [filesystem.TuneVolumeFS],
                         'xfs': [filesystem.AddXFSProgs],
                         }
    group = set()
    if 'boot' in partitions:
        group.update(fs_specific_tasks.get(partitions['boot']['filesystem'], []))
    if 'root' in partitions:
        group.update(fs_specific_tasks.get(partitions['root']['filesystem'], []))
    return list(group)
cleanup_group = [cleanup.ClearMOTD,
@@ -202,11 +202,11 @@ rollback_map = {workspace.CreateWorkspace: workspace.DeleteWorkspace,
def get_standard_rollback_tasks(completed):
    rollback_tasks = set()
    for task in completed:
        if task not in rollback_map:
            continue
        counter = rollback_map[task]
        if task in completed and counter not in completed:
            rollback_tasks.add(counter)
    return rollback_tasks


@@ -5,48 +5,48 @@ from . import assets
class UpdateInitramfs(Task):
    description = 'Updating initramfs'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        from ..tools import log_check_call
        log_check_call(['chroot', info.root, 'update-initramfs', '-u'])
class BlackListModules(Task):
    description = 'Blacklisting kernel modules'
    phase = phases.system_modification
    successors = [UpdateInitramfs]
    @classmethod
    def run(cls, info):
        blacklist_path = os.path.join(info.root, 'etc/modprobe.d/blacklist.conf')
        with open(blacklist_path, 'a') as blacklist:
            blacklist.write(('# disable pc speaker and floppy\n'
                             'blacklist pcspkr\n'
                             'blacklist floppy\n'))
class DisableGetTTYs(Task):
    description = 'Disabling getty processes'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        # Forward compatible check for jessie
        from bootstrapvz.common.releases import jessie
        if info.manifest.release < jessie:
            from ..tools import sed_i
            inittab_path = os.path.join(info.root, 'etc/inittab')
            tty1 = '1:2345:respawn:/sbin/getty 38400 tty1'
            sed_i(inittab_path, '^' + tty1, '#' + tty1)
            ttyx = ':23:respawn:/sbin/getty 38400 tty'
            for i in range(2, 7):
                i = str(i)
                sed_i(inittab_path, '^' + i + ttyx + i, '#' + i + ttyx + i)
        else:
            from shutil import copy
            logind_asset_path = os.path.join(assets, 'systemd/logind.conf')
            logind_destination = os.path.join(info.root, 'etc/systemd/logind.conf')
            copy(logind_asset_path, logind_destination)


@@ -8,107 +8,107 @@ log = logging.getLogger(__name__)
class AddRequiredCommands(Task):
    description = 'Adding commands required for bootstrapping Debian'
    phase = phases.preparation
    successors = [host.CheckExternalCommands]
    @classmethod
    def run(cls, info):
        info.host_dependencies['debootstrap'] = 'debootstrap'
def get_bootstrap_args(info):
    executable = ['debootstrap']
    arch = info.manifest.system.get('userspace_architecture', info.manifest.system.get('architecture'))
    options = ['--arch=' + arch]
    if 'variant' in info.manifest.bootstrapper:
        options.append('--variant=' + info.manifest.bootstrapper['variant'])
    if len(info.include_packages) > 0:
        options.append('--include=' + ','.join(info.include_packages))
    if len(info.exclude_packages) > 0:
        options.append('--exclude=' + ','.join(info.exclude_packages))
    mirror = info.manifest.bootstrapper.get('mirror', info.apt_mirror)
    arguments = [info.manifest.system['release'], info.root, mirror]
    return executable, options, arguments
def get_tarball_filename(info):
    from hashlib import sha1
    executable, options, arguments = get_bootstrap_args(info)
    # Filter info.root which points at /target/volume-id, we won't ever hit anything with that in there.
    hash_args = [arg for arg in arguments if arg != info.root]
    tarball_id = sha1(repr(frozenset(options + hash_args))).hexdigest()[0:8]
    tarball_filename = 'debootstrap-' + tarball_id + '.tar'
    return os.path.join(info.manifest.bootstrapper['workspace'], tarball_filename)
class MakeTarball(Task):
    description = 'Creating bootstrap tarball'
    phase = phases.os_installation
    @classmethod
    def run(cls, info):
        executable, options, arguments = get_bootstrap_args(info)
        tarball = get_tarball_filename(info)
        if os.path.isfile(tarball):
            log.debug('Found matching tarball, skipping creation')
        else:
            from ..tools import log_call
            status, out, err = log_call(executable + options + ['--make-tarball=' + tarball] + arguments)
            if status not in [0, 1]:  # variant=minbase exits with 0
                msg = 'debootstrap exited with status {status}, it should exit with status 0 or 1'.format(status=status)
                raise TaskError(msg)
class Bootstrap(Task):
    description = 'Installing Debian'
    phase = phases.os_installation
    predecessors = [MakeTarball]
    @classmethod
    def run(cls, info):
        executable, options, arguments = get_bootstrap_args(info)
        tarball = get_tarball_filename(info)
        if os.path.isfile(tarball):
            if not info.manifest.bootstrapper.get('tarball', False):
                # Only shows this message if it hasn't tried to create the tarball
                log.debug('Found matching tarball, skipping download')
            options.extend(['--unpack-tarball=' + tarball])
        if info.bootstrap_script is not None:
            # Optional bootstrapping script to modify the bootstrapping process
            arguments.append(info.bootstrap_script)
        try:
            from ..tools import log_check_call
            log_check_call(executable + options + arguments)
        except KeyboardInterrupt:
            # Sometimes ../root/sys and ../root/proc are still mounted when
            # quitting debootstrap prematurely. This break the cleanup process,
            # so we unmount manually (ignore the exit code, the dirs may not be mounted).
            from ..tools import log_call
            log_call(['umount', os.path.join(info.root, 'sys')])
            log_call(['umount', os.path.join(info.root, 'proc')])
            raise
class IncludePackagesInBootstrap(Task):
    description = 'Add packages in the bootstrap phase'
    phase = phases.preparation
    @classmethod
    def run(cls, info):
        info.include_packages.update(
            set(info.manifest.bootstrapper['include_packages'])
        )
class ExcludePackagesInBootstrap(Task):
    description = 'Remove packages from bootstrap phase'
    phase = phases.preparation
    @classmethod
    def run(cls, info):
        info.exclude_packages.update(
            set(info.manifest.bootstrapper['exclude_packages'])
        )


@@ -5,28 +5,28 @@ import shutil
class ClearMOTD(Task):
    description = 'Clearing the MOTD'
    phase = phases.system_cleaning
    @classmethod
    def run(cls, info):
        with open('/var/run/motd', 'w'):
            pass
class CleanTMP(Task):
    description = 'Removing temporary files'
    phase = phases.system_cleaning
    @classmethod
    def run(cls, info):
        tmp = os.path.join(info.root, 'tmp')
        for tmp_file in [os.path.join(tmp, f) for f in os.listdir(tmp)]:
            if os.path.isfile(tmp_file):
                os.remove(tmp_file)
            else:
                shutil.rmtree(tmp_file)
        log = os.path.join(info.root, 'var/log/')
        os.remove(os.path.join(log, 'bootstrap.log'))
        os.remove(os.path.join(log, 'dpkg.log'))


@@ -3,11 +3,11 @@ from .. import phases
class TriggerRollback(Task):
    phase = phases.cleaning
    description = 'Triggering a rollback by throwing an exception'
    @classmethod
    def run(cls, info):
        from ..exceptions import TaskError
        raise TaskError('Trigger rollback')


@@ -8,107 +8,107 @@ import os
class AddExtlinuxPackage(Task):
    description = 'Adding extlinux package'
    phase = phases.preparation
    @classmethod
    def run(cls, info):
        info.packages.add('extlinux')
        if isinstance(info.volume.partition_map, partitionmaps.gpt.GPTPartitionMap):
            info.packages.add('syslinux-common')
class ConfigureExtlinux(Task):
    description = 'Configuring extlinux'
    phase = phases.system_modification
    predecessors = [filesystem.FStab]
    @classmethod
    def run(cls, info):
        from bootstrapvz.common.releases import squeeze
        if info.manifest.release == squeeze:
            # On squeeze /etc/default/extlinux is generated when running extlinux-update
            log_check_call(['chroot', info.root,
                            'extlinux-update'])
        from bootstrapvz.common.tools import sed_i
        extlinux_def = os.path.join(info.root, 'etc/default/extlinux')
        sed_i(extlinux_def, r'^EXTLINUX_PARAMETERS="([^"]+)"$',
              r'EXTLINUX_PARAMETERS="\1 console=ttyS0"')
class InstallExtlinux(Task):
    description = 'Installing extlinux'
    phase = phases.system_modification
    predecessors = [filesystem.FStab, ConfigureExtlinux]
    @classmethod
    def run(cls, info):
        if isinstance(info.volume.partition_map, partitionmaps.gpt.GPTPartitionMap):
            bootloader = '/usr/lib/syslinux/gptmbr.bin'
        else:
            bootloader = '/usr/lib/extlinux/mbr.bin'
        log_check_call(['chroot', info.root,
                        'dd', 'bs=440', 'count=1',
                        'if=' + bootloader,
                        'of=' + info.volume.device_path])
        log_check_call(['chroot', info.root,
                        'extlinux',
                        '--install', '/boot/extlinux'])
        log_check_call(['chroot', info.root,
                        'extlinux-update'])
class ConfigureExtlinuxJessie(Task):
    description = 'Configuring extlinux'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        extlinux_path = os.path.join(info.root, 'boot/extlinux')
        os.mkdir(extlinux_path)
        from . import assets
        with open(os.path.join(assets, 'extlinux/extlinux.conf')) as template:
            extlinux_config_tpl = template.read()
        config_vars = {'root_uuid': info.volume.partition_map.root.get_uuid(),
                       'kernel_version': info.kernel_version}
        # Check if / and /boot are on the same partition
        # If not, /boot will actually be / when booting
        if hasattr(info.volume.partition_map, 'boot'):
            config_vars['boot_prefix'] = ''
        else:
            config_vars['boot_prefix'] = '/boot'
        extlinux_config = extlinux_config_tpl.format(**config_vars)
        with open(os.path.join(extlinux_path, 'extlinux.conf'), 'w') as extlinux_conf_handle:
            extlinux_conf_handle.write(extlinux_config)
        # Copy the boot message
        from shutil import copy
        boot_txt_path = os.path.join(assets, 'extlinux/boot.txt')
        copy(boot_txt_path, os.path.join(extlinux_path, 'boot.txt'))
class InstallExtlinuxJessie(Task):
    description = 'Installing extlinux'
    phase = phases.system_modification
    predecessors = [filesystem.FStab, ConfigureExtlinuxJessie]
    # Make sure the kernel image is updated after we have installed the bootloader
    successors = [kernel.UpdateInitramfs]
    @classmethod
    def run(cls, info):
        if isinstance(info.volume.partition_map, partitionmaps.gpt.GPTPartitionMap):
            # Yeah, somebody saw it fit to uppercase that folder in jessie. Why? BECAUSE
            bootloader = '/usr/lib/EXTLINUX/gptmbr.bin'
        else:
            bootloader = '/usr/lib/EXTLINUX/mbr.bin'
        log_check_call(['chroot', info.root,
                        'dd', 'bs=440', 'count=1',
                        'if=' + bootloader,
                        'of=' + info.volume.device_path])
        log_check_call(['chroot', info.root,
                        'extlinux',
                        '--install', '/boot/extlinux'])


@@ -7,196 +7,196 @@ import volume
class AddRequiredCommands(Task):
    description = 'Adding commands required for formatting'
    phase = phases.preparation
    successors = [host.CheckExternalCommands]
    @classmethod
    def run(cls, info):
        if 'xfs' in (p.filesystem for p in info.volume.partition_map.partitions):
            info.host_dependencies['mkfs.xfs'] = 'xfsprogs'
class Format(Task):
    description = 'Formatting the volume'
    phase = phases.volume_preparation
    @classmethod
    def run(cls, info):
        from bootstrapvz.base.fs.partitions.unformatted import UnformattedPartition
        for partition in info.volume.partition_map.partitions:
            if isinstance(partition, UnformattedPartition):
                continue
            partition.format()
class TuneVolumeFS(Task):
    description = 'Tuning the bootstrap volume filesystem'
    phase = phases.volume_preparation
    predecessors = [Format]
    @classmethod
    def run(cls, info):
        from bootstrapvz.base.fs.partitions.unformatted import UnformattedPartition
        import re
        # Disable the time based filesystem check
        for partition in info.volume.partition_map.partitions:
            if isinstance(partition, UnformattedPartition):
                continue
            if re.match('^ext[2-4]$', partition.filesystem) is not None:
                log_check_call(['tune2fs', '-i', '0', partition.device_path])
class AddXFSProgs(Task):
    description = 'Adding `xfsprogs\' to the image packages'
    phase = phases.preparation
    @classmethod
    def run(cls, info):
        info.packages.add('xfsprogs')
class CreateMountDir(Task):
    description = 'Creating mountpoint for the root partition'
    phase = phases.volume_mounting
    @classmethod
    def run(cls, info):
        import os
        info.root = os.path.join(info.workspace, 'root')
        os.makedirs(info.root)
class MountRoot(Task):
    description = 'Mounting the root partition'
    phase = phases.volume_mounting
    predecessors = [CreateMountDir]
    @classmethod
    def run(cls, info):
        info.volume.partition_map.root.mount(destination=info.root)
class CreateBootMountDir(Task):
    description = 'Creating mountpoint for the boot partition'
    phase = phases.volume_mounting
    predecessors = [MountRoot]
    @classmethod
    def run(cls, info):
        import os.path
        os.makedirs(os.path.join(info.root, 'boot'))
class MountBoot(Task):
    description = 'Mounting the boot partition'
    phase = phases.volume_mounting
    predecessors = [CreateBootMountDir]
    @classmethod
    def run(cls, info):
        p_map = info.volume.partition_map
        p_map.root.add_mount(p_map.boot, 'boot')
class MountSpecials(Task):
    description = 'Mounting special block devices'
    phase = phases.os_installation
    predecessors = [bootstrap.Bootstrap]
    @classmethod
    def run(cls, info):
        root = info.volume.partition_map.root
        root.add_mount('/dev', 'dev', ['--bind'])
        root.add_mount('none', 'proc', ['--types', 'proc'])
        root.add_mount('none', 'sys', ['--types', 'sysfs'])
        root.add_mount('none', 'dev/pts', ['--types', 'devpts'])
class CopyMountTable(Task):
    description = 'Copying mtab from host system'
    phase = phases.os_installation
    predecessors = [MountSpecials]
    @classmethod
    def run(cls, info):
        import shutil
        import os.path
        shutil.copy('/proc/mounts', os.path.join(info.root, 'etc/mtab'))
class UnmountRoot(Task):
    description = 'Unmounting the bootstrap volume'
    phase = phases.volume_unmounting
    successors = [volume.Detach]
    @classmethod
    def run(cls, info):
        info.volume.partition_map.root.unmount()
class RemoveMountTable(Task):
    description = 'Removing mtab'
    phase = phases.volume_unmounting
    successors = [UnmountRoot]
    @classmethod
    def run(cls, info):
        import os
        os.remove(os.path.join(info.root, 'etc/mtab'))
class DeleteMountDir(Task):
    description = 'Deleting mountpoint for the bootstrap volume'
    phase = phases.volume_unmounting
    predecessors = [UnmountRoot]
    @classmethod
    def run(cls, info):
        import os
        os.rmdir(info.root)
        del info.root
class FStab(Task):
    description = 'Adding partitions to the fstab'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        import os.path
        p_map = info.volume.partition_map
        mount_points = [{'path': '/',
                         'partition': p_map.root,
                         'dump': '1',
                         'pass_num': '1',
                         }]
        if hasattr(p_map, 'boot'):
mount_points.append({'path': '/boot', mount_points.append({'path': '/boot',
'partition': p_map.boot, 'partition': p_map.boot,
'dump': '1', 'dump': '1',
'pass_num': '2', 'pass_num': '2',
}) })
if hasattr(p_map, 'swap'): if hasattr(p_map, 'swap'):
mount_points.append({'path': 'none', mount_points.append({'path': 'none',
'partition': p_map.swap, 'partition': p_map.swap,
'dump': '1', 'dump': '1',
'pass_num': '0', 'pass_num': '0',
}) })
fstab_lines = [] fstab_lines = []
for mount_point in mount_points: for mount_point in mount_points:
partition = mount_point['partition'] partition = mount_point['partition']
mount_opts = ['defaults'] mount_opts = ['defaults']
fstab_lines.append('UUID={uuid} {mountpoint} {filesystem} {mount_opts} {dump} {pass_num}' fstab_lines.append('UUID={uuid} {mountpoint} {filesystem} {mount_opts} {dump} {pass_num}'
.format(uuid=partition.get_uuid(), .format(uuid=partition.get_uuid(),
mountpoint=mount_point['path'], mountpoint=mount_point['path'],
filesystem=partition.filesystem, filesystem=partition.filesystem,
mount_opts=','.join(mount_opts), mount_opts=','.join(mount_opts),
dump=mount_point['dump'], dump=mount_point['dump'],
pass_num=mount_point['pass_num'])) pass_num=mount_point['pass_num']))
fstab_path = os.path.join(info.root, 'etc/fstab') fstab_path = os.path.join(info.root, 'etc/fstab')
with open(fstab_path, 'w') as fstab: with open(fstab_path, 'w') as fstab:
fstab.write('\n'.join(fstab_lines)) fstab.write('\n'.join(fstab_lines))
fstab.write('\n') fstab.write('\n')
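For illustration, a root-plus-boot layout would make the FStab task above emit an /etc/fstab along these lines (the UUIDs and filesystems are placeholders, not values from any real build):

    UUID=11111111-2222-3333-4444-555555555555 / ext4 defaults 1 1
    UUID=66666666-7777-8888-9999-000000000000 /boot ext2 defaults 1 2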


@@ -5,23 +5,23 @@ import workspace

class Create(Task):
    description = 'Creating volume folder'
    phase = phases.volume_creation
    successors = [volume.Attach]
    @classmethod
    def run(cls, info):
        import os.path
        info.root = os.path.join(info.workspace, 'root')
        info.volume.create(info.root)

class Delete(Task):
    description = 'Deleting volume folder'
    phase = phases.cleaning
    successors = [workspace.DeleteWorkspace]
    @classmethod
    def run(cls, info):
        info.volume.delete()
        del info.root


@@ -8,82 +8,82 @@ import os.path

class AddGrubPackage(Task):
    description = 'Adding grub package'
    phase = phases.preparation
    @classmethod
    def run(cls, info):
        info.packages.add('grub-pc')

class ConfigureGrub(Task):
    description = 'Configuring grub'
    phase = phases.system_modification
    predecessors = [filesystem.FStab]
    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import sed_i
        grub_def = os.path.join(info.root, 'etc/default/grub')
        sed_i(grub_def, '^#GRUB_TERMINAL=console', 'GRUB_TERMINAL=console')
        sed_i(grub_def, '^GRUB_CMDLINE_LINUX_DEFAULT="quiet"',
              'GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"')
        sed_i(grub_def, '^GRUB_TIMEOUT=[0-9]+', 'GRUB_TIMEOUT=0\n'
                                                'GRUB_HIDDEN_TIMEOUT=0\n'
                                                'GRUB_HIDDEN_TIMEOUT_QUIET=true')
        sed_i(grub_def, '^#GRUB_DISABLE_RECOVERY="true"', 'GRUB_DISABLE_RECOVERY="true"')

class InstallGrub_1_99(Task):
    description = 'Installing grub 1.99'
    phase = phases.system_modification
    predecessors = [filesystem.FStab]
    @classmethod
    def run(cls, info):
        p_map = info.volume.partition_map
        # GRUB screws up when installing in chrooted environments
        # so we fake a real harddisk with dmsetup.
        # Guide here: http://ebroder.net/2009/08/04/installing-grub-onto-a-disk-image/
        from ..fs import unmounted
        with unmounted(info.volume):
            info.volume.link_dm_node()
            if isinstance(p_map, partitionmaps.none.NoPartitions):
                p_map.root.device_path = info.volume.device_path
        try:
            [device_path] = log_check_call(['readlink', '-f', info.volume.device_path])
            device_map_path = os.path.join(info.root, 'boot/grub/device.map')
            partition_prefix = 'msdos'
            if isinstance(p_map, partitionmaps.gpt.GPTPartitionMap):
                partition_prefix = 'gpt'
            with open(device_map_path, 'w') as device_map:
                device_map.write('(hd0) {device_path}\n'.format(device_path=device_path))
                if not isinstance(p_map, partitionmaps.none.NoPartitions):
                    for idx, partition in enumerate(info.volume.partition_map.partitions):
                        device_map.write('(hd0,{prefix}{idx}) {device_path}\n'
                                         .format(device_path=partition.device_path,
                                                 prefix=partition_prefix,
                                                 idx=idx + 1))
            # Install grub
            log_check_call(['chroot', info.root, 'grub-install', device_path])
            log_check_call(['chroot', info.root, 'update-grub'])
        finally:
            with unmounted(info.volume):
                info.volume.unlink_dm_node()
                if isinstance(p_map, partitionmaps.none.NoPartitions):
                    p_map.root.device_path = info.volume.device_path

class InstallGrub_2(Task):
    description = 'Installing grub 2'
    phase = phases.system_modification
    predecessors = [filesystem.FStab]
    # Make sure the kernel image is updated after we have installed the bootloader
    successors = [kernel.UpdateInitramfs]
    @classmethod
    def run(cls, info):
        log_check_call(['chroot', info.root, 'grub-install', info.volume.device_path])
        log_check_call(['chroot', info.root, 'update-grub'])


@@ -4,28 +4,28 @@ from ..exceptions import TaskError

class CheckExternalCommands(Task):
    description = 'Checking availability of external commands'
    phase = phases.preparation
    @classmethod
    def run(cls, info):
        from ..tools import log_check_call
        from subprocess import CalledProcessError
        import re
        missing_packages = []
        for command, package in info.host_dependencies.items():
            try:
                log_check_call(['type ' + command], shell=True)
            except CalledProcessError:
                if re.match('^https?:\/\/', package):
                    msg = ('The command `{command}\' is not available, '
                           'you can download the software at `{package}\'.'
                           .format(command=command, package=package))
                else:
                    msg = ('The command `{command}\' is not available, '
                           'it is located in the package `{package}\'.'
                           .format(command=command, package=package))
                missing_packages.append(msg)
        if len(missing_packages) > 0:
            msg = '\n'.join(missing_packages)
            raise TaskError(msg)
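The host_dependencies mapping iterated here is filled by earlier preparation tasks and maps an executable name to either a Debian package or a download URL (a URL entry triggers the "download the software at" message). A sketch of possible contents, the URL entry being invented for illustration:

    # Hypothetical contents of info.host_dependencies
    {'parted': 'parted',
     'qemu-img': 'qemu-utils',
     'sometool': 'https://example.com/sometool'}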


@@ -3,19 +3,19 @@ from bootstrapvz.common import phases

class MoveImage(Task):
    description = 'Moving volume image'
    phase = phases.image_registration
    @classmethod
    def run(cls, info):
        image_name = info.manifest.name.format(**info.manifest_vars)
        filename = image_name + '.' + info.volume.extension
        import os.path
        destination = os.path.join(info.manifest.bootstrapper['workspace'], filename)
        import shutil
        shutil.move(info.volume.image_path, destination)
        info.volume.image_path = destination
        import logging
        log = logging.getLogger(__name__)
        log.info('The volume image has been moved to ' + destination)


@@ -6,75 +6,75 @@ import os.path

class InstallInitScripts(Task):
    description = 'Installing startup scripts'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        import stat
        rwxr_xr_x = (stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR |
                     stat.S_IRGRP | stat.S_IXGRP |
                     stat.S_IROTH | stat.S_IXOTH)
        from shutil import copy
        for name, src in info.initd['install'].iteritems():
            dst = os.path.join(info.root, 'etc/init.d', name)
            copy(src, dst)
            os.chmod(dst, rwxr_xr_x)
            log_check_call(['chroot', info.root, 'insserv', '--default', name])
        for name in info.initd['disable']:
            log_check_call(['chroot', info.root, 'insserv', '--remove', name])

class AddExpandRoot(Task):
    description = 'Adding init script to expand the root volume'
    phase = phases.system_modification
    successors = [InstallInitScripts]
    @classmethod
    def run(cls, info):
        init_scripts_dir = os.path.join(assets, 'init.d')
        info.initd['install']['expand-root'] = os.path.join(init_scripts_dir, 'expand-root')

class RemoveHWClock(Task):
    description = 'Removing hardware clock init scripts'
    phase = phases.system_modification
    successors = [InstallInitScripts]
    @classmethod
    def run(cls, info):
        from bootstrapvz.common.releases import squeeze
        info.initd['disable'].append('hwclock.sh')
        if info.manifest.release == squeeze:
            info.initd['disable'].append('hwclockfirst.sh')

class AdjustExpandRootScript(Task):
    description = 'Adjusting the expand-root script'
    phase = phases.system_modification
    predecessors = [InstallInitScripts]
    @classmethod
    def run(cls, info):
        from ..tools import sed_i
        script = os.path.join(info.root, 'etc/init.d/expand-root')
        root_idx = info.volume.partition_map.root.get_index()
        root_index_line = 'root_index="{idx}"'.format(idx=root_idx)
        sed_i(script, '^root_index="0"$', root_index_line)
        root_device_path = 'root_device_path="{device}"'.format(device=info.volume.device_path)
        sed_i(script, '^root_device_path="/dev/xvda"$', root_device_path)

class AdjustGrowpartWorkaround(Task):
    description = 'Adjusting expand-root for growpart-workaround'
    phase = phases.system_modification
    predecessors = [AdjustExpandRootScript]
    @classmethod
    def run(cls, info):
        from ..tools import sed_i
        script = os.path.join(info.root, 'etc/init.d/expand-root')
        sed_i(script, '^growpart="growpart"$', 'growpart-workaround')


@@ -5,48 +5,48 @@ import logging

class AddDKMSPackages(Task):
    description = 'Adding DKMS and kernel header packages'
    phase = phases.package_installation
    successors = [packages.InstallPackages]
    @classmethod
    def run(cls, info):
        info.packages.add('dkms')
        kernel_pkg_arch = {'i386': '686-pae', 'amd64': 'amd64'}[info.manifest.system['architecture']]
        info.packages.add('linux-headers-' + kernel_pkg_arch)

class UpdateInitramfs(Task):
    description = 'Rebuilding initramfs'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        # Update initramfs (-u) for all currently installed kernel versions (-k all)
        log_check_call(['chroot', info.root, 'update-initramfs', '-u', '-k', 'all'])

class DetermineKernelVersion(Task):
    description = 'Determining kernel version'
    phase = phases.package_installation
    predecessors = [packages.InstallPackages]
    @classmethod
    def run(cls, info):
        # Snatched from `extlinux-update' in wheezy
        # list the files in boot/ that match vmlinuz-*
        # sort what the * matches, the first entry is the kernel version
        import os.path
        import re
        regexp = re.compile('^vmlinuz-(?P<version>.+)$')

        def get_kernel_version(vmlinuz_path):
            vmlinux_basename = os.path.basename(vmlinuz_path)
            return regexp.match(vmlinux_basename).group('version')

        from glob import glob
        boot = os.path.join(info.root, 'boot')
        vmlinuz_paths = glob('{boot}/vmlinuz-*'.format(boot=boot))
        kernels = map(get_kernel_version, vmlinuz_paths)
        info.kernel_version = sorted(kernels, reverse=True)[0]
        logging.getLogger(__name__).debug('Kernel version is {version}'.format(version=info.kernel_version))
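As a quick illustration of the version extraction in DetermineKernelVersion (the file name is made up):

    import re
    regexp = re.compile('^vmlinuz-(?P<version>.+)$')
    # 'vmlinuz-3.16.0-4-amd64' -> '3.16.0-4-amd64'
    print regexp.match('vmlinuz-3.16.0-4-amd64').group('version')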


@@ -4,71 +4,71 @@ import os.path

class LocaleBootstrapPackage(Task):
    description = 'Adding locale package to bootstrap installation'
    phase = phases.preparation
    @classmethod
    def run(cls, info):
        # We could bootstrap without locales, but things just suck without them
        # eg. error messages when running apt
        info.include_packages.add('locales')

class GenerateLocale(Task):
    description = 'Generating system locale'
    phase = phases.package_installation
    @classmethod
    def run(cls, info):
        from ..tools import sed_i
        from ..tools import log_check_call
        lang = '{locale}.{charmap}'.format(locale=info.manifest.system['locale'],
                                           charmap=info.manifest.system['charmap'])
        locale_str = '{locale}.{charmap} {charmap}'.format(locale=info.manifest.system['locale'],
                                                           charmap=info.manifest.system['charmap'])
        search = '# ' + locale_str
        locale_gen = os.path.join(info.root, 'etc/locale.gen')
        sed_i(locale_gen, search, locale_str)
        log_check_call(['chroot', info.root, 'locale-gen'])
        log_check_call(['chroot', info.root,
                        'update-locale', 'LANG=' + lang])
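Assuming a manifest with locale 'en_US' and charmap 'UTF-8' (an example, not a requirement), the sed_i call in GenerateLocale above uncomments the matching line in /etc/locale.gen and update-locale is then run with LANG=en_US.UTF-8:

    # en_US.UTF-8 UTF-8    (before)
    en_US.UTF-8 UTF-8      (after)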
class SetTimezone(Task):
    description = 'Setting the selected timezone'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        tz_path = os.path.join(info.root, 'etc/timezone')
        timezone = info.manifest.system['timezone']
        with open(tz_path, 'w') as tz_file:
            tz_file.write(timezone)

class SetLocalTimeLink(Task):
    description = 'Setting the selected local timezone (link)'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        timezone = info.manifest.system['timezone']
        localtime_path = os.path.join(info.root, 'etc/localtime')
        os.unlink(localtime_path)
        os.symlink(os.path.join('/usr/share/zoneinfo', timezone), localtime_path)

class SetLocalTimeCopy(Task):
    description = 'Setting the selected local timezone (copy)'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        from shutil import copy
        timezone = info.manifest.system['timezone']
        zoneinfo_path = os.path.join(info.root, '/usr/share/zoneinfo', timezone)
        localtime_path = os.path.join(info.root, 'etc/localtime')
        copy(zoneinfo_path, localtime_path)


@@ -5,28 +5,28 @@ import volume

class AddRequiredCommands(Task):
    description = 'Adding commands required for creating loopback volumes'
    phase = phases.preparation
    successors = [host.CheckExternalCommands]
    @classmethod
    def run(cls, info):
        from ..fs.loopbackvolume import LoopbackVolume
        from ..fs.qemuvolume import QEMUVolume
        if type(info.volume) is LoopbackVolume:
            info.host_dependencies['losetup'] = 'mount'
            info.host_dependencies['truncate'] = 'coreutils'
        if isinstance(info.volume, QEMUVolume):
            info.host_dependencies['qemu-img'] = 'qemu-utils'

class Create(Task):
    description = 'Creating a loopback volume'
    phase = phases.volume_creation
    successors = [volume.Attach]
    @classmethod
    def run(cls, info):
        import os.path
        image_path = os.path.join(info.workspace, 'volume.' + info.volume.extension)
        info.volume.create(image_path)


@@ -4,51 +4,51 @@ import os

class RemoveDNSInfo(Task):
    description = 'Removing resolv.conf'
    phase = phases.system_cleaning
    @classmethod
    def run(cls, info):
        if os.path.isfile(os.path.join(info.root, 'etc/resolv.conf')):
            os.remove(os.path.join(info.root, 'etc/resolv.conf'))

class RemoveHostname(Task):
    description = 'Removing the hostname file'
    phase = phases.system_cleaning
    @classmethod
    def run(cls, info):
        if os.path.isfile(os.path.join(info.root, 'etc/hostname')):
            os.remove(os.path.join(info.root, 'etc/hostname'))

class SetHostname(Task):
    description = 'Writing hostname into the hostname file'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        hostname = info.manifest.system['hostname'].format(**info.manifest_vars)
        hostname_file_path = os.path.join(info.root, 'etc/hostname')
        with open(hostname_file_path, 'w') as hostname_file:
            hostname_file.write(hostname)
        hosts_path = os.path.join(info.root, 'etc/hosts')
        from bootstrapvz.common.tools import sed_i
        sed_i(hosts_path, '^127.0.0.1\tlocalhost$', '127.0.0.1\tlocalhost\n127.0.1.1\t' + hostname)
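With a hostname of, say, 'example-host' (purely illustrative), the sed_i call in SetHostname above rewrites the localhost entry of /etc/hosts into two tab-separated lines:

    127.0.0.1   localhost
    127.0.1.1   example-host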
class ConfigureNetworkIF(Task):
    description = 'Configuring network interfaces'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        network_config_path = os.path.join(os.path.dirname(__file__), 'network-configuration.yml')
        from ..tools import config_get
        if_config = config_get(network_config_path, [info.manifest.release.codename])
        interfaces_path = os.path.join(info.root, 'etc/network/interfaces')
        with open(interfaces_path, 'a') as interfaces:
            interfaces.write(if_config + '\n')


@@ -5,107 +5,107 @@ from ..tools import log_check_call

class AddManifestPackages(Task):
    description = 'Adding packages from the manifest'
    phase = phases.preparation
    predecessors = [apt.AddManifestSources, apt.AddDefaultSources, apt.AddBackports]
    @classmethod
    def run(cls, info):
        import re
        remote = re.compile('^(?P<name>[^/]+)(/(?P<target>[^/]+))?$')
        for package in info.manifest.packages['install']:
            match = remote.match(package)
            if match is not None:
                info.packages.add(match.group('name'), match.group('target'))
            else:
                info.packages.add_local(package)

class InstallPackages(Task):
    description = 'Installing packages'
    phase = phases.package_installation
    predecessors = [apt.AptUpgrade]
    @classmethod
    def run(cls, info):
        batch = []
        actions = {info.packages.Remote: cls.install_remote,
                   info.packages.Local: cls.install_local}
        for i, package in enumerate(info.packages.install):
            batch.append(package)
            next_package = info.packages.install[i + 1] if i + 1 < len(info.packages.install) else None
            if next_package is None or package.__class__ is not next_package.__class__:
                actions[package.__class__](info, batch)
                batch = []

    @classmethod
    def install_remote(cls, info, remote_packages):
        import os
        from ..tools import log_check_call
        from subprocess import CalledProcessError
        try:
            env = os.environ.copy()
            env['DEBIAN_FRONTEND'] = 'noninteractive'
            log_check_call(['chroot', info.root,
                            'apt-get', 'install',
                            '--no-install-recommends',
                            '--assume-yes'] +
                           map(str, remote_packages),
                           env=env)
        except CalledProcessError as e:
            import logging
            disk_stat = os.statvfs(info.root)
            root_free_mb = disk_stat.f_bsize * disk_stat.f_bavail / 1024 / 1024
            disk_stat = os.statvfs(os.path.join(info.root, 'boot'))
            boot_free_mb = disk_stat.f_bsize * disk_stat.f_bavail / 1024 / 1024
            free_mb = min(root_free_mb, boot_free_mb)
            if free_mb < 50:
                msg = ('apt exited with a non-zero status, '
                       'this may be because\nthe image volume is '
                       'running out of disk space ({free}MB left)').format(free=free_mb)
                logging.getLogger(__name__).warn(msg)
            else:
                if e.returncode == 100:
                    msg = ('apt exited with status code 100. '
                           'This can sometimes occur when package retrieval times out or a package extraction failed. '
                           'apt might succeed if you try bootstrapping again.')
                    logging.getLogger(__name__).warn(msg)
            raise

    @classmethod
    def install_local(cls, info, local_packages):
        from shutil import copy
        import os
        absolute_package_paths = []
        chrooted_package_paths = []
        for package_src in local_packages:
            pkg_name = os.path.basename(package_src.path)
            package_rel_dst = os.path.join('tmp', pkg_name)
            package_dst = os.path.join(info.root, package_rel_dst)
            copy(package_src.path, package_dst)
            absolute_package_paths.append(package_dst)
            package_path = os.path.join('/', package_rel_dst)
            chrooted_package_paths.append(package_path)
        env = os.environ.copy()
        env['DEBIAN_FRONTEND'] = 'noninteractive'
        log_check_call(['chroot', info.root,
                        'dpkg', '--install'] + chrooted_package_paths,
                       env=env)
        for path in absolute_package_paths:
            os.remove(path)

class AddTaskselStandardPackages(Task):
    description = 'Adding standard packages from tasksel'
    phase = phases.package_installation
    predecessors = [apt.AptUpdate]
    successors = [InstallPackages]
    @classmethod
    def run(cls, info):
        tasksel_packages = log_check_call(['chroot', info.root, 'tasksel', '--task-packages', 'standard'])
        for pkg in tasksel_packages:
            info.packages.add(pkg)
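The run() method of InstallPackages above batches consecutive entries of the same type, so remote and local packages are installed in order with as few apt-get/dpkg invocations as possible. A minimal standalone sketch of that grouping logic, where Remote and Local are simplified stand-ins for the real info.packages types:

    # Stand-in classes for illustration only
    class Remote(object):
        def __init__(self, name):
            self.name = name

    class Local(object):
        def __init__(self, name):
            self.name = name

    install = [Remote('vim'), Remote('less'), Local('custom.deb'), Remote('curl')]
    batch = []
    for i, package in enumerate(install):
        batch.append(package)
        next_package = install[i + 1] if i + 1 < len(install) else None
        if next_package is None or package.__class__ is not next_package.__class__:
            # Yields the batches [vim, less], [custom.deb], [curl]
            print package.__class__.__name__, [p.name for p in batch]
            batch = []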


@@ -6,44 +6,44 @@ import volume

class AddRequiredCommands(Task):
    description = 'Adding commands required for partitioning the volume'
    phase = phases.preparation
    successors = [host.CheckExternalCommands]
    @classmethod
    def run(cls, info):
        from bootstrapvz.base.fs.partitionmaps.none import NoPartitions
        if not isinstance(info.volume.partition_map, NoPartitions):
            info.host_dependencies['parted'] = 'parted'
            info.host_dependencies['kpartx'] = 'kpartx'

class PartitionVolume(Task):
    description = 'Partitioning the volume'
    phase = phases.volume_preparation
    @classmethod
    def run(cls, info):
        info.volume.partition_map.create(info.volume)

class MapPartitions(Task):
    description = 'Mapping volume partitions'
    phase = phases.volume_preparation
    predecessors = [PartitionVolume]
    successors = [filesystem.Format]
    @classmethod
    def run(cls, info):
        info.volume.partition_map.map(info.volume)

class UnmapPartitions(Task):
    description = 'Removing volume partitions mapping'
    phase = phases.volume_unmounting
    predecessors = [filesystem.UnmountRoot]
    successors = [volume.Detach]
    @classmethod
    def run(cls, info):
        info.volume.partition_map.unmap(info.volume)


@@ -3,10 +3,10 @@ from .. import phases

class EnableShadowConfig(Task):
    description = 'Enabling shadowconfig'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        from ..tools import log_check_call
        log_check_call(['chroot', info.root, 'shadowconfig', 'on'])


@@ -7,106 +7,106 @@ import initd

class AddOpenSSHPackage(Task):
    description = 'Adding openssh package'
    phase = phases.preparation
    @classmethod
    def run(cls, info):
        info.packages.add('openssh-server')

class AddSSHKeyGeneration(Task):
    description = 'Adding SSH private key generation init scripts'
    phase = phases.system_modification
    successors = [initd.InstallInitScripts]
    @classmethod
    def run(cls, info):
        init_scripts_dir = os.path.join(assets, 'init.d')
        install = info.initd['install']
        from subprocess import CalledProcessError
        try:
            log_check_call(['chroot', info.root,
                            'dpkg-query', '-W', 'openssh-server'])
            from bootstrapvz.common.releases import squeeze
            if info.manifest.release == squeeze:
                install['generate-ssh-hostkeys'] = os.path.join(init_scripts_dir, 'squeeze/generate-ssh-hostkeys')
            else:
                install['generate-ssh-hostkeys'] = os.path.join(init_scripts_dir, 'generate-ssh-hostkeys')
        except CalledProcessError:
            import logging
            logging.getLogger(__name__).warn('The OpenSSH server has not been installed, '
                                             'not installing SSH host key generation script.')

class DisableSSHPasswordAuthentication(Task):
    description = 'Disabling SSH password authentication'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        from ..tools import sed_i
        sshd_config_path = os.path.join(info.root, 'etc/ssh/sshd_config')
        sed_i(sshd_config_path, '^#PasswordAuthentication yes', 'PasswordAuthentication no')

class EnableRootLogin(Task):
    description = 'Enabling SSH login for root'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        sshdconfig_path = os.path.join(info.root, 'etc/ssh/sshd_config')
        if os.path.exists(sshdconfig_path):
            from bootstrapvz.common.tools import sed_i
            sed_i(sshdconfig_path, '^PermitRootLogin .*', 'PermitRootLogin yes')
        else:
            import logging
            logging.getLogger(__name__).warn('The OpenSSH server has not been installed, '
                                             'not enabling SSH root login.')

class DisableRootLogin(Task):
    description = 'Disabling SSH login for root'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        sshdconfig_path = os.path.join(info.root, 'etc/ssh/sshd_config')
        if os.path.exists(sshdconfig_path):
            from bootstrapvz.common.tools import sed_i
            sed_i(sshdconfig_path, '^PermitRootLogin .*', 'PermitRootLogin no')
        else:
            import logging
            logging.getLogger(__name__).warn('The OpenSSH server has not been installed, '
                                             'not disabling SSH root login.')

class DisableSSHDNSLookup(Task):
    description = 'Disabling sshd remote host name lookup'
    phase = phases.system_modification
    @classmethod
    def run(cls, info):
        sshd_config_path = os.path.join(info.root, 'etc/ssh/sshd_config')
        with open(sshd_config_path, 'a') as sshd_config:
            sshd_config.write('UseDNS no')

class ShredHostkeys(Task):
    description = 'Securely deleting ssh hostkeys'
    phase = phases.system_cleaning
    @classmethod
    def run(cls, info):
        ssh_hostkeys = ['ssh_host_dsa_key',
                        'ssh_host_rsa_key']
        from bootstrapvz.common.releases import wheezy
        if info.manifest.release >= wheezy:
            ssh_hostkeys.append('ssh_host_ecdsa_key')
        private = [os.path.join(info.root, 'etc/ssh', name) for name in ssh_hostkeys]
        public = [path + '.pub' for path in private]
        from ..tools import log_check_call
        log_check_call(['shred', '--remove'] + private + public)


@@ -4,28 +4,28 @@ import workspace

class Attach(Task):
    description = 'Attaching the volume'
    phase = phases.volume_creation
    @classmethod
    def run(cls, info):
        info.volume.attach()

class Detach(Task):
    description = 'Detaching the volume'
    phase = phases.volume_unmounting
    @classmethod
    def run(cls, info):
        info.volume.detach()

class Delete(Task):
    description = 'Deleting the volume'
    phase = phases.cleaning
    successors = [workspace.DeleteWorkspace]
    @classmethod
    def run(cls, info):
        info.volume.delete()


@@ -3,20 +3,20 @@ from .. import phases

class CreateWorkspace(Task):
    description = 'Creating workspace'
    phase = phases.preparation
    @classmethod
    def run(cls, info):
        import os
        os.makedirs(info.workspace)

class DeleteWorkspace(Task):
    description = 'Deleting workspace'
    phase = phases.cleaning
    @classmethod
    def run(cls, info):
        import os
        os.rmdir(info.workspace)


@@ -2,134 +2,134 @@ import os

def log_check_call(command, stdin=None, env=None, shell=False, cwd=None):
    status, stdout, stderr = log_call(command, stdin, env, shell, cwd)
    from subprocess import CalledProcessError
    if status != 0:
        e = CalledProcessError(status, ' '.join(command), '\n'.join(stderr))
        # Fix Pyro4's fixIronPythonExceptionForPickle() by setting the args property,
        # even though we use our own serialization (at least I think that's the problem).
        # See bootstrapvz.remote.serialize_called_process_error for more info.
        setattr(e, 'args', (status, ' '.join(command), '\n'.join(stderr)))
        raise e
    return stdout

def log_call(command, stdin=None, env=None, shell=False, cwd=None):
    import subprocess
    import logging
    from multiprocessing.dummy import Pool as ThreadPool
    from os.path import realpath

    command_log = realpath(command[0]).replace('/', '.')
    log = logging.getLogger(__name__ + command_log)
    if type(command) is list:
        log.debug('Executing: {command}'.format(command=' '.join(command)))
    else:
        log.debug('Executing: {command}'.format(command=command))

    process = subprocess.Popen(args=command, env=env, shell=shell, cwd=cwd,
                               stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE)
    if stdin is not None:
        log.debug(' stdin: ' + stdin)
        process.stdin.write(stdin + "\n")
        process.stdin.flush()
    process.stdin.close()

    stdout = []
    stderr = []

    def handle_stdout(line):
        log.debug(line)
        stdout.append(line)

    def handle_stderr(line):
        log.error(line)
        stderr.append(line)

    handlers = {process.stdout: handle_stdout,
                process.stderr: handle_stderr}

    def stream_readline(stream):
        for line in iter(stream.readline, ''):
            handlers[stream](line.strip())

    pool = ThreadPool(2)
    pool.map(stream_readline, [process.stdout, process.stderr])
    pool.close()
    pool.join()
    process.wait()
    return process.returncode, stdout, stderr
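Both helpers are used throughout the tasks above. A minimal usage sketch (the commands are arbitrary examples):

    from bootstrapvz.common.tools import log_call, log_check_call

    # log_check_call raises CalledProcessError (with stderr attached) on a
    # non-zero exit status, otherwise it returns the captured stdout lines.
    lines = log_check_call(['ls', '-1', '/tmp'])

    # log_call never raises; it returns the exit status alongside the output.
    status, stdout, stderr = log_call(['ls', '-1', '/nonexistent'])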
def sed_i(file_path, pattern, subst, expected_replacements=1):
    replacement_count = inline_replace(file_path, pattern, subst)
    if replacement_count != expected_replacements:
        from exceptions import UnexpectedNumMatchesError
        msg = ('There were {real} instead of {expected} matches for '
               'the expression `{exp}\' in the file `{path}\''
               .format(real=replacement_count, expected=expected_replacements,
                       exp=pattern, path=file_path))
        raise UnexpectedNumMatchesError(msg)
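A short usage sketch of sed_i (file path and patterns invented): the file is rewritten in place and UnexpectedNumMatchesError is raised unless exactly expected_replacements lines matched.

    from bootstrapvz.common.tools import sed_i

    # Uncomment a single setting; raises if the pattern matches zero times
    # or more than once.
    sed_i('/tmp/example.conf', '^#Setting yes$', 'Setting yes')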
def inline_replace(file_path, pattern, subst):
    import fileinput
    import re
    replacement_count = 0
    for line in fileinput.input(files=file_path, inplace=True):
        (replacement, count) = re.subn(pattern, subst, line)
        replacement_count += count
        print replacement,
    return replacement_count

def load_json(path):
    import json
    from minify_json import json_minify
    with open(path) as stream:
        return json.loads(json_minify(stream.read(), False))

def load_yaml(path):
    import yaml
    with open(path, 'r') as stream:
        return yaml.safe_load(stream)

def load_data(path):
    filename, extension = os.path.splitext(path)
    if not os.path.isfile(path):
        raise Exception('The path {path} does not point to a file.'.format(path=path))
    if extension == '.json':
        return load_json(path)
    elif extension == '.yml' or extension == '.yaml':
        return load_yaml(path)
    else:
        raise Exception('Unrecognized extension: {ext}'.format(ext=extension))

def config_get(path, config_path):
    config = load_data(path)
    for key in config_path:
        config = config.get(key)
    return config
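config_get simply walks a key path into a JSON or YAML document. A minimal sketch (the file and key are invented; ConfigureNetworkIF above uses it with the release codename as the key):

    from bootstrapvz.common.tools import config_get

    # Given a YAML file with a top-level 'jessie' key, this returns the value
    # stored under that key.
    if_config = config_get('/tmp/network-configuration.yml', ['jessie'])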
def copy_tree(from_path, to_path):
    from shutil import copy
    for abs_prefix, dirs, files in os.walk(from_path):
        prefix = os.path.normpath(os.path.relpath(abs_prefix, from_path))
        for path in dirs:
            full_path = os.path.join(to_path, prefix, path)
            if os.path.exists(full_path):
                if os.path.isdir(full_path):
                    continue
                else:
                    os.remove(full_path)
            os.mkdir(full_path)
        for path in files:
            copy(os.path.join(abs_prefix, path),
                 os.path.join(to_path, prefix, path))


@@ -1,37 +1,37 @@

def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)
    pubkey = data['plugins']['admin_user'].get('pubkey', None)
    if pubkey is not None and not os.path.exists(pubkey):
        msg = 'Could not find public key at %s' % pubkey
        error(msg, ['plugins', 'admin_user', 'pubkey'])

def resolve_tasks(taskset, manifest):
    import logging
    import tasks
    from bootstrapvz.common.tasks import ssh
    from bootstrapvz.common.releases import jessie
    if manifest.release < jessie:
        taskset.update([ssh.DisableRootLogin])
    if 'password' in manifest.plugins['admin_user']:
        taskset.discard(ssh.DisableSSHPasswordAuthentication)
        taskset.add(tasks.AdminUserPassword)
    if 'pubkey' in manifest.plugins['admin_user']:
        taskset.add(tasks.AdminUserPublicKey)
    elif manifest.provider['name'] == 'ec2':
        logging.getLogger(__name__).info("The SSH key will be obtained from EC2")
        taskset.add(tasks.AdminUserPublicKeyEC2)
    elif 'password' not in manifest.plugins['admin_user']:
        logging.getLogger(__name__).warn("No SSH key and no password set")
    taskset.update([tasks.AddSudoPackage,
                    tasks.CreateAdminUser,
                    tasks.PasswordlessSudo,
                    ])

View file
@ -9,104 +9,104 @@ log = logging.getLogger(__name__)
class AddSudoPackage(Task):
    description = 'Adding `sudo\' to the image packages'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('sudo')


class CreateAdminUser(Task):
    description = 'Creating the admin user'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        log_check_call(['chroot', info.root,
                        'useradd',
                        '--create-home', '--shell', '/bin/bash',
                        info.manifest.plugins['admin_user']['username']])


class PasswordlessSudo(Task):
    description = 'Allowing the admin user to use sudo without a password'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        sudo_admin_path = os.path.join(info.root, 'etc/sudoers.d/99_admin')
        username = info.manifest.plugins['admin_user']['username']
        with open(sudo_admin_path, 'w') as sudo_admin:
            sudo_admin.write('{username} ALL=(ALL) NOPASSWD:ALL'.format(username=username))
        import stat
        ug_read_only = (stat.S_IRUSR | stat.S_IRGRP)
        os.chmod(sudo_admin_path, ug_read_only)


class AdminUserPassword(Task):
    description = 'Setting the admin user password'
    phase = phases.system_modification
    predecessors = [InstallInitScripts, CreateAdminUser]

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        log_check_call(['chroot', info.root, 'chpasswd'],
                       info.manifest.plugins['admin_user']['username'] +
                       ':' + info.manifest.plugins['admin_user']['password'])


class AdminUserPublicKey(Task):
    description = 'Installing the public key for the admin user'
    phase = phases.system_modification
    predecessors = [AddEC2InitScripts, CreateAdminUser]
    successors = [InstallInitScripts]

    @classmethod
    def run(cls, info):
        if 'ec2-get-credentials' in info.initd['install']:
            log.warn('You are using a static public key for the admin account.'
                     'This will conflict with the ec2 public key injection mechanism.'
                     'The ec2-get-credentials startup script will therefore not be enabled.')
            del info.initd['install']['ec2-get-credentials']
        # Get the stuff we need (username & public key)
        username = info.manifest.plugins['admin_user']['username']
        with open(info.manifest.plugins['admin_user']['pubkey']) as pubkey_handle:
            pubkey = pubkey_handle.read()
        # paths
        ssh_dir_rel = os.path.join('home', username, '.ssh')
        auth_keys_rel = os.path.join(ssh_dir_rel, 'authorized_keys')
        ssh_dir_abs = os.path.join(info.root, ssh_dir_rel)
        auth_keys_abs = os.path.join(info.root, auth_keys_rel)
        # Create the ssh dir if nobody has created it yet
        if not os.path.exists(ssh_dir_abs):
            os.mkdir(ssh_dir_abs, 0700)
        # Create (or append to) the authorized keys file (and chmod u=rw,go=)
        import stat
        with open(auth_keys_abs, 'a') as auth_keys_handle:
            auth_keys_handle.write(pubkey + '\n')
        os.chmod(auth_keys_abs, (stat.S_IRUSR | stat.S_IWUSR))
        # Set the owner of the authorized keys file
        # (must be through chroot, the host system doesn't know about the user)
        from bootstrapvz.common.tools import log_check_call
        log_check_call(['chroot', info.root,
                        'chown', '-R', (username + ':' + username), ssh_dir_rel])


class AdminUserPublicKeyEC2(Task):
    description = 'Modifying ec2-get-credentials to copy the ssh public key to the admin user'
    phase = phases.system_modification
    predecessors = [InstallInitScripts, CreateAdminUser]

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import sed_i
        getcreds_path = os.path.join(info.root, 'etc/init.d/ec2-get-credentials')
        username = info.manifest.plugins['admin_user']['username']
        sed_i(getcreds_path, "username='root'", "username='{username}'".format(username=username))
View file
@ -1,12 +1,12 @@
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    import tasks
    taskset.add(tasks.CheckAptProxy)
    taskset.add(tasks.SetAptProxy)
    if not manifest.plugins['apt_proxy'].get('persistent', False):
        taskset.add(tasks.RemoveAptProxy)
View file
@ -6,55 +6,55 @@ import urllib2
class CheckAptProxy(Task):
    description = 'Checking reachability of APT proxy server'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        proxy_address = info.manifest.plugins['apt_proxy']['address']
        proxy_port = info.manifest.plugins['apt_proxy']['port']
        proxy_url = 'http://{address}:{port}'.format(address=proxy_address, port=proxy_port)
        try:
            urllib2.urlopen(proxy_url, timeout=5)
        except Exception as e:
            # Default response from `apt-cacher-ng`
            if isinstance(e, urllib2.HTTPError) and e.code in [404, 406] and e.msg == 'Usage Information':
                pass
            else:
                import logging
                log = logging.getLogger(__name__)
                log.warning('The APT proxy server couldn\'t be reached. `apt-get\' commands may fail.')


class SetAptProxy(Task):
    description = 'Setting proxy for APT'
    phase = phases.package_installation
    successors = [apt.AptUpdate]

    @classmethod
    def run(cls, info):
        proxy_path = os.path.join(info.root, 'etc/apt/apt.conf.d/02proxy')
        proxy_username = info.manifest.plugins['apt_proxy'].get('username')
        proxy_password = info.manifest.plugins['apt_proxy'].get('password')
        proxy_address = info.manifest.plugins['apt_proxy']['address']
        proxy_port = info.manifest.plugins['apt_proxy']['port']
        if None not in (proxy_username, proxy_password):
            proxy_auth = '{username}:{password}@'.format(
                username=proxy_username, password=proxy_password)
        else:
            proxy_auth = ''
        with open(proxy_path, 'w') as proxy_file:
            proxy_file.write(
                'Acquire::http {{ Proxy "http://{auth}{address}:{port}"; }};\n'
                .format(auth=proxy_auth, address=proxy_address, port=proxy_port))


class RemoveAptProxy(Task):
    description = 'Removing APT proxy configuration file'
    phase = phases.system_cleaning

    @classmethod
    def run(cls, info):
        os.remove(os.path.join(info.root, 'etc/apt/apt.conf.d/02proxy'))
View file
@ -2,13 +2,13 @@ import tasks
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    taskset.add(tasks.AddPackages)
    if 'assets' in manifest.plugins['chef']:
        taskset.add(tasks.CheckAssetsPath)
        taskset.add(tasks.CopyChefAssets)
View file
@ -4,35 +4,35 @@ import os
class CheckAssetsPath(Task):
    description = 'Checking whether the assets path exist'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.exceptions import TaskError
        assets = info.manifest.plugins['chef']['assets']
        if not os.path.exists(assets):
            msg = 'The assets directory {assets} does not exist.'.format(assets=assets)
            raise TaskError(msg)
        if not os.path.isdir(assets):
            msg = 'The assets path {assets} does not point to a directory.'.format(assets=assets)
            raise TaskError(msg)


class AddPackages(Task):
    description = 'Add chef package'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('chef')


class CopyChefAssets(Task):
    description = 'Copying chef assets'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import copy_tree
        copy_tree(info.manifest.plugins['chef']['assets'], os.path.join(info.root, 'etc/chef'))
View file
@ -1,36 +1,36 @@
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    import tasks
    import bootstrapvz.providers.ec2.tasks.initd as initd_ec2
    from bootstrapvz.common.tasks import apt
    from bootstrapvz.common.tasks import initd
    from bootstrapvz.common.tasks import ssh
    from bootstrapvz.common.releases import wheezy
    if manifest.release == wheezy:
        taskset.add(apt.AddBackports)
    taskset.update([tasks.SetMetadataSource,
                    tasks.AddCloudInitPackages,
                    ])
    options = manifest.plugins['cloud_init']
    if 'username' in options:
        taskset.add(tasks.SetUsername)
    if 'groups' in options and len(options['groups']):
        taskset.add(tasks.SetGroups)
    if 'disable_modules' in options:
        taskset.add(tasks.DisableModules)
    taskset.discard(initd_ec2.AddEC2InitScripts)
    taskset.discard(initd.AddExpandRoot)
    taskset.discard(initd.AdjustExpandRootScript)
    taskset.discard(initd.AdjustGrowpartWorkaround)
    taskset.discard(ssh.AddSSHKeyGeneration)
View file
@ -8,92 +8,92 @@ import os.path
class AddCloudInitPackages(Task):
    description = 'Adding cloud-init package and sudo'
    phase = phases.preparation
    predecessors = [apt.AddBackports]

    @classmethod
    def run(cls, info):
        target = None
        from bootstrapvz.common.releases import wheezy
        if info.manifest.release == wheezy:
            target = '{system.release}-backports'
        info.packages.add('cloud-init', target)
        info.packages.add('sudo')


class SetUsername(Task):
    description = 'Setting username in cloud.cfg'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import sed_i
        cloud_cfg = os.path.join(info.root, 'etc/cloud/cloud.cfg')
        username = info.manifest.plugins['cloud_init']['username']
        search = '^ name: debian$'
        replace = (' name: {username}\n'
                   ' sudo: ALL=(ALL) NOPASSWD:ALL\n'
                   ' shell: /bin/bash').format(username=username)
        sed_i(cloud_cfg, search, replace)


class SetGroups(Task):
    description = 'Setting groups in cloud.cfg'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import sed_i
        cloud_cfg = os.path.join(info.root, 'etc/cloud/cloud.cfg')
        groups = info.manifest.plugins['cloud_init']['groups']
        search = ('^ groups: \[adm, audio, cdrom, dialout, floppy, video,'
                  ' plugdev, dip\]$')
        replace = (' groups: [adm, audio, cdrom, dialout, floppy, video,'
                   ' plugdev, dip, {groups}]').format(groups=', '.join(groups))
        sed_i(cloud_cfg, search, replace)


class SetMetadataSource(Task):
    description = 'Setting metadata source'
    phase = phases.package_installation
    predecessors = [locale.GenerateLocale]
    successors = [apt.AptUpdate]

    @classmethod
    def run(cls, info):
        if 'metadata_sources' in info.manifest.plugins['cloud_init']:
            sources = info.manifest.plugins['cloud_init']['metadata_sources']
        else:
            source_mapping = {'ec2': 'Ec2'}
            sources = source_mapping.get(info.manifest.provider['name'], None)
            if sources is None:
                msg = ('No cloud-init metadata source mapping found for provider `{provider}\', '
                       'skipping selections setting.').format(provider=info.manifest.provider['name'])
                logging.getLogger(__name__).warn(msg)
                return
        sources = "cloud-init cloud-init/datasources multiselect " + sources
        log_check_call(['chroot', info.root, 'debconf-set-selections'], sources)


class DisableModules(Task):
    description = 'Setting cloud.cfg modules'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        import re
        patterns = ""
        for pattern in info.manifest.plugins['cloud_init']['disable_modules']:
            if patterns != "":
                patterns = patterns + "|" + pattern
            else:
                patterns = "^\s+-\s+(" + pattern
        patterns = patterns + ")$"
        regex = re.compile(patterns)
        cloud_cfg = os.path.join(info.root, 'etc/cloud/cloud.cfg')
        import fileinput
        for line in fileinput.input(files=cloud_cfg, inplace=True):
            if not regex.match(line):
                print line,
View file
@ -1,11 +1,11 @@
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    from tasks import ImageExecuteCommand
    taskset.add(ImageExecuteCommand)
View file
@ -3,13 +3,13 @@ from bootstrapvz.common import phases
class ImageExecuteCommand(Task):
    description = 'Executing commands in the image'
    phase = phases.user_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        for raw_command in info.manifest.plugins['commands']['commands']:
            command = map(lambda part: part.format(root=info.root, **info.manifest_vars), raw_command)
            shell = len(command) == 1
            log_check_call(command, shell=shell)
View file
@ -5,23 +5,23 @@ from bootstrapvz.common.releases import wheezy
def validate_manifest(data, validator, error):
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)
    from bootstrapvz.common.releases import get_release
    if get_release(data['system']['release']) == wheezy:
        # prefs is a generator of apt preferences across files in the manifest
        prefs = (item for vals in data.get('packages', {}).get('preferences', {}).values() for item in vals)
        if not any('linux-image' in item['package'] and 'wheezy-backports' in item['pin'] for item in prefs):
            msg = 'The backports kernel is required for the docker daemon to function properly'
            error(msg, ['packages', 'preferences'])


def resolve_tasks(taskset, manifest):
    if manifest.release == wheezy:
        taskset.add(apt.AddBackports)
    taskset.add(tasks.AddDockerDeps)
    taskset.add(tasks.AddDockerBinary)
    taskset.add(tasks.AddDockerInit)
    taskset.add(tasks.EnableMemoryCgroup)
    if len(manifest.plugins['docker_daemon'].get('pull_images', [])) > 0:
        taskset.add(tasks.PullDockerImages)
View file
@ -15,108 +15,108 @@ ASSETS_DIR = os.path.normpath(os.path.join(os.path.dirname(__file__), 'assets'))
class AddDockerDeps(Task):
    description = 'Add packages for docker deps'
    phase = phases.package_installation
    DOCKER_DEPS = ['aufs-tools', 'btrfs-tools', 'git', 'iptables',
                   'procps', 'xz-utils', 'ca-certificates']

    @classmethod
    def run(cls, info):
        for pkg in cls.DOCKER_DEPS:
            info.packages.add(pkg)


class AddDockerBinary(Task):
    description = 'Add docker binary'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        docker_version = info.manifest.plugins['docker_daemon'].get('version', False)
        docker_url = 'https://get.docker.io/builds/Linux/x86_64/docker-'
        if docker_version:
            docker_url += docker_version
        else:
            docker_url += 'latest'
        bin_docker = os.path.join(info.root, 'usr/bin/docker')
        log_check_call(['wget', '-O', bin_docker, docker_url])
        os.chmod(bin_docker, 0755)


class AddDockerInit(Task):
    description = 'Add docker init script'
    phase = phases.system_modification
    successors = [initd.InstallInitScripts]

    @classmethod
    def run(cls, info):
        init_src = os.path.join(ASSETS_DIR, 'init.d/docker')
        info.initd['install']['docker'] = init_src
        default_src = os.path.join(ASSETS_DIR, 'default/docker')
        default_dest = os.path.join(info.root, 'etc/default/docker')
        shutil.copy(default_src, default_dest)
        docker_opts = info.manifest.plugins['docker_daemon'].get('docker_opts')
        if docker_opts:
            sed_i(default_dest, r'^#*DOCKER_OPTS=.*$', 'DOCKER_OPTS="%s"' % docker_opts)


class EnableMemoryCgroup(Task):
    description = 'Change grub configuration to enable the memory cgroup'
    phase = phases.system_modification
    successors = [grub.InstallGrub_1_99, grub.InstallGrub_2]
    predecessors = [grub.ConfigureGrub, gceboot.ConfigureGrub]

    @classmethod
    def run(cls, info):
        grub_config = os.path.join(info.root, 'etc/default/grub')
        sed_i(grub_config, r'^(GRUB_CMDLINE_LINUX*=".*)"\s*$', r'\1 cgroup_enable=memory"')


class PullDockerImages(Task):
    description = 'Pull docker images'
    phase = phases.system_modification
    predecessors = [AddDockerBinary]

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.exceptions import TaskError
        from subprocess import CalledProcessError
        images = info.manifest.plugins['docker_daemon'].get('pull_images', [])
        retries = info.manifest.plugins['docker_daemon'].get('pull_images_retries', 10)
        bin_docker = os.path.join(info.root, 'usr/bin/docker')
        graph_dir = os.path.join(info.root, 'var/lib/docker')
        socket = 'unix://' + os.path.join(info.workspace, 'docker.sock')
        pidfile = os.path.join(info.workspace, 'docker.pid')
        try:
            # start docker daemon temporarly.
            daemon = subprocess.Popen([bin_docker, '-d', '--graph', graph_dir, '-H', socket, '-p', pidfile])
            # wait for docker daemon to start.
            for _ in range(retries):
                try:
                    log_check_call([bin_docker, '-H', socket, 'version'])
                    break
                except CalledProcessError:
                    time.sleep(1)
            for img in images:
                # docker load if tarball.
                if img.endswith('.tar.gz') or img.endswith('.tgz'):
                    cmd = [bin_docker, '-H', socket, 'load', '-i', img]
                    try:
                        log_check_call(cmd)
                    except CalledProcessError as e:
                        msg = 'error {e} loading docker image {img}.'.format(img=img, e=e)
                        raise TaskError(msg)
                # docker pull if image name.
                else:
                    cmd = [bin_docker, '-H', socket, 'pull', img]
                    try:
                        log_check_call(cmd)
                    except CalledProcessError as e:
                        msg = 'error {e} pulling docker image {img}.'.format(img=img, e=e)
                        raise TaskError(msg)
        finally:
            # shutdown docker daemon.
            daemon.terminate()
            os.remove(os.path.join(info.workspace, 'docker.sock'))
View file
@ -6,13 +6,13 @@ import logging
# TODO: Merge with the method available in wip-integration-tests branch
def waituntil(predicate, timeout=5, interval=0.05):
    import time
    threshhold = time.time() + timeout
    while time.time() < threshhold:
        if predicate():
            return True
        time.sleep(interval)
    return False


class LaunchEC2Instance(Task):
View file
@ -1,15 +1,15 @@
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    import tasks
    taskset.add(tasks.CopyAmiToRegions)
    if 'manifest_url' in manifest.plugins['ec2_publish']:
        taskset.add(tasks.PublishAmiManifest)
    ami_public = manifest.plugins['ec2_publish'].get('public')
    if ami_public:
        taskset.add(tasks.PublishAmi)
View file
@ -6,91 +6,91 @@ import logging
class CopyAmiToRegions(Task):
    description = 'Copy AWS AMI over other regions'
    phase = phases.image_registration
    predecessors = [ami.RegisterAMI]

    @classmethod
    def run(cls, info):
        source_region = info._ec2['region']
        source_ami = info._ec2['image']
        name = info._ec2['ami_name']
        copy_description = "Copied from %s (%s)" % (source_ami, source_region)
        connect_args = {
            'aws_access_key_id': info.credentials['access-key'],
            'aws_secret_access_key': info.credentials['secret-key']
        }
        if 'security-token' in info.credentials:
            connect_args['security_token'] = info.credentials['security-token']
        region_amis = {source_region: source_ami}
        region_conns = {source_region: info._ec2['connection']}
        from boto.ec2 import connect_to_region
        regions = info.manifest.plugins['ec2_publish'].get('regions', ())
        for region in regions:
            conn = connect_to_region(region, **connect_args)
            region_conns[region] = conn
            copied_image = conn.copy_image(source_region, source_ami, name=name, description=copy_description)
            region_amis[region] = copied_image.image_id
        info._ec2['region_amis'] = region_amis
        info._ec2['region_conns'] = region_conns


class PublishAmiManifest(Task):
    description = 'Publish a manifest of generated AMIs'
    phase = phases.image_registration
    predecessors = [CopyAmiToRegions]

    @classmethod
    def run(cls, info):
        manifest_url = info.manifest.plugins['ec2_publish']['manifest_url']
        import json
        amis_json = json.dumps(info._ec2['region_amis'])
        from urlparse import urlparse
        parsed_url = urlparse(manifest_url)
        parsed_host = parsed_url.netloc
        if not parsed_url.scheme:
            with open(parsed_url.path, 'w') as local_out:
                local_out.write(amis_json)
        elif parsed_host.endswith('amazonaws.com') and 's3' in parsed_host:
            region = 'us-east-1'
            path = parsed_url.path[1:]
            if 's3-' in parsed_host:
                loc = parsed_host.find('s3-') + 3
                region = parsed_host[loc:parsed_host.find('.', loc)]
            if '.s3' in parsed_host:
                bucket = parsed_host[:parsed_host.find('.s3')]
            else:
                bucket, path = path.split('/', 1)
            from boto.s3 import connect_to_region
            conn = connect_to_region(region)
            key = conn.get_bucket(bucket, validate=False).new_key(path)
            headers = {'Content-Type': 'application/json'}
            key.set_contents_from_string(amis_json, headers=headers, policy='public-read')


class PublishAmi(Task):
    description = 'Make generated AMIs public'
    phase = phases.image_registration
    predecessors = [CopyAmiToRegions]

    @classmethod
    def run(cls, info):
        region_conns = info._ec2['region_conns']
        region_amis = info._ec2['region_amis']
        logger = logging.getLogger(__name__)
        import time
        for region, region_ami in region_amis.items():
            conn = region_conns[region]
            current_image = conn.get_image(region_ami)
            while current_image.state == 'pending':
                logger.debug('Waiting for %s in %s (currently: %s)', region_ami, region, current_image.state)
                time.sleep(5)
                current_image = conn.get_image(region_ami)
            conn.modify_image_attribute(region_ami, attribute='launchPermission', operation='add', groups='all')
View file
@ -2,20 +2,20 @@ import tasks
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)
    for i, file_entry in enumerate(data['plugins']['file_copy']['files']):
        srcfile = file_entry['src']
        if not os.path.isfile(srcfile):
            msg = 'The source file %s does not exist.' % srcfile
            error(msg, ['plugins', 'file_copy', 'files', i])


def resolve_tasks(taskset, manifest):
    if ('mkdirs' in manifest.plugins['file_copy']):
        taskset.add(tasks.MkdirCommand)
    if ('files' in manifest.plugins['file_copy']):
        taskset.add(tasks.FileCopyCommand)
View file
@ -6,46 +6,46 @@ import shutil
def modify_path(info, path, entry):
    from bootstrapvz.common.tools import log_check_call
    if 'permissions' in entry:
        # We wrap the permissions string in str() in case
        # the user specified a numeric bitmask
        chmod_command = ['chroot', info.root, 'chmod', str(entry['permissions']), path]
        log_check_call(chmod_command)
    if 'owner' in entry:
        chown_command = ['chroot', info.root, 'chown', entry['owner'], path]
        log_check_call(chown_command)
    if 'group' in entry:
        chgrp_command = ['chroot', info.root, 'chgrp', entry['group'], path]
        log_check_call(chgrp_command)


class MkdirCommand(Task):
    description = 'Creating directories requested by user'
    phase = phases.user_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        for dir_entry in info.manifest.plugins['file_copy']['mkdirs']:
            mkdir_command = ['chroot', info.root, 'mkdir', '-p', dir_entry['dir']]
            log_check_call(mkdir_command)
            modify_path(info, dir_entry['dir'], dir_entry)


class FileCopyCommand(Task):
    description = 'Copying user specified files into the image'
    phase = phases.user_modification
    predecessors = [MkdirCommand]

    @classmethod
    def run(cls, info):
        for file_entry in info.manifest.plugins['file_copy']['files']:
            # note that we don't use os.path.join because it can't
            # handle absolute paths, which 'dst' most likely is.
            final_destination = os.path.normpath("%s/%s" % (info.root, file_entry['dst']))
            shutil.copy(file_entry['src'], final_destination)
            modify_path(info, file_entry['dst'], file_entry)
View file
@ -3,14 +3,14 @@ import os.path
def validate_manifest(data, validator, error):
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    taskset.add(tasks.AddGoogleCloudRepoKey)
    if manifest.plugins['google_cloud_repo'].get('enable_keyring_repo', False):
        taskset.add(tasks.AddGoogleCloudRepoKeyringRepo)
        taskset.add(tasks.InstallGoogleCloudRepoKeyringPackage)
    if manifest.plugins['google_cloud_repo'].get('cleanup_bootstrap_key', False):
        taskset.add(tasks.CleanupBootstrapRepoKey)
View file
@ -7,43 +7,43 @@ import os
class AddGoogleCloudRepoKey(Task):
    description = 'Adding Google Cloud Repo key.'
    phase = phases.package_installation
    predecessors = [apt.InstallTrustedKeys]
    successors = [apt.WriteSources]

    @classmethod
    def run(cls, info):
        key_file = os.path.join(info.root, 'google.gpg.key')
        log_check_call(['wget', 'https://packages.cloud.google.com/apt/doc/apt-key.gpg', '-O', key_file])
        log_check_call(['chroot', info.root, 'apt-key', 'add', 'google.gpg.key'])
        os.remove(key_file)


class AddGoogleCloudRepoKeyringRepo(Task):
    description = 'Adding Google Cloud keyring repository.'
    phase = phases.preparation
    predecessors = [apt.AddManifestSources]

    @classmethod
    def run(cls, info):
        info.source_lists.add('google-cloud', 'deb http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-{system.release} main')


class InstallGoogleCloudRepoKeyringPackage(Task):
    description = 'Installing Google Cloud key package.'
    phase = phases.preparation
    successors = [packages.AddManifestPackages]

    @classmethod
    def run(cls, info):
        info.packages.add('google-cloud-packages-archive-keyring')


class CleanupBootstrapRepoKey(Task):
    description = 'Cleaning up bootstrap repo key.'
    phase = phases.system_cleaning

    @classmethod
    def run(cls, info):
        os.remove(os.path.join(info.root, 'etc', 'apt', 'trusted.gpg'))
View file
@ -6,52 +6,52 @@ from bootstrapvz.common.tasks import locale
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.join(os.path.dirname(__file__), 'manifest-schema.yml')
    validator(data, schema_path)
    if data['plugins']['minimize_size'].get('shrink', False) and data['volume']['backing'] != 'vmdk':
        error('Can only shrink vmdk images', ['plugins', 'minimize_size', 'shrink'])


def resolve_tasks(taskset, manifest):
    taskset.update([tasks.mounts.AddFolderMounts,
                    tasks.mounts.RemoveFolderMounts,
                    ])
    if manifest.plugins['minimize_size'].get('zerofree', False):
        taskset.add(tasks.shrink.AddRequiredCommands)
        taskset.add(tasks.shrink.Zerofree)
    if manifest.plugins['minimize_size'].get('shrink', False):
        taskset.add(tasks.shrink.AddRequiredCommands)
        taskset.add(tasks.shrink.ShrinkVolume)
    if 'apt' in manifest.plugins['minimize_size']:
        apt = manifest.plugins['minimize_size']['apt']
        if apt.get('autoclean', False):
            taskset.add(tasks.apt.AutomateAptClean)
        if 'languages' in apt:
            taskset.add(tasks.apt.FilterTranslationFiles)
        if apt.get('gzip_indexes', False):
            taskset.add(tasks.apt.AptGzipIndexes)
        if apt.get('autoremove_suggests', False):
            taskset.add(tasks.apt.AptAutoremoveSuggests)
    filter_tasks = [tasks.dpkg.CreateDpkgCfg,
                    tasks.dpkg.InitializeBootstrapFilterList,
                    tasks.dpkg.CreateBootstrapFilterScripts,
                    tasks.dpkg.DeleteBootstrapFilterScripts,
                    ]
    if 'dpkg' in manifest.plugins['minimize_size']:
        dpkg = manifest.plugins['minimize_size']['dpkg']
        if 'locales' in dpkg:
            taskset.update(filter_tasks)
            taskset.add(tasks.dpkg.FilterLocales)
            # If no locales are selected, we don't need the locale package
            if len(dpkg['locales']) == 0:
                taskset.discard(locale.LocaleBootstrapPackage)
                taskset.discard(locale.GenerateLocale)
        if dpkg.get('exclude_docs', False):
            taskset.update(filter_tasks)
            taskset.add(tasks.dpkg.ExcludeDocs)


def resolve_rollback_tasks(taskset, manifest, completed, counter_task):
    counter_task(taskset, tasks.mounts.AddFolderMounts, tasks.mounts.RemoveFolderMounts)
    counter_task(taskset, tasks.dpkg.CreateBootstrapFilterScripts, tasks.dpkg.DeleteBootstrapFilterScripts)
View file
@@ -8,55 +8,55 @@ from . import assets

class AutomateAptClean(Task):
    description = 'Configuring apt to always clean everything out when it\'s done'
    phase = phases.package_installation
    successors = [apt.AptUpdate]
    # Snatched from:
    # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap

    @classmethod
    def run(cls, info):
        shutil.copy(os.path.join(assets, 'apt-clean'),
                    os.path.join(info.root, 'etc/apt/apt.conf.d/90clean'))


class FilterTranslationFiles(Task):
    description = 'Configuring apt to only download and use specific translation files'
    phase = phases.package_installation
    successors = [apt.AptUpdate]
    # Snatched from:
    # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap

    @classmethod
    def run(cls, info):
        langs = info.manifest.plugins['minimize_size']['apt']['languages']
        config = '; '.join(map(lambda l: '"' + l + '"', langs))
        config_path = os.path.join(info.root, 'etc/apt/apt.conf.d/20languages')
        shutil.copy(os.path.join(assets, 'apt-languages'), config_path)
        sed_i(config_path, r'ACQUIRE_LANGUAGES_FILTER', config)


class AptGzipIndexes(Task):
    description = 'Configuring apt to always gzip lists files'
    phase = phases.package_installation
    successors = [apt.AptUpdate]
    # Snatched from:
    # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap

    @classmethod
    def run(cls, info):
        shutil.copy(os.path.join(assets, 'apt-gzip-indexes'),
                    os.path.join(info.root, 'etc/apt/apt.conf.d/20gzip-indexes'))


class AptAutoremoveSuggests(Task):
    description = 'Configuring apt to remove suggested packages when autoremoving'
    phase = phases.package_installation
    successors = [apt.AptUpdate]
    # Snatched from:
    # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap

    @classmethod
    def run(cls, info):
        shutil.copy(os.path.join(assets, 'apt-autoremove-suggests'),
                    os.path.join(info.root, 'etc/apt/apt.conf.d/20autoremove-suggests'))
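
Illustration only (not from the diff above): a minimal sketch of the value FilterTranslationFiles substitutes for ACQUIRE_LANGUAGES_FILTER, assuming a hypothetical manifest that lists the languages 'en' and 'de'.

# Hypothetical manifest setting: plugins.minimize_size.apt.languages
langs = ['en', 'de']
config = '; '.join(map(lambda l: '"' + l + '"', langs))
print config  # prints: "en"; "de"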


@@ -9,140 +9,140 @@ from . import assets

class CreateDpkgCfg(Task):
    description = 'Creating /etc/dpkg/dpkg.cfg.d before bootstrapping'
    phase = phases.os_installation
    successors = [bootstrap.Bootstrap]

    @classmethod
    def run(cls, info):
        os.makedirs(os.path.join(info.root, 'etc/dpkg/dpkg.cfg.d'))


class InitializeBootstrapFilterList(Task):
    description = 'Initializing the bootstrapping filter list'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info._minimize_size['bootstrap_filter'] = {'exclude': [], 'include': []}


class CreateBootstrapFilterScripts(Task):
    description = 'Creating the bootstrapping locales filter script'
    phase = phases.os_installation
    successors = [bootstrap.Bootstrap]
    # Inspired by:
    # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap

    @classmethod
    def run(cls, info):
        if info.bootstrap_script is not None:
            from bootstrapvz.common.exceptions import TaskError
            raise TaskError('info.bootstrap_script seems to already be set '
                            'and is conflicting with this task')
        bootstrap_script = os.path.join(info.workspace, 'bootstrap_script.sh')
        filter_script = os.path.join(info.workspace, 'bootstrap_files_filter.sh')
        excludes_file = os.path.join(info.workspace, 'debootstrap-excludes')

        shutil.copy(os.path.join(assets, 'bootstrap-script.sh'), bootstrap_script)
        shutil.copy(os.path.join(assets, 'bootstrap-files-filter.sh'), filter_script)

        sed_i(bootstrap_script, r'DEBOOTSTRAP_EXCLUDES_PATH', excludes_file)
        sed_i(bootstrap_script, r'BOOTSTRAP_FILES_FILTER_PATH', filter_script)

        # We exclude with patterns but include with fixed strings
        # The pattern matching when excluding is needed in order to filter
        # everything below e.g. /usr/share/locale but not the folder itself
        filter_lists = info._minimize_size['bootstrap_filter']
        exclude_list = '\|'.join(map(lambda p: '.' + p + '.\+', filter_lists['exclude']))
        include_list = '\n'.join(map(lambda p: '.' + p, filter_lists['include']))
        sed_i(filter_script, r'EXCLUDE_PATTERN', exclude_list)
        sed_i(filter_script, r'INCLUDE_PATHS', include_list)
        os.chmod(filter_script, 0755)

        info.bootstrap_script = bootstrap_script
        info._minimize_size['filter_script'] = filter_script


class FilterLocales(Task):
    description = 'Configuring dpkg and debootstrap to only include specific locales/manpages when installing packages'
    phase = phases.os_installation
    predecessors = [CreateDpkgCfg]
    successors = [CreateBootstrapFilterScripts]
    # Snatched from:
    # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap
    # and
    # https://raphaelhertzog.com/2010/11/15/save-disk-space-by-excluding-useless-files-with-dpkg/

    @classmethod
    def run(cls, info):
        # Filter when debootstrapping
        info._minimize_size['bootstrap_filter']['exclude'].extend([
            '/usr/share/locale/',
            '/usr/share/man/',
        ])
        locales = info.manifest.plugins['minimize_size']['dpkg']['locales']
        info._minimize_size['bootstrap_filter']['include'].extend([
            '/usr/share/locale/locale.alias',
            '/usr/share/man/man1',
            '/usr/share/man/man2',
            '/usr/share/man/man3',
            '/usr/share/man/man4',
            '/usr/share/man/man5',
            '/usr/share/man/man6',
            '/usr/share/man/man7',
            '/usr/share/man/man8',
            '/usr/share/man/man9',
        ] +
            map(lambda l: '/usr/share/locale/' + l + '/', locales) +
            map(lambda l: '/usr/share/man/' + l + '/', locales)
        )

        # Filter when installing things with dpkg
        locale_lines = ['path-exclude=/usr/share/locale/*',
                        'path-include=/usr/share/locale/locale.alias']
        manpages_lines = ['path-exclude=/usr/share/man/*',
                          'path-include=/usr/share/man/man[1-9]']
        locales = info.manifest.plugins['minimize_size']['dpkg']['locales']
        locale_lines.extend(map(lambda l: 'path-include=/usr/share/locale/' + l + '/*', locales))
        manpages_lines.extend(map(lambda l: 'path-include=/usr/share/man/' + l + '/*', locales))
        locales_path = os.path.join(info.root, 'etc/dpkg/dpkg.cfg.d/10filter-locales')
        manpages_path = os.path.join(info.root, 'etc/dpkg/dpkg.cfg.d/10filter-manpages')
        with open(locales_path, 'w') as locale_filter:
            locale_filter.write('\n'.join(locale_lines) + '\n')
        with open(manpages_path, 'w') as manpages_filter:
            manpages_filter.write('\n'.join(manpages_lines) + '\n')


class ExcludeDocs(Task):
    description = 'Configuring dpkg and debootstrap to not install additional documentation for packages'
    phase = phases.os_installation
    predecessors = [CreateDpkgCfg]
    successors = [CreateBootstrapFilterScripts]

    @classmethod
    def run(cls, info):
        # "Packages must not require the existence of any files in /usr/share/doc/ in order to function [...]."
        # Source: https://www.debian.org/doc/debian-policy/ch-docs.html
        # So doing this should cause no problems.
        info._minimize_size['bootstrap_filter']['exclude'].append('/usr/share/doc/')
        exclude_docs_path = os.path.join(info.root, 'etc/dpkg/dpkg.cfg.d/10exclude-docs')
        with open(exclude_docs_path, 'w') as exclude_docs:
            exclude_docs.write('path-exclude=/usr/share/doc/*\n')


class DeleteBootstrapFilterScripts(Task):
    description = 'Deleting the bootstrapping locales filter script'
    phase = phases.cleaning
    successors = [workspace.DeleteWorkspace]

    @classmethod
    def run(cls, info):
        os.remove(info._minimize_size['filter_script'])
        del info._minimize_size['filter_script']
        os.remove(info.bootstrap_script)
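
Illustration only (not from the diff above): a minimal sketch of what FilterLocales ends up writing to etc/dpkg/dpkg.cfg.d/10filter-locales, assuming a hypothetical manifest that sets the dpkg locales to ['en'].

# Hypothetical manifest setting: plugins.minimize_size.dpkg.locales
locales = ['en']
locale_lines = ['path-exclude=/usr/share/locale/*',
                'path-include=/usr/share/locale/locale.alias']
locale_lines.extend(map(lambda l: 'path-include=/usr/share/locale/' + l + '/*', locales))
print '\n'.join(locale_lines)
# path-exclude=/usr/share/locale/*
# path-include=/usr/share/locale/locale.alias
# path-include=/usr/share/locale/en/*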


@@ -8,36 +8,36 @@ folders = ['tmp', 'var/lib/apt/lists']

class AddFolderMounts(Task):
    description = 'Mounting folders for writing temporary and cache data'
    phase = phases.os_installation
    predecessors = [bootstrap.Bootstrap]

    @classmethod
    def run(cls, info):
        info._minimize_size['foldermounts'] = os.path.join(info.workspace, 'minimize_size')
        os.mkdir(info._minimize_size['foldermounts'])
        for folder in folders:
            temp_path = os.path.join(info._minimize_size['foldermounts'], folder.replace('/', '_'))
            os.mkdir(temp_path)

            full_path = os.path.join(info.root, folder)
            info.volume.partition_map.root.add_mount(temp_path, full_path, ['--bind'])


class RemoveFolderMounts(Task):
    description = 'Removing folder mounts for temporary and cache data'
    phase = phases.system_cleaning
    successors = [apt.AptClean]

    @classmethod
    def run(cls, info):
        import shutil
        for folder in folders:
            temp_path = os.path.join(info._minimize_size['foldermounts'], folder.replace('/', '_'))
            full_path = os.path.join(info.root, folder)

            info.volume.partition_map.root.remove_mount(full_path)
            shutil.rmtree(temp_path)

        os.rmdir(info._minimize_size['foldermounts'])
        del info._minimize_size['foldermounts']


@@ -9,37 +9,37 @@ import os

class AddRequiredCommands(Task):
    description = 'Adding commands required for reducing volume size'
    phase = phases.preparation
    successors = [host.CheckExternalCommands]

    @classmethod
    def run(cls, info):
        if info.manifest.plugins['minimize_size'].get('zerofree', False):
            info.host_dependencies['zerofree'] = 'zerofree'
        if info.manifest.plugins['minimize_size'].get('shrink', False):
            link = 'https://my.vmware.com/web/vmware/info/slug/desktop_end_user_computing/vmware_workstation/10_0'
            info.host_dependencies['vmware-vdiskmanager'] = link


class Zerofree(Task):
    description = 'Zeroing unused blocks on the root partition'
    phase = phases.volume_unmounting
    predecessors = [filesystem.UnmountRoot]
    successors = [partitioning.UnmapPartitions, volume.Detach]

    @classmethod
    def run(cls, info):
        log_check_call(['zerofree', info.volume.partition_map.root.device_path])


class ShrinkVolume(Task):
    description = 'Shrinking the volume'
    phase = phases.volume_unmounting
    predecessors = [volume.Detach]

    @classmethod
    def run(cls, info):
        perm = os.stat(info.volume.image_path).st_mode & 0777
        log_check_call(['/usr/bin/vmware-vdiskmanager', '-k', info.volume.image_path])
        os.chmod(info.volume.image_path, perm)
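
Illustration only (not from the diff above): a sketch of the plugin settings AddRequiredCommands reacts to, using a hypothetical parsed manifest dictionary.

# Hypothetical parsed manifest with both options enabled
plugins = {'minimize_size': {'zerofree': True, 'shrink': True}}
if plugins['minimize_size'].get('zerofree', False):
    print 'host needs zerofree'
if plugins['minimize_size'].get('shrink', False):
    print 'host needs vmware-vdiskmanager'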


@@ -1,11 +1,11 @@
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    import tasks
    taskset.add(tasks.AddNtpPackage)
    if manifest.plugins['ntp'].get('servers', False):
        taskset.add(tasks.SetNtpServers)


@@ -3,30 +3,30 @@ from bootstrapvz.common import phases

class AddNtpPackage(Task):
    description = 'Adding NTP Package'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('ntp')


class SetNtpServers(Task):
    description = 'Setting NTP servers'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        import fileinput
        import os
        import re
        ntp_path = os.path.join(info.root, 'etc/ntp.conf')
        servers = list(info.manifest.plugins['ntp']['servers'])
        debian_ntp_server = re.compile('.*[0-9]\.debian\.pool\.ntp\.org.*')
        for line in fileinput.input(files=ntp_path, inplace=True):
            # Will write all the specified servers on the first match, then suppress all other default servers
            if re.match(debian_ntp_server, line):
                while servers:
                    print 'server {server_address} iburst'.format(server_address=servers.pop(0))
            else:
                print line,
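
Illustration only (not from the diff above): a sketch of which /etc/ntp.conf lines the regular expression in SetNtpServers matches; the example hostnames are hypothetical.

import re
debian_ntp_server = re.compile('.*[0-9]\.debian\.pool\.ntp\.org.*')
print bool(re.match(debian_ntp_server, 'server 0.debian.pool.ntp.org iburst'))  # True: replaced by the configured servers
print bool(re.match(debian_ntp_server, 'server ntp.example.com iburst'))        # False: line is kept as-is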


@@ -1,9 +1,9 @@
def resolve_tasks(taskset, manifest):
    import tasks
    from bootstrapvz.common.tasks import apt
    from bootstrapvz.common.releases import wheezy

    if manifest.release == wheezy:
        taskset.add(apt.AddBackports)

    taskset.update([tasks.AddONEContextPackage])


@@ -4,14 +4,14 @@ from bootstrapvz.common import phases

class AddONEContextPackage(Task):
    description = 'Adding the OpenNebula context package'
    phase = phases.preparation
    predecessors = [apt.AddBackports]

    @classmethod
    def run(cls, info):
        target = None
        from bootstrapvz.common.releases import wheezy
        if info.manifest.release == wheezy:
            target = '{system.release}-backports'
        info.packages.add('opennebula-context', target)


@@ -2,11 +2,11 @@ import tasks

def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    taskset.add(tasks.AddPipPackage)
    taskset.add(tasks.PipInstallCommand)


@@ -3,23 +3,23 @@ from bootstrapvz.common import phases

class AddPipPackage(Task):
    description = 'Adding `pip\' and Co. to the image packages'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        for package_name in ('python-pip', 'build-essential', 'python-dev'):
            info.packages.add(package_name)


class PipInstallCommand(Task):
    description = 'Install python packages from pypi with pip'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        packages = info.manifest.plugins['pip_install']['packages']
        pip_install_command = ['chroot', info.root, 'pip', 'install']
        pip_install_command.extend(packages)
        log_check_call(pip_install_command)
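
Illustration only (not from the diff above): the command PipInstallCommand assembles, assuming a hypothetical chroot at /target and the packages 'boto' and 'requests' in the manifest.

# Hypothetical values for info.root and the manifest package list
pip_install_command = ['chroot', '/target', 'pip', 'install']
pip_install_command.extend(['boto', 'requests'])
print pip_install_command
# ['chroot', '/target', 'pip', 'install', 'boto', 'requests']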


@@ -14,44 +14,44 @@ from bootstrapvz.common.tasks import partitioning

def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    settings = manifest.plugins['prebootstrapped']
    skip_tasks = [ebs.Create,
                  loopback.Create,
                  filesystem.Format,
                  partitioning.PartitionVolume,
                  filesystem.TuneVolumeFS,
                  filesystem.AddXFSProgs,
                  filesystem.CreateBootMountDir,
                  apt.DisableDaemonAutostart,
                  locale.GenerateLocale,
                  bootstrap.MakeTarball,
                  bootstrap.Bootstrap,
                  guest_additions.InstallGuestAdditions,
                  ]
    if manifest.volume['backing'] == 'ebs':
        if settings.get('snapshot', None) is not None:
            taskset.add(CreateFromSnapshot)
            [taskset.discard(task) for task in skip_tasks]
        else:
            taskset.add(Snapshot)
    else:
        if settings.get('image', None) is not None:
            taskset.add(CreateFromImage)
            [taskset.discard(task) for task in skip_tasks]
        else:
            taskset.add(CopyImage)


def resolve_rollback_tasks(taskset, manifest, completed, counter_task):
    if manifest.volume['backing'] == 'ebs':
        counter_task(taskset, CreateFromSnapshot, volume.Delete)
    else:
        counter_task(taskset, CreateFromImage, volume.Delete)


@@ -13,83 +13,83 @@ log = logging.getLogger(__name__)

class Snapshot(Task):
    description = 'Creating a snapshot of the bootstrapped volume'
    phase = phases.package_installation
    predecessors = [packages.InstallPackages, guest_additions.InstallGuestAdditions]

    @classmethod
    def run(cls, info):
        snapshot = None
        with unmounted(info.volume):
            snapshot = info.volume.snapshot()
        msg = 'A snapshot of the bootstrapped volume was created. ID: ' + snapshot.id
        log.info(msg)


class CreateFromSnapshot(Task):
    description = 'Creating EBS volume from a snapshot'
    phase = phases.volume_creation
    successors = [ebs.Attach]

    @classmethod
    def run(cls, info):
        snapshot = info.manifest.plugins['prebootstrapped']['snapshot']
        ebs_volume = info._ec2['connection'].create_volume(info.volume.size.bytes.get_qty_in('GiB'),
                                                           info._ec2['host']['availabilityZone'],
                                                           snapshot=snapshot)
        while ebs_volume.volume_state() != 'available':
            time.sleep(5)
            ebs_volume.update()

        info.volume.volume = ebs_volume
        set_fs_states(info.volume)


class CopyImage(Task):
    description = 'Creating a snapshot of the bootstrapped volume'
    phase = phases.package_installation
    predecessors = [packages.InstallPackages, guest_additions.InstallGuestAdditions]

    @classmethod
    def run(cls, info):
        loopback_backup_name = 'volume-{id}.{ext}.backup'.format(id=info.run_id, ext=info.volume.extension)
        destination = os.path.join(info.manifest.bootstrapper['workspace'], loopback_backup_name)
        with unmounted(info.volume):
            copyfile(info.volume.image_path, destination)
        msg = 'A copy of the bootstrapped volume was created. Path: ' + destination
        log.info(msg)


class CreateFromImage(Task):
    description = 'Creating loopback image from a copy'
    phase = phases.volume_creation
    successors = [volume.Attach]

    @classmethod
    def run(cls, info):
        info.volume.image_path = os.path.join(info.workspace, 'volume.' + info.volume.extension)
        loopback_backup_path = info.manifest.plugins['prebootstrapped']['image']
        copyfile(loopback_backup_path, info.volume.image_path)

        set_fs_states(info.volume)


def set_fs_states(volume):
    volume.fsm.current = 'detached'

    p_map = volume.partition_map
    from bootstrapvz.base.fs.partitionmaps.none import NoPartitions
    if not isinstance(p_map, NoPartitions):
        p_map.fsm.current = 'unmapped'

    from bootstrapvz.base.fs.partitions.unformatted import UnformattedPartition
    from bootstrapvz.base.fs.partitions.single import SinglePartition
    for partition in p_map.partitions:
        if isinstance(partition, UnformattedPartition):
            partition.fsm.current = 'unmapped'
            continue
        if isinstance(partition, SinglePartition):
            partition.fsm.current = 'formatted'
            continue
        partition.fsm.current = 'unmapped_fmt'

Some files were not shown because too many files have changed in this diff.