Convert indentation from tabs to spaces (4)

Up until now I didn't see the point of using spaces for indentation.
However, the previous commit (a18bec3) was quite eye-opening.
Given that Python is an indentation-aware language, the number of
mistakes that went unnoticed because tabs and spaces were mixed
(tabs for indentation and spaces for alignment) was unacceptable.

E101 and W191 have been re-enabled in the tox flake8 checker and
the documentation has been updated accordingly.
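To illustrate the failure mode (a hypothetical snippet, not code from this repository): when a block mixes a tab with spaces for indentation, two lines can look aligned in an editor yet be ambiguous to the interpreter. Python 3 rejects such code outright, while Python 2 silently accepts it unless run with `-tt`, which is exactly how these mistakes go unnoticed:

```python
# Hypothetical example of the tabs-vs-spaces ambiguity this commit removes.
# Line 2 of the loop body is indented with a tab, line 3 with eight spaces;
# at a tab width of 8 they *look* identical, but the tokenizer cannot decide
# whether they sit at the same indentation level.
source = (
    "for i in range(3):\n"
    "\tx = i\n"
    "        y = x * 2\n"
)

try:
    compile(source, '<example>', 'exec')
    print('accepted')
except TabError as exc:
    print('rejected:', exc)
```

Under Python 3 this reports an "inconsistent use of tabs and spaces in indentation" rejection; converting everything to spaces makes the ambiguity impossible.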

The following files have been left as-is:
* bootstrapvz/common/assets/extlinux/extlinux.conf
* bootstrapvz/common/assets/init.d/expand-root
* bootstrapvz/common/assets/init.d/generate-ssh-hostkeys
* bootstrapvz/common/assets/init.d/squeeze/generate-ssh-hostkeys
* bootstrapvz/plugins/docker_daemon/assets/init.d/docker
* bootstrapvz/providers/ec2/assets/bin/growpart
* bootstrapvz/providers/ec2/assets/grub.d/40_custom
* bootstrapvz/providers/ec2/assets/init.d/ec2-get-credentials
* bootstrapvz/providers/ec2/assets/init.d/ec2-run-user-data
* docs/_static/taskoverview.coffee
* docs/_static/taskoverview.less
* tests/unit/subprocess.sh
Anders Ingemann 2016-06-04 11:35:59 +02:00
parent 2d6a026160
commit f62c8ade99
186 changed files with 7284 additions and 7286 deletions


@@ -5,130 +5,130 @@ Changelog
2016-06-02
----------
Peter Wagner:
* Added ec2_publish plugin
2016-06-02
----------
Zach Marano:
* Fix expand-root script to work with newer version of growpart (in jessie-backports and beyond).
* Overhaul Google Compute Engine image build.
* Add support for Google Cloud repositories.
* Google Cloud SDK install uses a deb package from a Google Cloud repository.
* Google Compute Engine guest software is installed from a Google Cloud repository.
* Google Compute Engine guest software for Debian 8 is updated to new refactor.
* Google Compute Engine wheezy and wheezy-backports manifests are deprecated.
2016-03-03
----------
Anders Ingemann:
* Rename integration tests to system tests
2016-02-23
----------
Nicolas Braud-Santoni:
* #282, #290: Added 'debconf' plugin
* #290: Relaxed requirements on plugins manifests
2016-02-10
----------
Manoj Srivastava:
* #252: Added support for password and static pubkey auth
2016-02-06
----------
Tiago Ilieve:
* Added Oracle Compute Cloud provider
* #280: Declared Squeeze as unsupported
2016-01-14
----------
Jesse Szwedko:
* #269: EC2: Added growpart script extension
2016-01-10
----------
Clark Laughlin:
* Enabled support for KVM on arm64
2015-12-19
----------
Tim Sattarov:
* #263: Ignore loopback interface in udev rules (reduces startup of networking by a factor of 10)
2015-12-13
----------
Anders Ingemann:
* Docker provider implemented (including integration testing harness & tests)
* minimize_size: Added various size reduction options for dpkg and apt
* Removed image section in manifest.
Provider specific options have been moved to the provider section.
The image name is now specified on the top level of the manifest with "name"
* Provider docs have been greatly improved. All now list their special options.
* All manifest option documentation is now accompanied by an example.
* Added documentation for the integration test providers
2015-11-13
----------
Marcin Kulisz:
* Exclude docs from binary package
2015-10-20
----------
Max Illfelder:
* Remove support for the GCE Debian mirror
2015-10-14
----------
Anders Ingemann:
* Bootstrap azure images directly to VHD
2015-09-28
----------
Rick Wright:
* Change GRUB_HIDDEN_TIMEOUT to 0 from true and set GRUB_HIDDEN_TIMEOUT_QUIET to true.
2015-09-24
----------
Rick Wright:
* Fix a problem with Debian 8 on GCE with >2TB disks
2015-09-04
----------
Emmanuel Kasper:
* Set Virtualbox memory to 512 MB
2015-08-07
----------
Tiago Ilieve:
* Change default Debian mirror
2015-08-06
----------
Stephen A. Zarkos:
* Azure: Change default shell in /etc/default/useradd for Azure images
* Azure: Add boot parameters to Azure config to ease local debugging
* Azure: Add apt import for backports
* Azure: Comment GRUB_HIDDEN_TIMEOUT so we can set GRUB_TIMEOUT
* Azure: Wheezy images use wheezy-backports kernel by default
* Azure: Change Wheezy image to use single partition
* Azure: Update WALinuxAgent to use 2.0.14
* Azure: Make sure we can override grub.ConfigureGrub for Azure images
* Azure: Add console=tty0 to see kernel/boot messages on local console
* Azure: Set serial port speed to 115200
* Azure: Fix error with applying azure/assets/udev.diff
2015-07-30
----------
James Bromberger:
* AWS: Support multiple ENI
* AWS: PVGRUB AKIs for Frankfurt region
2015-06-29
----------
Alex Adriaanse:
* Fix DKMS kernel version error
* Add support for Btrfs
* Add EC2 Jessie HVM manifest
2015-05-08
----------
@@ -138,143 +138,143 @@ Alexandre Derumier:
2015-05-02
----------
Anders Ingemann:
* Fix #32: Add image_commands example
* Fix #99: rename image_commands to commands
* Fix #139: Vagrant / Virtualbox provider should set ostype when 32 bits selected
* Fix #204: Create a new phase where user modification tasks can run
2015-04-29
----------
Anders Ingemann:
* Fix #104: Don't verify default target when adding packages
* Fix #217: Implement get_version() function in common.tools
2015-04-28
----------
Jonh Wendell:
* root_password: Enable SSH root login
2015-04-27
----------
John Kristensen:
* Add authentication support to the apt proxy plugin
2015-04-25
----------
Anders Ingemann (work started 2014-08-31, merged on 2015-04-25):
* Introduce `remote bootstrapping <bootstrapvz/remote>`__
* Introduce `integration testing <tests/integration>`__ (for VirtualBox and EC2)
* Merge the end-user documentation into the sphinx docs
(plugin & provider docs are now located in their respective folders as READMEs)
* Include READMEs in sphinx docs and transform their links
* Docs for integration testing
* Document the remote bootstrapping procedure
* Add documentation about the documentation
* Add list of supported builds to the docs
* Add html output to integration tests
* Implement PR #201 by @jszwedko (bump required euca2ools version)
* grub now works on jessie
* extlinux is now running on jessie
* Issue warning when specifying pre/successors across phases (but still error out if it's a conflict)
* Add salt dependencies in the right phase
* extlinux now works with GPT on HVM instances
* Take @ssgelm's advice in #155 and copy the mount table -- df warnings no more
* Generally deny installing grub on squeeze (too much of a hassle to get working, PRs welcome)
* Add 1 sector gap between partitions on GPT
* Add new task: DeterminKernelVersion, this can potentially fix a lot of small problems
* Disable getty processes on jessie through logind config
* Partition volumes by sectors instead of bytes
This allows for finer grained control over the partition sizes and gaps
Add new Sectors unit, enhance Bytes unit, add unit tests for both
* Don't require qemu for raw volumes, use `truncate` instead
* Fix #179: Disabling getty processes task fails half the time
* Split grub and extlinux installs into separate modules
* Fix extlinux config for squeeze
* Fix #136: Make extlinux output boot messages to the serial console
* Extend sed_i to raise Exceptions when the expected amount of replacements is not met
Jonas Bergler:
* Fixes #145: Fix installation of vbox guest additions.
Tiago Ilieve:
* Fixes #142: msdos partition type incorrect for swap partition (Linux)
2015-04-23
----------
Tiago Ilieve:
* Fixes #212: Sparse file is created on the current directory
2014-11-23
----------
Noah Fontes:
* Add support for enhanced networking on EC2 images
2014-07-12
----------
Tiago Ilieve:
* Fixes #96: AddBackports is now a common task
2014-07-09
----------
Anders Ingemann:
* Allow passing data into the manifest
* Refactor logging setup to be more modular
* Convert every JSON file to YAML
* Convert "provider" into provider specific section
2014-07-02
----------
Vladimir Vitkov:
* Improve grub options to work better with virtual machines
2014-06-30
----------
Tomasz Rybak:
* Return information about created image
2014-06-22
----------
Victor Marmol:
* Enable the memory cgroup for the Docker plugin
2014-06-19
----------
Tiago Ilieve:
* Fixes #94: allow stable/oldstable as release name on manifest
Vladimir Vitkov:
* Improve ami listing performance
2014-06-07
----------
Tiago Ilieve:
* Download `gsutil` tarball to workspace instead of working directory
* Fixes #97: remove raw disk image created by GCE after build
2014-06-06
----------
Ilya Margolin:
* pip_install plugin
2014-05-23
----------
Tiago Ilieve:
* Fixes #95: check if the specified APT proxy server can be reached
2014-05-04
----------
Dhananjay Balan:
* Salt minion installation & configuration plugin
* Expose debootstrap --include-packages and --exclude-packages options to manifest
2014-05-03
----------
Anders Ingemann:
* Require hostname setting for vagrant plugin
* Fixes #14: S3 images can now be bootstrapped outside EC2.
* Added enable_agent option to puppet plugin
2014-05-02
----------
Tomasz Rybak:
* Added Google Compute Engine Provider


@@ -143,10 +143,8 @@ guidelines. There are however a few exceptions:
* Max line length is 110 chars, not 80.
* Multiple assignments may be aligned with spaces so that the = match
  vertically.
* Ignore ``E221 & E241``: Alignment of assignments
* Ignore ``E501``: The max line length is not 80 characters
The codebase can be checked for any violations quite easily, since those rules are already specified in the
`tox <http://tox.readthedocs.org/>`__ configuration file.
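As a quick illustration of such a check (a sketch, not the project's actual tooling; the real gate is the flake8 run under tox), ambiguous indentation can be detected with nothing but the Python 3 parser:

```python
import os


def find_tab_errors(root):
    """Return paths of .py files under root whose indentation mixes tabs
    and spaces ambiguously (rejected by the Python 3 parser with TabError)."""
    offenders = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith('.py'):
                continue
            path = os.path.join(dirpath, name)
            with open(path, 'rb') as f:
                source = f.read()
            try:
                compile(source, path, 'exec')
            except TabError:
                offenders.append(path)
            except SyntaxError:
                pass  # other syntax problems are flake8's job, not ours
    return offenders
```

Calling `find_tab_errors('bootstrapvz')` would list any files the tokenizer refuses; flake8's E101/W191 checks additionally flag files where tabs are used consistently but against the configured style.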


@@ -1,5 +1,5 @@
#!/usr/bin/env python
if __name__ == '__main__':
    from bootstrapvz.base.main import main
    main()


@@ -1,5 +1,5 @@
#!/usr/bin/env python
if __name__ == '__main__':
    from bootstrapvz.remote.main import main
    main()


@@ -1,5 +1,5 @@
#!/usr/bin/env python
if __name__ == '__main__':
    from bootstrapvz.remote.server import main
    main()


@@ -13,15 +13,15 @@ via attributes. Here is an example:

.. code-block:: python

    class MapPartitions(Task):
        description = 'Mapping volume partitions'
        phase = phases.volume_preparation
        predecessors = [PartitionVolume]
        successors = [filesystem.Format]

        @classmethod
        def run(cls, info):
            info.volume.partition_map.map(info.volume)

In this case the attributes define that the task at hand should run
after the ``PartitionVolume`` task — i.e. after volume has been


@@ -1,160 +1,160 @@
class BootstrapInformation(object):
    """The BootstrapInformation class holds all information about the bootstrapping process.
    The nature of the attributes of this class are rather diverse.
    Tasks may set their own attributes on this class for later retrieval by another task.
    Information that becomes invalid (e.g. a path to a file that has been deleted) must be removed.
    """
    def __init__(self, manifest=None, debug=False):
        """Instantiates a new bootstrap info object.

        :param Manifest manifest: The manifest
        :param bool debug: Whether debugging is turned on
        """
        # Set the manifest attribute.
        self.manifest = manifest
        self.debug = debug

        # Create a run_id. This id may be used to uniquely identify the current bootstrapping process
        import random
        self.run_id = '{id:08x}'.format(id=random.randrange(16 ** 8))

        # Define the path to our workspace
        import os.path
        self.workspace = os.path.join(manifest.bootstrapper['workspace'], self.run_id)

        # Load all the volume information
        from fs import load_volume
        self.volume = load_volume(self.manifest.volume, manifest.system['bootloader'])

        # The default apt mirror
        self.apt_mirror = self.manifest.packages.get('mirror', 'http://httpredir.debian.org/debian/')

        # Create the manifest_vars dictionary
        self.manifest_vars = self.__create_manifest_vars(self.manifest, {'apt_mirror': self.apt_mirror})

        # Keep a list of apt sources,
        # so that tasks may add to that list without having to fiddle with apt source list files.
        from pkg.sourceslist import SourceLists
        self.source_lists = SourceLists(self.manifest_vars)
        # Keep a list of apt preferences
        from pkg.preferenceslist import PreferenceLists
        self.preference_lists = PreferenceLists(self.manifest_vars)
        # Keep a list of packages that should be installed, tasks can add and remove things from this list
        from pkg.packagelist import PackageList
        self.packages = PackageList(self.manifest_vars, self.source_lists)

        # These sets should rarely be used and specify which packages the debootstrap invocation
        # should be called with.
        self.include_packages = set()
        self.exclude_packages = set()

        # Dictionary to specify which commands are required on the host.
        # The keys are commands, while the values are either package names or urls
        # that hint at how a command may be made available.
        self.host_dependencies = {}

        # Path to optional bootstrapping script for modifying the behaviour of debootstrap
        # (will be used instead of e.g. /usr/share/debootstrap/scripts/jessie)
        self.bootstrap_script = None

        # Lists of startup scripts that should be installed and disabled
        self.initd = {'install': {}, 'disable': []}

        # Add a dictionary that can be accessed via info._pluginname for the provider and every plugin
        # Information specific to the module can be added to that 'namespace', this avoids clutter.
        providername = manifest.modules['provider'].__name__.split('.')[-1]
        setattr(self, '_' + providername, {})
        for plugin in manifest.modules['plugins']:
            pluginname = plugin.__name__.split('.')[-1]
            setattr(self, '_' + pluginname, {})

    def __create_manifest_vars(self, manifest, additional_vars={}):
        """Creates the manifest variables dictionary, based on the manifest contents
        and additional data.

        :param Manifest manifest: The Manifest
        :param dict additional_vars: Additional values (they will take precedence and overwrite anything else)
        :return: The manifest_vars dictionary
        :rtype: dict
        """
        def set_manifest_vars(obj, data):
            """Runs through the manifest and creates DictClasses for every key

            :param dict obj: dictionary to set the values on
            :param dict data: dictionary of values to set on the obj
            """
            for key, value in data.iteritems():
                if isinstance(value, dict):
                    obj[key] = DictClass()
                    set_manifest_vars(obj[key], value)
                    continue
                # Lists are not supported
                if not isinstance(value, list):
                    obj[key] = value

        # manifest_vars is a dictionary of all the manifest values,
        # with it users can cross-reference values in the manifest, so that they do not need to be written twice
        manifest_vars = {}
        set_manifest_vars(manifest_vars, manifest.data)

        # Populate the manifest_vars with datetime information
        # and map the datetime variables directly to the dictionary
        from datetime import datetime
        now = datetime.now()
        time_vars = ['%a', '%A', '%b', '%B', '%c', '%d', '%f', '%H',
                     '%I', '%j', '%m', '%M', '%p', '%S', '%U', '%w',
                     '%W', '%x', '%X', '%y', '%Y', '%z', '%Z']
        for key in time_vars:
            manifest_vars[key] = now.strftime(key)

        # Add any additional manifest variables
        # They are added last so that they may override previous variables
        set_manifest_vars(manifest_vars, additional_vars)
        return manifest_vars

    def __getstate__(self):
        from bootstrapvz.remote import supported_classes

        def can_serialize(obj):
            if hasattr(obj, '__class__') and hasattr(obj, '__module__'):
                class_name = obj.__module__ + '.' + obj.__class__.__name__
                return class_name in supported_classes or isinstance(obj, (BaseException, Exception))
            return True

        def filter_state(state):
            if isinstance(state, dict):
                return {key: filter_state(val) for key, val in state.items() if can_serialize(val)}
            if isinstance(state, (set, tuple, list, frozenset)):
                return type(state)(filter_state(val) for val in state if can_serialize(val))
            return state

        state = filter_state(self.__dict__)
        state['__class__'] = self.__module__ + '.' + self.__class__.__name__
        return state

    def __setstate__(self, state):
        for key in state:
            self.__dict__[key] = state[key]


class DictClass(dict):
    """Tiny extension of dict to allow setting and getting keys via attributes
    """
    def __getattr__(self, name):
        return self[name]

    def __setattr__(self, name, value):
        self[name] = value

    def __delattr__(self, name):
        del self[name]

    def __getstate__(self):
        return self.__dict__

    def __setstate__(self, state):
        for key in state:
            self[key] = state[key]


@@ -1,45 +1,45 @@
def load_volume(data, bootloader):
    """Instantiates a volume that corresponds to the data in the manifest

    :param dict data: The 'volume' section from the manifest
    :param str bootloader: Name of the bootloader the system will boot with

    :return: The volume that represents all information pertaining to the volume we bootstrap on.
    :rtype: Volume
    """
    # Map valid partition maps in the manifest and their corresponding classes
    from partitionmaps.gpt import GPTPartitionMap
    from partitionmaps.msdos import MSDOSPartitionMap
    from partitionmaps.none import NoPartitions
    partition_map = {'none': NoPartitions,
                     'gpt': GPTPartitionMap,
                     'msdos': MSDOSPartitionMap,
                     }.get(data['partitions']['type'])

    # Map valid volume backings in the manifest and their corresponding classes
    from bootstrapvz.common.fs.loopbackvolume import LoopbackVolume
    from bootstrapvz.providers.ec2.ebsvolume import EBSVolume
    from bootstrapvz.common.fs.virtualdiskimage import VirtualDiskImage
    from bootstrapvz.common.fs.virtualharddisk import VirtualHardDisk
    from bootstrapvz.common.fs.virtualmachinedisk import VirtualMachineDisk
    from bootstrapvz.common.fs.folder import Folder
    volume_backing = {'raw': LoopbackVolume,
                      's3': LoopbackVolume,
                      'vdi': VirtualDiskImage,
                      'vhd': VirtualHardDisk,
                      'vmdk': VirtualMachineDisk,
                      'ebs': EBSVolume,
                      'folder': Folder
                      }.get(data['backing'])

    # Instantiate the partition map
    from bootstrapvz.common.bytes import Bytes
    # Only operate with a physical sector size of 512 bytes for now,
    # not sure if we can change that for some of the virtual disks
    sector_size = Bytes('512B')
    partition_map = partition_map(data['partitions'], sector_size, bootloader)

    # Create the volume with the partition map as an argument
    return volume_backing(partition_map)


@@ -1,12 +1,12 @@
class VolumeError(Exception):
    """Raised when an error occurs while interacting with the volume
    """
    pass


class PartitionError(Exception):
    """Raised when an error occurs while interacting with the partitions on the volume
    """
    pass


@@ -6,117 +6,117 @@ from ..exceptions import PartitionError
class AbstractPartitionMap(FSMProxy):
    """Abstract representation of a partition map
    This class is a finite state machine and represents the state of the real partition map
    """

    __metaclass__ = ABCMeta

    # States the partition map can be in
    events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'unmapped'},
              {'name': 'map', 'src': 'unmapped', 'dst': 'mapped'},
              {'name': 'unmap', 'src': 'mapped', 'dst': 'unmapped'},
              ]

    def __init__(self, bootloader):
        """
        :param str bootloader: Name of the bootloader we will use for bootstrapping
        """
        # Create the configuration for the state machine
        cfg = {'initial': 'nonexistent', 'events': self.events, 'callbacks': {}}
        super(AbstractPartitionMap, self).__init__(cfg)

    def is_blocking(self):
        """Returns whether the partition map is blocking volume detach operations

        :rtype: bool
        """
        return self.fsm.current == 'mapped'

    def get_total_size(self):
        """Returns the total size the partitions occupy

        :return: The size of all partitions
        :rtype: Sectors
        """
        # We just need the endpoint of the last partition
        return self.partitions[-1].get_end()

    def create(self, volume):
        """Creates the partition map

        :param Volume volume: The volume to create the partition map on
        """
        self.fsm.create(volume=volume)

    @abstractmethod
    def _before_create(self, event):
        pass

    def map(self, volume):
        """Maps the partition map to device nodes

        :param Volume volume: The volume the partition map resides on
        """
        self.fsm.map(volume=volume)

    def _before_map(self, event):
        """
        :raises PartitionError: In case a partition could not be mapped.
        """
        volume = event.volume
        try:
            # Ask kpartx how the partitions will be mapped before actually attaching them.
            mappings = log_check_call(['kpartx', '-l', volume.device_path])
            import re
            regexp = re.compile('^(?P<name>.+[^\d](?P<p_idx>\d+)) : '
                                '(?P<start_blk>\d) (?P<num_blks>\d+) '
                                '{device_path} (?P<blk_offset>\d+)$'
                                .format(device_path=volume.device_path))
            log_check_call(['kpartx', '-as', volume.device_path])

            import os.path
            # Run through the kpartx output and map the paths to the partitions
            for mapping in mappings:
                match = regexp.match(mapping)
                if match is None:
                    raise PartitionError('Unable to parse kpartx output: ' + mapping)
                partition_path = os.path.join('/dev/mapper', match.group('name'))
                p_idx = int(match.group('p_idx')) - 1
                self.partitions[p_idx].map(partition_path)

            # Check if any partition was not mapped
            for idx, partition in enumerate(self.partitions):
                if partition.fsm.current not in ['mapped', 'formatted']:
                    raise PartitionError('kpartx did not map partition #' + str(partition.get_index()))

        except PartitionError:
            # Revert any mapping and reraise the error
            for partition in self.partitions:
                if partition.fsm.can('unmap'):
                    partition.unmap()
            log_check_call(['kpartx', '-ds', volume.device_path])
            raise

    def unmap(self, volume):
        """Unmaps the partition

        :param Volume volume: The volume to unmap the partition map from
        """
        self.fsm.unmap(volume=volume)

    def _before_unmap(self, event):
        """
        :raises PartitionError: If a partition cannot be unmapped
        """
        volume = event.volume
        # Run through all partitions before unmapping and make sure they can all be unmapped
        for partition in self.partitions:
            if partition.fsm.cannot('unmap'):
                msg = 'The partition {partition} prevents the unmap procedure'.format(partition=partition)
                raise PartitionError(msg)
        # Actually unmap the partitions
        log_check_call(['kpartx', '-ds', volume.device_path])
        # Call unmap on all partitions
        for partition in self.partitions:
            partition.unmap()
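The `kpartx -l` parsing in `_before_map` can be exercised in isolation; the sample output line below is fabricated for illustration, while the regular expression mirrors the one above:

```python
import os.path
import re

device_path = '/dev/loop0'  # illustrative device
regexp = re.compile(r'^(?P<name>.+[^\d](?P<p_idx>\d+)) : '
                    r'(?P<start_blk>\d) (?P<num_blks>\d+) '
                    r'{device_path} (?P<blk_offset>\d+)$'
                    .format(device_path=device_path))

# One mapping line, shaped like `kpartx -l /dev/loop0` output
sample = 'loop0p1 : 0 204800 /dev/loop0 2048'
match = regexp.match(sample)
# The device-mapper node lives under /dev/mapper, named after the mapping
partition_path = os.path.join('/dev/mapper', match.group('name'))
# kpartx partition indices are 1-based, the partitions list is 0-based
p_idx = int(match.group('p_idx')) - 1
```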


@@ -5,92 +5,92 @@ from bootstrapvz.common.tools import log_check_call
class GPTPartitionMap(AbstractPartitionMap):
    """Represents a GPT partition map
    """

    def __init__(self, data, sector_size, bootloader):
        """
        :param dict data: volume.partitions part of the manifest
        :param int sector_size: Sectorsize of the volume
        :param str bootloader: Name of the bootloader we will use for bootstrapping
        """
        from bootstrapvz.common.sectors import Sectors

        # List of partitions
        self.partitions = []

        # Returns the last partition unless there is none
        def last_partition():
            return self.partitions[-1] if len(self.partitions) > 0 else None

        if bootloader == 'grub':
            # If we are using the grub bootloader we need to create an unformatted partition
            # at the beginning of the map. Its size is 1007kb, which seems to be chosen so that
            # primary gpt + grub = 1024KiB
            # The 1 MiB will be subtracted later on, once we know what the subsequent partition is
            from ..partitions.unformatted import UnformattedPartition
            self.grub_boot = UnformattedPartition(Sectors('1MiB', sector_size), last_partition())
            self.partitions.append(self.grub_boot)

        # Offset all partitions by 1 sector.
        # parted in jessie has changed and no longer allows
        # partitions to be right next to each other.
        partition_gap = Sectors(1, sector_size)

        # The boot and swap partitions are optional
        if 'boot' in data:
            self.boot = GPTPartition(Sectors(data['boot']['size'], sector_size),
                                     data['boot']['filesystem'], data['boot'].get('format_command', None),
                                     'boot', last_partition())
            if self.boot.previous is not None:
                # No need to pad if this is the first partition
                self.boot.pad_start += partition_gap
                self.boot.size -= partition_gap
            self.partitions.append(self.boot)

        if 'swap' in data:
            self.swap = GPTSwapPartition(Sectors(data['swap']['size'], sector_size), last_partition())
            if self.swap.previous is not None:
                self.swap.pad_start += partition_gap
                self.swap.size -= partition_gap
            self.partitions.append(self.swap)

        self.root = GPTPartition(Sectors(data['root']['size'], sector_size),
                                 data['root']['filesystem'], data['root'].get('format_command', None),
                                 'root', last_partition())
        if self.root.previous is not None:
            self.root.pad_start += partition_gap
            self.root.size -= partition_gap
        self.partitions.append(self.root)

        if hasattr(self, 'grub_boot'):
            # Mark the grub partition as a bios_grub partition
            self.grub_boot.flags.append('bios_grub')
            # Subtract the grub partition size from the subsequent partition
            self.partitions[1].size -= self.grub_boot.size
        else:
            # Not using grub, mark the boot partition or root as bootable
            getattr(self, 'boot', self.root).flags.append('legacy_boot')

        # The first and last 34 sectors are reserved for the primary/secondary GPT
        primary_gpt_size = Sectors(34, sector_size)
        self.partitions[0].pad_start += primary_gpt_size
        self.partitions[0].size -= primary_gpt_size

        secondary_gpt_size = Sectors(34, sector_size)
        self.partitions[-1].pad_end += secondary_gpt_size
        self.partitions[-1].size -= secondary_gpt_size

        super(GPTPartitionMap, self).__init__(bootloader)

    def _before_create(self, event):
        """Creates the partition map
        """
        volume = event.volume
        # Disk alignment still plays a role in virtualized environment,
        # but I honestly have no clue as to what best practice is here, so we choose 'none'
        log_check_call(['parted', '--script', '--align', 'none', volume.device_path,
                        '--', 'mklabel', 'gpt'])
        # Create the partitions
        for partition in self.partitions:
            partition.create(volume)
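The sector bookkeeping above can be checked with plain integers (the real code wraps this arithmetic in the `Sectors` class); the figures assume the 512-byte sector size used throughout this commit:

```python
sector_size = 512

# 1MiB reserved at the front of the map when grub is the bootloader
grub_boot_sectors = 1 * 1024 * 1024 // sector_size

# The first 34 sectors are reserved for the primary GPT and carved
# out of the grub partition's allocation
primary_gpt_sectors = 34

# Effective grub partition size: primary gpt + grub = 1024KiB,
# which is where the 1007kb figure in the comment above comes from
effective_grub_sectors = grub_boot_sectors - primary_gpt_sectors
effective_grub_bytes = effective_grub_sectors * sector_size
```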


@@ -5,82 +5,82 @@ from bootstrapvz.common.tools import log_check_call
class MSDOSPartitionMap(AbstractPartitionMap):
    """Represents a MS-DOS partition map
    Sometimes also called MBR (but that confuses the hell out of me, so ms-dos it is)
    """

    def __init__(self, data, sector_size, bootloader):
        """
        :param dict data: volume.partitions part of the manifest
        :param int sector_size: Sectorsize of the volume
        :param str bootloader: Name of the bootloader we will use for bootstrapping
        """
        from bootstrapvz.common.sectors import Sectors

        # List of partitions
        self.partitions = []

        # Returns the last partition unless there is none
        def last_partition():
            return self.partitions[-1] if len(self.partitions) > 0 else None

        # The boot and swap partitions are optional
        if 'boot' in data:
            self.boot = MSDOSPartition(Sectors(data['boot']['size'], sector_size),
                                       data['boot']['filesystem'], data['boot'].get('format_command', None),
                                       last_partition())
            self.partitions.append(self.boot)

        # Offset all partitions by 1 sector.
        # parted in jessie has changed and no longer allows
        # partitions to be right next to each other.
        partition_gap = Sectors(1, sector_size)

        if 'swap' in data:
            self.swap = MSDOSSwapPartition(Sectors(data['swap']['size'], sector_size), last_partition())
            if self.swap.previous is not None:
                # No need to pad if this is the first partition
                self.swap.pad_start += partition_gap
                self.swap.size -= partition_gap
            self.partitions.append(self.swap)

        self.root = MSDOSPartition(Sectors(data['root']['size'], sector_size),
                                   data['root']['filesystem'], data['root'].get('format_command', None),
                                   last_partition())
        if self.root.previous is not None:
            self.root.pad_start += partition_gap
            self.root.size -= partition_gap
        self.partitions.append(self.root)

        # Mark boot as the boot partition, or root, if boot does not exist
        getattr(self, 'boot', self.root).flags.append('boot')

        # If we are using the grub bootloader, we will need to add a 2 MB offset
        # at the beginning of the partitionmap and steal it from the first partition.
        # The MBR offset is included in the grub offset, so if we don't use grub
        # we should reduce the size of the first partition and move it by only 512 bytes.
        if bootloader == 'grub':
            mbr_offset = Sectors('2MiB', sector_size)
        else:
            mbr_offset = Sectors('512B', sector_size)

        self.partitions[0].pad_start += mbr_offset
        self.partitions[0].size -= mbr_offset

        # Leave the last sector unformatted
        # parted in jessie thinks that a partition 10 sectors in size
        # goes from sector 0 to sector 9 (instead of 0 to 10)
        self.partitions[-1].pad_end += 1
        self.partitions[-1].size -= 1

        super(MSDOSPartitionMap, self).__init__(bootloader)

    def _before_create(self, event):
        volume = event.volume
        # Disk alignment still plays a role in virtualized environment,
        # but I honestly have no clue as to what best practice is here, so we choose 'none'
        log_check_call(['parted', '--script', '--align', 'none', volume.device_path,
                        '--', 'mklabel', 'msdos'])
        # Create the partitions
        for partition in self.partitions:
            partition.create(volume)
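The two offsets above differ by whether grub needs room for its core image behind the MBR; in plain 512-byte sectors (the real code uses the `Sectors` class), the comment's arithmetic works out as:

```python
sector_size = 512

# grub: a 2MiB offset stolen from the first partition
# (the 512-byte MBR is included inside this offset)
grub_offset_sectors = 2 * 1024 * 1024 // sector_size

# no grub: only the 512-byte MBR itself is reserved
mbr_only_sectors = 512 // sector_size
```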


@@ -2,44 +2,44 @@ from ..partitions.single import SinglePartition
class NoPartitions(object):
    """Represents a virtual 'NoPartitions' partitionmap.
    This virtual partition map exists because it is easier for tasks to
    simply always deal with partition maps and then let the base abstract that away.
    """

    def __init__(self, data, sector_size, bootloader):
        """
        :param dict data: volume.partitions part of the manifest
        :param int sector_size: Sectorsize of the volume
        :param str bootloader: Name of the bootloader we will use for bootstrapping
        """
        from bootstrapvz.common.sectors import Sectors

        # In the NoPartitions partitions map we only have a single 'partition'
        self.root = SinglePartition(Sectors(data['root']['size'], sector_size),
                                    data['root']['filesystem'], data['root'].get('format_command', None))
        self.partitions = [self.root]

    def is_blocking(self):
        """Returns whether the partition map is blocking volume detach operations

        :rtype: bool
        """
        return self.root.fsm.current == 'mounted'

    def get_total_size(self):
        """Returns the total size the partitions occupy

        :return: The size of all the partitions
        :rtype: Sectors
        """
        return self.root.get_end()

    def __getstate__(self):
        state = self.__dict__.copy()
        state['__class__'] = self.__module__ + '.' + self.__class__.__name__
        return state

    def __setstate__(self, state):
        for key in state:
            self.__dict__[key] = state[key]
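The `__getstate__`/`__setstate__` pair above makes instances serializable while recording their fully qualified class path in the state dict; a stand-in class (the `Dummy` class is illustrative, not part of bootstrap-vz) shows the round trip:

```python
class Dummy(object):
    def __init__(self):
        self.root = 'root-partition'

    def __getstate__(self):
        state = self.__dict__.copy()
        state['__class__'] = self.__module__ + '.' + self.__class__.__name__
        return state

    def __setstate__(self, state):
        for key in state:
            self.__dict__[key] = state[key]

# The state dict carries both the attributes and the class path
state = Dummy().__getstate__()
# Restore onto a bare instance without running __init__
restored = Dummy.__new__(Dummy)
restored.__setstate__(state)
```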


@@ -6,124 +6,124 @@ from bootstrapvz.common.fsm_proxy import FSMProxy
class AbstractPartition(FSMProxy):
    """Abstract representation of a partition
    This class is a finite state machine and represents the state of the real partition
    """

    __metaclass__ = ABCMeta

    # Our states
    events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'created'},
              {'name': 'format', 'src': 'created', 'dst': 'formatted'},
              {'name': 'mount', 'src': 'formatted', 'dst': 'mounted'},
              {'name': 'unmount', 'src': 'mounted', 'dst': 'formatted'},
              ]

    def __init__(self, size, filesystem, format_command):
        """
        :param Bytes size: Size of the partition
        :param str filesystem: Filesystem the partition should be formatted with
        :param list format_command: Optional format command, valid variables are fs, device_path and size
        """
        self.size = size
        self.filesystem = filesystem
        self.format_command = format_command
        # Initialize the start & end padding to 0 sectors, may be changed later
        self.pad_start = Sectors(0, size.sector_size)
        self.pad_end = Sectors(0, size.sector_size)
        # Path to the partition
        self.device_path = None
        # Dictionary with mount points as keys and Mount objects as values
        self.mounts = {}

        # Create the configuration for our state machine
        cfg = {'initial': 'nonexistent', 'events': self.events, 'callbacks': {}}
        super(AbstractPartition, self).__init__(cfg)

    def get_uuid(self):
        """Gets the UUID of the partition

        :return: The UUID of the partition
        :rtype: str
        """
        [uuid] = log_check_call(['blkid', '-s', 'UUID', '-o', 'value', self.device_path])
        return uuid

    @abstractmethod
    def get_start(self):
        pass

    def get_end(self):
        """Gets the end of the partition

        :return: The end of the partition
        :rtype: Sectors
        """
        return self.get_start() + self.pad_start + self.size + self.pad_end

    def _before_format(self, e):
        """Formats the partition
        """
        # If there is no explicit format_command defined we simply call mkfs.fstype
        if self.format_command is None:
            format_command = ['mkfs.{fs}', '{device_path}']
        else:
            format_command = self.format_command
        variables = {'fs': self.filesystem,
                     'device_path': self.device_path,
                     'size': self.size,
                     }
        command = map(lambda part: part.format(**variables), format_command)
        # Format the partition
        log_check_call(command)
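The template substitution in `_before_format` is plain `str.format` applied to each element of the command list; with illustrative values (the filesystem and device path below are made up):

```python
# Default command used when no explicit format_command is given
format_command = ['mkfs.{fs}', '{device_path}']
variables = {'fs': 'ext4',
             'device_path': '/dev/mapper/loop0p1',
             'size': '10GiB',
             }
# Each list element is formatted independently; variables a template
# does not mention (here: size) are simply ignored
command = [part.format(**variables) for part in format_command]
```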
    def _before_mount(self, e):
        """Mount the partition
        """
        log_check_call(['mount', '--types', self.filesystem, self.device_path, e.destination])
        self.mount_dir = e.destination

    def _after_mount(self, e):
        """Mount any mounts associated with this partition
        """
        # Make sure we mount in ascending order of mountpoint path length
        # This ensures that we don't mount /dev/pts before we mount /dev
        for destination in sorted(self.mounts.iterkeys(), key=len):
            self.mounts[destination].mount(self.mount_dir)

    def _before_unmount(self, e):
        """Unmount any mounts associated with this partition
        """
        # Unmount the mounts in descending order of mountpoint path length
        # You cannot unmount /dev before you have unmounted /dev/pts
        for destination in sorted(self.mounts.iterkeys(), key=len, reverse=True):
            self.mounts[destination].unmount()
        log_check_call(['umount', self.mount_dir])
        del self.mount_dir

    def add_mount(self, source, destination, opts=[]):
        """Associate a mount with this partition
        Automatically mounts it

        :param str,AbstractPartition source: The source of the mount
        :param str destination: The path to the mountpoint
        :param list opts: Any options that should be passed to the mount command
        """
        # Create a new mount object, mount it if the partition is mounted and put it in the mounts dict
        from mount import Mount
        mount = Mount(source, destination, opts)
        if self.fsm.current == 'mounted':
            mount.mount(self.mount_dir)
        self.mounts[destination] = mount

    def remove_mount(self, destination):
        """Remove a mount from this partition
        Automatically unmounts it

        :param str destination: The mountpoint path of the mount that should be removed
        """
        # Unmount the mount if the partition is mounted and delete it from the mounts dict
        # If the mount is already unmounted and the source is a partition, this will raise an exception
        if self.fsm.current == 'mounted':
            self.mounts[destination].unmount()
        del self.mounts[destination]
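The ordering trick in `_after_mount`/`_before_unmount` relies only on mountpoint path length; a quick check with illustrative mountpoints:

```python
mountpoints = ['/dev/pts', '/dev', '/proc']

# Ascending path length for mounting: /dev must come before /dev/pts
mount_order = sorted(mountpoints, key=len)
# Descending for unmounting: /dev/pts must be released before /dev
unmount_order = sorted(mountpoints, key=len, reverse=True)
```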


@@ -4,135 +4,135 @@ from bootstrapvz.common.sectors import Sectors
class BasePartition(AbstractPartition):
"""Represents a partition that is actually a partition (and not a virtual one like 'Single')
"""
"""Represents a partition that is actually a partition (and not a virtual one like 'Single')
"""
# Override the states of the abstract partition
# A real partition can be mapped and unmapped
events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'unmapped'},
{'name': 'map', 'src': 'unmapped', 'dst': 'mapped'},
{'name': 'format', 'src': 'mapped', 'dst': 'formatted'},
{'name': 'mount', 'src': 'formatted', 'dst': 'mounted'},
{'name': 'unmount', 'src': 'mounted', 'dst': 'formatted'},
{'name': 'unmap', 'src': 'formatted', 'dst': 'unmapped_fmt'},
# Override the states of the abstract partition
# A real partition can be mapped and unmapped
events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'unmapped'},
{'name': 'map', 'src': 'unmapped', 'dst': 'mapped'},
{'name': 'format', 'src': 'mapped', 'dst': 'formatted'},
{'name': 'mount', 'src': 'formatted', 'dst': 'mounted'},
{'name': 'unmount', 'src': 'mounted', 'dst': 'formatted'},
{'name': 'unmap', 'src': 'formatted', 'dst': 'unmapped_fmt'},
{'name': 'map', 'src': 'unmapped_fmt', 'dst': 'formatted'},
{'name': 'unmap', 'src': 'mapped', 'dst': 'unmapped'},
]
def __init__(self, size, filesystem, format_command, previous):
"""
:param Bytes size: Size of the partition
:param str filesystem: Filesystem the partition should be formatted with
:param list format_command: Optional format command, valid variables are fs, device_path and size
:param BasePartition previous: The partition that precedes this one
"""
# By saving the previous partition we have a linked list
# that partitions can go backwards in to find the first partition.
self.previous = previous
# List of flags that parted should put on the partition
self.flags = []
# Path to symlink in /dev/disk/by-uuid (manually maintained by this class)
self.disk_by_uuid_path = None
super(BasePartition, self).__init__(size, filesystem, format_command)
def create(self, volume):
"""Creates the partition
:param Volume volume: The volume to create the partition on
"""
self.fsm.create(volume=volume)
def get_index(self):
"""Gets the index of this partition in the partition map
:return: The index of the partition in the partition map
:rtype: int
"""
if self.previous is None:
# Partitions are 1 indexed
return 1
else:
# Recursive call to the previous partition, walking up the chain...
return self.previous.get_index() + 1
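The recursive, 1-indexed walk above can be sketched in isolation (`PartNode` is a hypothetical stand-in for `BasePartition`'s linked list, not part of the codebase):

```python
class PartNode(object):
    """Hypothetical stand-in for BasePartition's linked partition list."""
    def __init__(self, previous=None):
        self.previous = previous

    def get_index(self):
        # Partitions are 1-indexed; recurse back to the head of the list
        if self.previous is None:
            return 1
        return self.previous.get_index() + 1

first = PartNode()
third = PartNode(previous=PartNode(previous=first))
print(third.get_index())  # -> 3
```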
def get_start(self):
"""Gets the starting byte of this partition
:return: The starting byte of this partition
:rtype: Sectors
"""
if self.previous is None:
return Sectors(0, self.size.sector_size)
else:
return self.previous.get_end()
def map(self, device_path):
"""Maps the partition to a device_path
:param str device_path: The device path this partition should be mapped to
"""
self.fsm.map(device_path=device_path)
def link_uuid(self):
# /lib/udev/rules.d/60-kpartx.rules does not create symlinks in /dev/disk/by-{uuid,label}
# This patch would fix that: http://www.redhat.com/archives/dm-devel/2013-July/msg00080.html
# For now we just do the uuid part ourselves.
# This is mainly to fix a problem in update-grub where /etc/grub.d/10_linux
# checks if the $GRUB_DEVICE_UUID exists in /dev/disk/by-uuid and falls
# back to $GRUB_DEVICE if it doesn't.
# $GRUB_DEVICE is /dev/mapper/xvd{f,g...}# (on ec2), as opposed to /dev/xvda# when booting.
# Creating the symlink ensures that grub consistently uses
# $GRUB_DEVICE_UUID when creating /boot/grub/grub.cfg
self.disk_by_uuid_path = os.path.join('/dev/disk/by-uuid', self.get_uuid())
if not os.path.exists(self.disk_by_uuid_path):
os.symlink(self.device_path, self.disk_by_uuid_path)
def unlink_uuid(self):
if os.path.isfile(self.disk_by_uuid_path):
os.remove(self.disk_by_uuid_path)
self.disk_by_uuid_path = None
def _before_create(self, e):
"""Creates the partition
"""
from bootstrapvz.common.tools import log_check_call
# The create command is fairly simple:
# - fs_type is the partition filesystem, as defined by parted:
# fs-type can be one of "fat16", "fat32", "ext2", "HFS", "linux-swap",
# "NTFS", "reiserfs", or "ufs".
# - start and end are just Bytes objects coerced into strings
if self.filesystem == 'swap':
fs_type = 'linux-swap'
else:
fs_type = 'ext2'
create_command = ('mkpart primary {fs_type} {start} {end}'
.format(fs_type=fs_type,
start=str(self.get_start() + self.pad_start),
end=str(self.get_end() - self.pad_end)))
# Create the partition
log_check_call(['parted', '--script', '--align', 'none', e.volume.device_path,
'--', create_command])
# Set any flags on the partition
for flag in self.flags:
log_check_call(['parted', '--script', e.volume.device_path,
'--', ('set {idx} {flag} on'
.format(idx=str(self.get_index()), flag=flag))])
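The command construction above can be sketched on its own (`build_mkpart` and the device path in the comment are hypothetical, for illustration only):

```python
def build_mkpart(filesystem, start, end):
    # parted only needs a coarse filesystem hint for mkpart:
    # 'linux-swap' for swap partitions, 'ext2' for everything else
    fs_type = 'linux-swap' if filesystem == 'swap' else 'ext2'
    return 'mkpart primary {fs_type} {start} {end}'.format(
        fs_type=fs_type, start=start, end=end)

command = build_mkpart('ext4', '2048s', '1048575s')
# Would be passed along as: parted --script --align none /dev/xvdf -- <command>
print(command)  # -> mkpart primary ext2 2048s 1048575s
```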
def _before_map(self, e):
# Set the device path
self.device_path = e.device_path
if e.src == 'unmapped_fmt':
# Only link the uuid if the partition is formatted
self.link_uuid()
def _after_format(self, e):
# We do this after formatting because there otherwise would be no UUID
self.link_uuid()
def _before_unmap(self, e):
# When unmapped, the device_path information becomes invalid, so we delete it
self.device_path = None
if e.src == 'formatted':
self.unlink_uuid()
@@ -3,24 +3,24 @@ from base import BasePartition
class GPTPartition(BasePartition):
"""Represents a GPT partition
"""
def __init__(self, size, filesystem, format_command, name, previous):
"""
:param Bytes size: Size of the partition
:param str filesystem: Filesystem the partition should be formatted with
:param list format_command: Optional format command, valid variables are fs, device_path and size
:param str name: The name of the partition
:param BasePartition previous: The partition that precedes this one
"""
self.name = name
super(GPTPartition, self).__init__(size, filesystem, format_command, previous)
def _before_create(self, e):
# Create the partition and then set the name of the partition afterwards
super(GPTPartition, self)._before_create(e)
# partition name only works for gpt, for msdos that becomes the part-type (primary, extended, logical)
name_command = 'name {idx} {name}'.format(idx=self.get_index(), name=self.name)
log_check_call(['parted', '--script', e.volume.device_path,
'--', name_command])
@@ -3,15 +3,15 @@ from gpt import GPTPartition
class GPTSwapPartition(GPTPartition):
"""Represents a GPT swap partition
"""
def __init__(self, size, previous):
"""
:param Bytes size: Size of the partition
:param BasePartition previous: The partition that precedes this one
"""
super(GPTSwapPartition, self).__init__(size, 'swap', None, 'swap', previous)
def _before_format(self, e):
log_check_call(['mkswap', self.device_path])
@@ -4,46 +4,46 @@ from bootstrapvz.common.tools import log_check_call
class Mount(object):
"""Represents a mount into the partition
"""
def __init__(self, source, destination, opts):
"""
:param str,AbstractPartition source: The path from where we mount or a partition
:param str destination: The path of the mountpoint
:param list opts: List of options to pass to the mount command
"""
self.source = source
self.destination = destination
self.opts = opts
def mount(self, prefix):
"""Performs the mount operation or forwards it to another partition
:param str prefix: Path prefix of the mountpoint
"""
mount_dir = os.path.join(prefix, self.destination)
# If the source is another partition, we tell that partition to mount itself
if isinstance(self.source, AbstractPartition):
self.source.mount(destination=mount_dir)
else:
log_check_call(['mount'] + self.opts + [self.source, mount_dir])
self.mount_dir = mount_dir
def unmount(self):
"""Performs the unmount operation or asks the partition to unmount itself
"""
# If it's a partition, it can unmount itself
if isinstance(self.source, AbstractPartition):
self.source.unmount()
else:
log_check_call(['umount', self.mount_dir])
del self.mount_dir
def __getstate__(self):
state = self.__dict__.copy()
state['__class__'] = self.__module__ + '.' + self.__class__.__name__
return state
def __setstate__(self, state):
for key in state:
self.__dict__[key] = state[key]
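The same serialization scheme in a self-contained sketch (`Restorable` is a hypothetical class, not part of the codebase):

```python
class Restorable(object):
    """Hypothetical class using the same __getstate__/__setstate__ scheme as Mount."""
    def __init__(self, source, destination):
        self.source = source
        self.destination = destination

    def __getstate__(self):
        state = self.__dict__.copy()
        # Record the fully qualified class name alongside the attributes
        state['__class__'] = self.__module__ + '.' + self.__class__.__name__
        return state

    def __setstate__(self, state):
        for key in state:
            self.__dict__[key] = state[key]

state = Restorable('/dev/xvdf1', '/mnt/target').__getstate__()
print(state['__class__'])
```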
@@ -2,6 +2,6 @@ from base import BasePartition
class MSDOSPartition(BasePartition):
"""Represents an MS-DOS partition
"""
pass
@@ -3,15 +3,15 @@ from msdos import MSDOSPartition
class MSDOSSwapPartition(MSDOSPartition):
"""Represents an MS-DOS swap partition
"""
def __init__(self, size, previous):
"""
:param Bytes size: Size of the partition
:param BasePartition previous: The partition that precedes this one
"""
super(MSDOSSwapPartition, self).__init__(size, 'swap', None, previous)
def _before_format(self, e):
log_check_call(['mkswap', self.device_path])
@@ -2,14 +2,14 @@ from abstract import AbstractPartition
class SinglePartition(AbstractPartition):
"""Represents a single virtual partition on an unpartitioned volume
"""
def get_start(self):
"""Gets the starting byte of this partition
:return: The starting byte of this partition
:rtype: Sectors
"""
from bootstrapvz.common.sectors import Sectors
return Sectors(0, self.size.sector_size)
@@ -2,19 +2,19 @@ from base import BasePartition
class UnformattedPartition(BasePartition):
"""Represents an unformatted partition
It cannot be mounted
"""
# The states for our state machine. It can only be mapped, not mounted.
events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'unmapped'},
{'name': 'map', 'src': 'unmapped', 'dst': 'mapped'},
{'name': 'unmap', 'src': 'mapped', 'dst': 'unmapped'},
]
def __init__(self, size, previous):
"""
:param Bytes size: Size of the partition
:param BasePartition previous: The partition that precedes this one
"""
super(UnformattedPartition, self).__init__(size, None, None, previous)
@@ -6,131 +6,131 @@ from partitionmaps.none import NoPartitions
class Volume(FSMProxy):
"""Represents an abstract volume.
This class is a finite state machine and represents the state of the real volume.
"""
__metaclass__ = ABCMeta
# States this volume can be in
events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'detached'},
{'name': 'attach', 'src': 'detached', 'dst': 'attached'},
{'name': 'link_dm_node', 'src': 'attached', 'dst': 'linked'},
{'name': 'unlink_dm_node', 'src': 'linked', 'dst': 'attached'},
{'name': 'detach', 'src': 'attached', 'dst': 'detached'},
{'name': 'delete', 'src': 'detached', 'dst': 'deleted'},
]
def __init__(self, partition_map):
"""
:param PartitionMap partition_map: The partition map for the volume
"""
# Path to the volume
self.device_path = None
# The partition map
self.partition_map = partition_map
# The size of the volume as reported by the partition map
self.size = self.partition_map.get_total_size()
# Before detaching, check that nothing would block the detachment
callbacks = {'onbeforedetach': self._check_blocking}
if isinstance(self.partition_map, NoPartitions):
# When the volume has no partitions, the virtual root partition path is equal to that of the volume
# Update that path whenever the path to the volume changes
def set_dev_path(e):
self.partition_map.root.device_path = self.device_path
callbacks['onafterattach'] = set_dev_path
callbacks['onafterdetach'] = set_dev_path # Will become None
callbacks['onlink_dm_node'] = set_dev_path
callbacks['onunlink_dm_node'] = set_dev_path
# Create the configuration for our finite state machine
cfg = {'initial': 'nonexistent', 'events': self.events, 'callbacks': callbacks}
super(Volume, self).__init__(cfg)
def _after_create(self, e):
if isinstance(self.partition_map, NoPartitions):
# When the volume has no partitions, the virtual root partition
# is essentially created when the volume is created, forward that creation event.
self.partition_map.root.create()
def _check_blocking(self, e):
"""Checks whether the volume is blocked
:raises VolumeError: When the volume is blocked from being detached
"""
# Only the partition map can block the volume
if self.partition_map.is_blocking():
raise VolumeError('The partitionmap prevents the detach procedure')
def _before_link_dm_node(self, e):
"""Links the volume using the device mapper
This allows us to create a 'window' into the volume that acts like a volume in itself.
Mainly it is used to fool grub into thinking that it is working with a real volume,
rather than a loopback device or a network block device.
:param _e_obj e: Event object containing arguments to create()
Keyword arguments to link_dm_node() are:
:param int logical_start_sector: The sector the volume should start at in the new volume
:param int start_sector: The offset at which the volume should begin to be mapped in the new volume
:param int sectors: The number of sectors that should be mapped
Read more at: http://manpages.debian.org/cgi-bin/man.cgi?query=dmsetup&apropos=0&sektion=0&manpath=Debian+7.0+wheezy&format=html&locale=en
:raises VolumeError: When a free block device cannot be found.
"""
import os.path
from bootstrapvz.common.fs import get_partitions
# Fetch information from /proc/partitions
proc_partitions = get_partitions()
device_name = os.path.basename(self.device_path)
device_partition = proc_partitions[device_name]
# The sector the volume should start at in the new volume
logical_start_sector = getattr(e, 'logical_start_sector', 0)
# The offset at which the volume should begin to be mapped in the new volume
start_sector = getattr(e, 'start_sector', 0)
# The number of sectors that should be mapped
sectors = getattr(e, 'sectors', int(self.size) - start_sector)
# This is the table we send to dmsetup, so that it may create a device mapping for us.
table = ('{log_start_sec} {sectors} linear {major}:{minor} {start_sec}'
.format(log_start_sec=logical_start_sector,
sectors=sectors,
major=device_partition['major'],
minor=device_partition['minor'],
start_sec=start_sector))
import string
import os.path
# Figure out the device letter and path
for letter in string.ascii_lowercase:
dev_name = 'vd' + letter
dev_path = os.path.join('/dev/mapper', dev_name)
if not os.path.exists(dev_path):
self.dm_node_name = dev_name
self.dm_node_path = dev_path
break
if not hasattr(self, 'dm_node_name'):
raise VolumeError('Unable to find a free block device path for mounting the bootstrap volume')
# Create the device mapping
log_check_call(['dmsetup', 'create', self.dm_node_name], table)
# Update the device_path but remember the old one for when we unlink the volume again
self.unlinked_device_path = self.device_path
self.device_path = self.dm_node_path
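The single-line table handed to dmsetup follows the device-mapper `linear` target format; a standalone sketch with made-up numbers (the real values come from /proc/partitions):

```python
# All numbers here are hypothetical, for illustration only
logical_start_sector = 0   # sector the volume starts at in the mapped device
start_sector = 0           # offset into the backing device
sectors = 2097152          # number of 512-byte sectors to map (1 GiB)
major, minor = 202, 80     # major:minor of the backing block device

table = ('{log_start_sec} {sectors} linear {major}:{minor} {start_sec}'
         .format(log_start_sec=logical_start_sector, sectors=sectors,
                 major=major, minor=minor, start_sec=start_sector))
print(table)  # -> 0 2097152 linear 202:80 0
# This line is fed on stdin to: dmsetup create <node name>
```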
def _before_unlink_dm_node(self, e):
"""Unlinks the device mapping
"""
log_check_call(['dmsetup', 'remove', self.dm_node_name])
# Reset the device_path
self.device_path = self.unlinked_device_path
# Delete the no longer valid information
del self.unlinked_device_path
del self.dm_node_name
del self.dm_node_path
@@ -5,100 +5,100 @@ import logging
def get_console_handler(debug, colorize):
"""Returns a log handler for the console
The handler color codes the different log levels
:param bool debug: Whether to set the log level to DEBUG (otherwise INFO)
:param bool colorize: Whether to colorize console output
:return: The console logging handler
"""
# Create a console log handler
import sys
console_handler = logging.StreamHandler(sys.stderr)
if colorize:
# We want to colorize the output to the console, so we add a formatter
console_handler.setFormatter(ColorFormatter())
# Set the log level depending on the debug argument
if debug:
console_handler.setLevel(logging.DEBUG)
else:
console_handler.setLevel(logging.INFO)
return console_handler
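The level selection above can be sketched without the colorizing formatter (`plain_console_handler` is a hypothetical helper, not the function above):

```python
import logging
import sys

def plain_console_handler(debug):
    # Same level selection as get_console_handler, minus the color formatter
    handler = logging.StreamHandler(sys.stderr)
    handler.setLevel(logging.DEBUG if debug else logging.INFO)
    return handler

print(plain_console_handler(debug=False).level == logging.INFO)
```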
def get_file_handler(path, debug):
"""Returns a log handler for the given path
If the parent directory of the logpath does not exist it will be created
The handler outputs relative timestamps (to when it was created)
:param str path: The full path to the logfile
:param bool debug: Whether to set the log level to DEBUG (otherwise INFO)
:return: The file logging handler
"""
import os.path
if not os.path.exists(os.path.dirname(path)):
os.makedirs(os.path.dirname(path))
# Create the log handler
file_handler = logging.FileHandler(path)
# Absolute timestamps are rather useless when bootstrapping, it's much more interesting
# to see how long things take, so we log in a relative format instead
file_handler.setFormatter(FileFormatter('[%(relativeCreated)s] %(levelname)s: %(message)s'))
# The file log handler always logs everything
file_handler.setLevel(logging.DEBUG)
return file_handler
def get_log_filename(manifest_path):
"""Returns the path to a logfile given a manifest
The logfile name is constructed from the current timestamp and the basename of the manifest
:param str manifest_path: The path to the manifest
:return: The path to the logfile
:rtype: str
"""
import os.path
from datetime import datetime
manifest_basename = os.path.basename(manifest_path)
manifest_name, _ = os.path.splitext(manifest_basename)
timestamp = datetime.now().strftime('%Y%m%d%H%M%S')
filename = '{timestamp}_{name}.log'.format(timestamp=timestamp, name=manifest_name)
return filename
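The filename construction in a runnable sketch (`log_filename` is a hypothetical rename of the function above, and the manifest path is made up):

```python
import os.path
import re
from datetime import datetime

def log_filename(manifest_path):
    # <timestamp>_<manifest basename without extension>.log
    name, _ = os.path.splitext(os.path.basename(manifest_path))
    timestamp = datetime.now().strftime('%Y%m%d%H%M%S')
    return '{timestamp}_{name}.log'.format(timestamp=timestamp, name=name)

print(log_filename('/manifests/jessie-ec2.yml'))  # e.g. 20160604113559_jessie-ec2.log
```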
class SourceFormatter(logging.Formatter):
"""Adds a [source] tag to the log message if it exists
The python docs suggest using a LoggingAdapter, but that would mean we'd
have to use it everywhere we log something (and only when called remotely),
which is not feasible.
"""
def format(self, record):
extra = getattr(record, 'extra', {})
if 'source' in extra:
record.msg = '[{source}] {message}'.format(source=record.extra['source'],
message=record.msg)
return super(SourceFormatter, self).format(record)
class ColorFormatter(SourceFormatter):
"""Colorizes log messages depending on the loglevel
"""
level_colors = {logging.ERROR: 'red',
logging.WARNING: 'magenta',
logging.INFO: 'blue',
}
def format(self, record):
# Colorize the message if we have a color for it (DEBUG has no color)
from termcolor import colored
record.msg = colored(record.msg, self.level_colors.get(record.levelno, None))
return super(ColorFormatter, self).format(record)
class FileFormatter(SourceFormatter):
"""Formats log statements for output to file
Currently this is just a stub
"""
def format(self, record):
return super(FileFormatter, self).format(record)


@@ -3,37 +3,37 @@
def main():
"""Main function for invoking the bootstrap process
:raises Exception: When the invoking user is not root and --dry-run isn't specified
"""
# Get the commandline arguments
opts = get_opts()
# Require root privileges, except when doing a dry-run where they aren't needed
import os
if os.geteuid() != 0 and not opts['--dry-run']:
raise Exception('This program requires root privileges.')
# Set up logging
setup_loggers(opts)
# Load the manifest
from manifest import Manifest
manifest = Manifest(path=opts['MANIFEST'])
# Everything has been set up, begin the bootstrapping process
run(manifest,
debug=opts['--debug'],
pause_on_error=opts['--pause-on-error'],
dry_run=opts['--dry-run'])
def get_opts():
"""Creates an argument parser and returns the arguments it has parsed
"""
import docopt
usage = """bootstrap-vz
Usage: bootstrap-vz [options] MANIFEST
@@ -46,97 +46,97 @@ Options:
Colorize the console output [default: auto]
--debug Print debugging information
-h, --help show this help
"""
opts = docopt.docopt(usage)
if opts['--color'] not in ('auto', 'always', 'never'):
raise docopt.DocoptExit('Value of --color must be one of auto, always or never.')
return opts
def setup_loggers(opts):
"""Sets up the file and console loggers
:param dict opts: Dictionary of options from the commandline
"""
import logging
root = logging.getLogger()
root.setLevel(logging.NOTSET)
import log
# Log to file unless --log is a single dash
if opts['--log'] != '-':
import os.path
log_filename = log.get_log_filename(opts['MANIFEST'])
logpath = os.path.join(opts['--log'], log_filename)
file_handler = log.get_file_handler(path=logpath, debug=True)
root.addHandler(file_handler)
if opts['--color'] == 'never':
colorize = False
elif opts['--color'] == 'always':
colorize = True
else:
# If --color=auto (default), decide whether to colorize by whether stderr is a tty.
import os
colorize = os.isatty(2)
console_handler = log.get_console_handler(debug=opts['--debug'], colorize=colorize)
root.addHandler(console_handler)
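The `--color` decision above can be condensed into a small standalone helper (the function name is made up; the fd-2 check is the same one the code uses):

```python
import os

def should_colorize(color_opt):
    # Mirrors the --color handling: explicit values win outright,
    # 'auto' falls back to checking whether stderr (fd 2) is a terminal
    if color_opt == 'never':
        return False
    if color_opt == 'always':
        return True
    return os.isatty(2)

print(should_colorize('always'))  # True
```

Checking the tty only for `auto` means piping the output to a file disables colors by default, while `--color=always` still forces them.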
def run(manifest, debug=False, pause_on_error=False, dry_run=False):
"""Runs the bootstrapping process
:param Manifest manifest: The manifest to run the bootstrapping process for
:param bool debug: Whether to turn debugging mode on
:param bool pause_on_error: Whether to pause on error, before rollback
:param bool dry_run: Don't actually run the tasks
"""
# Get the tasklist
from tasklist import load_tasks
from tasklist import TaskList
tasks = load_tasks('resolve_tasks', manifest)
tasklist = TaskList(tasks)
# 'resolve_tasks' is the name of the function to call on the provider and plugins
# Create the bootstrap information object that'll be used throughout the bootstrapping process
from bootstrapinfo import BootstrapInformation
bootstrap_info = BootstrapInformation(manifest=manifest, debug=debug)
import logging
log = logging.getLogger(__name__)
try:
# Run all the tasks the tasklist has gathered
tasklist.run(info=bootstrap_info, dry_run=dry_run)
# We're done! :-)
log.info('Successfully completed bootstrapping')
except (Exception, KeyboardInterrupt) as e:
# When an error occurs, log it and begin rollback
log.exception(e)
if pause_on_error:
# The --pause-on-error is useful when the user wants to inspect the volume before rollback
raw_input('Press Enter to commence rollback')
log.error('Rolling back')
# Create a useful little function for the provider and plugins to use,
# when figuring out what tasks should be added to the rollback list.
def counter_task(taskset, task, counter):
"""counter_task() adds the third argument to the rollback tasklist
if the second argument is present in the list of completed tasks
:param set taskset: The taskset to add the rollback task to
:param Task task: The task to look for in the completed tasks list
:param Task counter: The task to add to the rollback tasklist
"""
if task in tasklist.tasks_completed and counter not in tasklist.tasks_completed:
taskset.add(counter)
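A standalone sketch of `counter_task()`'s semantics; the real function is a closure over `tasklist.tasks_completed`, which is passed explicitly here, and the task names are made up:

```python
def counter_task(tasks_completed, taskset, task, counter):
    # Queue the counter-task only if the original task actually ran
    # and its counterpart hasn't already run
    if task in tasks_completed and counter not in tasks_completed:
        taskset.add(counter)

completed = ['CreateVolume']          # AttachVolume never ran
rollback = set()
counter_task(completed, rollback, 'CreateVolume', 'DeleteVolume')
counter_task(completed, rollback, 'AttachVolume', 'DetachVolume')
print(rollback)  # {'DeleteVolume'}
```

This is what keeps rollback minimal: only work that was actually done gets undone.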
# Ask the provider and plugins for tasks they'd like to add to the rollback tasklist
# Any additional arguments beyond the first two are passed directly to the provider and plugins
rollback_tasks = load_tasks('resolve_rollback_tasks', manifest, tasklist.tasks_completed, counter_task)
rollback_tasklist = TaskList(rollback_tasks)
# Run the rollback tasklist
rollback_tasklist.run(info=bootstrap_info, dry_run=dry_run)
log.info('Successfully completed rollback')
raise
return bootstrap_info


@@ -9,150 +9,150 @@ log = logging.getLogger(__name__)
class Manifest(object):
"""This class holds all the information that providers and plugins need
to perform the bootstrapping process. All actions that are taken originate from
here. The manifest shall not be modified after it has been loaded.
Currently, immutability is not enforced; it would require a fair amount of code
to enforce, so instead we rely on tasks behaving properly.
"""
def __init__(self, path=None, data=None):
"""Initializer: Given a path we load, validate and parse the manifest.
To create the manifest from dynamic data instead of the contents of a file,
provide a properly constructed dict as the data argument.
:param str path: The path to the manifest (ignored, when `data' is provided)
:param str data: The manifest data, if it is not None, it will be used instead of the contents of `path'
"""
if path is None and data is None:
raise ManifestError('`path\' or `data\' must be provided')
self.path = path
import os.path
self.metaschema = load_data(os.path.normpath(os.path.join(os.path.dirname(__file__),
'metaschema.json')))
self.load_data(data)
self.load_modules()
self.validate()
self.parse()
def load_data(self, data=None):
"""Loads the manifest and performs a basic validation.
This function reads the manifest and performs some basic validation of
the manifest itself to ensure that the properties required for initialization are accessible
(otherwise the user would be presented with some cryptic error messages).
"""
if data is None:
self.data = load_data(self.path)
else:
self.data = data
from . import validate_manifest
# Validate the manifest with the base validation function in __init__
validate_manifest(self.data, self.schema_validator, self.validation_error)
def load_modules(self):
"""Loads the provider and the plugins.
"""
# Get the provider name from the manifest and load the corresponding module
provider_modname = 'bootstrapvz.providers.' + self.data['provider']['name']
log.debug('Loading provider ' + self.data['provider']['name'])
# Create a modules dict that contains the loaded provider and plugins
import importlib
self.modules = {'provider': importlib.import_module(provider_modname),
'plugins': [],
}
# Run through all the plugins mentioned in the manifest and load them
from pkg_resources import iter_entry_points
if 'plugins' in self.data:
for plugin_name in self.data['plugins'].keys():
log.debug('Loading plugin ' + plugin_name)
try:
# Internal bootstrap-vz plugins take precedence wrt. plugin name
modname = 'bootstrapvz.plugins.' + plugin_name
plugin = importlib.import_module(modname)
except ImportError:
entry_points = list(iter_entry_points('bootstrapvz.plugins', name=plugin_name))
num_entry_points = len(entry_points)
if num_entry_points < 1:
raise
if num_entry_points > 1:
msg = ('Unable to load plugin {name}, '
'there are {num} entry points to choose from.'
.format(name=plugin_name, num=num_entry_points))
raise ImportError(msg)
plugin = entry_points[0].load()
self.modules['plugins'].append(plugin)
def validate(self):
"""Validates the manifest using the provider and plugin validation functions.
Plugins are not required to have a validate_manifest function
"""
# Run the provider validation
self.modules['provider'].validate_manifest(self.data, self.schema_validator, self.validation_error)
# Run the validation function for any plugin that has it
for plugin in self.modules['plugins']:
validate = getattr(plugin, 'validate_manifest', None)
if callable(validate):
validate(self.data, self.schema_validator, self.validation_error)
def parse(self):
"""Parses the manifest.
Well... "parsing" is a big word.
The function really just sets up some convenient attributes so that tasks
don't have to access information with info.manifest.data['section']
but can do it with info.manifest.section.
"""
self.name = self.data['name']
self.provider = self.data['provider']
self.bootstrapper = self.data['bootstrapper']
self.volume = self.data['volume']
self.system = self.data['system']
from bootstrapvz.common.releases import get_release
self.release = get_release(self.system['release'])
# The packages and plugins section is not required
self.packages = self.data['packages'] if 'packages' in self.data else {}
self.plugins = self.data['plugins'] if 'plugins' in self.data else {}
def schema_validator(self, data, schema_path):
"""This convenience function is passed around to all the validation functions
so that they may run a json-schema validation by giving it the data and a path to the schema.
:param dict data: Data to validate (normally the manifest data)
:param str schema_path: Path to the json-schema to use for validation
"""
import jsonschema
schema = load_data(schema_path)
try:
jsonschema.validate(schema, self.metaschema)
jsonschema.validate(data, schema)
except jsonschema.ValidationError as e:
self.validation_error(e.message, e.path)
def validation_error(self, message, data_path=None):
"""This function is passed to all validation functions so that they may
raise a validation error because a custom validation of the manifest failed.
:param str message: Message to user about the error
:param list data_path: A path to the location in the manifest where the error occurred
:raises ManifestError: With absolute certainty
"""
raise ManifestError(message, self.path, data_path)
def __getstate__(self):
return {'__class__': self.__module__ + '.' + self.__class__.__name__,
'path': self.path,
'metaschema': self.metaschema,
'data': self.data}
def __setstate__(self, state):
self.path = state['path']
self.metaschema = state['metaschema']
self.load_data(state['data'])
self.load_modules()
self.validate()
self.parse()
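The `__getstate__`/`__setstate__` pair serializes only the raw manifest data and rebuilds everything else on load. A toy class (not from bootstrap-vz) following the same pattern:

```python
import pickle

class Config(object):
    """Serialize only the raw data; rebuild derived state on load."""
    def __init__(self, data):
        self.data = data
        self.derived = len(data)          # recomputed, never serialized

    def __getstate__(self):
        return {'data': self.data}        # only ship the raw data

    def __setstate__(self, state):
        self.__init__(state['data'])      # re-run the full setup

restored = pickle.loads(pickle.dumps(Config({'provider': 'ec2'})))
print(restored.derived)  # 1
```

Re-running the setup on load avoids serializing anything that can be derived, which keeps the pickled payload small and lets the receiving side re-validate.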


@@ -1,35 +1,35 @@
class Phase(object):
"""The Phase class represents a phase a task may be in.
It has no function other than to act as an anchor in the task graph.
All phases are instantiated in common.phases
"""
def __init__(self, name, description):
# The name of the phase
self.name = name
# The description of the phase (currently not used anywhere)
self.description = description
def pos(self):
"""Gets the position of the phase
:return: The positional index of the phase in relation to the other phases
:rtype: int
"""
from bootstrapvz.common.phases import order
return next(i for i, phase in enumerate(order) if phase is self)
def __cmp__(self, other):
"""Compares the phase order in relation to the other phases
:return int:
"""
return self.pos() - other.pos()
def __str__(self):
"""
:return: String representation of the phase
:rtype: str
"""
return self.name
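A stripped-down sketch of how phase ordering works: the phase itself stores nothing positional, the ordering lives entirely in a global list. Phase names and descriptions below are made up, and `order` stands in for `bootstrapvz.common.phases.order`. (Note that `__cmp__` is Python 2 only; Python 3 code would define rich comparison methods like `__lt__` from `pos()` instead.)

```python
class Phase(object):
    def __init__(self, name, description):
        self.name = name
        self.description = description

    def pos(self):
        # The position is looked up in the global ordering list by identity
        return next(i for i, phase in enumerate(order) if phase is self)

preparation = Phase('preparation', 'Setting up the bootstrap environment')
volume_creation = Phase('volume-creation', 'Creating the volume')
order = [preparation, volume_creation]  # stand-in for bootstrapvz.common.phases.order
print(preparation.pos() < volume_creation.pos())  # True
```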


@@ -1,12 +1,12 @@
class PackageError(Exception):
"""Raised when an error occurs while handling the packageslist
"""
pass
class SourceError(Exception):
"""Raised when an error occurs while handling the sourceslist
"""
pass


@@ -1,108 +1,108 @@
class PackageList(object):
"""Represents a list of packages
"""
class Remote(object):
"""A remote package with an optional target
"""
def __init__(self, name, target):
"""
:param str name: The name of the package
:param str target: The name of the target release
"""
self.name = name
self.target = target
def __str__(self):
"""Converts the package into something that apt-get install can parse
:rtype: str
"""
if self.target is None:
return self.name
else:
return self.name + '/' + self.target
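The `__str__` output is exactly what gets handed to `apt-get install`. A condensed version of the class shows the two shapes (package and release names are illustrative):

```python
class Remote(object):
    # Condensed from the class above
    def __init__(self, name, target):
        self.name = name
        self.target = target

    def __str__(self):
        if self.target is None:
            return self.name
        return self.name + '/' + self.target

print(Remote('nfs-client', None))                       # nfs-client
print(Remote('linux-image-amd64', 'wheezy-backports'))  # linux-image-amd64/wheezy-backports
```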
class Local(object):
"""A local package
"""
def __init__(self, path):
"""
:param str path: The path to the local package
"""
self.path = path
def __str__(self):
"""
:return: The path to the local package
:rtype: str
"""
return self.path
def __init__(self, manifest_vars, source_lists):
"""
:param dict manifest_vars: The manifest variables
:param SourceLists source_lists: The sourcelists for apt
"""
self.manifest_vars = manifest_vars
self.source_lists = source_lists
# The default_target is the release we are bootstrapping
self.default_target = '{system.release}'.format(**self.manifest_vars)
# The list of packages that should be installed, this is not a set.
# We want to preserve the order in which the packages were added so that local
# packages may be installed in the correct order.
self.install = []
# A function that filters the install list and only returns remote packages
self.remote = lambda: filter(lambda x: isinstance(x, self.Remote), self.install)
def add(self, name, target=None):
"""Adds a package to the install list
:param str name: The name of the package to install, may contain manifest vars references
:param str target: The name of the target release for the package, may contain manifest vars references
:raises PackageError: When a package of the same name but with a different target has already been added.
:raises PackageError: When the specified target release could not be found.
"""
from exceptions import PackageError
name = name.format(**self.manifest_vars)
if target is not None:
target = target.format(**self.manifest_vars)
# Check if the package has already been added.
# If so, make sure it's the same target and raise a PackageError otherwise
package = next((pkg for pkg in self.remote() if pkg.name == name), None)
if package is not None:
# It's the same target if the target names match or one of the targets is None
# and the other is the default target.
same_target = package.target == target
same_target = same_target or package.target is None and target == self.default_target
same_target = same_target or package.target == self.default_target and target is None
if not same_target:
msg = ('The package {name} was already added to the package list, '
'but with target release `{target}\' instead of `{add_target}\''
.format(name=name, target=package.target, add_target=target))
raise PackageError(msg)
# The package has already been added, skip the checks below
return
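The three-clause `same_target` check above can be isolated into a small predicate (the function name is made up; release names are illustrative):

```python
def is_same_target(existing, new, default_target):
    # Two targets are "the same" when they match exactly, or when one is None
    # and the other is the default target
    return (existing == new
            or (existing is None and new == default_target)
            or (existing == default_target and new is None))

print(is_same_target(None, 'jessie', 'jessie'))      # True
print(is_same_target('wheezy', 'jessie', 'jessie'))  # False
```

Treating `None` and the default target as equivalent is what lets one task add a package without a target and another add it with the default target explicitly, without triggering a conflict.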
# Check if the target exists (unless it's the default target) in the sources list
# and raise a PackageError if it does not
if target not in (None, self.default_target) and not self.source_lists.target_exists(target):
msg = ('The target release {target} was not found in the sources list').format(target=target)
raise PackageError(msg)
# Note that we maintain the target value even if it is None.
# This allows us to preserve the semantics of the default target when calling apt-get install
# Why? Try installing nfs-client/wheezy: you can't. It's a virtual package for which you cannot define
# a target release. Only `apt-get install nfs-client` works.
self.install.append(self.Remote(name, target))
def add_local(self, package_path):
"""Adds a local package to the installation list
:param str package_path: Path to the local package, may contain manifest vars references
"""
package_path = package_path.format(**self.manifest_vars)
self.install.append(self.Local(package_path))
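Both `add()` and `add_local()` resolve `{...}` references through `str.format(**manifest_vars)`. A minimal illustration (the variable set is invented; real manifests also expose nested references like `{system.release}` via attribute access on parsed objects, not plain dict keys):

```python
manifest_vars = {'name': 'debian-jessie-amd64', 'release': 'jessie'}
package_path = 'assets/{name}.deb'.format(**manifest_vars)
print(package_path)  # assets/debian-jessie-amd64.deb
```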


@@ -1,42 +1,42 @@
class PreferenceLists(object):
"""Represents a list of preferences lists for apt
"""
def __init__(self, manifest_vars):
"""
:param dict manifest_vars: The manifest variables
"""
# A dictionary with the name of the file in preferences.d as the key
# The values are lists of Preference objects
self.preferences = {}
# Save the manifest variables, we need them later on
self.manifest_vars = manifest_vars
def add(self, name, preferences):
"""Adds a preference to the apt preferences list
:param str name: Name of the file in preferences.list.d, may contain manifest vars references
:param object preferences: The preferences
"""
name = name.format(**self.manifest_vars)
self.preferences[name] = [Preference(p) for p in preferences]
class Preference(object):
"""Represents a single preference
"""
def __init__(self, preference):
"""
:param dict preference: An apt preference dictionary
"""
self.preference = preference
def __str__(self):
"""Convert the object into a preference block
:rtype: str
"""
return "Package: {package}\nPin: {pin}\nPin-Priority: {pin-priority}\n".format(**self.preference)
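To illustrate the formatting above, here is a standalone sketch (not part of the diff, sample values made up) that renders such a preference dictionary into an apt preferences block:

```python
# Standalone sketch of the Preference.__str__ formatting above.
# The dictionary keys mirror a manifest preference entry; values are made up.
preference = {'package': '*', 'pin': 'release o=Debian', 'pin-priority': 900}

# str.format looks the hyphenated key up verbatim when passed via **kwargs
block = "Package: {package}\nPin: {pin}\nPin-Priority: {pin-priority}\n".format(**preference)
print(block)
```

The rendered block is exactly the three-field stanza apt expects in a `preferences.d` file.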


@@ -1,95 +1,95 @@
class SourceLists(object):
"""Represents a list of sources lists for apt
"""
def __init__(self, manifest_vars):
"""
:param dict manifest_vars: The manifest variables
"""
# A dictionary with the name of the file in sources.list.d as the key
# The values are lists of Source objects
self.sources = {}
# Save the manifest variables, we need them later on
self.manifest_vars = manifest_vars
def add(self, name, line):
"""Adds a source to the apt sources list
:param str name: Name of the file in sources.list.d, may contain manifest vars references
:param str line: The line for the source file, may contain manifest vars references
"""
name = name.format(**self.manifest_vars)
line = line.format(**self.manifest_vars)
if name not in self.sources:
self.sources[name] = []
self.sources[name].append(Source(line))
def target_exists(self, target):
"""Checks whether the target exists in the sources list
:param str target: Name of the target to check for, may contain manifest vars references
:return: Whether the target exists
:rtype: bool
"""
target = target.format(**self.manifest_vars)
# Run through all the sources and return True if the target exists
for lines in self.sources.itervalues():
if target in (source.distribution for source in lines):
return True
return False
class Source(object):
"""Represents a single source line
"""
def __init__(self, line):
"""
:param str line: An apt source line
:raises SourceError: When the source line cannot be parsed
"""
# Parse the source line and populate the class attributes with it
# The format is taken from `man sources.list`
# or: http://manpages.debian.org/cgi-bin/man.cgi?sektion=5&query=sources.list&apropos=0&manpath=sid&locale=en
import re
regexp = re.compile('^(?P<type>deb|deb-src)\s+'
'(\[\s*(?P<options>.+\S)?\s*\]\s+)?'
'(?P<uri>\S+)\s+'
'(?P<distribution>\S+)'
'(\s+(?P<components>.+\S))?\s*$')
match = regexp.match(line)
if match is None:
from exceptions import SourceError
raise SourceError('Unable to parse source line: ' + line)
match = match.groupdict()
self.type = match['type']
self.options = []
if match['options'] is not None:
self.options = re.sub(' +', ' ', match['options']).split(' ')
self.uri = match['uri']
self.distribution = match['distribution']
self.components = []
if match['components'] is not None:
self.components = re.sub(' +', ' ', match['components']).split(' ')
def __str__(self):
"""Convert the object into a source line
This is pretty much the reverse of what we're doing in the initialization function.
:rtype: str
"""
options = ''
if len(self.options) > 0:
options = ' [{options}]'.format(options=' '.join(self.options))
components = ''
if len(self.components) > 0:
components = ' {components}'.format(components=' '.join(self.components))
return ('{type}{options} {uri} {distribution}{components}'
.format(type=self.type, options=options,
uri=self.uri, distribution=self.distribution,
components=components))


@@ -1,36 +1,36 @@
class Task(object):
"""The task class represents a task that can be run.
It is merely a wrapper for the run function and should never be instantiated.
"""
# The phase this task is located in.
phase = None
# List of tasks that should run before this task is run
predecessors = []
# List of tasks that should run after this task has run
successors = []
class __metaclass__(type):
"""Metaclass to control how the class is coerced into a string
"""
def __repr__(cls):
"""
:return str: The full module path to the Task
"""
return cls.__module__ + '.' + cls.__name__
def __str__(cls):
"""
:return: The full module path to the Task
:rtype: str
"""
return repr(cls)
@classmethod
def run(cls, info):
"""The run function, all work is done inside this function
:param BootstrapInformation info: The bootstrap info object.
"""
pass


@@ -7,273 +7,273 @@ log = logging.getLogger(__name__)
class TaskList(object):
"""The tasklist class aggregates all tasks that should be run
and orders them according to their dependencies.
"""
def __init__(self, tasks):
self.tasks = tasks
self.tasks_completed = []
def run(self, info, dry_run=False):
"""Converts the taskgraph into a list and runs all tasks in that list
:param dict info: The bootstrap information object
:param bool dry_run: Whether to actually run the tasks or simply step through them
"""
# Get a hold of every task we can find, so that we can topologically sort
# all tasks, rather than just the subset we are going to run.
from bootstrapvz.common import tasks as common_tasks
modules = [common_tasks, info.manifest.modules['provider']] + info.manifest.modules['plugins']
all_tasks = set(get_all_tasks(modules))
# Create a list for us to run
task_list = create_list(self.tasks, all_tasks)
# Output the tasklist
log.debug('Tasklist:\n\t' + ('\n\t'.join(map(repr, task_list))))
for task in task_list:
# Tasks are not required to have a description
if hasattr(task, 'description'):
log.info(task.description)
else:
# If there is no description, simply coerce the task into a string and print its name
log.info('Running ' + str(task))
if not dry_run:
# Run the task
task.run(info)
# Remember which tasks have been run for later use (e.g. when rolling back, because of an error)
self.tasks_completed.append(task)
def load_tasks(function, manifest, *args):
"""Calls ``function`` on the provider and all plugins that have been loaded by the manifest.
Any additional arguments are passed directly to ``function``.
The function that is called shall accept the taskset as its first argument and the manifest
as its second argument.
:param str function: Name of the function to call
:param Manifest manifest: The manifest
:param list args: Additional arguments that should be passed to the function that is called
"""
tasks = set()
# Call 'function' on the provider
getattr(manifest.modules['provider'], function)(tasks, manifest, *args)
for plugin in manifest.modules['plugins']:
# Plugins are not required to have whatever function we call
fn = getattr(plugin, function, None)
if callable(fn):
fn(tasks, manifest, *args)
return tasks
def create_list(taskset, all_tasks):
"""Creates a list of all the tasks that should be run.
"""
from bootstrapvz.common.phases import order
# Make sure all_tasks is a superset of the resolved taskset
if not all_tasks >= taskset:
msg = ('bootstrap-vz generated a list of all available tasks. '
'That list is not a superset of the tasks required for bootstrapping. '
'The tasks that were not found are: {tasks} '
'(This is an error in the code and not the manifest, please report this issue.)'
.format(tasks=', '.join(map(str, taskset - all_tasks)))
)
raise TaskListError(msg)
# Create a graph over all tasks by creating a map of each tasks successors
graph = {}
for task in all_tasks:
# Do a sanity check first
check_ordering(task)
successors = set()
# Add all successors mentioned in the task
successors.update(task.successors)
# Add all tasks that mention this task as a predecessor
successors.update(filter(lambda succ: task in succ.predecessors, all_tasks))
# Create a list of phases that succeed the phase of this task
succeeding_phases = order[order.index(task.phase) + 1:]
# Add all tasks that occur in above mentioned succeeding phases
successors.update(filter(lambda succ: succ.phase in succeeding_phases, all_tasks))
# Map the successors to the task
graph[task] = successors
# Use the strongly connected components algorithm to check for cycles in our task graph
components = strongly_connected_components(graph)
cycles_found = 0
for component in components:
# Node of 1 is also a strongly connected component but hardly a cycle, so we filter them out
if len(component) > 1:
cycles_found += 1
log.debug('Cycle: ' + ', '.join(map(repr, component)))
if cycles_found > 0:
msg = ('{num} cycles were found in the tasklist, '
'consult the logfile for more information.'.format(num=cycles_found))
raise TaskListError(msg)
# Run a topological sort on the graph, returning an ordered list
sorted_tasks = topological_sort(graph)
# Filter out any tasks not in the tasklist
# We want to maintain ordering, so we don't use set intersection
sorted_tasks = filter(lambda task: task in taskset, sorted_tasks)
return sorted_tasks
def get_all_tasks(modules):
"""Gets a list of all task classes in the package
:return: A list of all tasks in the package
:rtype: list
"""
import os.path
# Get generators that return all classes in a module
generators = []
for module in modules:
module_path = os.path.dirname(module.__file__)
module_prefix = module.__name__ + '.'
generators.append(get_all_classes(module_path, module_prefix))
import itertools
classes = itertools.chain(*generators)
# Helper function to check whether a class is a task (excluding the superclass Task)
def is_task(obj):
from task import Task
return issubclass(obj, Task) and obj is not Task
return filter(is_task, classes) # Only return classes that are tasks
def get_all_classes(path=None, prefix='', excludes=[]):
""" Given a path to a package, this function retrieves all the classes in it
:param str path: Path to the package
:param str prefix: Name of the package followed by a dot
:param list excludes: List of str matching module names that should be ignored
:return: A generator that yields classes
:rtype: generator
:raises Exception: If a module cannot be inspected.
"""
import pkgutil
import importlib
import inspect
def walk_error(module_name):
if not any(map(lambda excl: module_name.startswith(excl), excludes)):
raise TaskListError('Unable to inspect module ' + module_name)
walker = pkgutil.walk_packages([path], prefix, walk_error)
for _, module_name, _ in walker:
if any(map(lambda excl: module_name.startswith(excl), excludes)):
continue
module = importlib.import_module(module_name)
classes = inspect.getmembers(module, inspect.isclass)
for class_name, obj in classes:
# We only want classes that are defined in the module, and not imported ones
if obj.__module__ == module_name:
yield obj
def check_ordering(task):
"""Checks the ordering of a task in relation to other tasks and their phases.
This function checks for a subset of what the strongly connected components algorithm does,
but can deliver a more precise error message, namely that there is a conflict between
what a task has specified as its predecessors or successors and in which phase it is placed.
:param Task task: The task to check the ordering for
:raises TaskListError: If there is a conflict between task precedence and phase precedence
"""
for successor in task.successors:
# Run through all successors and throw an error if the phase of the task
# lies before the phase of a successor, log a warning if it lies after.
if task.phase > successor.phase:
msg = ("The task {task} is specified as running before {other}, "
"but its phase '{phase}' lies after the phase '{other_phase}'"
.format(task=task, other=successor, phase=task.phase, other_phase=successor.phase))
raise TaskListError(msg)
if task.phase < successor.phase:
log.warn("The task {task} is specified as running before {other} "
"although its phase '{phase}' already lies before the phase '{other_phase}' "
"(or the task has been placed in the wrong phase)"
.format(task=task, other=successor, phase=task.phase, other_phase=successor.phase))
for predecessor in task.predecessors:
# Run through all successors and throw an error if the phase of the task
# lies after the phase of a predecessor, log a warning if it lies before.
if task.phase < predecessor.phase:
msg = ("The task {task} is specified as running after {other}, "
"but its phase '{phase}' lies before the phase '{other_phase}'"
.format(task=task, other=predecessor, phase=task.phase, other_phase=predecessor.phase))
raise TaskListError(msg)
if task.phase > predecessor.phase:
log.warn("The task {task} is specified as running after {other} "
"although its phase '{phase}' already lies after the phase '{other_phase}' "
"(or the task has been placed in the wrong phase)"
.format(task=task, other=predecessor, phase=task.phase, other_phase=predecessor.phase))
def strongly_connected_components(graph):
"""Find the strongly connected components in a graph using Tarjan's algorithm.
Source: http://www.logarithmic.net/pfh-files/blog/01208083168/sort.py
:param dict graph: mapping of tasks to lists of successor tasks
:return: List of tuples that are strongly connected components
:rtype: list
"""
result = []
stack = []
low = {}
def visit(node):
if node in low:
return
num = len(low)
low[node] = num
stack_pos = len(stack)
stack.append(node)
for successor in graph[node]:
visit(successor)
low[node] = min(low[node], low[successor])
if num == low[node]:
component = tuple(stack[stack_pos:])
del stack[stack_pos:]
result.append(component)
for item in component:
low[item] = len(graph)
for node in graph:
visit(node)
return result
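The cycle detection above can be exercised standalone. The sketch below reuses the same function body (Python 3, with a made-up four-node graph in which `a -> b -> c -> a` forms the only cycle):

```python
# The same Tarjan implementation as above, on plain hashable nodes
# instead of Task classes.
def strongly_connected_components(graph):
    result = []
    stack = []
    low = {}

    def visit(node):
        if node in low:
            return
        num = len(low)
        low[node] = num
        stack_pos = len(stack)
        stack.append(node)
        for successor in graph[node]:
            visit(successor)
            low[node] = min(low[node], low[successor])
        if num == low[node]:
            # node is the root of a component; pop it off the stack
            component = tuple(stack[stack_pos:])
            del stack[stack_pos:]
            result.append(component)
            for item in component:
                low[item] = len(graph)

    for node in graph:
        visit(node)
    return result

# 'a' -> 'b' -> 'c' -> 'a' is a cycle; 'd' only points into it.
graph = {'a': ['b'], 'b': ['c'], 'c': ['a'], 'd': ['a']}
components = strongly_connected_components(graph)
print(components)
```

Single-node components are expected output here, which is exactly why the caller above filters for `len(component) > 1` when counting cycles.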
def topological_sort(graph):
"""Runs a topological sort on a graph.
Source: http://www.logarithmic.net/pfh-files/blog/01208083168/sort.py
:param dict graph: mapping of tasks to lists of successor tasks
:return: A list of all tasks in the graph sorted according to their dependencies
:rtype: list
"""
count = {}
for node in graph:
count[node] = 0
for node in graph:
for successor in graph[node]:
count[successor] += 1
ready = [node for node in graph if count[node] == 0]
result = []
while ready:
node = ready.pop(-1)
result.append(node)
for successor in graph[node]:
count[successor] -= 1
if count[successor] == 0:
ready.append(successor)
return result
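The sort above can likewise be run standalone. This Python 3 sketch reuses the same body on a made-up three-step chain (the step names only mimic bootstrap phases):

```python
# Same Kahn-style topological sort as above, on a tiny successor map.
def topological_sort(graph):
    # Count incoming edges for every node
    count = {node: 0 for node in graph}
    for node in graph:
        for successor in graph[node]:
            count[successor] += 1

    # Start with the nodes nothing depends on
    ready = [node for node in graph if count[node] == 0]

    result = []
    while ready:
        node = ready.pop(-1)
        result.append(node)
        for successor in graph[node]:
            count[successor] -= 1
            if count[successor] == 0:
                ready.append(successor)
    return result

graph = {'partition': ['format'], 'format': ['mount'], 'mount': []}
order = topological_sort(graph)
print(order)
```

With a single chain the order is fully determined; with independent branches any interleaving consistent with the edges may come back, which is why `create_list` filters the sorted list rather than relying on a particular tie-break.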


@@ -2,158 +2,158 @@ from exceptions import UnitError
def onlybytes(msg):
def decorator(func):
def check_other(self, other):
if not isinstance(other, Bytes):
raise UnitError(msg)
return func(self, other)
return check_other
return decorator
class Bytes(object):
units = {'B': 1,
'KiB': 1024,
'MiB': 1024 * 1024,
'GiB': 1024 * 1024 * 1024,
'TiB': 1024 * 1024 * 1024 * 1024,
}
def __init__(self, qty):
if isinstance(qty, (int, long)):
self.qty = qty
else:
self.qty = Bytes.parse(qty)
@staticmethod
def parse(qty_str):
import re
regex = re.compile('^(?P<qty>\d+)(?P<unit>[KMGT]i?B|B)$')
parsed = regex.match(qty_str)
if parsed is None:
raise UnitError('Unable to parse ' + qty_str)
qty = int(parsed.group('qty'))
unit = parsed.group('unit')
if unit[0] in 'KMGT':
unit = unit[0] + 'iB'
byte_qty = qty * Bytes.units[unit]
return byte_qty
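The parsing logic above can be sketched standalone. This Python 3 version uses a raw string for the pattern and a plain `ValueError` in place of the project's `UnitError`:

```python
import re

# Same pattern and unit table as Bytes.parse above.
units = {'B': 1, 'KiB': 1024, 'MiB': 1024 ** 2, 'GiB': 1024 ** 3, 'TiB': 1024 ** 4}
regex = re.compile(r'^(?P<qty>\d+)(?P<unit>[KMGT]i?B|B)$')

def parse(qty_str):
    parsed = regex.match(qty_str)
    if parsed is None:
        raise ValueError('Unable to parse ' + qty_str)
    qty = int(parsed.group('qty'))
    unit = parsed.group('unit')
    # 'KB'/'MB'/'GB'/'TB' are normalized to their binary counterparts
    if unit[0] in 'KMGT':
        unit = unit[0] + 'iB'
    return qty * units[unit]

print(parse('4GiB'))   # 4294967296
print(parse('512MB'))  # 536870912
```

Note the normalization step: the class deliberately treats `MB` as `MiB`, so every quantity is a whole number of binary bytes.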
def get_qty_in(self, unit):
if unit[0] in 'KMGT':
unit = unit[0] + 'iB'
if unit not in Bytes.units:
raise UnitError('Unrecognized unit: ' + unit)
if self.qty % Bytes.units[unit] != 0:
msg = 'Unable to convert {qty} bytes to a whole number in {unit}'.format(qty=self.qty, unit=unit)
raise UnitError(msg)
return self.qty / Bytes.units[unit]
def __repr__(self):
converted = str(self.get_qty_in('B')) + 'B'
if self.qty == 0:
return converted
for unit in ['TiB', 'GiB', 'MiB', 'KiB']:
try:
converted = str(self.get_qty_in(unit)) + unit
break
except UnitError:
pass
return converted
def __str__(self):
return self.__repr__()
def __int__(self):
return self.qty
def __long__(self):
return self.qty
@onlybytes('Can only compare Bytes to Bytes')
def __lt__(self, other):
return self.qty < other.qty
@onlybytes('Can only compare Bytes to Bytes')
def __le__(self, other):
return self.qty <= other.qty
@onlybytes('Can only compare Bytes to Bytes')
def __eq__(self, other):
return self.qty == other.qty
@onlybytes('Can only compare Bytes to Bytes')
def __ne__(self, other):
return self.qty != other.qty
@onlybytes('Can only compare Bytes to Bytes')
def __ge__(self, other):
return self.qty >= other.qty
@onlybytes('Can only compare Bytes to Bytes')
def __gt__(self, other):
return self.qty > other.qty
@onlybytes('Can only add Bytes to Bytes')
def __add__(self, other):
return Bytes(self.qty + other.qty)
@onlybytes('Can only add Bytes to Bytes')
def __iadd__(self, other):
self.qty += other.qty
return self
@onlybytes('Can only subtract Bytes from Bytes')
def __sub__(self, other):
return Bytes(self.qty - other.qty)
@onlybytes('Can only subtract Bytes from Bytes')
def __isub__(self, other):
self.qty -= other.qty
return self
def __mul__(self, other):
if not isinstance(other, (int, long)):
raise UnitError('Can only multiply Bytes with integers')
return Bytes(self.qty * other)
def __mul__(self, other):
if not isinstance(other, (int, long)):
raise UnitError('Can only multiply Bytes with integers')
return Bytes(self.qty * other)
def __imul__(self, other):
if not isinstance(other, (int, long)):
raise UnitError('Can only multiply Bytes with integers')
self.qty *= other
return self
def __imul__(self, other):
if not isinstance(other, (int, long)):
raise UnitError('Can only multiply Bytes with integers')
self.qty *= other
return self
def __div__(self, other):
if isinstance(other, Bytes):
return self.qty / other.qty
if not isinstance(other, (int, long)):
raise UnitError('Can only divide Bytes with integers or Bytes')
return Bytes(self.qty / other)
def __div__(self, other):
if isinstance(other, Bytes):
return self.qty / other.qty
if not isinstance(other, (int, long)):
raise UnitError('Can only divide Bytes with integers or Bytes')
return Bytes(self.qty / other)
def __idiv__(self, other):
if isinstance(other, Bytes):
self.qty /= other.qty
else:
if not isinstance(other, (int, long)):
raise UnitError('Can only divide Bytes with integers or Bytes')
self.qty /= other
return self
def __idiv__(self, other):
if isinstance(other, Bytes):
self.qty /= other.qty
else:
if not isinstance(other, (int, long)):
raise UnitError('Can only divide Bytes with integers or Bytes')
self.qty /= other
return self
@onlybytes('Can only take modulus of Bytes with Bytes')
def __mod__(self, other):
return Bytes(self.qty % other.qty)
@onlybytes('Can only take modulus of Bytes with Bytes')
def __mod__(self, other):
return Bytes(self.qty % other.qty)
@onlybytes('Can only take modulus of Bytes with Bytes')
def __imod__(self, other):
self.qty %= other.qty
return self
@onlybytes('Can only take modulus of Bytes with Bytes')
def __imod__(self, other):
self.qty %= other.qty
return self
def __getstate__(self):
return {'__class__': self.__module__ + '.' + self.__class__.__name__,
'qty': self.qty,
}
def __getstate__(self):
return {'__class__': self.__module__ + '.' + self.__class__.__name__,
'qty': self.qty,
}
def __setstate__(self, state):
self.qty = state['qty']
def __setstate__(self, state):
self.qty = state['qty']
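The `@onlybytes(...)` guard seen throughout this file is a small decorator factory that rejects non-`Bytes` operands before the operator body runs. A minimal, self-contained sketch of the pattern (a hypothetical stand-in, not the bootstrap-vz class itself):

```python
class UnitError(Exception):
    pass

def onlybytes(msg):
    """Raise UnitError(msg) when the other operand is not a Bytes."""
    def decorator(func):
        def check_other(self, other):
            if not isinstance(other, Bytes):
                raise UnitError(msg)
            return func(self, other)
        return check_other
    return decorator

class Bytes(object):
    def __init__(self, qty):
        self.qty = qty

    @onlybytes('Can only add Bytes to Bytes')
    def __add__(self, other):
        return Bytes(self.qty + other.qty)

    @onlybytes('Can only compare Bytes to Bytes')
    def __eq__(self, other):
        return self.qty == other.qty

total = Bytes(1024) + Bytes(512)   # fine: both operands are Bytes
try:
    Bytes(1024) + 512              # rejected by the decorator
    guarded = False
except UnitError:
    guarded = True
```

One decorator factory covers every operator, so the type check lives in exactly one place instead of being repeated in each dunder method.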


@@ -1,38 +1,38 @@
class ManifestError(Exception):
def __init__(self, message, manifest_path=None, data_path=None):
super(ManifestError, self).__init__(message)
self.message = message
self.manifest_path = manifest_path
self.data_path = data_path
self.args = (self.message, self.manifest_path, self.data_path)
def __init__(self, message, manifest_path=None, data_path=None):
super(ManifestError, self).__init__(message)
self.message = message
self.manifest_path = manifest_path
self.data_path = data_path
self.args = (self.message, self.manifest_path, self.data_path)
def __str__(self):
if self.data_path is not None:
path = '.'.join(map(str, self.data_path))
return ('{msg}\n File path: {file}\n Data path: {datapath}'
.format(msg=self.message, file=self.manifest_path, datapath=path))
return '{file}: {msg}'.format(msg=self.message, file=self.manifest_path)
def __str__(self):
if self.data_path is not None:
path = '.'.join(map(str, self.data_path))
return ('{msg}\n File path: {file}\n Data path: {datapath}'
.format(msg=self.message, file=self.manifest_path, datapath=path))
return '{file}: {msg}'.format(msg=self.message, file=self.manifest_path)
class TaskListError(Exception):
def __init__(self, message):
super(TaskListError, self).__init__(message)
self.message = message
self.args = (self.message,)
def __init__(self, message):
super(TaskListError, self).__init__(message)
self.message = message
self.args = (self.message,)
def __str__(self):
return 'Error in tasklist: ' + self.message
def __str__(self):
return 'Error in tasklist: ' + self.message
class TaskError(Exception):
pass
pass
class UnexpectedNumMatchesError(Exception):
pass
pass
class UnitError(Exception):
pass
pass
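`ManifestError.__str__` above renders either a one-line or a multi-line message depending on whether a data path is present. A self-contained copy of that formatting logic, exercised with hypothetical example values:

```python
class ManifestError(Exception):
    def __init__(self, message, manifest_path=None, data_path=None):
        super(ManifestError, self).__init__(message)
        self.message = message
        self.manifest_path = manifest_path
        self.data_path = data_path

    def __str__(self):
        if self.data_path is not None:
            # data_path is a list of keys/indices into the manifest data
            path = '.'.join(map(str, self.data_path))
            return ('{msg}\n  File path: {file}\n  Data path: {datapath}'
                    .format(msg=self.message, file=self.manifest_path, datapath=path))
        return '{file}: {msg}'.format(msg=self.message, file=self.manifest_path)

err = ManifestError('missing key', 'manifest.yml', ['volume', 'partitions', 0])
rendered = str(err)
```

Joining `data_path` with dots means both mapping keys and list indices read naturally, e.g. `volume.partitions.0`.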


@@ -2,32 +2,32 @@ from contextlib import contextmanager
def get_partitions():
import re
regexp = re.compile('^ *(?P<major>\d+) *(?P<minor>\d+) *(?P<num_blks>\d+) (?P<dev_name>\S+)$')
matches = {}
path = '/proc/partitions'
with open(path) as partitions:
next(partitions)
next(partitions)
for line in partitions:
match = regexp.match(line)
if match is None:
raise RuntimeError('Unable to parse {line} in {path}'.format(line=line, path=path))
matches[match.group('dev_name')] = match.groupdict()
return matches
import re
regexp = re.compile('^ *(?P<major>\d+) *(?P<minor>\d+) *(?P<num_blks>\d+) (?P<dev_name>\S+)$')
matches = {}
path = '/proc/partitions'
with open(path) as partitions:
next(partitions)
next(partitions)
for line in partitions:
match = regexp.match(line)
if match is None:
raise RuntimeError('Unable to parse {line} in {path}'.format(line=line, path=path))
matches[match.group('dev_name')] = match.groupdict()
return matches
@contextmanager
def unmounted(volume):
from bootstrapvz.base.fs.partitionmaps.none import NoPartitions
from bootstrapvz.base.fs.partitionmaps.none import NoPartitions
p_map = volume.partition_map
root_dir = p_map.root.mount_dir
p_map.root.unmount()
if not isinstance(p_map, NoPartitions):
p_map.unmap(volume)
yield
p_map.map(volume)
else:
yield
p_map.root.mount(destination=root_dir)
p_map = volume.partition_map
root_dir = p_map.root.mount_dir
p_map.root.unmount()
if not isinstance(p_map, NoPartitions):
p_map.unmap(volume)
yield
p_map.map(volume)
else:
yield
p_map.root.mount(destination=root_dir)
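The `unmounted()` helper above is a context manager that tears state down, yields control to the caller, and rebuilds the state afterwards. A simplified sketch of that shape with hypothetical stand-in objects (not bootstrap-vz's real volume classes):

```python
from contextlib import contextmanager

class FakeVolume(object):
    """Stand-in for a mounted volume; only tracks mount state."""
    def __init__(self):
        self.mounted = True
        self.log = []

    def unmount(self):
        self.mounted = False
        self.log.append('unmount')

    def mount(self):
        self.mounted = True
        self.log.append('mount')

@contextmanager
def unmounted(volume):
    volume.unmount()   # tear down before handing over control
    yield
    volume.mount()     # restore state after the with-block

vol = FakeVolume()
with unmounted(vol):
    state_inside = vol.mounted   # False while the block runs
state_after = vol.mounted        # remounted on exit
```

The real helper additionally unmaps and remaps the partition map around the `yield` when the volume has partitions.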


@@ -3,22 +3,22 @@ from bootstrapvz.base.fs.volume import Volume
class Folder(Volume):
# Override the states this volume can be in (i.e. we can't "format" or "attach" it)
events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'attached'},
{'name': 'delete', 'src': 'attached', 'dst': 'deleted'},
]
# Override the states this volume can be in (i.e. we can't "format" or "attach" it)
events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'attached'},
{'name': 'delete', 'src': 'attached', 'dst': 'deleted'},
]
extension = 'chroot'
extension = 'chroot'
def create(self, path):
self.fsm.create(path=path)
def create(self, path):
self.fsm.create(path=path)
def _before_create(self, e):
import os
self.path = e.path
os.mkdir(self.path)
def _before_create(self, e):
import os
self.path = e.path
os.mkdir(self.path)
def _before_delete(self, e):
from shutil import rmtree
rmtree(self.path)
del self.path
def _before_delete(self, e):
from shutil import rmtree
rmtree(self.path)
del self.path


@@ -4,26 +4,26 @@ from ..tools import log_check_call
class LoopbackVolume(Volume):
extension = 'raw'
extension = 'raw'
def create(self, image_path):
self.fsm.create(image_path=image_path)
def create(self, image_path):
self.fsm.create(image_path=image_path)
def _before_create(self, e):
self.image_path = e.image_path
size_opt = '--size={mib}M'.format(mib=self.size.bytes.get_qty_in('MiB'))
log_check_call(['truncate', size_opt, self.image_path])
def _before_create(self, e):
self.image_path = e.image_path
size_opt = '--size={mib}M'.format(mib=self.size.bytes.get_qty_in('MiB'))
log_check_call(['truncate', size_opt, self.image_path])
def _before_attach(self, e):
[self.loop_device_path] = log_check_call(['losetup', '--show', '--find', self.image_path])
self.device_path = self.loop_device_path
def _before_attach(self, e):
[self.loop_device_path] = log_check_call(['losetup', '--show', '--find', self.image_path])
self.device_path = self.loop_device_path
def _before_detach(self, e):
log_check_call(['losetup', '--detach', self.loop_device_path])
del self.loop_device_path
self.device_path = None
def _before_detach(self, e):
log_check_call(['losetup', '--detach', self.loop_device_path])
del self.loop_device_path
self.device_path = None
def _before_delete(self, e):
from os import remove
remove(self.image_path)
del self.image_path
def _before_delete(self, e):
from os import remove
remove(self.image_path)
del self.image_path


@@ -6,78 +6,78 @@ from . import get_partitions
class QEMUVolume(LoopbackVolume):
def _before_create(self, e):
self.image_path = e.image_path
vol_size = str(self.size.bytes.get_qty_in('MiB')) + 'M'
log_check_call(['qemu-img', 'create', '-f', self.qemu_format, self.image_path, vol_size])
def _before_create(self, e):
self.image_path = e.image_path
vol_size = str(self.size.bytes.get_qty_in('MiB')) + 'M'
log_check_call(['qemu-img', 'create', '-f', self.qemu_format, self.image_path, vol_size])
def _check_nbd_module(self):
from bootstrapvz.base.fs.partitionmaps.none import NoPartitions
if isinstance(self.partition_map, NoPartitions):
if not self._module_loaded('nbd'):
msg = ('The kernel module `nbd\' must be loaded '
'(`modprobe nbd\') to attach .{extension} images'
.format(extension=self.extension))
raise VolumeError(msg)
else:
num_partitions = len(self.partition_map.partitions)
if not self._module_loaded('nbd'):
msg = ('The kernel module `nbd\' must be loaded '
'(run `modprobe nbd max_part={num_partitions}\') '
'to attach .{extension} images'
.format(num_partitions=num_partitions, extension=self.extension))
raise VolumeError(msg)
nbd_max_part = int(self._module_param('nbd', 'max_part'))
if nbd_max_part < num_partitions:
# Found here: http://bethesignal.org/blog/2011/01/05/how-to-mount-virtualbox-vdi-image/
msg = ('The kernel module `nbd\' was loaded with the max_part '
'parameter set to {max_part}, which is below '
'the amount of partitions for this volume ({num_partitions}). '
'Reload the nbd kernel module with max_part set to at least {num_partitions} '
'(`rmmod nbd; modprobe nbd max_part={num_partitions}\').'
.format(max_part=nbd_max_part, num_partitions=num_partitions))
raise VolumeError(msg)
def _check_nbd_module(self):
from bootstrapvz.base.fs.partitionmaps.none import NoPartitions
if isinstance(self.partition_map, NoPartitions):
if not self._module_loaded('nbd'):
msg = ('The kernel module `nbd\' must be loaded '
'(`modprobe nbd\') to attach .{extension} images'
.format(extension=self.extension))
raise VolumeError(msg)
else:
num_partitions = len(self.partition_map.partitions)
if not self._module_loaded('nbd'):
msg = ('The kernel module `nbd\' must be loaded '
'(run `modprobe nbd max_part={num_partitions}\') '
'to attach .{extension} images'
.format(num_partitions=num_partitions, extension=self.extension))
raise VolumeError(msg)
nbd_max_part = int(self._module_param('nbd', 'max_part'))
if nbd_max_part < num_partitions:
# Found here: http://bethesignal.org/blog/2011/01/05/how-to-mount-virtualbox-vdi-image/
msg = ('The kernel module `nbd\' was loaded with the max_part '
'parameter set to {max_part}, which is below '
'the amount of partitions for this volume ({num_partitions}). '
'Reload the nbd kernel module with max_part set to at least {num_partitions} '
'(`rmmod nbd; modprobe nbd max_part={num_partitions}\').'
.format(max_part=nbd_max_part, num_partitions=num_partitions))
raise VolumeError(msg)
def _before_attach(self, e):
self._check_nbd_module()
self.loop_device_path = self._find_free_nbd_device()
log_check_call(['qemu-nbd', '--connect', self.loop_device_path, self.image_path])
self.device_path = self.loop_device_path
def _before_attach(self, e):
self._check_nbd_module()
self.loop_device_path = self._find_free_nbd_device()
log_check_call(['qemu-nbd', '--connect', self.loop_device_path, self.image_path])
self.device_path = self.loop_device_path
def _before_detach(self, e):
log_check_call(['qemu-nbd', '--disconnect', self.loop_device_path])
del self.loop_device_path
self.device_path = None
def _before_detach(self, e):
log_check_call(['qemu-nbd', '--disconnect', self.loop_device_path])
del self.loop_device_path
self.device_path = None
def _module_loaded(self, module):
import re
regexp = re.compile('^{module} +'.format(module=module))
with open('/proc/modules') as loaded_modules:
for line in loaded_modules:
match = regexp.match(line)
if match is not None:
return True
return False
def _module_loaded(self, module):
import re
regexp = re.compile('^{module} +'.format(module=module))
with open('/proc/modules') as loaded_modules:
for line in loaded_modules:
match = regexp.match(line)
if match is not None:
return True
return False
def _module_param(self, module, param):
import os.path
param_path = os.path.join('/sys/module', module, 'parameters', param)
with open(param_path) as param:
return param.read().strip()
def _module_param(self, module, param):
import os.path
param_path = os.path.join('/sys/module', module, 'parameters', param)
with open(param_path) as param:
return param.read().strip()
# From http://lists.gnu.org/archive/html/qemu-devel/2011-11/msg02201.html
# Apparently it's not in the current qemu-nbd shipped with wheezy
def _is_nbd_used(self, device_name):
return device_name in get_partitions()
# From http://lists.gnu.org/archive/html/qemu-devel/2011-11/msg02201.html
# Apparently it's not in the current qemu-nbd shipped with wheezy
def _is_nbd_used(self, device_name):
return device_name in get_partitions()
def _find_free_nbd_device(self):
import os.path
for i in xrange(0, 15):
device_name = 'nbd' + str(i)
if not self._is_nbd_used(device_name):
return os.path.join('/dev', device_name)
raise VolumeError('Unable to find free nbd device.')
def _find_free_nbd_device(self):
import os.path
for i in xrange(0, 15):
device_name = 'nbd' + str(i)
if not self._is_nbd_used(device_name):
return os.path.join('/dev', device_name)
raise VolumeError('Unable to find free nbd device.')
def __setstate__(self, state):
for key in state:
self.__dict__[key] = state[key]
def __setstate__(self, state):
for key in state:
self.__dict__[key] = state[key]
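`_module_loaded` above decides whether the `nbd` kernel module is present by matching module names at the start of each `/proc/modules` line. The same check, run against an in-memory sample instead of the real file:

```python
import re

# Hypothetical excerpt in the format of /proc/modules: "name size refcount ..."
SAMPLE_PROC_MODULES = """\
nbd 40960 0 - Live 0x0000000000000000
loop 28672 4 - Live 0x0000000000000000
"""

def module_loaded(module, contents):
    # Anchor at line start and require a trailing space so 'nbd'
    # does not match a module merely prefixed with those letters.
    regexp = re.compile('^{module} +'.format(module=module))
    for line in contents.splitlines():
        if regexp.match(line) is not None:
            return True
    return False
```

Anchoring plus the mandatory trailing space is what keeps a lookup for `nbd` from matching an unrelated module name that merely starts with `nbd`.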


@@ -3,13 +3,13 @@ from qemuvolume import QEMUVolume
class VirtualDiskImage(QEMUVolume):
extension = 'vdi'
qemu_format = 'vdi'
# VDI format does not have an URI (check here: https://forums.virtualbox.org/viewtopic.php?p=275185#p275185)
ovf_uri = None
extension = 'vdi'
qemu_format = 'vdi'
# VDI format does not have an URI (check here: https://forums.virtualbox.org/viewtopic.php?p=275185#p275185)
ovf_uri = None
def get_uuid(self):
import uuid
with open(self.image_path) as image:
image.seek(392)
return uuid.UUID(bytes_le=image.read(16))
def get_uuid(self):
import uuid
with open(self.image_path) as image:
image.seek(392)
return uuid.UUID(bytes_le=image.read(16))
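`get_uuid()` above reads 16 bytes at a fixed offset of the VDI header and interprets them as a little-endian UUID. The same decoding against an in-memory buffer (the 392-byte offset is specific to the VDI header layout):

```python
import io
import uuid

expected = uuid.UUID('12345678-1234-5678-1234-567812345678')

# Fake image: 392 bytes of header padding, then the UUID in
# little-endian field order, as VirtualBox stores it.
image = io.BytesIO(b'\x00' * 392 + expected.bytes_le)
image.seek(392)
parsed = uuid.UUID(bytes_le=image.read(16))
```

`bytes_le` matters here: the first three UUID fields are stored little-endian on disk, so constructing the UUID with `bytes=` instead would scramble them.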


@@ -4,20 +4,20 @@ from ..tools import log_check_call
class VirtualHardDisk(QEMUVolume):
extension = 'vhd'
qemu_format = 'vpc'
ovf_uri = 'http://go.microsoft.com/fwlink/?LinkId=137171'
extension = 'vhd'
qemu_format = 'vpc'
ovf_uri = 'http://go.microsoft.com/fwlink/?LinkId=137171'
# Azure requires the image size to be a multiple of 1 MiB.
# VHDs are dynamic by default, so we add the option
# to make the image size fixed (subformat=fixed)
def _before_create(self, e):
self.image_path = e.image_path
vol_size = str(self.size.bytes.get_qty_in('MiB')) + 'M'
log_check_call(['qemu-img', 'create', '-o', 'subformat=fixed', '-f', self.qemu_format, self.image_path, vol_size])
# Azure requires the image size to be a multiple of 1 MiB.
# VHDs are dynamic by default, so we add the option
# to make the image size fixed (subformat=fixed)
def _before_create(self, e):
self.image_path = e.image_path
vol_size = str(self.size.bytes.get_qty_in('MiB')) + 'M'
log_check_call(['qemu-img', 'create', '-o', 'subformat=fixed', '-f', self.qemu_format, self.image_path, vol_size])
def get_uuid(self):
if not hasattr(self, 'uuid'):
import uuid
self.uuid = uuid.uuid4()
return self.uuid
def get_uuid(self):
if not hasattr(self, 'uuid'):
import uuid
self.uuid = uuid.uuid4()
return self.uuid


@@ -3,25 +3,25 @@ from qemuvolume import QEMUVolume
class VirtualMachineDisk(QEMUVolume):
extension = 'vmdk'
qemu_format = 'vmdk'
ovf_uri = 'http://www.vmware.com/specifications/vmdk.html#sparse'
extension = 'vmdk'
qemu_format = 'vmdk'
ovf_uri = 'http://www.vmware.com/specifications/vmdk.html#sparse'
def get_uuid(self):
if not hasattr(self, 'uuid'):
import uuid
self.uuid = uuid.uuid4()
return self.uuid
# import uuid
# with open(self.image_path) as image:
# line = ''
# lines_read = 0
# while 'ddb.uuid.image="' not in line:
# line = image.read()
# lines_read += 1
# if lines_read > 100:
# from common.exceptions import VolumeError
# raise VolumeError('Unable to find UUID in VMDK file.')
# import re
# matches = re.search('ddb.uuid.image="(?P<uuid>[^"]+)"', line)
# return uuid.UUID(hex=matches.group('uuid'))
def get_uuid(self):
if not hasattr(self, 'uuid'):
import uuid
self.uuid = uuid.uuid4()
return self.uuid
# import uuid
# with open(self.image_path) as image:
# line = ''
# lines_read = 0
# while 'ddb.uuid.image="' not in line:
# line = image.read()
# lines_read += 1
# if lines_read > 100:
# from common.exceptions import VolumeError
# raise VolumeError('Unable to find UUID in VMDK file.')
# import re
# matches = re.search('ddb.uuid.image="(?P<uuid>[^"]+)"', line)
# return uuid.UUID(hex=matches.group('uuid'))


@@ -2,60 +2,60 @@
class FSMProxy(object):
def __init__(self, cfg):
from fysom import Fysom
events = set([event['name'] for event in cfg['events']])
cfg['callbacks'] = self.collect_event_listeners(events, cfg['callbacks'])
self.fsm = Fysom(cfg)
self.attach_proxy_methods(self.fsm, events)
def __init__(self, cfg):
from fysom import Fysom
events = set([event['name'] for event in cfg['events']])
cfg['callbacks'] = self.collect_event_listeners(events, cfg['callbacks'])
self.fsm = Fysom(cfg)
self.attach_proxy_methods(self.fsm, events)
def collect_event_listeners(self, events, callbacks):
callbacks = callbacks.copy()
callback_names = []
for event in events:
callback_names.append(('_before_' + event, 'onbefore' + event))
callback_names.append(('_after_' + event, 'onafter' + event))
for fn_name, listener in callback_names:
fn = getattr(self, fn_name, None)
if callable(fn):
if listener in callbacks:
old_fn = callbacks[listener]
def collect_event_listeners(self, events, callbacks):
callbacks = callbacks.copy()
callback_names = []
for event in events:
callback_names.append(('_before_' + event, 'onbefore' + event))
callback_names.append(('_after_' + event, 'onafter' + event))
for fn_name, listener in callback_names:
fn = getattr(self, fn_name, None)
if callable(fn):
if listener in callbacks:
old_fn = callbacks[listener]
def wrapper(e, old_fn=old_fn, fn=fn):
old_fn(e)
fn(e)
callbacks[listener] = wrapper
else:
callbacks[listener] = fn
return callbacks
def wrapper(e, old_fn=old_fn, fn=fn):
old_fn(e)
fn(e)
callbacks[listener] = wrapper
else:
callbacks[listener] = fn
return callbacks
def attach_proxy_methods(self, fsm, events):
def make_proxy(fsm, event):
fn = getattr(fsm, event)
def attach_proxy_methods(self, fsm, events):
def make_proxy(fsm, event):
fn = getattr(fsm, event)
def proxy(*args, **kwargs):
if len(args) > 0:
raise FSMProxyError('FSMProxy event listeners only accept named arguments.')
fn(**kwargs)
return proxy
def proxy(*args, **kwargs):
if len(args) > 0:
raise FSMProxyError('FSMProxy event listeners only accept named arguments.')
fn(**kwargs)
return proxy
for event in events:
if not hasattr(self, event):
setattr(self, event, make_proxy(fsm, event))
for event in events:
if not hasattr(self, event):
setattr(self, event, make_proxy(fsm, event))
def __getstate__(self):
state = {}
for key, value in self.__dict__.iteritems():
if callable(value) or key == 'fsm':
continue
state[key] = value
state['__class__'] = self.__module__ + '.' + self.__class__.__name__
return state
def __getstate__(self):
state = {}
for key, value in self.__dict__.iteritems():
if callable(value) or key == 'fsm':
continue
state[key] = value
state['__class__'] = self.__module__ + '.' + self.__class__.__name__
return state
def __setstate__(self, state):
for key in state:
self.__dict__[key] = state[key]
def __setstate__(self, state):
for key in state:
self.__dict__[key] = state[key]
class FSMProxyError(Exception):
pass
pass
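The `wrapper(e, old_fn=old_fn, fn=fn)` signature inside `collect_event_listeners` above is deliberate: default arguments bind `old_fn` and `fn` at definition time, sidestepping Python's late-binding closure behavior when wrappers are created in a loop. A minimal demonstration of the chaining:

```python
calls = []

def old_fn(e):
    calls.append(('old', e))

def fn(e):
    calls.append(('new', e))

# Defaults pin the two callables at definition time, so even if the
# surrounding loop rebinds old_fn/fn later, this wrapper keeps its pair.
def wrapper(e, old_fn=old_fn, fn=fn):
    old_fn(e)
    fn(e)

wrapper('attach')
```

Without the defaults, every wrapper built in the loop would see whatever `old_fn` and `fn` last referred to, chaining the wrong listeners.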


@@ -1,34 +1,34 @@
class _Release(object):
def __init__(self, codename, version):
self.codename = codename
self.version = version
def __init__(self, codename, version):
self.codename = codename
self.version = version
def __cmp__(self, other):
return self.version - other.version
def __cmp__(self, other):
return self.version - other.version
def __str__(self):
return self.codename
def __str__(self):
return self.codename
def __getstate__(self):
state = self.__dict__.copy()
state['__class__'] = self.__module__ + '.' + self.__class__.__name__
return state
def __getstate__(self):
state = self.__dict__.copy()
state['__class__'] = self.__module__ + '.' + self.__class__.__name__
return state
def __setstate__(self, state):
for key in state:
self.__dict__[key] = state[key]
def __setstate__(self, state):
for key in state:
self.__dict__[key] = state[key]
class _ReleaseAlias(_Release):
def __init__(self, alias, release):
self.alias = alias
self.release = release
super(_ReleaseAlias, self).__init__(self.release.codename, self.release.version)
def __init__(self, alias, release):
self.alias = alias
self.release = release
super(_ReleaseAlias, self).__init__(self.release.codename, self.release.version)
def __str__(self):
return self.alias
def __str__(self):
return self.alias
sid = _Release('sid', 10)
@@ -54,15 +54,15 @@ oldstable = _ReleaseAlias('oldstable', wheezy)
def get_release(release_name):
"""Normalizes the release codenames
This allows tasks to query for release codenames rather than 'stable', 'unstable' etc.
"""
from . import releases
release = getattr(releases, release_name, None)
if release is None or not isinstance(release, _Release):
raise UnknownReleaseException('The release `{name}\' is unknown'.format(name=release))
return release
"""Normalizes the release codenames
This allows tasks to query for release codenames rather than 'stable', 'unstable' etc.
"""
from . import releases
release = getattr(releases, release_name, None)
if release is None or not isinstance(release, _Release):
raise UnknownReleaseException('The release `{name}\' is unknown'.format(name=release))
return release
class UnknownReleaseException(Exception):
pass
pass
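`_Release.__cmp__` above makes releases orderable by version number (Python 2 semantics). A sketch of the same idea with rich comparisons, using a hypothetical stand-in class rather than bootstrap-vz's `_Release`:

```python
class Release(object):
    def __init__(self, codename, version):
        self.codename = codename
        self.version = version

    def __lt__(self, other):
        # Order releases purely by their numeric version
        return self.version < other.version

    def __str__(self):
        return self.codename

wheezy = Release('wheezy', 7)
jessie = Release('jessie', 8)
newer = max([jessie, wheezy], key=lambda r: r.version)
```

Ordering by version is what lets tasks write checks like "is this release at least jessie" instead of comparing codename strings.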


@@ -3,176 +3,176 @@ from bytes import Bytes
def onlysectors(msg):
def decorator(func):
def check_other(self, other):
if not isinstance(other, Sectors):
raise UnitError(msg)
return func(self, other)
return check_other
return decorator
def decorator(func):
def check_other(self, other):
if not isinstance(other, Sectors):
raise UnitError(msg)
return func(self, other)
return check_other
return decorator
class Sectors(object):
def __init__(self, quantity, sector_size):
if isinstance(sector_size, Bytes):
self.sector_size = sector_size
else:
self.sector_size = Bytes(sector_size)
def __init__(self, quantity, sector_size):
if isinstance(sector_size, Bytes):
self.sector_size = sector_size
else:
self.sector_size = Bytes(sector_size)
if isinstance(quantity, Bytes):
self.bytes = quantity
else:
if isinstance(quantity, (int, long)):
self.bytes = self.sector_size * quantity
else:
self.bytes = Bytes(quantity)
if isinstance(quantity, Bytes):
self.bytes = quantity
else:
if isinstance(quantity, (int, long)):
self.bytes = self.sector_size * quantity
else:
self.bytes = Bytes(quantity)
def get_sectors(self):
return self.bytes / self.sector_size
def get_sectors(self):
return self.bytes / self.sector_size
def __repr__(self):
return str(self.get_sectors()) + 's'
def __repr__(self):
return str(self.get_sectors()) + 's'
def __str__(self):
return self.__repr__()
def __str__(self):
return self.__repr__()
def __int__(self):
return self.get_sectors()
def __int__(self):
return self.get_sectors()
def __long__(self):
return self.get_sectors()
def __long__(self):
return self.get_sectors()
@onlysectors('Can only compare sectors with sectors')
def __lt__(self, other):
return self.bytes < other.bytes
@onlysectors('Can only compare sectors with sectors')
def __lt__(self, other):
return self.bytes < other.bytes
@onlysectors('Can only compare sectors with sectors')
def __le__(self, other):
return self.bytes <= other.bytes
@onlysectors('Can only compare sectors with sectors')
def __le__(self, other):
return self.bytes <= other.bytes
@onlysectors('Can only compare sectors with sectors')
def __eq__(self, other):
return self.bytes == other.bytes
@onlysectors('Can only compare sectors with sectors')
def __eq__(self, other):
return self.bytes == other.bytes
@onlysectors('Can only compare sectors with sectors')
def __ne__(self, other):
return self.bytes != other.bytes
@onlysectors('Can only compare sectors with sectors')
def __ne__(self, other):
return self.bytes != other.bytes
@onlysectors('Can only compare sectors with sectors')
def __ge__(self, other):
return self.bytes >= other.bytes
@onlysectors('Can only compare sectors with sectors')
def __ge__(self, other):
return self.bytes >= other.bytes
@onlysectors('Can only compare sectors with sectors')
def __gt__(self, other):
return self.bytes > other.bytes
@onlysectors('Can only compare sectors with sectors')
def __gt__(self, other):
return self.bytes > other.bytes
def __add__(self, other):
if isinstance(other, (int, long)):
return Sectors(self.bytes + self.sector_size * other, self.sector_size)
if isinstance(other, Bytes):
return Sectors(self.bytes + other, self.sector_size)
if isinstance(other, Sectors):
if self.sector_size != other.sector_size:
raise UnitError('Cannot sum sectors with different sector sizes')
return Sectors(self.bytes + other.bytes, self.sector_size)
raise UnitError('Can only add sectors, bytes or integers to sectors')
def __add__(self, other):
if isinstance(other, (int, long)):
return Sectors(self.bytes + self.sector_size * other, self.sector_size)
if isinstance(other, Bytes):
return Sectors(self.bytes + other, self.sector_size)
if isinstance(other, Sectors):
if self.sector_size != other.sector_size:
raise UnitError('Cannot sum sectors with different sector sizes')
return Sectors(self.bytes + other.bytes, self.sector_size)
raise UnitError('Can only add sectors, bytes or integers to sectors')
def __iadd__(self, other):
if isinstance(other, (int, long)):
self.bytes += self.sector_size * other
return self
if isinstance(other, Bytes):
self.bytes += other
return self
if isinstance(other, Sectors):
if self.sector_size != other.sector_size:
raise UnitError('Cannot sum sectors with different sector sizes')
self.bytes += other.bytes
return self
raise UnitError('Can only add sectors, bytes or integers to sectors')
def __iadd__(self, other):
if isinstance(other, (int, long)):
self.bytes += self.sector_size * other
return self
if isinstance(other, Bytes):
self.bytes += other
return self
if isinstance(other, Sectors):
if self.sector_size != other.sector_size:
raise UnitError('Cannot sum sectors with different sector sizes')
self.bytes += other.bytes
return self
raise UnitError('Can only add sectors, bytes or integers to sectors')
def __sub__(self, other):
if isinstance(other, (int, long)):
return Sectors(self.bytes - self.sector_size * other, self.sector_size)
if isinstance(other, Bytes):
return Sectors(self.bytes - other, self.sector_size)
if isinstance(other, Sectors):
if self.sector_size != other.sector_size:
raise UnitError('Cannot subtract sectors with different sector sizes')
return Sectors(self.bytes - other.bytes, self.sector_size)
raise UnitError('Can only subtract sectors, bytes or integers from sectors')
def __sub__(self, other):
if isinstance(other, (int, long)):
return Sectors(self.bytes - self.sector_size * other, self.sector_size)
if isinstance(other, Bytes):
return Sectors(self.bytes - other, self.sector_size)
if isinstance(other, Sectors):
if self.sector_size != other.sector_size:
raise UnitError('Cannot subtract sectors with different sector sizes')
return Sectors(self.bytes - other.bytes, self.sector_size)
raise UnitError('Can only subtract sectors, bytes or integers from sectors')
def __isub__(self, other):
if isinstance(other, (int, long)):
self.bytes -= self.sector_size * other
return self
if isinstance(other, Bytes):
self.bytes -= other
return self
if isinstance(other, Sectors):
if self.sector_size != other.sector_size:
raise UnitError('Cannot subtract sectors with different sector sizes')
self.bytes -= other.bytes
return self
raise UnitError('Can only subtract sectors, bytes or integers from sectors')
def __isub__(self, other):
if isinstance(other, (int, long)):
self.bytes -= self.sector_size * other
return self
if isinstance(other, Bytes):
self.bytes -= other
return self
if isinstance(other, Sectors):
if self.sector_size != other.sector_size:
raise UnitError('Cannot subtract sectors with different sector sizes')
self.bytes -= other.bytes
return self
raise UnitError('Can only subtract sectors, bytes or integers from sectors')
def __mul__(self, other):
if isinstance(other, (int, long)):
return Sectors(self.bytes * other, self.sector_size)
else:
raise UnitError('Can only multiply sectors with integers')
def __mul__(self, other):
if isinstance(other, (int, long)):
return Sectors(self.bytes * other, self.sector_size)
else:
raise UnitError('Can only multiply sectors with integers')
def __imul__(self, other):
if isinstance(other, (int, long)):
self.bytes *= other
return self
else:
raise UnitError('Can only multiply sectors with integers')
def __imul__(self, other):
if isinstance(other, (int, long)):
self.bytes *= other
return self
else:
raise UnitError('Can only multiply sectors with integers')
def __div__(self, other):
if isinstance(other, (int, long)):
return Sectors(self.bytes / other, self.sector_size)
if isinstance(other, Sectors):
if self.sector_size == other.sector_size:
return self.bytes / other.bytes
else:
raise UnitError('Cannot divide sectors with different sector sizes')
raise UnitError('Can only divide sectors with integers or sectors')
def __div__(self, other):
if isinstance(other, (int, long)):
return Sectors(self.bytes / other, self.sector_size)
if isinstance(other, Sectors):
if self.sector_size == other.sector_size:
return self.bytes / other.bytes
else:
raise UnitError('Cannot divide sectors with different sector sizes')
raise UnitError('Can only divide sectors with integers or sectors')
def __idiv__(self, other):
if isinstance(other, (int, long)):
self.bytes /= other
return self
if isinstance(other, Sectors):
if self.sector_size == other.sector_size:
self.bytes /= other.bytes
return self
else:
raise UnitError('Cannot divide sectors with different sector sizes')
raise UnitError('Can only divide sectors with integers or sectors')
def __idiv__(self, other):
if isinstance(other, (int, long)):
self.bytes /= other
return self
if isinstance(other, Sectors):
if self.sector_size == other.sector_size:
self.bytes /= other.bytes
return self
else:
raise UnitError('Cannot divide sectors with different sector sizes')
raise UnitError('Can only divide sectors with integers or sectors')
@onlysectors('Can only take modulus of sectors with sectors')
def __mod__(self, other):
if self.sector_size == other.sector_size:
return Sectors(self.bytes % other.bytes, self.sector_size)
else:
raise UnitError('Cannot take modulus of sectors with different sector sizes')
@onlysectors('Can only take modulus of sectors with sectors')
def __mod__(self, other):
if self.sector_size == other.sector_size:
return Sectors(self.bytes % other.bytes, self.sector_size)
else:
raise UnitError('Cannot take modulus of sectors with different sector sizes')
@onlysectors('Can only take modulus of sectors with sectors')
def __imod__(self, other):
if self.sector_size == other.sector_size:
self.bytes %= other.bytes
return self
else:
raise UnitError('Cannot take modulus of sectors with different sector sizes')
@onlysectors('Can only take modulus of sectors with sectors')
def __imod__(self, other):
if self.sector_size == other.sector_size:
self.bytes %= other.bytes
return self
else:
raise UnitError('Cannot take modulus of sectors with different sector sizes')
def __getstate__(self):
return {'__class__': self.__module__ + '.' + self.__class__.__name__,
'sector_size': self.sector_size,
'bytes': self.bytes,
}
def __setstate__(self, state):
self.sector_size = state['sector_size']
self.bytes = state['bytes']
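The modulus logic above can be exercised in isolation. This is a minimal, self-contained sketch: `Sectors` and `UnitError` mirror the names in the source, but the constructor and everything else about the real class are stripped down for illustration.

```python
class UnitError(Exception):
    pass


class Sectors(object):
    """Minimal sketch of the sector arithmetic shown above."""
    def __init__(self, bytes, sector_size):
        self.bytes = bytes
        self.sector_size = sector_size

    def __mod__(self, other):
        # Modulus is only defined between sectors of identical sector size
        if not isinstance(other, Sectors):
            raise UnitError('Can only take modulus of sectors with sectors')
        if self.sector_size != other.sector_size:
            raise UnitError('Cannot take modulus of sectors with different sector sizes')
        return Sectors(self.bytes % other.bytes, self.sector_size)


a = Sectors(4608, 512)
b = Sectors(4096, 512)
print((a % b).bytes)  # 512
```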
@@ -20,39 +20,39 @@ from tasks import folder
def get_standard_groups(manifest):
group = []
group.extend(get_base_group(manifest))
group.extend(volume_group)
if manifest.volume['partitions']['type'] != 'none':
group.extend(partitioning_group)
if 'boot' in manifest.volume['partitions']:
group.extend(boot_partition_group)
group.extend(mounting_group)
group.extend(kernel_group)
group.extend(get_fs_specific_group(manifest))
group.extend(get_network_group(manifest))
group.extend(get_apt_group(manifest))
group.extend(security_group)
group.extend(get_locale_group(manifest))
group.extend(get_bootloader_group(manifest))
group.extend(cleanup_group)
return group
def get_base_group(manifest):
group = [workspace.CreateWorkspace,
bootstrap.AddRequiredCommands,
host.CheckExternalCommands,
bootstrap.Bootstrap,
workspace.DeleteWorkspace,
]
if manifest.bootstrapper.get('tarball', False):
group.append(bootstrap.MakeTarball)
if manifest.bootstrapper.get('include_packages', False):
group.append(bootstrap.IncludePackagesInBootstrap)
if manifest.bootstrapper.get('exclude_packages', False):
group.append(bootstrap.ExcludePackagesInBootstrap)
return group
volume_group = [volume.Attach,
@@ -95,95 +95,95 @@ ssh_group = [ssh.AddOpenSSHPackage,
def get_network_group(manifest):
if manifest.bootstrapper.get('variant', None) == 'minbase':
# minbase has no networking
return []
group = [network.ConfigureNetworkIF,
network.RemoveDNSInfo]
if manifest.system.get('hostname', False):
group.append(network.SetHostname)
else:
group.append(network.RemoveHostname)
return group
def get_apt_group(manifest):
group = [apt.AddDefaultSources,
apt.WriteSources,
apt.DisableDaemonAutostart,
apt.AptUpdate,
apt.AptUpgrade,
packages.InstallPackages,
apt.PurgeUnusedPackages,
apt.AptClean,
apt.EnableDaemonAutostart,
]
if 'sources' in manifest.packages:
group.append(apt.AddManifestSources)
if 'trusted-keys' in manifest.packages:
group.append(apt.InstallTrustedKeys)
if 'preferences' in manifest.packages:
group.append(apt.AddManifestPreferences)
group.append(apt.WritePreferences)
if 'apt.conf.d' in manifest.packages:
group.append(apt.WriteConfiguration)
if 'install' in manifest.packages:
group.append(packages.AddManifestPackages)
if manifest.packages.get('install_standard', False):
group.append(packages.AddTaskselStandardPackages)
return group
security_group = [security.EnableShadowConfig]
def get_locale_group(manifest):
from bootstrapvz.common.releases import jessie
group = [
locale.LocaleBootstrapPackage,
locale.GenerateLocale,
locale.SetTimezone,
]
if manifest.release > jessie:
group.append(locale.SetLocalTimeLink)
else:
group.append(locale.SetLocalTimeCopy)
return group
def get_bootloader_group(manifest):
from bootstrapvz.common.releases import jessie
group = []
if manifest.system['bootloader'] == 'grub':
group.extend([grub.AddGrubPackage,
grub.ConfigureGrub])
if manifest.release < jessie:
group.append(grub.InstallGrub_1_99)
else:
group.append(grub.InstallGrub_2)
if manifest.system['bootloader'] == 'extlinux':
group.append(extlinux.AddExtlinuxPackage)
if manifest.release < jessie:
group.extend([extlinux.ConfigureExtlinux,
extlinux.InstallExtlinux])
else:
group.extend([extlinux.ConfigureExtlinuxJessie,
extlinux.InstallExtlinuxJessie])
return group
def get_fs_specific_group(manifest):
partitions = manifest.volume['partitions']
fs_specific_tasks = {'ext2': [filesystem.TuneVolumeFS],
'ext3': [filesystem.TuneVolumeFS],
'ext4': [filesystem.TuneVolumeFS],
'xfs': [filesystem.AddXFSProgs],
}
group = set()
if 'boot' in partitions:
group.update(fs_specific_tasks.get(partitions['boot']['filesystem'], []))
if 'root' in partitions:
group.update(fs_specific_tasks.get(partitions['root']['filesystem'], []))
return list(group)
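The set in `get_fs_specific_group` dedupes tasks shared by the boot and root filesystems. A sketch of the same selection logic, with hypothetical strings standing in for the task classes:

```python
def fs_specific_tasks_for(partitions):
    # Hypothetical string names stand in for the task classes above;
    # a set dedupes tasks shared by the boot and root filesystems.
    fs_specific_tasks = {'ext2': ['TuneVolumeFS'],
                         'ext3': ['TuneVolumeFS'],
                         'ext4': ['TuneVolumeFS'],
                         'xfs': ['AddXFSProgs'],
                         }
    group = set()
    for name in ('boot', 'root'):
        if name in partitions:
            group.update(fs_specific_tasks.get(partitions[name]['filesystem'], []))
    return list(group)


# ext2 boot + ext4 root collapse to a single TuneVolumeFS entry
print(fs_specific_tasks_for({'boot': {'filesystem': 'ext2'},
                             'root': {'filesystem': 'ext4'}}))  # ['TuneVolumeFS']
```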
cleanup_group = [cleanup.ClearMOTD,
@@ -202,11 +202,11 @@ rollback_map = {workspace.CreateWorkspace: workspace.DeleteWorkspace,
def get_standard_rollback_tasks(completed):
rollback_tasks = set()
for task in completed:
if task not in rollback_map:
continue
counter = rollback_map[task]
if task in completed and counter not in completed:
rollback_tasks.add(counter)
return rollback_tasks
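The rollback selection above can be sketched with plain strings in place of the task classes: a counter task is scheduled only when its forward task completed and the counter itself has not already run. The map entries here are hypothetical miniatures of the real `rollback_map`.

```python
# Hypothetical miniature of the rollback mapping above
rollback_map = {'CreateWorkspace': 'DeleteWorkspace',
                'Attach': 'Detach',
                }


def get_standard_rollback_tasks(completed):
    rollback_tasks = set()
    for task in completed:
        if task not in rollback_map:
            continue
        counter = rollback_map[task]
        # Only roll back if the counter task has not already run
        if counter not in completed:
            rollback_tasks.add(counter)
    return rollback_tasks


# The volume was already detached, so only the workspace needs cleanup
print(get_standard_rollback_tasks(['CreateWorkspace', 'Attach', 'Detach']))  # {'DeleteWorkspace'}
```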
@@ -5,48 +5,48 @@ from . import assets
class UpdateInitramfs(Task):
description = 'Updating initramfs'
phase = phases.system_modification
@classmethod
def run(cls, info):
from ..tools import log_check_call
log_check_call(['chroot', info.root, 'update-initramfs', '-u'])
class BlackListModules(Task):
description = 'Blacklisting kernel modules'
phase = phases.system_modification
successors = [UpdateInitramfs]
@classmethod
def run(cls, info):
blacklist_path = os.path.join(info.root, 'etc/modprobe.d/blacklist.conf')
with open(blacklist_path, 'a') as blacklist:
blacklist.write(('# disable pc speaker and floppy\n'
'blacklist pcspkr\n'
'blacklist floppy\n'))
class DisableGetTTYs(Task):
description = 'Disabling getty processes'
phase = phases.system_modification
@classmethod
def run(cls, info):
# Forward compatible check for jessie
from bootstrapvz.common.releases import jessie
if info.manifest.release < jessie:
from ..tools import sed_i
inittab_path = os.path.join(info.root, 'etc/inittab')
tty1 = '1:2345:respawn:/sbin/getty 38400 tty1'
sed_i(inittab_path, '^' + tty1, '#' + tty1)
ttyx = ':23:respawn:/sbin/getty 38400 tty'
for i in range(2, 7):
i = str(i)
sed_i(inittab_path, '^' + i + ttyx + i, '#' + i + ttyx + i)
else:
from shutil import copy
logind_asset_path = os.path.join(assets, 'systemd/logind.conf')
logind_destination = os.path.join(info.root, 'etc/systemd/logind.conf')
copy(logind_asset_path, logind_destination)
@@ -8,107 +8,107 @@ log = logging.getLogger(__name__)
class AddRequiredCommands(Task):
description = 'Adding commands required for bootstrapping Debian'
phase = phases.preparation
successors = [host.CheckExternalCommands]
@classmethod
def run(cls, info):
info.host_dependencies['debootstrap'] = 'debootstrap'
def get_bootstrap_args(info):
executable = ['debootstrap']
arch = info.manifest.system.get('userspace_architecture', info.manifest.system.get('architecture'))
options = ['--arch=' + arch]
if 'variant' in info.manifest.bootstrapper:
options.append('--variant=' + info.manifest.bootstrapper['variant'])
if len(info.include_packages) > 0:
options.append('--include=' + ','.join(info.include_packages))
if len(info.exclude_packages) > 0:
options.append('--exclude=' + ','.join(info.exclude_packages))
mirror = info.manifest.bootstrapper.get('mirror', info.apt_mirror)
arguments = [info.manifest.system['release'], info.root, mirror]
return executable, options, arguments
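The option assembly in `get_bootstrap_args` can be sketched without the `info` object. This hypothetical stand-in keeps the same option layout but takes plain parameters; the parameter names are assumptions, not part of the source.

```python
def build_debootstrap_invocation(release, target, mirror, arch,
                                 variant=None, include=(), exclude=()):
    # Sketch of get_bootstrap_args(info): same flags, driven by plain
    # parameters instead of the manifest/info object.
    executable = ['debootstrap']
    options = ['--arch=' + arch]
    if variant is not None:
        options.append('--variant=' + variant)
    if include:
        options.append('--include=' + ','.join(include))
    if exclude:
        options.append('--exclude=' + ','.join(exclude))
    arguments = [release, target, mirror]
    return executable + options + arguments


print(build_debootstrap_invocation('jessie', '/target/root',
                                   'http://httpredir.debian.org/debian',
                                   'amd64', variant='minbase'))
```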
def get_tarball_filename(info):
from hashlib import sha1
executable, options, arguments = get_bootstrap_args(info)
# Filter out info.root, which points at /target/volume-id; including it would make the hash differ on every run and never match a cached tarball.
hash_args = [arg for arg in arguments if arg != info.root]
tarball_id = sha1(repr(frozenset(options + hash_args))).hexdigest()[0:8]
tarball_filename = 'debootstrap-' + tarball_id + '.tar'
return os.path.join(info.manifest.bootstrapper['workspace'], tarball_filename)
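The cache-key derivation above hashes the debootstrap invocation (minus the per-run target directory) into a short id. A sketch under assumed parameter names; note the real code hashes `repr(frozenset(...))`, whereas this sketch sorts the items so the id stays stable under Python 3's hash randomization.

```python
from hashlib import sha1


def tarball_filename(workspace, options, arguments, target_root):
    # Drop the target directory: it changes per run and must not
    # influence the cache key.
    hash_args = [arg for arg in arguments if arg != target_root]
    # Sorted (not frozenset) so the digest is reproducible across runs
    key = repr(sorted(options + hash_args)).encode('utf-8')
    tarball_id = sha1(key).hexdigest()[0:8]
    return workspace + '/debootstrap-' + tarball_id + '.tar'
```

Two runs with different target directories but otherwise identical options map to the same tarball, which is exactly what lets `Bootstrap` reuse a tarball that `MakeTarball` produced earlier.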
class MakeTarball(Task):
description = 'Creating bootstrap tarball'
phase = phases.os_installation
@classmethod
def run(cls, info):
executable, options, arguments = get_bootstrap_args(info)
tarball = get_tarball_filename(info)
if os.path.isfile(tarball):
log.debug('Found matching tarball, skipping creation')
else:
from ..tools import log_call
status, out, err = log_call(executable + options + ['--make-tarball=' + tarball] + arguments)
if status not in [0, 1]: # variant=minbase exits with 0
msg = 'debootstrap exited with status {status}, it should exit with status 0 or 1'.format(status=status)
raise TaskError(msg)
class Bootstrap(Task):
description = 'Installing Debian'
phase = phases.os_installation
predecessors = [MakeTarball]
@classmethod
def run(cls, info):
executable, options, arguments = get_bootstrap_args(info)
tarball = get_tarball_filename(info)
if os.path.isfile(tarball):
if not info.manifest.bootstrapper.get('tarball', False):
# Only show this message if we have not already tried to create the tarball
log.debug('Found matching tarball, skipping download')
options.extend(['--unpack-tarball=' + tarball])
if info.bootstrap_script is not None:
# Optional bootstrapping script to modify the bootstrapping process
arguments.append(info.bootstrap_script)
try:
from ..tools import log_check_call
log_check_call(executable + options + arguments)
except KeyboardInterrupt:
# Sometimes ../root/sys and ../root/proc are still mounted when
# quitting debootstrap prematurely. This breaks the cleanup process,
# so we unmount manually (ignore the exit code, the dirs may not be mounted).
from ..tools import log_call
log_call(['umount', os.path.join(info.root, 'sys')])
log_call(['umount', os.path.join(info.root, 'proc')])
raise
class IncludePackagesInBootstrap(Task):
description = 'Add packages in the bootstrap phase'
phase = phases.preparation
@classmethod
def run(cls, info):
info.include_packages.update(
set(info.manifest.bootstrapper['include_packages'])
)
class ExcludePackagesInBootstrap(Task):
description = 'Remove packages from bootstrap phase'
phase = phases.preparation
@classmethod
def run(cls, info):
info.exclude_packages.update(
set(info.manifest.bootstrapper['exclude_packages'])
)
@@ -5,28 +5,28 @@ import shutil
class ClearMOTD(Task):
description = 'Clearing the MOTD'
phase = phases.system_cleaning
@classmethod
def run(cls, info):
with open('/var/run/motd', 'w'):
pass
class CleanTMP(Task):
description = 'Removing temporary files'
phase = phases.system_cleaning
@classmethod
def run(cls, info):
tmp = os.path.join(info.root, 'tmp')
for tmp_file in [os.path.join(tmp, f) for f in os.listdir(tmp)]:
if os.path.isfile(tmp_file):
os.remove(tmp_file)
else:
shutil.rmtree(tmp_file)
log = os.path.join(info.root, 'var/log/')
os.remove(os.path.join(log, 'bootstrap.log'))
os.remove(os.path.join(log, 'dpkg.log'))
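The walk in `CleanTMP.run` above (unlink files, recursively remove directories, keep the tmp directory itself) is easy to demonstrate against a throwaway directory:

```python
import os
import shutil
import tempfile


def clean_tmp(tmp):
    # Same walk as CleanTMP.run above: files are unlinked, directories
    # removed recursively, the tmp directory itself is kept.
    for entry in [os.path.join(tmp, f) for f in os.listdir(tmp)]:
        if os.path.isfile(entry):
            os.remove(entry)
        else:
            shutil.rmtree(entry)


tmp = tempfile.mkdtemp()
open(os.path.join(tmp, 'a'), 'w').close()
os.makedirs(os.path.join(tmp, 'sub', 'dir'))
clean_tmp(tmp)
print(os.listdir(tmp))  # []
```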
@@ -3,11 +3,11 @@ from .. import phases
class TriggerRollback(Task):
phase = phases.cleaning
description = 'Triggering a rollback by throwing an exception'
@classmethod
def run(cls, info):
from ..exceptions import TaskError
raise TaskError('Trigger rollback')
@@ -8,107 +8,107 @@ import os
class AddExtlinuxPackage(Task):
description = 'Adding extlinux package'
phase = phases.preparation
@classmethod
def run(cls, info):
info.packages.add('extlinux')
if isinstance(info.volume.partition_map, partitionmaps.gpt.GPTPartitionMap):
info.packages.add('syslinux-common')
class ConfigureExtlinux(Task):
description = 'Configuring extlinux'
phase = phases.system_modification
predecessors = [filesystem.FStab]
@classmethod
def run(cls, info):
from bootstrapvz.common.releases import squeeze
if info.manifest.release == squeeze:
# On squeeze /etc/default/extlinux is generated when running extlinux-update
log_check_call(['chroot', info.root,
'extlinux-update'])
from bootstrapvz.common.tools import sed_i
extlinux_def = os.path.join(info.root, 'etc/default/extlinux')
sed_i(extlinux_def, r'^EXTLINUX_PARAMETERS="([^"]+)"$',
r'EXTLINUX_PARAMETERS="\1 console=ttyS0"')
class InstallExtlinux(Task):
description = 'Installing extlinux'
phase = phases.system_modification
predecessors = [filesystem.FStab, ConfigureExtlinux]
@classmethod
def run(cls, info):
if isinstance(info.volume.partition_map, partitionmaps.gpt.GPTPartitionMap):
bootloader = '/usr/lib/syslinux/gptmbr.bin'
else:
bootloader = '/usr/lib/extlinux/mbr.bin'
log_check_call(['chroot', info.root,
'dd', 'bs=440', 'count=1',
'if=' + bootloader,
'of=' + info.volume.device_path])
log_check_call(['chroot', info.root,
'extlinux',
'--install', '/boot/extlinux'])
log_check_call(['chroot', info.root,
'extlinux-update'])
class ConfigureExtlinuxJessie(Task):
description = 'Configuring extlinux'
phase = phases.system_modification
@classmethod
def run(cls, info):
extlinux_path = os.path.join(info.root, 'boot/extlinux')
os.mkdir(extlinux_path)
from . import assets
with open(os.path.join(assets, 'extlinux/extlinux.conf')) as template:
extlinux_config_tpl = template.read()
config_vars = {'root_uuid': info.volume.partition_map.root.get_uuid(),
'kernel_version': info.kernel_version}
# Check if / and /boot are on the same partition
# If not, /boot will actually be / when booting
if hasattr(info.volume.partition_map, 'boot'):
config_vars['boot_prefix'] = ''
else:
config_vars['boot_prefix'] = '/boot'
extlinux_config = extlinux_config_tpl.format(**config_vars)
with open(os.path.join(extlinux_path, 'extlinux.conf'), 'w') as extlinux_conf_handle:
extlinux_conf_handle.write(extlinux_config)
# Copy the boot message
from shutil import copy
boot_txt_path = os.path.join(assets, 'extlinux/boot.txt')
copy(boot_txt_path, os.path.join(extlinux_path, 'boot.txt'))
class InstallExtlinuxJessie(Task):
description = 'Installing extlinux'
phase = phases.system_modification
predecessors = [filesystem.FStab, ConfigureExtlinuxJessie]
# Make sure the kernel image is updated after we have installed the bootloader
successors = [kernel.UpdateInitramfs]
@classmethod
def run(cls, info):
if isinstance(info.volume.partition_map, partitionmaps.gpt.GPTPartitionMap):
# Yeah, somebody saw it fit to uppercase that folder in jessie. Why? BECAUSE
bootloader = '/usr/lib/EXTLINUX/gptmbr.bin'
else:
bootloader = '/usr/lib/EXTLINUX/mbr.bin'
log_check_call(['chroot', info.root,
'dd', 'bs=440', 'count=1',
'if=' + bootloader,
'of=' + info.volume.device_path])
log_check_call(['chroot', info.root,
'extlinux',
'--install', '/boot/extlinux'])
@@ -7,196 +7,196 @@ import volume
class AddRequiredCommands(Task):
description = 'Adding commands required for formatting'
phase = phases.preparation
successors = [host.CheckExternalCommands]
@classmethod
def run(cls, info):
if 'xfs' in (p.filesystem for p in info.volume.partition_map.partitions):
info.host_dependencies['mkfs.xfs'] = 'xfsprogs'
class Format(Task):
description = 'Formatting the volume'
phase = phases.volume_preparation
@classmethod
def run(cls, info):
from bootstrapvz.base.fs.partitions.unformatted import UnformattedPartition
for partition in info.volume.partition_map.partitions:
if isinstance(partition, UnformattedPartition):
continue
partition.format()
class TuneVolumeFS(Task):
description = 'Tuning the bootstrap volume filesystem'
phase = phases.volume_preparation
predecessors = [Format]
@classmethod
def run(cls, info):
from bootstrapvz.base.fs.partitions.unformatted import UnformattedPartition
import re
# Disable the time based filesystem check
for partition in info.volume.partition_map.partitions:
if isinstance(partition, UnformattedPartition):
continue
if re.match('^ext[2-4]$', partition.filesystem) is not None:
log_check_call(['tune2fs', '-i', '0', partition.device_path])
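The filesystem check in `TuneVolumeFS.run` keys off the same regex shown above; only ext2/3/4 partitions get the time-based fsck interval disabled with `tune2fs -i 0`. The predicate can be isolated and tested:

```python
import re


def needs_tuning(filesystem):
    # The same check TuneVolumeFS uses: tune2fs only applies to ext2/3/4
    return re.match('^ext[2-4]$', filesystem) is not None


print([fs for fs in ['ext2', 'ext4', 'xfs', 'ext5'] if needs_tuning(fs)])  # ['ext2', 'ext4']
```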
class AddXFSProgs(Task):
description = 'Adding `xfsprogs\' to the image packages'
phase = phases.preparation
@classmethod
def run(cls, info):
info.packages.add('xfsprogs')
class CreateMountDir(Task):
description = 'Creating mountpoint for the root partition'
phase = phases.volume_mounting
@classmethod
def run(cls, info):
import os
info.root = os.path.join(info.workspace, 'root')
os.makedirs(info.root)
class MountRoot(Task):
description = 'Mounting the root partition'
phase = phases.volume_mounting
predecessors = [CreateMountDir]
@classmethod
def run(cls, info):
info.volume.partition_map.root.mount(destination=info.root)
class CreateBootMountDir(Task):
description = 'Creating mountpoint for the boot partition'
phase = phases.volume_mounting
predecessors = [MountRoot]
@classmethod
def run(cls, info):
import os.path
os.makedirs(os.path.join(info.root, 'boot'))
class MountBoot(Task):
description = 'Mounting the boot partition'
phase = phases.volume_mounting
predecessors = [CreateBootMountDir]
@classmethod
def run(cls, info):
p_map = info.volume.partition_map
p_map.root.add_mount(p_map.boot, 'boot')
class MountSpecials(Task):
description = 'Mounting special block devices'
phase = phases.os_installation
predecessors = [bootstrap.Bootstrap]
@classmethod
def run(cls, info):
root = info.volume.partition_map.root
root.add_mount('/dev', 'dev', ['--bind'])
root.add_mount('none', 'proc', ['--types', 'proc'])
root.add_mount('none', 'sys', ['--types', 'sysfs'])
root.add_mount('none', 'dev/pts', ['--types', 'devpts'])
class CopyMountTable(Task):
description = 'Copying mtab from host system'
phase = phases.os_installation
predecessors = [MountSpecials]
@classmethod
def run(cls, info):
import shutil
import os.path
shutil.copy('/proc/mounts', os.path.join(info.root, 'etc/mtab'))
class UnmountRoot(Task):
description = 'Unmounting the bootstrap volume'
phase = phases.volume_unmounting
successors = [volume.Detach]
@classmethod
def run(cls, info):
info.volume.partition_map.root.unmount()
class RemoveMountTable(Task):
description = 'Removing mtab'
phase = phases.volume_unmounting
successors = [UnmountRoot]
@classmethod
def run(cls, info):
import os
os.remove(os.path.join(info.root, 'etc/mtab'))
@classmethod
def run(cls, info):
import os
os.remove(os.path.join(info.root, 'etc/mtab'))
class DeleteMountDir(Task):
    description = 'Deleting mountpoint for the bootstrap volume'
    phase = phases.volume_unmounting
    predecessors = [UnmountRoot]

    @classmethod
    def run(cls, info):
        import os
        os.rmdir(info.root)
        del info.root

class FStab(Task):
    description = 'Adding partitions to the fstab'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        import os.path
        p_map = info.volume.partition_map
        mount_points = [{'path': '/',
                         'partition': p_map.root,
                         'dump': '1',
                         'pass_num': '1',
                         }]
        if hasattr(p_map, 'boot'):
            mount_points.append({'path': '/boot',
                                 'partition': p_map.boot,
                                 'dump': '1',
                                 'pass_num': '2',
                                 })
        if hasattr(p_map, 'swap'):
            mount_points.append({'path': 'none',
                                 'partition': p_map.swap,
                                 'dump': '1',
                                 'pass_num': '0',
                                 })

        fstab_lines = []
        for mount_point in mount_points:
            partition = mount_point['partition']
            mount_opts = ['defaults']
            fstab_lines.append('UUID={uuid} {mountpoint} {filesystem} {mount_opts} {dump} {pass_num}'
                               .format(uuid=partition.get_uuid(),
                                       mountpoint=mount_point['path'],
                                       filesystem=partition.filesystem,
                                       mount_opts=','.join(mount_opts),
                                       dump=mount_point['dump'],
                                       pass_num=mount_point['pass_num']))

        fstab_path = os.path.join(info.root, 'etc/fstab')
        with open(fstab_path, 'w') as fstab:
            fstab.write('\n'.join(fstab_lines))
            fstab.write('\n')
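For reference, the fstab line assembled above follows the standard six-field layout (device, mount point, filesystem, options, dump, pass). A standalone sketch of the same format string, not bootstrap-vz code; the UUID and filesystem values are made up:

```python
# Mirrors the format string used in FStab.run; all values here are hypothetical.
def format_fstab_line(uuid, mountpoint, filesystem, mount_opts, dump, pass_num):
    return 'UUID={uuid} {mountpoint} {filesystem} {mount_opts} {dump} {pass_num}'.format(
        uuid=uuid, mountpoint=mountpoint, filesystem=filesystem,
        mount_opts=','.join(mount_opts), dump=dump, pass_num=pass_num)

print(format_fstab_line('0000-abcd', '/', 'ext4', ['defaults'], '1', '1'))
# UUID=0000-abcd / ext4 defaults 1 1
```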


@@ -5,23 +5,23 @@ import workspace
class Create(Task):
    description = 'Creating volume folder'
    phase = phases.volume_creation
    successors = [volume.Attach]

    @classmethod
    def run(cls, info):
        import os.path
        info.root = os.path.join(info.workspace, 'root')
        info.volume.create(info.root)

class Delete(Task):
    description = 'Deleting volume folder'
    phase = phases.cleaning
    successors = [workspace.DeleteWorkspace]

    @classmethod
    def run(cls, info):
        info.volume.delete()
        del info.root


@@ -8,82 +8,82 @@ import os.path
class AddGrubPackage(Task):
    description = 'Adding grub package'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('grub-pc')

class ConfigureGrub(Task):
    description = 'Configuring grub'
    phase = phases.system_modification
    predecessors = [filesystem.FStab]

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import sed_i
        grub_def = os.path.join(info.root, 'etc/default/grub')
        sed_i(grub_def, '^#GRUB_TERMINAL=console', 'GRUB_TERMINAL=console')
        sed_i(grub_def, '^GRUB_CMDLINE_LINUX_DEFAULT="quiet"',
              'GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"')
        sed_i(grub_def, '^GRUB_TIMEOUT=[0-9]+', 'GRUB_TIMEOUT=0\n'
                                                'GRUB_HIDDEN_TIMEOUT=0\n'
                                                'GRUB_HIDDEN_TIMEOUT_QUIET=true')
        sed_i(grub_def, '^#GRUB_DISABLE_RECOVERY="true"', 'GRUB_DISABLE_RECOVERY="true"')

class InstallGrub_1_99(Task):
    description = 'Installing grub 1.99'
    phase = phases.system_modification
    predecessors = [filesystem.FStab]

    @classmethod
    def run(cls, info):
        p_map = info.volume.partition_map

        # GRUB screws up when installing in chrooted environments
        # so we fake a real harddisk with dmsetup.
        # Guide here: http://ebroder.net/2009/08/04/installing-grub-onto-a-disk-image/
        from ..fs import unmounted
        with unmounted(info.volume):
            info.volume.link_dm_node()
            if isinstance(p_map, partitionmaps.none.NoPartitions):
                p_map.root.device_path = info.volume.device_path
        try:
            [device_path] = log_check_call(['readlink', '-f', info.volume.device_path])
            device_map_path = os.path.join(info.root, 'boot/grub/device.map')
            partition_prefix = 'msdos'
            if isinstance(p_map, partitionmaps.gpt.GPTPartitionMap):
                partition_prefix = 'gpt'
            with open(device_map_path, 'w') as device_map:
                device_map.write('(hd0) {device_path}\n'.format(device_path=device_path))
                if not isinstance(p_map, partitionmaps.none.NoPartitions):
                    for idx, partition in enumerate(info.volume.partition_map.partitions):
                        device_map.write('(hd0,{prefix}{idx}) {device_path}\n'
                                         .format(device_path=partition.device_path,
                                                 prefix=partition_prefix,
                                                 idx=idx + 1))

            # Install grub
            log_check_call(['chroot', info.root, 'grub-install', device_path])
            log_check_call(['chroot', info.root, 'update-grub'])
        finally:
            with unmounted(info.volume):
                info.volume.unlink_dm_node()
                if isinstance(p_map, partitionmaps.none.NoPartitions):
                    p_map.root.device_path = info.volume.device_path
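The device.map written above tells GRUB to treat the mapped volume as `(hd0)`. A standalone sketch of what the file ends up containing, with hypothetical device-mapper paths:

```python
# Builds the same '(hd0) ...' lines as InstallGrub_1_99.run writes;
# the device paths here are hypothetical examples.
def build_device_map(device_path, partition_paths, prefix='msdos'):
    lines = ['(hd0) {0}'.format(device_path)]
    for idx, part_path in enumerate(partition_paths):
        lines.append('(hd0,{0}{1}) {2}'.format(prefix, idx + 1, part_path))
    return '\n'.join(lines) + '\n'

print(build_device_map('/dev/mapper/vda', ['/dev/mapper/vda1', '/dev/mapper/vda2']))
```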
class InstallGrub_2(Task):
    description = 'Installing grub 2'
    phase = phases.system_modification
    predecessors = [filesystem.FStab]
    # Make sure the kernel image is updated after we have installed the bootloader
    successors = [kernel.UpdateInitramfs]

    @classmethod
    def run(cls, info):
        log_check_call(['chroot', info.root, 'grub-install', info.volume.device_path])
        log_check_call(['chroot', info.root, 'update-grub'])


@@ -4,28 +4,28 @@ from ..exceptions import TaskError
class CheckExternalCommands(Task):
    description = 'Checking availability of external commands'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        from ..tools import log_check_call
        from subprocess import CalledProcessError
        import re
        missing_packages = []
        for command, package in info.host_dependencies.items():
            try:
                log_check_call(['type ' + command], shell=True)
            except CalledProcessError:
                if re.match('^https?:\/\/', package):
                    msg = ('The command `{command}\' is not available, '
                           'you can download the software at `{package}\'.'
                           .format(command=command, package=package))
                else:
                    msg = ('The command `{command}\' is not available, '
                           'it is located in the package `{package}\'.'
                           .format(command=command, package=package))
                missing_packages.append(msg)
        if len(missing_packages) > 0:
            msg = '\n'.join(missing_packages)
            raise TaskError(msg)
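The availability check relies on the shell builtin `type`, which exits non-zero when a command cannot be resolved. A minimal standalone sketch of the same probe (`command_exists` is a hypothetical name, not part of bootstrap-vz):

```python
# Probe a command the same way CheckExternalCommands does: run `type <cmd>`
# through the shell and treat a non-zero exit status as "command missing".
import subprocess

def command_exists(command):
    return subprocess.call('type ' + command, shell=True,
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0

print(command_exists('ls'))  # True on any POSIX system
```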


@@ -3,19 +3,19 @@ from bootstrapvz.common import phases
class MoveImage(Task):
    description = 'Moving volume image'
    phase = phases.image_registration

    @classmethod
    def run(cls, info):
        image_name = info.manifest.name.format(**info.manifest_vars)
        filename = image_name + '.' + info.volume.extension

        import os.path
        destination = os.path.join(info.manifest.bootstrapper['workspace'], filename)
        import shutil
        shutil.move(info.volume.image_path, destination)
        info.volume.image_path = destination
        import logging
        log = logging.getLogger(__name__)
        log.info('The volume image has been moved to ' + destination)


@@ -6,75 +6,75 @@ import os.path
class InstallInitScripts(Task):
    description = 'Installing startup scripts'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        import stat
        rwxr_xr_x = (stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR |
                     stat.S_IRGRP | stat.S_IXGRP |
                     stat.S_IROTH | stat.S_IXOTH)
        from shutil import copy
        for name, src in info.initd['install'].iteritems():
            dst = os.path.join(info.root, 'etc/init.d', name)
            copy(src, dst)
            os.chmod(dst, rwxr_xr_x)
            log_check_call(['chroot', info.root, 'insserv', '--default', name])

        for name in info.initd['disable']:
            log_check_call(['chroot', info.root, 'insserv', '--remove', name])
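The flag combination built in InstallInitScripts.run is simply mode 0755 (rwxr-xr-x), which a quick standalone check confirms:

```python
# Same bitwise OR of stat flags as in InstallInitScripts.run.
import stat

rwxr_xr_x = (stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR |
             stat.S_IRGRP | stat.S_IXGRP |
             stat.S_IROTH | stat.S_IXOTH)
print(oct(rwxr_xr_x))  # 0o755
```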
class AddExpandRoot(Task):
    description = 'Adding init script to expand the root volume'
    phase = phases.system_modification
    successors = [InstallInitScripts]

    @classmethod
    def run(cls, info):
        init_scripts_dir = os.path.join(assets, 'init.d')
        info.initd['install']['expand-root'] = os.path.join(init_scripts_dir, 'expand-root')

class RemoveHWClock(Task):
    description = 'Removing hardware clock init scripts'
    phase = phases.system_modification
    successors = [InstallInitScripts]

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.releases import squeeze
        info.initd['disable'].append('hwclock.sh')
        if info.manifest.release == squeeze:
            info.initd['disable'].append('hwclockfirst.sh')

class AdjustExpandRootScript(Task):
    description = 'Adjusting the expand-root script'
    phase = phases.system_modification
    predecessors = [InstallInitScripts]

    @classmethod
    def run(cls, info):
        from ..tools import sed_i
        script = os.path.join(info.root, 'etc/init.d/expand-root')

        root_idx = info.volume.partition_map.root.get_index()
        root_index_line = 'root_index="{idx}"'.format(idx=root_idx)
        sed_i(script, '^root_index="0"$', root_index_line)

        root_device_path = 'root_device_path="{device}"'.format(device=info.volume.device_path)
        sed_i(script, '^root_device_path="/dev/xvda"$', root_device_path)

class AdjustGrowpartWorkaround(Task):
    description = 'Adjusting expand-root for growpart-workaround'
    phase = phases.system_modification
    predecessors = [AdjustExpandRootScript]

    @classmethod
    def run(cls, info):
        from ..tools import sed_i
        script = os.path.join(info.root, 'etc/init.d/expand-root')
        sed_i(script, '^growpart="growpart"$', 'growpart-workaround')


@@ -5,48 +5,48 @@ import logging
class AddDKMSPackages(Task):
    description = 'Adding DKMS and kernel header packages'
    phase = phases.package_installation
    successors = [packages.InstallPackages]

    @classmethod
    def run(cls, info):
        info.packages.add('dkms')
        kernel_pkg_arch = {'i386': '686-pae', 'amd64': 'amd64'}[info.manifest.system['architecture']]
        info.packages.add('linux-headers-' + kernel_pkg_arch)

class UpdateInitramfs(Task):
    description = 'Rebuilding initramfs'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        # Update initramfs (-u) for all currently installed kernel versions (-k all)
        log_check_call(['chroot', info.root, 'update-initramfs', '-u', '-k', 'all'])

class DetermineKernelVersion(Task):
    description = 'Determining kernel version'
    phase = phases.package_installation
    predecessors = [packages.InstallPackages]

    @classmethod
    def run(cls, info):
        # Snatched from `extlinux-update' in wheezy
        # list the files in boot/ that match vmlinuz-*
        # sort what the * matches, the first entry is the kernel version
        import os.path
        import re
        regexp = re.compile('^vmlinuz-(?P<version>.+)$')

        def get_kernel_version(vmlinuz_path):
            vmlinux_basename = os.path.basename(vmlinuz_path)
            return regexp.match(vmlinux_basename).group('version')
        from glob import glob
        boot = os.path.join(info.root, 'boot')
        vmlinuz_paths = glob('{boot}/vmlinuz-*'.format(boot=boot))
        kernels = map(get_kernel_version, vmlinuz_paths)
        info.kernel_version = sorted(kernels, reverse=True)[0]
        logging.getLogger(__name__).debug('Kernel version is {version}'.format(version=info.kernel_version))
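The version strings extracted above are compared as plain strings by sorted(). A standalone sketch with hypothetical kernel image names; note that lexicographic order is what determines which entry comes first, so for example '3.2' sorts after '3.16':

```python
# Same regex-based extraction as DetermineKernelVersion; file names are made up.
import re

regexp = re.compile('^vmlinuz-(?P<version>.+)$')

def get_kernel_version(vmlinuz_basename):
    return regexp.match(vmlinuz_basename).group('version')

names = ['vmlinuz-3.16.0-4-amd64', 'vmlinuz-3.2.0-4-amd64']
versions = sorted(map(get_kernel_version, names), reverse=True)
print(versions[0])  # 3.2.0-4-amd64, since '3.2' > '3.16' lexicographically
```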


@@ -4,71 +4,71 @@ import os.path
class LocaleBootstrapPackage(Task):
    description = 'Adding locale package to bootstrap installation'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        # We could bootstrap without locales, but things just suck without them
        # eg. error messages when running apt
        info.include_packages.add('locales')

class GenerateLocale(Task):
    description = 'Generating system locale'
    phase = phases.package_installation

    @classmethod
    def run(cls, info):
        from ..tools import sed_i
        from ..tools import log_check_call

        lang = '{locale}.{charmap}'.format(locale=info.manifest.system['locale'],
                                           charmap=info.manifest.system['charmap'])
        locale_str = '{locale}.{charmap} {charmap}'.format(locale=info.manifest.system['locale'],
                                                           charmap=info.manifest.system['charmap'])

        search = '# ' + locale_str
        locale_gen = os.path.join(info.root, 'etc/locale.gen')
        sed_i(locale_gen, search, locale_str)

        log_check_call(['chroot', info.root, 'locale-gen'])
        log_check_call(['chroot', info.root,
                        'update-locale', 'LANG=' + lang])

class SetTimezone(Task):
    description = 'Setting the selected timezone'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        tz_path = os.path.join(info.root, 'etc/timezone')
        timezone = info.manifest.system['timezone']
        with open(tz_path, 'w') as tz_file:
            tz_file.write(timezone)

class SetLocalTimeLink(Task):
    description = 'Setting the selected local timezone (link)'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        timezone = info.manifest.system['timezone']
        localtime_path = os.path.join(info.root, 'etc/localtime')
        os.unlink(localtime_path)
        os.symlink(os.path.join('/usr/share/zoneinfo', timezone), localtime_path)

class SetLocalTimeCopy(Task):
    description = 'Setting the selected local timezone (copy)'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from shutil import copy
        timezone = info.manifest.system['timezone']
        zoneinfo_path = os.path.join(info.root, '/usr/share/zoneinfo', timezone)
        localtime_path = os.path.join(info.root, 'etc/localtime')
        copy(zoneinfo_path, localtime_path)


@@ -5,28 +5,28 @@ import volume
class AddRequiredCommands(Task):
    description = 'Adding commands required for creating loopback volumes'
    phase = phases.preparation
    successors = [host.CheckExternalCommands]

    @classmethod
    def run(cls, info):
        from ..fs.loopbackvolume import LoopbackVolume
        from ..fs.qemuvolume import QEMUVolume
        if type(info.volume) is LoopbackVolume:
            info.host_dependencies['losetup'] = 'mount'
            info.host_dependencies['truncate'] = 'coreutils'
        if isinstance(info.volume, QEMUVolume):
            info.host_dependencies['qemu-img'] = 'qemu-utils'

class Create(Task):
    description = 'Creating a loopback volume'
    phase = phases.volume_creation
    successors = [volume.Attach]

    @classmethod
    def run(cls, info):
        import os.path
        image_path = os.path.join(info.workspace, 'volume.' + info.volume.extension)
        info.volume.create(image_path)


@@ -4,51 +4,51 @@ import os
class RemoveDNSInfo(Task):
    description = 'Removing resolv.conf'
    phase = phases.system_cleaning

    @classmethod
    def run(cls, info):
        if os.path.isfile(os.path.join(info.root, 'etc/resolv.conf')):
            os.remove(os.path.join(info.root, 'etc/resolv.conf'))

class RemoveHostname(Task):
    description = 'Removing the hostname file'
    phase = phases.system_cleaning

    @classmethod
    def run(cls, info):
        if os.path.isfile(os.path.join(info.root, 'etc/hostname')):
            os.remove(os.path.join(info.root, 'etc/hostname'))

class SetHostname(Task):
    description = 'Writing hostname into the hostname file'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        hostname = info.manifest.system['hostname'].format(**info.manifest_vars)
        hostname_file_path = os.path.join(info.root, 'etc/hostname')
        with open(hostname_file_path, 'w') as hostname_file:
            hostname_file.write(hostname)

        hosts_path = os.path.join(info.root, 'etc/hosts')
        from bootstrapvz.common.tools import sed_i
        sed_i(hosts_path, '^127.0.0.1\tlocalhost$', '127.0.0.1\tlocalhost\n127.0.1.1\t' + hostname)
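sed_i is bootstrap-vz's own in-place substitution helper; as a hedged approximation, the hosts-file edit above behaves like a multiline re.sub on the file contents (`add_hostname_line` is a hypothetical name for illustration):

```python
# Approximate the sed_i call in SetHostname.run with an in-memory re.sub.
import re

def add_hostname_line(hosts_content, hostname):
    return re.sub('^127.0.0.1\tlocalhost$',
                  '127.0.0.1\tlocalhost\n127.0.1.1\t' + hostname,
                  hosts_content, flags=re.MULTILINE)

print(add_hostname_line('127.0.0.1\tlocalhost\n', 'debian'))
```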
class ConfigureNetworkIF(Task):
    description = 'Configuring network interfaces'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        network_config_path = os.path.join(os.path.dirname(__file__), 'network-configuration.yml')
        from ..tools import config_get
        if_config = config_get(network_config_path, [info.manifest.release.codename])

        interfaces_path = os.path.join(info.root, 'etc/network/interfaces')
        with open(interfaces_path, 'a') as interfaces:
            interfaces.write(if_config + '\n')


@@ -5,107 +5,107 @@ from ..tools import log_check_call
class AddManifestPackages(Task):
    description = 'Adding packages from the manifest'
    phase = phases.preparation
    predecessors = [apt.AddManifestSources, apt.AddDefaultSources, apt.AddBackports]

    @classmethod
    def run(cls, info):
        import re
        remote = re.compile('^(?P<name>[^/]+)(/(?P<target>[^/]+))?$')
        for package in info.manifest.packages['install']:
            match = remote.match(package)
            if match is not None:
                info.packages.add(match.group('name'), match.group('target'))
            else:
                info.packages.add_local(package)

class InstallPackages(Task):
    description = 'Installing packages'
    phase = phases.package_installation
    predecessors = [apt.AptUpgrade]

    @classmethod
    def run(cls, info):
        batch = []
        actions = {info.packages.Remote: cls.install_remote,
                   info.packages.Local: cls.install_local}
        for i, package in enumerate(info.packages.install):
            batch.append(package)
            next_package = info.packages.install[i + 1] if i + 1 < len(info.packages.install) else None
            if next_package is None or package.__class__ is not next_package.__class__:
                actions[package.__class__](info, batch)
                batch = []

    @classmethod
    def install_remote(cls, info, remote_packages):
        import os
        from ..tools import log_check_call
        from subprocess import CalledProcessError
        try:
            env = os.environ.copy()
            env['DEBIAN_FRONTEND'] = 'noninteractive'
            log_check_call(['chroot', info.root,
                            'apt-get', 'install',
                            '--no-install-recommends',
                            '--assume-yes'] +
                           map(str, remote_packages),
                           env=env)
        except CalledProcessError as e:
            import logging
            disk_stat = os.statvfs(info.root)
            root_free_mb = disk_stat.f_bsize * disk_stat.f_bavail / 1024 / 1024
            disk_stat = os.statvfs(os.path.join(info.root, 'boot'))
            boot_free_mb = disk_stat.f_bsize * disk_stat.f_bavail / 1024 / 1024
            free_mb = min(root_free_mb, boot_free_mb)
            if free_mb < 50:
                msg = ('apt exited with a non-zero status, '
                       'this may be because\nthe image volume is '
                       'running out of disk space ({free}MB left)').format(free=free_mb)
                logging.getLogger(__name__).warn(msg)
            else:
                if e.returncode == 100:
                    msg = ('apt exited with status code 100. '
                           'This can sometimes occur when package retrieval times out or a package extraction failed. '
                           'apt might succeed if you try bootstrapping again.')
                    logging.getLogger(__name__).warn(msg)
            raise

    @classmethod
    def install_local(cls, info, local_packages):
        from shutil import copy
        import os

        absolute_package_paths = []
        chrooted_package_paths = []
        for package_src in local_packages:
            pkg_name = os.path.basename(package_src.path)
            package_rel_dst = os.path.join('tmp', pkg_name)
            package_dst = os.path.join(info.root, package_rel_dst)
            copy(package_src.path, package_dst)
            absolute_package_paths.append(package_dst)
            package_path = os.path.join('/', package_rel_dst)
            chrooted_package_paths.append(package_path)

        env = os.environ.copy()
        env['DEBIAN_FRONTEND'] = 'noninteractive'
        log_check_call(['chroot', info.root,
                        'dpkg', '--install'] + chrooted_package_paths,
                       env=env)

        for path in absolute_package_paths:
            os.remove(path)
class AddTaskselStandardPackages(Task):
description = 'Adding standard packages from tasksel'
phase = phases.package_installation
predecessors = [apt.AptUpdate]
successors = [InstallPackages]
description = 'Adding standard packages from tasksel'
phase = phases.package_installation
predecessors = [apt.AptUpdate]
successors = [InstallPackages]
@classmethod
def run(cls, info):
tasksel_packages = log_check_call(['chroot', info.root, 'tasksel', '--task-packages', 'standard'])
for pkg in tasksel_packages:
info.packages.add(pkg)
@classmethod
def run(cls, info):
tasksel_packages = log_check_call(['chroot', info.root, 'tasksel', '--task-packages', 'standard'])
for pkg in tasksel_packages:
info.packages.add(pkg)
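The disk-space heuristic in `install_remote` above converts `statvfs` block counts into megabytes and takes the minimum of `/` and `/boot` before deciding whether to warn. A minimal sketch of the same arithmetic; the `FakeStat` tuple here is a hypothetical stand-in for a real `os.statvfs` result:

```python
import collections

# Stand-in for os.statvfs(); only the two fields used by the task matter.
FakeStat = collections.namedtuple('FakeStat', ['f_bsize', 'f_bavail'])


def free_mb(stat):
    # block size * available blocks, scaled down to megabytes
    return stat.f_bsize * stat.f_bavail // 1024 // 1024


root = FakeStat(f_bsize=4096, f_bavail=25600)   # 100 MB free on /
boot = FakeStat(f_bsize=1024, f_bavail=10240)   # 10 MB free on /boot
lowest = min(free_mb(root), free_mb(boot))
# The task warns when fewer than 50 MB remain on either filesystem.
warn = lowest < 50
```

Note the task itself uses Python 2's `/`, which floors for integers; `//` makes that explicit.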


@@ -6,44 +6,44 @@ import volume


class AddRequiredCommands(Task):
    description = 'Adding commands required for partitioning the volume'
    phase = phases.preparation
    successors = [host.CheckExternalCommands]

    @classmethod
    def run(cls, info):
        from bootstrapvz.base.fs.partitionmaps.none import NoPartitions
        if not isinstance(info.volume.partition_map, NoPartitions):
            info.host_dependencies['parted'] = 'parted'
            info.host_dependencies['kpartx'] = 'kpartx'


class PartitionVolume(Task):
    description = 'Partitioning the volume'
    phase = phases.volume_preparation

    @classmethod
    def run(cls, info):
        info.volume.partition_map.create(info.volume)


class MapPartitions(Task):
    description = 'Mapping volume partitions'
    phase = phases.volume_preparation
    predecessors = [PartitionVolume]
    successors = [filesystem.Format]

    @classmethod
    def run(cls, info):
        info.volume.partition_map.map(info.volume)


class UnmapPartitions(Task):
    description = 'Removing volume partitions mapping'
    phase = phases.volume_unmounting
    predecessors = [filesystem.UnmountRoot]
    successors = [volume.Detach]

    @classmethod
    def run(cls, info):
        info.volume.partition_map.unmap(info.volume)


@@ -3,10 +3,10 @@ from .. import phases


class EnableShadowConfig(Task):
    description = 'Enabling shadowconfig'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from ..tools import log_check_call
        log_check_call(['chroot', info.root, 'shadowconfig', 'on'])


@@ -7,106 +7,106 @@ import initd


class AddOpenSSHPackage(Task):
    description = 'Adding openssh package'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('openssh-server')


class AddSSHKeyGeneration(Task):
    description = 'Adding SSH private key generation init scripts'
    phase = phases.system_modification
    successors = [initd.InstallInitScripts]

    @classmethod
    def run(cls, info):
        init_scripts_dir = os.path.join(assets, 'init.d')
        install = info.initd['install']
        from subprocess import CalledProcessError
        try:
            log_check_call(['chroot', info.root,
                            'dpkg-query', '-W', 'openssh-server'])
            from bootstrapvz.common.releases import squeeze
            if info.manifest.release == squeeze:
                install['generate-ssh-hostkeys'] = os.path.join(init_scripts_dir, 'squeeze/generate-ssh-hostkeys')
            else:
                install['generate-ssh-hostkeys'] = os.path.join(init_scripts_dir, 'generate-ssh-hostkeys')
        except CalledProcessError:
            import logging
            logging.getLogger(__name__).warn('The OpenSSH server has not been installed, '
                                             'not installing SSH host key generation script.')


class DisableSSHPasswordAuthentication(Task):
    description = 'Disabling SSH password authentication'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from ..tools import sed_i
        sshd_config_path = os.path.join(info.root, 'etc/ssh/sshd_config')
        sed_i(sshd_config_path, '^#PasswordAuthentication yes', 'PasswordAuthentication no')


class EnableRootLogin(Task):
    description = 'Enabling SSH login for root'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        sshdconfig_path = os.path.join(info.root, 'etc/ssh/sshd_config')
        if os.path.exists(sshdconfig_path):
            from bootstrapvz.common.tools import sed_i
            sed_i(sshdconfig_path, '^PermitRootLogin .*', 'PermitRootLogin yes')
        else:
            import logging
            logging.getLogger(__name__).warn('The OpenSSH server has not been installed, '
                                             'not enabling SSH root login.')


class DisableRootLogin(Task):
    description = 'Disabling SSH login for root'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        sshdconfig_path = os.path.join(info.root, 'etc/ssh/sshd_config')
        if os.path.exists(sshdconfig_path):
            from bootstrapvz.common.tools import sed_i
            sed_i(sshdconfig_path, '^PermitRootLogin .*', 'PermitRootLogin no')
        else:
            import logging
            logging.getLogger(__name__).warn('The OpenSSH server has not been installed, '
                                             'not disabling SSH root login.')


class DisableSSHDNSLookup(Task):
    description = 'Disabling sshd remote host name lookup'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        sshd_config_path = os.path.join(info.root, 'etc/ssh/sshd_config')
        with open(sshd_config_path, 'a') as sshd_config:
            sshd_config.write('UseDNS no')


class ShredHostkeys(Task):
    description = 'Securely deleting ssh hostkeys'
    phase = phases.system_cleaning

    @classmethod
    def run(cls, info):
        ssh_hostkeys = ['ssh_host_dsa_key',
                        'ssh_host_rsa_key']
        from bootstrapvz.common.releases import wheezy
        if info.manifest.release >= wheezy:
            ssh_hostkeys.append('ssh_host_ecdsa_key')

        private = [os.path.join(info.root, 'etc/ssh', name) for name in ssh_hostkeys]
        public = [path + '.pub' for path in private]

        from ..tools import log_check_call
        log_check_call(['shred', '--remove'] + private + public)


@@ -4,28 +4,28 @@ import workspace


class Attach(Task):
    description = 'Attaching the volume'
    phase = phases.volume_creation

    @classmethod
    def run(cls, info):
        info.volume.attach()


class Detach(Task):
    description = 'Detaching the volume'
    phase = phases.volume_unmounting

    @classmethod
    def run(cls, info):
        info.volume.detach()


class Delete(Task):
    description = 'Deleting the volume'
    phase = phases.cleaning
    successors = [workspace.DeleteWorkspace]

    @classmethod
    def run(cls, info):
        info.volume.delete()


@@ -3,20 +3,20 @@ from .. import phases


class CreateWorkspace(Task):
    description = 'Creating workspace'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        import os
        os.makedirs(info.workspace)


class DeleteWorkspace(Task):
    description = 'Deleting workspace'
    phase = phases.cleaning

    @classmethod
    def run(cls, info):
        import os
        os.rmdir(info.workspace)


@@ -2,134 +2,134 @@ import os


def log_check_call(command, stdin=None, env=None, shell=False, cwd=None):
    status, stdout, stderr = log_call(command, stdin, env, shell, cwd)
    from subprocess import CalledProcessError
    if status != 0:
        e = CalledProcessError(status, ' '.join(command), '\n'.join(stderr))
        # Fix Pyro4's fixIronPythonExceptionForPickle() by setting the args property,
        # even though we use our own serialization (at least I think that's the problem).
        # See bootstrapvz.remote.serialize_called_process_error for more info.
        setattr(e, 'args', (status, ' '.join(command), '\n'.join(stderr)))
        raise e
    return stdout


def log_call(command, stdin=None, env=None, shell=False, cwd=None):
    import subprocess
    import logging
    from multiprocessing.dummy import Pool as ThreadPool
    from os.path import realpath

    command_log = realpath(command[0]).replace('/', '.')
    log = logging.getLogger(__name__ + command_log)
    if type(command) is list:
        log.debug('Executing: {command}'.format(command=' '.join(command)))
    else:
        log.debug('Executing: {command}'.format(command=command))

    process = subprocess.Popen(args=command, env=env, shell=shell, cwd=cwd,
                               stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE)

    if stdin is not None:
        log.debug('  stdin: ' + stdin)
        process.stdin.write(stdin + "\n")
        process.stdin.flush()
    process.stdin.close()

    stdout = []
    stderr = []

    def handle_stdout(line):
        log.debug(line)
        stdout.append(line)

    def handle_stderr(line):
        log.error(line)
        stderr.append(line)

    handlers = {process.stdout: handle_stdout,
                process.stderr: handle_stderr}

    def stream_readline(stream):
        for line in iter(stream.readline, ''):
            handlers[stream](line.strip())

    pool = ThreadPool(2)
    pool.map(stream_readline, [process.stdout, process.stderr])
    pool.close()
    pool.join()
    process.wait()
    return process.returncode, stdout, stderr


def sed_i(file_path, pattern, subst, expected_replacements=1):
    replacement_count = inline_replace(file_path, pattern, subst)
    if replacement_count != expected_replacements:
        from exceptions import UnexpectedNumMatchesError
        msg = ('There were {real} instead of {expected} matches for '
               'the expression `{exp}\' in the file `{path}\''
               .format(real=replacement_count, expected=expected_replacements,
                       exp=pattern, path=file_path))
        raise UnexpectedNumMatchesError(msg)


def inline_replace(file_path, pattern, subst):
    import fileinput
    import re
    replacement_count = 0
    for line in fileinput.input(files=file_path, inplace=True):
        (replacement, count) = re.subn(pattern, subst, line)
        replacement_count += count
        print replacement,
    return replacement_count


def load_json(path):
    import json
    from minify_json import json_minify
    with open(path) as stream:
        return json.loads(json_minify(stream.read(), False))


def load_yaml(path):
    import yaml
    with open(path, 'r') as stream:
        return yaml.safe_load(stream)


def load_data(path):
    filename, extension = os.path.splitext(path)
    if not os.path.isfile(path):
        raise Exception('The path {path} does not point to a file.'.format(path=path))
    if extension == '.json':
        return load_json(path)
    elif extension == '.yml' or extension == '.yaml':
        return load_yaml(path)
    else:
        raise Exception('Unrecognized extension: {ext}'.format(ext=extension))


def config_get(path, config_path):
    config = load_data(path)
    for key in config_path:
        config = config.get(key)
    return config


def copy_tree(from_path, to_path):
    from shutil import copy
    for abs_prefix, dirs, files in os.walk(from_path):
        prefix = os.path.normpath(os.path.relpath(abs_prefix, from_path))
        for path in dirs:
            full_path = os.path.join(to_path, prefix, path)
            if os.path.exists(full_path):
                if os.path.isdir(full_path):
                    continue
                else:
                    os.remove(full_path)
            os.mkdir(full_path)
        for path in files:
            copy(os.path.join(abs_prefix, path),
                 os.path.join(to_path, prefix, path))
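`sed_i` above is a thin wrapper around `inline_replace`, which rewrites a file in place via `fileinput` and returns how many `re.subn` substitutions were made; `sed_i` then raises when the count differs from `expected_replacements`. A small standalone sketch of the same mechanism, using a temp file and a `print()` function in place of the Python 2 `print` statement:

```python
import fileinput
import re
import tempfile


def inline_replace(file_path, pattern, subst):
    # Rewrite file_path in place, returning the number of substitutions made.
    replacement_count = 0
    for line in fileinput.input(files=file_path, inplace=True):
        replacement, count = re.subn(pattern, subst, line)
        replacement_count += count
        print(replacement, end='')  # stdout is redirected into the file
    return replacement_count


with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
    f.write('#PasswordAuthentication yes\nUseDNS yes\n')

count = inline_replace(f.name, '^#PasswordAuthentication yes',
                       'PasswordAuthentication no')
# count is 1 here; a mismatch against expected_replacements is exactly
# what makes sed_i raise UnexpectedNumMatchesError.
```

This mirrors the `DisableSSHPasswordAuthentication` task above, which applies the same pattern to `etc/ssh/sshd_config`.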


@@ -1,37 +1,37 @@
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)
    pubkey = data['plugins']['admin_user'].get('pubkey', None)
    if pubkey is not None and not os.path.exists(pubkey):
        msg = 'Could not find public key at %s' % pubkey
        error(msg, ['plugins', 'admin_user', 'pubkey'])


def resolve_tasks(taskset, manifest):
    import logging
    import tasks
    from bootstrapvz.common.tasks import ssh

    from bootstrapvz.common.releases import jessie
    if manifest.release < jessie:
        taskset.update([ssh.DisableRootLogin])

    if 'password' in manifest.plugins['admin_user']:
        taskset.discard(ssh.DisableSSHPasswordAuthentication)
        taskset.add(tasks.AdminUserPassword)

    if 'pubkey' in manifest.plugins['admin_user']:
        taskset.add(tasks.AdminUserPublicKey)
    elif manifest.provider['name'] == 'ec2':
        logging.getLogger(__name__).info("The SSH key will be obtained from EC2")
        taskset.add(tasks.AdminUserPublicKeyEC2)
    elif 'password' not in manifest.plugins['admin_user']:
        logging.getLogger(__name__).warn("No SSH key and no password set")

    taskset.update([tasks.AddSudoPackage,
                    tasks.CreateAdminUser,
                    tasks.PasswordlessSudo,
                    ])


@@ -9,104 +9,104 @@ log = logging.getLogger(__name__)


class AddSudoPackage(Task):
    description = 'Adding `sudo\' to the image packages'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('sudo')


class CreateAdminUser(Task):
    description = 'Creating the admin user'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        log_check_call(['chroot', info.root,
                        'useradd',
                        '--create-home', '--shell', '/bin/bash',
                        info.manifest.plugins['admin_user']['username']])


class PasswordlessSudo(Task):
    description = 'Allowing the admin user to use sudo without a password'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        sudo_admin_path = os.path.join(info.root, 'etc/sudoers.d/99_admin')
        username = info.manifest.plugins['admin_user']['username']
        with open(sudo_admin_path, 'w') as sudo_admin:
            sudo_admin.write('{username} ALL=(ALL) NOPASSWD:ALL'.format(username=username))
        import stat
        ug_read_only = (stat.S_IRUSR | stat.S_IRGRP)
        os.chmod(sudo_admin_path, ug_read_only)


class AdminUserPassword(Task):
    description = 'Setting the admin user password'
    phase = phases.system_modification
    predecessors = [InstallInitScripts, CreateAdminUser]

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        log_check_call(['chroot', info.root, 'chpasswd'],
                       info.manifest.plugins['admin_user']['username'] +
                       ':' + info.manifest.plugins['admin_user']['password'])


class AdminUserPublicKey(Task):
    description = 'Installing the public key for the admin user'
    phase = phases.system_modification
    predecessors = [AddEC2InitScripts, CreateAdminUser]
    successors = [InstallInitScripts]

    @classmethod
    def run(cls, info):
        if 'ec2-get-credentials' in info.initd['install']:
            log.warn('You are using a static public key for the admin account. '
                     'This will conflict with the ec2 public key injection mechanism. '
                     'The ec2-get-credentials startup script will therefore not be enabled.')
            del info.initd['install']['ec2-get-credentials']

        # Get the stuff we need (username & public key)
        username = info.manifest.plugins['admin_user']['username']
        with open(info.manifest.plugins['admin_user']['pubkey']) as pubkey_handle:
            pubkey = pubkey_handle.read()

        # paths
        ssh_dir_rel = os.path.join('home', username, '.ssh')
        auth_keys_rel = os.path.join(ssh_dir_rel, 'authorized_keys')
        ssh_dir_abs = os.path.join(info.root, ssh_dir_rel)
        auth_keys_abs = os.path.join(info.root, auth_keys_rel)
        # Create the ssh dir if nobody has created it yet
        if not os.path.exists(ssh_dir_abs):
            os.mkdir(ssh_dir_abs, 0700)

        # Create (or append to) the authorized keys file (and chmod u=rw,go=)
        import stat
        with open(auth_keys_abs, 'a') as auth_keys_handle:
            auth_keys_handle.write(pubkey + '\n')
        os.chmod(auth_keys_abs, (stat.S_IRUSR | stat.S_IWUSR))

        # Set the owner of the authorized keys file
        # (must be through chroot, the host system doesn't know about the user)
        from bootstrapvz.common.tools import log_check_call
        log_check_call(['chroot', info.root,
                        'chown', '-R', (username + ':' + username), ssh_dir_rel])


class AdminUserPublicKeyEC2(Task):
    description = 'Modifying ec2-get-credentials to copy the ssh public key to the admin user'
    phase = phases.system_modification
    predecessors = [InstallInitScripts, CreateAdminUser]

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import sed_i
        getcreds_path = os.path.join(info.root, 'etc/init.d/ec2-get-credentials')
        username = info.manifest.plugins['admin_user']['username']
        sed_i(getcreds_path, "username='root'", "username='{username}'".format(username=username))
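The permission constants used in the hunk above are worth spelling out: `AdminUserPublicKey` chmods `authorized_keys` to `stat.S_IRUSR | stat.S_IWUSR` (`u=rw,go=`), and `PasswordlessSudo` restricts the sudoers drop-in to owner+group read. The bit arithmetic, spelled out:

```python
import stat

# Owner read + write, nothing for group/other: the 0600 mode
# conventionally required on authorized_keys (sshd's StrictModes
# rejects keys files writable by others).
auth_keys_mode = stat.S_IRUSR | stat.S_IWUSR
assert auth_keys_mode == 0o600

# Owner + group read only: the 0440 mode expected of files
# under /etc/sudoers.d.
sudoers_mode = stat.S_IRUSR | stat.S_IRGRP
assert sudoers_mode == 0o440
```

(The diff's `os.mkdir(ssh_dir_abs, 0700)` uses the Python 2 octal literal; in Python 3 it would be written `0o700`.)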


@@ -1,12 +1,12 @@
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    import tasks
    taskset.add(tasks.CheckAptProxy)
    taskset.add(tasks.SetAptProxy)
    if not manifest.plugins['apt_proxy'].get('persistent', False):
        taskset.add(tasks.RemoveAptProxy)


@ -6,55 +6,55 @@ import urllib2
class CheckAptProxy(Task):
    description = 'Checking reachability of APT proxy server'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        proxy_address = info.manifest.plugins['apt_proxy']['address']
        proxy_port = info.manifest.plugins['apt_proxy']['port']
        proxy_url = 'http://{address}:{port}'.format(address=proxy_address, port=proxy_port)
        try:
            urllib2.urlopen(proxy_url, timeout=5)
        except Exception as e:
            # Default response from `apt-cacher-ng`
            if isinstance(e, urllib2.HTTPError) and e.code in [404, 406] and e.msg == 'Usage Information':
                pass
            else:
                import logging
                log = logging.getLogger(__name__)
                log.warning('The APT proxy server couldn\'t be reached. `apt-get\' commands may fail.')


class SetAptProxy(Task):
    description = 'Setting proxy for APT'
    phase = phases.package_installation
    successors = [apt.AptUpdate]

    @classmethod
    def run(cls, info):
        proxy_path = os.path.join(info.root, 'etc/apt/apt.conf.d/02proxy')
        proxy_username = info.manifest.plugins['apt_proxy'].get('username')
        proxy_password = info.manifest.plugins['apt_proxy'].get('password')
        proxy_address = info.manifest.plugins['apt_proxy']['address']
        proxy_port = info.manifest.plugins['apt_proxy']['port']

        if None not in (proxy_username, proxy_password):
            proxy_auth = '{username}:{password}@'.format(
                username=proxy_username, password=proxy_password)
        else:
            proxy_auth = ''

        with open(proxy_path, 'w') as proxy_file:
            proxy_file.write(
                'Acquire::http {{ Proxy "http://{auth}{address}:{port}"; }};\n'
                .format(auth=proxy_auth, address=proxy_address, port=proxy_port))


class RemoveAptProxy(Task):
    description = 'Removing APT proxy configuration file'
    phase = phases.system_cleaning

    @classmethod
    def run(cls, info):
        os.remove(os.path.join(info.root, 'etc/apt/apt.conf.d/02proxy'))
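For reference, the `Acquire::http` line that `SetAptProxy` writes into `etc/apt/apt.conf.d/02proxy` can be previewed with plain string formatting; the credentials, host, and port below are made-up examples, not values from the manifest:

```python
proxy_auth = 'user:secret@'  # empty string when no username/password are configured
line = ('Acquire::http {{ Proxy "http://{auth}{address}:{port}"; }};\n'
        .format(auth=proxy_auth, address='10.0.0.2', port=3142))
# line is: Acquire::http { Proxy "http://user:secret@10.0.0.2:3142"; };
```

The doubled braces `{{`/`}}` are how `str.format` emits literal braces for APT's configuration syntax.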


@@ -2,13 +2,13 @@ import tasks
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    taskset.add(tasks.AddPackages)
    if 'assets' in manifest.plugins['chef']:
        taskset.add(tasks.CheckAssetsPath)
        taskset.add(tasks.CopyChefAssets)


@@ -4,35 +4,35 @@ import os
class CheckAssetsPath(Task):
    description = 'Checking whether the assets path exist'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.exceptions import TaskError
        assets = info.manifest.plugins['chef']['assets']
        if not os.path.exists(assets):
            msg = 'The assets directory {assets} does not exist.'.format(assets=assets)
            raise TaskError(msg)
        if not os.path.isdir(assets):
            msg = 'The assets path {assets} does not point to a directory.'.format(assets=assets)
            raise TaskError(msg)


class AddPackages(Task):
    description = 'Add chef package'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('chef')


class CopyChefAssets(Task):
    description = 'Copying chef assets'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import copy_tree
        copy_tree(info.manifest.plugins['chef']['assets'], os.path.join(info.root, 'etc/chef'))


@@ -1,36 +1,36 @@
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    import tasks
    import bootstrapvz.providers.ec2.tasks.initd as initd_ec2
    from bootstrapvz.common.tasks import apt
    from bootstrapvz.common.tasks import initd
    from bootstrapvz.common.tasks import ssh

    from bootstrapvz.common.releases import wheezy
    if manifest.release == wheezy:
        taskset.add(apt.AddBackports)

    taskset.update([tasks.SetMetadataSource,
                    tasks.AddCloudInitPackages,
                    ])

    options = manifest.plugins['cloud_init']
    if 'username' in options:
        taskset.add(tasks.SetUsername)
    if 'groups' in options and len(options['groups']):
        taskset.add(tasks.SetGroups)
    if 'disable_modules' in options:
        taskset.add(tasks.DisableModules)

    taskset.discard(initd_ec2.AddEC2InitScripts)
    taskset.discard(initd.AddExpandRoot)
    taskset.discard(initd.AdjustExpandRootScript)
    taskset.discard(initd.AdjustGrowpartWorkaround)
    taskset.discard(ssh.AddSSHKeyGeneration)


@@ -8,92 +8,92 @@ import os.path
class AddCloudInitPackages(Task):
    description = 'Adding cloud-init package and sudo'
    phase = phases.preparation
    predecessors = [apt.AddBackports]

    @classmethod
    def run(cls, info):
        target = None
        from bootstrapvz.common.releases import wheezy
        if info.manifest.release == wheezy:
            target = '{system.release}-backports'
        info.packages.add('cloud-init', target)
        info.packages.add('sudo')


class SetUsername(Task):
    description = 'Setting username in cloud.cfg'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import sed_i
        cloud_cfg = os.path.join(info.root, 'etc/cloud/cloud.cfg')
        username = info.manifest.plugins['cloud_init']['username']
        search = '^ name: debian$'
        replace = (' name: {username}\n'
                   ' sudo: ALL=(ALL) NOPASSWD:ALL\n'
                   ' shell: /bin/bash').format(username=username)
        sed_i(cloud_cfg, search, replace)


class SetGroups(Task):
    description = 'Setting groups in cloud.cfg'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import sed_i
        cloud_cfg = os.path.join(info.root, 'etc/cloud/cloud.cfg')
        groups = info.manifest.plugins['cloud_init']['groups']
        search = ('^ groups: \[adm, audio, cdrom, dialout, floppy, video,'
                  ' plugdev, dip\]$')
        replace = (' groups: [adm, audio, cdrom, dialout, floppy, video,'
                   ' plugdev, dip, {groups}]').format(groups=', '.join(groups))
        sed_i(cloud_cfg, search, replace)


class SetMetadataSource(Task):
    description = 'Setting metadata source'
    phase = phases.package_installation
    predecessors = [locale.GenerateLocale]
    successors = [apt.AptUpdate]

    @classmethod
    def run(cls, info):
        if 'metadata_sources' in info.manifest.plugins['cloud_init']:
            sources = info.manifest.plugins['cloud_init']['metadata_sources']
        else:
            source_mapping = {'ec2': 'Ec2'}
            sources = source_mapping.get(info.manifest.provider['name'], None)
            if sources is None:
                msg = ('No cloud-init metadata source mapping found for provider `{provider}\', '
                       'skipping selections setting.').format(provider=info.manifest.provider['name'])
                logging.getLogger(__name__).warn(msg)
                return
        sources = "cloud-init cloud-init/datasources multiselect " + sources
        log_check_call(['chroot', info.root, 'debconf-set-selections'], sources)


class DisableModules(Task):
    description = 'Setting cloud.cfg modules'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        import re
        patterns = ""
        for pattern in info.manifest.plugins['cloud_init']['disable_modules']:
            if patterns != "":
                patterns = patterns + "|" + pattern
            else:
                patterns = "^\s+-\s+(" + pattern
        patterns = patterns + ")$"
        regex = re.compile(patterns)

        cloud_cfg = os.path.join(info.root, 'etc/cloud/cloud.cfg')
        import fileinput
        for line in fileinput.input(files=cloud_cfg, inplace=True):
            if not regex.match(line):
                print line,
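A quick sketch of the pattern `DisableModules` builds: for a hypothetical `disable_modules` list of `['ssh', 'growpart']` (module names invented for illustration), the loop produces one alternation that matches the corresponding list entries in cloud.cfg:

```python
import re

patterns = ""
for pattern in ['ssh', 'growpart']:  # hypothetical disable_modules entries
    if patterns != "":
        patterns = patterns + "|" + pattern
    else:
        patterns = "^\\s+-\\s+(" + pattern
patterns = patterns + ")$"
# patterns is now: ^\s+-\s+(ssh|growpart)$

regex = re.compile(patterns)
matched = bool(regex.match(' - ssh\n'))           # True: this module line gets dropped
kept = regex.match(' - users-groups\n') is None   # True: unlisted modules survive
```

Lines matching the regex are filtered out of cloud.cfg by the in-place `fileinput` rewrite shown above.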


@@ -1,11 +1,11 @@
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    from tasks import ImageExecuteCommand
    taskset.add(ImageExecuteCommand)


@@ -3,13 +3,13 @@ from bootstrapvz.common import phases
class ImageExecuteCommand(Task):
    description = 'Executing commands in the image'
    phase = phases.user_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        for raw_command in info.manifest.plugins['commands']['commands']:
            command = map(lambda part: part.format(root=info.root, **info.manifest_vars), raw_command)
            shell = len(command) == 1
            log_check_call(command, shell=shell)
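The plugin expands a `{root}` placeholder (plus any manifest variables) in each command part and treats single-element commands as shell invocations. A standalone sketch, with a made-up chroot path and manifest variable, using a list comprehension in place of the Python 2 `map`/`lambda`:

```python
info_root = '/target/root'              # hypothetical chroot path
manifest_vars = {'release': 'jessie'}   # illustrative manifest variable

raw_command = ['chroot', '{root}', 'apt-get', 'clean']
command = [part.format(root=info_root, **manifest_vars) for part in raw_command]
shell = len(command) == 1  # only a lone string is handed to the shell
# command: ['chroot', '/target/root', 'apt-get', 'clean'], shell: False
```

A one-element list such as `['echo {root} > /tmp/out']` would instead run with `shell=True`, which is why shell syntax is allowed there.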


@@ -5,23 +5,23 @@ from bootstrapvz.common.releases import wheezy
def validate_manifest(data, validator, error):
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)
    from bootstrapvz.common.releases import get_release
    if get_release(data['system']['release']) == wheezy:
        # prefs is a generator of apt preferences across files in the manifest
        prefs = (item for vals in data.get('packages', {}).get('preferences', {}).values() for item in vals)
        if not any('linux-image' in item['package'] and 'wheezy-backports' in item['pin'] for item in prefs):
            msg = 'The backports kernel is required for the docker daemon to function properly'
            error(msg, ['packages', 'preferences'])


def resolve_tasks(taskset, manifest):
    if manifest.release == wheezy:
        taskset.add(apt.AddBackports)
    taskset.add(tasks.AddDockerDeps)
    taskset.add(tasks.AddDockerBinary)
    taskset.add(tasks.AddDockerInit)
    taskset.add(tasks.EnableMemoryCgroup)
    if len(manifest.plugins['docker_daemon'].get('pull_images', [])) > 0:
        taskset.add(tasks.PullDockerImages)


@@ -15,108 +15,108 @@ ASSETS_DIR = os.path.normpath(os.path.join(os.path.dirname(__file__), 'assets'))
class AddDockerDeps(Task):
    description = 'Add packages for docker deps'
    phase = phases.package_installation
    DOCKER_DEPS = ['aufs-tools', 'btrfs-tools', 'git', 'iptables',
                   'procps', 'xz-utils', 'ca-certificates']

    @classmethod
    def run(cls, info):
        for pkg in cls.DOCKER_DEPS:
            info.packages.add(pkg)


class AddDockerBinary(Task):
    description = 'Add docker binary'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        docker_version = info.manifest.plugins['docker_daemon'].get('version', False)
        docker_url = 'https://get.docker.io/builds/Linux/x86_64/docker-'
        if docker_version:
            docker_url += docker_version
        else:
            docker_url += 'latest'
        bin_docker = os.path.join(info.root, 'usr/bin/docker')
        log_check_call(['wget', '-O', bin_docker, docker_url])
        os.chmod(bin_docker, 0755)


class AddDockerInit(Task):
    description = 'Add docker init script'
    phase = phases.system_modification
    successors = [initd.InstallInitScripts]

    @classmethod
    def run(cls, info):
        init_src = os.path.join(ASSETS_DIR, 'init.d/docker')
        info.initd['install']['docker'] = init_src
        default_src = os.path.join(ASSETS_DIR, 'default/docker')
        default_dest = os.path.join(info.root, 'etc/default/docker')
        shutil.copy(default_src, default_dest)
        docker_opts = info.manifest.plugins['docker_daemon'].get('docker_opts')
        if docker_opts:
            sed_i(default_dest, r'^#*DOCKER_OPTS=.*$', 'DOCKER_OPTS="%s"' % docker_opts)


class EnableMemoryCgroup(Task):
    description = 'Change grub configuration to enable the memory cgroup'
    phase = phases.system_modification
    successors = [grub.InstallGrub_1_99, grub.InstallGrub_2]
    predecessors = [grub.ConfigureGrub, gceboot.ConfigureGrub]

    @classmethod
    def run(cls, info):
        grub_config = os.path.join(info.root, 'etc/default/grub')
        sed_i(grub_config, r'^(GRUB_CMDLINE_LINUX*=".*)"\s*$', r'\1 cgroup_enable=memory"')


class PullDockerImages(Task):
    description = 'Pull docker images'
    phase = phases.system_modification
    predecessors = [AddDockerBinary]

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.exceptions import TaskError
        from subprocess import CalledProcessError
        images = info.manifest.plugins['docker_daemon'].get('pull_images', [])
        retries = info.manifest.plugins['docker_daemon'].get('pull_images_retries', 10)

        bin_docker = os.path.join(info.root, 'usr/bin/docker')
        graph_dir = os.path.join(info.root, 'var/lib/docker')
        socket = 'unix://' + os.path.join(info.workspace, 'docker.sock')
        pidfile = os.path.join(info.workspace, 'docker.pid')

        try:
            # Start the docker daemon temporarily.
            daemon = subprocess.Popen([bin_docker, '-d', '--graph', graph_dir, '-H', socket, '-p', pidfile])
            # Wait for the docker daemon to start.
            for _ in range(retries):
                try:
                    log_check_call([bin_docker, '-H', socket, 'version'])
                    break
                except CalledProcessError:
                    time.sleep(1)
            for img in images:
                # docker load if tarball.
                if img.endswith('.tar.gz') or img.endswith('.tgz'):
                    cmd = [bin_docker, '-H', socket, 'load', '-i', img]
                    try:
                        log_check_call(cmd)
                    except CalledProcessError as e:
                        msg = 'error {e} loading docker image {img}.'.format(img=img, e=e)
                        raise TaskError(msg)
                # docker pull if image name.
                else:
                    cmd = [bin_docker, '-H', socket, 'pull', img]
                    try:
                        log_check_call(cmd)
                    except CalledProcessError as e:
                        msg = 'error {e} pulling docker image {img}.'.format(img=img, e=e)
                        raise TaskError(msg)
        finally:
            # Shut down the docker daemon.
            daemon.terminate()
            os.remove(os.path.join(info.workspace, 'docker.sock'))
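The grub tweak in `EnableMemoryCgroup` is a sed-style substitution; its effect can be previewed standalone with `re.sub` (the sample line is a typical Debian default, used here for illustration):

```python
import re

line = 'GRUB_CMDLINE_LINUX="console=ttyS0"'
new = re.sub(r'^(GRUB_CMDLINE_LINUX*=".*)"\s*$',
             r'\1 cgroup_enable=memory"', line)
# new is: GRUB_CMDLINE_LINUX="console=ttyS0 cgroup_enable=memory"
```

The capture group keeps everything up to the closing quote, so the `cgroup_enable=memory` flag is appended inside the existing quoted kernel command line.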


@@ -6,13 +6,13 @@ import logging
# TODO: Merge with the method available in wip-integration-tests branch
def waituntil(predicate, timeout=5, interval=0.05):
    import time
    threshold = time.time() + timeout
    while time.time() < threshold:
        if predicate():
            return True
        time.sleep(interval)
    return False
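As a runnable sketch of the helper above (restated verbatim so the snippet is self-contained), `waituntil` polls a predicate until it returns truthy or the timeout lapses; the counter-based predicate is invented for the example:

```python
import time

def waituntil(predicate, timeout=5, interval=0.05):
    threshold = time.time() + timeout
    while time.time() < threshold:
        if predicate():
            return True
        time.sleep(interval)
    return False

state = {'calls': 0}

def ready():
    # Pretend the resource becomes available on the third poll.
    state['calls'] += 1
    return state['calls'] >= 3

first = waituntil(ready)                           # True, after roughly two sleeps
second = waituntil(lambda: False, timeout=0.2)     # False once the timeout elapses
```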
class LaunchEC2Instance(Task):


@@ -1,15 +1,15 @@
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    import tasks
    taskset.add(tasks.CopyAmiToRegions)
    if 'manifest_url' in manifest.plugins['ec2_publish']:
        taskset.add(tasks.PublishAmiManifest)

    ami_public = manifest.plugins['ec2_publish'].get('public')
    if ami_public:
        taskset.add(tasks.PublishAmi)


@@ -6,91 +6,91 @@ import logging
class CopyAmiToRegions(Task):
    description = 'Copy AWS AMI over other regions'
    phase = phases.image_registration
    predecessors = [ami.RegisterAMI]

    @classmethod
    def run(cls, info):
        source_region = info._ec2['region']
        source_ami = info._ec2['image']
        name = info._ec2['ami_name']
        copy_description = "Copied from %s (%s)" % (source_ami, source_region)

        connect_args = {
            'aws_access_key_id': info.credentials['access-key'],
            'aws_secret_access_key': info.credentials['secret-key']
        }
        if 'security-token' in info.credentials:
            connect_args['security_token'] = info.credentials['security-token']

        region_amis = {source_region: source_ami}
        region_conns = {source_region: info._ec2['connection']}
        from boto.ec2 import connect_to_region
        regions = info.manifest.plugins['ec2_publish'].get('regions', ())
        for region in regions:
            conn = connect_to_region(region, **connect_args)
            region_conns[region] = conn
            copied_image = conn.copy_image(source_region, source_ami, name=name, description=copy_description)
            region_amis[region] = copied_image.image_id
        info._ec2['region_amis'] = region_amis
        info._ec2['region_conns'] = region_conns


class PublishAmiManifest(Task):
    description = 'Publish a manifest of generated AMIs'
    phase = phases.image_registration
    predecessors = [CopyAmiToRegions]

    @classmethod
    def run(cls, info):
        manifest_url = info.manifest.plugins['ec2_publish']['manifest_url']

        import json
        amis_json = json.dumps(info._ec2['region_amis'])

        from urlparse import urlparse
        parsed_url = urlparse(manifest_url)
        parsed_host = parsed_url.netloc
        if not parsed_url.scheme:
            with open(parsed_url.path, 'w') as local_out:
                local_out.write(amis_json)
        elif parsed_host.endswith('amazonaws.com') and 's3' in parsed_host:
            region = 'us-east-1'
            path = parsed_url.path[1:]
            if 's3-' in parsed_host:
                loc = parsed_host.find('s3-') + 3
                region = parsed_host[loc:parsed_host.find('.', loc)]

            if '.s3' in parsed_host:
                bucket = parsed_host[:parsed_host.find('.s3')]
            else:
                bucket, path = path.split('/', 1)

            from boto.s3 import connect_to_region
            conn = connect_to_region(region)
            key = conn.get_bucket(bucket, validate=False).new_key(path)
            headers = {'Content-Type': 'application/json'}
            key.set_contents_from_string(amis_json, headers=headers, policy='public-read')


class PublishAmi(Task):
    description = 'Make generated AMIs public'
    phase = phases.image_registration
    predecessors = [CopyAmiToRegions]

    @classmethod
    def run(cls, info):
        region_conns = info._ec2['region_conns']
        region_amis = info._ec2['region_amis']
        logger = logging.getLogger(__name__)

        import time
        for region, region_ami in region_amis.items():
            conn = region_conns[region]
            current_image = conn.get_image(region_ami)
            while current_image.state == 'pending':
                logger.debug('Waiting for %s in %s (currently: %s)', region_ami, region, current_image.state)
                time.sleep(5)
                current_image = conn.get_image(region_ami)
            conn.modify_image_attribute(region_ami, attribute='launchPermission', operation='add', groups='all')
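To illustrate the URL dissection in `PublishAmiManifest`, here is the same parsing logic run standalone against a made-up path-style S3 manifest URL on a regional endpoint (the original is Python 2 and imports `urlparse`; the sketch uses the Python 3 equivalent):

```python
from urllib.parse import urlparse

manifest_url = 'https://s3-eu-west-1.amazonaws.com/example-bucket/amis.json'  # hypothetical
parsed_url = urlparse(manifest_url)
parsed_host = parsed_url.netloc

region = 'us-east-1'          # default when the hostname carries no regional hint
path = parsed_url.path[1:]
if 's3-' in parsed_host:      # regional endpoint such as s3-eu-west-1
    loc = parsed_host.find('s3-') + 3
    region = parsed_host[loc:parsed_host.find('.', loc)]
if '.s3' in parsed_host:      # virtual-hosted style: bucket name is in the hostname
    bucket = parsed_host[:parsed_host.find('.s3')]
else:                         # path-style: bucket name is the first path segment
    bucket, path = path.split('/', 1)
# region: eu-west-1, bucket: example-bucket, path: amis.json
```

A virtual-hosted URL such as `https://example-bucket.s3.amazonaws.com/amis.json` would instead take the `.s3` branch and read the bucket from the hostname.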


@@ -2,20 +2,20 @@ import tasks
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)

    for i, file_entry in enumerate(data['plugins']['file_copy']['files']):
        srcfile = file_entry['src']
        if not os.path.isfile(srcfile):
            msg = 'The source file %s does not exist.' % srcfile
            error(msg, ['plugins', 'file_copy', 'files', i])


def resolve_tasks(taskset, manifest):
    if ('mkdirs' in manifest.plugins['file_copy']):
        taskset.add(tasks.MkdirCommand)
    if ('files' in manifest.plugins['file_copy']):
        taskset.add(tasks.FileCopyCommand)


@@ -6,46 +6,46 @@ import shutil
def modify_path(info, path, entry):
    from bootstrapvz.common.tools import log_check_call
    if 'permissions' in entry:
        # We wrap the permissions string in str() in case
        # the user specified a numeric bitmask
        chmod_command = ['chroot', info.root, 'chmod', str(entry['permissions']), path]
        log_check_call(chmod_command)

    if 'owner' in entry:
        chown_command = ['chroot', info.root, 'chown', entry['owner'], path]
        log_check_call(chown_command)

    if 'group' in entry:
        chgrp_command = ['chroot', info.root, 'chgrp', entry['group'], path]
        log_check_call(chgrp_command)


class MkdirCommand(Task):
    description = 'Creating directories requested by user'
    phase = phases.user_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call

        for dir_entry in info.manifest.plugins['file_copy']['mkdirs']:
            mkdir_command = ['chroot', info.root, 'mkdir', '-p', dir_entry['dir']]
            log_check_call(mkdir_command)
            modify_path(info, dir_entry['dir'], dir_entry)


class FileCopyCommand(Task):
    description = 'Copying user specified files into the image'
    phase = phases.user_modification
    predecessors = [MkdirCommand]

    @classmethod
    def run(cls, info):
        for file_entry in info.manifest.plugins['file_copy']['files']:
            # note that we don't use os.path.join because it can't
            # handle absolute paths, which 'dst' most likely is.
            final_destination = os.path.normpath("%s/%s" % (info.root, file_entry['dst']))
            shutil.copy(file_entry['src'], final_destination)
            modify_path(info, file_entry['dst'], file_entry)


@@ -3,14 +3,14 @@ import os.path
def validate_manifest(data, validator, error):
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    taskset.add(tasks.AddGoogleCloudRepoKey)
    if manifest.plugins['google_cloud_repo'].get('enable_keyring_repo', False):
        taskset.add(tasks.AddGoogleCloudRepoKeyringRepo)
        taskset.add(tasks.InstallGoogleCloudRepoKeyringPackage)
    if manifest.plugins['google_cloud_repo'].get('cleanup_bootstrap_key', False):
        taskset.add(tasks.CleanupBootstrapRepoKey)


@@ -7,43 +7,43 @@ import os
class AddGoogleCloudRepoKey(Task):
    description = 'Adding Google Cloud Repo key.'
    phase = phases.package_installation
    predecessors = [apt.InstallTrustedKeys]
    successors = [apt.WriteSources]

    @classmethod
    def run(cls, info):
        key_file = os.path.join(info.root, 'google.gpg.key')
        log_check_call(['wget', 'https://packages.cloud.google.com/apt/doc/apt-key.gpg', '-O', key_file])
        log_check_call(['chroot', info.root, 'apt-key', 'add', 'google.gpg.key'])
        os.remove(key_file)


class AddGoogleCloudRepoKeyringRepo(Task):
    description = 'Adding Google Cloud keyring repository.'
    phase = phases.preparation
    predecessors = [apt.AddManifestSources]

    @classmethod
    def run(cls, info):
        info.source_lists.add('google-cloud', 'deb http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-{system.release} main')


class InstallGoogleCloudRepoKeyringPackage(Task):
    description = 'Installing Google Cloud key package.'
    phase = phases.preparation
    successors = [packages.AddManifestPackages]

    @classmethod
    def run(cls, info):
        info.packages.add('google-cloud-packages-archive-keyring')


class CleanupBootstrapRepoKey(Task):
    description = 'Cleaning up bootstrap repo key.'
    phase = phases.system_cleaning

    @classmethod
    def run(cls, info):
        os.remove(os.path.join(info.root, 'etc', 'apt', 'trusted.gpg'))


@@ -6,52 +6,52 @@ from bootstrapvz.common.tasks import locale
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.join(os.path.dirname(__file__), 'manifest-schema.yml')
    validator(data, schema_path)
    if data['plugins']['minimize_size'].get('shrink', False) and data['volume']['backing'] != 'vmdk':
        error('Can only shrink vmdk images', ['plugins', 'minimize_size', 'shrink'])


def resolve_tasks(taskset, manifest):
    taskset.update([tasks.mounts.AddFolderMounts,
                    tasks.mounts.RemoveFolderMounts,
                    ])
    if manifest.plugins['minimize_size'].get('zerofree', False):
        taskset.add(tasks.shrink.AddRequiredCommands)
        taskset.add(tasks.shrink.Zerofree)
    if manifest.plugins['minimize_size'].get('shrink', False):
        taskset.add(tasks.shrink.AddRequiredCommands)
        taskset.add(tasks.shrink.ShrinkVolume)
    if 'apt' in manifest.plugins['minimize_size']:
        apt = manifest.plugins['minimize_size']['apt']
        if apt.get('autoclean', False):
            taskset.add(tasks.apt.AutomateAptClean)
        if 'languages' in apt:
            taskset.add(tasks.apt.FilterTranslationFiles)
        if apt.get('gzip_indexes', False):
            taskset.add(tasks.apt.AptGzipIndexes)
        if apt.get('autoremove_suggests', False):
            taskset.add(tasks.apt.AptAutoremoveSuggests)
    filter_tasks = [tasks.dpkg.CreateDpkgCfg,
                    tasks.dpkg.InitializeBootstrapFilterList,
                    tasks.dpkg.CreateBootstrapFilterScripts,
                    tasks.dpkg.DeleteBootstrapFilterScripts,
                    ]
    if 'dpkg' in manifest.plugins['minimize_size']:
        dpkg = manifest.plugins['minimize_size']['dpkg']
        if 'locales' in dpkg:
            taskset.update(filter_tasks)
            taskset.add(tasks.dpkg.FilterLocales)
            # If no locales are selected, we don't need the locale package
            if len(dpkg['locales']) == 0:
                taskset.discard(locale.LocaleBootstrapPackage)
                taskset.discard(locale.GenerateLocale)
        if dpkg.get('exclude_docs', False):
            taskset.update(filter_tasks)
            taskset.add(tasks.dpkg.ExcludeDocs)


def resolve_rollback_tasks(taskset, manifest, completed, counter_task):
    counter_task(taskset, tasks.mounts.AddFolderMounts, tasks.mounts.RemoveFolderMounts)
    counter_task(taskset, tasks.dpkg.CreateBootstrapFilterScripts, tasks.dpkg.DeleteBootstrapFilterScripts)


@@ -8,55 +8,55 @@ from . import assets
class AutomateAptClean(Task):
    description = 'Configuring apt to always clean everything out when it\'s done'
    phase = phases.package_installation
    successors = [apt.AptUpdate]
    # Snatched from:
    # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap

    @classmethod
    def run(cls, info):
        shutil.copy(os.path.join(assets, 'apt-clean'),
                    os.path.join(info.root, 'etc/apt/apt.conf.d/90clean'))


class FilterTranslationFiles(Task):
    description = 'Configuring apt to only download and use specific translation files'
    phase = phases.package_installation
    successors = [apt.AptUpdate]
    # Snatched from:
    # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap

    @classmethod
    def run(cls, info):
        langs = info.manifest.plugins['minimize_size']['apt']['languages']
        config = '; '.join(map(lambda l: '"' + l + '"', langs))
        config_path = os.path.join(info.root, 'etc/apt/apt.conf.d/20languages')
        shutil.copy(os.path.join(assets, 'apt-languages'), config_path)
        sed_i(config_path, r'ACQUIRE_LANGUAGES_FILTER', config)


class AptGzipIndexes(Task):
    description = 'Configuring apt to always gzip lists files'
    phase = phases.package_installation
    successors = [apt.AptUpdate]
    # Snatched from:
    # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap

    @classmethod
    def run(cls, info):
        shutil.copy(os.path.join(assets, 'apt-gzip-indexes'),
                    os.path.join(info.root, 'etc/apt/apt.conf.d/20gzip-indexes'))


class AptAutoremoveSuggests(Task):
    description = 'Configuring apt to remove suggested packages when autoremoving'
    phase = phases.package_installation
    successors = [apt.AptUpdate]
    # Snatched from:
    # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap

    @classmethod
    def run(cls, info):
        shutil.copy(os.path.join(assets, 'apt-autoremove-suggests'),
                    os.path.join(info.root, 'etc/apt/apt.conf.d/20autoremove-suggests'))


@@ -9,140 +9,140 @@ from . import assets
class CreateDpkgCfg(Task):
    description = 'Creating /etc/dpkg/dpkg.cfg.d before bootstrapping'
    phase = phases.os_installation
    successors = [bootstrap.Bootstrap]

    @classmethod
    def run(cls, info):
        os.makedirs(os.path.join(info.root, 'etc/dpkg/dpkg.cfg.d'))


class InitializeBootstrapFilterList(Task):
    description = 'Initializing the bootstrapping filter list'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info._minimize_size['bootstrap_filter'] = {'exclude': [], 'include': []}


class CreateBootstrapFilterScripts(Task):
    description = 'Creating the bootstrapping locales filter script'
    phase = phases.os_installation
    successors = [bootstrap.Bootstrap]
    # Inspired by:
    # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap

    @classmethod
    def run(cls, info):
        if info.bootstrap_script is not None:
            from bootstrapvz.common.exceptions import TaskError
            raise TaskError('info.bootstrap_script seems to already be set '
                            'and is conflicting with this task')

        bootstrap_script = os.path.join(info.workspace, 'bootstrap_script.sh')
        filter_script = os.path.join(info.workspace, 'bootstrap_files_filter.sh')
        excludes_file = os.path.join(info.workspace, 'debootstrap-excludes')

        shutil.copy(os.path.join(assets, 'bootstrap-script.sh'), bootstrap_script)
        shutil.copy(os.path.join(assets, 'bootstrap-files-filter.sh'), filter_script)

        sed_i(bootstrap_script, r'DEBOOTSTRAP_EXCLUDES_PATH', excludes_file)
        sed_i(bootstrap_script, r'BOOTSTRAP_FILES_FILTER_PATH', filter_script)

        # We exclude with patterns but include with fixed strings
        # The pattern matching when excluding is needed in order to filter
        # everything below e.g. /usr/share/locale but not the folder itself
        filter_lists = info._minimize_size['bootstrap_filter']
        exclude_list = '\|'.join(map(lambda p: '.' + p + '.\+', filter_lists['exclude']))
        include_list = '\n'.join(map(lambda p: '.' + p, filter_lists['include']))
        sed_i(filter_script, r'EXCLUDE_PATTERN', exclude_list)
        sed_i(filter_script, r'INCLUDE_PATHS', include_list)
        os.chmod(filter_script, 0755)

        info.bootstrap_script = bootstrap_script
        info._minimize_size['filter_script'] = filter_script
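The exclude/include handling in `CreateBootstrapFilterScripts` is subtle: excluded folders are joined into a single sed-ready alternation pattern so that everything *below* a folder matches but the folder itself does not, while includes stay fixed strings. A minimal sketch of that transformation, using made-up filter paths (not taken from a real manifest):

```python
# Sketch of the list-to-pattern step; the paths are illustrative assumptions.
filter_lists = {
    'exclude': ['/usr/share/locale/', '/usr/share/man/'],
    'include': ['/usr/share/locale/locale.alias'],
}

# Each excluded folder becomes '.<path>.\+' so only its contents match;
# the pieces are joined with '\|' (alternation in sed's basic regex syntax).
exclude_list = r'\|'.join('.' + p + r'.\+' for p in filter_lists['exclude'])

# Included paths are fixed strings, one per line, each prefixed with '.'
include_list = '\n'.join('.' + p for p in filter_lists['include'])

print(exclude_list)  # ./usr/share/locale/.\+\|./usr/share/man/.\+
print(include_list)  # ./usr/share/locale/locale.alias
```

These two strings are what `sed_i` substitutes for the `EXCLUDE_PATTERN` and `INCLUDE_PATHS` placeholders in the filter script.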
class FilterLocales(Task):
    description = 'Configuring dpkg and debootstrap to only include specific locales/manpages when installing packages'
    phase = phases.os_installation
    predecessors = [CreateDpkgCfg]
    successors = [CreateBootstrapFilterScripts]
    # Snatched from:
    # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap
    # and
    # https://raphaelhertzog.com/2010/11/15/save-disk-space-by-excluding-useless-files-with-dpkg/

    @classmethod
    def run(cls, info):
        # Filter when debootstrapping
        info._minimize_size['bootstrap_filter']['exclude'].extend([
            '/usr/share/locale/',
            '/usr/share/man/',
        ])

        locales = info.manifest.plugins['minimize_size']['dpkg']['locales']
        info._minimize_size['bootstrap_filter']['include'].extend([
            '/usr/share/locale/locale.alias',
            '/usr/share/man/man1',
            '/usr/share/man/man2',
            '/usr/share/man/man3',
            '/usr/share/man/man4',
            '/usr/share/man/man5',
            '/usr/share/man/man6',
            '/usr/share/man/man7',
            '/usr/share/man/man8',
            '/usr/share/man/man9',
        ] +
            map(lambda l: '/usr/share/locale/' + l + '/', locales) +
            map(lambda l: '/usr/share/man/' + l + '/', locales)
        )

        # Filter when installing things with dpkg
        locale_lines = ['path-exclude=/usr/share/locale/*',
                        'path-include=/usr/share/locale/locale.alias']
        manpages_lines = ['path-exclude=/usr/share/man/*',
                          'path-include=/usr/share/man/man[1-9]']

        locales = info.manifest.plugins['minimize_size']['dpkg']['locales']
        locale_lines.extend(map(lambda l: 'path-include=/usr/share/locale/' + l + '/*', locales))
        manpages_lines.extend(map(lambda l: 'path-include=/usr/share/man/' + l + '/*', locales))

        locales_path = os.path.join(info.root, 'etc/dpkg/dpkg.cfg.d/10filter-locales')
        manpages_path = os.path.join(info.root, 'etc/dpkg/dpkg.cfg.d/10filter-manpages')

        with open(locales_path, 'w') as locale_filter:
            locale_filter.write('\n'.join(locale_lines) + '\n')
        with open(manpages_path, 'w') as manpages_filter:
            manpages_filter.write('\n'.join(manpages_lines) + '\n')
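To see what `FilterLocales` actually writes into the chroot, here is a sketch of the generated dpkg filter file for an assumed manifest with locales `['en', 'de']` (hypothetical values, not from a real build):

```python
# Sketch of the content of etc/dpkg/dpkg.cfg.d/10filter-locales
# for an assumed locale list; mirrors the locale_lines logic above.
locales = ['en', 'de']  # assumed example locales

locale_lines = ['path-exclude=/usr/share/locale/*',
                'path-include=/usr/share/locale/locale.alias']
locale_lines.extend('path-include=/usr/share/locale/' + l + '/*' for l in locales)

print('\n'.join(locale_lines))
```

dpkg applies the patterns in order, so the blanket `path-exclude` is punched through by the later `path-include` lines for each kept locale.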
class ExcludeDocs(Task):
    description = 'Configuring dpkg and debootstrap to not install additional documentation for packages'
    phase = phases.os_installation
    predecessors = [CreateDpkgCfg]
    successors = [CreateBootstrapFilterScripts]

    @classmethod
    def run(cls, info):
        # "Packages must not require the existence of any files in /usr/share/doc/ in order to function [...]."
        # Source: https://www.debian.org/doc/debian-policy/ch-docs.html
        # So doing this should cause no problems.
        info._minimize_size['bootstrap_filter']['exclude'].append('/usr/share/doc/')
        exclude_docs_path = os.path.join(info.root, 'etc/dpkg/dpkg.cfg.d/10exclude-docs')
        with open(exclude_docs_path, 'w') as exclude_docs:
            exclude_docs.write('path-exclude=/usr/share/doc/*\n')


class DeleteBootstrapFilterScripts(Task):
    description = 'Deleting the bootstrapping locales filter script'
    phase = phases.cleaning
    successors = [workspace.DeleteWorkspace]

    @classmethod
    def run(cls, info):
        os.remove(info._minimize_size['filter_script'])
        del info._minimize_size['filter_script']
        os.remove(info.bootstrap_script)


@@ -8,36 +8,36 @@ folders = ['tmp', 'var/lib/apt/lists']
class AddFolderMounts(Task):
    description = 'Mounting folders for writing temporary and cache data'
    phase = phases.os_installation
    predecessors = [bootstrap.Bootstrap]

    @classmethod
    def run(cls, info):
        info._minimize_size['foldermounts'] = os.path.join(info.workspace, 'minimize_size')
        os.mkdir(info._minimize_size['foldermounts'])
        for folder in folders:
            temp_path = os.path.join(info._minimize_size['foldermounts'], folder.replace('/', '_'))
            os.mkdir(temp_path)

            full_path = os.path.join(info.root, folder)
            info.volume.partition_map.root.add_mount(temp_path, full_path, ['--bind'])


class RemoveFolderMounts(Task):
    description = 'Removing folder mounts for temporary and cache data'
    phase = phases.system_cleaning
    successors = [apt.AptClean]

    @classmethod
    def run(cls, info):
        import shutil
        for folder in folders:
            temp_path = os.path.join(info._minimize_size['foldermounts'], folder.replace('/', '_'))
            full_path = os.path.join(info.root, folder)

            info.volume.partition_map.root.remove_mount(full_path)
            shutil.rmtree(temp_path)

        os.rmdir(info._minimize_size['foldermounts'])
        del info._minimize_size['foldermounts']


@@ -9,37 +9,37 @@ import os
class AddRequiredCommands(Task):
    description = 'Adding commands required for reducing volume size'
    phase = phases.preparation
    successors = [host.CheckExternalCommands]

    @classmethod
    def run(cls, info):
        if info.manifest.plugins['minimize_size'].get('zerofree', False):
            info.host_dependencies['zerofree'] = 'zerofree'
        if info.manifest.plugins['minimize_size'].get('shrink', False):
            link = 'https://my.vmware.com/web/vmware/info/slug/desktop_end_user_computing/vmware_workstation/10_0'
            info.host_dependencies['vmware-vdiskmanager'] = link


class Zerofree(Task):
    description = 'Zeroing unused blocks on the root partition'
    phase = phases.volume_unmounting
    predecessors = [filesystem.UnmountRoot]
    successors = [partitioning.UnmapPartitions, volume.Detach]

    @classmethod
    def run(cls, info):
        log_check_call(['zerofree', info.volume.partition_map.root.device_path])


class ShrinkVolume(Task):
    description = 'Shrinking the volume'
    phase = phases.volume_unmounting
    predecessors = [volume.Detach]

    @classmethod
    def run(cls, info):
        perm = os.stat(info.volume.image_path).st_mode & 0777
        log_check_call(['/usr/bin/vmware-vdiskmanager', '-k', info.volume.image_path])
        os.chmod(info.volume.image_path, perm)
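`ShrinkVolume` snapshots the image's permission bits before invoking the external tool and restores them afterwards, because the tool may recreate the file with a different mode. A sketch of that pattern as a reusable helper (`action` is a hypothetical stand-in for the `vmware-vdiskmanager` call, not part of the plugin's API):

```python
import os


def preserve_mode(path, action):
    """Run action(path), then restore the file's permission bits.

    Sketch of the pattern ShrinkVolume uses; 0o777 is the Python 3
    spelling of the 0777 octal mask in the Python 2 code above.
    """
    perm = os.stat(path).st_mode & 0o777
    try:
        action(path)
    finally:
        os.chmod(path, perm)
```

Wrapping the shrink step this way keeps the image accessible to the invoking user even if the external tool resets the mode.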


@@ -1,11 +1,11 @@
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    import tasks
    taskset.add(tasks.AddNtpPackage)
    if manifest.plugins['ntp'].get('servers', False):
        taskset.add(tasks.SetNtpServers)


@@ -3,30 +3,30 @@ from bootstrapvz.common import phases
class AddNtpPackage(Task):
    description = 'Adding NTP Package'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('ntp')


class SetNtpServers(Task):
    description = 'Setting NTP servers'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        import fileinput
        import os
        import re
        ntp_path = os.path.join(info.root, 'etc/ntp.conf')
        servers = list(info.manifest.plugins['ntp']['servers'])
        debian_ntp_server = re.compile('.*[0-9]\.debian\.pool\.ntp\.org.*')
        for line in fileinput.input(files=ntp_path, inplace=True):
            # Will write all the specified servers on the first match, then suppress all other default servers
            if re.match(debian_ntp_server, line):
                while servers:
                    print 'server {server_address} iburst'.format(server_address=servers.pop(0))
            else:
                print line,
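The in-place rewrite in `SetNtpServers` can be hard to follow because `fileinput(inplace=True)` redirects `print` into the file. The same logic as a pure function over a list of lines (a sketch for illustration, not the plugin's actual API):

```python
import re


def rewrite_ntp_conf(lines, servers):
    """Replace default Debian pool server lines with the configured servers.

    Mirrors the task above: the first matching default-server line is
    replaced by all configured servers; later matches are dropped because
    the servers list is already empty by then.
    """
    debian_ntp_server = re.compile(r'.*[0-9]\.debian\.pool\.ntp\.org.*')
    servers = list(servers)
    out = []
    for line in lines:
        if debian_ntp_server.match(line):
            while servers:
                out.append('server %s iburst' % servers.pop(0))
        else:
            out.append(line)
    return out
```

For example, two default pool lines plus a `driftfile` line and one configured server yield a single `server ... iburst` line followed by the untouched `driftfile` line.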


@@ -1,9 +1,9 @@
def resolve_tasks(taskset, manifest):
    import tasks
    from bootstrapvz.common.tasks import apt
    from bootstrapvz.common.releases import wheezy
    if manifest.release == wheezy:
        taskset.add(apt.AddBackports)
    taskset.update([tasks.AddONEContextPackage])


@@ -4,14 +4,14 @@ from bootstrapvz.common import phases
class AddONEContextPackage(Task):
    description = 'Adding the OpenNebula context package'
    phase = phases.preparation
    predecessors = [apt.AddBackports]

    @classmethod
    def run(cls, info):
        target = None
        from bootstrapvz.common.releases import wheezy
        if info.manifest.release == wheezy:
            target = '{system.release}-backports'
        info.packages.add('opennebula-context', target)


@@ -2,11 +2,11 @@ import tasks
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    taskset.add(tasks.AddPipPackage)
    taskset.add(tasks.PipInstallCommand)


@@ -3,23 +3,23 @@ from bootstrapvz.common import phases
class AddPipPackage(Task):
    description = 'Adding `pip\' and Co. to the image packages'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        for package_name in ('python-pip', 'build-essential', 'python-dev'):
            info.packages.add(package_name)


class PipInstallCommand(Task):
    description = 'Install python packages from pypi with pip'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        packages = info.manifest.plugins['pip_install']['packages']
        pip_install_command = ['chroot', info.root, 'pip', 'install']
        pip_install_command.extend(packages)
        log_check_call(pip_install_command)


@@ -14,44 +14,44 @@ from bootstrapvz.common.tasks import partitioning
def validate_manifest(data, validator, error):
    import os.path
    schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
    validator(data, schema_path)


def resolve_tasks(taskset, manifest):
    settings = manifest.plugins['prebootstrapped']
    skip_tasks = [ebs.Create,
                  loopback.Create,

                  filesystem.Format,
                  partitioning.PartitionVolume,
                  filesystem.TuneVolumeFS,
                  filesystem.AddXFSProgs,
                  filesystem.CreateBootMountDir,

                  apt.DisableDaemonAutostart,
                  locale.GenerateLocale,
                  bootstrap.MakeTarball,
                  bootstrap.Bootstrap,
                  guest_additions.InstallGuestAdditions,
                  ]
    if manifest.volume['backing'] == 'ebs':
        if settings.get('snapshot', None) is not None:
            taskset.add(CreateFromSnapshot)
            [taskset.discard(task) for task in skip_tasks]
        else:
            taskset.add(Snapshot)
    else:
        if settings.get('image', None) is not None:
            taskset.add(CreateFromImage)
            [taskset.discard(task) for task in skip_tasks]
        else:
            taskset.add(CopyImage)


def resolve_rollback_tasks(taskset, manifest, completed, counter_task):
    if manifest.volume['backing'] == 'ebs':
        counter_task(taskset, CreateFromSnapshot, volume.Delete)
    else:
        counter_task(taskset, CreateFromImage, volume.Delete)


@@ -13,83 +13,83 @@ log = logging.getLogger(__name__)
class Snapshot(Task):
    description = 'Creating a snapshot of the bootstrapped volume'
    phase = phases.package_installation
    predecessors = [packages.InstallPackages, guest_additions.InstallGuestAdditions]

    @classmethod
    def run(cls, info):
        snapshot = None
        with unmounted(info.volume):
            snapshot = info.volume.snapshot()
        msg = 'A snapshot of the bootstrapped volume was created. ID: ' + snapshot.id
        log.info(msg)


class CreateFromSnapshot(Task):
    description = 'Creating EBS volume from a snapshot'
    phase = phases.volume_creation
    successors = [ebs.Attach]

    @classmethod
    def run(cls, info):
        snapshot = info.manifest.plugins['prebootstrapped']['snapshot']
        ebs_volume = info._ec2['connection'].create_volume(info.volume.size.bytes.get_qty_in('GiB'),
                                                           info._ec2['host']['availabilityZone'],
                                                           snapshot=snapshot)
        while ebs_volume.volume_state() != 'available':
            time.sleep(5)
            ebs_volume.update()

        info.volume.volume = ebs_volume
        set_fs_states(info.volume)


class CopyImage(Task):
    description = 'Creating a copy of the bootstrapped volume'
    phase = phases.package_installation
    predecessors = [packages.InstallPackages, guest_additions.InstallGuestAdditions]

    @classmethod
    def run(cls, info):
        loopback_backup_name = 'volume-{id}.{ext}.backup'.format(id=info.run_id, ext=info.volume.extension)
        destination = os.path.join(info.manifest.bootstrapper['workspace'], loopback_backup_name)

        with unmounted(info.volume):
            copyfile(info.volume.image_path, destination)
        msg = 'A copy of the bootstrapped volume was created. Path: ' + destination
        log.info(msg)


class CreateFromImage(Task):
    description = 'Creating loopback image from a copy'
    phase = phases.volume_creation
    successors = [volume.Attach]

    @classmethod
    def run(cls, info):
        info.volume.image_path = os.path.join(info.workspace, 'volume.' + info.volume.extension)
        loopback_backup_path = info.manifest.plugins['prebootstrapped']['image']
        copyfile(loopback_backup_path, info.volume.image_path)

        set_fs_states(info.volume)


def set_fs_states(volume):
    volume.fsm.current = 'detached'

    p_map = volume.partition_map
    from bootstrapvz.base.fs.partitionmaps.none import NoPartitions
    if not isinstance(p_map, NoPartitions):
        p_map.fsm.current = 'unmapped'

    from bootstrapvz.base.fs.partitions.unformatted import UnformattedPartition
    from bootstrapvz.base.fs.partitions.single import SinglePartition
    for partition in p_map.partitions:
        if isinstance(partition, UnformattedPartition):
            partition.fsm.current = 'unmapped'
            continue
        if isinstance(partition, SinglePartition):
            partition.fsm.current = 'formatted'
            continue
        partition.fsm.current = 'unmapped_fmt'
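All of the whitespace churn in this diff traces back to the commit's motivation: in an indentation-aware language, mixing tabs (for indentation) with spaces (for alignment) lets grouping mistakes slip through unnoticed. A small illustration of the hazard (a hypothetical snippet, not taken from bootstrap-vz): Python 2 silently treated a tab as eight columns, while Python 3 refuses to compile an ambiguous mix at all.

```python
# Line 2 below is indented with a tab, line 3 with eight spaces. In many
# editors the two lines appear aligned, but the indentation is ambiguous:
# under tab width 8 they match, under tab width 1 they do not.
source = "if True:\n\tx = 1\n        y = 2\n"

try:
    compile(source, "<example>", "exec")
    outcome = "compiled"
except TabError as exc:
    # Python 3 rejects the mix instead of guessing which block y = 2 joins.
    outcome = "rejected: " + str(exc)

print(outcome)
```

Converting everything to 4-space indentation, and re-enabling the flake8 checks that police it (E101, W191), removes this class of ambiguity from the codebase entirely.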
