Merge branch 'development'

This commit is contained in:
Anders Ingemann 2015-05-02 22:33:04 +02:00
commit f88a1b0af5
331 changed files with 7910 additions and 3016 deletions

2
.gitignore vendored
View file

@ -12,3 +12,5 @@
# Testing
/.coverage
/.tox/
/build-servers.yml
/integration.html

View file

@ -1,12 +0,0 @@
2014-05-04:
Dhananjay Balan:
* Salt minion installation & configuration plugin
* Expose debootstrap --include-packages and --exclude-packages options to manifest
2014-05-03:
Anders Ingemann:
* Require hostname setting for vagrant plugin
* Fixes #14: S3 images can now be bootstrapped outside EC2.
* Added enable_agent option to puppet plugin
2014-05-02:
Tomasz Rybak:
* Added Google Compute Engine Provider

146
CHANGELOG.rst Normal file
View file

@ -0,0 +1,146 @@
Changelog
=========
2015-05-02
----------
Anders Ingemann:
* Fix #32: Add image_commands example
* Fix #99: rename image_commands to commands
* Fix #139: Vagrant / Virtualbox provider should set ostype when 32 bits selected
* Fix #204: Create a new phase where user modification tasks can run
2015-04-29
----------
Anders Ingemann:
* Fix #104: Don't verify default target when adding packages
* Fix #217: Implement get_version() function in common.tools
2015-04-28
----------
Jonh Wendell:
* root_password: Enable SSH root login
2015-04-27
----------
John Kristensen:
* Add authentication support to the apt proxy plugin
2015-04-25
----------
Anders Ingemann (work started 2014-08-31, merged on 2015-04-25):
* Introduce `remote bootstrapping <bootstrapvz/remote>`__
* Introduce `integration testing <tests/integration>`__ (for VirtualBox and EC2)
* Merge the end-user documentation into the sphinx docs
(plugin & provider docs are now located in their respective folders as READMEs)
* Include READMEs in sphinx docs and transform their links
* Docs for integration testing
* Document the remote bootstrapping procedure
* Add documentation about the documentation
* Add list of supported builds to the docs
* Add html output to integration tests
* Implement PR #201 by @jszwedko (bump required euca2ools version)
* grub now works on jessie
* extlinux is now running on jessie
* Issue warning when specifying pre/successors across phases (but still error out if it's a conflict)
* Add salt dependencies in the right phase
* extlinux now works with GPT on HVM instances
* Take @ssgelm's advice in #155 and copy the mount table -- df warnings no more
* Generally deny installing grub on squeeze (too much of a hassle to get working, PRs welcome)
* Add 1 sector gap between partitions on GPT
* Add new task: DeterminKernelVersion, this can potentially fix a lot of small problems
* Disable getty processes on jessie through logind config
* Partition volumes by sectors instead of bytes
This allows for finer grained control over the partition sizes and gaps
Add new Sectors unit, enhance Bytes unit, add unit tests for both
* Don't require qemu for raw volumes, use `truncate` instead
* Fix #179: Disabling getty processes task fails half the time
* Split grub and extlinux installs into separate modules
* Fix extlinux config for squeeze
* Fix #136: Make extlinux output boot messages to the serial console
* Extend sed_i to raise Exceptions when the expected amount of replacements is not met
Jonas Bergler:
* Fixes #145: Fix installation of vbox guest additions.
Tiago Ilieve:
* Fixes #142: msdos partition type incorrect for swap partition (Linux)
2015-04-23
----------
Tiago Ilieve:
* Fixes #212: Sparse file is created on the current directory
2014-11-23
----------
Noah Fontes:
* Add support for enhanced networking on EC2 images
2014-07-12
----------
Tiago Ilieve:
* Fixes #96: AddBackports is now a common task
2014-07-09
----------
Anders Ingemann:
* Allow passing data into the manifest
* Refactor logging setup to be more modular
* Convert every JSON file to YAML
* Convert "provider" into provider specific section
2014-07-02
----------
Vladimir Vitkov:
* Improve grub options to work better with virtual machines
2014-06-30
----------
Tomasz Rybak:
* Return information about created image
2014-06-22
----------
Victor Marmol:
* Enable the memory cgroup for the Docker plugin
2014-06-19
----------
Tiago Ilieve:
* Fixes #94: allow stable/oldstable as release name on manifest
Vladimir Vitkov:
* Improve ami listing performance
2014-06-07
----------
Tiago Ilieve:
* Download `gsutil` tarball to workspace instead of working directory
* Fixes #97: remove raw disk image created by GCE after build
2014-06-06
----------
Ilya Margolin:
* pip_install plugin
2014-05-23
----------
Tiago Ilieve:
* Fixes #95: check if the specified APT proxy server can be reached
2014-05-04
----------
Dhananjay Balan:
* Salt minion installation & configuration plugin
* Expose debootstrap --include-packages and --exclude-packages options to manifest
2014-05-03
----------
Anders Ingemann:
* Require hostname setting for vagrant plugin
* Fixes #14: S3 images can now be bootstrapped outside EC2.
* Added enable_agent option to puppet plugin
2014-05-02
----------
Tomasz Rybak:
* Added Google Compute Engine Provider

View file

@ -1,42 +0,0 @@
Contributing
============
Do you want to contribute to the bootstrap-vz project? Nice! Here is the basic workflow:
* Read the [development guidelines](http://bootstrap-vz.readthedocs.org/en/master/guidelines.html)
* Fork this repository.
* Make any changes you want/need.
* Check the coding style of your changes using [tox](http://tox.readthedocs.org/) by running `tox -e flake8`
and fix any warnings that may appear.
This check will be repeated by [Travis CI](https://travis-ci.org/andsens/bootstrap-vz)
once you send a pull request, so it's better if you check this beforehand.
* If the change is significant (e.g. a new plugin, manifest setting or security fix)
add your name and contribution to the [CHANGELOG](CHANGELOG).
* Commit your changes.
* Squash the commits if needed. For instance, it is fine if you have multiple commits describing atomic units
of work, but there's no reason to have many little commits just because of corrected typos.
* Push to your fork, preferably on a topic branch.
From here on there are two paths to consider:
If your patch is a new feature, e.g.: plugin, provider, etc. then:
* Send a pull request to the `development` branch. It will be merged into the `master` branch when we can make
sure that the code is stable.
If it is a bug/security fix:
* Send a pull request to the `master` branch.
--
Please try to be very descriptive about your changes when you write a pull request, stating what it does, why
it is needed, which use cases this change covers etc.
You may be asked to rebase your work on the current branch state, so it can be merged cleanly.
If you push a new commit to your pull request you will have to add a new comment to the PR,
provided that you want us notified. Github will otherwise not send a notification.
Be aware that your modifications need to be properly documented and pushed to the `gh-pages` branch, if they
concern anything done on `master`. Otherwise, they should be sent to the `gh-pages-dev`.
Happy hacking! :-)

165
CONTRIBUTING.rst Normal file
View file

@ -0,0 +1,165 @@
Contributing
============
Sending pull requests
---------------------
Do you want to contribute to the bootstrap-vz project? Nice! Here is the basic workflow:
* Read the `development guidelines <#development-guidelines>`__
* Fork this repository.
* Make any changes you want/need.
* Check the coding style of your changes using `tox <http://tox.readthedocs.org/>`__ by running `tox -e flake8`
and fix any warnings that may appear.
This check will be repeated by `Travis CI <https://travis-ci.org/andsens/bootstrap-vz>`__
once you send a pull request, so it's better if you check this beforehand.
* If the change is significant (e.g. a new plugin, manifest setting or security fix)
add your name and contribution to the `changelog <CHANGELOG.rst>`__.
* Commit your changes.
* Squash the commits if needed. For instance, it is fine if you have multiple commits describing atomic units
of work, but there's no reason to have many little commits just because of corrected typos.
* Push to your fork, preferably on a topic branch.
* Send a pull request to the `master` branch.
Please try to be very descriptive about your changes when you write a pull request, stating what it does, why
it is needed, which use cases this change covers, etc.
You may be asked to rebase your work on the current branch state, so it can be merged cleanly.
If you push a new commit to your pull request you will have to add a new comment to the PR,
provided that you want us notified. Github will otherwise not send a notification.
Be aware that your modifications need to be properly documented. Please take a look at the
`documentation section <#documentation>`__ to see how to do that.
Happy hacking! :-)
Development guidelines
----------------------
The following guidelines should serve as general advice when
developing providers or plugins for bootstrap-vz. Keep in mind that
these guidelines are not rules; they are advice on how to better add
value to the bootstrap-vz codebase.
The manifest should always fully describe the resulting image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The outcome of a bootstrapping process should never depend on settings
specified elsewhere.
This allows others to easily reproduce any setup other people are running
and makes it possible to share manifests.
`The official debian EC2 images`__ for example can be reproduced
using the manifests available in the manifest directory of bootstrap-vz.
__ https://aws.amazon.com/marketplace/seller-profile?id=890be55d-32d8-4bc8-9042-2b4fd83064d5
The bootstrapper should always be able to run fully unattended
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For end users, this guideline minimizes the risk of errors. Any
required input would also be in direct conflict with the previous
guideline that the manifest should always fully describe the resulting
image.
Additionally, developers may have to run the bootstrap
process multiple times; any prompts in the middle of that
process can significantly slow down development.
The bootstrapper should only need as much setup as the manifest requires
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Having to shuffle specific paths on the host into place
(e.g. ``/target`` has to be created manually) to get the bootstrapper
running is going to increase the rate of errors made by users.
Aim for minimal setup.
Exceptions are of course things such as the path to
the VirtualBox Guest Additions ISO or tools like ``parted`` that
need to be installed on the host.
Roll complexity into which tasks are added to the tasklist
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a ``run()`` function checks whether it should do any work or simply be
skipped, consider doing that check in ``resolve_tasks()`` instead and
avoid adding that task altogether. This allows people looking at the
tasklist in the logfile to determine what work has been performed.
If a task says it will modify a file but then bails, a developer may get
confused when looking at that file after bootstrapping. He could
conclude that the file has either been overwritten or that the
search & replace does not work correctly.
Control flow should be directed from the task graph
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avoid creating complicated ``run()`` functions. If necessary, split up
a function into two semantically separate tasks.
This allows other tasks to interleave with the control-flow and add extended
functionality (e.g. because volume creation and mounting are two
separate tasks, `the prebootstrapped plugin
<bootstrapvz/plugins/prebootstrapped>`__
can replace the volume creation task with a task of its own that
creates a volume from a snapshot instead, but still reuse the mount task).
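A sketch of that interleaving, again with made-up task names and an assumed ``volume_creation`` phase; the point is only that a plugin's ``resolve_tasks()`` can swap one node of the graph while the mount task downstream stays untouched.

.. code:: python

    from bootstrapvz.base import Task
    from bootstrapvz.common import phases


    class CreateVolume(Task):
        description = 'Creating the volume'
        phase = phases.volume_creation

        @classmethod
        def run(cls, info):
            pass  # would create an empty volume here


    class CreateVolumeFromSnapshot(Task):
        description = 'Creating the volume from a snapshot'
        phase = phases.volume_creation

        @classmethod
        def run(cls, info):
            pass  # would restore a snapshot into the volume here


    def resolve_tasks(taskset, manifest):
        # Same phase, different implementation -- mounting tasks are unaffected
        taskset.discard(CreateVolume)
        taskset.add(CreateVolumeFromSnapshot)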
Task classes should be treated as decorated run() functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Tasks should not have any state; that's what the
BootstrapInformation object is for.
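A small sketch (hypothetical task, phase and attribute names): any state a task produces belongs on the info object passed to ``run()``, never on the class.

.. code:: python

    from bootstrapvz.base import Task
    from bootstrapvz.common import phases


    class DownloadAssets(Task):
        description = 'Downloading assets'
        phase = phases.os_installation  # phase name is illustrative

        @classmethod
        def run(cls, info):
            # Store results on info, not on cls -- the class stays a named run() function
            info.assets_path = '/tmp/assets'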
Only add stuff to the BootstrapInformation object when really necessary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is mainly to avoid clutter.
Use a json-schema to check for allowed settings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The json-schema may be verbose but it keeps the bulk of check work outside the
python code, which is a big plus when it comes to readability.
This only applies as long as the checks are simple.
You can of course fall back to doing the check in python when that solution is
considerably less complex.
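As a hedged illustration of the trade-off, using the ``jsonschema`` library directly (bootstrap-vz loads its schemas from files; the inline schema and settings below are made up):

.. code:: python

    import jsonschema

    # The allowed settings are declared, not hand-checked in python code.
    schema = {
        'type': 'object',
        'properties': {
            'assets': {'type': 'string'},
            'timeout': {'type': 'integer', 'minimum': 1},
        },
        'required': ['assets'],
        'additionalProperties': False,
    }

    jsonschema.validate({'assets': '/srv/assets', 'timeout': 30}, schema)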
When invoking external programs, use long options whenever possible
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This makes the commands a lot easier to understand, since
the option names usually hint at what they do.
When invoking external programs, don't use full paths, rely on ``$PATH``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This increases robustness when executable locations change.
Example: Use ``log_call(['wget', ...])`` instead of ``log_call(['/usr/bin/wget', ...])``.
Coding style
------------
bootstrap-vz is coded to comply closely with the PEP8 style
guidelines. There are however a few exceptions:
* Max line length is 110 chars, not 80.
* Multiple assignments may be aligned with spaces so that the ``=`` signs
  align vertically.
* Ignore ``E101``: Indent with tabs and align with spaces
* Ignore ``E221 & E241``: Alignment of assignments
* Ignore ``E501``: The max line length is not 80 characters
* Ignore ``W191``: Indent with tabs not spaces
The codebase can be checked for any violations quite easily, since those rules are already specified in the
`tox <http://tox.readthedocs.org/>`__ configuration file.
::
tox -e flake8
Documentation
-------------
When developing a provider or plugin, make sure to update/create the README.rst
located in the provider/plugin folder.
Any links to other rst files should be relative and work when viewed on GitHub.
For information on `how to build the documentation <docs#building>`_ and how
the various parts fit together,
refer to `the documentation about the documentation <docs>`__ :-)

View file

@ -2,3 +2,4 @@ include LICENSE
include manifests/*
recursive-include bootstrapvz assets/*
recursive-include bootstrapvz *.json
recursive-include bootstrapvz *.yml

View file

@ -1,45 +0,0 @@
bootstrap-vz
===========================================
bootstrap-vz is a bootstrapping framework for Debian.
It is specifically targeted at bootstrapping systems for virtualized environments.
bootstrap-vz runs without any user intervention and generates ready-to-boot images for
[a number of virtualization platforms](http://andsens.github.io/bootstrap-vz/providers.html).
Its aim is to provide a reproducible bootstrapping process using [manifests](http://andsens.github.io/bootstrap-vz/manifest.html) as well as supporting a high degree of customizability through plugins.
bootstrap-vz was coded from scratch in python once the bash script architecture that was used in the
[build-debian-cloud](https://github.com/andsens/build-debian-cloud) bootstrapper reached its
limits.
Documentation
-------------
The end-user documentation for bootstrap-vz is available
at [andsens.github.io/bootstrap-vz](http://andsens.github.io/bootstrap-vz).
There, you can discover [what the dependencies](http://andsens.github.io/bootstrap-vz/#dependencies)
for a specific cloud provider are, [see a list of available plugins](http://andsens.github.io/bootstrap-vz/plugins.html)
and learn [how you create a manifest](http://andsens.github.io/bootstrap-vz/manifest.html).
Installation
------------
bootstrap-vz has a master branch for stable releases and a development for, well, development.
After checking out the branch of your choice you can install the python dependencies by running
`python setup.py install`. However, depending on what kind of image you'd like to bootstrap,
there are other debian package dependencies as well, at the very least you will need `debootstrap`.
[The documentation](http://andsens.github.io/bootstrap-vz/) explains this in more detail.
Note that bootstrap-vz will tell you which tools it requires when they aren't
present (the different packages are mentioned in the error message), so you can
simply run bootstrap-vz once to get a list of the packages, install them,
and then re-run.
Developers
----------
The API documentation, development guidelines and an explanation of bootstrap-vz internals
can be found at [bootstrap-vz.readthedocs.org](http://bootstrap-vz.readthedocs.org).
Contributing
------------
Contribution guidelines are described on the [CONTRIBUTING](CONTRIBUTING.md) file. There's also a
[topic on the documentation](http://bootstrap-vz.readthedocs.org/en/development/guidelines.html#coding-style)
regarding the coding style.

153
README.rst Normal file
View file

@ -0,0 +1,153 @@
bootstrap-vz
============
bootstrap-vz is a bootstrapping framework for Debian that creates ready-to-boot
images able to run on a number of cloud providers and virtual machines.
bootstrap-vz runs without any user intervention and
generates ready-to-boot images for a number of virtualization
platforms.
Its aim is to provide a reproducible bootstrapping process using
`manifests <manifests>`__
as well as supporting a high degree of customizability through plugins.
bootstrap-vz was coded from scratch in python once the bash script
architecture that was used in the
`build-debian-cloud <https://github.com/andsens/build-debian-cloud>`__
bootstrapper reached its limits.
Documentation
-------------
The documentation for bootstrap-vz is available at
`bootstrap-vz.readthedocs.org <http://bootstrap-vz.readthedocs.org/en/master>`__.
There, you can discover `what the dependencies <#dependencies>`__ for
a specific cloud provider are, `see a list of available plugins <bootstrapvz/plugins>`__
and learn `how you create a manifest <manifests>`__.
Note to developers: `The documentation <docs>`__ is generated in
a rather peculiar and nifty way.
Installation
------------
bootstrap-vz has a master branch for stable releases and a development
branch for, well, development.
After checking out the branch of your choice you can install the
python dependencies by running ``python setup.py install``. However,
depending on what kind of image you'd like to bootstrap, there are
other debian package dependencies as well, at the very least you will
need ``debootstrap``.
`The documentation <http://bootstrap-vz.readthedocs.org/en/master>`__
explains this in more detail.
Note that bootstrap-vz will tell you which tools it requires when they
aren't present (the different packages are mentioned in the error
message), so you can simply run bootstrap-vz once to get a list of the
packages, install them, and then re-run.
Quick start
-----------
Here are a few quickstart tutorials for the most common images.
If you plan on partitioning your volume, you will need the ``parted``
package and ``kpartx``:
.. code:: sh
root@host:~# apt-get install parted kpartx
Note that you can always abort a bootstrapping process by pressing
``Ctrl+C``. bootstrap-vz will then initiate a cleanup/rollback process,
where volumes are detached/deleted and temporary files removed. Pressing
``Ctrl+C`` a second time shortcuts that procedure, halts the cleanup and
quits the process.
VirtualBox Vagrant
~~~~~~~~~~~~~~~~~~
.. code:: sh
user@host:~$ sudo -i # become root
root@host:~# git clone https://github.com/andsens/bootstrap-vz.git # Clone the repo
root@host:~# apt-get install qemu-utils debootstrap python-pip # Install dependencies from aptitude
root@host:~# pip install termcolor jsonschema fysom docopt pyyaml # Install python dependencies
root@host:~# bootstrap-vz/bootstrap-vz bootstrap-vz/manifests/virtualbox-vagrant.manifest.yml
If you want to use the `minimize\_size <bootstrapvz/plugins/minimize_size>`__ plugin,
you will have to install the ``zerofree`` package and `VMWare Workstation`__ as well.
__ https://my.vmware.com/web/vmware/info/slug/desktop_end_user_computing/vmware_workstation/10_0
Amazon EC2 EBS backed AMI
~~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: sh
user@host:~$ sudo -i # become root
root@host:~# git clone https://github.com/andsens/bootstrap-vz.git # Clone the repo
root@host:~# apt-get install debootstrap python-pip # Install dependencies from aptitude
root@host:~# pip install termcolor jsonschema fysom docopt pyyaml boto # Install python dependencies
root@host:~# bootstrap-vz/bootstrap-vz bootstrap-vz/manifests/ec2-ebs-debian-official-amd64-pvm.manifest.yml
To bootstrap S3 backed AMIs, bootstrap-vz will also need the
``euca2ools`` package. However, version 3.2.0 is required, meaning you
must install it directly from the eucalyptus repository like this:
.. code:: sh
apt-get install --no-install-recommends python-dev libxml2-dev libxslt-dev gcc
pip install git+git://github.com/eucalyptus/euca2ools.git@v3.2.0
Cleanup
-------
bootstrap-vz tries very hard to clean up after itself, both when a run
was successful and when it failed. This ensures that you are not left
with useless volumes still attached to the host. If an error occurred
you can simply correct the problem that caused it and rerun everything;
there will be no leftovers from the previous run (as always, there are
of course rare/unlikely exceptions to that rule). The error messages
should always give you a strong hint at what is wrong; if that is not
the case, please consider `opening an issue`__ and attach both the
error message and your manifest (preferably as a gist or similar).
__ https://github.com/andsens/bootstrap-vz/issues
Dependencies
------------
bootstrap-vz has a number of dependencies depending on the target
platform and `the selected plugins <bootstrapvz/plugins>`__.
At a bare minimum the following python libraries are needed:
* `termcolor <https://pypi.python.org/pypi/termcolor>`__
* `fysom <https://pypi.python.org/pypi/fysom>`__
* `jsonschema <https://pypi.python.org/pypi/jsonschema>`__
* `docopt <https://pypi.python.org/pypi/docopt>`__
* `pyyaml <https://pypi.python.org/pypi/pyyaml>`__
To bootstrap Debian itself `debootstrap`__ is needed as well.
__ https://packages.debian.org/wheezy/debootstrap
Any other requirements are dependent upon the manifest configuration
and are detailed in the corresponding sections of the documentation.
bootstrap-vz will however warn you if a requirement has not been met,
before the bootstrapping process begins.
Developers
----------
The API documentation, development guidelines and an explanation of
bootstrap-vz internals can be found at `bootstrap-vz.readthedocs.org`__.
__ http://bootstrap-vz.readthedocs.org/en/master/developers
Contributing
------------
Contribution guidelines are described in the documentation under `Contributing <CONTRIBUTING.rst>`__.
There's also a topic regarding `the coding style <CONTRIBUTING.rst#coding-style>`__.

View file

@ -1,5 +1,5 @@
#!/usr/bin/env python
if __name__ == '__main__':
from bootstrapvz.base import main
from bootstrapvz.base.main import main
main()

5
bootstrap-vz-remote Executable file
View file

@ -0,0 +1,5 @@
#!/usr/bin/env python
if __name__ == '__main__':
from bootstrapvz.remote.main import main
main()

5
bootstrap-vz-server Executable file
View file

@ -0,0 +1,5 @@
#!/usr/bin/env python
if __name__ == '__main__':
from bootstrapvz.remote.server import main
main()

View file

@ -1,6 +1,5 @@
How bootstrap-vz works
======================
----------------------
Tasks
~~~~~
@ -15,14 +14,14 @@ via attributes. Here is an example:
::
class MapPartitions(Task):
description = 'Mapping volume partitions'
phase = phases.volume_preparation
predecessors = [PartitionVolume]
successors = [filesystem.Format]
@classmethod
def run(cls, info):
info.volume.partition_map.map(info.volume)
description = 'Mapping volume partitions'
phase = phases.volume_preparation
predecessors = [PartitionVolume]
successors = [filesystem.Format]
@classmethod
def run(cls, info):
info.volume.partition_map.map(info.volume)
In this case the attributes define that the task at hand should run
after the ``PartitionVolume`` task — i.e. after volume has been
@ -36,7 +35,7 @@ successors.
The final task list that will be executed is computed by enumerating
all tasks in the package, placing them in the graph and
`sorting them topoligcally <http://en.wikipedia.org/wiki/Topological_sort>`_.
`sorting them topologically <http://en.wikipedia.org/wiki/Topological_sort>`_.
Subsequently the list returned is filtered to contain only the tasks the
provider and the plugins added to the taskset.
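To make the sorting step concrete, here is an illustrative stand-in (it uses the standard library's ``graphlib``, which is not what bootstrap-vz itself does); the edges mirror the predecessor/successor attributes from the example above.

.. code:: python

    from graphlib import TopologicalSorter

    # Each task maps to the set of tasks that must run before it.
    graph = {
        'PartitionVolume': set(),
        'MapPartitions': {'PartitionVolume'},    # predecessors = [PartitionVolume]
        'filesystem.Format': {'MapPartitions'},  # successors = [filesystem.Format]
    }
    print(list(TopologicalSorter(graph).static_order()))
    # ['PartitionVolume', 'MapPartitions', 'filesystem.Format']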

View file

@ -1,8 +1,9 @@
__all__ = ['Phase', 'Task', 'main']
from phase import Phase
from task import Task
from main import main
__all__ = ['Phase', 'Task', 'main']
def validate_manifest(data, validator, error):
"""Validates the manifest using the base manifest
@ -12,10 +13,22 @@ def validate_manifest(data, validator, error):
:param function error: The function that raises an error when the validation fails
"""
import os.path
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.json'))
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
validator(data, schema_path)
from bootstrapvz.common.releases import get_release
from bootstrapvz.common.releases import squeeze
release = get_release(data['system']['release'])
if release < squeeze:
error('Only Debian squeeze and later is supported', ['system', 'release'])
# Check the bootloader/partitioning configuration.
# Doing this via the schema is a pain and does not output a useful error message.
if data['system']['bootloader'] == 'grub' and data['volume']['partitions']['type'] == 'none':
if data['system']['bootloader'] == 'grub':
if data['volume']['partitions']['type'] == 'none':
error('Grub cannot boot from unpartitioned disks', ['system', 'bootloader'])
if release == squeeze:
error('Grub installation on squeeze is not supported', ['system', 'bootloader'])

View file

@ -31,12 +31,6 @@ class BootstrapInformation(object):
# The default apt mirror
self.apt_mirror = self.manifest.packages.get('mirror', 'http://http.debian.net/debian')
# Normalize the release codenames so that tasks may query for release codenames rather than
# 'stable', 'unstable' etc. This is useful when handling cases that are specific to a release.
release_codenames_path = os.path.join(os.path.dirname(__file__), 'release-codenames.json')
from bootstrapvz.common.tools import config_get
self.release_codename = config_get(release_codenames_path, [self.manifest.system['release']])
# Create the manifest_vars dictionary
self.manifest_vars = self.__create_manifest_vars(self.manifest, {'apt_mirror': self.apt_mirror})
@ -81,17 +75,6 @@ class BootstrapInformation(object):
:return: The manifest_vars dictionary
:rtype: dict
"""
class DictClass(dict):
"""Tiny extension of dict to allow setting and getting keys via attributes
"""
def __getattr__(self, name):
return self[name]
def __setattr__(self, name, value):
self[name] = value
def __delattr__(self, name):
del self[name]
def set_manifest_vars(obj, data):
"""Runs through the manifest and creates DictClasses for every key
@ -127,3 +110,47 @@ class BootstrapInformation(object):
# They are added last so that they may override previous variables
set_manifest_vars(manifest_vars, additional_vars)
return manifest_vars
def __getstate__(self):
from bootstrapvz.remote import supported_classes
def can_serialize(obj):
if hasattr(obj, '__class__') and hasattr(obj, '__module__'):
class_name = obj.__module__ + '.' + obj.__class__.__name__
return class_name in supported_classes or isinstance(obj, (BaseException, Exception))
return True
def filter_state(state):
if isinstance(state, dict):
return {key: filter_state(val) for key, val in state.items() if can_serialize(val)}
if isinstance(state, (set, tuple, list, frozenset)):
return type(state)(filter_state(val) for val in state if can_serialize(val))
return state
state = filter_state(self.__dict__)
state['__class__'] = self.__module__ + '.' + self.__class__.__name__
return state
def __setstate__(self, state):
for key in state:
self.__dict__[key] = state[key]
class DictClass(dict):
"""Tiny extension of dict to allow setting and getting keys via attributes
"""
def __getattr__(self, name):
return self[name]
def __setattr__(self, name, value):
self[name] = value
def __delattr__(self, name):
del self[name]
def __getstate__(self):
return self.__dict__
def __setstate__(self, state):
for key in state:
self[key] = state[key]
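A quick illustration of what ``DictClass`` provides for the manifest_vars dictionary; the keys below are made up.

.. code:: python

    d = DictClass(release='jessie')
    d.mirror = 'http://http.debian.net/debian'
    # Keys double as attributes, in both directions.
    assert d['release'] == 'jessie'
    assert d.mirror == d['mirror']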

View file

@ -9,27 +9,33 @@ def load_volume(data, bootloader):
:return: The volume that represents all information pertaining to the volume we bootstrap on.
:rtype: Volume
"""
# Create a mapping between valid partition maps in the manifest and their corresponding classes
# Map valid partition maps in the manifest and their corresponding classes
from partitionmaps.gpt import GPTPartitionMap
from partitionmaps.msdos import MSDOSPartitionMap
from partitionmaps.none import NoPartitions
partition_maps = {'none': NoPartitions,
'gpt': GPTPartitionMap,
'msdos': MSDOSPartitionMap,
}
# Instantiate the partition map
partition_map = partition_maps.get(data['partitions']['type'])(data['partitions'], bootloader)
partition_map = {'none': NoPartitions,
'gpt': GPTPartitionMap,
'msdos': MSDOSPartitionMap,
}.get(data['partitions']['type'])
# Create a mapping between valid volume backings in the manifest and their corresponding classes
# Map valid volume backings in the manifest and their corresponding classes
from bootstrapvz.common.fs.loopbackvolume import LoopbackVolume
from bootstrapvz.providers.ec2.ebsvolume import EBSVolume
from bootstrapvz.common.fs.virtualdiskimage import VirtualDiskImage
from bootstrapvz.common.fs.virtualmachinedisk import VirtualMachineDisk
volume_backings = {'raw': LoopbackVolume,
's3': LoopbackVolume,
'vdi': VirtualDiskImage,
'vmdk': VirtualMachineDisk,
'ebs': EBSVolume
}
volume_backing = {'raw': LoopbackVolume,
's3': LoopbackVolume,
'vdi': VirtualDiskImage,
'vmdk': VirtualMachineDisk,
'ebs': EBSVolume
}.get(data['backing'])
# Instantiate the partition map
from bootstrapvz.common.bytes import Bytes
# Only operate with a physical sector size of 512 bytes for now,
# not sure if we can change that for some of the virtual disks
sector_size = Bytes('512B')
partition_map = partition_map(data['partitions'], sector_size, bootloader)
# Create the volume with the partition map as an argument
return volume_backings.get(data['backing'])(partition_map)
return volume_backing(partition_map)

View file

@ -37,7 +37,7 @@ class AbstractPartitionMap(FSMProxy):
"""Returns the total size the partitions occupy
:return: The size of all partitions
:rtype: Bytes
:rtype: Sectors
"""
# We just need the endpoint of the last partition
return self.partitions[-1].get_end()
@ -74,6 +74,7 @@ class AbstractPartitionMap(FSMProxy):
'{device_path} (?P<blk_offset>\d+)$'
.format(device_path=volume.device_path))
log_check_call(['kpartx', '-as', volume.device_path])
import os.path
# Run through the kpartx output and map the paths to the partitions
for mapping in mappings:
@ -87,15 +88,15 @@ class AbstractPartitionMap(FSMProxy):
# Check if any partition was not mapped
for idx, partition in enumerate(self.partitions):
if partition.fsm.current not in ['mapped', 'formatted']:
raise PartitionError('kpartx did not map partition #' + str(idx + 1))
raise PartitionError('kpartx did not map partition #' + str(partition.get_index()))
except PartitionError as e:
except PartitionError:
# Revert any mapping and reraise the error
for partition in self.partitions:
if not partition.fsm.can('unmap'):
if partition.fsm.can('unmap'):
partition.unmap()
log_check_call(['kpartx', '-ds', volume.device_path])
raise e
raise
def unmap(self, volume):
"""Unmaps the partition

View file

@ -8,12 +8,14 @@ class GPTPartitionMap(AbstractPartitionMap):
"""Represents a GPT partition map
"""
def __init__(self, data, bootloader):
def __init__(self, data, sector_size, bootloader):
"""
:param dict data: volume.partitions part of the manifest
:param int sector_size: Sectorsize of the volume
:param str bootloader: Name of the bootloader we will use for bootstrapping
"""
from bootstrapvz.common.bytes import Bytes
from bootstrapvz.common.sectors import Sectors
# List of partitions
self.partitions = []
@ -21,42 +23,63 @@ class GPTPartitionMap(AbstractPartitionMap):
def last_partition():
return self.partitions[-1] if len(self.partitions) > 0 else None
# If we are using the grub bootloader we need to create an unformatted partition
# at the beginning of the map. Its size is 1007kb, which we will steal from the
# next partition.
if bootloader == 'grub':
# If we are using the grub bootloader we need to create an unformatted partition
# at the beginning of the map. Its size is 1007kb, which seems to be chosen so that
# primary gpt + grub = 1024KiB
# The 1 MiB will be subtracted later on, once we know what the subsequent partition is
from ..partitions.unformatted import UnformattedPartition
self.grub_boot = UnformattedPartition(Bytes('1007KiB'), last_partition())
# Mark the partition as a bios_grub partition
self.grub_boot.flags.append('bios_grub')
self.grub_boot = UnformattedPartition(Sectors('1MiB', sector_size), last_partition())
self.partitions.append(self.grub_boot)
# Offset all partitions by 1 sector.
# parted in jessie has changed and no longer allows
# partitions to be right next to each other.
partition_gap = Sectors(1, sector_size)
# The boot and swap partitions are optional
if 'boot' in data:
self.boot = GPTPartition(Bytes(data['boot']['size']),
self.boot = GPTPartition(Sectors(data['boot']['size'], sector_size),
data['boot']['filesystem'], data['boot'].get('format_command', None),
'boot', last_partition())
if self.boot.previous is not None:
# No need to pad if this is the first partition
self.boot.pad_start += partition_gap
self.boot.size -= partition_gap
self.partitions.append(self.boot)
if 'swap' in data:
self.swap = GPTSwapPartition(Bytes(data['swap']['size']), last_partition())
self.swap = GPTSwapPartition(Sectors(data['swap']['size'], sector_size), last_partition())
if self.swap.previous is not None:
self.swap.pad_start += partition_gap
self.swap.size -= partition_gap
self.partitions.append(self.swap)
self.root = GPTPartition(Bytes(data['root']['size']),
self.root = GPTPartition(Sectors(data['root']['size'], sector_size),
data['root']['filesystem'], data['root'].get('format_command', None),
'root', last_partition())
if self.root.previous is not None:
self.root.pad_start += partition_gap
self.root.size -= partition_gap
self.partitions.append(self.root)
# We need to move the first partition to make space for the gpt offset
gpt_offset = Bytes('17KiB')
self.partitions[0].offset += gpt_offset
if hasattr(self, 'grub_boot'):
# grub_boot should not increase the size of the volume,
# so we reduce the size of the succeeding partition.
# gpt_offset is included here, because of the offset we added above (grub_boot is partition[0])
self.partitions[1].size -= self.grub_boot.get_end()
# Mark the grub partition as a bios_grub partition
self.grub_boot.flags.append('bios_grub')
# Subtract the grub partition size from the subsequent partition
self.partitions[1].size -= self.grub_boot.size
else:
# Avoid increasing the volume size because of gpt_offset
self.partitions[0].size -= gpt_offset
# Not using grub, mark the boot partition or root as bootable
getattr(self, 'boot', self.root).flags.append('legacy_boot')
# The first and last 34 sectors are reserved for the primary/secondary GPT
primary_gpt_size = Sectors(34, sector_size)
self.partitions[0].pad_start += primary_gpt_size
self.partitions[0].size -= primary_gpt_size
secondary_gpt_size = Sectors(34, sector_size)
self.partitions[-1].pad_end += secondary_gpt_size
self.partitions[-1].size -= secondary_gpt_size
super(GPTPartitionMap, self).__init__(bootloader)
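As a back-of-the-envelope check of the layout above (plain integers rather than the ``Sectors`` class, assuming 512-byte sectors and a grub + root map; the 8GiB root size is made up):

.. code:: python

    # The 1MiB bios_grub partition, the 1-sector gap and the two 34-sector GPT
    # reservations are all carved out of the requested sizes, so the end of the
    # map still lands exactly where the manifest's root size says it should.
    MIB = (1024 * 1024) // 512        # 2048 sectors of 512 bytes
    GPT = 34                          # sectors reserved for each GPT copy

    root_requested = 8 * 1024 * MIB   # an 8GiB root partition in the manifest

    grub_size = MIB - GPT             # bios_grub partition, shrunk by the primary GPT
    gap = 1                           # parted in jessie wants partitions separated
    root_size = root_requested - MIB - gap - GPT

    end_of_map = GPT + grub_size + gap + root_size + GPT
    assert end_of_map == root_requested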

View file

@ -9,12 +9,14 @@ class MSDOSPartitionMap(AbstractPartitionMap):
Sometimes also called MBR (but that confuses the hell out of me, so ms-dos it is)
"""
def __init__(self, data, bootloader):
def __init__(self, data, sector_size, bootloader):
"""
:param dict data: volume.partitions part of the manifest
:param int sector_size: Sectorsize of the volume
:param str bootloader: Name of the bootloader we will use for bootstrapping
"""
from bootstrapvz.common.bytes import Bytes
from bootstrapvz.common.sectors import Sectors
# List of partitions
self.partitions = []
@ -24,16 +26,30 @@ class MSDOSPartitionMap(AbstractPartitionMap):
# The boot and swap partitions are optional
if 'boot' in data:
self.boot = MSDOSPartition(Bytes(data['boot']['size']),
self.boot = MSDOSPartition(Sectors(data['boot']['size'], sector_size),
data['boot']['filesystem'], data['boot'].get('format_command', None),
last_partition())
self.partitions.append(self.boot)
# Offset all partitions by 1 sector.
# parted in jessie has changed and no longer allows
# partitions to be right next to each other.
partition_gap = Sectors(1, sector_size)
if 'swap' in data:
self.swap = MSDOSSwapPartition(Bytes(data['swap']['size']), last_partition())
self.swap = MSDOSSwapPartition(Sectors(data['swap']['size'], sector_size), last_partition())
if self.swap.previous is not None:
# No need to pad if this is the first partition
self.swap.pad_start += partition_gap
self.swap.size -= partition_gap
self.partitions.append(self.swap)
self.root = MSDOSPartition(Bytes(data['root']['size']),
self.root = MSDOSPartition(Sectors(data['root']['size'], sector_size),
data['root']['filesystem'], data['root'].get('format_command', None),
last_partition())
if self.root.previous is not None:
self.root.pad_start += partition_gap
self.root.size -= partition_gap
self.partitions.append(self.root)
# Mark boot as the boot partition, or root, if boot does not exist
@ -44,12 +60,18 @@ class MSDOSPartitionMap(AbstractPartitionMap):
# The MBR offset is included in the grub offset, so if we don't use grub
# we should reduce the size of the first partition and move it by only 512 bytes.
if bootloader == 'grub':
offset = Bytes('2MiB')
mbr_offset = Sectors('2MiB', sector_size)
else:
offset = Bytes('512B')
mbr_offset = Sectors('512B', sector_size)
self.partitions[0].offset += offset
self.partitions[0].size -= offset
self.partitions[0].pad_start += mbr_offset
self.partitions[0].size -= mbr_offset
# Leave the last sector unformatted
# parted in jessie thinks that a partition 10 sectors in size
# goes from sector 0 to sector 9 (instead of 0 to 10)
self.partitions[-1].pad_end += 1
self.partitions[-1].size -= 1
super(MSDOSPartitionMap, self).__init__(bootloader)

View file

@ -7,14 +7,16 @@ class NoPartitions(object):
simply always deal with partition maps and then let the base abstract that away.
"""
def __init__(self, data, bootloader):
def __init__(self, data, sector_size, bootloader):
"""
:param dict data: volume.partitions part of the manifest
:param int sector_size: Sectorsize of the volume
:param str bootloader: Name of the bootloader we will use for bootstrapping
"""
from bootstrapvz.common.bytes import Bytes
from bootstrapvz.common.sectors import Sectors
# In the NoPartitions partitions map we only have a single 'partition'
self.root = SinglePartition(Bytes(data['root']['size']),
self.root = SinglePartition(Sectors(data['root']['size'], sector_size),
data['root']['filesystem'], data['root'].get('format_command', None))
self.partitions = [self.root]
@ -29,6 +31,15 @@ class NoPartitions(object):
"""Returns the total size the partitions occupy
:return: The size of all the partitions
:rtype: Bytes
:rtype: Sectors
"""
return self.root.get_end()
def __getstate__(self):
state = self.__dict__.copy()
state['__class__'] = self.__module__ + '.' + self.__class__.__name__
return state
def __setstate__(self, state):
for key in state:
self.__dict__[key] = state[key]

View file

@ -1,6 +1,6 @@
from abc import ABCMeta
from abc import abstractmethod
import os.path
from bootstrapvz.common.sectors import Sectors
from bootstrapvz.common.tools import log_check_call
from bootstrapvz.common.fsm_proxy import FSMProxy
@ -19,42 +19,6 @@ class AbstractPartition(FSMProxy):
{'name': 'unmount', 'src': 'mounted', 'dst': 'formatted'},
]
class Mount(object):
"""Represents a mount into the partition
"""
def __init__(self, source, destination, opts):
"""
:param str,AbstractPartition source: The path from where we mount or a partition
:param str destination: The path of the mountpoint
:param list opts: List of options to pass to the mount command
"""
self.source = source
self.destination = destination
self.opts = opts
def mount(self, prefix):
"""Performs the mount operation or forwards it to another partition
:param str prefix: Path prefix of the mountpoint
"""
mount_dir = os.path.join(prefix, self.destination)
# If the source is another partition, we tell that partition to mount itself
if isinstance(self.source, AbstractPartition):
self.source.mount(destination=mount_dir)
else:
log_check_call(['mount'] + self.opts + [self.source, mount_dir])
self.mount_dir = mount_dir
def unmount(self):
"""Performs the unmount operation or asks the partition to unmount itself
"""
# If its a partition, it can unmount itself
if isinstance(self.source, AbstractPartition):
self.source.unmount()
else:
log_check_call(['umount', self.mount_dir])
del self.mount_dir
def __init__(self, size, filesystem, format_command):
"""
:param Bytes size: Size of the partition
@ -64,6 +28,9 @@ class AbstractPartition(FSMProxy):
self.size = size
self.filesystem = filesystem
self.format_command = format_command
# Initialize the start & end padding to 0 sectors, may be changed later
self.pad_start = Sectors(0, size.sector_size)
self.pad_end = Sectors(0, size.sector_size)
# Path to the partition
self.device_path = None
# Dictionary with mount points as keys and Mount objects as values
@ -90,9 +57,9 @@ class AbstractPartition(FSMProxy):
"""Gets the end of the partition
:return: The end of the partition
:rtype: Bytes
:rtype: Sectors
"""
return self.get_start() + self.size
return self.get_start() + self.pad_start + self.size + self.pad_end
def _before_format(self, e):
"""Formats the partition
@ -143,7 +110,8 @@ class AbstractPartition(FSMProxy):
:param list opts: Any options that should be passed to the mount command
"""
# Create a new mount object, mount it if the partition is mounted and put it in the mounts dict
mount = self.Mount(source, destination, opts)
from mount import Mount
mount = Mount(source, destination, opts)
if self.fsm.current == 'mounted':
mount.mount(self.mount_dir)
self.mounts[destination] = mount

View file

@ -1,4 +1,6 @@
import os
from abstract import AbstractPartition
from bootstrapvz.common.sectors import Sectors
class BasePartition(AbstractPartition):
@ -25,14 +27,13 @@ class BasePartition(AbstractPartition):
:param list format_command: Optional format command, valid variables are fs, device_path and size
:param BasePartition previous: The partition that precedes this one
"""
# By saving the previous partition we have
# a linked list that partitions can go backwards in to find the first partition.
# By saving the previous partition we have a linked list
# that partitions can go backwards in to find the first partition.
self.previous = previous
from bootstrapvz.common.bytes import Bytes
# Initialize the offset to 0 bytes, may be changed later
self.offset = Bytes(0)
# List of flags that parted should put on the partition
self.flags = []
# Path to symlink in /dev/disk/by-uuid (manually maintained by this class)
self.disk_by_uuid_path = None
super(BasePartition, self).__init__(size, filesystem, format_command)
def create(self, volume):
@ -59,30 +60,56 @@ class BasePartition(AbstractPartition):
"""Gets the starting byte of this partition
:return: The starting byte of this partition
:rtype: Bytes
:rtype: Sectors
"""
if self.previous is None:
# If there is no previous partition, this partition begins at the offset
return self.offset
return Sectors(0, self.size.sector_size)
else:
# Get the end of the previous partition and add the offset of this partition
return self.previous.get_end() + self.offset
return self.previous.get_end()
def map(self, device_path):
"""Maps the partition to a device_path
:param str device_path: The device patht his partition should be mapped to
:param str device_path: The device path this partition should be mapped to
"""
self.fsm.map(device_path=device_path)
def link_uuid(self):
# /lib/udev/rules.d/60-kpartx.rules does not create symlinks in /dev/disk/by-{uuid,label}
# This patch would fix that: http://www.redhat.com/archives/dm-devel/2013-July/msg00080.html
# For now we just do the uuid part ourselves.
# This is mainly to fix a problem in update-grub where /etc/grub.d/10_linux
# checks if the $GRUB_DEVICE_UUID exists in /dev/disk/by-uuid and falls
# back to $GRUB_DEVICE if it doesn't.
# $GRUB_DEVICE is /dev/mapper/xvd{f,g...}# (on ec2), opposed to /dev/xvda# when booting.
# Creating the symlink ensures that grub consistently uses
# $GRUB_DEVICE_UUID when creating /boot/grub/grub.cfg
self.disk_by_uuid_path = os.path.join('/dev/disk/by-uuid', self.get_uuid())
if not os.path.exists(self.disk_by_uuid_path):
os.symlink(self.device_path, self.disk_by_uuid_path)
def unlink_uuid(self):
if os.path.isfile(self.disk_by_uuid_path):
os.remove(self.disk_by_uuid_path)
self.disk_by_uuid_path = None
def _before_create(self, e):
"""Creates the partition
"""
from bootstrapvz.common.tools import log_check_call
# The create command is failry simple, start and end are just Bytes objects coerced into strings
create_command = ('mkpart primary {start} {end}'
.format(start=str(self.get_start()),
end=str(self.get_end())))
# The create command is fairly simple:
# - fs_type is the partition filesystem, as defined by parted:
# fs-type can be one of "fat16", "fat32", "ext2", "HFS", "linux-swap",
# "NTFS", "reiserfs", or "ufs".
# - start and end are just Bytes objects coerced into strings
if self.filesystem == 'swap':
fs_type = 'linux-swap'
else:
fs_type = 'ext2'
create_command = ('mkpart primary {fs_type} {start} {end}'
.format(fs_type=fs_type,
start=str(self.get_start() + self.pad_start),
end=str(self.get_end() - self.pad_end)))
# Create the partition
log_check_call(['parted', '--script', '--align', 'none', e.volume.device_path,
'--', create_command])
@ -96,7 +123,16 @@ class BasePartition(AbstractPartition):
def _before_map(self, e):
# Set the device path
self.device_path = e.device_path
if e.src == 'unmapped_fmt':
# Only link the uuid if the partition is formatted
self.link_uuid()
def _after_format(self, e):
# We do this after formatting because there otherwise would be no UUID
self.link_uuid()
def _before_unmap(self, e):
# When unmapped, the device_path ifnromation becomes invalid, so we delete it
# When unmapped, the device_path information becomes invalid, so we delete it
self.device_path = None
if e.src == 'formatted':
self.unlink_uuid()

View file

@ -0,0 +1,49 @@
from abstract import AbstractPartition
import os.path
from bootstrapvz.common.tools import log_check_call
class Mount(object):
"""Represents a mount into the partition
"""
def __init__(self, source, destination, opts):
"""
:param str,AbstractPartition source: The path from where we mount or a partition
:param str destination: The path of the mountpoint
:param list opts: List of options to pass to the mount command
"""
self.source = source
self.destination = destination
self.opts = opts
def mount(self, prefix):
"""Performs the mount operation or forwards it to another partition
:param str prefix: Path prefix of the mountpoint
"""
mount_dir = os.path.join(prefix, self.destination)
# If the source is another partition, we tell that partition to mount itself
if isinstance(self.source, AbstractPartition):
self.source.mount(destination=mount_dir)
else:
log_check_call(['mount'] + self.opts + [self.source, mount_dir])
self.mount_dir = mount_dir
def unmount(self):
"""Performs the unmount operation or asks the partition to unmount itself
"""
# If its a partition, it can unmount itself
if isinstance(self.source, AbstractPartition):
self.source.unmount()
else:
log_check_call(['umount', self.mount_dir])
del self.mount_dir
def __getstate__(self):
state = self.__dict__.copy()
state['__class__'] = self.__module__ + '.' + self.__class__.__name__
return state
def __setstate__(self, state):
for key in state:
self.__dict__[key] = state[key]

View file

@ -9,8 +9,7 @@ class SinglePartition(AbstractPartition):
"""Gets the starting byte of this partition
:return: The starting byte of this partition
:rtype: Bytes
:rtype: Sectors
"""
from bootstrapvz.common.bytes import Bytes
# On an unpartitioned volume there is no offset and no previous partition
return Bytes(0)
from bootstrapvz.common.sectors import Sectors
return Sectors(0, self.size.sector_size)

View file

@ -65,11 +65,12 @@ class Volume(FSMProxy):
def _before_link_dm_node(self, e):
"""Links the volume using the device mapper
This allows us to create a 'window' into the volume that acts like a volum in itself.
This allows us to create a 'window' into the volume that acts like a volume in itself.
Mainly it is used to fool grub into thinking that it is working with a real volume,
rather than a loopback device or a network block device.
:param _e_obj e: Event object containing arguments to create()
Keyword arguments to link_dm_node() are:
:param int logical_start_sector: The sector the volume should start at in the new volume
@ -94,9 +95,9 @@ class Volume(FSMProxy):
start_sector = getattr(e, 'start_sector', 0)
# The number of sectors that should be mapped
sectors = getattr(e, 'sectors', int(self.size / 512) - start_sector)
sectors = getattr(e, 'sectors', int(self.size) - start_sector)
# This is the table we send to dmsetup, so that it may create a decie mapping for us.
# This is the table we send to dmsetup, so that it may create a device mapping for us.
table = ('{log_start_sec} {sectors} linear {major}:{minor} {start_sec}'
.format(log_start_sec=logical_start_sector,
sectors=sectors,

View file

@ -4,6 +4,50 @@ both to a file and to the console.
import logging
def get_console_handler(debug, colorize):
"""Returns a log handler for the console
The handler color codes the different log levels
:params bool debug: Whether to set the log level to DEBUG (otherwise INFO)
:params bool colorize: Whether to colorize console output
:return: The console logging handler
"""
# Create a console log handler
import sys
console_handler = logging.StreamHandler(sys.stderr)
if colorize:
# We want to colorize the output to the console, so we add a formatter
console_handler.setFormatter(ColorFormatter())
# Set the log level depending on the debug argument
if debug:
console_handler.setLevel(logging.DEBUG)
else:
console_handler.setLevel(logging.INFO)
return console_handler
def get_file_handler(path, debug):
"""Returns a log handler for the given path
If the parent directory of the logpath does not exist it will be created
The handler outputs relative timestamps (to when it was created)
:params str path: The full path to the logfile
:params bool debug: Whether to set the log level to DEBUG (otherwise INFO)
:return: The file logging handler
"""
import os.path
if not os.path.exists(os.path.dirname(path)):
os.makedirs(os.path.dirname(path))
# Create the log handler
file_handler = logging.FileHandler(path)
# Absolute timestamps are rather useless when bootstrapping, it's much more interesting
# to see how long things take, so we log in a relative format instead
file_handler.setFormatter(FileFormatter('[%(relativeCreated)s] %(levelname)s: %(message)s'))
# The file log handler always logs everything
file_handler.setLevel(logging.DEBUG)
return file_handler
def get_log_filename(manifest_path):
"""Returns the path to a logfile given a manifest
The logfile name is constructed from the current timestamp and the basename of the manifest
@ -22,42 +66,23 @@ def get_log_filename(manifest_path):
return filename
def setup_logger(logfile=None, debug=False):
"""Sets up the python logger to log to both a file and the console
:param str logfile: Path to a logfile
:param bool debug: Whether to log debug output to the console
class SourceFormatter(logging.Formatter):
"""Adds a [source] tag to the log message if it exists
The python docs suggest using a LoggingAdapter, but that would mean we'd
have to use it everywhere we log something (and only when called remotely),
which is not feasible.
"""
root = logging.getLogger()
# Make sure all logging statements are processed by our handlers, they decide the log level
root.setLevel(logging.NOTSET)
# Only enable logging to file if a destination was supplied
if logfile is not None:
# Create a file log handler
file_handler = logging.FileHandler(logfile)
# Absolute timestamps are rather useless when bootstrapping, it's much more interesting
# to see how long things take, so we log in a relative format instead
file_handler.setFormatter(FileFormatter('[%(relativeCreated)s] %(levelname)s: %(message)s'))
# The file log handler always logs everything
file_handler.setLevel(logging.DEBUG)
root.addHandler(file_handler)
# Create a console log handler
import sys
console_handler = logging.StreamHandler(sys.stderr)
# We want to colorize the output to the console, so we add a formatter
console_handler.setFormatter(ConsoleFormatter())
# Set the log level depending on the debug argument
if debug:
console_handler.setLevel(logging.DEBUG)
else:
console_handler.setLevel(logging.INFO)
root.addHandler(console_handler)
def format(self, record):
extra = getattr(record, 'extra', {})
if 'source' in extra:
record.msg = '[{source}] {message}'.format(source=record.extra['source'],
message=record.msg)
return super(SourceFormatter, self).format(record)
class ConsoleFormatter(logging.Formatter):
"""Formats log statements for the console
class ColorFormatter(SourceFormatter):
"""Colorizes log messages depending on the loglevel
"""
level_colors = {logging.ERROR: 'red',
logging.WARNING: 'magenta',
@ -65,14 +90,13 @@ class ConsoleFormatter(logging.Formatter):
}
def format(self, record):
if(record.levelno in self.level_colors):
# Colorize the message if we have a color for it (DEBUG has no color)
from termcolor import colored
record.msg = colored(record.msg, self.level_colors[record.levelno])
return super(ConsoleFormatter, self).format(record)
# Colorize the message if we have a color for it (DEBUG has no color)
from termcolor import colored
record.msg = colored(record.msg, self.level_colors.get(record.levelno, None))
return super(ColorFormatter, self).format(record)
class FileFormatter(logging.Formatter):
class FileFormatter(SourceFormatter):
"""Formats log statements for output to file
Currently this is just a stub
"""

View file

@ -1,9 +1,6 @@
"""Main module containing all the setup necessary for running the bootstrapping process
"""
import logging
log = logging.getLogger(__name__)
def main():
"""Main function for invoking the bootstrap process
@ -12,31 +9,30 @@ def main():
"""
# Get the commandline arguments
opts = get_opts()
# Require root privileges, except when doing a dry-run where they aren't needed
import os
if os.geteuid() != 0 and not opts['--dry-run']:
raise Exception('This program requires root privileges.')
import log
# Log to file unless --log is a single dash
if opts['--log'] != '-':
# Setup logging
if not os.path.exists(opts['--log']):
os.makedirs(opts['--log'])
log_filename = log.get_log_filename(opts['MANIFEST'])
logfile = os.path.join(opts['--log'], log_filename)
else:
logfile = None
log.setup_logger(logfile=logfile, debug=opts['--debug'])
# Set up logging
setup_loggers(opts)
# Load the manifest
from manifest import Manifest
manifest = Manifest(path=opts['MANIFEST'])
# Everything has been set up, begin the bootstrapping process
run(opts)
run(manifest,
debug=opts['--debug'],
pause_on_error=opts['--pause-on-error'],
dry_run=opts['--dry-run'])
def get_opts():
"""Creates an argument parser and returns the arguments it has parsed
"""
from docopt import docopt
import docopt
usage = """bootstrap-vz
Usage: bootstrap-vz [options] MANIFEST
@ -46,22 +42,55 @@ Options:
If <path> is `-' file logging will be disabled.
--pause-on-error Pause on error, before rollback
--dry-run Don't actually run the tasks
--color=auto|always|never
Colorize the console output [default: auto]
--debug Print debugging information
-h, --help show this help
"""
opts = docopt(usage)
opts = docopt.docopt(usage)
if opts['--color'] not in ('auto', 'always', 'never'):
raise docopt.DocoptExit('Value of --color must be one of auto, always or never.')
return opts
def run(opts):
"""Runs the bootstrapping process
def setup_loggers(opts):
"""Sets up the file and console loggers
:params dict opts: Dictionary of options from the commandline
"""
# Load the manifest
from manifest import Manifest
manifest = Manifest(opts['MANIFEST'])
import logging
root = logging.getLogger()
root.setLevel(logging.NOTSET)
import log
# Log to file unless --log is a single dash
if opts['--log'] != '-':
import os.path
log_filename = log.get_log_filename(opts['MANIFEST'])
logpath = os.path.join(opts['--log'], log_filename)
file_handler = log.get_file_handler(path=logpath, debug=True)
root.addHandler(file_handler)
if opts['--color'] == 'never':
colorize = False
elif opts['--color'] == 'always':
colorize = True
else:
# If --color=auto (default), decide whether to colorize by whether stderr is a tty.
import os
colorize = os.isatty(2)
console_handler = log.get_console_handler(debug=opts['--debug'], colorize=colorize)
root.addHandler(console_handler)
def run(manifest, debug=False, pause_on_error=False, dry_run=False):
"""Runs the bootstrapping process
:params Manifest manifest: The manifest to run the bootstrapping process for
:params bool debug: Whether to turn debugging mode on
:params bool pause_on_error: Whether to pause on error, before rollback
:params bool dry_run: Don't actually run the tasks
"""
# Get the tasklist
from tasklist import load_tasks
from tasklist import TaskList
@ -71,17 +100,19 @@ def run(opts):
# Create the bootstrap information object that'll be used throughout the bootstrapping process
from bootstrapinfo import BootstrapInformation
bootstrap_info = BootstrapInformation(manifest=manifest, debug=opts['--debug'])
bootstrap_info = BootstrapInformation(manifest=manifest, debug=debug)
import logging
log = logging.getLogger(__name__)
try:
# Run all the tasks the tasklist has gathered
tasklist.run(info=bootstrap_info, dry_run=opts['--dry-run'])
tasklist.run(info=bootstrap_info, dry_run=dry_run)
# We're done! :-)
log.info('Successfully completed bootstrapping')
except (Exception, KeyboardInterrupt) as e:
# When an error occurs, log it and begin rollback
log.exception(e)
if opts['--pause-on-error']:
if pause_on_error:
# The --pause-on-error is useful when the user wants to inspect the volume before rollback
raw_input('Press Enter to commence rollback')
log.error('Rolling back')
@ -89,8 +120,8 @@ def run(opts):
# Create a useful little function for the provider and plugins to use,
# when figuring out what tasks should be added to the rollback list.
def counter_task(taskset, task, counter):
"""counter_task() adds the second argument to the rollback tasklist
if the first argument is present in the list of completed tasks
"""counter_task() adds the third argument to the rollback tasklist
if the second argument is present in the list of completed tasks
:param set taskset: The taskset to add the rollback task to
:param Task task: The task to look for in the completed tasks list
@ -105,6 +136,7 @@ def run(opts):
rollback_tasklist = TaskList(rollback_tasks)
# Run the rollback tasklist
rollback_tasklist.run(info=bootstrap_info, dry_run=opts['--dry-run'])
rollback_tasklist.run(info=bootstrap_info, dry_run=dry_run)
log.info('Successfully completed rollback')
raise e
raise
return bootstrap_info
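
With the new signature, the bootstrapping process can also be driven programmatically instead of through the CLI. A rough sketch, assuming the module paths bootstrapvz.base.main / bootstrapvz.base.manifest and an illustrative manifest path (a real run still requires root privileges):

from bootstrapvz.base.manifest import Manifest
from bootstrapvz.base.main import run

manifest = Manifest(path='manifests/examples/virtualbox.yml')
# dry_run=True walks the tasklist without actually running the tasks
bootstrap_info = run(manifest, debug=True, dry_run=True)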

View file

@ -1,205 +0,0 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Generic manifest",
"type": "object",
"properties": {
"provider": {
"type": "string"
},
"bootstrapper": {
"type": "object",
"properties": {
"workspace": { "$ref": "#/definitions/path" },
"mirror": { "type": "string", "format": "uri" },
"tarball": { "type": "boolean" },
"include_packages": {
"type": "array",
"items": {
"type": "string",
"pattern": "^[^/]+$"
},
"minItems": 1
},
"exclude_packages": {
"type": "array",
"items": {
"type": "string",
"pattern": "^[^/]+$"
},
"minItems": 1
}
},
"required": ["workspace"]
},
"image": {
"type": "object",
"properties": {
"name": { "type": "string" }
},
"required": ["name"]
},
"system": {
"type": "object",
"properties": {
"release": { "enum": ["squeeze", "wheezy", "jessie", "testing", "unstable"] },
"architecture": { "enum": ["i386", "amd64"] },
"bootloader": { "enum": ["pvgrub", "grub", "extlinux"] },
"timezone": { "type": "string" },
"locale": { "type": "string" },
"charmap": { "type": "string" },
"hostname": {
"type": "string",
"pattern": "^\\S+$"
}
},
"required": ["release", "architecture", "bootloader", "timezone", "locale", "charmap"]
},
"packages": {
"type": "object",
"properties": {
"mirror": { "type": "string", "format": "uri" },
"sources": {
"type": "object",
"patternProperties": {
"^[^\/\\0]+$": {
"type": "array",
"items": {
"type": "string",
"pattern": "^(deb|deb-src)\\s+(\\[\\s*(.+\\S)?\\s*\\]\\s+)?\\S+\\s+\\S+(\\s+(.+\\S))?\\s*$"
},
"minItems": 1
}
},
"additionalProperties": false,
"minItems": 1
},
"components": {
"type": "array",
"items": {"type": "string"},
"minItems": 1
},
"preferences": {
"type": "object",
"patternProperties": {
"^[^\/\\0]+$": {
"type": "array",
"items": {
"type": "object",
"properties": {
"pin": {
"type": "string"
},
"package": {
"type": "string"
},
"pin-priority": {
"type": "integer"
}
},
"required": ["pin", "package", "pin-priority"],
"additionalProperties": false
},
"minItems": 1
}
},
"additionalProperties": false,
"minItems": 1
},
"trusted-keys": {
"type": "array",
"items": { "$ref": "#/definitions/absolute_path" },
"minItems": 1
},
"install": {
"type": "array",
"items": {
"anyOf": [
{ "pattern": "^[^/]+(/[^/]+)?$" },
{ "$ref": "#/definitions/absolute_path" }
]
},
"minItems": 1
},
"install_standard": {
"type": "boolean"
}
},
"additionalProperties": false
},
"volume": {
"type": "object",
"properties": {
"backing": { "type": "string" },
"partitions": {
"type": "object",
"oneOf": [
{ "$ref": "#/definitions/no_partitions" },
{ "$ref": "#/definitions/partition_table" }
]
}
},
"required": ["partitions"]
},
"plugins": {
"type": "object",
"patternProperties": {
"^\\w+$": {
"type": "object"
}
},
"additionalProperties": false
}
},
"required": ["provider", "bootstrapper", "system", "volume"],
"definitions": {
"path": {
"type": "string",
"pattern": "^[^\\0]+$"
},
"absolute_path": {
"type": "string",
"pattern": "^/[^\\0]+$"
},
"bytes": {
"type": "string",
"pattern": "^\\d+([KMGT]i?B|B)$"
},
"no_partitions": {
"type": "object",
"properties": {
"type": { "enum": ["none"] },
"root": { "$ref": "#/definitions/partition" }
},
"required": ["root"],
"additionalProperties": false
},
"partition_table": {
"type": "object",
"properties": {
"type": { "enum": ["msdos", "gpt"] },
"boot": { "$ref": "#/definitions/partition" },
"root": { "$ref": "#/definitions/partition" },
"swap": {
"type": "object",
"properties": { "size": { "$ref": "#/definitions/bytes" } },
"required": ["size"]
}
},
"required": ["root"],
"additionalProperties": false
},
"partition": {
"type": "object",
"properties": {
"size": { "$ref": "#/definitions/bytes" },
"filesystem": { "enum": ["ext2", "ext3", "ext4", "xfs"] },
"format_command": {
"type": "array",
"items": {"type": "string"},
"minItems": 1
}
},
"required": ["size", "filesystem"]
}
}
}

View file

@ -0,0 +1,176 @@
---
$schema: http://json-schema.org/draft-04/schema#
title: Generic manifest
type: object
required: [provider, bootstrapper, system, volume]
properties:
provider:
type: object
properties:
name: {type: string}
required: [name]
additionalProperties: true
bootstrapper:
type: object
properties:
exclude_packages:
type: array
items:
type: string
pattern: '^[^/]+$'
minItems: 1
include_packages:
type: array
items:
type: string
pattern: '^[^/]+$'
minItems: 1
mirror:
type: string
format: uri
tarball: {type: boolean}
workspace:
$ref: '#/definitions/path'
required: [workspace]
additionalProperties: false
image:
type: object
properties:
name: {type: string}
required: [name]
system:
type: object
properties:
architecture:
enum: [i386, amd64]
userspace_architecture:
enum: [i386]
bootloader:
enum:
- pvgrub
- grub
- extlinux
charmap: {type: string}
hostname:
type: string
pattern: ^\S+$
locale: {type: string}
release: {type: string}
timezone: {type: string}
required:
- release
- architecture
- bootloader
- timezone
- locale
- charmap
additionalProperties: false
packages:
type: object
properties:
components:
type: array
items: {type: string}
minItems: 1
install:
type: array
items:
anyOf:
- pattern: ^[^/]+(/[^/]+)?$
- $ref: '#/definitions/absolute_path'
minItems: 1
install_standard: {type: boolean}
mirror:
type: string
format: uri
preferences:
type: object
patternProperties:
^[^/\0]+$:
type: array
items:
type: object
properties:
package: {type: string}
pin: {type: string}
pin-priority: {type: integer}
required: [pin, package, pin-priority]
additionalProperties: false
minItems: 1
minItems: 1
additionalProperties: false
sources:
type: object
patternProperties:
^[^/\0]+$:
items:
type: string
pattern: ^(deb|deb-src)\s+(\[\s*(.+\S)?\s*\]\s+)?\S+\s+\S+(\s+(.+\S))?\s*$
minItems: 1
type: array
minItems: 1
additionalProperties: false
trusted-keys:
type: array
items:
$ref: '#/definitions/absolute_path'
minItems: 1
include-source-type: {type: boolean}
additionalProperties: false
plugins:
type: object
patternProperties:
^\w+$: {type: object}
volume:
type: object
properties:
backing: {type: string}
partitions:
type: object
oneOf:
- $ref: '#/definitions/no_partitions'
- $ref: '#/definitions/partition_table'
required: [partitions]
additionalProperties: false
definitions:
absolute_path:
type: string
pattern: ^/[^\0]+$
bytes:
pattern: ^\d+([KMGT]i?B|B)$
type: string
no_partitions:
type: object
properties:
root: {$ref: '#/definitions/partition'}
type: {enum: [none]}
required: [root]
additionalProperties: false
partition:
type: object
properties:
filesystem:
enum: [ext2, ext3, ext4, xfs]
format_command:
items: {type: string}
minItems: 1
type: array
size: {$ref: '#/definitions/bytes'}
required: [size, filesystem]
additionalProperties: false
partition_table:
type: object
properties:
boot: {$ref: '#/definitions/partition'}
root: {$ref: '#/definitions/partition'}
swap:
type: object
properties:
size: {$ref: '#/definitions/bytes'}
required: [size]
type: {enum: [msdos, gpt]}
required: [root]
additionalProperties: false
path:
type: string
pattern: ^[^\0]+$
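
For reference, a minimal manifest satisfying the required fields of the base schema above could look like the following, sketched as a Python dict. The concrete values (provider name, workspace path, release, backing, sizes) are illustrative assumptions, and provider-specific schemas may impose further requirements:

minimal_manifest = {
    'provider': {'name': 'virtualbox'},
    'bootstrapper': {'workspace': '/target'},
    'system': {'release': 'wheezy',
               'architecture': 'amd64',
               'bootloader': 'extlinux',
               'timezone': 'UTC',
               'locale': 'en_US',
               'charmap': 'UTF-8'},
    'volume': {'backing': 'vdi',
               'partitions': {'type': 'none',
                              'root': {'size': '4GiB',
                                       'filesystem': 'ext4'}}},
}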

View file

@ -2,8 +2,8 @@
to determine which tasks should be added to the tasklist, what arguments various
invocations should have etc..
"""
from bootstrapvz.common.tools import load_json
from bootstrapvz.common.tools import load_yaml
from bootstrapvz.common.exceptions import ManifestError
from bootstrapvz.common.tools import load_data
import logging
log = logging.getLogger(__name__)
@ -15,31 +15,47 @@ class Manifest(object):
Currently, immutability is not enforced and it would require a fair amount of code
to enforce it; instead we just rely on tasks behaving properly.

"""
def __init__(self, path):
"""Initializer: Given a path we load, validate and parse the manifest.
:param str path: The path to the manifest
def __init__(self, path=None, data=None):
"""Initializer: Given a path we load, validate and parse the manifest.
To create the manifest from dynamic data instead of the contents of a file,
provide a properly constructed dict as the data argument.
:param str path: The path to the manifest (ignored when `data' is provided)
:param dict data: The manifest data; if it is not None, it will be used instead of the contents of `path'
"""
if path is None and data is None:
raise ManifestError('`path\' or `data\' must be provided')
self.path = path
self.load()
self.load(data)
self.initialize()
self.validate()
self.parse()
def load(self):
"""Loads the manifest.
This function not only reads the manifest but also loads the specified provider and plugins.
Once they are loaded, the initialize() function is called on each of them (if it exists).
def load(self, data=None):
"""Loads the manifest and performs a basic validation.
This function reads the manifest and performs some basic validation of
the manifest itself to ensure that the properties required for initialization are accessible
(otherwise the user would be presented with some cryptic error messages).
"""
if data is None:
self.data = load_data(self.path)
else:
self.data = data
from . import validate_manifest
# Validate the manifest with the base validation function in __init__
validate_manifest(self.data, self.schema_validator, self.validation_error)
def initialize(self):
"""Initializes the provider and the plugins.
This function loads the specified provider and plugins.
Once the provider and plugins are loaded,
the initialize() function is called on each of them (if it exists).
The provider must have an initialize function.
"""
# Load the manifest JSON using the loader in common.tools
# It strips comments (which are invalid in strict json) before loading the data.
if self.path.endswith('.json'):
self.data = load_json(self.path)
elif self.path.endswith('.yml') or self.path.endswith('.yaml'):
self.data = load_yaml(self.path)
# Get the provider name from the manifest and load the corresponding module
provider_modname = 'bootstrapvz.providers.' + self.data['provider']
provider_modname = 'bootstrapvz.providers.' + self.data['provider']['name']
log.debug('Loading provider ' + provider_modname)
# Create a modules dict that contains the loaded provider and plugins
import importlib
@ -63,12 +79,9 @@ class Manifest(object):
init()
def validate(self):
"""Validates the manifest using the base, provider and plugin validation functions.
"""Validates the manifest using the provider and plugin validation functions.
Plugins are not required to have a validate_manifest function
"""
from . import validate_manifest
# Validate the manifest with the base validation function in __init__
validate_manifest(self.data, self.schema_validator, self.validation_error)
# Run the provider validation
self.modules['provider'].validate_manifest(self.data, self.schema_validator, self.validation_error)
@ -90,6 +103,8 @@ class Manifest(object):
self.image = self.data['image']
self.volume = self.data['volume']
self.system = self.data['system']
from bootstrapvz.common.releases import get_release
self.release = get_release(self.system['release'])
# The packages and plugins section is not required
self.packages = self.data['packages'] if 'packages' in self.data else {}
self.plugins = self.data['plugins'] if 'plugins' in self.data else {}
@ -102,19 +117,31 @@ class Manifest(object):
:param str schema_path: Path to the json-schema to use for validation
"""
import jsonschema
schema = load_json(schema_path)
schema = load_data(schema_path)
try:
jsonschema.validate(data, schema)
except jsonschema.ValidationError as e:
self.validation_error(e.message, e.path)
def validation_error(self, message, json_path=None):
def validation_error(self, message, data_path=None):
"""This function is passed to all validation functions so that they may
raise a validation error because a custom validation of the manifest failed.
:param str message: Message to user about the error
:param list json_path: A path to the location in the manifest where the error occurred
:param list data_path: A path to the location in the manifest where the error occurred
:raises ManifestError: With absolute certainty
"""
from bootstrapvz.common.exceptions import ManifestError
raise ManifestError(message, self.path, json_path)
raise ManifestError(message, self.path, data_path)
def __getstate__(self):
return {'__class__': self.__module__ + '.' + self.__class__.__name__,
'path': self.path,
'data': self.data}
def __setstate__(self, state):
self.path = state['path']
self.load(state['data'])
self.initialize()
self.validate()
self.parse()
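
The __getstate__/__setstate__ pair above makes manifests serializable, e.g. for handing them to a remote bootstrapping process. A rough round-trip sketch, with an illustrative manifest path:

import pickle
from bootstrapvz.base.manifest import Manifest

manifest = Manifest(path='manifests/examples/virtualbox.yml')
clone = pickle.loads(pickle.dumps(manifest))
# __setstate__ re-runs load/initialize/validate/parse on the stored data
assert clone.data == manifest.data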

View file

@ -87,12 +87,10 @@ class PackageList(object):
# The package has already been added, skip the checks below
return
# Check if the target exists in the sources list, raise a PackageError if not
check_target = target
if check_target is None:
check_target = self.default_target
if not self.source_lists.target_exists(check_target):
msg = ('The target release {target} was not found in the sources list').format(target=check_target)
# Check if the target exists in the sources list (unless it is the default target)
# and raise a PackageError if it does not
if target not in (None, self.default_target) and not self.source_lists.target_exists(target):
msg = ('The target release {target} was not found in the sources list').format(target=target)
raise PackageError(msg)
# Note that we maintain the target value even if it is none.

View file

@ -1,22 +0,0 @@
{ // This is a mapping of Debian release names to their respective codenames
"unstable": "sid",
"testing": "jessie",
"stable": "wheezy",
"oldstable": "squeeze",
"jessie": "jessie",
"wheezy": "wheezy",
"squeeze": "squeeze",
// The following release names are not supported, but included of completeness sake
"lenny": "lenny",
"etch": "etch",
"sarge": "sarge",
"woody": "woody",
"potato": "potato",
"slink": "slink",
"hamm": "hamm",
"bo": "bo",
"rex": "rex",
"buzz": "buzz"
}

View file

@ -117,7 +117,8 @@ def get_all_tasks():
# Get a generator that returns all classes in the package
import os.path
pkg_path = os.path.normpath(os.path.join(os.path.dirname(__file__), '..'))
classes = get_all_classes(pkg_path, 'bootstrapvz.')
exclude_pkgs = ['bootstrapvz.base', 'bootstrapvz.remote']
classes = get_all_classes(pkg_path, 'bootstrapvz.', exclude_pkgs)
# lambda function to check whether a class is a task (excluding the superclass Task)
def is_task(obj):
@ -126,11 +127,12 @@ def get_all_tasks():
return filter(is_task, classes) # Only return classes that are tasks
def get_all_classes(path=None, prefix=''):
def get_all_classes(path=None, prefix='', excludes=[]):
""" Given a path to a package, this function retrieves all the classes in it
:param str path: Path to the package
:param str prefix: Name of the package followed by a dot
:param list excludes: List of str matching module names that should be ignored
:return: A generator that yields classes
:rtype: generator
:raises Exception: If a module cannot be inspected.
@ -139,10 +141,13 @@ def get_all_classes(path=None, prefix=''):
import importlib
import inspect
def walk_error(module):
raise Exception('Unable to inspect module ' + module)
def walk_error(module_name):
if not any(map(lambda excl: module_name.startswith(excl), excludes)):
raise Exception('Unable to inspect module ' + module_name)
walker = pkgutil.walk_packages([path], prefix, walk_error)
for _, module_name, _ in walker:
if any(map(lambda excl: module_name.startswith(excl), excludes)):
continue
module = importlib.import_module(module_name)
classes = inspect.getmembers(module, inspect.isclass)
for class_name, obj in classes:
@ -162,21 +167,31 @@ def check_ordering(task):
:raises TaskListError: If there is a conflict between task precedence and phase precedence
"""
for successor in task.successors:
# Run through all successors and check whether the phase of the task
# comes before the phase of a successor
# Run through all successors and throw an error if the phase of the task
# lies after the phase of a successor; log a warning if it already lies before.
if task.phase > successor.phase:
msg = ("The task {task} is specified as running before {other}, "
"but its phase '{phase}' lies after the phase '{other_phase}'"
.format(task=task, other=successor, phase=task.phase, other_phase=successor.phase))
raise TaskListError(msg)
if task.phase < successor.phase:
log.warn("The task {task} is specified as running before {other} "
"although its phase '{phase}' already lies before the phase '{other_phase}' "
"(or the task has been placed in the wrong phase)"
.format(task=task, other=successor, phase=task.phase, other_phase=successor.phase))
for predecessor in task.predecessors:
# Run through all predecessors and check whether the phase of the task
# comes after the phase of a predecessor
# Run through all predecessors and throw an error if the phase of the task
# lies before the phase of a predecessor; log a warning if it already lies after.
if task.phase < predecessor.phase:
msg = ("The task {task} is specified as running after {other}, "
"but its phase '{phase}' lies before the phase '{other_phase}'"
.format(task=task, other=predecessor, phase=task.phase, other_phase=predecessor.phase))
raise TaskListError(msg)
if task.phase > predecessor.phase:
log.warn("The task {task} is specified as running after {other} "
"although its phase '{phase}' already lies after the phase '{other_phase}' "
"(or the task has been placed in the wrong phase)"
.format(task=task, other=predecessor, phase=task.phase, other_phase=predecessor.phase))
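
To illustrate the conflict this check guards against, here is a hypothetical task placed in an early phase that declares a predecessor from a later phase (the task and its placement are made up for this sketch):

from bootstrapvz.base import Task
from bootstrapvz.common import phases
from bootstrapvz.common.tasks import packages

class TweakSources(Task):
    description = 'Tweaking the apt sources'
    phase = phases.preparation                 # early phase...
    predecessors = [packages.InstallPackages]  # ...that claims to run after a later-phase task

    @classmethod
    def run(cls, info):
        pass

# check_ordering(TweakSources) raises a TaskListError, since the preparation
# phase lies before the package installation phase of the declared predecessor.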
def strongly_connected_components(graph):

View file

@ -0,0 +1 @@
Wait 5 seconds or press ENTER to

View file

@ -0,0 +1,17 @@
default l0
prompt 1
timeout 50
label l0
menu label Debian GNU/Linux, kernel {kernel_version}
linux {boot_prefix}/vmlinuz-{kernel_version}
append initrd={boot_prefix}/initrd.img-{kernel_version} root=UUID={root_uuid} ro quiet console=ttyS0
label l0r
menu label Debian GNU/Linux, kernel {kernel_version} (recovery mode)
linux {boot_prefix}/vmlinuz-{kernel_version}
append initrd={boot_prefix}/initrd.img-{kernel_version} root=UUID={root_uuid} ro console=ttyS0 single
text help
This option boots the system into recovery mode (single-user)
endtext

View file

@ -0,0 +1,5 @@
[Login]
# Disable all TTY gettys
NAutoVTs=0
ReserveVT=0

View file

@ -1,3 +1,14 @@
from exceptions import UnitError
def onlybytes(msg):
def decorator(func):
def check_other(self, other):
if not isinstance(other, Bytes):
raise UnitError(msg)
return func(self, other)
return check_other
return decorator
class Bytes(object):
@ -61,25 +72,45 @@ class Bytes(object):
def __long__(self):
return self.qty
@onlybytes('Can only compare Bytes to Bytes')
def __lt__(self, other):
return self.qty < other.qty
@onlybytes('Can only compare Bytes to Bytes')
def __le__(self, other):
return self.qty <= other.qty
@onlybytes('Can only compare Bytes to Bytes')
def __eq__(self, other):
return self.qty == other.qty
@onlybytes('Can only compare Bytes to Bytes')
def __ne__(self, other):
return self.qty != other.qty
@onlybytes('Can only compare Bytes to Bytes')
def __ge__(self, other):
return self.qty >= other.qty
@onlybytes('Can only compare Bytes to Bytes')
def __gt__(self, other):
return self.qty > other.qty
@onlybytes('Can only add Bytes to Bytes')
def __add__(self, other):
if not isinstance(other, Bytes):
raise UnitError('Can only add Bytes to Bytes')
return Bytes(self.qty + other.qty)
@onlybytes('Can only add Bytes to Bytes')
def __iadd__(self, other):
if not isinstance(other, Bytes):
raise UnitError('Can only add Bytes to Bytes')
self.qty += other.qty
return self
@onlybytes('Can only subtract Bytes from Bytes')
def __sub__(self, other):
if not isinstance(other, Bytes):
raise UnitError('Can only subtract Bytes from Bytes')
return Bytes(self.qty - other.qty)
@onlybytes('Can only subtract Bytes from Bytes')
def __isub__(self, other):
if not isinstance(other, Bytes):
raise UnitError('Can only subtract Bytes from Bytes')
self.qty -= other.qty
return self
@ -110,22 +141,19 @@ class Bytes(object):
self.qty /= other
return self
@onlybytes('Can only take modulus of Bytes with Bytes')
def __mod__(self, other):
if isinstance(other, Bytes):
return self.qty % other.qty
if not isinstance(other, (int, long)):
raise UnitError('Can only take modulus of Bytes with integers or Bytes')
return Bytes(self.qty % other)
return Bytes(self.qty % other.qty)
@onlybytes('Can only take modulus of Bytes with Bytes')
def __imod__(self, other):
if isinstance(other, Bytes):
self.qty %= other.qty
else:
if not isinstance(other, (int, long)):
raise UnitError('Can only divide Bytes with integers or Bytes')
self.qty %= other
self.qty %= other.qty
return self
def __getstate__(self):
return {'__class__': self.__module__ + '.' + self.__class__.__name__,
'qty': self.qty,
}
class UnitError(Exception):
pass
def __setstate__(self, state):
self.qty = state['qty']
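
A rough usage sketch of the Bytes unit and the onlybytes() guard above, assuming the constructor accepts strings like '1GiB' (matching the manifest's bytes pattern):

from bootstrapvz.common.bytes import Bytes
from bootstrapvz.common.exceptions import UnitError

size = Bytes('1GiB') + Bytes('512MiB')
assert size.get_qty_in('MiB') == 1536
try:
    size + 512  # plain integers are rejected by the onlybytes() decorator
except UnitError:
    pass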

View file

@ -1,22 +1,26 @@
class ManifestError(Exception):
def __init__(self, message, manifest_path, json_path=None):
def __init__(self, message, manifest_path, data_path=None):
super(ManifestError, self).__init__(message)
self.message = message
self.manifest_path = manifest_path
self.json_path = json_path
self.data_path = data_path
self.args = (self.message, self.manifest_path, self.data_path)
def __str__(self):
if self.json_path is not None:
path = '.'.join(map(str, self.json_path))
return ('{msg}\n File path: {file}\n JSON path: {jsonpath}'
.format(msg=self.message, file=self.manifest_path, jsonpath=path))
if self.data_path is not None:
path = '.'.join(map(str, self.data_path))
return ('{msg}\n File path: {file}\n Data path: {datapath}'
.format(msg=self.message, file=self.manifest_path, datapath=path))
return '{file}: {msg}'.format(msg=self.message, file=self.manifest_path)
class TaskListError(Exception):
def __init__(self, message):
super(TaskListError, self).__init__(message)
self.message = message
self.args = (self.message,)
def __str__(self):
return 'Error in tasklist: ' + self.message
@ -24,3 +28,11 @@ class TaskListError(Exception):
class TaskError(Exception):
pass
class UnexpectedNumMatchesError(Exception):
pass
class UnitError(Exception):
pass
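
A sketch of what the explicit self.args assignments above buy: the exceptions now survive a pickle round trip (argument values below are illustrative), which is useful e.g. when errors are reported back from a remote bootstrapping process.

import pickle
from bootstrapvz.common.exceptions import ManifestError

err = ManifestError('`provider\' is required', 'manifest.yml', ['provider'])
clone = pickle.loads(pickle.dumps(err))
assert clone.manifest_path == 'manifest.yml'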

View file

@ -1,3 +1,4 @@
from contextlib import contextmanager
def get_partitions():
@ -16,7 +17,8 @@ def get_partitions():
return matches
def remount(volume, fn):
@contextmanager
def unmounted(volume):
from bootstrapvz.base.fs.partitionmaps.none import NoPartitions
p_map = volume.partition_map
@ -24,9 +26,8 @@ def remount(volume, fn):
p_map.root.unmount()
if not isinstance(p_map, NoPartitions):
p_map.unmap(volume)
result = fn()
yield
p_map.map(volume)
else:
result = fn()
yield
p_map.root.mount(destination=root_dir)
return result

View file

@ -11,8 +11,8 @@ class LoopbackVolume(Volume):
def _before_create(self, e):
self.image_path = e.image_path
vol_size = str(self.size.get_qty_in('MiB')) + 'M'
log_check_call(['qemu-img', 'create', '-f', 'raw', self.image_path, vol_size])
size_opt = '--size={mib}M'.format(mib=self.size.bytes.get_qty_in('MiB'))
log_check_call(['truncate', size_opt, self.image_path])
def _before_attach(self, e):
[self.loop_device_path] = log_check_call(['losetup', '--show', '--find', self.image_path])

View file

@ -8,7 +8,7 @@ class QEMUVolume(LoopbackVolume):
def _before_create(self, e):
self.image_path = e.image_path
vol_size = str(self.size.get_qty_in('MiB')) + 'M'
vol_size = str(self.size.bytes.get_qty_in('MiB')) + 'M'
log_check_call(['qemu-img', 'create', '-f', self.qemu_format, self.image_path, vol_size])
def _check_nbd_module(self):
@ -23,7 +23,8 @@ class QEMUVolume(LoopbackVolume):
num_partitions = len(self.partition_map.partitions)
if not self._module_loaded('nbd'):
msg = ('The kernel module `nbd\' must be loaded '
'(`modprobe nbd max_part={num_partitions}\') to attach .{extension} images'
'(run `modprobe nbd max_part={num_partitions}\') '
'to attach .{extension} images'
.format(num_partitions=num_partitions, extension=self.extension))
raise VolumeError(msg)
nbd_max_part = int(self._module_param('nbd', 'max_part'))
@ -76,3 +77,7 @@ class QEMUVolume(LoopbackVolume):
if not self._is_nbd_used(device_name):
return os.path.join('/dev', device_name)
raise VolumeError('Unable to find free nbd device.')
def __setstate__(self, state):
for key in state:
self.__dict__[key] = state[key]

View file

@ -43,6 +43,19 @@ class FSMProxy(object):
if not hasattr(self, event):
setattr(self, event, make_proxy(fsm, event))
def __getstate__(self):
state = {}
for key, value in self.__dict__.iteritems():
if callable(value) or key == 'fsm':
continue
state[key] = value
state['__class__'] = self.__module__ + '.' + self.__class__.__name__
return state
def __setstate__(self, state):
for key in state:
self.__dict__[key] = state[key]
class FSMProxyError(Exception):
pass

View file

@ -7,6 +7,7 @@ volume_mounting = Phase('Volume mounting', 'Mounting bootstrap volume')
os_installation = Phase('OS installation', 'Installing the operating system')
package_installation = Phase('Package installation', 'Installing software')
system_modification = Phase('System modification', 'Modifying configuration files, adding resources, etc.')
user_modification = Phase('User modification', 'Running user specified modifications')
system_cleaning = Phase('System cleaning', 'Removing sensitive data, temporary files and other leftovers')
volume_unmounting = Phase('Volume unmounting', 'Unmounting the bootstrap volume')
image_registration = Phase('Image registration', 'Uploading/Registering with the provider')
@ -19,6 +20,7 @@ order = [preparation,
os_installation,
package_installation,
system_modification,
user_modification,
system_cleaning,
volume_unmounting,
image_registration,

View file

@ -0,0 +1,68 @@
class _Release(object):
def __init__(self, codename, version):
self.codename = codename
self.version = version
def __cmp__(self, other):
return self.version - other.version
def __str__(self):
return self.codename
def __getstate__(self):
state = self.__dict__.copy()
state['__class__'] = self.__module__ + '.' + self.__class__.__name__
return state
def __setstate__(self, state):
for key in state:
self.__dict__[key] = state[key]
class _ReleaseAlias(_Release):
def __init__(self, alias, release):
self.alias = alias
self.release = release
super(_ReleaseAlias, self).__init__(self.release.codename, self.release.version)
def __str__(self):
return self.alias
sid = _Release('sid', 10)
stretch = _Release('stretch', 9)
jessie = _Release('jessie', 8)
wheezy = _Release('wheezy', 7)
squeeze = _Release('squeeze', 6.0)
lenny = _Release('lenny', 5.0)
etch = _Release('etch', 4.0)
sarge = _Release('sarge', 3.1)
woody = _Release('woody', 3.0)
potato = _Release('potato', 2.2)
slink = _Release('slink', 2.1)
hamm = _Release('hamm', 2.0)
bo = _Release('bo', 1.3)
rex = _Release('rex', 1.2)
buzz = _Release('buzz', 1.1)
unstable = _ReleaseAlias('unstable', sid)
testing = _ReleaseAlias('testing', stretch)
stable = _ReleaseAlias('stable', jessie)
oldstable = _ReleaseAlias('oldstable', wheezy)
def get_release(release_name):
"""Normalizes the release codenames
This allows tasks to compare against concrete release codenames rather than aliases like 'stable' or 'unstable'.
"""
from . import releases
release = getattr(releases, release_name, None)
if release is None or not isinstance(release, _Release):
raise UnknownReleaseException('The release `{name}\' is unknown'.format(name=release_name))
return release
class UnknownReleaseException(Exception):
pass
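
A short sketch of how tasks use these objects: get_release() resolves aliases, releases compare by version number, and aliases keep their own name when printed.

from bootstrapvz.common.releases import get_release, jessie, wheezy

assert get_release('oldstable') == wheezy
assert wheezy < jessie
assert str(get_release('stable')) == 'stable'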

View file

@ -0,0 +1,178 @@
from exceptions import UnitError
from bytes import Bytes
def onlysectors(msg):
def decorator(func):
def check_other(self, other):
if not isinstance(other, Sectors):
raise UnitError(msg)
return func(self, other)
return check_other
return decorator
class Sectors(object):
def __init__(self, quantity, sector_size):
if isinstance(sector_size, Bytes):
self.sector_size = sector_size
else:
self.sector_size = Bytes(sector_size)
if isinstance(quantity, Bytes):
self.bytes = quantity
else:
if isinstance(quantity, (int, long)):
self.bytes = self.sector_size * quantity
else:
self.bytes = Bytes(quantity)
def get_sectors(self):
return self.bytes / self.sector_size
def __repr__(self):
return str(self.get_sectors()) + 's'
def __str__(self):
return self.__repr__()
def __int__(self):
return self.get_sectors()
def __long__(self):
return self.get_sectors()
@onlysectors('Can only compare sectors with sectors')
def __lt__(self, other):
return self.bytes < other.bytes
@onlysectors('Can only compare sectors with sectors')
def __le__(self, other):
return self.bytes <= other.bytes
@onlysectors('Can only compare sectors with sectors')
def __eq__(self, other):
return self.bytes == other.bytes
@onlysectors('Can only compare sectors with sectors')
def __ne__(self, other):
return self.bytes != other.bytes
@onlysectors('Can only compare sectors with sectors')
def __ge__(self, other):
return self.bytes >= other.bytes
@onlysectors('Can only compare sectors with sectors')
def __gt__(self, other):
return self.bytes > other.bytes
def __add__(self, other):
if isinstance(other, (int, long)):
return Sectors(self.bytes + self.sector_size * other, self.sector_size)
if isinstance(other, Bytes):
return Sectors(self.bytes + other, self.sector_size)
if isinstance(other, Sectors):
if self.sector_size != other.sector_size:
raise UnitError('Cannot sum sectors with different sector sizes')
return Sectors(self.bytes + other.bytes, self.sector_size)
raise UnitError('Can only add sectors, bytes or integers to sectors')
def __iadd__(self, other):
if isinstance(other, (int, long)):
self.bytes += self.sector_size * other
return self
if isinstance(other, Bytes):
self.bytes += other
return self
if isinstance(other, Sectors):
if self.sector_size != other.sector_size:
raise UnitError('Cannot sum sectors with different sector sizes')
self.bytes += other.bytes
return self
raise UnitError('Can only add sectors, bytes or integers to sectors')
def __sub__(self, other):
if isinstance(other, (int, long)):
return Sectors(self.bytes - self.sector_size * other, self.sector_size)
if isinstance(other, Bytes):
return Sectors(self.bytes - other, self.sector_size)
if isinstance(other, Sectors):
if self.sector_size != other.sector_size:
raise UnitError('Cannot subtract sectors with different sector sizes')
return Sectors(self.bytes - other.bytes, self.sector_size)
raise UnitError('Can only subtract sectors, bytes or integers from sectors')
def __isub__(self, other):
if isinstance(other, (int, long)):
self.bytes -= self.sector_size * other
return self
if isinstance(other, Bytes):
self.bytes -= other
return self
if isinstance(other, Sectors):
if self.sector_size != other.sector_size:
raise UnitError('Cannot subtract sectors with different sector sizes')
self.bytes -= other.bytes
return self
raise UnitError('Can only subtract sectors, bytes or integers from sectors')
def __mul__(self, other):
if isinstance(other, (int, long)):
return Sectors(self.bytes * other, self.sector_size)
else:
raise UnitError('Can only multiply sectors with integers')
def __imul__(self, other):
if isinstance(other, (int, long)):
self.bytes *= other
return self
else:
raise UnitError('Can only multiply sectors with integers')
def __div__(self, other):
if isinstance(other, (int, long)):
return Sectors(self.bytes / other, self.sector_size)
if isinstance(other, Sectors):
if self.sector_size == other.sector_size:
return self.bytes / other.bytes
else:
raise UnitError('Cannot divide sectors with different sector sizes')
raise UnitError('Can only divide sectors with integers or sectors')
def __idiv__(self, other):
if isinstance(other, (int, long)):
self.bytes /= other
return self
if isinstance(other, Sectors):
if self.sector_size == other.sector_size:
self.bytes /= other.bytes
return self
else:
raise UnitError('Cannot divide sectors with different sector sizes')
raise UnitError('Can only divide sectors with integers or sectors')
@onlysectors('Can only take modulus of sectors with sectors')
def __mod__(self, other):
if self.sector_size == other.sector_size:
return Sectors(self.bytes % other.bytes, self.sector_size)
else:
raise UnitError('Cannot take modulus of sectors with different sector sizes')
@onlysectors('Can only take modulus of sectors with sectors')
def __imod__(self, other):
if self.sector_size == other.sector_size:
self.bytes %= other.bytes
return self
else:
raise UnitError('Cannot take modulus of sectors with different sector sizes')
def __getstate__(self):
return {'__class__': self.__module__ + '.' + self.__class__.__name__,
'sector_size': self.sector_size,
'bytes': self.bytes,
}
def __setstate__(self, state):
self.sector_size = state['sector_size']
self.bytes = state['bytes']
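
A rough usage sketch of the Sectors unit, assuming the module lives at bootstrapvz.common.sectors next to bytes and that Bytes parses strings such as '512B':

from bootstrapvz.common.bytes import Bytes
from bootstrapvz.common.sectors import Sectors

sector_size = Bytes('512B')
offset = Sectors('1MiB', sector_size)
assert int(offset) == 2048                                      # 1MiB / 512B
assert offset + 1 == Sectors('1MiB', sector_size) + Bytes('512B')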

View file

@ -1,7 +1,8 @@
from tasks import workspace
from tasks import packages
from tasks import host
from tasks import boot
from tasks import grub
from tasks import extlinux
from tasks import bootstrap
from tasks import volume
from tasks import loopback
@ -14,6 +15,7 @@ from tasks import locale
from tasks import network
from tasks import initd
from tasks import ssh
from tasks import kernel
def get_standard_groups(manifest):
@ -25,12 +27,13 @@ def get_standard_groups(manifest):
if 'boot' in manifest.volume['partitions']:
group.extend(boot_partition_group)
group.extend(mounting_group)
group.extend(kernel_group)
group.extend(get_fs_specific_group(manifest))
group.extend(get_network_group(manifest))
group.extend(get_apt_group(manifest))
group.extend(security_group)
group.extend(locale_group)
group.extend(bootloader_group.get(manifest.system['bootloader'], []))
group.extend(get_bootloader_group(manifest))
group.extend(cleanup_group)
return group
@ -71,10 +74,16 @@ boot_partition_group = [filesystem.CreateBootMountDir,
mounting_group = [filesystem.CreateMountDir,
filesystem.MountRoot,
filesystem.MountSpecials,
filesystem.CopyMountTable,
filesystem.RemoveMountTable,
filesystem.UnmountRoot,
filesystem.DeleteMountDir,
]
kernel_group = [kernel.DetermineKernelVersion,
kernel.UpdateInitramfs,
]
ssh_group = [ssh.AddOpenSSHPackage,
ssh.DisableSSHPasswordAuthentication,
ssh.DisableSSHDNSLookup,
@ -126,9 +135,25 @@ locale_group = [locale.LocaleBootstrapPackage,
]
bootloader_group = {'grub': [boot.AddGrubPackage, boot.ConfigureGrub, boot.InstallGrub],
'extlinux': [boot.AddExtlinuxPackage, boot.InstallExtLinux],
}
def get_bootloader_group(manifest):
from bootstrapvz.common.releases import jessie
group = []
if manifest.system['bootloader'] == 'grub':
group.extend([grub.AddGrubPackage,
grub.ConfigureGrub])
if manifest.release < jessie:
group.append(grub.InstallGrub_1_99)
else:
group.append(grub.InstallGrub_2)
if manifest.system['bootloader'] == 'extlinux':
group.append(extlinux.AddExtlinuxPackage)
if manifest.release < jessie:
group.extend([extlinux.ConfigureExtlinux,
extlinux.InstallExtlinux])
else:
group.extend([extlinux.ConfigureExtlinuxJessie,
extlinux.InstallExtlinuxJessie])
return group
def get_fs_specific_group(manifest):

View file

@ -1,7 +1,8 @@
from bootstrapvz.base import Task
from .. import phases
from ..tools import log_check_call
from bootstrapvz.common import phases
from bootstrapvz.common.tools import log_check_call
import locale
import logging
import os
@ -23,14 +24,37 @@ class AddDefaultSources(Task):
@classmethod
def run(cls, info):
from bootstrapvz.common.releases import sid
include_src = info.manifest.packages.get('include-source-type', False)
components = ' '.join(info.manifest.packages.get('components', ['main']))
info.source_lists.add('main', 'deb {apt_mirror} {system.release} ' + components)
info.source_lists.add('main', 'deb-src {apt_mirror} {system.release} ' + components)
if info.release_codename != 'sid':
if include_src:
info.source_lists.add('main', 'deb-src {apt_mirror} {system.release} ' + components)
if info.manifest.release != sid:
info.source_lists.add('main', 'deb http://security.debian.org/ {system.release}/updates ' + components)
info.source_lists.add('main', 'deb-src http://security.debian.org/ {system.release}/updates ' + components)
if include_src:
info.source_lists.add('main', 'deb-src http://security.debian.org/ {system.release}/updates ' + components)
info.source_lists.add('main', 'deb {apt_mirror} {system.release}-updates ' + components)
info.source_lists.add('main', 'deb-src {apt_mirror} {system.release}-updates ' + components)
if include_src:
info.source_lists.add('main', 'deb-src {apt_mirror} {system.release}-updates ' + components)
class AddBackports(Task):
description = 'Adding backports to the apt sources'
phase = phases.preparation
predecessors = [AddDefaultSources]
@classmethod
def run(cls, info):
from bootstrapvz.common.releases import unstable
if info.source_lists.target_exists('{system.release}-backports'):
msg = ('{system.release}-backports target already exists').format(**info.manifest_vars)
logging.getLogger(__name__).info(msg)
elif info.manifest.release == unstable:
logging.getLogger(__name__).info('There are no backports for sid/unstable')
else:
info.source_lists.add('backports', 'deb {apt_mirror} {system.release}-backports main')
info.source_lists.add('backports', 'deb-src {apt_mirror} {system.release}-backports main')
class AddManifestPreferences(Task):
@ -63,6 +87,11 @@ class WriteSources(Task):
@classmethod
def run(cls, info):
if not info.source_lists.target_exists(info.manifest.system['release']):
import logging
log = logging.getLogger(__name__)
log.warn('No default target has been specified in the sources list, '
'installing packages may fail')
for name, sources in info.source_lists.sources.iteritems():
if name == 'main':
list_path = os.path.join(info.root, 'etc/apt/sources.list')
@ -137,12 +166,11 @@ class AptUpgrade(Task):
'--assume-yes'])
except CalledProcessError as e:
if e.returncode == 100:
import logging
msg = ('apt exited with status code 100. '
'This can sometimes occur when package retrieval times out or a package extraction failed. '
'apt might succeed if you try bootstrapping again.')
logging.getLogger(__name__).warn(msg)
raise e
raise
class PurgeUnusedPackages(Task):
@ -153,7 +181,8 @@ class PurgeUnusedPackages(Task):
def run(cls, info):
log_check_call(['chroot', info.root,
'apt-get', 'autoremove',
'--purge'])
'--purge',
'--assume-yes'])
class AptClean(Task):

View file

@ -1,21 +1,31 @@
from bootstrapvz.base import Task
from .. import phases
import apt
import filesystem
from bootstrapvz.base.fs import partitionmaps
import os.path
from . import assets
class UpdateInitramfs(Task):
description = 'Updating initramfs'
phase = phases.system_modification
@classmethod
def run(cls, info):
from ..tools import log_check_call
log_check_call(['chroot', info.root, 'update-initramfs', '-u'])
class BlackListModules(Task):
description = 'Blacklisting kernel modules'
phase = phases.system_modification
successors = [UpdateInitramfs]
@classmethod
def run(cls, info):
blacklist_path = os.path.join(info.root, 'etc/modprobe.d/blacklist.conf')
with open(blacklist_path, 'a') as blacklist:
blacklist.write(('# disable pc speaker\n'
'blacklist pcspkr'))
blacklist.write(('# disable pc speaker and floppy\n'
'blacklist pcspkr\n'
'blacklist floppy\n'))
class DisableGetTTYs(Task):
@ -24,129 +34,19 @@ class DisableGetTTYs(Task):
@classmethod
def run(cls, info):
from ..tools import sed_i
inittab_path = os.path.join(info.root, 'etc/inittab')
tty1 = '1:2345:respawn:/sbin/getty 38400 tty1'
sed_i(inittab_path, '^' + tty1, '#' + tty1)
ttyx = ':23:respawn:/sbin/getty 38400 tty'
for i in range(2, 7):
i = str(i)
sed_i(inittab_path, '^' + i + ttyx + i, '#' + i + ttyx + i)
class AddGrubPackage(Task):
description = 'Adding grub package'
phase = phases.preparation
predecessors = [apt.AddDefaultSources]
@classmethod
def run(cls, info):
info.packages.add('grub-pc')
class ConfigureGrub(Task):
description = 'Configuring grub'
phase = phases.system_modification
predecessors = [filesystem.FStab]
@classmethod
def run(cls, info):
from bootstrapvz.common.tools import sed_i
grub_def = os.path.join(info.root, 'etc/default/grub')
sed_i(grub_def, '^#GRUB_TERMINAL=console', 'GRUB_TERMINAL=console')
sed_i(grub_def, '^GRUB_CMDLINE_LINUX_DEFAULT="quiet"',
'GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"')
class InstallGrub(Task):
description = 'Installing grub'
phase = phases.system_modification
predecessors = [filesystem.FStab]
@classmethod
def run(cls, info):
from ..fs.loopbackvolume import LoopbackVolume
from ..tools import log_check_call
boot_dir = os.path.join(info.root, 'boot')
grub_dir = os.path.join(boot_dir, 'grub')
from ..fs import remount
p_map = info.volume.partition_map
def link_fn():
info.volume.link_dm_node()
if isinstance(p_map, partitionmaps.none.NoPartitions):
p_map.root.device_path = info.volume.device_path
def unlink_fn():
info.volume.unlink_dm_node()
if isinstance(p_map, partitionmaps.none.NoPartitions):
p_map.root.device_path = info.volume.device_path
# GRUB cannot deal with installing to loopback devices
# so we fake a real harddisk with dmsetup.
# Guide here: http://ebroder.net/2009/08/04/installing-grub-onto-a-disk-image/
if isinstance(info.volume, LoopbackVolume):
remount(info.volume, link_fn)
try:
[device_path] = log_check_call(['readlink', '-f', info.volume.device_path])
device_map_path = os.path.join(grub_dir, 'device.map')
partition_prefix = 'msdos'
if isinstance(p_map, partitionmaps.gpt.GPTPartitionMap):
partition_prefix = 'gpt'
with open(device_map_path, 'w') as device_map:
device_map.write('(hd0) {device_path}\n'.format(device_path=device_path))
if not isinstance(p_map, partitionmaps.none.NoPartitions):
for idx, partition in enumerate(info.volume.partition_map.partitions):
device_map.write('(hd0,{prefix}{idx}) {device_path}\n'
.format(device_path=partition.device_path,
prefix=partition_prefix,
idx=idx + 1))
# Install grub
log_check_call(['chroot', info.root,
'grub-install', device_path])
log_check_call(['chroot', info.root, 'update-grub'])
except Exception as e:
if isinstance(info.volume, LoopbackVolume):
remount(info.volume, unlink_fn)
raise e
if isinstance(info.volume, LoopbackVolume):
remount(info.volume, unlink_fn)
class AddExtlinuxPackage(Task):
description = 'Adding extlinux package'
phase = phases.preparation
predecessors = [apt.AddDefaultSources]
@classmethod
def run(cls, info):
info.packages.add('extlinux')
if isinstance(info.volume.partition_map, partitionmaps.gpt.GPTPartitionMap):
info.packages.add('syslinux-common')
class InstallExtLinux(Task):
description = 'Installing extlinux'
phase = phases.system_modification
predecessors = [filesystem.FStab]
@classmethod
def run(cls, info):
from ..tools import log_check_call
if isinstance(info.volume.partition_map, partitionmaps.gpt.GPTPartitionMap):
bootloader = '/usr/lib/syslinux/gptmbr.bin'
# Forward compatible check for jessie
from bootstrapvz.common.releases import jessie
if info.manifest.release < jessie:
from ..tools import sed_i
inittab_path = os.path.join(info.root, 'etc/inittab')
tty1 = '1:2345:respawn:/sbin/getty 38400 tty1'
sed_i(inittab_path, '^' + tty1, '#' + tty1)
ttyx = ':23:respawn:/sbin/getty 38400 tty'
for i in range(2, 7):
i = str(i)
sed_i(inittab_path, '^' + i + ttyx + i, '#' + i + ttyx + i)
else:
bootloader = '/usr/lib/extlinux/mbr.bin'
log_check_call(['chroot', info.root,
'dd', 'bs=440', 'count=1',
'if=' + bootloader,
'of=' + info.volume.device_path])
log_check_call(['chroot', info.root,
'extlinux',
'--install', '/boot/extlinux'])
log_check_call(['chroot', info.root,
'extlinux-update'])
from shutil import copy
logind_asset_path = os.path.join(assets, 'systemd/logind.conf')
logind_destination = os.path.join(info.root, 'etc/systemd/logind.conf')
copy(logind_asset_path, logind_destination)

View file

@ -19,7 +19,8 @@ class AddRequiredCommands(Task):
def get_bootstrap_args(info):
executable = ['debootstrap']
options = ['--arch=' + info.manifest.system['architecture']]
arch = info.manifest.system.get('userspace_architecture', info.manifest.system.get('architecture'))
options = ['--arch=' + arch]
if len(info.include_packages) > 0:
options.append('--include=' + ','.join(info.include_packages))
if len(info.exclude_packages) > 0:
@ -79,7 +80,6 @@ class Bootstrap(Task):
class IncludePackagesInBootstrap(Task):
description = 'Add packages in the bootstrap phase'
phase = phases.preparation
successors = [Bootstrap]
@classmethod
def run(cls, info):
@ -91,7 +91,6 @@ class IncludePackagesInBootstrap(Task):
class ExcludePackagesInBootstrap(Task):
description = 'Remove packages from bootstrap phase'
phase = phases.preparation
successors = [Bootstrap]
@classmethod
def run(cls, info):

View file

@ -0,0 +1,114 @@
from bootstrapvz.base import Task
from .. import phases
from ..tools import log_check_call
import filesystem
import kernel
from bootstrapvz.base.fs import partitionmaps
import os
class AddExtlinuxPackage(Task):
description = 'Adding extlinux package'
phase = phases.preparation
@classmethod
def run(cls, info):
info.packages.add('extlinux')
if isinstance(info.volume.partition_map, partitionmaps.gpt.GPTPartitionMap):
info.packages.add('syslinux-common')
class ConfigureExtlinux(Task):
description = 'Configuring extlinux'
phase = phases.system_modification
predecessors = [filesystem.FStab]
@classmethod
def run(cls, info):
from bootstrapvz.common.releases import squeeze
if info.manifest.release == squeeze:
# On squeeze /etc/default/extlinux is generated when running extlinux-update
log_check_call(['chroot', info.root,
'extlinux-update'])
from bootstrapvz.common.tools import sed_i
extlinux_def = os.path.join(info.root, 'etc/default/extlinux')
sed_i(extlinux_def, r'^EXTLINUX_PARAMETERS="([^"]+)"$',
r'EXTLINUX_PARAMETERS="\1 console=ttyS0"')
class InstallExtlinux(Task):
description = 'Installing extlinux'
phase = phases.system_modification
predecessors = [filesystem.FStab, ConfigureExtlinux]
@classmethod
def run(cls, info):
if isinstance(info.volume.partition_map, partitionmaps.gpt.GPTPartitionMap):
bootloader = '/usr/lib/syslinux/gptmbr.bin'
else:
bootloader = '/usr/lib/extlinux/mbr.bin'
log_check_call(['chroot', info.root,
'dd', 'bs=440', 'count=1',
'if=' + bootloader,
'of=' + info.volume.device_path])
log_check_call(['chroot', info.root,
'extlinux',
'--install', '/boot/extlinux'])
log_check_call(['chroot', info.root,
'extlinux-update'])
class ConfigureExtlinuxJessie(Task):
description = 'Configuring extlinux'
phase = phases.system_modification
@classmethod
def run(cls, info):
extlinux_path = os.path.join(info.root, 'boot/extlinux')
os.mkdir(extlinux_path)
from . import assets
with open(os.path.join(assets, 'extlinux/extlinux.conf')) as template:
extlinux_config_tpl = template.read()
config_vars = {'root_uuid': info.volume.partition_map.root.get_uuid(),
'kernel_version': info.kernel_version}
# Check if / and /boot are on the same partition
# If not, /boot will actually be / when booting
if hasattr(info.volume.partition_map, 'boot'):
config_vars['boot_prefix'] = ''
else:
config_vars['boot_prefix'] = '/boot'
extlinux_config = extlinux_config_tpl.format(**config_vars)
with open(os.path.join(extlinux_path, 'extlinux.conf'), 'w') as extlinux_conf_handle:
extlinux_conf_handle.write(extlinux_config)
# Copy the boot message
from shutil import copy
boot_txt_path = os.path.join(assets, 'extlinux/boot.txt')
copy(boot_txt_path, os.path.join(extlinux_path, 'boot.txt'))
class InstallExtlinuxJessie(Task):
description = 'Installing extlinux'
phase = phases.system_modification
predecessors = [filesystem.FStab, ConfigureExtlinuxJessie]
# Make sure the kernel image is updated after we have installed the bootloader
successors = [kernel.UpdateInitramfs]
@classmethod
def run(cls, info):
if isinstance(info.volume.partition_map, partitionmaps.gpt.GPTPartitionMap):
# Yeah, somebody saw fit to uppercase that folder in jessie. Why? BECAUSE
bootloader = '/usr/lib/EXTLINUX/gptmbr.bin'
else:
bootloader = '/usr/lib/EXTLINUX/mbr.bin'
log_check_call(['chroot', info.root,
'dd', 'bs=440', 'count=1',
'if=' + bootloader,
'of=' + info.volume.device_path])
log_check_call(['chroot', info.root,
'extlinux',
'--install', '/boot/extlinux'])

View file

@ -1,7 +1,6 @@
from bootstrapvz.base import Task
from .. import phases
from ..tools import log_check_call
import apt
import bootstrap
import host
import volume
@ -26,8 +25,9 @@ class Format(Task):
def run(cls, info):
from bootstrapvz.base.fs.partitions.unformatted import UnformattedPartition
for partition in info.volume.partition_map.partitions:
if not isinstance(partition, UnformattedPartition):
partition.format()
if isinstance(partition, UnformattedPartition):
continue
partition.format()
class TuneVolumeFS(Task):
@ -41,15 +41,15 @@ class TuneVolumeFS(Task):
import re
# Disable the time based filesystem check
for partition in info.volume.partition_map.partitions:
if not isinstance(partition, UnformattedPartition):
if re.match('^ext[2-4]$', partition.filesystem) is not None:
log_check_call(['tune2fs', '-i', '0', partition.device_path])
if isinstance(partition, UnformattedPartition):
continue
if re.match('^ext[2-4]$', partition.filesystem) is not None:
log_check_call(['tune2fs', '-i', '0', partition.device_path])
class AddXFSProgs(Task):
description = 'Adding `xfsprogs\' to the image packages'
phase = phases.preparation
predecessors = [apt.AddDefaultSources]
@classmethod
def run(cls, info):
@ -113,6 +113,18 @@ class MountSpecials(Task):
root.add_mount('none', 'dev/pts', ['--types', 'devpts'])
class CopyMountTable(Task):
description = 'Copying mtab from host system'
phase = phases.os_installation
predecessors = [MountSpecials]
@classmethod
def run(cls, info):
import shutil
import os.path
shutil.copy('/proc/mounts', os.path.join(info.root, 'etc/mtab'))
class UnmountRoot(Task):
description = 'Unmounting the bootstrap volume'
phase = phases.volume_unmounting
@ -123,6 +135,17 @@ class UnmountRoot(Task):
info.volume.partition_map.root.unmount()
class RemoveMountTable(Task):
description = 'Removing mtab'
phase = phases.volume_unmounting
successors = [UnmountRoot]
@classmethod
def run(cls, info):
import os
os.remove(os.path.join(info.root, 'etc/mtab'))
class DeleteMountDir(Task):
description = 'Deleting mountpoint for the bootstrap volume'
phase = phases.volume_unmounting

View file

@ -0,0 +1,85 @@
from bootstrapvz.base import Task
from .. import phases
from ..tools import log_check_call
import filesystem
import kernel
from bootstrapvz.base.fs import partitionmaps
import os.path
class AddGrubPackage(Task):
description = 'Adding grub package'
phase = phases.preparation
@classmethod
def run(cls, info):
info.packages.add('grub-pc')
class ConfigureGrub(Task):
description = 'Configuring grub'
phase = phases.system_modification
predecessors = [filesystem.FStab]
@classmethod
def run(cls, info):
from bootstrapvz.common.tools import sed_i
grub_def = os.path.join(info.root, 'etc/default/grub')
sed_i(grub_def, '^#GRUB_TERMINAL=console', 'GRUB_TERMINAL=console')
sed_i(grub_def, '^GRUB_CMDLINE_LINUX_DEFAULT="quiet"',
'GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"')
class InstallGrub_1_99(Task):
description = 'Installing grub 1.99'
phase = phases.system_modification
predecessors = [filesystem.FStab]
@classmethod
def run(cls, info):
p_map = info.volume.partition_map
# GRUB screws up when installing in chrooted environments
# so we fake a real harddisk with dmsetup.
# Guide here: http://ebroder.net/2009/08/04/installing-grub-onto-a-disk-image/
from ..fs import unmounted
with unmounted(info.volume):
info.volume.link_dm_node()
if isinstance(p_map, partitionmaps.none.NoPartitions):
p_map.root.device_path = info.volume.device_path
try:
[device_path] = log_check_call(['readlink', '-f', info.volume.device_path])
device_map_path = os.path.join(info.root, 'boot/grub/device.map')
partition_prefix = 'msdos'
if isinstance(p_map, partitionmaps.gpt.GPTPartitionMap):
partition_prefix = 'gpt'
with open(device_map_path, 'w') as device_map:
device_map.write('(hd0) {device_path}\n'.format(device_path=device_path))
if not isinstance(p_map, partitionmaps.none.NoPartitions):
for idx, partition in enumerate(info.volume.partition_map.partitions):
device_map.write('(hd0,{prefix}{idx}) {device_path}\n'
.format(device_path=partition.device_path,
prefix=partition_prefix,
idx=idx + 1))
# Install grub
log_check_call(['chroot', info.root, 'grub-install', device_path])
log_check_call(['chroot', info.root, 'update-grub'])
finally:
with unmounted(info.volume):
info.volume.unlink_dm_node()
if isinstance(p_map, partitionmaps.none.NoPartitions):
p_map.root.device_path = info.volume.device_path
class InstallGrub_2(Task):
description = 'Installing grub 2'
phase = phases.system_modification
predecessors = [filesystem.FStab]
# Make sure the kernel image is updated after we have installed the bootloader
successors = [kernel.UpdateInitramfs]
@classmethod
def run(cls, info):
log_check_call(['chroot', info.root, 'grub-install', info.volume.device_path])
log_check_call(['chroot', info.root, 'update-grub'])

View file

@ -44,8 +44,9 @@ class RemoveHWClock(Task):
@classmethod
def run(cls, info):
from bootstrapvz.common.releases import squeeze
info.initd['disable'].append('hwclock.sh')
if info.manifest.system['release'] == 'squeeze':
if info.manifest.release == squeeze:
info.initd['disable'].append('hwclockfirst.sh')
@ -61,4 +62,4 @@ class AdjustExpandRootScript(Task):
script = os.path.join(info.root, 'etc/init.d/expand-root')
root_idx = info.volume.partition_map.root.get_index()
device_path = 'device_path="/dev/xvda{idx}"'.format(idx=root_idx)
sed_i(script, '^device_path="/dev/xvda$', device_path)
sed_i(script, '^device_path="/dev/xvda"$', device_path)

View file

@ -0,0 +1,52 @@
from bootstrapvz.base import Task
from .. import phases
from ..tasks import packages
import logging
class AddDKMSPackages(Task):
description = 'Adding DKMS and kernel header packages'
phase = phases.package_installation
successors = [packages.InstallPackages]
@classmethod
def run(cls, info):
info.packages.add('dkms')
kernel_pkg_arch = {'i386': '686-pae', 'amd64': 'amd64'}[info.manifest.system['architecture']]
info.packages.add('linux-headers-' + kernel_pkg_arch)
class UpdateInitramfs(Task):
description = 'Rebuilding initramfs'
phase = phases.system_modification
@classmethod
def run(cls, info):
from bootstrapvz.common.tools import log_check_call
# Update initramfs (-u) for all currently installed kernel versions (-k all)
log_check_call(['chroot', info.root, 'update-initramfs', '-u', '-k', 'all'])
class DetermineKernelVersion(Task):
description = 'Determining kernel version'
phase = phases.package_installation
predecessors = [packages.InstallPackages]
@classmethod
def run(cls, info):
# Snatched from `extlinux-update' in wheezy
# list the files in boot/ that match vmlinuz-*
# sort what the * matches, the first entry is the kernel version
import os.path
import re
regexp = re.compile('^vmlinuz-(?P<version>.+)$')
def get_kernel_version(vmlinuz_path):
vmlinux_basename = os.path.basename(vmlinuz_path)
return regexp.match(vmlinux_basename).group('version')
from glob import glob
boot = os.path.join(info.root, 'boot')
vmlinuz_paths = glob('{boot}/vmlinuz-*'.format(boot=boot))
kernels = map(get_kernel_version, vmlinuz_paths)
info.kernel_version = sorted(kernels, reverse=True)[0]
logging.getLogger(__name__).debug('Kernel version is {version}'.format(version=info.kernel_version))


@ -12,12 +12,12 @@ class AddRequiredCommands(Task):
@classmethod
def run(cls, info):
from ..fs.loopbackvolume import LoopbackVolume
if isinstance(info.volume, LoopbackVolume):
info.host_dependencies['qemu-img'] = 'qemu-utils'
info.host_dependencies['losetup'] = 'mount'
from ..fs.qemuvolume import QEMUVolume
if isinstance(info.volume, QEMUVolume):
if type(info.volume) is LoopbackVolume:
info.host_dependencies['losetup'] = 'mount'
info.host_dependencies['truncate'] = 'coreutils'
if isinstance(info.volume, QEMUVolume):
info.host_dependencies['qemu-img'] = 'qemu-utils'
class Create(Task):
@ -45,6 +45,7 @@ class MoveImage(Task):
destination = os.path.join(info.manifest.bootstrapper['workspace'], filename)
import shutil
shutil.move(info.volume.image_path, destination)
info.volume.image_path = destination
import logging
log = logging.getLogger(__name__)
log.info('The volume image has been moved to ' + destination)


@ -1,14 +0,0 @@
// This is a mapping of Debian release codenames to NIC configurations
// Every item in an array is a line
{
"squeeze": ["auto lo",
"iface lo inet loopback",
"auto eth0",
"iface eth0 inet dhcp"],
"wheezy": ["auto eth0",
"iface eth0 inet dhcp"],
"jessie": ["auto eth0",
"iface eth0 inet dhcp"],
"sid": ["auto eth0",
"iface eth0 inet dhcp"]
}


@ -0,0 +1,16 @@
---
# This is a mapping of Debian release codenames to NIC configurations
squeeze: |
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
wheezy: |
auto eth0
iface eth0 inet dhcp
jessie: |
auto eth0
iface eth0 inet dhcp
sid: |
auto eth0
iface eth0 inet dhcp


@ -5,7 +5,7 @@ import os
class RemoveDNSInfo(Task):
description = 'Removing resolv.conf'
phase = phases.system_modification
phase = phases.system_cleaning
@classmethod
def run(cls, info):
@ -15,7 +15,7 @@ class RemoveDNSInfo(Task):
class RemoveHostname(Task):
description = 'Removing the hostname file'
phase = phases.system_modification
phase = phases.system_cleaning
@classmethod
def run(cls, info):
@ -45,10 +45,10 @@ class ConfigureNetworkIF(Task):
@classmethod
def run(cls, info):
network_config_path = os.path.join(os.path.dirname(__file__), 'network-configuration.json')
network_config_path = os.path.join(os.path.dirname(__file__), 'network-configuration.yml')
from ..tools import config_get
if_config = config_get(network_config_path, [info.release_codename])
if_config = config_get(network_config_path, [info.manifest.release.codename])
interfaces_path = os.path.join(info.root, 'etc/network/interfaces')
with open(interfaces_path, 'a') as interfaces:
interfaces.write('\n'.join(if_config) + '\n')
interfaces.write(if_config + '\n')


@ -7,7 +7,6 @@ from ..tools import log_check_call
class AddManifestPackages(Task):
description = 'Adding packages from the manifest'
phase = phases.preparation
predecessors = [apt.AddDefaultSources]
@classmethod
def run(cls, info):
@ -49,8 +48,8 @@ class InstallPackages(Task):
log_check_call(['chroot', info.root,
'apt-get', 'install',
'--no-install-recommends',
'--assume-yes']
+ map(str, remote_packages),
'--assume-yes'] +
map(str, remote_packages),
env=env)
except CalledProcessError as e:
import logging
@ -70,7 +69,7 @@ class InstallPackages(Task):
'This can sometimes occur when package retrieval times out or a package extraction failed. '
'apt might succeed if you try bootstrapping again.')
logging.getLogger(__name__).warn(msg)
raise e
raise
@classmethod
def install_local(cls, info, local_packages):
@ -91,8 +90,7 @@ class InstallPackages(Task):
env = os.environ.copy()
env['DEBIAN_FRONTEND'] = 'noninteractive'
log_check_call(['chroot', info.root,
'dpkg', '--install']
+ chrooted_package_paths,
'dpkg', '--install'] + chrooted_package_paths,
env=env)
for path in absolute_package_paths:


@ -3,14 +3,12 @@ from .. import phases
from ..tools import log_check_call
import os.path
from . import assets
import apt
import initd
class AddOpenSSHPackage(Task):
description = 'Adding openssh package'
phase = phases.preparation
predecessors = [apt.AddDefaultSources]
@classmethod
def run(cls, info):
@ -30,7 +28,8 @@ class AddSSHKeyGeneration(Task):
try:
log_check_call(['chroot', info.root,
'dpkg-query', '-W', 'openssh-server'])
if info.manifest.system['release'] == 'squeeze':
from bootstrapvz.common.releases import squeeze
if info.manifest.release == squeeze:
install['generate-ssh-hostkeys'] = os.path.join(init_scripts_dir, 'squeeze/generate-ssh-hostkeys')
else:
install['generate-ssh-hostkeys'] = os.path.join(init_scripts_dir, 'generate-ssh-hostkeys')
@ -51,6 +50,38 @@ class DisableSSHPasswordAuthentication(Task):
sed_i(sshd_config_path, '^#PasswordAuthentication yes', 'PasswordAuthentication no')
class EnableRootLogin(Task):
description = 'Enabling SSH login for root'
phase = phases.system_modification
@classmethod
def run(cls, info):
sshdconfig_path = os.path.join(info.root, 'etc/ssh/sshd_config')
if os.path.exists(sshdconfig_path):
from bootstrapvz.common.tools import sed_i
sed_i(sshdconfig_path, 'PermitRootLogin .*', 'PermitRootLogin yes')
else:
import logging
logging.getLogger(__name__).warn('The OpenSSH server has not been installed, '
'not enabling SSH root login.')
class DisableRootLogin(Task):
description = 'Disabling SSH login for root'
phase = phases.system_modification
@classmethod
def run(cls, info):
sshdconfig_path = os.path.join(info.root, 'etc/ssh/sshd_config')
if os.path.exists(sshdconfig_path):
from bootstrapvz.common.tools import sed_i
sed_i(sshdconfig_path, 'PermitRootLogin .*', 'PermitRootLogin no')
else:
import logging
logging.getLogger(__name__).warn('The OpenSSH server has not been installed, '
'not disabling SSH root login.')
class DisableSSHDNSLookup(Task):
description = 'Disabling sshd remote host name lookup'
phase = phases.system_modification
@ -70,7 +101,8 @@ class ShredHostkeys(Task):
def run(cls, info):
ssh_hostkeys = ['ssh_host_dsa_key',
'ssh_host_rsa_key']
if info.manifest.system['release'] != 'squeeze':
from bootstrapvz.common.releases import wheezy
if info.manifest.release >= wheezy:
ssh_hostkeys.append('ssh_host_ecdsa_key')
private = [os.path.join(info.root, 'etc/ssh', name) for name in ssh_hostkeys]


@ -1,12 +1,20 @@
def log_check_call(command, stdin=None, env=None, shell=False):
status, stdout, stderr = log_call(command, stdin, env, shell)
import os
def log_check_call(command, stdin=None, env=None, shell=False, cwd=None):
status, stdout, stderr = log_call(command, stdin, env, shell, cwd)
from subprocess import CalledProcessError
if status != 0:
from subprocess import CalledProcessError
raise CalledProcessError(status, ' '.join(command), '\n'.join(stderr))
e = CalledProcessError(status, ' '.join(command), '\n'.join(stderr))
# Fix Pyro4's fixIronPythonExceptionForPickle() by setting the args property,
# even though we use our own serialization (at least I think that's the problem).
# See bootstrapvz.remote.serialize_called_process_error for more info.
setattr(e, 'args', (status, ' '.join(command), '\n'.join(stderr)))
raise e
return stdout
def log_call(command, stdin=None, env=None, shell=False):
def log_call(command, stdin=None, env=None, shell=False, cwd=None):
import subprocess
import logging
from multiprocessing.dummy import Pool as ThreadPool
@ -14,9 +22,12 @@ def log_call(command, stdin=None, env=None, shell=False):
command_log = realpath(command[0]).replace('/', '.')
log = logging.getLogger(__name__ + command_log)
log.debug('Executing: {command}'.format(command=' '.join(command)))
if type(command) is list:
log.debug('Executing: {command}'.format(command=' '.join(command)))
else:
log.debug('Executing: {command}'.format(command=command))
process = subprocess.Popen(args=command, env=env, shell=shell,
process = subprocess.Popen(args=command, env=env, shell=shell, cwd=cwd,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
@ -53,11 +64,26 @@ def log_call(command, stdin=None, env=None, shell=False):
return process.returncode, stdout, stderr
def sed_i(file_path, pattern, subst):
def sed_i(file_path, pattern, subst, expected_replacements=1):
replacement_count = inline_replace(file_path, pattern, subst)
if replacement_count != expected_replacements:
from exceptions import UnexpectedNumMatchesError
msg = ('There were {real} instead of {expected} matches for '
'the expression `{exp}\' in the file `{path}\''
.format(real=replacement_count, expected=expected_replacements,
exp=pattern, path=file_path))
raise UnexpectedNumMatchesError(msg)
def inline_replace(file_path, pattern, subst):
import fileinput
import re
replacement_count = 0
for line in fileinput.input(files=file_path, inplace=True):
print re.sub(pattern, subst, line),
(replacement, count) = re.subn(pattern, subst, line)
replacement_count += count
print replacement,
return replacement_count
def load_json(path):
@ -69,12 +95,24 @@ def load_json(path):
def load_yaml(path):
import yaml
with open(path, 'r') as fobj:
return yaml.safe_load(fobj)
with open(path, 'r') as stream:
return yaml.safe_load(stream)
def load_data(path):
filename, extension = os.path.splitext(path)
if not os.path.isfile(path):
raise Exception('The path {path} does not point to a file.'.format(path=path))
if extension == '.json':
return load_json(path)
elif extension == '.yml' or extension == '.yaml':
return load_yaml(path)
else:
raise Exception('Unrecognized extension: {ext}'.format(ext=extension))
def config_get(path, config_path):
config = load_json(path)
config = load_data(path)
for key in config_path:
config = config.get(key)
return config
@ -82,7 +120,6 @@ def config_get(path, config_path):
def copy_tree(from_path, to_path):
from shutil import copy
import os
for abs_prefix, dirs, files in os.walk(from_path):
prefix = os.path.normpath(os.path.relpath(abs_prefix, from_path))
for path in dirs:


@ -0,0 +1,8 @@
Plugins are a key feature of bootstrap-vz. Despite their small size
(most plugins do not exceed 100 source lines of code), they can modify
the behavior of bootstrapped systems to a great extent.
Below you will find documentation for all plugins available for
bootstrap-vz. If you cannot find what you are looking for, consider
`developing it yourself <http://bootstrap-vz.readthedocs.org>`__ and
contributing to this list!


@ -0,0 +1,12 @@
Admin user
----------
This plugin creates a user with passwordless sudo privileges. It also
disables the SSH root login. If the EC2 init scripts are installed, the
script for fetching the SSH authorized keys will be adjusted to match the
username specified.
Settings
~~~~~~~~
- ``username``: The username of the account to create. ``required``
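
Example
~~~~~~~

A minimal sketch of the relevant manifest section; the username value is only an example:

.. code:: yaml

    plugins:
      admin_user:
        username: admin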


@ -2,18 +2,22 @@
def validate_manifest(data, validator, error):
import os.path
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.json'))
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
validator(data, schema_path)
def resolve_tasks(taskset, manifest):
import tasks
from bootstrapvz.common.tasks import ssh
from bootstrapvz.providers.ec2.tasks import initd
if initd.AddEC2InitScripts in taskset:
taskset.add(tasks.AdminUserCredentials)
from bootstrapvz.common.releases import jessie
if manifest.release < jessie:
taskset.update([ssh.DisableRootLogin])
taskset.update([tasks.AddSudoPackage,
tasks.CreateAdminUser,
tasks.PasswordlessSudo,
tasks.DisableRootLogin,
])


@ -1,21 +0,0 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Admin user plugin manifest",
"type": "object",
"properties": {
"plugins": {
"type": "object",
"properties": {
"admin_user": {
"type": "object",
"properties": {
"username": {
"type": "string"
}
},
"required": ["username"]
}
}
}
}
}


@ -0,0 +1,14 @@
---
$schema: http://json-schema.org/draft-04/schema#
title: Admin user plugin manifest
type: object
properties:
plugins:
type: object
properties:
admin_user:
type: object
properties:
username: {type: string}
required: [username]
additionalProperties: false


@ -1,14 +1,12 @@
from bootstrapvz.base import Task
from bootstrapvz.common import phases
from bootstrapvz.common.tasks.initd import InstallInitScripts
from bootstrapvz.common.tasks import apt
import os
class AddSudoPackage(Task):
description = 'Adding `sudo\' to the image packages'
phase = phases.preparation
predecessors = [apt.AddDefaultSources]
@classmethod
def run(cls, info):
@ -54,23 +52,3 @@ class AdminUserCredentials(Task):
getcreds_path = os.path.join(info.root, 'etc/init.d/ec2-get-credentials')
username = info.manifest.plugins['admin_user']['username']
sed_i(getcreds_path, 'username=\'root\'', 'username=\'{username}\''.format(username=username))
class DisableRootLogin(Task):
description = 'Disabling SSH login for root'
phase = phases.system_modification
@classmethod
def run(cls, info):
from subprocess import CalledProcessError
from bootstrapvz.common.tools import log_check_call
try:
log_check_call(['chroot', info.root,
'dpkg-query', '-W', 'openssh-server'])
from bootstrapvz.common.tools import sed_i
sshdconfig_path = os.path.join(info.root, 'etc/ssh/sshd_config')
sed_i(sshdconfig_path, 'PermitRootLogin yes', 'PermitRootLogin no')
except CalledProcessError:
import logging
logging.getLogger(__name__).warn('The OpenSSH server has not been installed, '
'not disabling SSH root login.')


@ -0,0 +1,13 @@
import tasks
def validate_manifest(data, validator, error):
import os.path
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
validator(data, schema_path)
def resolve_tasks(taskset, manifest):
taskset.add(tasks.AddPackages)
taskset.add(tasks.CheckPlaybookPath)
taskset.add(tasks.RunAnsiblePlaybook)


@ -0,0 +1,29 @@
---
$schema: http://json-schema.org/draft-04/schema#
title: Ansible plugin manifest
type: object
properties:
plugins:
type: object
properties:
ansible:
type: object
properties:
extra_vars: {type: string}
tags: {type: string}
skip_tags: {type: string}
opt_flags:
type: array
flag: {type: string}
minItems: 1
hosts:
type: array
host: {type: string}
minItems: 1
playbook: {$ref: '#/definitions/absolute_path'}
required: [playbook]
additionalProperties: false
definitions:
absolute_path:
pattern: ^/[^\0]+$
type: string


@ -0,0 +1,96 @@
from bootstrapvz.base import Task
from bootstrapvz.common import phases
import os
class CheckPlaybookPath(Task):
description = 'Checking whether the playbook path exists'
phase = phases.preparation
@classmethod
def run(cls, info):
from bootstrapvz.common.exceptions import TaskError
playbook = info.manifest.plugins['ansible']['playbook']
if not os.path.exists(playbook):
msg = 'The playbook file {playbook} does not exist.'.format(playbook=playbook)
raise TaskError(msg)
if not os.path.isfile(playbook):
msg = 'The playbook path {playbook} does not point to a file.'.format(playbook=playbook)
raise TaskError(msg)
class AddPackages(Task):
description = 'Making sure python is installed'
phase = phases.preparation
@classmethod
def run(cls, info):
info.packages.add('python')
class RunAnsiblePlaybook(Task):
description = 'Running ansible playbooks'
phase = phases.user_modification
@classmethod
def run(cls, info):
from bootstrapvz.common.tools import log_check_call
# Extract playbook and directory
playbook = info.manifest.plugins['ansible']['playbook']
playbook_dir = os.path.dirname(os.path.realpath(playbook))
# Check for hosts
hosts = None
if 'hosts' in info.manifest.plugins['ansible']:
hosts = info.manifest.plugins['ansible']['hosts']
# Check for extra vars
extra_vars = None
if 'extra_vars' in info.manifest.plugins['ansible']:
extra_vars = info.manifest.plugins['ansible']['extra_vars']
tags = None
if 'tags' in info.manifest.plugins['ansible']:
tags = info.manifest.plugins['ansible']['tags']
skip_tags = None
if 'skip_tags' in info.manifest.plugins['ansible']:
skip_tags = info.manifest.plugins['ansible']['skip_tags']
opt_flags = None
if 'opt_flags' in info.manifest.plugins['ansible']:
opt_flags = info.manifest.plugins['ansible']['opt_flags']
# build the inventory file
inventory = os.path.join(info.root, 'tmp/bootstrap-inventory')
with open(inventory, 'w') as handle:
conn = '{} ansible_connection=chroot'.format(info.root)
content = ""
if hosts:
for host in hosts:
content += '[{}]\n{}\n'.format(host, conn)
else:
content = conn
handle.write(content)
# build the ansible command
cmd = ['ansible-playbook', '-i', inventory, os.path.basename(playbook)]
if extra_vars:
tmp_cmd = ['--extra-vars', '\"{}\"'.format(extra_vars)]
cmd.extend(tmp_cmd)
if tags:
tmp_cmd = ['--tags={}'.format(tags)]
cmd.extend(tmp_cmd)
if skip_tags:
tmp_cmd = ['--skip-tags={}'.format(skip_tags)]
cmd.extend(tmp_cmd)
if opt_flags:
# Should probably do proper validation on these, but I don't think it should be used very often.
cmd.extend(opt_flags)
# Run and remove the inventory file
log_check_call(cmd, cwd=playbook_dir)
os.remove(inventory)


@ -0,0 +1,27 @@
APT Proxy
---------
This plugin creates a proxy configuration file for APT, so that you can
benefit from cached packages instead of downloading them from the mirror
every time. For example, install ``apt-cacher-ng`` on the host machine
and then add ``"address": "127.0.0.1"`` and ``"port": 3142`` to the
manifest file.
Settings
~~~~~~~~
- ``address``: The IP or host of the proxy server.
``required``
- ``port``: The port (integer) of the proxy server.
``required``
- ``username``: The username for authentication against the proxy server.
This is ignored if ``password`` is not also set.
``optional``
- ``password``: The password for authentication against the proxy server.
This is ignored if ``username`` is not also set.
``optional``
- ``persistent``: Whether the proxy configuration file should remain on
the machine or not.
Valid values: ``true``, ``false``
Default: ``false``.
``optional``
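
Example
~~~~~~~

A sketch of the relevant manifest section for a local ``apt-cacher-ng`` instance; the credentials shown are placeholders and only needed if the proxy requires authentication:

.. code:: yaml

    plugins:
      apt_proxy:
        address: 127.0.0.1
        port: 3142
        username: apt
        password: secret
        persistent: false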


@ -1,11 +1,12 @@
def validate_manifest(data, validator, error):
import os.path
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.json'))
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
validator(data, schema_path)
def resolve_tasks(taskset, manifest):
import tasks
taskset.add(tasks.CheckAptProxy)
taskset.add(tasks.SetAptProxy)
if not manifest.plugins['apt_proxy'].get('persistent', False):
taskset.add(tasks.RemoveAptProxy)


@ -1,27 +0,0 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "APT proxy plugin manifest",
"type": "object",
"properties": {
"plugins": {
"type": "object",
"properties": {
"apt_proxy": {
"type": "object",
"properties": {
"address": {
"type": "string"
},
"persistent": {
"type": "boolean"
},
"port": {
"type": "integer"
}
},
"required": ["address", "port"]
}
}
}
}
}


@ -0,0 +1,18 @@
---
$schema: http://json-schema.org/draft-04/schema#
title: APT proxy plugin manifest
type: object
properties:
plugins:
type: object
properties:
apt_proxy:
type: object
properties:
address: {type: string}
password: {type: string}
port: {type: integer}
persistent: {type: boolean}
username: {type: string}
required: [address, port]
additionalProperties: false


@ -2,6 +2,28 @@ from bootstrapvz.base import Task
from bootstrapvz.common import phases
from bootstrapvz.common.tasks import apt
import os
import urllib2
class CheckAptProxy(Task):
description = 'Checking reachability of APT proxy server'
phase = phases.preparation
@classmethod
def run(cls, info):
proxy_address = info.manifest.plugins['apt_proxy']['address']
proxy_port = info.manifest.plugins['apt_proxy']['port']
proxy_url = 'http://{address}:{port}'.format(address=proxy_address, port=proxy_port)
try:
urllib2.urlopen(proxy_url, timeout=5)
except Exception as e:
# Default response from `apt-cacher-ng`
if isinstance(e, urllib2.HTTPError) and e.code == 404 and e.msg == 'Usage Information':
pass
else:
import logging
log = logging.getLogger(__name__)
log.warning('The APT proxy server couldn\'t be reached. `apt-get\' commands may fail.')
class SetAptProxy(Task):
@ -12,11 +34,21 @@ class SetAptProxy(Task):
@classmethod
def run(cls, info):
proxy_path = os.path.join(info.root, 'etc/apt/apt.conf.d/02proxy')
proxy_username = info.manifest.plugins['apt_proxy'].get('username')
proxy_password = info.manifest.plugins['apt_proxy'].get('password')
proxy_address = info.manifest.plugins['apt_proxy']['address']
proxy_port = info.manifest.plugins['apt_proxy']['port']
if None not in (proxy_username, proxy_password):
proxy_auth = '{username}:{password}@'.format(
username=proxy_username, password=proxy_password)
else:
proxy_auth = ''
with open(proxy_path, 'w') as proxy_file:
proxy_file.write('Acquire::http {{ Proxy "http://{address}:{port}"; }};\n'
.format(address=proxy_address, port=proxy_port))
proxy_file.write(
'Acquire::http {{ Proxy "http://{auth}{address}:{port}"; }};\n'
.format(auth=proxy_auth, address=proxy_address, port=proxy_port))
class RemoveAptProxy(Task):


@ -3,7 +3,7 @@ import tasks
def validate_manifest(data, validator, error):
import os.path
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.json'))
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
validator(data, schema_path)


@ -1,26 +0,0 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Puppet plugin manifest",
"type": "object",
"properties": {
"plugins": {
"type": "object",
"properties": {
"chef": {
"type": "object",
"properties": {
"assets": { "$ref": "#/definitions/absolute_path" }
},
"minProperties": 1,
"additionalProperties": false
}
}
}
},
"definitions": {
"absolute_path": {
"type": "string",
"pattern": "^/[^\\0]+$"
}
}
}


@ -0,0 +1,19 @@
---
$schema: http://json-schema.org/draft-04/schema#
title: Chef plugin manifest
type: object
properties:
plugins:
type: object
properties:
chef:
type: object
properties:
assets:
$ref: '#/definitions/absolute_path'
required: [assets]
additionalProperties: false
definitions:
absolute_path:
pattern: ^/[^\0]+$
type: string


@ -1,6 +1,5 @@
from bootstrapvz.base import Task
from bootstrapvz.common import phases
from bootstrapvz.common.tasks import apt
import os
@ -23,7 +22,6 @@ class CheckAssetsPath(Task):
class AddPackages(Task):
description = 'Add chef package'
phase = phases.preparation
predecessors = [apt.AddDefaultSources]
@classmethod
def run(cls, info):


@ -0,0 +1,23 @@
cloud-init
----------
This plugin installs and configures
`cloud-init <https://packages.debian.org/wheezy-backports/cloud-init>`__
on the system. Depending on the release, it is installed from either
backports or the main repository.
cloud-init is only compatible with Debian wheezy and later.
Settings
~~~~~~~~
- ``username``: The username of the account to create.
``required``
- ``disable_modules``: A list of strings specifying which cloud-init
modules should be disabled.
``optional``
- ``metadata_sources``: A string that sets the
`datasources <http://cloudinit.readthedocs.org/en/latest/topics/datasources.html>`__
that cloud-init should try fetching metadata from. The source is
automatically set when using the ec2 provider.
``optional``
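
Example
~~~~~~~

A sketch of the relevant manifest section; the username and the disabled module are illustrative, and ``Ec2`` is the datasource the ec2 provider would select automatically:

.. code:: yaml

    plugins:
      cloud_init:
        username: admin
        metadata_sources: Ec2
        disable_modules:
          - ssh-import-id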


@ -2,18 +2,20 @@
def validate_manifest(data, validator, error):
import os.path
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.json'))
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
validator(data, schema_path)
def resolve_tasks(taskset, manifest):
import tasks
import bootstrapvz.providers.ec2.tasks.initd as initd_ec2
from bootstrapvz.common.tasks import apt
from bootstrapvz.common.tasks import initd
from bootstrapvz.common.tasks import ssh
if manifest.system['release'] in ['wheezy', 'stable']:
taskset.add(tasks.AddBackports)
from bootstrapvz.common.releases import wheezy
if manifest.release == wheezy:
taskset.add(apt.AddBackports)
taskset.update([tasks.SetMetadataSource,
tasks.AddCloudInitPackages,


@ -1,44 +0,0 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "cloud-init plugin manifest",
"type": "object",
"properties": {
"system": {
"type": "object",
"properties": {
"release": {
"type": "string",
"enum": ["wheezy", "stable",
"jessie", "testing",
"sid", "unstable"]
}
}
},
"plugins": {
"type": "object",
"properties": {
"cloud_init": {
"type": "object",
"properties": {
"username": {
"type": "string"
},
"disable_modules": {
"type": "array",
"items": {
"type": "string"
},
"uniqueItems": true
},
"metadata_sources": {
"type": "string"
}
},
"required": ["username"]
},
"packages": {"type": "object"}
},
"required": ["cloud_init"]
}
}
}


@ -0,0 +1,31 @@
---
$schema: http://json-schema.org/draft-04/schema#
title: cloud-init plugin manifest
type: object
properties:
system:
type: object
properties:
release:
type: string
enum:
- wheezy
- stable
- jessie
- testing
- sid
- unstable
plugins:
type: object
properties:
cloud_init:
type: object
properties:
username: {type: string}
metadata_sources: {type: string}
disable_modules:
type: array
items: {type: string}
uniqueItems: true
required: [username]
additionalProperties: false


@ -7,29 +7,16 @@ import logging
import os.path
class AddBackports(Task):
description = 'Adding backports to the apt sources'
phase = phases.preparation
@classmethod
def run(cls, info):
if info.source_lists.target_exists('{system.release}-backports'):
msg = ('{system.release}-backports target already exists').format(**info.manifest_vars)
logging.getLogger(__name__).info(msg)
else:
info.source_lists.add('backports', 'deb {apt_mirror} {system.release}-backports main')
info.source_lists.add('backports', 'deb-src {apt_mirror} {system.release}-backports main')
class AddCloudInitPackages(Task):
description = 'Adding cloud-init package and sudo'
phase = phases.preparation
predecessors = [apt.AddDefaultSources, AddBackports]
predecessors = [apt.AddBackports]
@classmethod
def run(cls, info):
target = None
if info.manifest.system['release'] in ['wheezy', 'stable']:
from bootstrapvz.common.releases import wheezy
if info.manifest.release == wheezy:
target = '{system.release}-backports'
info.packages.add('cloud-init', target)
info.packages.add('sudo')
@ -63,10 +50,10 @@ class SetMetadataSource(Task):
sources = info.manifest.plugins['cloud_init']['metadata_sources']
else:
source_mapping = {'ec2': 'Ec2'}
sources = source_mapping.get(info.manifest.provider, None)
sources = source_mapping.get(info.manifest.provider['name'], None)
if sources is None:
msg = ('No cloud-init metadata source mapping found for provider `{provider}\', '
'skipping selections setting.').format(provider=info.manifest.provider)
'skipping selections setting.').format(provider=info.manifest.provider['name'])
logging.getLogger(__name__).warn(msg)
return
sources = "cloud-init cloud-init/datasources multiselect " + sources


@ -0,0 +1,31 @@
Commands
--------------
This plugin allows you to run arbitrary commands during the bootstrap process.
The commands are run at an indeterminate point *after* packages have been
installed, but *before* the volume has been unmounted.
Settings
~~~~~~~~
- ``commands``: A list of lists containing strings. Each top-level item
is a single command, while the strings inside each list comprise
parts of a command. This allows for proper shell argument escaping.
To circumvent escaping, simply put the entire command in a single
string; the command will then additionally be evaluated in a shell
(e.g. globbing will work).
In addition to the manifest variables, ``{root}`` is also available.
It points at the root of the image volume.
``required``
``manifest vars``
Example
~~~~~~~
Create an empty ``index.html`` in ``/var/www`` and delete all locales except English.
.. code:: yaml

    commands:
      commands:
        - [touch, '{root}/var/www/index.html']
        - ['rm -rf /usr/share/locale/[^en]*']


@ -2,7 +2,7 @@
def validate_manifest(data, validator, error):
import os.path
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.json'))
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
validator(data, schema_path)


@ -0,0 +1,22 @@
---
$schema: http://json-schema.org/draft-04/schema#
title: Commands plugin manifest
type: object
properties:
plugins:
type: object
properties:
commands:
type: object
properties:
commands:
items:
items:
type: string
minItems: 1
type: array
minItems: 1
type: array
required: [commands]
additionalProperties: false
required: [commands]


@ -3,12 +3,13 @@ from bootstrapvz.common import phases
class ImageExecuteCommand(Task):
description = 'Execute command in the image'
phase = phases.system_modification
description = 'Executing commands in the image'
phase = phases.user_modification
@classmethod
def run(cls, info):
from bootstrapvz.common.tools import log_check_call
for raw_command in info.manifest.plugins['image_commands']['commands']:
for raw_command in info.manifest.plugins['commands']['commands']:
command = map(lambda part: part.format(root=info.root, **info.manifest_vars), raw_command)
log_check_call(command)
shell = len(command) == 1
log_check_call(command, shell=shell)


@ -0,0 +1,18 @@
Docker daemon
-------------
Installs the `docker <http://www.docker.io/>`__ daemon in the image, using
the init scripts from the official repository.
This plugin can only be used if the distribution being bootstrapped is
at least ``wheezy``, as Docker needs kernel version ``3.8`` or higher,
which is available in the ``wheezy-backports`` repository. There is also
an architecture requirement: it runs only on ``amd64``.
Settings
~~~~~~~~
- ``version``: Selects the docker version to install. To select the
latest version simply omit this setting.
Default: ``latest``
``optional``
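
Example
~~~~~~~

A sketch of the relevant manifest section; the version and the daemon options are illustrative (``docker_opts`` ends up in ``DOCKER_OPTS`` in ``/etc/default/docker``):

.. code:: yaml

    plugins:
      docker_daemon:
        version: 1.5.0
        docker_opts: '--dns 8.8.8.8'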


@ -0,0 +1,27 @@
import os.path
import tasks
from bootstrapvz.common.tasks import apt
from bootstrapvz.common.releases import wheezy
def validate_manifest(data, validator, error):
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
validator(data, schema_path)
from bootstrapvz.common.releases import get_release
if get_release(data['system']['release']) == wheezy:
# prefs is a generator of apt preferences across files in the manifest
prefs = (item for vals in data.get('packages', {}).get('preferences', {}).values() for item in vals)
if not any('linux-image' in item['package'] and 'wheezy-backports' in item['pin'] for item in prefs):
msg = 'The backports kernel is required for the docker daemon to function properly'
error(msg, ['packages', 'preferences'])
def resolve_tasks(taskset, manifest):
if manifest.release == wheezy:
taskset.add(apt.AddBackports)
taskset.add(tasks.AddDockerDeps)
taskset.add(tasks.AddDockerBinary)
taskset.add(tasks.AddDockerInit)
taskset.add(tasks.EnableMemoryCgroup)
if len(manifest.plugins['docker_daemon'].get('pull_images', [])) > 0:
taskset.add(tasks.PullDockerImages)


@ -0,0 +1,19 @@
# Docker Upstart and SysVinit configuration file
# Customize location of Docker binary (especially for development testing).
#DOCKER="/usr/local/bin/docker"
# Use DOCKER_OPTS to modify the daemon startup options.
#DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
# Use DOCKER_NOFILE to set ulimit -n before starting Docker.
#DOCKER_NOFILE=65536
# Use DOCKER_LOCKEDMEMORY to set ulimit -l before starting Docker.
#DOCKER_LOCKEDMEMORY=unlimited
# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"
# This is also a handy place to tweak where Docker's temporary files go.
#export TMPDIR="/mnt/bigdrive/docker-tmp"


@ -0,0 +1,137 @@
#!/bin/sh
### BEGIN INIT INFO
# Provides: docker
# Required-Start: $syslog $remote_fs
# Required-Stop: $syslog $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Create lightweight, portable, self-sufficient containers.
# Description:
# Docker is an open-source project to easily create lightweight, portable,
# self-sufficient containers from any application. The same container that a
# developer builds and tests on a laptop can run at scale, in production, on
# VMs, bare metal, OpenStack clusters, public clouds and more.
### END INIT INFO
export PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
BASE=$(basename $0)
# modify these in /etc/default/$BASE (/etc/default/docker)
DOCKER=/usr/bin/$BASE
DOCKER_PIDFILE=/var/run/$BASE.pid
DOCKER_LOGFILE=/var/log/$BASE.log
DOCKER_OPTS=
DOCKER_DESC="Docker"
# Get lsb functions
. /lib/lsb/init-functions
if [ -f /etc/default/$BASE ]; then
. /etc/default/$BASE
fi
# see also init_is_upstart in /lib/lsb/init-functions (which isn't available in Ubuntu 12.04, or we'd use it)
if [ -x /sbin/initctl ] && /sbin/initctl version 2>/dev/null | grep -q upstart; then
log_failure_msg "$DOCKER_DESC is managed via upstart, try using service $BASE $1"
exit 1
fi
# Check docker is present
if [ ! -x $DOCKER ]; then
log_failure_msg "$DOCKER not present or not executable"
exit 1
fi
fail_unless_root() {
if [ "$(id -u)" != '0' ]; then
log_failure_msg "$DOCKER_DESC must be run as root"
exit 1
fi
}
cgroupfs_mount() {
# see also https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount
if grep -v '^#' /etc/fstab | grep -q cgroup \
|| [ ! -e /proc/cgroups ] \
|| [ ! -d /sys/fs/cgroup ]; then
return
fi
if ! mountpoint -q /sys/fs/cgroup; then
mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
fi
(
cd /sys/fs/cgroup
for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
mkdir -p $sys
if ! mountpoint -q $sys; then
if ! mount -n -t cgroup -o $sys cgroup $sys; then
rmdir $sys || true
fi
fi
done
)
}
case "$1" in
start)
fail_unless_root
cgroupfs_mount
touch "$DOCKER_LOGFILE"
chgrp docker "$DOCKER_LOGFILE"
if [ -n "$DOCKER_NOFILE" ]; then
ulimit -n $DOCKER_NOFILE
fi
if [ -n "$DOCKER_LOCKEDMEMORY" ]; then
ulimit -l $DOCKER_LOCKEDMEMORY
fi
log_begin_msg "Starting $DOCKER_DESC: $BASE"
start-stop-daemon --start --background \
--no-close \
--exec "$DOCKER" \
--pidfile "$DOCKER_PIDFILE" \
-- \
-d -p "$DOCKER_PIDFILE" \
$DOCKER_OPTS \
>> "$DOCKER_LOGFILE" 2>&1
log_end_msg $?
;;
stop)
fail_unless_root
log_begin_msg "Stopping $DOCKER_DESC: $BASE"
start-stop-daemon --stop --pidfile "$DOCKER_PIDFILE"
log_end_msg $?
;;
restart)
fail_unless_root
docker_pid=`cat "$DOCKER_PIDFILE" 2>/dev/null`
[ -n "$docker_pid" ] \
&& ps -p $docker_pid > /dev/null 2>&1 \
&& $0 stop
$0 start
;;
force-reload)
fail_unless_root
$0 restart
;;
status)
status_of_proc -p "$DOCKER_PIDFILE" "$DOCKER" docker
;;
*)
echo "Usage: $0 {start|stop|restart|status}"
exit 1
;;
esac
exit 0


@ -0,0 +1,29 @@
---
$schema: http://json-schema.org/draft-04/schema#
title: Install Docker plugin manifest
type: object
properties:
system:
type: object
properties:
architecture:
type: string
enum: [amd64]
release:
not:
type: string
enum:
- squeeze
- oldstable
plugins:
type: object
properties:
docker_daemon:
type: object
properties:
version:
pattern: '^\d\.\d{1,2}\.\d$'
type: string
docker_opts:
type: string
additionalProperties: false


@ -0,0 +1,122 @@
from bootstrapvz.base import Task
from bootstrapvz.common import phases
from bootstrapvz.common.tasks import grub
from bootstrapvz.common.tasks import initd
from bootstrapvz.common.tools import log_check_call
from bootstrapvz.common.tools import sed_i
from bootstrapvz.providers.gce.tasks import boot as gceboot
import os
import os.path
import shutil
import subprocess
import time
ASSETS_DIR = os.path.normpath(os.path.join(os.path.dirname(__file__), 'assets'))
class AddDockerDeps(Task):
description = 'Add packages for docker deps'
phase = phases.package_installation
DOCKER_DEPS = ['aufs-tools', 'btrfs-tools', 'git', 'iptables',
'procps', 'xz-utils', 'ca-certificates']
@classmethod
def run(cls, info):
for pkg in cls.DOCKER_DEPS:
info.packages.add(pkg)
class AddDockerBinary(Task):
description = 'Add docker binary'
phase = phases.system_modification
@classmethod
def run(cls, info):
docker_version = info.manifest.plugins['docker_daemon'].get('version', False)
docker_url = 'https://get.docker.io/builds/Linux/x86_64/docker-'
if docker_version:
docker_url += docker_version
else:
docker_url += 'latest'
bin_docker = os.path.join(info.root, 'usr/bin/docker')
log_check_call(['wget', '-O', bin_docker, docker_url])
os.chmod(bin_docker, 0755)
class AddDockerInit(Task):
description = 'Add docker init script'
phase = phases.system_modification
successors = [initd.InstallInitScripts]
@classmethod
def run(cls, info):
init_src = os.path.join(ASSETS_DIR, 'init.d/docker')
info.initd['install']['docker'] = init_src
default_src = os.path.join(ASSETS_DIR, 'default/docker')
default_dest = os.path.join(info.root, 'etc/default/docker')
shutil.copy(default_src, default_dest)
docker_opts = info.manifest.plugins['docker_daemon'].get('docker_opts')
if docker_opts:
sed_i(default_dest, r'^#*DOCKER_OPTS=.*$', 'DOCKER_OPTS="%s"' % docker_opts)
class EnableMemoryCgroup(Task):
description = 'Change grub configuration to enable the memory cgroup'
phase = phases.system_modification
successors = [grub.InstallGrub_1_99, grub.InstallGrub_2]
predecessors = [grub.ConfigureGrub, gceboot.ConfigureGrub]
@classmethod
def run(cls, info):
grub_config = os.path.join(info.root, 'etc/default/grub')
sed_i(grub_config, r'^(GRUB_CMDLINE_LINUX*=".*)"\s*$', r'\1 cgroup_enable=memory"')
class PullDockerImages(Task):
description = 'Pull docker images'
phase = phases.system_modification
predecessors = [AddDockerBinary]
@classmethod
def run(cls, info):
from bootstrapvz.common.exceptions import TaskError
from subprocess import CalledProcessError
images = info.manifest.plugins['docker_daemon'].get('pull_images', [])
retries = info.manifest.plugins['docker_daemon'].get('pull_images_retries', 10)
bin_docker = os.path.join(info.root, 'usr/bin/docker')
graph_dir = os.path.join(info.root, 'var/lib/docker')
socket = 'unix://' + os.path.join(info.workspace, 'docker.sock')
pidfile = os.path.join(info.workspace, 'docker.pid')
try:
# Start the docker daemon temporarily.
daemon = subprocess.Popen([bin_docker, '-d', '--graph', graph_dir, '-H', socket, '-p', pidfile])
# wait for docker daemon to start.
for _ in range(retries):
try:
log_check_call([bin_docker, '-H', socket, 'version'])
break
except CalledProcessError:
time.sleep(1)
for img in images:
# docker load if tarball.
if img.endswith('.tar.gz') or img.endswith('.tgz'):
cmd = [bin_docker, '-H', socket, 'load', '-i', img]
try:
log_check_call(cmd)
except CalledProcessError as e:
msg = 'error {e} loading docker image {img}.'.format(img=img, e=e)
raise TaskError(msg)
# docker pull if image name.
else:
cmd = [bin_docker, '-H', socket, 'pull', img]
try:
log_check_call(cmd)
except CalledProcessError as e:
msg = 'error {e} pulling docker image {img}.'.format(img=img, e=e)
raise TaskError(msg)
finally:
# shutdown docker daemon.
daemon.terminate()
os.remove(os.path.join(info.workspace, 'docker.sock'))


@ -0,0 +1,13 @@
def validate_manifest(data, validator, error):
import os.path
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
validator(data, schema_path)
def resolve_tasks(taskset, manifest):
import tasks
taskset.add(tasks.LaunchEC2Instance)
if 'print_public_ip' in manifest.plugins['ec2_launch']:
taskset.add(tasks.PrintPublicIPAddress)
if manifest.plugins['ec2_launch'].get('deregister_ami', False):
taskset.add(tasks.DeregisterAMI)

View file

@ -0,0 +1,20 @@
---
$schema: http://json-schema.org/draft-04/schema#
title: EC2-launch plugin manifest
type: object
properties:
plugins:
type: object
properties:
ec2_launch:
type: object
properties:
security_group_ids:
type: array
items: {type: string}
uniqueItems: true
instance_type: {type: string}
print_public_ip: {type: string}
tags: {type: object}
deregister_ami: {type: boolean}
additionalProperties: false

View file

@ -0,0 +1,85 @@
from bootstrapvz.base import Task
from bootstrapvz.common import phases
from bootstrapvz.providers.ec2.tasks import ami
import logging
# TODO: Merge with the method available in wip-integration-tests branch
def waituntil(predicate, timeout=5, interval=0.05):
import time
threshold = time.time() + timeout
while time.time() < threshold:
if predicate():
return True
time.sleep(interval)
return False
class LaunchEC2Instance(Task):
description = 'Launching EC2 instance'
phase = phases.image_registration
predecessors = [ami.RegisterAMI]
@classmethod
def run(cls, info):
conn = info._ec2['connection']
r = conn.run_instances(info._ec2['image'],
security_group_ids=info.manifest.plugins['ec2_launch'].get('security_group_ids'),
instance_type=info.manifest.plugins['ec2_launch'].get('instance_type', 't2.micro'))
info._ec2['instance'] = r.instances[0]
if 'tags' in info.manifest.plugins['ec2_launch']:
def apply_format(v):
return v.format(**info.manifest_vars)
tags = info.manifest.plugins['ec2_launch']['tags']
r = {k: apply_format(v) for k, v in tags.items()}
conn.create_tags([info._ec2['instance'].id], r)
class PrintPublicIPAddress(Task):
description = 'Waiting for the instance to launch'
phase = phases.image_registration
predecessors = [LaunchEC2Instance]
@classmethod
def run(cls, info):
ec2 = info._ec2
logger = logging.getLogger(__name__)
filename = info.manifest.plugins['ec2_launch']['print_public_ip']
if not filename:
filename = '/dev/null'
f = open(filename, 'w')
def instance_has_ip():
ec2['instance'].update()
return ec2['instance'].ip_address
if waituntil(instance_has_ip, timeout=120, interval=5):
logger.info('******* EC2 IP ADDRESS: %s *******' % ec2['instance'].ip_address)
f.write(ec2['instance'].ip_address)
else:
logger.error('Could not get IP address for the instance')
f.write('')
f.close()
class DeregisterAMI(Task):
description = 'Deregistering AMI'
phase = phases.image_registration
predecessors = [LaunchEC2Instance]
@classmethod
def run(cls, info):
ec2 = info._ec2
logger = logging.getLogger(__name__)
def instance_running():
ec2['instance'].update()
return ec2['instance'].state == 'running'
if waituntil(instance_running, timeout=120, interval=5):
info._ec2['connection'].deregister_image(info._ec2['image'])
info._ec2['snapshot'].delete()
else:
logger.error('Timeout while booting instance')

View file

@ -0,0 +1,16 @@
import tasks
def validate_manifest(data, validator, error):
import os.path
schema_path = os.path.normpath(os.path.join(os.path.dirname(__file__), 'manifest-schema.yml'))
validator(data, schema_path)
def resolve_tasks(taskset, manifest):
taskset.add(tasks.ValidateSourcePaths)
if ('mkdirs' in manifest.plugins['file_copy']):
taskset.add(tasks.MkdirCommand)
if ('files' in manifest.plugins['file_copy']):
taskset.add(tasks.FileCopyCommand)


@ -0,0 +1,45 @@
---
$schema: http://json-schema.org/draft-04/schema#
properties:
plugins:
properties:
file_copy:
properties:
mkdirs:
items:
dir:
$ref: '#/definitions/absolute_path'
permissions:
type: string
owner:
type: string
group:
type: string
files:
items:
src:
$ref: '#/definitions/absolute_path'
dst:
$ref: '#/definitions/absolute_path'
permissions:
type: string
owner:
type: string
group:
type: string
minItems: 1
type: array
required:
- src
- dst
required:
- files
type: object
additionalProperties: false
required:
- file_copy
type: object
required:
- plugins
title: File copy plugin manifest
type: object


@ -0,0 +1,65 @@
from bootstrapvz.base import Task
from bootstrapvz.common import phases
import os
import shutil
class ValidateSourcePaths(Task):
description = 'Check whether the files to be copied exist'
phase = phases.preparation
@classmethod
def run(cls, info):
from bootstrapvz.common.exceptions import TaskError
for file_entry in info.manifest.plugins['file_copy']['files']:
srcfile = file_entry['src']
if not os.path.isfile(srcfile):
msg = 'The source file %s does not exist.' % srcfile
raise TaskError(msg)
def modify_path(info, path, entry):
from bootstrapvz.common.tools import log_check_call
if 'permissions' in entry:
# We wrap the permissions string in str() in case
# the user specified a numeric bitmask
chmod_command = ['chroot', info.root, 'chmod', str(entry['permissions']), path]
log_check_call(chmod_command)
if 'owner' in entry:
chown_command = ['chroot', info.root, 'chown', entry['owner'], path]
log_check_call(chown_command)
if 'group' in entry:
chgrp_command = ['chroot', info.root, 'chgrp', entry['group'], path]
log_check_call(chgrp_command)
class MkdirCommand(Task):
description = 'Creating directories requested by user'
phase = phases.user_modification
@classmethod
def run(cls, info):
from bootstrapvz.common.tools import log_check_call
for dir_entry in info.manifest.plugins['file_copy']['mkdirs']:
mkdir_command = ['chroot', info.root, 'mkdir', '-p', dir_entry['dir']]
log_check_call(mkdir_command)
modify_path(info, dir_entry['dir'], dir_entry)
class FileCopyCommand(Task):
description = 'Copying user specified files into the image'
phase = phases.user_modification
predecessors = [MkdirCommand]
@classmethod
def run(cls, info):
for file_entry in info.manifest.plugins['file_copy']['files']:
# note that we don't use os.path.join because it can't
# handle absolute paths, which 'dst' most likely is.
final_destination = os.path.normpath("%s/%s" % (info.root, file_entry['dst']))
shutil.copy(file_entry['src'], final_destination)
modify_path(info, file_entry['dst'], file_entry)


@ -0,0 +1,6 @@
import tasks
def resolve_tasks(taskset, manifest):
taskset.add(tasks.InstallCloudSDK)
taskset.add(tasks.RemoveCloudSDKTarball)

Some files were not shown because too many files have changed in this diff Show more