September 16, 2014


Main features of Hypervisors reviewed

I have prepared this blog post to help students understand the Hypervisor Support Matrix published by OpenStack. This information is spread across different websites and manuals; many of the sources are Linux / Red Hat and OpenStack documentation. I have tried to provide a more general explanation where possible, with reference links to other sources.

The exact command syntax for each feature of course differs across the various cloud platforms; that kind of detail is not provided in this version of the blog. I hope to improve this post based on the comments I receive.

Launch (boot) – Command to launch an instance. Specify the server name, flavor ID (small, large, etc.), and image ID.

Reboot – Soft or hard reboot a running instance. A soft reboot attempts a graceful shut down and restart of the instance. A hard reboot power cycles the instance. By default, when you reboot a server, it is a soft reboot.

Terminate – When an instance is no longer needed, use the terminate or delete command to terminate it. You can use the instance name or the ID string.
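The exact syntax varies across platforms, but as a rough sketch, the OpenStack nova CLI of that era exposed this launch / reboot / terminate cycle roughly as follows (the server and image names here are hypothetical, and credentials are assumed to be set in the environment):

```shell
# Launch an instance by naming a flavor, an image, and a server name
nova boot --flavor m1.small --image cirros-0.3.2-x86_64 myserver

# Soft reboot (graceful shutdown and restart) vs. hard reboot (power cycle)
nova reboot myserver
nova reboot --hard myserver

# Terminate by name or ID when the instance is no longer needed
nova delete myserver
```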

Resize – If the size of a virtual machine needs to be changed, such as adding more memory or cores, this can be done using the resize operations. Using resize, you can select a new flavor for your virtual machine and instruct the cloud to adjust the configuration to match the new size. The operation will reboot the virtual machine and take several minutes of downtime. Network configuration will be maintained, but connectivity will be lost during the reboot, so this operation should be scheduled since it will lead to application downtime.
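As a hedged illustration with the nova CLI (the flavor ID and server name are hypothetical), a resize is a two-step operation: request the new flavor, then confirm or revert after the instance reboots:

```shell
nova resize myserver 3          # request flavor ID 3; triggers a reboot
nova resize-confirm myserver    # confirm once the instance is back up
nova resize-revert myserver     # or roll back to the original flavor
```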

Rescue – An instance’s filesystem could become corrupted with prolonged usage. Rescue mode provides a mechanism for access even when the VM’s image renders the instance inaccessible. It is possible to reboot a virtual machine in rescue mode. A rescue VM is launched that allows a user to fix their VM (by accessing with a new root password).

Pause / Un-pause – This command stores the state of the VM in RAM; a paused instance remains in memory in a frozen state.

Suspend / Resume – Administrative users might want to suspend / resume an instance if it is infrequently used or to perform system maintenance. When you suspend an instance, its VM state is stored on disk, all memory is written to disk, and the virtual machine is stopped. Suspending an instance is similar to placing a device in hibernation; memory and vCPUs become available to create other instances.
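With the nova CLI, for example (server name hypothetical), the two lifecycle pairs look like this; note that pause keeps state in RAM while suspend writes it to disk:

```shell
nova pause myserver      # freeze the instance; state stays in RAM
nova unpause myserver
nova suspend myserver    # write memory to disk and stop the VM
nova resume myserver
```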

Inject Networking – Allows setting up a private network between two or more virtual machines. This network is not visible to the other virtual machines or to the physical network.

Inject File – A feature that allows files to be included in the guest during boot. Normally the target is the root partition of the guest image. Sub-features enable further functionality to inspect arbitrarily complex guest images and find the root partition to inject into.

Serial Console Output – It is possible to access a VM directly using the TTY serial console interface, in which case setting up bridged networking, SSH, and the like is not necessary.

VNC Console – VNC (Virtual Network Computing) is software for remote control, based on server agents installed on the hypervisor. This feature indicates VNC support for the hypervisor and its VMs.
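For instance, the nova CLI of the time could fetch the serial console log and a console access URL (server name hypothetical; the console type must be one the hypervisor supports):

```shell
nova console-log myserver              # dump the serial console output
nova get-vnc-console myserver novnc    # URL for a browser-based VNC session
```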

SPICE Console – Red Hat introduced the SPICE remote computing protocol, used for SPICE client-server communication. Other components developed include the QXL display device and driver, forming a solution for interacting with virtualized desktop devices. The SPICE project covers both the virtualized devices and the front end. The SPICE server must be enabled in QEMU, and a client is needed to view the guest.

RDP Console – Allows connecting to the hypervisor and VMs via a Remote Desktop Protocol based console.

Attach / Detach Volume – Allows adding / removing volumes in the volume pool. This feature also allows attaching / detaching extra volumes to / from running VMs.
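Sketched with the 2014-era OpenStack CLIs (the names are hypothetical and the volume ID placeholder must be filled in), creating a volume and attaching / detaching it to a running VM looks like:

```shell
cinder create --display-name myvol 10             # create a 10 GB volume
nova volume-attach myserver <volume-id> /dev/vdb  # attach to a running VM
nova volume-detach myserver <volume-id>
```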

Live Migration – Migration describes the process of moving a guest virtual machine from one host physical machine to another. This is possible because guest virtual machines are running in a virtualized environment instead of directly on the hardware. In a live migration, the guest virtual machine continues to run on the source host physical machine while its memory pages are transferred, in order, to the destination host physical machine.
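In OpenStack, for example, a live migration is triggered with the nova CLI (server and host names hypothetical); the --block-migrate flag covers the case where source and destination do not share storage:

```shell
nova live-migration myserver destination-host
nova live-migration --block-migrate myserver destination-host
```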

Snapshot – A snapshot creates a coherent copy of a number of block devices at a given time. A live snapshot is a snapshot taken while the virtual machine is running, ideal for live backup of guests without guest intervention.
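As an illustration with the nova CLI (names hypothetical), a snapshot of a running instance is stored as a new image:

```shell
nova image-create myserver myserver-snap   # snapshot the instance to an image
nova image-list                            # the snapshot appears in the image list
```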

iSCSI – iSCSI (Internet Small Computer System Interface) is an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. For a hypervisor, this feature means that iSCSI-based disks can be added to the storage pool.

iSCSI CHAP – Challenge Handshake Authentication Protocol (CHAP) is a network login protocol that uses a challenge-response mechanism. You can use CHAP authentication to restrict iSCSI access to volumes and snapshots to hosts that supply the correct account name and password (or “secret”) combination. Using CHAP authentication can facilitate the management of access controls because it restricts access through account names and passwords, instead of IP addresses or iSCSI initiator names.
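On the initiator side, CHAP credentials can be configured per target with open-iscsi's iscsiadm, roughly as below (the target IQN, account name, and secret are hypothetical):

```shell
# Switch the target's session authentication to CHAP
iscsiadm -m node -T iqn.2014-09.com.example:storage.lun1 -o update \
         -n node.session.auth.authmethod -v CHAP
# Set the CHAP account name and secret the target expects
iscsiadm -m node -T iqn.2014-09.com.example:storage.lun1 -o update \
         -n node.session.auth.username -v myaccount
iscsiadm -m node -T iqn.2014-09.com.example:storage.lun1 -o update \
         -n node.session.auth.password -v mysecret
```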

Fibre Channel – This feature indicates that the hypervisor supports optical fibre connectivity, in particular Fibre Channel storage networks, which are cabled and configured with the appropriate Fibre Channel switches. This has implications for how zones are configured. KVM virtualization with VMControl supports only SAN storage over Fibre Channel. Typically, one of the fabric switches is configured with the zoning information. Additionally, VMControl requires that the Fibre Channel network has hard zoning enabled.

Set Admin Pass – The use of a guest agent to change the administrative (root) password on an instance.

Get Guest Info – Retrieves information about the hypervisor's guest machines; this information can also be retrieved from within a VM. A hypervisor can handle several guest machines, each with resource configurations assigned by the virtualization environment.

Get Host Info – Retrieves information about the node which is hosting the VMs.

Glance Integration – Glance is the image storage system used to store VM images. This feature indicates that the hypervisor integrates with Glance's storage capabilities.

Service Control – The hypervisor / compute service is a collection of services that enable you to launch virtual machine instances. You can configure these services to run on separate nodes or the same node. Most services run on the controller node, and the service that launches virtual machines runs on a dedicated compute node. This feature also allows installing and configuring these services on the controller node.

VLAN Networking – It indicates that it is possible to pass VLAN traffic from a virtual machine out to the wider network.

Flat Networking – FlatNetworking uses ethernet adapters configured as bridges to allow network traffic to transit between all the various nodes. This setup can be done with a single adapter on the physical host, or multiple. This option does not require a switch that does VLAN tagging as VLAN networking does, and is a common development installation or proof of concept setup. When you choose Flat networking, Nova does not manage networking at all. Instead, IP addresses are injected into the instance via the file system (or passed in via a guest agent). Metadata forwarding must be configured manually on the gateway if it is required within your network.

Security Groups – This is a feature of the hypervisor (compute). The cloud Networking service offers similar features using a mechanism that is more flexible and powerful than the built-in security group capabilities. In that case the built-in capabilities should be disabled and all security group calls proxied to the Networking API; if you do not, security policies will conflict by being applied simultaneously by both services.
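With the compute-side implementation, for example, the nova CLI of the time managed groups and rules like this (the group name and rule are hypothetical):

```shell
nova secgroup-create webservers "allow web traffic"
nova secgroup-add-rule webservers tcp 80 80     # open TCP port 80 from anywhere
nova boot --flavor m1.small --image cirros-0.3.2-x86_64 \
     --security-groups webservers web1
```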

Firewall Rules – Allows service providers to apply firewall rules at a level above security group rules.

Routing – The ability of the hypervisor to map internal addresses to external public addresses. The network part of the hypervisor essentially functions as an L2 switch with routing.

Config Drive / Auto-configure disk – Automatically reconfigures the size of the partition to match the size of the flavor's root drive before booting.
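For example, with the nova CLI an instance can be booted with a config drive attached (names hypothetical):

```shell
nova boot --flavor m1.small --image cirros-0.3.2-x86_64 \
     --config-drive true myserver
```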

Evacuate – As a cloud administrator, while you are managing your cloud you may reach the point where one of the compute nodes fails, for example due to a hardware malfunction. At that point you can use server evacuation to make the managed instances available again.
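As a sketch with the nova CLI (server and host names hypothetical), an instance on a failed node is rebuilt on another host; the flag tells nova the instance disk lives on shared storage and should be preserved:

```shell
nova evacuate myserver new-host
nova evacuate --on-shared-storage myserver new-host
```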

Volume swap – The hypervisor supports the definition of a swap volume (disk) to be used as additional virtual memory.

Volume rate limiting – Rate limiting (per day or hour) for volume access. It can be used to enable rate limiting for all back ends regardless of each back end's built-in feature set.

Twitter: @ICC-Lab  @ancibug



by Antonio Cimmino at September 16, 2014 09:09 AM

Kashyap Chamarthy

libvirt: default network conflicts (not anymore)

Increasingly there’s a need for libvirt networking to work inside a virtual machine that is already running on the default network. The immediate practical case where this comes up is while testing nested virtualization: start a guest (L1) with default libvirt networking, and if you install libvirt again on it to run a (nested) guest (L2), there will be a routing conflict because of the existing default route. Up until now, I tried to avoid this by creating a new libvirt network with a different IP range (or by manually editing the default libvirt network).

To alleviate this routing conflict, Laine Stump (libvirt developer) now pushed a patch (with a tiny follow up) to upstream libvirt git. (Relevant libvirt bug with discussion.)

I ended up testing the patch last night, it works well.

Assuming your physical host (L0) has the default libvirt network route:

$ ip route show | grep virbr
 dev virbr0  proto kernel  scope link  src

Now, start a guest (L1), and when you install libvirt (which has the said fix) on it, it notices the existing default route and creates the default network on the next free network range, thus avoiding the routing conflict.

 $ ip route show
  default via dev ens2  proto static  metric 1024 dev ens2  proto kernel  scope link  src dev virbr0  proto kernel  scope link  src

Relevant snippet of the default libvirt network (you can notice the new network range):

  $ virsh net-dumpxml default | grep "ip address" -A4
    <ip address='' netmask=''>
        <range start='' end=''/>

So, please test it (build RPMs locally from git master or should be available in the next upstream libvirt release, early October) for your use cases and report bugs, if any.

[Update: On Fedora, this fix is available from version libvirt-1.2.8-2.fc21.]

by kashyapc at September 16, 2014 06:06 AM

September 15, 2014

Rob Hirschfeld

To improve flow, we must view OpenStack community as a Software Factory

This post was sparked by a conversation at OpenStack Atlanta between OpenStack Foundation board members Todd Moore (IBM) and Rob Hirschfeld (Dell/Community).  We share a background in industrial and software process and felt that lessons from lean manufacturing translate directly to the challenges OpenStack faces.

While OpenStack has done an amazing job of growing contributors, scale has caused our code flow processes to be bottlenecked at the review stage.  This blocks flow throughout the entire system and presents a significant risk to both stability and feature addition.  Flow failures can ultimately lead to vendor forking.

Fundamentally, Todd and I felt that OpenStack needs to address system flows to build an integrated product.  The post expands on the “hidden influencers” issue and adds an additional challenge, because improving flow requires that the community’s influencers better understand the need to optimize inter-project work in a more systematic way.

Let’s start by visualizing the “OpenStack Factory”

Factory Floor

Factory Floor from Alpha Industries Wikipedia page

Imagine all of OpenStack’s 1000s of developers working together in a single giant start-up warehouse.  Each project has its own floor area with appropriate foosball tables, break areas and coffee bars.  It’s easy to visualize clusters of intent developers talking around tables or coding in dark corners while PTLs and TC members dash between groups coordinating work.

Expand the visualization so that we can actually see the code flowing between teams as little colored boxes.  Giving each project a unique color allows us to quickly see dependencies between teams.  Some features are piled up waiting for review inside teams, while others sit on pallets between projects, waiting on cross-project features that have not completed.  At release time, we’d be able to see PTLs sorting through stacks of completed boxes to pick which ones were ready to ship.

Watching a factory floor from above is a humbling experience and a key feature of systems thinking enlightenment in both The Phoenix Project and The Goal.  It’s very easy to be caught up in a single project (local optimization) and miss the broader system implications of local choices.

There is a large body of work about Lean Process for Manufacturing

You’ve already visualized OpenStack code creation as a manufacturing floor: it’s a small step to accept that we can use the same proven processes for software and physical manufacturing.

As features move between teams (work centers), it becomes obvious that we’ve created a very highly interlocked sequence of component steps needed to deliver product; unfortunately, we have minimal coordination between the owners of the work centers.  If a feature needs a critical resource (think: a programmer) to progress, we rely on that resource to allocate time to the work.  Since that person’s manager may not agree to the priority, we have a conflict between system flow and individual optimization.

That conflict destroys flow in the system.

The #1 lesson from lean manufacturing is that putting individual optimization over system optimization reduces throughput.  Since our product and people managers are often competitors, we need to work doubly hard to address system concerns.  Worse yet, our inventory of work in process and the interdependencies between projects are harder to discern.  Unlike the manufacturing floor, our developers and project leads cannot look down upon it and see the physical work as it progresses from station to station in one single holistic view.  The bottlenecks that throttle the OpenStack workflow are harder to see, but we can find them, as demonstrated later in this post.

Until we can engage the resource owners in balancing system flow, OpenStack’s throughput will decline as we add resources.  This same principle is at play in the famous aphorism: adding developers makes a late project later.

Is there a solution?

There are lessons from Lean Manufacturing that can be applied

  1. Make quality a priority (expand tests from function to integration)
  2. Ensure integration from station to station (prioritize working together over features)
  3. Make sure that owners of work are coordinating (expose hidden influencers)
  4. Find and manage from the bottleneck (classic Lean says find the bottleneck and improve that)
  5. Create and monitor a system view
  6. Have everyone value finished product, not workstation output

Added Subscript: I highly recommend reading Daniel Berrange’s email about this.

by Rob H at September 15, 2014 08:08 PM


Mirantis OpenStack Express 2.0 – Basic Cloud Operations: Adding New Custom Boot Images

Creating and managing boot images for guests is a frequent, routine task for cloud operators. The core task can range from very simple — finding and loading a trustworthy, prefab image from a known repository — to relatively complicated, where you modify an image with a utility like guestfish or qemu-img, or craft a specific image configuration from a distribution .iso — work mostly performed from the Linux command line. Many documentation resources exist to get you up to speed.

Mirantis OpenStack Express 2.0 is designed to simplify and reduce the time required to import and manage guest images, launch VMs, create and attach volumes, and perform other basic administrative tasks. In this short video, we’ll begin by looking at MOX 2.0’s image features, and show how you can quickly create a new image from a reliable source.



Step by Step

Getting into Mirantis OpenStack Express is simple: just log in — the home screen shows server usage and cluster locations, and provides links and authentication for the Horizon console associated with each of your OpenStack clouds.

MOX 2.0 Dashboard
The Mirantis OpenStack Express 2.0 Dashboard shows your clouds’ location(s) and provides authentication and links into the Horizon user interfaces used to manage them.

OpenStack Express 2.0 comes with several default cloud server images already in place that work with the default QEMU hypervisor. The default images are useful variations on the Ubuntu 14.04 LTS cloud image maintained by Canonical. Most are in the QCOW2 format that QEMU supports. The Xen and KVM hypervisors can also boot VMs from QCOW2 images, as can Oracle VirtualBox and other desktop virtualization frameworks.

MOX 2.0 Dashboard
Mirantis OpenStack Express Horizon UI shows pre-configured Ubuntu 14.04 LTS and other images, ready for convenient use.

It’s also easy to add new cloud server images from .img, .iso, and compressed tar.gz files maintained by Linux providers and communities. These can be retrieved by Horizon via URL and imported into OpenStack Express. The versions linked at OpenStack Documentation – Chapter 2, Get images — should work well with OpenStack Express. Images linked here have been built with cloud-init, a component that enables SSH key and user instance data injection so that instances made with this image can be configured at launch. We’ll see this process in our next blog post on Mirantis OpenStack Express, where we’ll launch an instance from an image.

MOX 2.0 Dashboard
OpenStack documentation offers a chapter on Getting Images, where links to compatible image files can be found.

For our current purpose — importing an image — we’ll use CirrOS, a very light, cloud-oriented Linux distro, useful for testing. We’ll start by right-clicking the URL and copying it. Then we’ll return to Horizon console for our Mirantis OpenStack Express 2.0 cloud and choose Project -> Images -> Create Image. A simple dialog box appears.

MOX 2.0 Dashboard
A simple dialog box lets you configure and import a new image file from a remote target URL.

Name your image, then paste the source URL into the Image Location slot provided. MOX 2.0 Horizon can consume images in .iso, .img, and tar.gz compressed file formats.

MOX 2.0 Dashboard
The import system can handle a range of common image file formats, both uncompressed and compressed.
MOX 2.0 Dashboard
Paste the remote image location URL into the slot provided.

Pick the image hypervisor format from the Format dropdown – In this case, we’re picking QCOW2.

MOX 2.0 Dashboard
A wide range of image formats is supported. In this case, we’re picking QCOW2 — the QEMU Copy-On-Write dynamic format, recommended for use with the QEMU hypervisor.

Identify minimum disk and RAM sizes to let this image run comfortably, click Public availability, then Create Image and let MOX download, store and create your new guest image.

MOX 2.0 Dashboard
Fill in remaining fields with reasonable minimum values for RAM and ephemeral disk space, then click Create Image to begin the import process.
MOX 2.0 Dashboard
Depending on image file size, import and conversion may take a few seconds to a few minutes.

Depending on the size of the source file and download time, this can be very rapid — larger boot images take a couple of minutes to transfer and become available.

MOX 2.0 Dashboard
A successful import concludes, leaving us with a functional Cirros image that we can now use to configure and launch VM instances.

Success! Our image is imported and can now be used to configure and launch VM instances. In our next video, we’ll use this image to configure a new VM instance for SSH access, and launch it.

The post Mirantis OpenStack Express 2.0 – Basic Cloud Operations: Adding New Custom Boot Images appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by John Jainschigg at September 15, 2014 03:36 PM

New features for OpenStack networking, web dashboard improvements, and more

Interested in keeping track of what's happening in the open source cloud? This roundup is your source for what's happening right now in OpenStack, the open source cloud infrastructure project.

by Jason Baker at September 15, 2014 03:00 PM

Dean Troyer

OpenStack Low Level API

The current Python library situation for OpenStack is, sorry to say, a mess. Cleaning it up requires essentially starting over and abstracting the individual REST APIs to usable levels. With OpenStackClient I started from the top and worked down to make the CLI a better experience. I think we have proved that to be a worthwhile task. Now it is time to start from the bottom and work up.

The existing libraries utilize a Manager/Resource model that may be suitable for application work, but every project's client repo was forked and changed so they are all similar but maddeningly different. However, a good idea or two can be easily extracted and re-used in making things as simple as possible.

I originally started with no objects at all and went straight to top-level functions, as seen in the current object.v1.lib APIs in OSC. That required passing around the session and URLs required to complete the REST calls, which OSC already has available, but it is not a good general-purpose API.

I've been through a number of iterations of this and have settled on what is described here: a low-level API for OSC and other applications that do not require an object model.


We start with a BaseAPI object that contains the common operations. It is pretty obvious there are only a couple of ways to get a list of resources from OpenStack APIs so the bulk of that and similar actions are here.

It is also very convenient to carry around a couple of other objects so they do not have to be passed in every call. BaseAPI contains a session, service type and endpoint for each instance. The session is a requests.session.Session-compatible object. In this implementation we are using the keystoneclient.session.Session which is close enough. We use the ksc Session to take advantage of keystoneclient's authentication plugins.

The service type and endpoint attributes are specific to each API. service type is as it is used in the Service Catalog, i.e. Compute, Identity, etc. endpoint is the base URL extracted from the service catalog and is prepended to the passed URL strings in the API method calls.

Most of the methods in BaseAPI are also meant as foundational building blocks for the service APIs. As such they have a pretty flexible list of arguments, many of them accepting a session to override the base session. This layer is also where the JSON decoding takes place; these methods all return a Python list or dict.

The derived classes from BaseAPI will contain all of the methods used to access their respective REST API. Some of these will grow quite large...


While this is a port of the existing code from OpenStackClient, object_store.APIv1 is still essentially a greenfield implementation of the Object-Store API. All of the path manipulation, save for prepending the base URL, is done at this layer.


This is one of the big ones. At this point, only flavor_list(), flavor_show() and key_list() have been implemented in compute.APIv2.

Unlike the object-store API, the rest of the OpenStack services return resources wrapped in a top-level dict keyed with the base name of the resource. This layer removes that wrapper so the returned values are all directly lists or dicts. This eliminates the variations in server implementations where some wrap each list object individually and some wrap the entire list once, as well as Keystone's tendency to insert an additional values key into the return.


The naming of identity_v2.APIv2 and identity_v3.APIv3 is a bit repetitive but putting the version into the module name lets us break down the already-long files.

At this point, only project_list() is implemented in an effort to work out the mechanics of supporting multiple API versions. In OSC, this is already handled in the ClientManager and individual client classes so there is not much to see here. It may be different otherwise.

OSC Usage

To demonstrate how this API is used, I've added a BaseAPI instance to the existing client objects that get stored in the ClientManager. For example, the addition for compute.client is one object instantiation and an import. Now in OSC, clientmanager.compute.api has all of the (implemented) Compute API methods.

Using it in the flavor commands is a simple change to call compute.api methods rather than the compute.flavor.XXX methods.

Setting up for multiple API versions took a bit more work, as shown in identity.client. A parallel construction to the client class lookup is required, and would totally replace the existing version lookup once the old client is no longer required.


One other cool feature is utilizing requests_mock for testing from the start. It works great and does not have the problems that rode along with httpretty.

Now What?

Many object models could be built on top of this API design. The API object hierarchy harkens back to the original client lib Manager classes, except that they encompass an entire REST API and not one for each resource type.

But You Said 'Sanity' Earlier!

Sanity in terms of coalescing the distinct APIs into something a bit more common? Yes. However, this isn't going to fix everything, just some of the little things that application developers really shouldn't have to worry about. I want the project REST API docs to be usable, with maybe a couple of notes for the differences.

For example, OSC and this implementation both use the word project in place of tenant. Everywhere. Even where the underlying API uses tenant. This is an easy change for a developer to remember. I think.

Also, smoothing out the returned data structures to not include the resource wrappers is an easy one.

Duplicating Work?

"Doesn't this duplicate what is already being done in the OpenStack Python SDK?"

Really, no. This is meant to be the low-level SDK API that the Resource model can utilize to provide the back-end to its object model. Honestly, most applications are going to want to use the Resource model, or an even higher API that makes easy things really easy, and hard things not-so-hard, as long as you buy in to the assumptions baked in to the implementation.

Sort of like OS X or iOS: simple to use, as long as you don't want to do anything different. Maybe we should call that top-most API iOSAPI?

September 15, 2014 02:15 PM

Andrew Hutchings

Speaking about libAttachSQL at Percona Live London

As many of you know I'm actively developing libAttachSQL and am rapidly heading towards the first beta release.  For those who don't, libAttachSQL is a lightweight C connector for MySQL servers with a non-blocking API.  I am developing it as part of my day job for HP's Advanced Technology Group.  It was in-part born out of my frustration when dealing with MySQL and eventlet in Python back when I was working on various Openstack projects.  But there are many reasons why this is a good thing for C/C++ applications as well.

What you may not know is I will be giving a talk about libAttachSQL, the technology behind it and the decisions we made to get here at Percona Live London.  The event is on the 3rd and 4th of November at the Millennium Gloucester Conference Centre.  I highly recommend attending if you wish to find out more about libAttachSQL or any of the new things going on in the MySQL world.

As for the project itself, I'm currently working on the prepared statement code which I hope to have ready in the next few days.  0.4.0 will certainly be a big release in terms of changes.  There has been feedback from some big companies which is awesome to hear and I have fixed a few problems they have found for 0.4.0.  Hopefully you will be hearing more about that in the future.

For anyone there I'll be in London from the 2nd to the 5th of November and am happy to meet with anyone and chat about the work we are doing.

by LinuxJedi at September 15, 2014 09:31 AM

September 14, 2014

Jamie Lennox

How to use keystoneclient Sessions

In the last post I did on keystoneclient sessions there was a lot of hand waving about how they should work, as the code was not merged yet. Standardizing clients has received some more attention again recently - and now that the sessions are more mature and ready it seems like a good opportunity to explain them and how to use them again.

For those of you new to this area the clients have grown very organically, generally forking off some existing client and adding and removing features in ways that worked for that project. Whilst this is in general a problem for user experience (try to get one token and use it with multiple clients without reauthenticating) it is a nightmare for security fixes and new features as they need to be applied individually across each client.

Sessions are an attempt to extract a common authentication and communication layer from the existing clients so that we can handle transport security once, and keystone and deployments can add new authentication mechanisms without having to do it for every client.

The Basics

Sessions and authentications are user facing objects that you create and pass to a client, they are public objects not a framework for the existing clients. They require a change in how you instantiate clients.

The first step is to create an authentication plugin, currently the available plugins are:

  • keystoneclient.auth.identity.v2.Password
  • keystoneclient.auth.identity.v2.Token
  • keystoneclient.auth.identity.v3.Password
  • keystoneclient.auth.identity.v3.Token
  • keystoneclient.auth.token_endpoint.Token

For the primary user/password and token authentication mechanisms that keystone supports in v2 and v3 and for the test case where you know the endpoint and token in advance. The parameters will vary depending upon what is required to authenticate with each.

Plugins don’t need to live in the keystoneclient, we are currently in the process of setting up a new repository for kerberos authentication so that it will be an optional dependency. There are also some plugins living in the contrib section of keystoneclient for federation that will also likely be moved to a new repository soon.

You can then create a session with that plugin.

from keystoneclient import session as ksc_session
from keystoneclient.auth.identity import v3
from keystoneclient.v3 import client as keystone_v3
from novaclient.v1_1 import client as nova_v2

# NOTE: the argument values below are placeholders; the original post's
# values were lost, so substitute your own endpoint and credentials.
auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                   username='demo', password='secret',
                   project_name='demo',
                   user_domain_name='default',
                   project_domain_name='default')

session = ksc_session.Session(auth=auth)

keystone = keystone_v3.Client(session=session)
nova = nova_v2.Client(session=session)

Keystone and nova clients will now share an authentication token fetched with keystone’s v3 authentication. The clients will authenticate on the first request and will re-authenticate automatically when the token expires.

This is a fundamental shift from the existing clients, which would authenticate internally and on creation, so by opting to use sessions you are acknowledging that some methods won’t work like they used to. For example, keystoneclient had an authenticate() function that would save the details of the authentication (user_id etc.) on the client object. This process is no longer controlled by keystoneclient, so the function should not be used; however, it also cannot be removed, because we need to remain backwards compatible with existing client code.

In converting the existing clients, we consider that passing a session means you are acknowledging that you are using new code and opting in to the new behaviour. This will not affect the 90% of users who just make calls to the APIs; however, if you have hacks in place to share tokens between the existing clients, or you overwrite variables on the clients to force different behaviours, then these will probably break.

Per-Client Authentication

The above flow is useful for users who want their one token shared between one or more clients. If you are an application that uses many authentication plugins (e.g. heat or horizon), you may want to take advantage of a single session’s connection pooling or caching whilst juggling multiple authentications. You can therefore create a session without an authentication plugin and specify the plugin that will be used with that client instance, for example:

global SESSION

if not SESSION:
    SESSION = ksc_session.Session()

auth = get_auth_plugin()  # you could deserialize it from a db,
                          # fetch it based on a cookie value...
keystone = keystone_v3.Client(session=SESSION, auth=auth)

Auth plugins set on the client will override any auth plugin set on the session - but I’d recommend you pick one method based on your application’s needs and stick with it.
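That precedence rule amounts to something like this (a trivial sketch; the function name is hypothetical, not keystoneclient API):

```python
def resolve_auth(session_auth, client_auth=None):
    """A plugin passed to the client wins over the session-wide default."""
    return client_auth if client_auth is not None else session_auth

# Client-level plugin overrides the session's:
assert resolve_auth("session-plugin", "client-plugin") == "client-plugin"
# With no client-level plugin, the session's default is used:
assert resolve_auth("session-plugin") == "session-plugin"
```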

Loading from a config file

There is support for loading session and authentication plugins from an oslo.config CONF object. The documentation on exactly what options are supported is lacking right now, and you will probably need to look at the code to figure out everything that is supported. I promise to improve this, but to get you started you need to register the options globally:

group = 'keystoneclient'  # the option group
keystoneclient.session.Session.register_conf_options(CONF, group)
keystoneclient.auth.register_conf_options(CONF, group)

And then load the objects where you need them:

auth = keystoneclient.auth.load_from_conf_options(CONF, group)
session = ksc_session.Session.load_from_conf_options(CONF, group, auth=auth)
keystone = keystone_v3.Client(session=session)

This will load options that look like:

cacert = /path/to/ca.cert
auth_plugin = v3password
username = user
password = password
project_name = demo
project_domain_name = default
user_domain_name = default

There is also support for transitioning existing code bases to new option names if they are not the same as what your application uses.

Loading from CLI

A very similar process is used to load sessions and plugins from an argparse parser.

parser = argparse.ArgumentParser('test')

argv = sys.argv[1:]

keystoneclient.auth.register_argparse_arguments(parser, argv)

args = parser.parse_args(argv)

auth = keystoneclient.auth.load_from_argparse_arguments(args)
session = keystoneclient.session.Session.load_from_cli_options(args,
                                                               auth=auth)

This produces an application with the following options:

python --os-auth-plugin v3password
usage: test [-h] [--insecure] [--os-cacert <ca-certificate>]
            [--os-cert <certificate>] [--os-key <key>] [--timeout <seconds>]
            [--os-auth-plugin <name>] [--os-auth-url OS_AUTH_URL]
            [--os-domain-id OS_DOMAIN_ID] [--os-domain-name OS_DOMAIN_NAME]
            [--os-project-id OS_PROJECT_ID]
            [--os-project-name OS_PROJECT_NAME]
            [--os-project-domain-id OS_PROJECT_DOMAIN_ID]
            [--os-project-domain-name OS_PROJECT_DOMAIN_NAME]
            [--os-trust-id OS_TRUST_ID] [--os-user-id OS_USER_ID]
            [--os-user-name OS_USERNAME]
            [--os-user-domain-id OS_USER_DOMAIN_ID]
            [--os-user-domain-name OS_USER_DOMAIN_NAME]
            [--os-password OS_PASSWORD]
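The option namespace the keystoneclient helpers produce can be mimicked with plain argparse; this sketch registers only a few of the options above to show how the parsed names come out:

```python
import argparse

parser = argparse.ArgumentParser("test")
# The real keystoneclient helpers register these for you.
parser.add_argument("--os-auth-plugin")
parser.add_argument("--os-auth-url")
parser.add_argument("--os-username")
parser.add_argument("--os-password")

args = parser.parse_args(["--os-auth-plugin", "v3password",
                          "--os-username", "user"])
# argparse converts the dashes to underscores in the namespace:
assert args.os_auth_plugin == "v3password"
assert args.os_username == "user"
```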

There is an ongoing effort to create a standardized CLI plugin that can be used by new clients, rather than have people provide an --os-auth-plugin every time. It is not yet ready; however, clients can create and specify their own default plugins if --os-auth-plugin is not provided.

For Client Authors

To make use of the session in your client there is keystoneclient.adapter.Adapter, which provides a set of standard parameters that your client should accept and pass through to the session. The adapter handles the per-client authentication plugin, as well as region_name, interface, user_agent and similar client parameters that are not part of the more global (shared across many clients) state that sessions hold.

The basic client should look like:

class MyClient(object):

    def __init__(self, **kwargs):
        kwargs.setdefault('user_agent', 'python-myclient')
        kwargs.setdefault('service_type', 'my')
        self.http = keystoneclient.adapter.Adapter(**kwargs)

The adapter then has .get() and .post() and other http methods that the clients expect.
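A toy model of what the adapter buys a client author (hypothetical classes, not keystoneclient internals): it pins client-specific defaults such as service_type and user_agent onto every request made through the shared session.

```python
class RecordingSession:
    """Stand-in session that just records what was requested."""
    def __init__(self):
        self.calls = []

    def request(self, method, url, **kwargs):
        self.calls.append((method, url, kwargs))
        return "response"

class FakeAdapter:
    def __init__(self, session, service_type=None, user_agent=None):
        self.session = session
        self.service_type = service_type
        self.user_agent = user_agent

    def get(self, url, **kwargs):
        # Fill in the client-level defaults before handing off to the session.
        kwargs.setdefault("service_type", self.service_type)
        kwargs.setdefault("user_agent", self.user_agent)
        return self.session.request("GET", url, **kwargs)

session = RecordingSession()
http = FakeAdapter(session, service_type="my", user_agent="python-myclient")
http.get("/v1/things")
method, url, kwargs = session.calls[0]
assert kwargs["service_type"] == "my"
assert kwargs["user_agent"] == "python-myclient"
```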


It’s great to have renewed interest in standardizing client behaviour, and I’m thrilled to see better session adoption. The code has matured to the point it is usable and simplifies use for both users and client authors.

In writing this I kept wanting to link out to official documentation and realized just how lacking it really is. Some explanation is available on the official python-keystoneclient docs pages, and there is also module documentation; however, this is definitely an area in which we (read: I) are a long way behind.

September 14, 2014 11:13 PM

Ana Malagon

Miscellaneous Resources on Gnocchi

Like the title says, this post is a collection of links to resources that I found helpful for learning about Gnocchi.

  • Julien Danjou’s blog post about Gnocchi is the most recent of the links here and an excellent summary of all things Gnocchi. It explains the limitations of the previous API in Ceilometer and describes how the current implementation (using Pandas/Swift) was chosen. I found the diagrams useful for understanding the relationship between entities and resources. The post expands on the information given in the wiki, also a good reference.

  • Julien also did a walkthrough of the Gnocchi source code: very helpful for navigating the different parts of the project.

  • Gnocchi specs (I referred to this primarily for examples of the HTTP request syntax). If you look through the history, you can see the shift in approach to retention policies – the timeseries length goes from being defined in terms of number of points to being defined in terms of a lifespan.

  • Eoghan Glynn’s thread on the Openstack mailing lists clearly outlines the differences between Gnocchi and timeseries-oriented databases (like InfluxDB). It goes over the features of InfluxDB that make it a good backend option for Gnocchi though, such as the downsampling and retention policies.

  • The Gnocchi repository.

Finally, I am also adding some old notes I made when first trying to understand the Gnocchi source code. I found it helpful to visually map out the filesystem structure (shown below) with comments on the different functional parts underneath. The diagram is not up to date and shows only the sections I was most familiar with. Although the comments below are rather simplistic and a bit embarrassing in retrospect, I thought I would post this in case a future OPW intern wants to know how other people approached their projects.

|--- indexer/
|   |---
|   |---
|   |---
|--- openstack/
|--- rest/
|   |---
|   |---
|--- storage/
|   |---
|   |---
|   |---
|--- tests/
|   |---
|   |---
|   |---
|   |---
|   |---
|--- gnocchi.egg-info
|--- openstack-common.conf
|--- README.rst
|--- requirements.txt
|--- setup.cfg
  • is the library that manipulates the timeseries. It is used in conjunction with the Swift storage driver.

  • rest/ – I spent a lot of time here, mostly in the EntityController get_measures section of the __init__.py file, which has the code for evaluating HTTP requests to the API. The file uses pecan magic to create hooks for the configured storage drivers and indexers. It also sets the default port for the API.

  • storage/ – also looked at these files a lot when starting out. The driver-specific code is contained here. Eoghan Glynn added support for InfluxDB in this patch; Dina Belova added OpenTSDB support. As it was desirable to make the custom aggregation functions driver-independent, no changes were made to these files. However, it was useful to look at the InfluxDB patch to get a sense of the way aggregation was implemented using that backend.

  • tests/ – The second most helpful thing in understanding the code (other than actually running the API and seeing query results, as well as logging intermediate steps in debug mode) was to look at the test cases.

  • README.rst – I think this was my first contribution. Eoghan suggested it as a first step and it was a great way for getting my feet wet.

  • setup.cfg – modify this file to add entry points for custom aggregation functions, storage drivers, and indexer options.

That’s all I have for comments on the Gnocchi source code – thanks for reading!

September 14, 2014 05:33 PM

September 13, 2014

Sean Dague

My IRC proxy setup

IRC (Internet Relay Chat) is a pretty important communication medium for a lot of Open Source projects nowadays. While email is universal and lives forever, IRC is the equivalent of the hallway chat you'd have with a coworker to bounce ideas around. IRC has the advantage of being a reasonably simple and open (and old) protocol, so writing things that interface with it is about as easy as email clients. But, it has a pretty substantial drawback: you only get messages when you are connected to the channels in question.

Again, because it's an open protocol, this is actually a solvable problem: run a piece of software on an always-on system somewhere that remains connected for you. There are two schools of thought here:

  • Run a text IRC client in screen or tmux on a system, and reconnect to the terminal session when you come in. WeeChat falls into this camp.
  • Run an irc proxy on a server, and have your IRC client connect to the proxy which replays all the traffic since the last time you were connected. Bip, ZNC, and a bunch of others fall into this camp.

I'm in Camp #2, because I find my reading comprehension of fixed-width fonts is far lower than of variable-width ones. So I need my IRC client to be in a variable-width font, which means console solutions aren't going to help me.


ZNC is my current proxy of choice. I've tried a few others, and dumped them for reasons I don't entirely remember at this point. So ZNC it is.

I have a long-standing VPS with Linode to host a few community websites. For something like ZNC you don't need much horsepower, and you could use cloud instances anywhere. If you are running Debian or Ubuntu on this cloud instance, apt-get install znc gets you rolling.

Run ZNC from the command line and you'll get something like this:

[screenshot: znc failing on first run]

That's because the first time up it needs to create a base configuration. Fortunately, it's pretty straightforward what that needs to be.

znc --makeconf takes you through a pretty interactive configuration screen to build a base configuration. The defaults are mostly fine. The only thing to keep in mind is what port you make ZNC listen on, as you'll have to remember to punch that port open on the firewall/security group for your cloud instance.

I also find the default of 50 lines of scrollback to be massively insufficient. I usually bounce that to 5000 or 10000.

Now connect your client to the server and off you go. If you have other issues with basic ZNC configuration, I'd suggest checking out the project website.

ZNC as a service

The one place ZNC kind of falls down is that out of the box (at least on Ubuntu) it doesn't have init scripts. Part of this is because the configuration file is very user specific and, as we saw with the interactive mode, it is designed around asking you a bunch of questions. That means if your cloud instance reboots, your ZNC doesn't come back.

I fixed this particular shortcoming with Monit. Monit is a program that monitors other programs on your system and starts or restarts them if they have faulted out. You can apt-get install it on debian/ubuntu.

Here is my base znc monit script:

[snippet: znc monit configuration]

Because znc doesn't do pid files right, this just matches on a process name. It has a start command (which includes the user/group for running it), a stop command, and some out-of-bounds criteria, all in a nice little DSL.

All that above will get you a basic ZNC server running, surviving cloud instance reboots, and make sure you never miss a minute of IRC.

But... what if we want to go further.


The idea for this comes from Dan Smith, so full credit where it is due.

If you regularly connect to IRC from more than one computer, but only have one ZNC proxy set up, the issue is that the scrollback gets replayed to the first computer that connects to the proxy. So jumping between computers to have conversations ends up being a very fragmented experience.

ZNC presents as just an IRC Server to your client. So you can layer ZNC on top of ZNC to create independent scrollback buffers for every client device. My setup looks something like this:


Which means that all devices have all the context for IRC, but I'm only presented as a single user on the freenode network.

Going down this path requires a bit more effort, which is why I've got the whole thing automated with puppet: znc-puppet.tar. You'll probably need to do a little bit of futzing with it to make it work for your puppet managed servers (you do puppet all your systems, right?), but hopefully this provides a good starting point.

IRC on Mobile

Honestly, the Android IRC experience is... lacking. Most of the applications out there that do IRC on Android provide an experience which is very much a desktop experience, which works poorly on a small phone.

Monty Taylor pointed me at IRCCloud which is a service that provides a lot of the same offline connectivity as the ZNC stack provides. They have a webui, and an android app, which actually provides a really great mobile experience. So if Mobile is a primary end point for you, it's probably worth checking out.

IRC optimizations for the Desktop

In the one last thing category, I should share the last piece of glue that I created.

I work from home, with a dedicated home office in the house. Most days I'm working on my desktop. I like to have IRC make sounds when my nick hits, mostly so that I have some awareness that someone wants to talk to me. I rarely flip to IRC at that time, it just registers as a "will get to it later" so I can largely keep my concentration wherever I'm at.

That being said, OpenStack is a 24-hour-a-day project. People ping me in the middle of the night. And if I'm not at my computer, I don't want it making noise. Ideally I'd even like them to see me as 'away' in IRC.

Fortunately, most desktop software in Linux integrates with a common messaging bus: dbus. The screensaver in Ubuntu emits a signal on lock and unlock. So I created a custom script that mutes audio on screen lock, unmutes it on screen unlock, as well as sends 'AWAY' and 'BACK' commands to xchat for those state transitions.

You can find the script as a gist.

So... this was probably a lot to take in. However, hopefully getting an idea of what an advanced IRC workflow looks like will give folks ideas. As always, I'm interested in hearing about other things people have done. Please leave a comment if you've got an interesting productivity hack around IRC.

by Sean Dague at September 13, 2014 02:09 PM

September 12, 2014

OpenStack Blog

OpenStack Community Weekly Newsletter (Sep 5 – 12)

Dox a tool that run python (or others) tests in a docker container

What if there was a tool that let you use Docker containers to do automatic testing for OpenStack? The idea of dox is to behave somewhat like the tox tool, but to run the tests in Docker containers instead.

What’s Coming in OpenStack Networking for Juno Release

As the Juno development cycle ramps up, now is a good time to review some of the key changes we saw in Neutron during this exciting cycle and have a look at what is coming up in the next upstream major release which is set to debut in October.

Horizon’s new features introduced in Juno cycle

Matthias Runge gives an overview on what happened during Horizon’s Juno development cycle. Horizon’s blueprints page on launchpad lists 31 implemented new features which may be grouped in sub-topics: Sahara-Dashboard, RBAC, JavaScript unbundling, look and feel improvements and more. If you’re curious about what’s coming, read the full post.

The Road To Paris 2014 – Deadlines and Resources

During the Paris Summit there will be a working session for the Women of OpenStack to frame up more defined goals and line out a blueprint for the group moving forward. We encourage all women in the community to complete this very short survey to provide input for the group.

Relevant Conversations

Tips ‘n Tricks

Security Advisories and Notices

Upcoming Events

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

Welcome New Reviewers, Developers and Core Reviewers

Welcome Dina Belova to ceilometer-core

Robb Romans Jim West
Rob Cresswell Huai Jiang
Martin André Abhishek Asthana
Tony Campbell Zura Isakadze
Srinivas Sakhamuri Robb Romans
Isaias Jeremy Moffitt
Stig Telfer Eduard Biceri-Matei
Sarvesh Ranjan Tom Barron
Hongbin Lu Szymon Wróblewski
Timothy Okwii Saksham Varma
Thomas Järvstrand Mike Fedosin
Kyle Stevenson
Komei Shimamura
Dave Chen
Aidan McGinley

OpenStack Reactions


Trying to follow some of the summit talks after a party the night before

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at September 12, 2014 11:49 PM

IBM OpenStack Team

Five reasons why Ubuntu Server is an emerging force in cloud business

While recently reading through a few articles on the web about cloud computing, I started to reflect on why many businesses are now choosing Ubuntu Server to deploy cloud environments. Some companies are also recognizing its potential as a solid option for daily purpose servers (including FTP servers, Apache or Nginx web servers, mail servers, domain name servers, firewalls and so forth).

There are plenty of reasons to mention, and in this post I will go over five features that I consider the most attractive when deciding what to use for deployments.

 1. Ubuntu Server offers Long Term Support

Every two years, Ubuntu Server is released in a Long Term Support (LTS) format. What does this mean? LTS provides users with updates from official repositories for five years after a release date (the latest is version 14.04, released in April). This is very important because it gives users peace of mind that they will be protected if any kind of vulnerability is discovered in current or upcoming software versions.

 2. Canonical offers several management and support tools for Ubuntu Server

Anyone who is familiar with this Debian-based distribution has heard of Juju or Metal as a Service (MAAS). Juju is an orchestrator that helps you manage and maintain your environments. It works for OpenStack deployments as well as apps, services and scalability in general. MAAS is another tool that brings the cloud computing world to bare metal servers and makes it easy to scale physical machines—as easy as asking MAAS to deploy another instance of a cluster with certain hardware specs. For those of you who are familiar with platform as a service (PaaS) providers like Cloud Foundry, this follows the same concept as Bosh, or well-known automation tools in the market like Chef or Puppet.

 3. Ubuntu Server is highly compatible

Ubuntu Server is a public cloud certified operating system for most infrastructure as a service (IaaS) providers, including IBM SoftLayer. This is important when you think about the future. For example, if you’re considering a potential migration from an IaaS provider, you don’t need to worry whether Ubuntu Server will be provided or not.

Also, Ubuntu is certified with the most important hardware vendors in the market. Earlier this year, IBM committed to support Ubuntu Server on POWER8 servers. This was a big step for IBM and Canonical, considering the future impact this distribution has in the cloud market alongside OpenStack and IBM PowerKVM. Here is a nice demo of Ubuntu Server running on POWER8 using Juju, delivered by Canonical founder Mark Shuttleworth and IBM general manager Doug Balog during a keynote at IBM Impact.

 4. Ubuntu Server is offered at no cost

You can download the image from the Canonical portal and have access to their repositories at no charge; a paid subscription for support is optional. For IaaS sites like SoftLayer, there is no charge for hourly and monthly options for cloud computing instances (CCIs).


What is included with the paid optional support? Ubuntu Advantage provides access to Ubuntu experts in areas like OpenStack for problem resolution, and access to Landscape, a very nice tool to manage updates for your servers. It also allows you to have all your devices in one graphical console for monitoring.

 5. Ubuntu is the leading technology for OpenStack deployments

As Mark Shuttleworth is credited with noting in this article on ZDNet, Ubuntu accounts for more than 50 percent of operating system deployments. This has garnered a lot of attention in the cloud era. Canonical has its own image, Ubuntu Cloud, that includes the latest version of OpenStack (14.04 LTS with OpenStack Icehouse), and their releases are synchronized, so you always get the latest versions together.

What is the best match for your infrastructure and your budget? Though only you can answer that question, I have just provided five reasons why Ubuntu Server has earned fans and should be considered.

Get in touch with me on Twitter @ovegarod or leave a comment below to let me know if this was interesting and helpful. In the future, I may discuss more Linux distributions like Red Hat and SUSE.

The post Five reasons why Ubuntu Server is an emerging force in cloud business appeared first on Thoughts on Cloud.

by Oscar Vega Rodriguez at September 12, 2014 08:13 PM


OpenStack Live 2015: Call for speakers open through November 9


I am proud to announce OpenStack Live, a new annual conference that will run in parallel with the Percona Live MySQL Conference & Expo at the Santa Clara Convention Center in Silicon Valley. The inaugural event, OpenStack Live 2015, is April 13-14, 2015. We are lining up a strong Conference Committee and are now accepting tutorial and breakout session speaking proposals through November 9.

OpenStack Live will emphasize the essential elements of making OpenStack work better with emphasis on the critical role of MySQL and the value of Trove. You’ll hear about the hottest current topics, learn about operating a high-performing OpenStack deployment, and listen to top industry leaders describe the future of the OpenStack ecosystem. We are seeking speaking proposals on the following topics:

  • Performance Optimization of OpenStack
  • OpenStack Operations
  • OpenStack Trove
  • Replication and Backup for OpenStack
  • High Availability for OpenStack
  • OpenStack User Stories
  • Monitoring and Tools for OpenStack

The conference features a full day of keynotes, breakout sessions, and Birds of a Feather sessions on April 14, preceded by an optional day of tutorials on April 13. A Monday reception will be held on the exhibit floor, and joint lunches offer you the opportunity to network with both the OpenStack and MySQL communities. The OpenStack Live conference is a great event for users of any level.

As a bonus, OpenStack Live attendees may attend any Percona Live MySQL Conference session during the days of the OpenStack event. Conference only passes are available for April 14 and conference and tutorial passes are available for both April 13 and 14.

If you are using OpenStack and have a story to share – or a skill to teach – then now is the time to put pen to paper (or fingers to keyboard) and write your speaking proposal for either breakout or tutorial sessions (or both). Submissions will be reviewed by the OpenStack Live Conference Committee, which includes:

  • Mark Atwood: Director – Open Source Evangelism for HP Cloud Services
  • Rich Bowen: OpenStack Community Liaison at Red Hat
  • Jason Rouault: Senior Director OpenStack Cloud at Time Warner Cable
  • Peter Boros: Principal Architect at Percona

Presenting at OpenStack Live 2015 is your chance to put your ideas, case studies, best practices and technical knowledge in front of an intelligent, engaged audience of OpenStack users. If selected as a speaker by our Conference Committee, you will receive a complimentary full conference pass.

Public speaking not your thing, or just want to learn about the latest and greatest OpenStack technologies, deployments and projects? Then register now and save big with our early bird discount. OpenStack Live 2015 is an ideal opportunity for organizations to connect with the community of OpenStack enthusiasts from Silicon Valley and around the world. The Percona Live MySQL Conference this past April had over 1,100 registered attendees from 40 countries, and the OpenStack Open Source Appreciation Day on the Monday before the conference was fully booked, so don't delay: register today to save your seat!

We are currently accepting sponsors. You can learn more about sponsorship opportunities here.

I hope to see you at OpenStack Live 2015 next April! And speakers, remember the deadline to submit your proposals is November 9. In the meantime you can learn more by visiting the official OpenStack Live 2015 website.

The post OpenStack Live 2015: Call for speakers open through November 9 appeared first on MySQL Performance Blog.

by Terry Erisman at September 12, 2014 05:32 PM

Rich Bowen

RDO on CentOS 7

With CentOS 7 now available, I quickly put it on my OpenStack demo laptop, and started installing RDO. It mostly just worked, but there were a few roadblocks to circumvent.

As usual, I followed the RDO Quickstart, so I won’t duplicate those steps here in detail, but it goes like this:

sudo yum update -y && sudo yum install -y && sudo yum install -y openstack-packstack && packstack --allinone

Comparison of string with 7 failed

The first problem occurs pretty quickly, in prescript.pp, with the following error message:

Comparison of String with 7 failed

This is due to the change in CentOS versioning scheme – the latest release of CentOS is version 7.0.1406, which is not a number. The script in question assumes that the version number is a number, and does a numerical comparison:

if $::operatingsystem in $el_releases and $::operatingsystemrelease < 7 {

This fails, because $::operatingsystemrelease is a string, not a number.

The solution here is to edit the file /usr/lib/python2.7/site-packages/packstack/puppet/templates/prescript.pp and replace the variable $::operatingsystemrelease with $::operatingsystemmajrelease around line 15.

While you’re at it, do this for every file in that directory, where $operatingsystemrelease is compared to 7.
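The type mismatch is easy to reproduce outside of Puppet. A small Python sketch of the failing comparison and the major-release fix: CentOS 7 reports its release as the string "7.0.1406", and comparing that to the number 7 is not meaningful, whereas extracting the major version first (what switching to $::operatingsystemmajrelease does) gives a clean numeric comparison.

```python
release = "7.0.1406"  # CentOS 7 reports this, not the number 7

comparison_failed = False
try:
    release < 7  # mixing str and int, just like prescript.pp mixes types
except TypeError:
    comparison_failed = True

# The fix: compare on the major component only.
major = int(release.split(".")[0])
assert comparison_failed
assert major == 7
assert not (major < 7)  # the version guard now evaluates cleanly
```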

See for more detail, and to track when this is fixed.

mysql vs mariadb

The second problem, I’m not sure I understand just yet. The symptom is that mysql.pp fails with

Error: Could not enable mysqld:

To skip to the end of the story, this appears to be related to the switch from mysql to mariadb about a year ago, finally catching up with CentOS. The related bug is at

The workaround that I used was:

# rm /usr/lib/systemd/system/mysqld.service 
# cp /usr/lib/systemd/system/mariadb.service /usr/lib/systemd/system/mysqld.service
# systemctl stop mariadb
# pkill mysql
# rm -f /var/lib/mysql/mysql.sock

Then run packstack again with the generated answer file from the last time.

However, elsewhere in the thread, we were assured that this shouldn’t be necessary, so YMMV. See for further discussion.

That’s all, folks

After those two workarounds, packstack completed successfully, and I have a working allinone install.

Hope this was helpful to someone.

UPDATE: The next time through, I encountered

The workaround is to replace contents of /etc/redhat-release with “Fedora release 20 (Heisenbug)” and rerun packstack.

Turns out that this also fixes the mysql/mariadb problem above without having to go through the more complicated process.

by rbowen at September 12, 2014 03:45 PM

Tesora Corp

Blog #2: Replication and Clustering - Implementation Details

In the previous blog post we described the replication feature of Trove. In this post we'll describe the implementation of the client and the task manager in detail. The user is able to issue the various replication-related commands using the trove client (python-troveclient). In particular, these commands are detach_replication and extensions to the create and show commands. These commands and their outputs were described in the previous blog post.

The python-troveclient depends on some new and modified APIs. The creation of the master in a replicated pair is just as it is today. The command to create a slave extends the current create command as shown below. The request to create a slave identifies the master instance. In the request below, the reference to the master is provided in the slaveOf attribute (highlighted).

POST /instances
{
  "instance": {
    "name": "products-s1",
    "datastore": {
      "type": "mysql",
      "version": "5.5"
    },
    "slaveOf": "dfbbd9ca-b5e1-4028-adb7-f78643e17998",
    "configuration": "b9c8a3f8-7ace-4aea-9908-7b555586d7b6",
    "flavorRef": "7",
    "volume": {
      "size": 1
    }
  }
}

The response to this is also an extension of the current response to the create API call. As shown below, the response identifies the master and the slave.

POST /instances
{
  "instance": {
    "status": "BUILD",
    "id": "061aaf4c-3a57-411e-9df9-2d0f813db859",
    "name": "products-s1",
    "created": "...",
    "updated": "...",
    "links": [{...}],
    "datastore": {
      "type": "mysql",
      "version": "5.5"
    },
    "slaveOf": {
      "id": "dfbbd9ca-b5e1-4028-adb7-f78643e17998"
    },
    "configuration": {
      "id": "b9c8a3f8-7ace-4aea-9908-7b555586d7b6",
      "links": [{...}]
    },
    "flavor": {
      "id": "7",
      "links": [{...}]
    },
    "volume": {
      "size": 1
    }
  }
}

Once a replicated pair is created, the client command to detach_replication results in an API call as shown below.

POST /instances/{id}/action
{
    "detach_replication": {}
}

Observe that unlike the create API calls, which are POSTs to the /instances endpoint, the detach_replication call is POSTed to the specific instance endpoint /instances/{id}/action.

As shown above, this change requires that the Trove Taskmanager implement these API calls: first, the implementation of the detach_replication() API call, and then the change to the create_instance() API call to handle the slaveOf argument.

We now describe the changes to the GuestAgent in detail. Replication in Trove is based on snapshots. We have designed this in such a way that it is a feature that is easily extensible to other data stores. The GuestAgent API will have four new methods:

  • get_replication_snapshot()
  • attach_replication_slave()
  • detach_replication_slave()
  • demote_replication_master()

It will be up to the guest agent for each data store to implement these methods. In this way, the contents of the snapshot are entirely shielded from the taskmanager and higher-level components and the guest agent is free to store all the information appropriate to that data store in the snapshot.
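As a sketch, that contract could be expressed as an abstract base class. The four method names are from the post; the class scaffolding and signatures are assumptions.

```python
import abc

class ReplicationGuestAgent(abc.ABC):
    """Per-datastore replication hooks (sketch). Each guest agent decides
    what goes into its snapshot, so higher layers never inspect it."""

    @abc.abstractmethod
    def get_replication_snapshot(self):
        """Create a snapshot and return its opaque description."""

    @abc.abstractmethod
    def attach_replication_slave(self, snapshot):
        """Configure this instance to replicate from the master in snapshot."""

    @abc.abstractmethod
    def detach_replication_slave(self):
        """Stop replicating from the master."""

    @abc.abstractmethod
    def demote_replication_master(self):
        """Undo replication-related configuration on a master."""
```

A MySQL agent would implement these with xtrabackup and binlog handling; a different datastore is free to put entirely different content in its snapshot.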

The get_replication_snapshot() API call will cause a snapshot to be created. The MySQL guest agent will use xtrabackup to create the snapshot and will store into it the binlog position and any network information about the master that is required to set up replication.

{
    "master": {
        "host": "",
        "port": 3306
    },
    "dataset": {
        "datastore": "mysql",
        "datastore_version": "mysql-5.5",
        "dataset_size": 2,
        "snapshot_href": "http://..."
    },
    "binlog_position": <binlog position>
}

The attach_replication_slave() API call causes the master's connection information to be applied on the selected slave instance, which then begins receiving replication updates from the master.

The detach_replication_slave() API call will cause a slave instance to stop replicating from a master. Once this is done, no further updates from a master will be received by this slave and the master will no longer contain any reference to the detached slave.

{
  "topology": {
    "members": [
      {
        "id": "{master-id}",
        "name": "master"
      },
      {
        "id": "{slave2-id}",
        "name": "slave2",
        "mysql": {
          "slave_of": [{"id": "{master-id}"}],
          "read_only": true
        }
      }
    ]
  }
}

Finally, the demote_replication_master() API call will cause the master to return to its pre-replication state. This will turn binary logging off, and any other configuration changes or log files created for the purpose of replication will be removed.

In my next post, I'll review Replication and Clustering: A Look Ahead.

by 10 at September 12, 2014 12:00 PM

Rafael Knuth

Google+ Hangout: Turning OpenStack Swift into a VM storage platform

OpenStack Swift is the Object Storage project within OpenStack. Alas, due to technical hurdles...

September 12, 2014 11:00 AM

OpenStack automation with cloud deployment tools

In the cloud world, the mantra is "automate everything." It's no surprise that as OpenStack expands its scope, automation projects are emerging within it. But, the variety and the sheer number of these projects is still surprising: there are over twenty!

by Dmitri Zimine at September 12, 2014 09:00 AM

September 11, 2014


OpenStack: A MySQL DBA Perspective – Sept. 17 webinar


OpenStack: A MySQL DBA Perspective

I’ll have the pleasure of presenting a webinar titled “OpenStack: A MySQL DBA Perspective” next Wednesday, September 17, at 10 a.m. PDT (1 p.m. EDT). Everyone is invited.

The webinar will be divided into two parts. The first part will cover how MySQL can be used by the OpenStack infrastructure including the expected load, high-availability solutions and geo-DR.

The second part will focus on the use of MySQL within an OpenStack cloud. We’ll look into the various options that are available, both the traditional ones and Trove. We’ll also discuss the block device options with regard to MySQL performance and, finally, we’ll review the high-availability implications of running MySQL in an OpenStack cloud.

Register here. I look forward to your questions, and if you have any related to OpenStack that I can help with in advance of the webinar please feel free to post those in the comments section below. I’ll write a followup post after the webinar to recap all related questions and answers. I’ll also provide the slides.

See you next Wednesday!

The post OpenStack: A MySQL DBA Perspective – Sept. 17 webinar appeared first on MySQL Performance Blog.

by Yves Trudeau at September 11, 2014 08:58 PM


OpenStack users shed light on Percona XtraDB Cluster deadlock issues

I was fortunate to attend an Ops discussion about databases at the OpenStack Summit Atlanta this past May as one of the panelists. The discussion was about deadlock issues OpenStack operators see with Percona XtraDB Cluster (this is, of course, applicable to any Galera-based solution). I asked them to describe what they were seeing, and as it turned out, nova and neutron use the SELECT … FOR UPDATE SQL construct quite heavily. This is a topic I thought was worth writing about.

Write set replication in a nutshell (with oversimplification)

Any node is writable, and replication happens in write sets. A write set is practically a row-based binary log event (or events) plus “some additional stuff.” The “some additional stuff” is good for two things.

  • Two write sets can be compared to determine whether they conflict.
  • A write set can be checked against a database to determine whether it is applicable.

Before committing on the originating node, the write set is transferred to all other nodes in the cluster. The originating node checks that the transaction does not conflict with any of the transactions in the receive queue, and checks that it is applicable to the database. This process is called certification. After the write set is certified, the transaction is committed. The remote nodes perform certification asynchronously compared to the local node; since certification is deterministic, they will reach the same result. For the same reason, the write set can be applied later on the remote nodes. This kind of replication is called virtually synchronous, which means that the data transfer is synchronous, but the actual apply is not.
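The certification step above can be sketched as a toy model, assuming (as a further oversimplification) that a write set is just the set of row keys it modifies; the real "additional stuff" is much richer than this.

```python
# Toy model of Galera-style certification; a write set is modeled as a
# set of row keys. These function names are illustrative, not Galera's.
def conflicts(ws_a, ws_b):
    """Two write sets conflict if they touch a common row."""
    return bool(ws_a & ws_b)

def certify(write_set, receive_queue):
    """Certification: the incoming write set must not conflict with any
    write set still waiting in the receive queue. The check is
    deterministic, so every node reaches the same verdict on its own."""
    return all(not conflicts(write_set, queued) for queued in receive_queue)
```

In this model, a transaction touching row (t, 1) fails certification while another write set touching (t, 1) is still queued, but certifies if the queue only holds writes to other rows.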

We have a nice flowchart about this.

Since the write set is only transferred before commit, InnoDB row level locks, which are held locally, are not held on remote nodes (if these were escalated, each row lock would take a network round trip to acquire). This also means that by default if multiple nodes are used, the ability to read your own writes is not guaranteed. In that case, a certified transaction, which is already committed on the originating node can still sit in the receive queue of the node the application is reading from, waiting to be applied.


The SELECT … FOR UPDATE construct reads the given records in InnoDB, and locks the rows that are read from the index the query used, not only the rows that it returns. Given how write set replication works, the row locks of SELECT … FOR UPDATE are not replicated.

Putting it together

Let’s create a test table.

pxc1> create table t (
        id int not null auto_increment,
        ts timestamp default current_timestamp,
        PRIMARY KEY (`id`)
      );

And some records we can lock.

pxc1> insert into t values();
Query OK, 1 row affected (0.01 sec)
pxc1> insert into t values();
Query OK, 1 row affected (0.01 sec)
pxc1> insert into t values();
Query OK, 1 row affected (0.01 sec)
pxc1> insert into t values();
Query OK, 1 row affected (0.00 sec)
pxc1> insert into t values();
Query OK, 1 row affected (0.01 sec)

pxc1> select * from t;
| id | ts                  |
|  1 | 2014-06-26 21:37:01 |
|  4 | 2014-06-26 21:37:02 |
|  7 | 2014-06-26 21:37:02 |
| 10 | 2014-06-26 21:37:03 |
| 13 | 2014-06-26 21:37:03 |
5 rows in set (0.00 sec)

On the first node, lock the record.

pxc1> start transaction;
Query OK, 0 rows affected (0.00 sec)
pxc1> select * from t where id=1 for update;
| id | ts                  |
|  1 | 2014-06-26 21:37:01 |
1 row in set (0.00 sec)

On the second, update it with an autocommit transaction.

pxc2> update t set ts=now() where id=1;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0
pxc1> select * from t;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

Let’s examine what happened here. The local record lock held by the open transaction on pxc1 didn’t play any part in replication or certification (replication happens at commit time, and there was no commit there yet). Once the node received the write set from pxc2, that write set had a conflict with a transaction still in-flight locally. In this case, our transaction on pxc1 has to be rolled back. This is a type of conflict as well, but here the conflict is not caught at certification time. This is called a brute force abort: it happens when a transaction applied by a slave thread conflicts with a transaction that’s in-flight on the node. In this case the first commit wins (which is the already-replicated one) and the original transaction is aborted. Jay Janssen discusses multi-node writing conflicts in detail in this post.

The same thing happens when two of the nodes are holding record locks via SELECT … FOR UPDATE. Whichever node commits first wins; the other transaction hits the deadlock error and is rolled back. This behavior is correct.

Here is the same SELECT … FOR UPDATE transaction overlapping on the two nodes.

pxc1> start transaction;
Query OK, 0 rows affected (0.00 sec)
pxc2> start transaction;
Query OK, 0 rows affected (0.00 sec)

pxc1> select * from t where id=1 for update;
| id | ts                  |
|  1 | 2014-06-26 21:37:48 |
1 row in set (0.00 sec)
pxc2> select * from t where id=1 for update;
| id | ts                  |
|  1 | 2014-06-26 21:37:48 |
1 row in set (0.00 sec)

pxc1> update t set ts=now() where id=1;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0
pxc2> update t set ts=now() where id=1;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

pxc1> commit;
Query OK, 0 rows affected (0.00 sec)
pxc2> commit;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

Where does this happen in OpenStack?

For example in OpenStack Nova (the compute project in OpenStack), tracking the quota usage uses the SELECT…FOR UPDATE construct.

# User@Host: nova[nova] @  []  Id:   147
# Schema: nova  Last_errno: 0  Killed: 0
# Query_time: 0.001712  Lock_time: 0.000000  Rows_sent: 4  Rows_examined: 4  Rows_affected: 0
# Bytes_sent: 1461  Tmp_tables: 0  Tmp_disk_tables: 0  Tmp_table_sizes: 0
# InnoDB_trx_id: C698
# QC_Hit: No  Full_scan: Yes  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No
# Filesort: No  Filesort_on_disk: No  Merge_passes: 0
#   InnoDB_IO_r_ops: 0  InnoDB_IO_r_bytes: 0  InnoDB_IO_r_wait: 0.000000
#   InnoDB_rec_lock_wait: 0.000000  InnoDB_queue_wait: 0.000000
#   InnoDB_pages_distinct: 2
SET timestamp=1409074305;
SELECT quota_usages.created_at AS quota_usages_created_at, quota_usages.updated_at AS quota_usages_updated_at, quota_usages.deleted_at AS quota_usages_deleted_at, quota_usages.deleted AS quota_usages_deleted, quota_usages.id AS quota_usages_id, quota_usages.project_id AS quota_usages_project_id, quota_usages.user_id AS quota_usages_user_id, quota_usages.resource AS quota_usages_resource, quota_usages.in_use AS quota_usages_in_use, quota_usages.reserved AS quota_usages_reserved, quota_usages.until_refresh AS quota_usages_until_refresh
FROM quota_usages
WHERE quota_usages.deleted = 0 AND quota_usages.project_id = '12ce401aa7e14446a9f0c996240fd8cb' FOR UPDATE;

So where does it come from?

These constructs are generated by SQLAlchemy using with_lockmode(‘update’). Even in nova’s pydoc, it’s recommended to avoid with_lockmode(‘update’) whenever possible. Galera replication is not mentioned among the reasons to avoid this construct, but knowing how many OpenStack deployments use Galera for high availability (either Percona XtraDB Cluster, MariaDB Galera Cluster, or Codership’s own mysql-wsrep), it can be a very good reason to avoid it. The solution proposed in the linked pydoc above is also a good one: an INSERT INTO … ON DUPLICATE KEY UPDATE is a single atomic write that will be replicated as expected, and it also keeps correct track of quota usage.
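To illustrate the atomic alternative, here is a sketch using SQLite's upsert syntax as a stand-in for MySQL's INSERT … ON DUPLICATE KEY UPDATE; the table and column names are only loosely modeled on nova's quota_usages and are not its real schema.

```python
import sqlite3

# In-memory database standing in for the real nova schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE quota_usages (resource TEXT PRIMARY KEY, in_use INTEGER)")

def reserve(resource, amount):
    # One atomic statement instead of a SELECT ... FOR UPDATE
    # read-modify-write, so it replicates as a single self-contained
    # write set. (MySQL spelling: INSERT ... ON DUPLICATE KEY UPDATE.)
    conn.execute(
        "INSERT INTO quota_usages (resource, in_use) VALUES (?, ?) "
        "ON CONFLICT(resource) DO UPDATE SET in_use = in_use + excluded.in_use",
        (resource, amount))
    conn.commit()

reserve("instances", 1)
reserve("instances", 2)
```

Because the whole increment happens inside one statement, there is no window where a local row lock exists without a corresponding write set, which is exactly the window that triggers the brute force aborts described above.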

The simplest way to overcome this issue from the operator’s point of view is to use only one writer node for these types of transactions. This usually involves configuration change at the load-balancer level. See this post for possible load-balancer configurations.

The post OpenStack users shed light on Percona XtraDB Cluster deadlock issues appeared first on MySQL Performance Blog.

by Peter Boros at September 11, 2014 03:27 PM

Red Hat Stack

What’s Coming in OpenStack Networking for Juno Release

Neutron, historically known as Quantum, is the OpenStack project focused on delivering networking as a service. As the Juno development cycle ramps up, now is a good time to review some of the key changes we saw in Neutron during this exciting cycle and have a look at what is coming up in the next upstream major release which is set to debut in October.

Neutron or Nova Network?

The original OpenStack Compute network implementation, also known as Nova Network, assumed a basic model of performing all isolation through Linux VLANs and iptables. These are typically sufficient for small and simple networks, but larger customers are likely to have more sophisticated network requirements. Neutron introduces the concept of a plug-in, which is a back-end implementation of the OpenStack Networking API. A plug-in can use a variety of technologies to implement the logical API requests and offer a rich set of network topologies, including network overlays with protocols like GRE or VXLAN, and network services such as load balancing, virtual private networks or firewalls that plug into OpenStack tenant networks. Neutron also enables third parties to write plug-ins that introduce advanced network capabilities, such as the ability to leverage capabilities from the physical data center network fabric, or use software-defined networking (SDN) approaches with protocols like OpenFlow. One of the main Juno efforts is a plan to enable easier Nova Network to Neutron migration for users that would like to upgrade their networking model for the OpenStack cloud.

Performance Enhancements and Stability

The OpenStack Networking community is actively working on several enhancements to make Neutron a more stable and mature codebase. Among the different enhancements, recent changes to the security-group implementation should result in significant improvement and better scalability of this popular feature. As a reminder, security groups allow administrators and tenants to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a Neutron port, effectively creating an instance-level firewall filter. You can read this great post by Miguel Angel Ajo, a Red Hat employee who led this effort in the Neutron community, to learn more about the changes.

In addition, there are continuous efforts to improve the upstream testing framework, and to create a better separation between unit tests and functional tests, as well as better testing strategy and coverage for API changes.

Another proposal that is being added to the Neutron project is an incubator effort to develop new features. The new incubator enables new features to mature and develop prior to adoption into the integrated release. The plan is that incubator features only stay in the incubator for two release cycles or fewer before they potentially graduate to a full feature within the project.

L3 High Availability

The neutron-l3-agent is the Neutron component responsible for layer 3 (L3) forwarding and network address translation (NAT) for tenant networks. This is a key piece of the project that hosts the virtual routers created by tenants and allows instances to have connectivity to and from other networks, including networks that are placed outside of the OpenStack cloud, such as the Internet.

In the current reference architecture available using the upstream code, the neutron-l3-agent is placed on a dedicated node or nodes, usually bare-metal machines referred to as “Network Nodes”. Until now, you could utilize multiple Network Nodes to achieve load sharing by scheduling different virtual routers on different nodes, but not high availability or redundancy between the nodes. The challenge with this model is that all the routing for the OpenStack cloud happens at a centralized point. This introduces two main concerns:

1. This makes each Network Node a single point of failure (SPOF)
2. Whenever routing is needed, packets from the source instance have to go through a router on the Network Node before being sent to the destination. This centralized routing creates a resource bottleneck and an unoptimized traffic flow

Two Juno efforts aim to address these issues. The first is a proposal to add high availability to the Network Nodes, so that when one node fails the others can take over automatically; this implementation internally uses the well-known VRRP protocol. The second is to introduce distributed virtual routing (DVR) functionality by placing the neutron-l3-agent on the Compute nodes (hypervisors) themselves. In contrast to the Network Nodes approach, a deployment using distributed virtual routing will require external network access on each Compute node.

Ideally, customers will have the option to choose what model best suits their needs, or even to combine between them to enjoy the benefits of each one: distributed virtual routing (DVR) to handle routing within the OpenStack cloud (also known as east-west traffic) as well as 1:1 NAT for floating IPs, and highly-available Network Nodes to handle the centralized source NAT (SNAT) to allow instances to have basic outgoing connectivity, as well as advanced services such as virtual private networks or firewalls – which by design require seeing both directions of the traffic flow in order to operate properly. Assaf Muller, a Red Hat associate who contributes  in this area, covers this in a more detailed fashion in this excellent blog post.

While both of these upstream efforts seem to be interrelated at first glance, it’s important to mention that during the Juno cycle these were two separate efforts. Combining them into a unified solution as described earlier is something to look for in future releases, and a topic that will be further discussed at the upcoming Kilo Design Summit.

Time for Some IPv6

IPv6 is a critical part of the promise of the cloud. If you want to connect everything to the network, you better plan for massive scale and have enough addresses to use. IPv6 is also increasingly important in the network functions virtualization (NFV) and telecommunication service provider space.

One of the big items that we expect to land in the Juno release is more complete support for IPv6 networking. This is an important milestone for IPv6 in Neutron: the topic has been a development focus for the last few cycles, and the API layer for supporting IPv6 subnet attributes was defined previously, but Juno will be the first release that actually introduces features on top of it.

The Juno features are mostly concentrated on IPv6 address assignment for tenant instances; while IPv4 is more straightforward when it comes to IP address assignment (and DHCP is by far the most common deployment in production with IPv4), IPv6 offers more flexibility and options to choose from. Both stateful and stateless DHCPv6 are expected to be supported in OpenStack Neutron for the Juno release, as well as the ability to use Stateless Address Autoconfiguration (SLAAC).

Get Started with OpenStack Neutron

If you want to try out OpenStack, or to check out some of the above enhancements yourself, you are more than welcome to visit our RDO site. We have documentation to help get started, forums where you can connect with other users, and community-supported packages of the most up-to-date OpenStack releases available for download.

If you are looking for enterprise-level support and our partner certification program, Red Hat also offers Red Hat Enterprise Linux OpenStack Platform.

by Nir Yechiel at September 11, 2014 02:00 PM

Matthias Runge

Truncating log files

Depending on your settings, OpenStack Dashboard produces lots of log output. Fortunately there is already a tool in place, which cleans them up for you. Looking at /var/log/httpd/ you'll probably notice files like access_log-(date).gz. They were generated by logrotate by compressing existent logs.

To use the same mechanism for OpenStack Dashboard, create a file /etc/logrotate.d/openstack-dashboard:

/var/log/horizon/*.log {
    weekly
    rotate 4
    notifempty
    postrotate
        /bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
    endscript
}

Make sure your file has permissions 644: chmod 644 /etc/logrotate.d/openstack-dashboard. To test whether it works, issue logrotate -d /etc/logrotate.conf and watch its output closely.

You should find lines like:

reading config file openstack-ceilometer
reading config file openstack-cinder
reading config file openstack-dashboard
reading config file openstack-glance
reading config file openstack-heat

and a bit further down:

rotating pattern: /var/log/horizon/*.log  weekly (4 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/horizon/horizon.log
  log does not need rotating

by mrunge at September 11, 2014 11:20 AM

Steve Hardy

Using Heat ResourceGroup resources

This has come up a few times recently, so I wanted to share a howto showing how (and how not) to use the group resources in Heat, e.g. OS::Heat::ResourceGroup and OS::Heat::AutoScalingGroup.

The key thing to remember when dealing with these resources is that they can multiply any number of resources (expressed as a heat stack), not just individual resources. This is a very cool feature when you get your head around it! :)

Let's go through a worked example, where we use ResourceGroup to create 5 identical servers, each with a cinder volume of the same size attached.

Resource group basics

To create one server with a volume attached, you define the server, a volume, and a volume attachment resource, like this:

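A minimal sketch of what such a template might contain (the image and flavor names are placeholders, not the post's original gist):

```yaml
heat_template_version: 2013-05-23
description: One server with one attached volume (sketch)

parameters:
  image:
    type: string
    default: fedora-20.x86_64   # placeholder image name
  flavor:
    type: string
    default: m1.small           # placeholder flavor name

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}

  volume:
    type: OS::Cinder::Volume
    properties:
      size: 1

  attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: {get_resource: server}
      volume_id: {get_resource: volume}
```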
Now, let's say you need 5 (or 500) of these identical servers with an attached volume.  What you do *not* want to do is create three groups of resources (Server, Volume and VolumeAttachment) and somehow try to connect them all together.  This is an anti-pattern which will cause you much pain and frustration! :)

Instead, you need to use ResourceGroup to scale out the combination of resources.  Fortunately, Heat makes this very easy to do.  Let's say you call the template above (creating one server with attached volume) server_with_volume.yaml; you can then create 5 identical nested stacks, each containing one server, volume and volume attachment, like this:
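A sketch of what server_with_volume_group.yaml might look like; the count and nested template filename follow the text, while the resource name is illustrative:

```yaml
heat_template_version: 2013-05-23
description: Five identical server-plus-volume nested stacks (sketch)

resources:
  server_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: 5
      resource_def:
        # Each group member is a nested stack built from the
        # single-server template referenced by filename.
        type: server_with_volume.yaml
```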

Note: currently templates referencing nested stack templates can only be launched via python-heatclient (not the Horizon dashboard, a known issue we're working on resolving).

Simply do heat stack-create my_group -f server_with_volume_group.yaml and Heat will create 5 identical servers, attached to 5 identical volumes!

A more complete example related to the fragments above is available here.

Resource groups and provider resources

What's that you say? You don't like the nested stack reference hard-coded template name? No problem! :) You can also make use of the environment to define a provider resource type alias.

Then specify the type alias instead of the template name in the ResourceGroup definition:
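A sketch of such an environment file; the My::Server::WithVolume alias appears in the post, while the mapping shown is an assumption:

```yaml
# env_server_with_volume.yaml -- map a provider resource alias
# to the nested template file
resource_registry:
  My::Server::WithVolume: server_with_volume.yaml
```

With this environment in place, the ResourceGroup's resource_def simply uses type: My::Server::WithVolume instead of the template filename.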
This can be launched like this: heat stack-create my_group2 -f server_with_volume_group.yaml -e env_server_with_volume.yaml

The example works exactly as before, but different versions of My::Server::WithVolume can easily be substituted. For example, if you have a staging workflow where the resource alias is reused across a large number of templates, a different version of the nested template can be specified by changing it in one place: the environment.

That is all. For more information, please see the examples in the heat-templates repository, and this new example which shows how to attach several identical volumes to one server.

by Steve Hardy ( at September 11, 2014 10:48 AM


OpenStack Silicon Valley is Sold Out!

Well, we kind of hoped that it wouldn’t happen — and that it would.

I am both sad and excited to announce that OpenStack Silicon Valley is officially sold out. Sad, because it is clear that the demand for a conference featuring so many of the key influencers of OpenStack significantly outweighs the supply of tickets. If you haven’t already purchased your ticket, I highly recommend adding your name to the waitlist (found here).

If you aren’t able to attend Silicon Valley, you still have a chance to enjoy the event. I’m happy to announce that the keynote speeches of OpenStack Silicon Valley will be streamed live on September 16th, starting at 9am Pacific. Special thanks to Blue Box Cloud for making the live stream possible through their sponsorship.

Additionally, our friends at SiliconANGLE will be at the event filming interviews with several of the prominent speakers during the conference. You can sign up for the OpenStack Silicon Valley live feed here, and you can find The Cube (by Silicon Angle) streaming interviews on their Ustream channel the day of the event.

We’re only one week away from OpenStack Silicon Valley, and I hope you are all as excited as I am for the show. A lot of hard work has gone into putting everything together, and I’m sure OpenStack Silicon Valley will not disappoint.

See you in 6 days.

Register here for the OpenStack Silicon Valley live stream

The post OpenStack Silicon Valley is Sold Out! appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Samuel Lee at September 11, 2014 12:14 AM

Cloudify Engineering

Going Hybrid Cloud with OpenStack

OpenStack Silicon Valley is just a few short days away and we are looking forward to talking OpenStack with the...

September 11, 2014 12:00 AM

September 10, 2014

Tesora Corp

What is Trove, the Database as a Service on OpenStack? How Does it Function, and What Does the Future Hold?

If you've been wondering about Trove, how it works and where it's headed, this is a must-watch video. Recorded at OpenStack Trove Day, Nikhil Manchanda, the Trove Project Lead, and Doug Shelley, VP of Product Development at Tesora, go in depth on how Trove works, including a demo showing Trove in action. The team also outlines the additional features and functionality planned for the OpenStack Juno release coming up in November.

by 86 at September 10, 2014 09:29 PM

Tesora Corp

Short Stack: HP wants to make OpenStack easier, OpenStack adoption in enterprise and CloudStack begins its slow fade

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Here we go with this week's links:

OpenStack adoption creeps toward corporate acceptance | TechTarget

While public cloud choices dominate, these authors argue that technical issues are preventing companies from adopting OpenStack more fully. They could be right, but the industry has begun working on products to reduce or hide the complexity.

CloudStack, losing to OpenStack, takes its ball and goes home | InfoWorld

History is littered with projects that fell by the wayside after widespread adoption of the competition. Citrix, the chief sponsor of CloudStack, apparently sees the handwriting on the wall. It may not be folding completely, but it has clearly recognized that OpenStack is the open alternative to the public cloud behemoths.

Analyst: Why Red Hat will win OpenStack - Triangle Business Journal

One analyst believes that when enterprise users finally fully embrace OpenStack, Red Hat will be the winner. I'm not sure the market will be defined by winners and losers, but if it is, it's really too soon to say how Red Hat will fare against HP, IBM, Mirantis and other players in this space.

HP offers OpenStack services offerings | ZDNet

Speaking of OpenStack complexities: HP certainly recognizes that customers are struggling to implement OpenStack because of a lack of technical expertise, so they are offering services to help. This could be the start of a long line of similar programs.

CenturyLink Said to Seek to Acquire Rackspace Hosting | Bloomberg

CenturyLink appears to be the latest suitor to knock on Rackspace's door. If it's true, it would make a lot of sense for them to grab the company that is one of the OpenStack founding members and fold it into its growing cloud services offerings. But we've seen suitors come and go and we'll have to see if CenturyLink actually pulls the trigger on this deal.

by 693 at September 10, 2014 02:18 PM

Rafael Knuth

Google+ Hangout: How is IBM using OpenStack?

In this meetup we will talk about how IBM is using OpenStack together with their mainframe...

September 10, 2014 10:52 AM

September 09, 2014


September 2014 OpenStack Training Update: New 75% Hands-On Course, OpenStack Summit Training

Too often, the largest barrier preventing full-scale OpenStack adoption is a company’s general lack of deployment and implementation expertise. Having first been introduced to the IT community only four years ago, OpenStack is still relatively young, which can make finding that expertise difficult.

To help with that expertise gap, we are adding an additional OpenStack training course to our lineup of training courses: OpenStack Bootcamp II (OS200). Serving as a follow up to our OpenStack Bootcamp I (OS100), this newest course is designed to help students develop a mastery of skills centered on deploying, administering, and troubleshooting an OpenStack environment. We have also added two new training locations.

New OpenStack Training for IT Administrators and Deployment Engineers

With our first class launching in San Jose at the end of September, this OpenStack Bootcamp II (OS200) course is intended to ensure that each student receives the deployment, implementation and administrative expertise necessary to handle the day-in and day-out responsibilities of working within OpenStack.

This course features an extensive training syllabus that includes a focus on installing and validating OpenStack projects such as Keystone, Heat, and Ceilometer. Students who complete this three-day bootcamp will have received comprehensive training in:

  • Manually Installing and Configuring OpenStack
  • Troubleshooting OpenStack Environments
  • Learning OpenStack Best Practices
  • Identifying Production Deployment Topology
  • Using the Command Line Interface and Dashboard

Learn more about the OS200 OpenStack bootcamp.

Two New Training Locations

Looking for OpenStack training at a place that is convenient for you? At Mirantis, we are always working to expand the reach of our OpenStack training locations. This month, we have added two new cities to our continually expanding list of training sites.

  • Montreal
  • Atlanta

Click here for a complete list of our training locations.

A Special OpenStack Summit Training

Are you planning on attending this year’s OpenStack Summit in Paris? Why not maximize your time by taking advantage of our special OpenStack bootcamp in Paris?

Located within convenient walking distance of the summit, this special training is being offered during the three days prior to the Paris Summit, so attending students can receive the critical training necessary to operate and deploy an OpenStack cluster before heading to the Summit itself.

Interested in attending the OpenStack training in Paris? Click here for more information.

The post September 2014 OpenStack Training Update: New 75% Hands-On Course, OpenStack Summit Training appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Lana Zhudina at September 09, 2014 06:45 PM

Kenneth Hui

Should OpenStack Be An Automobile Or A Faster Horse?


Henry Ford was famously quoted as saying, “If I had asked people what they wanted, they would have said faster horses,” in response to complaints that he ignored customer input in favor of his own vision when it came to the Model-T.  While it is debatable if Ford actually uttered those exact words, most historians would agree that Ford had a vision for manufacturing automobiles that was often at odds with customer demands – “Any color as long as it is black.”  It is also not debatable that Henry Ford was able to marry this homogeneous design approach with the assembly line process to drive down the cost of manufacturing automobiles and create a new market that the Ford Motor Company was able to dominate from 1908 to 1920.

The debate over how best to drive product/project innovation, either by meeting customer demands or by assuming customers know what they want but not necessarily what they need, is garnering increasing attention as OpenStack continues its push into the Enterprise, or, to borrow the model Geoffrey Moore pioneered, attempts to cross the chasm from the early adopters to the early majority.

[Image: OpenStack adoption]

Recently, the guys over at The Cloudcast, Aaron Delp and Brian Gracely, talked in their podcast about the impact of the VMware crowd joining the OpenStack community, the unsurprising call for OpenStack to be a “free VMware,” and whether there is a need for a split between modern and traditional IT.  A few days before the podcast, I had a short exchange with Brian and Mark Twomey, a.k.a. Storagezilla, regarding the same topic.


That exchange (which was the inspiration for this post) originated from some observations I tweeted regarding what I have been hearing as I’ve spoken with enterprises about their interest in and adoption of OpenStack.  While some view, correctly, that OpenStack is designed today to be the cloud platform for next-generation 3rd Platform workloads, many others want OpenStack to be an open source alternative to VMware’s vCloud Suite.  While I was at Rackspace and talking to enterprise customers who were starting to investigate OpenStack, the most frequently asked question I heard was, “does OpenStack have Virtual Machine (VM) High-Availability (HA) and vMotion [Live Migration]?”  As customer understanding of OpenStack capabilities has grown over time, my current conversations with EMC customers frequently lead to questions about how we can add VM HA and Live Migration features to OpenStack.

And it’s not just EMC customers either.  I recently joined the OpenStack Foundation’s Win The Enterprise (WTE) technical working group.  As the group’s wiki indicates, the mission of this joint multi-vendor/multi-end-user group is “to look at ‘Pet’-style workloads on Private Clouds in large, process-heavy organisations” and “to conduct gap analysis of OpenStack vs. Enterprise Private Cloud needs, identify issues, and create action plans [i.e. blueprints and documentation] to solve them.”  As the public meeting notes indicate and the meetings I have attended confirm, there is a strong interest in making deployments and operations easier, which I doubt anyone would disagree are much-needed improvements.  However, there is also a strong desire to add features, such as VM HA, Live Migration, Distributed Resource Scheduler (DRS), and shared Cinder volumes, to core OpenStack projects.  The inclusion of these features, long found in enterprise virtualization technologies such as VMware vSphere, has been debated ever since OpenStack was open sourced as a project.

These debates have become more vocal as enterprise customers and vendors have increased their push to include such traditional virtualization features through initiatives such as WTE and at the Design Summit sessions.  At these sessions, you can clearly see developers who want to preserve the purity of OpenStack as a next-generation cloud platform, in the mode of Amazon Web Services (AWS), with shared-nothing architectures and reliance on application-level resiliency.  You can also hear from many who believe that for OpenStack to mature as a platform for enterprises to use, it must add features to accommodate traditional 2nd platform workloads that require shared infrastructure and VM resiliency.

[Image: third platform]

Who is right here, the “cloud purists” or the “enterprise pragmatists?”  I find myself both sympathetic to and in opposition to both sides at various points. As I see it, there are at least three approaches that the OpenStack community can take, all with pros and cons to them.  The names I am proposing are how I believe the proponents of these disparate approaches see themselves.  Hopefully, I am not creating straw men.

  • “Purist” Approach – This was the predominant view during the early days of the project, when OpenStack was envisioned as an open source, private cloud alternative, not so much to VMware vSphere, but to AWS.  As I’ve explained in various talks and in an earlier blog post, “workload dictates architecture.” This fundamental principle explains why OpenStack was not designed to use shared components or to provide VM resiliency, which is necessary for running traditional workloads such as an Oracle relational database or Microsoft Exchange.  The workloads that OpenStack was designed to go after are the next-generation web scale applications, like Netflix OSS, that run primarily on cloud platforms like AWS and Rackspace Cloud.  The argument of the purists is that by changing course and going after traditional workloads, the OpenStack community risks falling behind on the innovation curve and also enabling enterprises to avoid the inevitable move to the 3rd platform, i.e. giving users a faster horse when future relevance means they need an automobile.  The counter argument is that the purists are being tone-deaf to user needs and not accepting the reality of technology artifacts in the enterprise.
  • “Pragmatist” Approach – At the opposite end of the spectrum, seemingly, are the pragmatists who wonder what use there is to selling race cars when users are only ready for horses or jalopies at best?  The pragmatists argue that 80% to 90% of enterprise workloads today are traditional 2nd platform applications that will take many years to be rewritten or to be replaced by 3rd platform applications.  If OpenStack focuses only on next-generation workloads, users will move to other platforms or stay with the platforms they are currently on and just look to evolve with them.  If that happens, OpenStack risks being relegated to niche product status.  The future of OpenStack lies in being able to embrace the Enterprise, to meet them where they are, and to not take the approach of “change or die.”  Adding features like VM HA and DRS will allow OpenStack to propagate in the Enterprise and for the OpenStack community to become a change agent that helps users move into the future.  The counter argument is that we would then be, as outlined before, backtracking on the original design goals behind the project and retarding OpenStack innovation.
  • “Bimodality” Approach – This approach plays off an excellent blog post on Bimodal IT, written by Lydia Leong of Gartner.  Proponents of the bimodality approach would argue that both the purists and the pragmatists are wrong-headed and inflexible.  Either approach forces users and developers unnecessarily into a “Sophie’s Choice” dilemma that ultimately does not serve anyone’s best interest.  Instead, we should focus on allowing enterprises to evolve naturally to the 3rd platform with OpenStack as the preferred cloud platform for that workload, while not making dramatic changes to the OpenStack core and instead leveraging features that already exist in traditional virtualization technologies.  One option in this approach, which I’ve blogged about in the past, includes running virtualization technologies such as KVM and vSphere as options in a multi-hypervisor OpenStack deployment; a second option is running two distinct cloud platforms.
    •   This first option would allow customers to run traditional 2nd platform workloads in a “legacy workload” zone and to run 3rd platform workloads, using an open source hypervisor, in a “next-gen workload” zone.  A variation on this theme is the recently announced beta for VMware Integrated OpenStack (VIO) which leverages vSphere to run OpenStack services and to manage vSphere as an OpenStack hypervisor.
    • A second option under the “bimodality” approach is to run OpenStack and other virtualization technologies as separate platforms, each running the workload best suited for that platform, and then use a multi-cloud management platform such as VMware’s vCloud Automation Center, RightScale, or others to be the master orchestration tool across all the platforms.

However, one of the primary arguments against this approach is that it still requires users to license proprietary software in order to gain the benefits of the features required to run traditional workloads and, in the latter option, to manage multiple platforms.

So as I asked earlier, who is right?  Which approach is the best to take?  While I haven’t settled that definitively in my own mind, I do find myself leaning towards the “bimodality” approach.  I find the “purist” approach often too inflexible and unrealistic in light of an industry where mainframes still run core parts of many enterprise businesses.  I also find it interesting that the “pragmatist” approach, ultimately, is not really that different from the “purist” approach that it opposes.  At the end of the day, if the argument of the “pragmatist” is that OpenStack must change to be more traditional-workload centric, then they have become simply “legacy purists” throwing stones at “cloud purists.”

And therein lies one of the potential pitfalls of an inflexible purist approach, legacy or cloud.  Eventually, Ford’s inflexibility allowed General Motors to disrupt them, less than a decade after the heyday of the Model-T, by satisfying the needs of the consumer – “A car for every purse and purpose” – while still moving the automotive industry forward and delighting customers with innovations such as annual model changes and used car trade-ins.  Instead of sticking to our “pure” principles, the OpenStack community may be best served by taking a “bimodality” approach, which I am apt to describe as being the true pragmatist approach.

However, one of the beauties of an open source community is that we all have the freedom to discuss and yes, to argue for our point of view.  I am open to being told that I am completely wrong and/or that I’ve created straw men to knock down.  As community events such as OpenStack Silicon Valley next week and the OpenStack Summit in November take place, I look forward to healthy and respectful dialogue with the community.

Filed under: Cloud, Cloud Computing, OpenStack, Private Cloud, Virtualization, VMware Tagged: Cloud, Cloud computing, OpenStack, Private Cloud, VMware vSphere

by Kenneth Hui at September 09, 2014 03:50 PM

Tesora Corp

Blog Post #1: The OpenStack Trove Roadmap: A Look at Replication and Clustering in 3 Parts.

As applications are migrated to the cloud, the complexity of operating databases in this new environment has become apparent. It is hard to operate a significant database infrastructure even when you have the luxury of doing it in a controlled data-center on dedicated hardware. The cloud introduces performance variability and virtualization overhead, and gives the end user a much lower level of control over the underlying hardware. In the public cloud, the reliability of an individual virtual machine instance is considerably lower than that of a dedicated machine in a data-center. When operating a large fleet of servers, observed failures are much more frequent. All of this makes operating a database in the cloud much more challenging.

Database-as-a-Service simplifies the use of databases in the cloud by relieving the administrator of much of the burden of operating the infrastructure. By being closely tied to the underlying infrastructure, and by automating many common operations, DBaaS considerably simplifies many of these activities. Failures, however, can cause interruptions in the service, so it is essential that the DBaaS platform accounts for them and handles them in a manner that makes failures transparent to the end user.

Trove accomplishes this in several ways. First, Trove is closely tied to the underlying OpenStack infrastructure, integrated closely with Nova, Neutron, Swift, Cinder and Keystone. It automates a considerable amount of the configuration and setup steps required in launching a new server, similar to other tools like Puppet, Chef, and Ansible.  It also allows a site administrator to establish standard configurations and reliably launch servers with those configurations.

One area where this configuration support is especially important is in the case of clustering and replication. Without Trove, a user would have to configure these features manually and manage failures and failover by themselves. Trove promises to automate these capabilities, and the functionality is being implemented in phases.

The initial implementation of replication in Trove will be for MySQL data stores using the built-in MySQL replication feature. Subsequent phases will extend this capability to include clustering and replication for all data stores that Trove supports. In the first release of this feature, users will be able to create a single MySQL instance and then create a slave of that instance. The act of creating the slave will establish a new instance, which will be the replication peer of the initial instance.

The following commands illustrate how a user would do this. Consider first the following running Trove instance with MySQL version 5.5:

$ trove list

ID                                    Name  Datastore  Datastore Version  Status  Flavor ID  Size
d2bd91ef-3d7c-43ae-97a9-f0726c91d322  m1    mysql      5.5                ACTIVE  7          2

One would now create a second (slave) instance referencing the master provided above, as follows.

$ trove create s1 7 --size 2 --slave_of d2bd91ef-3d7c-43ae-97a9-f0726c91d322

Property           Value
created            2014-06-13T14:33:27
datastore          mysql
datastore_version  5.5
flavor             7
id                 9ffc7b3a-9205-412a-9cd2-521f95755c43
name               s1
slaveof            d2bd91ef-3d7c-43ae-97a9-f0726c91d322
status             BUILD
updated            2014-06-13T14:33:27
volume             2

The user can now look at the state of the replicated pair as shown below.

$ trove show 9ffc7b3a-9205-412a-9cd2-521f95755c43

Property           Value
created            2014-06-13T14:33:27
datastore          mysql
datastore_version  5.5
flavor             7
id                 9ffc7b3a-9205-412a-9cd2-521f95755c43
name               s1
slaveof            d2bd91ef-3d7c-43ae-97a9-f0726c91d322
status             ACTIVE
updated            2014-06-13T14:33:27
volume             2
$ trove show d2bd91ef-3d7c-43ae-97a9-f0726c91d322
Property           Value
created            2014-06-13T14:33:27
datastore          mysql
datastore_version  5.5
flavor             7
id                 d2bd91ef-3d7c-43ae-97a9-f0726c91d322
name               m1
slaves             9ffc7b3a-9205-412a-9cd2-521f95755c43
status             ACTIVE
updated            2014-06-13T14:33:27
volume             2

To disconnect a slave from a master, the user would do this:

$ trove detach_replication <slave instance>

Now that you know the basic mechanics of Trove’s replication feature, the next post will describe the implementation of the Client and the Task Manager in detail.

by 1 at September 09, 2014 03:04 PM

September 08, 2014


OpenStack Silicon Valley Featured Track: Planning Your Agile Deployment

Over these final few weeks leading up to OpenStack Silicon Valley, we will be featuring the 4 different tracks of the show, which will take place once the keynote speeches have ended. Today’s track: Planning Your Agile Deployment.

It’s obvious that agile infrastructure is fundamentally different from traditional enterprise architectures. What’s perhaps less obvious is how to plan for building agile infrastructure.

The OpenStack Silicon Valley “Planning Your Agile Deployment” track pulls no punches in examining four critical questions to consider as you plan your agile infrastructure deployment. The track begins with the deployment approach: tooling and the architectural approach to bootstrapping OpenStack at scale. Next, the track talks about the real-world tradeoffs between consuming trunk and consuming a distro. A conversation about architecting for hybrid cloud deals with the critical questions of API, architecture, and network decisions. Finally, the track concludes with a session on the fast-evolving technologies of platforms and containers, looking at how your choice of app dev platform approach impacts your infrastructure choices, and vice versa.

Participants will leave this track ready to help their organizations understand the advantages and tradeoffs to consider when planning an agile infrastructure deployment powered by OpenStack.

The Planning Your Agile Deployment track will be divided into 4 sessions:

Session 1: Deployment Approach

The first session of this track starts with basic questions about deployment tooling and the architectural approach to bootstrapping a large-scale datacenter hosting OpenStack clouds. We’ll look at image-based bootstrapping choices such as Crowbar (the first open source OpenStack-focused deployment framework) and TripleO (OpenStack on OpenStack). We’ll compare these with script-based bootstrapping options such as Puppet and Chef. If you want to understand the basic differences between these deployment approaches, this session is designed to tell you what you need to know.


Session 2: CI/CD, Consuming Trunk and the Allure of Distros

Before deploying agile infrastructure powered by OpenStack, there’s a big decision to make: Do I consume trunk code, or do I use a supported distro? This session looks at the freedom and responsibilities of consuming trunk, comparing these with the security and speed of consuming a distro. We’ll look at how CI/CD works, both in the OpenStack community and as it applies to what you’ll be doing in-house if you choose to consume trunk. We’ll also look at consuming supported distros, including the safety and speed they offer, versus the flexibility limitations and relatively higher lock-in you must accept. With a distro, you’re allowing a vendor to manage the CI/CD responsibilities, but the freedom of building precisely the agile cloud infrastructure you want may be worth the responsibilities of maintaining code and documentation. Or, you might be better served with a distro, accepting higher lock-in and rigidity in exchange for speed and freedom from responsibility for maintaining continuous integration and deployment. We’ll help you make that call.


Session 3: Hybrid Cloud and OpenStack

The ability to span workloads across public and private infrastructure has many advantages. The technical challenges to overcome in making this an operational reality are, however, substantial. In this session, we look at agile infrastructure planning considerations when hybrid cloud is the goal. We’ll look at APIs, architecture, networking models, colo selection, behavioral compatibility and other questions you need to address in order for your hybrid deployment to deliver the performance and economics your apps and app devs expect.


Session 4: Platforms, Containers, or Something Else

PaaS options like Pivotal’s Cloud Foundry and Red Hat’s OpenShift and container technologies like Docker are all working toward the same end: freeing developers to focus on code rather than infrastructure while making it possible to run apps on different infrastructures without retooling. This session focuses on how organizations are using platform and container technologies to improve agility. Check out this session to understand the differences between Docker (“the OpenStack of containers”) and PaaS. We’ll also take a look at Murano, an OpenStack project that provides an application catalog for app devs and cloud admins to publish various cloud-ready applications in a browsable, categorized catalog.


Don’t miss out on OpenStack Silicon Valley and its incredible lineup of speakers – register today!

The post OpenStack Silicon Valley Featured Track: Planning Your Agile Deployment appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Samuel Lee at September 08, 2014 08:24 PM

Deconstructing the open cloud, the OpenStack Trove roadmap, and more

Interested in keeping track of what's happening in the open source cloud? This weekly roundup is your source for what's happening right now in OpenStack, the open source cloud infrastructure project.

by Jason Baker at September 08, 2014 03:00 PM

Matthias Runge

Horizon's new features introduced in Juno cycle

This post intends to give an overview on what happened during Horizon's Juno development cycle. Horizon's blueprints page on launchpad lists 31 implemented new features. They may be grouped into a few larger features.


Sahara integration

Apache Hadoop is a widely adopted MapReduce implementation. The aim of the Sahara project is to enable users to easily provision and manage Hadoop clusters on OpenStack.

During the Juno development cycle, the independent Sahara dashboard was merged into Horizon. Like all other optional features in Horizon, it is shown when Sahara is registered as a service in Keystone.


Role-Based Access Control

Horizon-2014.1, aka the Icehouse version, has support for RBAC for Glance and Cinder. For example, creating, accessing, or deleting images can be limited on a per-user or per-role basis. In Juno, this RBAC system was extended to cover compute, network, and orchestration.
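Under the hood, such checks follow the oslo.policy pattern: an operation name maps to a rule string such as "role:admin" that is evaluated against the caller's roles. A minimal, self-contained sketch of that idea (the rule and role names below are illustrative, not Horizon's actual policy files):

```python
# Sketch of an oslo.policy-style role check.
# Rule strings and role names are illustrative only.

POLICY = {
    "image:delete": "role:admin",                  # only admins may delete images
    "volume:create": "role:admin or role:member",  # admins or members may create volumes
}

def check(rule_name, user_roles):
    """Return True if any 'role:<name>' clause of the rule matches the user's roles."""
    rule = POLICY.get(rule_name, "")
    for clause in rule.split(" or "):
        clause = clause.strip()
        if clause.startswith("role:") and clause[len("role:"):] in user_roles:
            return True
    return False

print(check("image:delete", {"member"}))   # False: members may not delete images
print(check("volume:create", {"member"}))  # True
```

The real policy engine supports far richer expressions (negation, nesting, attribute checks); this only shows the shape of the role-based rules the dashboard consults before rendering an action.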

JavaScript unbundling

In the past, quite a few JavaScript libraries were copied into Horizon's code. The benefit is that they are available directly in Horizon and developers are in control of them. On the other hand, if there is a security flaw in any of those bundled files, Horizon developers are in charge of fixing it. Many Linux distributions, for example Fedora, have a rule not to ship such bundled code at all. As a result of this feature, all bundled libraries were removed from Horizon, and system-provided libraries are used through python-XStatic.

Horizon was originally intended as a framework to enable the development of a dashboard, such as the OpenStack Dashboard, which itself is now widely known under the name Horizon.

However, the Horizon framework is generic and could be used by projects outside of OpenStack; it is useful for building a dashboard based on RESTful services. There is a blueprint on Launchpad to separate the Horizon framework from the OpenStack Dashboard. To enable this, the JavaScript libraries need to become separate as well.

Look and feel improvements

There are quite a few blueprints striving to improve look and feel. For example, Bootstrap was updated to version 3 and the color palette was centralized, and the tablesorter plugin gained a feature to sort by timestamp. These changes make it easier to customize Horizon for individual needs.

Even more

A few patches were added to enable storing metadata, as in Cinder or Glance, where users can register key/value pairs to describe their cloud deployment. Another set of patches implements features for Neutron, such as support for IPv6 and Neutron subnets.

by mrunge at September 08, 2014 11:20 AM

Red Hat Stack

OpenStack Resources for the vAdmin

Across many enterprise organizations, IT is driving innovation that allows companies to be more agile and gain a competitive edge. These are exciting times for the vAdmins who are at the center of this change. This innovation starts with bridging the gap between traditional virtualization workloads and cloud-enabled workloads based on OpenStack.

Organizations are embracing OpenStack because it allows them to more rapidly scale to meet evolving user demands without sacrificing performance, on a stable and flexible platform, and at a cost-effective level.

As a vAdmin, you might be asking yourself how OpenStack fits in your world of traditional virtualization workloads. The answer is that OpenStack is not a replacement; rather, it is an extension to traditional virtualization platforms.

To help vAdmins get started with OpenStack, we have created a dedicated page with numerous OpenStack resources including a solutions guide that explains the architectural differences between OpenStack and VMware vSphere, as well as an appliance that allows you to quickly run and deploy OpenStack in your VMware vSphere environment.

Visit this OpenStack Resources vAdmin page to learn how to get started with OpenStack in your existing infrastructure today.

by rtona at September 08, 2014 10:39 AM

Chmouel Boudjnah

Dox, a tool that runs Python (and other) tests in a Docker container


Some ideas are so obviously good that everyone wishes they already existed. When Monty started mentioning on the OpenStack development mailing list a tool he was hacking on to run tests inside Docker containers, it was clearly one of those ideas everybody had been thinking about: it would be awesome if it were implemented and started to get used.

The idea of dox, as the name implies, is to behave much like the tox tool, but running the tests in Docker containers instead of virtualenvs.

Testing in the OpenStack world is a bit different from other unit testing. Since OpenStack inherently works with local system components, we have to abstract away from the local developer box to match the system components exactly. In other words, if we run our tests against a ZooKeeper daemon of a specific version, we want to make it sure and easy that exactly this version is installed.

And that’s where Docker can help: you can easily specify different images, and how to build them, making sure those tools are installed when we run our test targets.

There are other issues with tox, encountered in our extensive use of it in the OpenStack world, that we are hoping to solve here. virtualenv has been slow for us, and we have come up with all sorts of hacks to get around it. And as Monty mentioned in his mailing list post, Docker itself does an excellent job of handling caching and reuse; we can easily see, in the future, standard images built by the openstack-infra folks that we know work, validated in upstream openstack-ci and published on Docker Hub, which everyone else (and dox) can use to run tests.

The tool is available on Stackforge here:

with a handy README to get you started:

It’s not quite ready yet, but you can already start running tests with it. If you want a fun project to work on that can help the whole Python development community (and not just OpenStack), come hack with us. We are also on the Freenode IRC servers, in channel #dox.
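To give a feel for the workflow: conceptually you describe the image and the test commands in a dox.yml file. The fragment below is hypothetical; the key names and schema are illustrative guesses, not the tool's settled format:

```yaml
# Hypothetical dox.yml -- keys are illustrative, not a stable schema.
image: ubuntu:14.04     # Docker image the tests run in
commands:
  - python -m pytest    # command executed inside the container
```

Running dox would then pull or build the image and execute the commands inside a container, the way tox runs them inside a virtualenv.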

If you are not familiar with the contribution process of Stackforge/OpenStack, see this wiki page, which should guide you through it:

by chmouel at September 08, 2014 12:56 AM

September 07, 2014


Site to Site VPN in OpenContrail

This article talks about establishing a Site to Site VPN connection with one end being any office/private network and the other end being a private network in the cloud. It targets the OpenStack/OpenContrail environment.

Before reading further please have a look into this article OpenVPN in VM in OpenContrail.

Let’s assume the subnet of the office/private network is <OFFICE_SUBNET> and the subnet of the cloud private network is <CLOUD_SUBNET>. The goal is to establish a Site to Site VPN connection between these two networks.


  1. We need a VM running an OpenVPN server in the cloud private network.
  2. We need a host/VM running an OpenVPN client in the office/private network.
  3. The VM running the OpenVPN server should have a floating IP associated with it.

Configuration changes in OpenContrail

The apply_subnet_host_routes feature has recently been added to OpenContrail. Enable it by adding the line below to the DEFAULTS section of /etc/contrail/api_server.conf (or /etc/contrail/contrail-api.conf):

apply_subnet_host_routes = True

Configuring the cloud network subnet

Create a private network and subnet

$ neutron net-create private
$ neutron subnet-create private <CLOUD_SUBNET> --host_routes type=dict list=true destination=<OFFICE_SUBNET>,nexthop=<VPN_SERVER_VM_IP> destination=0.0.0.0/0,nexthop=<DEFAULT_GATEWAY_IP>

If you have already created a private network and subnet, then you can update the subnet with the host routes

$ neutron subnet-update <SUBNET_ID> --host_routes type=dict list=true destination=<OFFICE_SUBNET>,nexthop=<VPN_SERVER_VM_IP> destination=0.0.0.0/0,nexthop=<DEFAULT_GATEWAY_IP>

Remark: We re-defined the default route (gateway) in the host route list because OpenContrail no longer provides the default router option (code 3) when a classless static route option (code 121) is defined.
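In effect, the instance's DHCP client ends up with a classless static route table (option 121) in which the most specific matching prefix wins. A quick stdlib illustration of that selection logic, with example subnets standing in for the office and gateway addresses:

```python
import ipaddress

# Hypothetical host-route table: (destination prefix, nexthop).
HOST_ROUTES = [
    ("0.0.0.0/0", "10.0.0.1"),        # re-declared default gateway
    ("192.168.50.0/24", "10.0.0.5"),  # office subnet, reached via the OpenVPN server VM
]

def pick_nexthop(dest_ip, routes):
    """Longest-prefix match, as a DHCP option-121 client would apply it."""
    ip = ipaddress.ip_address(dest_ip)
    best = None
    for prefix, nexthop in routes:
        net = ipaddress.ip_network(prefix)
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, nexthop)
    return best[1] if best else None

print(pick_nexthop("192.168.50.7", HOST_ROUTES))  # 10.0.0.5 (office route wins)
print(pick_nexthop("8.8.8.8", HOST_ROUTES))       # 10.0.0.1 (falls back to default)
```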

Setting up a VM with OpenVPN server

The VM running the OpenVPN server should have the IP address of the nexthop defined during the subnet create/update. The OpenStack APIs offer three different solutions to do that:

  1. One way to do this is to create a port with the nexthop IP via the Neutron API, which in our example is <VPN_SERVER_VM_IP>:
$ neutron port-create <NETWORK_ID> --fixed-ip subnet_id=<SUBNET_ID>,ip_address=<VPN_SERVER_VM_IP>

and then create the VM with the --nic port-id=<PORT_ID> option of Nova boot CLI command:

$ nova boot ... --nic port-id=<PORT_ID> ... VM_NAME
  2. The other way is to specify the desired IP when we create the VM through the Nova API, with the option --nic net-id=<NETWORK_ID>,v4-fixed-ip=<VPN_SERVER_VM_IP>, in our example:
$ nova boot ... --nic net-id=<NETWORK_ID>,v4-fixed-ip=<VPN_SERVER_VM_IP> ... VM_NAME
  3. And the last way is to update the subnet with the proper host-route options once the VM with the OpenVPN server has been set up with the delivered IP:
$ neutron subnet-update <SUBNET_ID> --host_routes type=dict list=true destination=<OFFICE_SUBNET>,nexthop=<VPN_SERVER_VM_IP> destination=0.0.0.0/0,nexthop=<DEFAULT_GATEWAY_IP>

Below are the steps, in brief, to set up OpenVPN on the VM. For more details, please refer to the article OpenVPN in VM in OpenContrail.

  1. Create a VM on the cloud private network and associate a floating IP to it.
  2. Install the OpenVPN packages and set up the OpenVPN keys (server and client).
  3. Configure the OpenVPN server configuration file. Add the lines below to it:
   push "route <CLOUD_SUBNET> <NETMASK>"
   client-config-dir /etc/openvpn/ccd
  4. Create a ccd file for the client. Suppose you have configured the client name as ‘client1’; then create a file called /etc/openvpn/ccd/client1 and add the line: iroute <OFFICE_SUBNET> <NETMASK>
  5. Start the OpenVPN server.

Setting up OpenVPN client in the office/private network

You can set up OpenVPN client either on a physical machine or on a virtual machine. Below are the steps in brief.

  1. Install OpenVPN on the host.
  2. Copy all the required OpenVPN keys to the client machine.
  3. Configure the OpenVPN client configuration file and add the IP address of the OpenVPN server.
  4. Start the OpenVPN client.
  5. Configure your private network so that all the traffic to the cloud private network (<CLOUD_SUBNET>) is routed to the OpenVPN client machine.
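Putting steps 2 through 4 together, a minimal client configuration might look like the sketch below; the server address, port, and key file names are placeholders, not values from the original setup:

```
# Minimal OpenVPN client.conf sketch -- all values are placeholders.
client
dev tun
proto udp
remote <FLOATING_IP_OF_OPENVPN_SERVER> 1194   # the server's floating IP
ca ca.crt          # CA certificate copied in step 2
cert client1.crt   # client certificate ('client1' matches the server's ccd file)
key client1.key
```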

Testing the setup

You should now be able to ping and/or ssh into your VMs (or the host machines in the office/private network) directly via their internal IPs (check that you have authorized ICMP from the office/private subnet in the security groups on the cloud private network ports).

by Numan Siddique at September 07, 2014 10:00 PM

September 05, 2014

OpenStack Blog

OpenStack Community Weekly Newsletter (Aug 29 – Sep 5)

Latest Technical Committee Updates

The OpenStack Technical Committee meets weekly to work through requests for incubation, to review technical issues happening in currently integrated projects, and to represent the technical contributors to OpenStack. We have about a month remaining with our current crew and elections coming soon. Read the summary of latest meetings to find out about defcore, gap analysis and projects in incubation.

OpenStack DefCore Process Flow: Community Feedback Cycles for Core [6 points + chart]

DefCore is an OpenStack Foundation Board managed process “that sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack™ products. This definition uses community resources and involvement to drive interoperability by creating the minimum standards for products labeled OpenStack™.” Rob Hirschfeld details in a blog post what “community resources and involvement” entails. Check out the upcoming DefCore Community Meetings Sep 10 & Sep 11.

The Road To Paris 2014 – Deadlines and Resources

During the Paris Summit there will be a working session for the Women of OpenStack to frame up more defined goals and line out a blueprint for the group moving forward. We encourage all women in the community to complete this very short survey to provide input for the group.

Reports from Previous Events

Relevant Conversations

Tips ‘n Tricks

Security Advisories and Notices

Upcoming Events

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

Welcome New Reviewers, Developers and Core Reviewers

Welcome to Trove-core: Amrith Kumar.

Ilia Meerovich Cesar Mojica
John McDonough Bartosz Fic
Gerard Garcia Adrien Vergé
Colleen Murphy Yi Ming Yin
Antoine Abélard Yanping Qu
Miguel Grinberg Timothy Okwii
Brian Moss Sarvesh Ranjan
Karen Noel Rishabh
Saksham Varma Komei Shimamura
Can ZHANG Prasoon Telang
Chirag Shahani Steve Lewis
Christian Fetzer Srinivas Sakhamuri
Juan Zuluaga Patrick Amor
Yukinori Sagara Om Prakash Pandey
Peter Krempa Srini
Matt Kovacs

OpenStack Reactions

Getting a spec all worked out and implemented

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at September 05, 2014 11:15 PM


The Truth and Myths about Mirantis and VMware

VMware’s recent announcement of VMware Integrated OpenStack (VIO) was followed by a flurry of articles, many of which failed to accurately reflect the relationship between Mirantis and VMware, as well as Mirantis’ current viewpoints on VMware’s contributions and commitments to the OpenStack community. 

Yes, VMware and Mirantis had a shaky past… with largely myself as the culprit for that past. But much has transpired since then. I’ve been very involved with our relationship with VMware since we officially partnered in October of last year and have been hands-on with all of our joint integration and go-to-market efforts.

As the firsthand holder of information on the subject matter, I’ve sat down with David Marshall from VMBlog to clarify some of the misconceptions that may have formed around the VMware / Mirantis relationship and our stance on VMware’s launch of VIO.

VMblog:  What is Mirantis’ relationship with VMware like currently?

Boris Renski:  Our relationship is very positive. Mirantis and VMware have had a partnership in place since the OpenStack Summit in Hong Kong to support the integration of Mirantis OpenStack with VMware vCenter Server and VMware NSX technologies. We have a very sizable team of engineers exclusively dedicated to maintaining interoperability between Mirantis OpenStack distribution and VMware’s suite of products.

VMblog:  VMware mentioned that their products are integrated with Mirantis OpenStack (among other distros), how does that work?

Renski:  The Mirantis OpenStack distribution integrates natively with VMware’s vCenter Server and NSX technologies, which can be deployed automatically via the Mirantis Fuel control plane. The integration is officially supported by both Mirantis and VMware.

VMblog:  Do you view VMware as a credible player in OpenStack?

Renski:  VMware has been a player in OpenStack for some time; their latest announcement just cements that. VMware is a Gold Member of the OpenStack Foundation, the same level of sponsorship as Mirantis. They have interoperability partnerships in place not just with Mirantis, but also with other prominent members of the community like Canonical, HP, Piston, Red Hat and SUSE. We look forward to cementing our partnership with VMware even more deeply in the future.

VMblog:  Does OpenStack compete with VMware products?

Renski:  Neither VMware nor OpenStack are a single product, but a suite of solutions. There are definitely areas where the two intersect and compete. However, there are also many areas where the two complement each other. VMware views OpenStack as a tenant-side cloud fabric, capable of gluing together various datacenter infrastructure components. At the same time, VMware itself is one of the vendors providing such best of breed components across compute, storage and networking. This is also exactly the way Mirantis views OpenStack and with this view in mind, there is definitely a great deal of benefit to the end user in leveraging OpenStack to orchestrate VMware environments.

VMblog:  Has OpenStack gained traction among VMware customers?

Renski:  VMware owns enterprise virtualization and private cloud. OpenStack today has matured to the point where we see many large enterprises starting to adopt it. Naturally, as this trend accelerates further, we’ll see more and more enterprise customers looking for ways to use OpenStack to orchestrate their VMware environments. I think that VMware going all in with OpenStack can benefit both a pure-play company like Mirantis and a company with a wider software portfolio like VMware.

VMblog:  What does VMware contribute to OpenStack?

Renski:  VMware has long been a major code contributor to OpenStack and currently has 30+ engineers working full-time to improve the upstream OpenStack codebase. Their work on the OpenStack networking module (Neutron), where they dominate code contributions, has been particularly notable.

VMblog:  How do VMware’s OpenStack contributions compare to those of other ecosystem vendors, including Mirantis?

Renski:  VMware’s contributions to OpenStack Neutron have outpaced those of any other vendor. In overall code contributions, VMware ranks somewhere between the top five and the top ten, depending on how contributions are counted. Mirantis is the third largest code contributor, behind only HP and Red Hat.

The post The Truth and Myths about Mirantis and VMware appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Boris Renski at September 05, 2014 10:54 PM


Webinar: Building an Enterprise Hybrid Cloud (updated)

When you look at the components needed to build an Enterprise level cloud, the task list can be intimidating. Add to that the requirements for creating a hybrid enabled solution and the list can become unsettling. Business needs are clear: enterprises need to maximize efficiencies, minimize costs, and increase agility. Cloud is core to making this real.

by Seth Fox at September 05, 2014 02:00 PM

Rafael Knuth

Google+ Hangout: OpenStack Swift 2.0 - Deep Dive into Storage Policies

Last time John Dickinson was here he talked about the OpenStack Swift 2.0 release and covered...

September 05, 2014 12:35 PM

Trying out WordPress 4.0 on OpenStack

While a good portion of my focus is on OpenStack and related cloud technologies, my most recent background prior to joining the team here was in doing web design and development work for small businesses, nonprofits, and others who needed sites created for them quickly and easily. So while I'm a Drupal fan for a lot of things I do, the ease and simplicity of WordPress led me to use it for a number of projects.

by Jason Baker at September 05, 2014 09:00 AM

Sébastien Han

OpenStack at the CephDays Paris

Save the date (September 18, 2014) and join us at the new edition of the Ceph Days in Paris. I will be talking about the amazing new stuff that happened during this (not yet finished) Juno cycle. Actually I’ve never seen so many patch sets in one cycle :D. Things are going well for Ceph in OpenStack! Deploying Ceph with Ansible will be part of the talk as well.

The full schedule is available; don’t forget to register for the event.

Hope to see you there!

September 05, 2014 08:25 AM

Lars Kellogg-Stedman

Heat Hangout

I ran a Google Hangout this morning on Deploying with Heat. You can find the slides for the presentation online here, and the Heat templates (as well as slide sources) are available on GitHub.

If you have any questions about the presentation, please feel free to ping me on irc (larsks).


by Lars Kellogg-Stedman at September 05, 2014 04:00 AM

Angus Salkeld

How to profile Heat using OSprofile

The OpenStack Heat project has been having some scaling issues, and it really helps to know where your problems are
before trying to solve them. So to help out, here are the instructions to get osprofiler
working with Heat (and the other projects that have support for it).


  • I am assuming you are using devstack
  • Once the inflight patches have landed this will be a lot easier!

Go to the base directory of your projects.

cd cinder
git pull
echo -e "[profiler]\nprofiler_enabled = True\n" >> /etc/cinder/cinder.conf

cd ../python-cinderclient
git review -d 103359

cd ../nova
git review -d
cp etc/nova/api-paste.ini /etc/nova/
echo -e "[profiler]\nprofiler_enabled = True\n" >> /etc/nova/nova.conf

cd ../python-novaclient
git review -d

cd ../heat
git review -d 118115
cp etc/heat/api-paste.ini /etc/heat/
echo -e "[profiler]\nprofiler_enabled = True\ntrace_sqlalchemy = True\n" >> /etc/heat/heat.conf

cd ../python-heatclient
git review -d 118118

cd ../keystone
git review -d 103368
pushd /etc/keystone
echo -e "[profiler]\nprofiler_enabled = True\n" >> /etc/keystone/keystone.conf
mv keystone-paste.ini keystone-paste.ini.orig
wget -O keystone-paste.ini
diff -u keystone-paste.ini.orig keystone-paste.ini
popd

cd ../python-keystoneclient
git review -d 114856

sed -i "s/notification_topics.*/notification_topics = notifications,profiler/" /etc/ceilometer/ceilometer.conf

Note: in the api-paste.ini files above there is a default key “SECRET_KEY” – on anything but a devstack you should change it immediately.
Whatever it is, make sure it is consistent across services and that you provide the same value on the command line (below).
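
For reference, the osprofiler middleware entry in those api-paste.ini files looks roughly like this (the factory path matches osprofiler at the time of writing; treat the exact option names as an assumption and check against the file you copied):

```
[filter:osprofiler]
paste.filter_factory = osprofiler.web:WsgiMiddleware.factory
hmac_keys = SECRET_KEY
enabled = yes
```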

You then need to restart the affected services (apache, heat-*, cinder-*, nova-*, ceilometer-*).

Here is an example run of “heat stack-create” I did:

heat --profile SECRET_KEY stack-create test -f bug1288774/1.yaml 
| id                                   | stack_name | stack_status | creation_time        |
| 1c1a27ac-291e-46ca-bc53-69e026ad9dd1 | test       | _            | 2014-09-04T08:30:24Z |
Trace ID: 105b22f9-f9f0-4526-ac52-79e0dab94c79
To display trace use next command:
osprofiler trace show --html 105b22f9-f9f0-4526-ac52-79e0dab94c79 

And the result?
Have a look here:
I think that’s quite neat!

BTW: the above trace is for a stack with a single server.

by ahsalkeld at September 05, 2014 01:51 AM

September 04, 2014

OpenStack Blog

Latest Technical Committee Updates

The OpenStack Technical Committee meets weekly to work through requests for incubation, to review technical issues happening in currently integrated projects, and to represent the technical contributors to OpenStack. We have about a month remaining with our current crew and elections coming soon. What have we been up to? Here’s an overview of current activities.

User Committee Nominees

We’ve recently put together some more nominees for a User Committee representative to replace Ryan Lane. These nominees are willing to serve by consolidating user requirements, guiding the dev teams when user feedback is needed, tracking OpenStack deployments and usage (typically done through the annual user survey), and working with user groups around the world.

  • Beth Cohen, Verizon Cloud technology strategist
  • Chet Burgess, Metacloud Chief architect
  • Andrew Mitry, Comcast cloud architect
  • Jonathan Proulx, Massachusetts Institute of Technology (MIT) Senior technical architect
  • Jacob Walcik, Rackspace Principal solutions architect

We’re happy to report that Jonathan Proulx will join the User Committee, and we’re definitely going to ask for more ways for the others to get involved. Thanks to all for their willingness to get involved.

Milestone Gap Analyses for Current Teams

With each milestone for the Juno release, currently integrated teams had a list of technical items to work through based on TC input. Networking (neutron), Databases (trove), Telemetry (ceilometer), Orchestration (heat), and Images (glance) all addressed technical concerns this release. We also discussed with Object Storage (swift) their current plans and the potential for more alignment.

  • Networking: the neutron team focused on database migrations, test coverage, feature parity with nova-network, and documentation for open source driver options.
  • Images: the glance team is addressing critical testing gaps.
  • Databases: the trove team addressed concerns about test coverage and documentation as well as CI and bug triaging.
  • Dashboard: the horizon team completed its mission statement and is refreshing integration tests while splitting its repos into a separate toolkit from the Django app itself.
  • Telemetry: the ceilometer team gave the TC an update on Gnocchi, a separate experimental approach to storing and retrieving time series data and resource representations, such as what ceilometer collects when, say, an instance is resized or migrated to another compute host.
  • Orchestration: the heat team's effort is all about improving functional testing and upgrade testing and stating their mission statement.

Integration and Incubation Discussions

This six-month period revealed these incubation requests: Designate, Rally, and Manila.

Designate works on providing access to DNS services and was accepted for incubation in the Juno release.

The Manila project was originally based on the Block Storage project cinder; it provides file-based shared storage services with coordinated access and is designed to support multiple storage backends. As an example use case, an end user creates an NFS share with the REST API, makes sure it is accessible on the correct network, and sets up the NetApp (or EMC, or GlusterFS, or another storage driver) volume backend.

The Rally project provides SLA management for production OpenStack clouds. While this mission is helpful for performance testing and keeping live clouds running smoothly, the TC concluded it does not need to be a part of OpenStack, rather it is a project that can be adopted outside of OpenStack.

The lead-up to Juno has also been busy with integration discussions for these teams: Barbican, Ironic, and Zaqar (you may remember the team under its former name, Marconi).

Barbican continues to be incubated, and both Zaqar and Ironic are being discussed so feel free to follow along at next week’s TC meeting.

DefCore: Community feedback and technical leadership for the layers of OpenStack

The TC had a joint meeting again this past week to talk about the definition of OpenStack. Discussion in meetings and online continues with recent communications from Rob Hirschfeld and Sean Dague. These visuals and processes are helping us make our community efforts what we want them to be.

Conversations to Follow

We’re also deep diving into discussions about programs adopting projects, and into guidelines for adopting new official projects. We’ve had lively discussions about the scope of programs on the openstack-dev mailing list, with over 100 responses, so it’s definitely a topic of interest.

As a community we are seeking cross-project themes or initiatives, so please follow along on the openstack-dev mailing list.

We hope these summary posts are helpful for sipping from the firehose. Let us know your thoughts through our many feedback channels.

by Anne Gentle at September 04, 2014 04:32 PM

September 03, 2014

Tesora Corp

Red Hat’s Perspective on Building an OpenStack Ecosystem

Here are some highlights from Mike Werner's Trove Day presentation. Mike is the senior director of global technology ecosystems at Red Hat, and his talk was on the importance of building ecosystems.

  • Would you rather own 100% of a grape or a portion of a watermelon? OpenStack DBaaS has the potential to be a large watermelon in terms of its market.

  • The steps required to translate innovation into products and platforms that people can build on include: participate, integrate, stabilize (“productize”) with an ecosystem, and deliver…

  • … And a viable ecosystem around OpenStack is being driven by: the Internet of Things, big data, database-driven applications, and the OpenStack community itself.

  • While DBaaS has been something of a Holy Grail, we think OpenStack will be the enabler, with the roots of solving that in this room today.

Here’s a link to the video of Mike’s presentation, with my opening remarks. Grab some popcorn. We’ll continue to post additional sessions in this space.

by 86 at September 03, 2014 09:06 PM

Rob Hirschfeld

VMware Integrated OpenStack (VIO) is a smart move: it’s like using a Volvo to tow your ski boat

I’m impressed with VMware’s VIO (beta) play and believe it will have a meaningful positive impact in the OpenStack ecosystem.  In the short-term, it paradoxically both helps enterprises stay on VMware and accelerates adoption of OpenStack.  The long term benefit to VMware is less clear.

From VWVortex

Sure, you can use a Volvo to tow a boat

Why do I think it’s good tactics?  Let’s explore an analogy….

My kids think owning a boat will be super fun with images of ski parties and lazy days drifting at anchor with PG13 umbrella drinks; however, I’ve got concerns about maintenance, cost and how much we’d really use it.  The problem is not the boat: it’s all of the stuff that goes along with ownership.  In addition to the boat, I’d need a trailer, a new car to pull the boat and driveway upgrades for parking.  Looking at that, the boat’s the easiest part of the story.

The smart move for me is to rent a boat and trailer for a few months to test my kids’ interest.  In that case, I’m going to be towing the boat using my Volvo instead of going “all in” and buying that new Ferd 15000 (you know you want it).  As a compromise, I’ll install a hitch in my trusty sedan and use it gently to tow the boat.  It’s not ideal and causes extra wear to the transmission, but it’s a very low risk way to explore the boat-owning lifestyle.

Enterprise IT already has the Volvo (VMware vCenter) and likely sees calls for OpenStack as the illusion of cool ski parties without regard for the realities of owning the boat.  Pulling the boat for a while (using OpenStack on VMware) makes a lot of sense to these users.  If the boat gets used, then they will buy the truck and accessories (move off VMware).  Until then, they’re still learning about the open source boating lifestyle.

Putting open source concerns aside, this helps VMware lead the OpenStack play for enterprises, but it may ultimately backfire if they have not set up their long game to keep the customers.

by Rob H at September 03, 2014 07:55 PM

Daniel P. Berrangé

Announce: gerrymander 1.4 “On no account mention the word Macbeth” – a client API and command line tool for gerrit

I’m pleased to announce the availability of a new release of gerrymander, version 1.4. Gerrymander provides a python command line tool and APIs for querying information from the gerrit review system, as used in OpenStack and many other projects. You can get it from pypi

# pip install gerrymander

Or straight from GitHub

# git clone git://

If you’re the impatient type, then go to the README file which provides a quick start guide to using the tool.

This release contains a mixture of bug fixes and new features

  • Add command for reporting potentially approvable patches
  • Add command for reporting potentially expirable patches
  • Allow todo list commands to be filtered on filename
  • Remove hardcoded #!/usr/bin/python3 lines
  • Fix traceback on casting unicode strings
  • Allow filtering reports based on topic
  • Fix typo in keyfile setting in example config

Thanks to everyone who contributed patches that went into this new release


by Daniel Berrange at September 03, 2014 03:29 PM

Tesora Corp

Short Stack: OpenStack's enterprise future, OpenStack's role in a Software-defined economy and simplifying OpenStack deployment

short stack_b small_0_0.jpgWelcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Here we go with this week's links:

Taking a closer look at OpenStack and its future in the enterprise | FierceCIO

As OpenStack takes hold in the enterprise, it's worth taking a step back and looking at the impact. Like any open source project, especially one that's still relatively new, there are some implementation hiccups, but there is no denying its growing popularity.

OpenStack Taking Its Place in the Software-Defined Economy | The OpenStack Blog

As OpenStack grows in popularity inside the enterprise, it's worth noting its place in the growing software-defined economy. This means that as companies look for an edge in an increasingly competitive marketplace, rapid software development can be a key competitive differentiator and OpenStack can be the glue that holds it all together.

Are we heading towards a case of too many SDDC stacks? | IDC Insight

Speaking of software-defined data centers, IDC wonders if the ever-increasing competition among proprietary vendors to be your SDDC stack is getting a bit confusing for buyers. Whatever happens, it shouldn't have an impact on the underlying OpenStack operating layer, and regardless of IDC's concerns, the market will very likely work itself out.

Deploy OpenStack via one of these options and skip a management headache | TechRepublic

As the FierceCIO story above points out, there are still issues related to deployment of OpenStack. This article looks at a couple of ways that non-technical companies can deploy OpenStack without a lot of technical expertise. Look for more of these kinds of products.

Red Hat Counters VMware With OpenStack Software Appliance | InformationWeek

This week's links all fit together in a way and this one relates to easy deployment and growing competition among the different OpenStack vendors. As VMware showed an increasing interest in OpenStack at its conference last week, Red Hat did not sit idly by. Recognizing that in fact companies are looking for easier ways to implement OpenStack, Red Hat released an OpenStack appliance to help companies deploy it more easily.

OpenStack Trove Day Videos | Tesora

And speaking of Red Hat: the first Trove Day video is posted. Mike Werner, Senior Director of Global Technology Ecosystems at Red Hat, explores the importance of building and sustaining an ecosystem for OpenStack. Tesora CEO and Founder Ken Rugg introduces Mike and welcomes attendees to the first annual OpenStack Trove Day, August 19 in Cambridge, MA.

by 693 at September 03, 2014 01:09 PM

The best 5 OpenStack guides you might have missed

We've gathered the best OpenStack howtos, guides, tutorials, and tips published in August 2014 into this handy collection.

by Jason Baker at September 03, 2014 09:00 AM

September 02, 2014

Andy Hill

Operating OpenStack: Monitoring RabbitMQ

At the OpenStack Operators meetup the question was asked about monitoring issues that are related to RabbitMQ.  Lots of OpenStack components use a message broker and the most commonly used one among operators is RabbitMQ. For this post I’m going to concentrate on Nova and a couple of scenarios I’ve seen in production.

(Diagram: message flow between Nova services and their RabbitMQ queues)

It’s important to understand the flow of messages amongst the various components and break things down into a couple of categories:

  • Services which publish messages to queues (arrow pointing toward the queue in the diagram)
  • Services which consume messages from queues (arrow pointing out from the queue in the diagram)

It’s also good to understand what actually happens when a message is consumed. In most cases, the consumer of the queue is writing to a database.

An example would be an instance reboot: nova-api publishes a message to the compute node’s queue. The nova-compute service running on that node polls for messages, receives the reboot request, issues the reboot to the virtualization layer, and updates the instance’s state to rebooting.
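
Purely as an illustration of that flow (an in-process queue standing in for RabbitMQ, and a dict standing in for the Nova database; none of this is real Nova code):

```python
import queue

# Stand-ins: one compute node's message queue and the instances table.
compute_queue = queue.Queue()
db = {"instance-1": {"state": "active"}}

def api_publish(instance_id):
    """nova-api side: publish a reboot request to the compute node's queue."""
    compute_queue.put({"method": "reboot", "instance_id": instance_id})

def compute_consume():
    """nova-compute side: poll the queue and act on one message."""
    msg = compute_queue.get()
    if msg["method"] == "reboot":
        # ...here the real service would call the virtualization layer...
        db[msg["instance_id"]]["state"] = "rebooting"
    compute_queue.task_done()

api_publish("instance-1")
compute_consume()
print(db["instance-1"]["state"])  # rebooting
```

The point of the model is scenario 2 below: if api_publish is called faster than compute_consume runs, the queue depth grows without bound.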

There are a couple of scenarios queue related issues manifest:

  1. Everything’s broken – easy enough, rebuild or repair the RabbitMQ server. This post does not focus on this scenario because there is a considerable amount of material around hardening RabbitMQ in the OpenStack documentation.
  2. Everything is slow and getting slower – this often points to a queue being published to at a greater rate than it can be consumed. This scenario is more nuanced and requires an operator to know a couple of things: which queues are shared among many services, and what the publish/consume rates are during normal operations.
  3. Some things are slow/not happening – some instance reboot requests go through, some do not. Generally speaking these operations are ‘last mile’ operations that involve a change on the instance itself. This scenario is generally restricted to a single compute node, or possibly a cabinet of compute nodes.

Baselines for RabbitMQ queue size and consumption rate are very valuable to have in scenarios 2 and 3. Without a baseline, it’s difficult to know whether the behavior is outside normal operating conditions.

There are a couple of tools that can help you out:

  • Diamond RabbitMQ collector (code, docs) – Sends useful metrics from RabbitMQ to Graphite; requires the RabbitMQ management plugin
  • RabbitMQ HTTP API – This enables operators to retrieve specific queue statistics instead of a view into an entire RabbitMQ server.
  • Nagios Rabbit Compute Queues – This is a script used with Nagios to check specified compute queues, which helps determine if operations to a specific compute node may get stuck (scenario 3 above). Usually a bounce of the nova-compute service clears these. The script looks for a local config file granting access to the RabbitMQ management plugin; an example config file is in the gist.
  • For very real time/granular insight, run the following command on the RabbitMQ server:
    •   watch -n 0.5 'rabbitmqctl -p nova list_queues | sort -rnk2 | head'
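
As a sketch of using the management plugin's HTTP API, the following polls per-queue depth and flags backlogs. The /api/queues endpoint and the guest/guest defaults are standard for the management plugin, but the host, port, and threshold here are assumptions to adjust for your deployment:

```python
import base64
import json
import urllib.request

def flag_backed_up(queues, threshold):
    """Return names of queues whose ready-message count exceeds threshold."""
    return [q["name"] for q in queues
            if q.get("messages_ready", 0) > threshold]

def fetch_queues(host="localhost", port=15672, user="guest", password="guest"):
    """Fetch queue statistics from the RabbitMQ management HTTP API."""
    req = urllib.request.Request(f"http://{host}:{port}/api/queues")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage against a live management API:
#   for name in flag_backed_up(fetch_queues(), threshold=100):
#       print("queue backed up:", name)
```

Compare the flagged queues against your baseline before concluding anything; a queue that is always deep may simply be busy, not broken.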

Here is an example chart that can be produced with the RabbitMQ diamond collector which can be integrated into an operations dashboard:

Baseline monitoring of the RabbitMQ servers themselves isn’t enough. I recommend an approach that combines the following:

  • Using the RabbitMQ management plugin (required)
  • Nagios checks on specific queues (optional)
  • Diamond RabbitMQ collector to send data to Graphite
  • A dashboard combining statistics across RabbitMQ installations

by andyhillky at September 02, 2014 08:44 PM


Make No Small Plans – OpenStack Silicon Valley

Cross posted from the OpenStack Silicon Valley Blog

Guest Post: By Florian Leibert

I’m often asked if Mesos, with its use of containers, is better than OpenStack and virtual machines (VMs) for cloud computing.

I think it’s the wrong question. You can actually run Mesos on any cloud that is provisioned using OpenStack, and that’s exactly how many of our customers are deploying it today. Of course, Mesos will also run directly on bare metal, and so that’s another choice as well.

What choice you make in how you deploy Mesos depends on what you want to accomplish and the role that VMs play in your architecture. If your datacenter or cloud uses OpenStack as its provisioning manager, then the easiest way to deploy Mesos may very well be on top of a cloud of VMs provisioned using those tools.

But if you’re building a new datacenter, refactoring an old datacenter, starting a greenfield project, or looking to build apps in the same way that Google, Facebook and Twitter do, then you might want to deploy Mesos and use containers directly on bare metal. A bare metal deployment will eliminate “VM sprawl” and you’ll notice an improvement in performance over VMs. It’s really your choice. That’s the power of Mesos: to users of Mesos, the underlying architecture is entirely transparent. Mesos will combine all of the resources in the datacenter or cloud into a single pool of resources, no matter how those resources are provisioned—whether virtual or physical.

It seems clear to me, however, that the tide of history is flowing in the direction of containers (in a Mesos framework) running on bare metal, leaving VMs increasingly stranded on the beach when organizations are looking to run distributed applications at massive scale.

This is Silicon Valley. Who doesn’t aspire to massive scale? Make no small plans is the rule around here. When I was at Twitter, we moved to Mesos and containers as our plans got big much faster than we had anticipated or built for. When I moved to Airbnb, we started the greenfield data infrastructure projects with Mesos. To the extent that OpenStack is built around managing workloads in virtual machines, it faces increasing challenges that will hinder adoption for any organization making big plans about its future. For other workloads, it may be the right choice – but not massive scale workloads.

Google arguably runs the most massive computing infrastructure in the world. They also pioneered container technology in Linux. Why does Google’s architecture avoid VMs?

Technically VMs were designed to solve a different problem than scale. They became popular as a way for companies to consolidate more workloads on fewer servers and slash capital spending budgets. Servers, following Moore’s Law, have bulked up massively over the past decade. Virtual machines allowed you to run lots of applications on bigger and bigger servers. Everyone wins except the person in charge of managing all those applications on individual VMs. Scaling up can be a nightmare. Talk about complexity.

The cool thing about Mesos is that it reverses the VM paradigm. Instead of splitting up the applications to run on multiple machines, Mesos pools all your systems and presents them to the application as a single resource – one machine. From a design perspective, it makes running apps on your cloud or datacenter conceptually the same as running them on a single (very big) desktop. This approach brings many of the hardware utilization benefits of VMs (although we’re even more efficient) but without all the complexity.

OpenStack will play a big role in organizations that need virtual machines. We embrace this as a starting point: supporting deployments on OpenStack is important to our customers. But our customers are also expecting us to guide them toward the future, where all applications are distributed applications.

In the future, I see more and more traditional enterprises building applications like Silicon Valley companies do. Silicon Valley companies, increasingly, are building applications from the start as distributed systems designed for scale. They use distributed analytics systems, like Hadoop and Spark, run distributed databases like Cassandra, and use scheduling and orchestration systems like Mesos and Marathon. Traditional enterprises can experience these systems on OpenStack, but I predict that over time we will see a massive shift away from VMs because VMs were designed for something different.

Florian (Flo) Leibert is the CEO and Co-Founder of Mesosphere

The post Make No Small Plans – OpenStack Silicon Valley appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Guest Post at September 02, 2014 07:12 PM

Rob Hirschfeld

OpenStack DefCore Process Flow: Community Feedback Cycles for Core [6 points + chart]

If you’ve been following my DefCore posts, then you already know that DefCore is an OpenStack Foundation Board managed process “that sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack™ products. This definition uses community resources and involvement to drive interoperability by creating the minimum standards for products labeled OpenStack™.”

In this post, I’m going to be very specific about what we think “community resources and involvement” entails.

The draft process flow chart was provided to the Board at our OSCON meeting without additional review.  The chart below boils down to a few key points:

  1. We are using the documents in the Gerrit review process to ensure that we work within the community processes.
  2. Going forward, we want to rely on the technical leadership to create, cluster and describe capabilities.  DefCore bootstrapped this process for Havana.  Further, capabilities are defined by tests in Tempest, so test coverage gaps (like Keystone v2) translate into Core gaps.
  3. We are investing in data driven and community involved feedback (via Refstack) to engage the largest possible base for core decisions.
  4. There is a “safety valve” for vendors to deal with test scenarios that are difficult to recreate in the field.
  5. The Board is responsible for approving the final artifacts based on the recommendations.  By having a transparent process, community input is expected in advance of that approval.
  6. The process is time sensitive.  There’s a need for the Board to produce Core definition in a timely way after each release and then feed that into the next one.  Ideally, the definitions will be approved at the Board meeting immediately following the release.
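Point 2 above can be sketched in code. This is an illustrative model, not the actual DefCore artifact schema: each capability is backed by a list of Tempest test identifiers (the test paths shown are examples, not a definitive Core list), and a capability with no test coverage cannot become Core, which is exactly the Keystone v2 gap the post mentions.

```python
# Illustrative sketch (assumed structure, not the real DefCore file
# format): capabilities map to the Tempest tests that prove them.
capabilities = {
    "compute-servers-create": [
        "tempest.api.compute.servers.test_create_server",  # example test path
    ],
    "identity-v2-tokens": [],  # no Tempest coverage -> a Core gap
}

# Only capabilities with must-pass tests are eligible for Core.
core_eligible = [name for name, tests in capabilities.items() if tests]
gaps = [name for name, tests in capabilities.items() if not tests]

print("eligible:", core_eligible)
print("gaps:", gaps)
```

The sketch shows why the process treats test coverage as a gating input: the Board can only define Core from capabilities the technical community has made verifiable.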

DefCore Process Draft

The process shows how the key components (designated sections and capabilities) start from the previous release’s version, and how the DefCore committee manages the update process.  Community input is a vital part of the cycle.  This is especially true for identifying actual use of the capabilities through the Refstack data collection site.

  • Blue is for Board activities
  • Yellow is for user/vendor community activities
  • Green is for technical community activities
  • White is for process artifacts

This process is very much in draft form and any input or discussion is welcome!  I expect DefCore to take up formal review of the process in October.

by Rob H at September 02, 2014 05:46 PM