August 26, 2016

Ben Nemec

Undercloud Testing in a Standalone VM

There are times when it can be handy to test an undercloud install in a standalone VM. For example, if you're working on installing a new service or upgrading the undercloud, you can often make progress in a lighter-weight single-VM environment. Obviously the undercloud won't be usable for much if it's not part of a full virt environment, but you can at least check that a service is running at the end of the install and do some basic sanity checks.

It's not terribly difficult to do this. The biggest hurdle to overcome is that most VMs have only a single network interface attached, while the undercloud install requires at least two. To get around this, create a dummy interface:

sudo ip link add dummy0 type dummy
sudo ip link set dummy0 up

You could also name this eth1 so it matches the default undercloud configuration, but I like the explicit reminder that it is a fake interface.

Then you can fire up the Undercloud Configuration Wizard or edit undercloud.conf by hand and set the provisioning interface (local_interface) to dummy0. With that done, you should be able to run the rest of the undercloud install as normal.
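For reference, a minimal undercloud.conf tweak for this setup might look like the following; only the interface line needs to change from the defaults:

```ini
[DEFAULT]
# use the dummy device created above as the (fake) provisioning interface
local_interface = dummy0
```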

by bnemec at August 26, 2016 03:49 PM

Great new technical guides, tutorials and documentation for OpenStack

To help you find the best of these, we go on the hunt every month for the best community-created OpenStack how-tos published in the previous month.

by Jason Baker at August 26, 2016 07:00 AM

Giulio Fidente

Ceph, TripleO and the Newton release

Time to roll up some notes on the status of Ceph in TripleO. The majority of these functionalities were available in the Mitaka release too, but the examples use code from the Newton release, so they might not apply identically to Mitaka.

The TripleO default configuration

No default is going to fit everybody, but we want to know what the default is to improve from there. So let's try and see:

uc$ openstack overcloud deploy --templates tripleo-heat-templates -e tripleo-heat-templates/environments/puppet-pacemaker.yaml -e tripleo-heat-templates/environments/storage-environment.yaml --ceph-storage-scale 1
Deploying templates in the directory /home/stack/example/tripleo-heat-templates
Overcloud Deployed

Monitors go on the controller nodes, one per node, though the above command deploys only a single controller. The first interesting thing to point out is:

oc$ ceph --version
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)

Jewel! Kudos to Emilien for bringing support for it into puppet-ceph. Continuing our investigation, we notice the OSDs go on the cephstorage nodes and are backed by the local filesystem, as we didn't tell TripleO to do otherwise:

oc$ ceph osd tree
-1 0.03999 root default
-2 0.03999     host overcloud-cephstorage-0
 0 0.03999         osd.0                         up  1.00000          1.00000

Notice we got SELinux covered:

oc$ ls -laZ /srv/data
drwxr-xr-x. ceph ceph system_u:object_r:ceph_var_lib_t:s0 .

And use CephX with autogenerated keys:

oc$ ceph auth list
installed auth entries:

        key: AQC2Pr9XAAAAABAAOpviw6DqOMG0syeEYmX2EQ==
        caps: [mds] allow *
        caps: [mon] allow *
        caps: [osd] allow *
        key: AQC2Pr9XAAAAABAAA78Svmmt+LVIcRrZRQLacw==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics

But which OpenStack service is using Ceph? The storage-environment.yaml file has some information:

uc$ grep -v '#' tripleo-heat-templates/environments/storage-environment.yaml | uniq

   OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
   OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
   OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml

   CinderEnableIscsiBackend: false
   CinderEnableRbdBackend: true
   CinderBackupBackend: ceph
   NovaEnableRbdBackend: true
   GlanceBackend: rbd
   GnocchiBackend: rbd

The registry lines enable the Ceph services, while the parameters set Ceph as the backend for Cinder, Nova, Glance and Gnocchi. These can be configured to use other backends; see the comments in the environment file. Regarding the pools:

oc$ ceph osd lspools
0 rbd,1 metrics,2 images,3 backups,4 volumes,5 vms,

The replica size defaults to 3, but we only have a single OSD, so the cluster will never reach HEALTH_OK:

oc$ ceph osd pool get vms size
size: 3

Good to know, now a new deployment with more interesting stuff.

A more realistic scenario

What makes it "more realistic"? We'll have enough OSDs to cover the replica size. We'll use physical disks for our OSDs (and journals) rather than the local filesystem. We'll cope with a node that has a different disk topology, and we'll decrease the replica size for one of the pools.

Define a default configuration for the storage nodes, telling TripleO to use sdb for the OSD data and sdc for the journal, via the puppet-ceph osds parameter:

    parameter_defaults:
      ExtraConfig:
        ceph::profile::params::osds:
          '/dev/sdb':
            journal: '/dev/sdc'

For the node which has two rotational disks (instead of a single one), we'll need a node-specific map. First get its system-uuid from the Ironic introspection data, substituting the Ironic node ID of that node:

uc$ openstack baremetal introspection data save <node-id> | jq .extra.system.product.uuid

then create the node-specific map, keyed by that system-uuid (shown here as a placeholder):

    OS::TripleO::CephStorageExtraConfigPre: tripleo-heat-templates/puppet/extraconfig/pre_deploy/per_node.yaml

    NodeDataLookup: >
        {"<SYSTEM-UUID>":
          {"ceph::profile::params::osds":
            {"/dev/sdb": {"journal": "/dev/sdd"},
             "/dev/sdc": {"journal": "/dev/sdd"}}}}

Finally, to override the replica size (and, why not, the PG numbers) of the "vms" pool, where the Nova ephemeral disks go by default:

    parameter_defaults:
      CephPools:
        vms:
          size: 2
          pg_num: 128
          pgp_num: 128
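The pg_num above follows the usual Ceph rule of thumb: (number of OSDs * 100) / replica size, rounded up to a power of two and then divided across the pools. A tiny helper (hypothetical, not part of TripleO) makes the arithmetic explicit:

```shell
# Rule-of-thumb placement group count: (OSDs * 100) / replicas,
# rounded up to the next power of two.
pg_count() {
    local osds=$1 replicas=$2
    local target=$(( osds * 100 / replicas ))
    local pg=1
    while [ "$pg" -lt "$target" ]; do
        pg=$(( pg * 2 ))
    done
    echo "$pg"
}

pg_count 4 2    # 4 OSDs with replica size 2 -> 256
```

With the four OSDs of this deployment and a replica size of 2, that suggests 256 PGs in total, which, split across a handful of pools, lands in the same ballpark as the 128 used above.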

We also want to clear and prepare all the non-root disks with a GPT label, which will allow us, for example, to repeat the deployment multiple times reusing the same nodes. The implementation of the disk cleanup script can vary, but we can use a sample script and wire it to the overcloud nodes via NodeUserData:

uc$ curl -O

    OS::TripleO::NodeUserData: ceph_wipe_disk.yaml

    ceph_disks: "/dev/sdb /dev/sdc /dev/sdd"
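Just as a sketch of what that wiring looks like: the wipe template follows the usual TripleO firstboot pattern, zapping each disk with sgdisk so it gets a fresh GPT label. Treat the details below as illustrative rather than the exact contents of ceph_wipe_disk.yaml:

```yaml
heat_template_version: 2014-10-16

parameters:
  ceph_disks:
    type: string
    description: space-separated list of disks to wipe

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: wipe_disk}

  wipe_disk:
    type: OS::Heat::SoftwareConfig
    properties:
      config:
        str_replace:
          template: |
            #!/bin/bash
            # zap each disk and lay down a fresh GPT label
            for disk in $disks; do
              sgdisk -Z $disk
            done
          params:
            $disks: {get_param: ceph_disks}

outputs:
  OS::stack_id:
    value: {get_resource: userdata}
```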

All the above environment files could have been merged into a single file, but we split them out for clarity. Now the new deploy command:

uc$ openstack overcloud deploy --templates tripleo-heat-templates -e tripleo-heat-templates/environments/puppet-pacemaker.yaml -e tripleo-heat-templates/environments/storage-environment.yaml --ceph-storage-scale 3 -e ceph_pools_config.yaml -e ceph_mynode_disks.yaml -e ceph_default_disks.yaml -e ceph_wipe_env.yaml
Deploying templates in the directory /home/stack/example/tripleo-heat-templates
Overcloud Deployed

Here is our OSD tree, with two OSDs running on the node that has two rotational disks (sharing the same journal disk):

oc$ ceph osd tree
-1 0.03119 root default
-2 0.00780     host overcloud-cephstorage-1
 0 0.00780         osd.0                         up  1.00000          1.00000
-3 0.01559     host overcloud-cephstorage-2
 1 0.00780         osd.1                         up  1.00000          1.00000
 2 0.00780         osd.2                         up  1.00000          1.00000
-4 0.00780     host overcloud-cephstorage-0
 3 0.00780         osd.3                         up  1.00000          1.00000

and the custom PG/size values for the "vms" pool:

oc$ ceph osd pool get vms size
size: 2
oc$ ceph osd pool get vms pg_num
pg_num: 128

Another simple customization could have been to set the journal size. For example:

      ceph::profile::params::osd_journal_size: 1024

We did not provide any customization for the crushmap, but a recent addition from Erno makes it possible to disable osd_crush_update_on_start, so that any crushmap customization becomes possible after the deployment is finished.

We also did not deploy the RadosGW service, as it is still a work in progress, expected for the Newton release. Submissions for its inclusion are under review.

We're also working on automating the upgrade from the Ceph/Hammer release deployed with TripleO/Mitaka to Ceph/Jewel, installed with TripleO/Newton. The process will be integrated with the OpenStack upgrade and, again, the submissions are under review as a series.

For more scenarios

The mechanism recently introduced in TripleO to build composable roles, discussed in Steven's blog post, makes it possible to test a complete Ceph deployment with a single controller node too (hosting the OSD service as well), just by adding OS::TripleO::Services::CephOSD to the list of services deployed on the controller role.
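Concretely, that means appending the service to the controller's service list in the roles data. The sketch below follows the Newton-era roles_data.yaml layout, with the list truncated for brevity:

```yaml
- name: Controller
  ServicesDefault:
    - OS::TripleO::Services::CephMon
    - OS::TripleO::Services::CephClient
    - OS::TripleO::Services::CephOSD    # colocate the OSDs on the controller
    # ... the rest of the controller services
```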

And if the above still wasn't enough, TripleO continues to support configuring OpenStack with a pre-existing, unmanaged Ceph cluster. To do so, we'll want to customize the parameters in puppet-ceph-external.yaml and pass that file as an argument to the deploy command instead. For example:

    OS::TripleO::Services::CephExternal: tripleo-heat-templates/puppet/services/ceph-external.yaml

    # NOTE: These example parameters are required when using Ceph External and must be obtained from the running cluster
    #CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
    #CephClientKey: 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ=='
    #CephExternalMonHost: ','

    # the following parameters enable Ceph backends for Cinder, Glance, Gnocchi and Nova
    NovaEnableRbdBackend: true
    CinderEnableRbdBackend: true
    CinderBackupBackend: ceph
    GlanceBackend: rbd
    GnocchiBackend: rbd
    # If the Ceph pools which host VMs, Volumes and Images do not match these
    # names OR the client keyring to use is not named 'openstack',  edit the
    # following as needed.
    NovaRbdPoolName: vms
    CinderRbdPoolName: volumes
    GlanceRbdPoolName: images
    GnocchiRbdPoolName: metrics
    CephClientUserName: openstack
    # finally we disable the Cinder LVM backend
    CinderEnableIscsiBackend: false

Come help in #tripleo @ freenode and don't forget to check the docs! Some related topics are described there, for example how to set the root device via Ironic for nodes with multiple disks, or how to push additional arbitrary settings into ceph.conf.

by Giulio Fidente at August 26, 2016 03:00 AM

August 25, 2016

Carlos Camacho

Debugging submissions errors in TripleO CI

Landing upstream submissions might be hard if you are not passing all the CI jobs that try to check that your code actually works.

Let's assume that CI is working properly, without any infra issues or errors introduced by mistake by other submissions. In that case, we might still end up with something like:

The first thing we should do is double-check the status of all the other jobs on the TripleO CI status page. This can be checked at the following site:

Also, we can get the jobs status by checking the Zuul dashboard.

Or checking the TripleO test cloud nodepool.

After checking that other jobs are passing CI, let's find out why ours is not.

For each job the folder structure should be similar to:

[TXT]  console.html
[DIR]  logs/
  [DIR]  overcloud-cephstorage-0/
  [DIR]  overcloud-controller-0/
  [DIR]  overcloud-novacompute-0/
  [   ]  postci.txt.gz
  [DIR]  undercloud/

You can check the deployment status in the console.html file, where you will see the result of all the deployment steps executed during the CI job.

If you have, for example, a failed deployment, you can check postci.txt.gz to get the actual standard error output from the deployment.

The overcloud-cephstorage-0, overcloud-controller-0 and overcloud-novacompute-0 folders contain the content of /var from each node, including all the service logs.

Another useful tip is to fetch the whole job logs folder with wget and crawl it for the string "Error":

# Get the CI job logs folder, i.e. using the job URL.
wget -e robots=off -r --no-parent
grep -iR "Error: " *

You will probably find something pointing out an error, which will hopefully give you clues about the next steps to fix it and land your submission as soon as possible.
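When a plain grep turns up too much noise, ranking the files by the number of error lines shows where to look first. This small helper is hypothetical and nothing TripleO-specific:

```shell
# List files under a directory ranked by how many "Error: " lines they contain.
rank_errors() {
    grep -iRc "Error: " "$1" 2>/dev/null | awk -F: '$NF > 0' | sort -t: -k2 -rn
}

# e.g.: rank_errors logs/
```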

by Carlos Camacho at August 25, 2016 12:00 AM

August 24, 2016

OpenStack Superuser

How to make your OpenStack Summit talk a big success

You prepared, you submitted, you were accepted; congratulations! The OpenStack community is intelligent and engaged, so expectations are always high. Whether this is your 50th or your first talk at an OpenStack Summit, here are five little ways to make sure your talk is a success.

Focus on the nonobvious

Assume your audience is smart and that they’ve heard a talk about your subject before. Even if it’s a 101 talk where your goal is educating about the basics, what can you say that will be unique to your presentation? What could they not find out by Googling your topic? Make sure to present something new and unexpected.

A good presentation sells better than a sales pitch

Unfortunately, the quickest way to empty a room, particularly in the OpenStack community, is to use talk time to push a service or product. This might conflict with company expectations; someone probably wants to see an ROI on your talk and maybe even sent over talking points. Instead, create interest in your company or product by being an outstanding representative and demonstrating smarts, innovation and the ability to overcome the inevitable challenges. The "sales pitch" is not what you say about a product; it is you and how you present.

Shorten your career path story

It’s very common for talks to begin with “first, a little about me,” which often sounds like reading a resume. While this can create an audience connection, it eats up valuable presentation time and takes the focus off the topic. Instead, share only the relevant pieces of your career to set up your expertise and the audience’s expectations.

Take a look at the difference between these examples:

Frequently done: “My name is Anne and I’m currently a marketing coordinator at the OpenStack Foundation. I started off in renewable energy, focusing on national energy policy and community engagement; then I became a content writer for a major footwear brand; then worked at an international e-commerce startup; and now I’m here! In my free time I race bicycles and like riding motorcycles.”

The audience has learned a lot about me (probably too much!), but it doesn’t give them a single area of expertise to focus on. It distracts the audience from the topic of my talk.

Alternative: “My name is Anne and as the marketing coordinator at the OpenStack Foundation, I work on our social media team.”

I’ve established my professional connection to the topic, explained why they should listen and foreshadowed that we’ll be talking about social media marketing.

Conversation, not recitation

Memorizing a script and keeping it in front of you (like on a phone) is a common device to try to soothe presentation nerves. Ironically, this makes your presentation more difficult and less enjoyable for the audience. When you trip up on a word (and we all do!), it can cause you to lose the rest of the paragraph. Reading off a device will make your presentation sound artificial.

Instead, rehearse your presentation but use slide graphics or brief bullets to keep you on message. Pretend you’re having a conversation with the audience; just a cup of coffee over a very large table.

P.S. Make sure you budget time for conversation with your audience, and bring a few thought-provoking questions of your own to get the discussion started.

Humor doesn’t always work in international audiences

OpenStack has a wonderfully international community, which means that many people in your audience may not be native or fluent in the language you are presenting in. Idioms, turns of phrase or plays on words can be particularly difficult to understand. Instead of leaning on humor, tell a story about how something came to be, or a critical error that we can all see the humor in.

Looking forward to the incredible talks slated for the upcoming Summit; good luck, presenters!

Cover Photo // CC BY NC

by Anne Bertucio at August 24, 2016 05:21 PM

Enabling vCPE with OpenStack: Prepping the VNFs

This post shows how to create images to emulate two types of virtual network functions (VNFs) you may come across. These images may be useful, for example, if you are trying to set up virtual Customer Premises Equipment (vCPE), as described in the first post of this series, Enabling vCPE with OpenStack.


The images being created in this case are:

  • A bump-in-the-wire VNF performing frame forwarding (L2 VNF)
  • A software router (L3 VNF)

Bump! What was that?


In this case, we will set up bridging on the "bump" (L2 VNF), and allow traffic to pass through. We will clear any IP addresses on the virtual machine (VM) and add both the interfaces to the bridge. The bridge will be configured to forward frames and not participate in the network otherwise. This VM will essentially be invisible to any other network user. In a production environment, this would carry out operations such as traffic inspection or WAN optimization.

Make sure bridge-utils is installed on the VM, and configure the bridge by adding the following to /etc/network/interfaces:

auto eth0
iface eth0 inet manual
    up ip addr flush dev eth0
    up ip link set eth0 promisc on

auto eth1
iface eth1 inet manual
    up ip addr flush dev eth1
    up ip link set eth1 promisc on

auto br100
iface br100 inet manual
    bridge_ports eth0 eth1
    bridge_stp off
    bridge_waitport 0

The eth0 and eth1 IP addresses are cleared and the interfaces are set to promiscuous mode, so that they accept all packets.

The bridge is then created, with the ports eth0 and eth1 added to it. All traffic entering the VM on eth0 will exit on eth1, and vice versa.

A software router


In order to configure a VM to act as a router it must be configured to allow IP forwarding and proxy ARP. This can be done through the /etc/sysctl.conf file:

net.ipv4.ip_forward = 1
net.ipv4.conf.eth0.proxy_arp = 1
net.ipv4.conf.eth1.proxy_arp = 1

The IP addresses for the ports on the "router" should be the default routes for the LAN and WAN networks, which is done when the networks and ports are being created.

Add the images

Once you have saved these images in a safe place, add them to Glance so they can be used for booting VMs:

$ glance image-create --name bump_image --disk-format qcow2 --container-format bare --file ~/ubuntu-bump-in-the-wire.qcow2

$ glance image-create --name router_image --disk-format qcow2 --container-format bare --file ~/ubuntu-router.qcow2

Congratulations, you now have two images available in Glance, which can be used for booting VMs!

This post first appeared on the Intel Developer Zone blog. Superuser is always interested in community content, email:

Cover Photo // CC BY NC

by Emma L. Foley at August 24, 2016 05:20 PM


How does the world consume private clouds?

The post How does the world consume private clouds? appeared first on Mirantis | The Pure Play OpenStack Company.

In my previous blog, why the world needs private clouds, we looked at ten reasons for considering a private cloud. The next logical question is how a company should go about building a private cloud.

In my view, there are four consumption models for OpenStack. Let’s look at each approach and then compare.


Approach #1: DIY

For the most sophisticated users, where OpenStack is super-strategic to the business, a do-it-yourself approach is appealing. Walmart, PayPal, and so on are examples of this approach.

In this approach, the user has to grab upstream OpenStack bits, package the right projects, fix bugs or add features as needed, then deploy and manage the OpenStack lifecycle. The user also has to “self-support” their internal IT/OPS team.

This approach requires recruiting and retaining a very strong engineering team that is adept at Python, OpenStack, and working with the upstream open source community. Because of this, I don't think more than a handful of companies can or would want to pursue this approach. In fact, we know of several users who started out on this path, but had to switch to a different approach because they lost engineers to other companies. Net-net, the DIY approach is not for the faint of heart.

Approach #2: Distro

For large sophisticated users that plan to customize a cloud for their own use and have the skills to manage it, an OpenStack distribution is an attractive approach.

In this approach, no upstream engineering is required. Instead, the company is responsible for deploying a known good distribution from a vendor and managing its lifecycle.

Even though this is simpler than DIY, very few companies can manage a complex, distributed and fast moving piece of software such as OpenStack — a point made by Boris Renski in his recent blog Infrastructure Software is Dead. Therefore, most customers end up utilizing extensive professional services from the distribution vendor.

Approach #3: Managed Services

For customers who don’t want to deal with the hassle of managing OpenStack, but want control over the hardware and datacenter (on-prem or colo), managed services may be a great option.

In this approach, the user is responsible for the hardware, the datacenter, and tenant management; but OpenStack is fully managed by the vendor. Ultimately this may be the most appealing model for a large set of customers.

Approach #4: Hosted Private Cloud

This approach is a variation of the Managed Services approach. In this option, not only is the cloud managed, it is also hosted by the vendor. In other words, the user does not even have to purchase any hardware or manage the datacenter. In terms of look and feel, this approach is analogous to purchasing a public cloud, but without the “noisy neighbor” problems that sometimes arise.

Which approach is best?

Each approach has its pros and cons, of course. For example, each approach has different requirements in terms of engineering resources:

                                           DIY   Distro   Managed   Hosted
                                                          Service   Private Cloud
Need upstream OpenStack engineering team   Yes   No       No        No
Need OpenStack IT architecture team        Yes   Yes      No        No
Need OpenStack IT/OPS team                 Yes   Yes      No        No
Need hardware & datacenter team            Yes   Yes      Yes       No

Which approach you choose should also depend on factors such as the importance of the initiative and relative cost:

How important is the private cloud to the company?
    DIY: the business depends on the private cloud
    Distro: the cloud is extremely strategic to the business
    Managed Service: the cloud is very strategic to the business
    Hosted Private Cloud: the cloud is somewhat strategic to the business

Ability to impact the community
    DIY: very direct
    Distro: somewhat direct
    Managed Service: indirect
    Hosted Private Cloud: minimal

Cost (relative)
    DIY: depends on skills & scale
    Distro: low
    Managed Service: medium
    Hosted Private Cloud: high

Ability to own OpenStack operations
    DIY: yes
    Distro: yes
    Managed Service: depends on whether the vendor offers a transfer option
    Hosted Private Cloud: no

So as a user of an OpenStack private cloud you have four ways to consume the software.

The cost and convenience of each approach vary, as these simplified charts show, and need to be traded off against your strategy and requirements.

OK, so we know why you need a private cloud, and how you can consume one. But there’s still one burning question: who needs it?


by Amar Kapadia at August 24, 2016 04:16 AM

August 23, 2016

Elizabeth K. Joseph


Last week I was in Philadelphia, which was fun, and I got to do some Ubuntu stuff, but I was actually there to speak at FOSSCON. It's not the largest open source conference, but it is in my adopted home city of Philadelphia and I have piles of friends, mentors and family there. I love attending FOSSCON because I get to catch up with so many people, making it a very hug-heavy conference. I sadly missed it last year, but I made sure to come out this year.

They also invited me to give a closing keynote. After some back and forth about topics, I ended up with a talk on “Listening to the Needs of Your Global Open Source Community” but more on that later.

I kicked off my morning by visiting my friends at the Ubuntu booth, and meeting up with my OpenStack and HPE colleague Ma Dong who had flown in from Beijing to join us. I made sure we got our picture taken by the beautiful Philadelphia-themed banner that the HPE open source office designed and sent for the event.

At 11AM I gave my regular track talk, “A Tour Of OpenStack Deployment Scenarios.” My goal was to provide a gentle introduction, with examples, to the basics of OpenStack and how it may be used by organizations. My hope is that the live demos of launching instances from the Horizon web UI and the OpenStack client were particularly valuable in making the connection between the concepts of building a cloud and the actual tooling you might use. The talk was well-attended and I had some interesting chats later in the day. I learned that a number of the attendees are currently using proprietary cloud offerings and are looking for options to bring some of that in-house.

The demos were very similar to the tutorial I gave at SANOG earlier this month, but the talk format was different. Notes from demos here and slides (219K).

Thanks to Ma Dong for taking a picture during my talk! (source)

For lunch I joined other sponsors at the sponsor lunch over at the wonderful White Dog Cafe just a couple blocks from the venue. Then it was a quick dash back to the venue for Ma Dong’s talk on “Continuous Integration And Delivery For Open Source Development.”

He outlined some of the common mechanisms for CI/CD in open source projects, and how the OpenStack project has solved them for a project that eclipses most others in size, scale and development pace. Obviously it’s a topic I’m incredibly familiar with, but I appreciated his perspective as a contributor who comes from an open source CI background and has now joined us doing QA in OpenStack.

Ma Dong on Open Source CI/CD

After his talk it was also nice to sit down for a bit to chat about some of the latest changes in the OpenStack Infrastructure. We were able to catch up about the status of our Zuul tooling and general direction of some of our other projects and services. The day continued with some chats about Jenkins, Nodepool and how we’ve played around with infrastructure tooling to cover some interesting side cases. It was really fun to meet up with some new folks doing CI things to swap tips and stories.

Just before my keynote I attended the lightning talks for a few minutes, but had to depart early to get set up in the big room.

The keynote on “Listening to the Needs of Your Global Open Source Community” was a completely new talk for me. I wrote the abstract a few weeks ago for another conference CFP at the suggestion of my boss. The talk walked through eight tips for facilitating the collection of feedback from your community as one of the project leaders or infrastructure representatives.

  • Provide a simple way for contributors to contact project owners
  • Acknowledge every piece of feedback
  • Stay calm
  • Communicate potential changes and ask for feedback
  • Check in with teams
  • Document your processes
  • Read between the lines
  • Stick to your principles

With each of these, I gave some examples from my work, mostly in the Ubuntu and OpenStack communities. Some of the examples were pretty funny, and likely very familiar to any systems folks who interface with users. The Q&A at the end of the presentation was particularly interesting: I was very focused on open source projects, since that's where my expertise lies, but members of the audience felt that my suggestions were more broadly applicable. In the moments after my talk I was invited to speak on a podcast and encouraged to write a series of articles related to it. Now I'm aiming to write some of that content over the next couple of weeks.

Slides from the talk are here (7.3M pdf).

And thanks to Josh, José, Vincent and Nathan for snapping some photos of the talk too!

The conference wound down following the keynote with a raffle, and then we went our separate ways. For me, it was time to spend time with friends over a martini.

A handful of other photos from the conference here:

by pleia2 at August 23, 2016 09:01 PM

Major Hayden

What’s Happening in OpenStack-Ansible (WHOA) – August 2016

Welcome to the third post in the series of What’s Happening in OpenStack-Ansible (WHOA) posts that I’m assembling each month. OpenStack-Ansible is a flexible framework for deploying enterprise-grade OpenStack clouds. In fact, I use OpenStack-Ansible to deploy the OpenStack cloud underneath the virtual machine that runs this blog!

My goal with these posts is to inform more people about what we’re doing in the OpenStack-Ansible community and bring on more contributors to the project.

There are plenty of updates since the last post from mid-July. We’ve had our Mid-cycle meeting and there are plenty of new improvements in flight.

New releases

The OpenStack-Ansible releases are announced on the OpenStack development mailing list. Here are the things you need to know:


The latest Liberty release, 12.2.1, contains lots of updates and fixes. There are plenty of neutron bug fixes included in the release along with upgrade improvements. Deployers also have the option to block all container restarts until they are ready to reboot containers during a maintenance window.


Mitaka is the latest stable release available and the latest version is 13.3.1. This release also brings in a bunch of neutron fixes and several “behind the scenes” fixes for OpenStack-Ansible.

Notable discussions

This section covers discussions from the OpenStack-Ansible weekly meetings, IRC channels, mailing lists, or in-person events.

Mid-cycle meeting

We had a great mid-cycle meeting at the Rackspace headquarters in San Antonio, Texas.

The meeting drew community members from various companies from all over the United States and the United Kingdom. We talked about the improvements we need to make in the remainder of the Newton cycle, including upgrades, documentation improvements, and new roles.

Here is a run-down of the biggest topics:

Install guide overhaul

The new install guide is quickly coming together and it’s much easier to follow for newcomers. There is a big need now for detailed technical reviews to ensure that the new content is clear and accurate!

Ansible 2.1

The decision was made to bump each of the role repositories to Ansible 2.1 to match the integrated repository. It was noted that Ansible 2.2 will bring some performance improvements once it is released.

Ubuntu 16.04 Xenial Support

This is a high priority for the remainder of the Newton cycle. The Xenial gate jobs will be switched to voting and Xenial failures will need to be dealt with before any additional patches will be merged.

Ubuntu 14.04 Trusty Support

Many of the upstream OpenStack projects are removing 14.04 support soon and OpenStack-Ansible will drop 14.04 support in the Ocata release.

Power CPU support

There’s already support for Power systems as hypervisors within OpenStack-Ansible, and IBM is testing mixed x86 and PPC environments. However, we still need some way to test these mixed environments in the OpenStack gate. Two IBMers from the OpenStack-Ansible community are working with the infra team to find out how this can be done.

Inventory improvements

The inventory generation process for OpenStack-Ansible is getting more tests and better documentation. Generating inventory is a difficult process to understand, but it is critical for the project’s success.

Gnocchi / Telemetry improvements

We got an update on gnocchi/ceilometer and set some plans on how to go forward with the OpenStack services and the data storage challenges that go along with each.

Mailing list

The OpenStack-Ansible tag was fairly quiet on the OpenStack Development mailing list during the time frame of this report, but there were a few threads:


Michael Gugino wrote about deploying nova-lxd with OpenStack-Ansible. This is one of the newest features in OpenStack-Ansible.

I wrote one about an issue that was really difficult to track down. I had instances coming online with multiple network ports attached when I only asked for one port. It turned out to be a glance issue that caused a problem in nova.

Notable developments

This section covers some of the improvements coming to Newton, the upcoming OpenStack release.

Bug fixes

Services were logging to stderr and this caused some log messages to be logged multiple times on the same host. This ate up additional disk space and hurt disk I/O performance. Topic

The xtrabackup utility causes crashes during certain situations when the compact option is used. Reviews


LBaaS v2 panels were added to Newton and Mitaka. Ironic and Magnum panels were added to Newton.


Plenty of work was merged towards improving the installation guide to make it more concise and easy to follow. Topic

Multi-Architecture support

Repo servers can now build Python wheels for multiple architectures. This allows for mixed control/data plane environments, such as x86 (Intel) control planes with PPC (Power8) hypervisors running PowerKVM or PowerVM. Review

Performance improvements

The repo servers now act as an apt repository cache, which helps improve the speed of deployments. This also helps with deployers who don’t have an active internet connection in their cloud.

The repo servers now only build the wheels and virtual environments necessary for the services which are actually being deployed. This reduces the wait required while the wheels and virtual environments are built, but it also has an added benefit of reducing the disk space consumed. Topic


The MariaDB restarts during upgrades are now handled more carefully to avoid disruptions from container restarts. Review

Deployers now have the option to choose if they want packages updated during a deployment or during an upgrade. There are per-role switches as well as a global switch that can be toggled. Topic


The goal of this newsletter is three fold:

  • Keep OpenStack-Ansible developers updated with new changes
  • Inform operators about new features, fixes, and long-term goals
  • Bring more people into the OpenStack-Ansible community to share their use
    cases, bugs, and code

Please let me know if you spot any errors, areas for improvement, or items that I missed altogether. I’m mhayden on Freenode IRC and you can find me on Twitter anytime.

The post What’s Happening in OpenStack-Ansible (WHOA) – August 2016 appeared first on

by Major Hayden at August 23, 2016 08:35 PM

Carlos Camacho

BAND-AID for OOM issues with TripleO manual deployments

If running free -m from your Undercloud or Overcloud nodes and getting some output like:

[asdf@fdsa]$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7668        5555         219        1065        1893         663

If, as in the example, there is no line reporting the swap size and usage, you might not be using swap in your TripleO deployments. To enable it, you just have to follow two steps.

First, in the Undercloud: when deploying stacks you might find that heat-engine (4 workers) takes a lot of RAM, so for specific usage peaks it can be useful to have a swap file. To have this swap file enabled and used by the OS, execute the following instructions in the Undercloud:

#Add a 4GB swap file to the Undercloud
sudo dd if=/dev/zero of=/swapfile bs=1024 count=4194304
sudo mkswap /swapfile
#Turn ON the swap file
sudo chmod 600 /swapfile
sudo swapon /swapfile
#Enable it on start
echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab
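As a quick sanity check of the dd parameters above (bs=1024, count=4194304), the resulting file is exactly 4 GiB; this throwaway calculation is mine, not part of the original post:

```python
# Size of the swap file created by:
#   dd if=/dev/zero of=/swapfile bs=1024 count=4194304
block_size = 1024        # dd's bs= (bytes per block)
block_count = 4194304    # dd's count= (number of blocks)

total_bytes = block_size * block_count
print(total_bytes)              # 4294967296 bytes
print(total_bytes / 1024**3)    # 4.0 (GiB)
```

If 4GB is too much for your environment, scale count= accordingly (e.g. count=2097152 for a 2GB file).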

Also, when deploying the Overcloud, the controller nodes might face some RAM usage peaks; in that case, create a swap file in each Overcloud node by using the already existing “extraconfig swap” template.

To achieve this second part, edit the file tripleo-heat-templates/overcloud-resource-registry-puppet.yaml, find the AllNodesExtraConfig resource registry entry OS::TripleO::AllNodesExtraConfig and replace OS::Heat::None with extraconfig/all_nodes/swap.yaml.

These two hints provide a 4GB swap file on both Undercloud and Overcloud nodes. Having these swap files might be the difference between a deployment that merely slows down under high memory usage and one that crashes with an OOM error.

Now, deploy your Overcloud as usual i.e.:

openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server \
--templates /home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tripleo-heat-templates/environments/puppet-pacemaker.yaml

Bye bye OOM’s!!!!

by Carlos Camacho at August 23, 2016 12:00 AM

August 22, 2016


AWS, VMware, OpenStack … what’s your opinion?

The post AWS, VMware, OpenStack … what’s your opinion? appeared first on Mirantis | The Pure Play OpenStack Company.

Amazon Web Services (AWS), VMware and OpenStack are all popular tools IT/Ops practitioners and developers use on their cloud journey. While vendors have many opinions on how these technologies stack up against each other, there is a shortage of data on how users perceive them. In order to better understand the distinct advantages and drawbacks of prevalent cloud-servicing technologies, Mirantis has sponsored two surveys for those familiar with AWS or VMware and OpenStack. The survey is about 5 minutes long and the results will be published so the entire community can benefit.

Survey Links:

We would appreciate it if you could take the time to fill out the relevant survey according to your respective background.

James Chung is a summer intern for Mirantis Inc. and is currently a student at Yale University.

Photo by BillsoPHOTO (

The post AWS, VMware, OpenStack … what’s your opinion? appeared first on Mirantis | The Pure Play OpenStack Company.

by Guest Post at August 22, 2016 09:05 PM

Kenneth Hui

Operating OpenStack Clouds at Scale

Virt Compute Nodes

As anyone who’s ever tried to move a technology from development to production knows, operations and scaling are two of the most difficult elements to do well. Nowhere is that more true than when you are talking about a cloud platform such as OpenStack. Deploying an OpenStack cloud with tens of servers to support a few hundred users is a far cry from operating and scaling a cloud with thousands of servers supporting many thousands of users. As one of the few cloud operators to reach that scale, Rackspace is conscious of and embraces our place within the OpenStack community because of our unmatched experience and expertise. As I discussed in a previous blog post, Rackspace has a three-pronged approach to enriching the OpenStack community:

  1. We freely share the lessons we’ve learned operating the world’s largest OpenStack public cloud and some of the world’s largest OpenStack private clouds.
  2. We contribute code and ideas to the OpenStack project, including bug fixes and new innovation that we developed to scale our clouds.
  3. We open source tools based on what we use to operate our clouds, some of which are specifically applicable to OpenStack and in some cases, are useful for any cloud platform.

In this blog post, I will be providing a summary of some things we’ve done to help us operate and scale our public cloud that we have contributed back to the community as lessons learned, as ideas and code contributed to the project and/or as new open source projects. Many thanks to Matt Van Winkle from our public cloud operations team for all the great information.

To read more about how Rackspace operates OpenStack at scale, please click here to go to my article on the Rackspace blog site.

Filed under: Cloud, Cloud Computing, IT Operations, OpenStack, Private Cloud, Public Cloud, Virtualization Tagged: Cloud, Cloud computing, OpenStack, Private Cloud, Public Cloud, Rackspace

by kenhui at August 22, 2016 12:46 PM

Matthias Runge

Operational Tools in CentOS

A while ago, we proposed the OpsTools SIG. Its idea is to provide tools for operators, system administrators, etc.

Now we have a repository for testing purposes:

yum install

Documentation can be found on GitHub, and we really love pull requests.

by mrunge at August 22, 2016 12:15 PM

Hugh Blemings



Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here.

Basic Stats for week 15 to 21 – August 2016 for openstack-dev:

  • ~397 Messages (down about 32% relative to last week)
  • ~170 Unique threads (about the same as last week)

Frankly – stats have me a little baffled today, I think I’ll claim jetlag :) – message count down dramatically, thread count steady.

Notable Discussions – openstack-dev

All Hail Pike and Queens!

As Monty Taylor put it – yes the naming process is complete, the next two OpenStack releases will be Pike and Queens!

Tentative Ocata Schedule up for Review

Doug Hellmann noted that there is a tentative schedule up for Ocata here

Video presentation of findings from recent UX research

Danielle Mundle emailed to note that there is a video presenting the findings of the recent UX research on operator needs up on YouTube.  It will also be discussed at the Ops Summit this week in New York.

Notable Discussions – other OpenStack lists

Over on OpenStack-Operators the UX Project Team has asked for volunteers to provide feedback around quota management.

Upcoming OpenStack Events

Best I can tell no OpenStack related events mentioned this week.  Don’t forget the OpenStack Foundation’s Events Page for a list of general events that is frequently updated.

People and Projects

Core nominations & changes

New, Proposed and Changed OpenStack Projects

Nothing new that I saw on the New/Proposed/Changed Projects front.

Further Reading & Miscellanea

Don’t forget these excellent sources of OpenStack news – most recent ones linked in each case

No particular tunes involved in this edition of Lwood but I will note it was prepared in a few different locations – airports, aircraft and now, pleasingly, my home office :)
Last but by no means least, thanks, as always, to Rackspace :)

by hugh at August 22, 2016 08:24 AM

App development, avoiding pitfalls, and more OpenStack news is your source for news in OpenStack, the open source cloud infrastructure project. Here's what's happening this week.

by Jason Baker at August 22, 2016 05:00 AM


August 20, 2016


What’s new in Mirantis OpenStack 9.0: Webinar Q&A

The post What’s new in Mirantis OpenStack 9.0: Webinar Q&A appeared first on Mirantis | The Pure Play OpenStack Company.

There’s never been a better time to adopt Mirantis OpenStack to build your cloud. The newest release, Mirantis OpenStack 9.0, offers improvements in simplicity, flexibility, and performance that make deployment, operations, and management faster and easier.

If you missed the July 14 webinar highlighting the rich new features in Mirantis OpenStack 9.0, we’ve got you covered. The webinar’s panel included three Mirantis experts: Senior Director of Product Marketing Amar Kapadia, Senior Manager of Technical Marketing Joseph Yep, and Senior Product Manager Durgaprasad (a.k.a. DP) Ayyadevara.

They talked about the ways in which MOS 9.0 improves the “Day 2” experience of operating your cloud once you’ve deployed it, as well as easier deployment of workloads, and especially improvements in the management of features related to NFV, such as SR-IOV, software acceleration DPDK and NUMA/CPU pinning.

Here’s a selection of questions and answers from those who attended.

Q: Can any plugin be added after initial deployment without disruption?

A: Not all plugins. However, the plugin framework has added metadata and developer functionality that allow developers to build and test their plugins so they can be added as “hot-pluggable.” Whether a plugin can be added without disruption depends on the plugin itself as well as on the settings, the environment, and the type of change. An example is StackLight’s Toolchain, which is hot-pluggable post-deployment.

Q: As far as upgrading from Mirantis OpenStack 8.0 to 9.0, is there documentation available for that?

A: Documentation is readily available and always improving. Because upgrades are challenging for large distributed infrastructure software, Mirantis continually creates tooling to make the process smoother and more automated. Feedback on the documentation is always welcome.

Q: Does the new release support SDN and Contrail?

A: Yes. Currently, the Control field plugin is available for Liberty-compatible release, and Contrail is the Mitaka-compatible version.

Q: The current base OS is Ubuntu 14.04, but are there any plans to upgrade to 16.04?

A: Yes. Operating systems are regularly validated, so 16.04 is on the roadmap.

Q: With the new release allowing updates to your previously-deployed OpenStack environment, can we also apply a new plugin with Fuel on a deployed environment?

A: Yes, unless it’s a previous version. For example, with Fuel 9, you can’t push a new plugin deployment to a Mirantis OpenStack 7 environment without upgrading the environment itself. However, Fuel can manage multiple versions of Mirantis OpenStack environments.

Q: What is the status on Ironic and VXLAN?

A: Both are supported in 9.0.

Q: Does Murano support deployment of Kubernetes clusters?

A: Yes, absolutely. We do a lot of work with Kubernetes, and there’s a new set of announcements coming soon about that work.

Q: What NFV features make Mirantis’ value-add different from others, and how can enterprises benefit from this feature?

A: Mirantis’ value-add is twofold. First, we support all Intel Enhanced Platform Awareness features. Second, we have provisioned for enabling and configuring these features through Fuel. We also support partners like 6WIND, who have DPDK accelerators, and we have Fuel plugins for that. So, we focus on making it easy to operationalize, and that differentiates us.

Q: How can you differentiate Mirantis from services hosted elsewhere, AWS for example?

A: Fundamentally, this compares two different things, a private cloud to a public cloud environment. You will find similarity at the IaaS layer. However, OpenStack is an open system that allows you to choose the components you want. For example, you can add an SDN like Contrail. Thus, in the PaaS, the two deviate considerably. Amazon is prescriptive, choosing the software available to offer customers. Conversely, OpenStack works with a multitude of partners so customers can tailor solutions that work best for them. If they want, for example, Pivotal Cloud Foundry, they can have it. If they want Kubernetes as a container framework, they can have it. If they want a specific database or NoSQL database, they can use Murano and publish that database.

Q: How many nodes are required to deploy OpenStack in Mirantis OpenStack 9.0?

A: Depending on the function, the lower limit is three. If running it virtualized, you could do it all physically on a single machine, but the nodes specifically will be your Fuel master node if you’re using Fuel (you don’t have to use Fuel), which would then deploy to a single controller and a single compute host. This is one of the most minimal deployments if you’re looking at playing with features and practicing deployment, and it means you could conceivably run it on a laptop, though this isn’t advised for a production deployment. There are instructions for running it in VirtualBox as well.

This is just a tiny fraction of what we covered, of course. Interested in hearing more?  You can view the whole presentation online, or download Mirantis OpenStack 9.0 for yourself.

The post What’s new in Mirantis OpenStack 9.0: Webinar Q&A appeared first on Mirantis | The Pure Play OpenStack Company.

by Nick Chase at August 20, 2016 01:32 AM

August 19, 2016

Enriquez Laura Sofia

Working in my first RBD bug

The cinder manage command is only for independent RBD volumes. However, managing an already-managed volume is allowed by the API, and no exception is raised.

When you try to manage an already-managed volume, you receive an unhandled error: "UnboundLocalError: local variable 'rbd_image' referenced before assignment".

How to reproduce:

1) Create a volume:
   cinder create 1 --name vol1
2) Try to manage the volume (you can check the host with cinder show <vol1's ID>):
   cinder manage <vol1's host> <vol1's ID>
3) In the c-vol log you can see the unhandled error.

When this happens, the RBD driver should handle the error.

The RBD driver should catch the exception and show a custom message notifying the user.

So, trying to handle the problem inside the rbd driver.
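The failure pattern is plain Python: a name bound only inside a conditional branch is referenced on a code path that can run even when the branch was skipped. Below is a minimal, self-contained sketch of the bug shape and of the fix; `FakeImage`, `manage_existing_buggy`, and the exception class are illustrative stand-ins, not the actual cinder driver code:

```python
class ManageExistingInvalidReference(Exception):
    """Stand-in for the cinder exception raised on a bad manage request."""

class FakeImage:
    """Illustrative placeholder for an opened RBD image handle."""
    def close(self):
        pass

def manage_existing_buggy(already_managed):
    # Buggy shape: rbd_image is only bound when the volume is unmanaged,
    # but the cleanup path references it unconditionally.
    try:
        if not already_managed:
            rbd_image = FakeImage()
    finally:
        rbd_image.close()  # UnboundLocalError when the branch was skipped

def manage_existing_fixed(already_managed):
    # Fixed shape: detect the already-managed case up front and raise a
    # meaningful error instead of letting UnboundLocalError reach the user.
    if already_managed:
        raise ManageExistingInvalidReference("volume is already managed")
    rbd_image = FakeImage()
    try:
        return "managed"
    finally:
        rbd_image.close()
```

Calling `manage_existing_buggy(True)` reproduces the UnboundLocalError; the fixed variant raises a descriptive exception the API layer can turn into a clear message for the user.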


Coding the unit test is still a challenge for me. Thanks to my mentor I learned about unit tests, and we added one to the patch.

After coding it, it's time to run the unit test in devstack:

  1. Use an environment. One way to do it:
    vagrant/cinder$ ./ -V

    the -V will create a virtual env for you

  2. Activate the env:
    vagrant/cinder$ source env/bin/activate
  3. Check it, this should list all of the rbd tests, in that list should be our new one:
    vagrant/cinder$ testr list-tests | grep RBDTestCase


  4. Let’s run just that one test:
    vagrant/cinder$ testr run cinder.tests.unit.volume.drivers.test_rbd.RBDTestCase.test_manage_existing_with_invalid_rbd_image
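For reference, here is a self-contained sketch of what such a unit test can look like with the standard unittest module; the driver and exception below are hypothetical stand-ins for cinder's real classes, shown only to illustrate the shape of the assertion:

```python
import unittest

class ManageExistingInvalidReference(Exception):
    """Hypothetical stand-in for cinder's exception."""

class FakeRBDDriver:
    """Hypothetical stand-in for the real RBD driver under test."""
    def __init__(self, managed_volumes):
        self._managed = set(managed_volumes)

    def manage_existing(self, volume_name):
        # The fixed driver raises a descriptive exception instead of
        # crashing with UnboundLocalError.
        if volume_name in self._managed:
            raise ManageExistingInvalidReference(
                "volume %s is already managed" % volume_name)
        self._managed.add(volume_name)

class RBDManageExistingTestCase(unittest.TestCase):
    def test_manage_existing_with_invalid_rbd_image(self):
        driver = FakeRBDDriver(managed_volumes={"vol1"})
        self.assertRaises(ManageExistingInvalidReference,
                          driver.manage_existing, "vol1")

    def test_manage_existing_unmanaged_volume(self):
        driver = FakeRBDDriver(managed_volumes=set())
        driver.manage_existing("vol2")  # should not raise
```

In a real cinder tree the test would live alongside the others in cinder.tests.unit.volume.drivers.test_rbd and would mock the RBD client rather than use a fake driver.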





by enriquetaso at August 19, 2016 08:01 PM

August 18, 2016

OpenStack Superuser

Book aims to smooth out your “Path to Cloud”

Sometimes you need the right perspective — a new book from the OpenStack Foundation aims to provide just that. Titled “OpenStack: The Path to Cloud” it’s been described as the 1,000-foot view accompanied by a 100-foot view.

At the core of the vendor-neutral publication available for download are seven chapters designed to take architects considering OpenStack on a journey from cloud strategy to post-deployment.

OpenStack community members — including experts from IBM, Mirantis, Intel, Hitachi and SUSE and the indefatigable Enterprise Working Group— put their heads together to help you make decisions about models, forming your team, organization and process changes, choosing workloads, and implementation from proof-of-concept through ongoing maintenance. Far from being an insider’s ballgame, jargon is kept to a minimum and there’s a handy glossary for your boss, too.


"Path to Cloud" is packed with insights and best practices — and some fantastic charts to guide your decisions along the way. Designed to help you figure out what will work for your company at a glance, each chapter has one (or several) graphics to walk you through competing considerations.

The 39-page booklet is available as a .PDF, in various ebook formats and print-on-demand.

If you'd like to share your journey with OpenStack, contact

Cover Photo // CC BY NC

by Nicole Martinelli at August 18, 2016 05:49 PM

How OpenStack makes Python better (and vice versa)

Programming language Python and OpenStack work with and depend on each other. For starters, six-year-old OpenStack consists of more than 4.5 million lines of code, 85 percent of which is Python. That said, these symbiotic communities can always improve how they work together.

Thierry Carrez, director of engineering at the OpenStack Foundation, and Doug Hellmann, senior principal software engineer at Red Hat, recently engaged in some stack diplomacy at EuroPython 2016.

The pair of Python Software Foundation Fellows cleared up some misconceptions about OpenStack. Carrez introduced OpenStack as open source infrastructure software that can be deployed on top of a secure data center to make it programmable infrastructure. That means OpenStack is usually deployed by decent-sized companies that happen to own a data center and want to provide resources for their internal development efforts, or by companies that want to run a public cloud and offer resources for rent to the wider public.

“OpenStack’s not exactly something you would deploy in your garage on a Sunday," says Carrez. “That's where the corporate image comes from, because the users are large organizations…This translated into a negative corporate image for the project itself, which is a bit unfair because it's actually one of the most community-led projects out there.”


The Four Opens of OpenStack

Carrez points to OpenStack’s independent technical governance: community-elected project technical leads (PTLs) and an oversight board, the technical committee, whose elections run on a “one contributor equals one vote” basis.

“If you contribute to OpenStack, you get one vote to elect the leadership of the project that will decide where the project goes and whether something gets done or not,” he says.

Following their talk, which also delves into tools and resources for Python contributors developed by the OpenStack community, the pair sat down in Bilbao with David Flanders, OpenStack community wrangler, to talk about how working with Python and OpenStack can translate into a heftier paycheck with more job mobility. The transcript of their conversation was edited for clarity and brevity.

You can also catch the entire 40-minute talk on YouTube.


You guys gave some really cool viewpoints, but first I want to talk about how you came to OpenStack and Python.

Hellmann: I've been working with Python since the mid-to late 90s in different capacities and I came to OpenStack through the DreamHost company. I was hired to work on their OpenStack projects there and I got involved in the community and things just sort of took off for me after that.

Carrez: I came into contact with Python quite early on, but I did not consider myself a Python developer in any capacity back then. It was when we formed the OpenStack project that I really dived into Python. I come from the OpenStack perspective and encountered Python through OpenStack, not the other way around.

Talk to me about the value that OpenStack receives from Python.

Hellmann: Originally, Python was selected in large part because it was very easily readable and it integrated well with software distributions. That made it easy to ensure that OpenStack would be easy to distribute.

Carrez: We chose Python for OpenStack for a variety of reasons. The most obvious one at the time was integration within distributions: it was really important for OpenStack at the very beginning to be well supported in distributions, so picking a language that was highly integrated with existing distributions was critical. It's also a language that was familiar to operators as an integration language. Since OpenStack operators were our target primary users, having the opportunity for them to look into the code and propose patches was also critical in choosing Python.

And what about the benefits to people who know both OpenStack and Python?

Hellmann: One of the benefits that we're seeing from OpenStack within the broader Python community is an increased interest in hiring people who would want to write Python code and work on open source projects doing that. We're seeing a lot of extra demand in jobs, job listings, salary rates, that sort of thing, it’s about 17% higher for OpenStack developers over standard Python developers. (That's U.S. data from It’s a little bit limited, but OpenStack is a global project and I would expect the same sort of thing to apply elsewhere.


You also made a really cool point about the mobility of OpenStack developers. Can you talk a little bit about that?

Hellmann: I got into OpenStack with DreamHost and I'm now at Red Hat. This is actually my third job where I've been working on OpenStack. We have a lot of other members of the community who have moved around. They've been able to find a company where they fit well and a role within that company and a role within the community, and that's a big benefit just from your own personal happiness day to day, finding a place that fits for you.

Do you want to talk a bit about how much the OpenStack community really tests Python?

Carrez: OpenStack runs 23,000 test jobs on a typical work day — that's one every four or five seconds — and in most cases each job deploys a complete cloud and tests it for about an hour.

The consequence for the Python community is that their code is tested almost continuously. We integrate new versions of our libraries as they come out and there are lots of times we detect problems before anyone else…

The Python community benefits from having their libraries (if we are using them) being tested so heavily. We will report bugs back and usually that results in higher quality for Python libraries as well.


How are OpenStack developers contributing to a lot of Python packages and libraries and making sure that there's more longevity as well as quality?

Hellmann: Recently we did a pretty informal poll on the developers’ list just to ask where people who were contributing to OpenStack were also contributing. We saw a huge range of applications and libraries and tools and things like that all the way from the Linux kernel down into very specific Python modules that are used widely within OpenStack and outside of OpenStack. Those range from packaging tools, web frameworks, test tools, all the way to database libraries and that sort of thing.


How can Python developers engage the OpenStack community?

Hellmann: To engage with the OpenStack community, you might want to look at some projects like Oslo project where there are libraries that are usable outside OpenStack and that sort of thing. If you start looking for employment related to OpenStack work, most of the big sponsors have job listings and there's a job board on the website as well.


How can OpenStack developers get engaged in the Python community?

Hellmann: There are a couple of great ways to do that. First would be to look for a local meet-up group. We have Python meet-ups in most major cities around the world. There are also a lot of regional conferences now and you can find those on Then if you're interested in code contributions and that sort of thing, finding some of the dependencies that we use around the OpenStack ecosystem that can use a little help upstream, identifying ways that we can add features there instead of writing features in our own code, etc.

Cover Photo // CC BY NC

by Nicole Martinelli at August 18, 2016 04:58 PM

The Official Rackspace Blog

How Rackspace Operates OpenStack at Scale

As anyone who’s ever tried to move a technology from development to production knows, operations and scaling are two of the most difficult elements to do well.

Nowhere is that more true than with the OpenStack cloud platform. Deploying an OpenStack cloud with tens of servers to support a few hundred users is a far cry from operating and scaling a cloud with thousands of servers supporting many thousands of users.

As one of the few cloud operators to reach that scale, Rackspace embraces our responsibility within the OpenStack community to give back and share what we’ve learned. 

This post is a summary of those lessons, ideas and contributed code we’ve shared with the community related to operating at scale. It also includes capabilities that may become part of our private cloud offerings. Many thanks to Matt Van Winkle from our public cloud operations team for all the great information.

Nova Cells

Nova cells is a construct Rackspace created to solve specific issues in our public cloud. Cells are now part of the OpenStack project and used widely by other companies including CERN and GoDaddy. Cells are how OpenStack partitions Nova compute nodes into discrete groups — aka cells — each running its own Nova compute services, including the Nova database and message queue broker. All the compute cells tree up to a global cell that hosts the global Nova API service.

Nova cells

While cells are typically discussed as a way to scale OpenStack, Rackspace actually created cells to address several use cases:

  • Scaling – Cells have enabled us to scale our public cloud to many tens of thousands of nodes and 350,000+ cores while maintaining acceptable performance. Without the ability to partition our compute nodes, a single Nova database and RabbitMQ message broker would be overwhelmed and inhibit our ability to grow. Leveraging cells, we’ve been able to maintain a standard of ~100 hosts per cell.
  • Reducing Failure Domains – One reason we limit a cell to ~100 hosts is to limit the impact of networking issues such as broadcast storms, which can create cascading issues that would go unchecked without cells as a boundary. We also use cells to limit the impact of failures to any single Nova database or RabbitMQ instance. The Rackspace Public Cloud spans multiple geographic regions and each region has multiple cells. Failures to the database or message broker in any given cell only impact that cell and not an entire region.
  • Supporting Multiple Compute Flavors – The Rackspace Public Cloud has always used different hardware types and compute flavors. For example, when compute nodes with Solid State Drives were first introduced, users could launch general purpose instances on servers with SATA drives or new performance instances that run on servers with the new SSD drives. We use cells to group the older SATA servers with each other and the new SSD servers with each other. (Note that a Nova feature called Host Aggregates can also be used and there are use cases where it can be used with cells to partition compute nodes.)
  • Supporting Multiple Hardware Vendors – We leverage live migration in our public cloud for operational tasks such as maintenance and troubleshooting. Live migration works best when instances are migrated across compute nodes with the same CPU type. Since Rackspace sources hardware from multiple vendors, we use cells to group servers from the same vendors together.
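The grouping rules described above (cap each cell at roughly 100 hosts, keep hardware from the same vendor together) can be sketched as a simple partitioning function. This is illustrative Python, not Rackspace's actual tooling; the function names and the exact cap are assumptions:

```python
# Hypothetical sketch: group compute hosts by hardware vendor (so live
# migration stays within one CPU type) and cap each cell at ~100 hosts
# (to bound failure domains).
from itertools import groupby

CELL_SIZE = 100  # approximate per-cell host cap


def partition_into_cells(hosts, cell_size=CELL_SIZE):
    """hosts: list of (hostname, vendor) tuples -> list of cells (lists of hostnames)."""
    cells = []
    by_vendor = lambda h: h[1]
    for vendor, group in groupby(sorted(hosts, key=by_vendor), key=by_vendor):
        names = [name for name, _ in group]
        # split each vendor's hosts into chunks of at most cell_size
        for i in range(0, len(names), cell_size):
            cells.append(names[i:i + cell_size])
    return cells


hosts = [("node%03d" % i, "vendorA" if i < 150 else "vendorB") for i in range(220)]
# 150 vendorA hosts -> cells of 100 + 50; 70 vendorB hosts -> one cell of 70
print([len(c) for c in partition_into_cells(hosts)])  # [100, 50, 70]
```

The same shape of logic applies whether the grouping key is vendor, CPU generation or drive type (SATA vs. SSD).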


One of the projects focused on the deployment and operations of OpenStack is TripleO. The precursor to TripleO is iNova, which is what Rackspace uses to run our public cloud. iNova is a set of servers provisioned to run an OpenStack “under-cloud,” which acts as the control plane for an OpenStack “over-cloud.” This “over-cloud” is the actual Rackspace Public Cloud on which users provision instances.


Virtual Machines are provisioned on a set of seed servers in each Rackspace Public Cloud region and these VMs become the virtualized control plane for the iNova OpenStack under-cloud for that particular region. The control plane is then used to provision instances on a second set of servers that function as the iNova compute nodes. These instances then become the OpenStack control plane for our over-cloud — aka Rackspace Public Cloud — and manage the compute nodes in the public cloud.

The implementation of a virtualized OpenStack control plane provides several benefits:

  • It’s much easier to dynamically deploy, tear down and redeploy OpenStack services when they are running in VMs.
  • We can react to issues more quickly. For example, if an unexpected spike hits RabbitMQ due to some error in our cloud, we can quickly respond by spinning up multiple global cell workers to handle the spike until the issue is remediated.

It also brings several challenges:

  • Since our control planes run as Nova compute instances, any bugs that affect our user instances will likely affect our control planes.
  • We are increasing complexity by adding an additional OpenStack cloud to every public cloud region.

Containers, a trend that has surfaced since the creation of iNova, could bring similar benefits without some of the challenges. We are considering moving the control plane to containers in the public cloud — a huge undertaking. Our Private Cloud, however, already runs OpenStack services in containers.

Virtualized Compute Nodes

One of the unique approaches Rackspace takes in operating OpenStack is our use of virtualized compute nodes to manage our hypervisor nodes. In a typical OpenStack deployment, with KVM as the hypervisor, Nova compute services are installed on the hypervisor nodes so that every hypervisor is also a compute node.

Nova compute modes


In our public cloud, we use XenServer instead of KVM as our hypervisor technology, in part because our pre-OpenStack public cloud was based on Xen. One benefit is that the XAPI interface does a good job of remotely managing the Xen hypervisor. So we run Nova services in a VM provisioned on the hypervisor node and use that Nova compute VM to remotely manage the hypervisor node.

Virt Compute Nodes


While this is an admittedly unusual approach, it provides several operational benefits:

  • By isolating the compute node in a VM from the hypervisor node, we can limit the impact of a misbehaving compute node. If we have to reboot a compute node or even rebuild it, we can do so without having to take down the hypervisor node or the instances running on that node.
  • Separating the compute node in a VM from the hypervisor node also provides an additional layer of security, since our support techs can log on to a compute node without needing full access to the hypervisor node itself.

Our use of techniques such as iNova and virtualized compute nodes are always being evaluated and refined, which means we are currently considering these changes:

  • Using containers instead of VMs for our virtualized compute nodes
  • Writing code that could further abstract the compute node from the hypervisor node so a single compute node can manage multiple hypervisors and/or have multiple compute nodes manage multiple hypervisors in a redundant setup.

Fleet Management

One of the key lessons of distributed computing at scale is that failures are inevitable. Because of that, much of our focus has been on maintaining the availability of our cloud services even when individual components fail. We know this type of fleet management cannot be done manually, so Rackspace has, over the years, created tools to automate many operational tasks.

Those tools include Resolver and Auditor, used to discover and remediate issues in our public cloud and to make it as self-healing as possible. Resolver was written to automate repeated tasks such as rebooting certain services. It accepts inputs such as alerts, RabbitMQ messages and manual commands.


Auditor was created to automatically and continuously monitor the fleet of servers in the Rackspace Public Cloud to validate that they comply with a given set of rules. Alerts are created for servers that are flagged as out of compliance. In a growing number of cases, Auditor sends a message to Resolver, which will then take the appropriate action. Two examples include:

  1. Any nodes that Auditor finds running the wrong code are flagged and submitted to Resolver for automated upgrades to the appropriate code.
  2. If Auditor finds hypervisor nodes that have to be rebooted for certain known issues, Resolver will live migrate instances off those nodes, reboot the nodes, then live migrate the instances back.
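The Auditor/Resolver flow described above can be sketched as follows. This is hypothetical Python; the rule names, version strings and remediation actions are illustrative, not the actual Rackspace tools:

```python
# Sketch of the audit-then-remediate loop: Auditor flags non-compliant
# nodes, Resolver maps each issue type to an automated action.
EXPECTED_VERSION = "2.1.0"  # illustrative "correct code" version


def audit(nodes):
    """Check each node against compliance rules; return flagged issues."""
    issues = []
    for node in nodes:
        if node["version"] != EXPECTED_VERSION:
            issues.append({"node": node["name"], "type": "wrong_version"})
        if node.get("needs_reboot"):
            issues.append({"node": node["name"], "type": "pending_reboot"})
    return issues


# Resolver: issue type -> automated remediation (here, just a description).
RESOLVER_ACTIONS = {
    "wrong_version": lambda n: "upgrade %s to %s" % (n, EXPECTED_VERSION),
    "pending_reboot": lambda n: "live-migrate instances off %s, reboot, migrate back" % n,
}


def resolve(issues):
    return [RESOLVER_ACTIONS[i["type"]](i["node"]) for i in issues]


nodes = [
    {"name": "hv01", "version": "2.1.0"},
    {"name": "hv02", "version": "2.0.9"},
    {"name": "hv03", "version": "2.1.0", "needs_reboot": True},
]
for action in resolve(audit(nodes)):
    print(action)
```

In the real systems the inputs would be alerts, RabbitMQ messages or manual commands, and the actions would be executed rather than printed.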

This ability to automate and heal OpenStack clouds is a critical part of operating at scale. Efforts to do this in the community include the Stackanetes project, to enable the OpenStack control plane to run in Docker containers managed by Kubernetes. Kubernetes would then handle automated tasks such as restarting services. Yet while the Stackanetes project is encouraging, it is new and unproven and still has scalability limits short of what is required to run a large scale OpenStack cloud.

To move fleet management forward, Rackspace has open sourced some of our tools through a project called Craton. The Craton project allows us to share tools we’ve built with the community. We invite the community to look at Craton and work with us to make fleet management even more useful.

In the months ahead, Rackspace will be doing even more to share what we have learned and tools we’ve created to help to make OpenStack easier to operate at scale. This will benefit the OpenStack community in general and more specifically Rackspace customers who are consuming our public cloud and/or our private cloud.

Click to learn more about Rackspace Public Cloud.

The post How Rackspace Operates OpenStack at Scale appeared first on The Official Rackspace Blog.

by Kenneth Hui at August 18, 2016 01:58 PM

August 17, 2016

OpenStack Superuser

OpenStack Days Mexico highlights growing interest

The cloud is increasingly popular in Mexico, even during the summer season.

Professionals from across the country convened at the World Trade Center in Mexico City to celebrate OpenStack Days Mexico. This year saw a 31 percent increase in attendance from 2015.

The free June 13 event featured speakers from the IT, industry, media and education sectors. It was cross promoted with the Guadalajara user group, which is looking forward to hosting the next OpenStack Hackathon in September.

Nadia Goncebat Isaack, marketer at KIO Networks, one of the organizers behind the event, spoke to Superuser about key takeaways.

Who were the local organizers? What are the main takeaways from this year’s event?

Local organizer KIO Networks has been involved with the OpenStack Foundation since 2012 - the year the company became the first certified Latin American OpenStack member.

Since 2013, one of KIO´s main talking points has been promoting OpenStack and the sense of community that is so important for the project. The company has organized OpenStack Days Mexico events since 2014.

This year, we could definitely tell that the sense of community here has grown. Companies finally began to understand that the biggest strength in being an OpenStack community member is unity and teamwork and that those factors together will bring better benefits to the IT sector.

This time, sponsor companies were much more aware of this event and the impact that using this technology is having in all sectors, so they were more involved in many aspects like teamwork, organization and promotion of the event.


Who gave the opening remarks, what was the main message?

The master of ceremonies was Peter Zadrozny, chief technology officer of KIO Networks. The main point of his opening remarks was to explain how OpenStack is already a successful project worldwide and, for that reason, it is very important for Latin America to get involved and cooperate in this community.

He also explained how the OpenStack platform provides extraordinary benefits in terms of cost, technology, innovation and time to the IT sector - from small to large companies, from service providers to private and public companies. Latin America has a big opportunity in this technology, so it is important to bring our communities together to take advantage of it. This is the moment to do it and it is only possible if we do it together.

Was the event a success?

Yes. Considering that our goal was 600 people attending and 786 people showed up, I have to say that it was a successful event. Even more so because our audience consisted of IT professionals, decision makers in the IT and business sector, graduate students in systems and computing science and engineering students in their final semesters.

Which local companies participated and what sectors are they in?

Since our local OpenStack user groups have such good relationships with the companies and organizations in our communities, we had representation from IT (Intel, Hewlett Packard Enterprises, IBM), media (Software Guru, Conversus) and education (Aliat Universidades, ITAM, Tec de Monterrey) sectors.


Which sessions were the most packed?

Thanks to the event’s great attendance, all the keynote sessions were filled to capacity and were definitely the most packed sessions.

The organizing team was dedicated to compiling the best keynote presentations possible because we really wanted to present important content about trends in OpenStack - such as the increasing popularity of containers and an emphasis on the cloud - rather than commercial information about the sponsors.

If there are ties to a local user group, which one?

This year, we contacted the Guadalajara OpenStack user group, which is a big community in Northern Mexico, and worked together with them to promote the OpenStack Day. We, in return, helped them promote the Guadalajara Hackathon, an OpenStack App development event that is taking place September 9-11.

How can people get involved for next year?

For the first time, we organized a call for speakers because we wanted to give an opportunity to speak during sessions to individual members, not just companies. We understood that the OpenStack community is made up of companies and individual members who are active collaborators in open source projects. Their experiences are very important and enlightening for the audience as well.

We expect to do this again next year.

Cover Photo // CC by NC

by Isaac Gonzalez at August 17, 2016 11:17 PM

Git hooks for OpenStack development take the pain out of code commits

There are a few hurdles to jump before committing code to OpenStack. One that causes many newcomers to stumble is when code doesn’t pass review for some simple reason, say a hanging indent problem or a trivial style error.

That drove Chris Smart, a newcomer to OpenStack and self-described Linux geek, to write a pre-commit hook to help run Tox tests and scripts before committing your code. (Tox testing is one of the highly recommended strategies for improving your OpenStack contributions.)

“This hook may be useful in helping you to run Tox tests and scripts before the local commit is made," Smart explains in the GitHub README file. "This way we can save everyone time by fixing simple issues before they break in the check-pipeline.”

After spotting his tweet about it, we reached out to learn more.


What pushed you to create the hook?

I think that you have to make the lives of developers as easy as possible. If you make things hard, then they will be worked around or simply won't get done. Also, for new developers it can be daunting contributing to a new project. You often lack the confidence and can be hesitant to contribute, concerned that you'll make rookie mistakes. The hook is meant to help address these issues (and stop me from embarrassing myself too much).

I've been meaning to get involved in OpenStack for a few years but between work, kids and family, finishing my masters at University and my other open source work I just hadn't found the time.

Recently I've started to dabble and found a few things to fix up, so I began learning the development process. The gating system that checks all of the proposed commits is fantastic; it catches simple mistakes as well as more complicated ones and it's easy to just let it do its thing.

I noticed from time to time that proposed changes would fail due to little things, like documentation or code formatting errors. Although the gating system catches them, it means that time is wasted as the review needs to be re-visited by the developer as well as anyone who's been quick to jump on it.

That's when I thought that it would be useful if I was prompted to run some tests locally before review, preferably based on the types of files that I had changed. This way, I could efficiently double check that the commit is ready to go before submitting it. This would help to build my confidence and reduce the chances of me using up the important time of other developers.
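The idea of choosing local checks based on the types of files changed can be sketched roughly as follows. This is a hypothetical simplification in Python; the tox environment names are assumptions, and the real hook lives on GitHub:

```python
#!/usr/bin/env python
# Simplified sketch of a pre-commit hook: map the extensions of staged
# files to the local checks worth running before submitting for review.
import subprocess

# Illustrative mapping; real projects define their own tox environments.
CHECKS = {
    ".py": "tox -e pep8",    # Python changes -> style/lint checks
    ".rst": "tox -e docs",   # documentation changes -> docs build
}


def checks_for(files):
    """Return the sorted set of check commands relevant to the changed files."""
    wanted = set()
    for path in files:
        for ext, cmd in CHECKS.items():
            if path.endswith(ext):
                wanted.add(cmd)
    return sorted(wanted)


if __name__ == "__main__":
    try:
        staged = subprocess.check_output(
            ["git", "diff", "--cached", "--name-only"]).decode().splitlines()
    except (OSError, subprocess.CalledProcessError):
        staged = []  # not in a git repo; nothing to suggest
    for cmd in checks_for(staged):
        print("consider running: %s" % cmd)
```

Dropped into `.git/hooks/pre-commit` (and made executable), a script along these lines would prompt for the relevant checks before each local commit.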

Any feedback on it so far?

I asked a few friends to test it before I released it, which was helpful. I got some feedback from the Swift project, such as which tests to run if modified Python files are detected, and from Documentation who suggested making it prompt to build guides if you're working on the manuals.

Apart from that, nothing from the wider community so far. Hopefully if anyone else is actually using it, it's just working for them.

Any updates you'll be making in the near future or things that other users can work with you on?

I don't know about the workflow that others use, or what's special to each project. So any ideas about how else to do things are welcome!

I don't want it to be a burden or painful to use. It has to get out of your way really quickly if you don't want to run it, yet it should also be prompting you so that the idea of running tests is not far from your mind. If people don't find that to be the case, then I'm keen to know.

Any feedback and pull requests are very welcome :smiley:

What OpenStack projects do you work with most?

I'm very new, but in the last month or so I've mostly been playing with OpenStack Ansible, as it seems like a good place to get started. Being able to set up your own development all-in-one setup is very neat!

I have a few mates who work on that project and so it seemed logical to start there and lean on their understanding and experience. I've really enjoyed it so far, it's been very rewarding and I'm super keen to learn more. I will hopefully get my feet wet with OpenStack Swift soon.

What can the OpenStack community/or the Foundation do to make life easier for independent users?

I probably don't have enough exposure yet to provide much that's useful here. The community has been very welcoming, the documentation is good and the development model makes it very easy for users to contribute. So that's really great. I'll see how I go over the coming months and keep that in mind.

Superuser is always looking for community content, if you spot something we should cover email

Cover Photo // CC BY NC

by Nicole Martinelli at August 17, 2016 11:01 PM

Fleio Blog

OpenStack billing features – Fleio 1.0 preview (part 1 of 2)

As we’re getting close to the 1.0-alpha release, I’d like to show you some of the OpenStack billing capabilities of Fleio. End-users can see a cost summary of their resource usage for the current billing cycle or previous cycles: As well as detailed cost information, including the number of hours (or other configured time unit) an instance […]

by adrian at August 17, 2016 07:15 PM

OpenStack Superuser

Enabling vCPE with OpenStack: Network configuration

Internet service providers (ISPs) provide equipment for customers and businesses to connect to the Internet. This equipment is called Customer Premises Equipment (CPE). The most recognizable form of CPE is a “box” in a subscriber’s home or premises that provides routing and an Internet connection. For a business, an upgrade in functionality (for example, adding optimization or firewalling) would require a new piece of equipment, which causes a delay for the customer and costs money for the provider in terms of hardware and labor costs for installation, maintenance, and support.

Network Function Virtualization (NFV) is the concept of replacing existing networking appliances (also known as Network Functions), such as routers and switches, with virtualized, software-based implementations of these appliances running on standard high-volume servers (also known as Virtual Network Functions (VNFs)). By leveraging NFV, ISPs can provide a virtual Customer Premises Equipment (vCPE) solution to reduce their overall operating and maintenance costs. Using OpenStack, it is possible for ISPs to configure the vCPE remotely, adjusting the functionality to enable a customer’s changing networking and service needs.

alt text here

Figure 1: VNF deployment as a vCPE

In many vCPE deployments, VNFs can be put into two broad categories of network functions: virtualized layer two (L2) and virtualized layer three (L3). The network illustrated in Figure 1 shows both types of VNFs in a deployment. The L2 VNF is a simple “bump-in-the-wire” type function, that is, traffic will pass through but the VNF does not participate in the network. In this case, the bump is performing frame-forwarding, which could be augmented with packet inspection functionality, for example. The L3 VNF is a software router. This configuration, including OpenStack networking layout, is shown in Figure 1.

alt text here

Figure 2: Logical network layout, that is, hardware equivalent.

Network configuration

Figure 2 shows the logical layout of the network, which is how it appears to the user or how it would be set up when converting from a hardware equivalent. In the figure, the ISP infrastructure and client devices are shown as the traffic sources and destinations. The “bump” is invisible to the rest of the network, as it has no IP address, and is set to forward traffic between its two interfaces. The “router” is a VM configured to act as a router and connect the LAN and WAN networks. It seems straightforward, but the first issue that becomes apparent is that the VNFs can’t be wired “back-to-back,” as would be the case with physical hardware in this configuration. Ideally, the interfaces on the L2 and L3 VNFs would be connected directly to each other, but since that is not possible with OpenStack (Liberty) features, they are connected via the “internal” network, which is essentially replacing a single cable from Figure 2. This solution looks a little bit more complicated, as shown in Figure 3. Since we cannot create a virtualized setup that is identical to Figure 2, additional resources (network, ports, and subnets) need to be created. For this simple case, it does not pose a problem, and we will continue the setup in Figure 3. However, this immediately raises a red flag for scalability.

alt text here

Figure 3: Network components and VNFs for vCPE setup using OpenStack*.

Figure 3 shows two VNFs between two networks. We can configure the provider networks to connect the LAN and WAN networks to two physical interfaces (through software bridges br-eth0 and br-eth1), and ping from iface0 to iface1 (or eth0 to eth1) to verify functionality.

To do this, perform the following steps:

  1. Prepare the VNFs.
  2. Create the VMs and networks.
  3. Create appropriate subnets and ports.
  4. Edit the security groups to allow TCP and ICMP traffic.
  5. Ping to verify connectivity.

Next steps

This solution can be implemented in OpenStack Liberty and will work well in this simple case. However, it is unlikely that this will fulfill the requirements for large scale providers. Scalability is one of these requirements (that is, adding additional VNFs to the chain), as providers would want to offer a number of services with the equipment. In terms of scalability, this method is clunky and awkward from an orchestration perspective. For this solution to add more VNFs, we need to add additional internal networks, as seen in Figure 4 and chain them together, which creates an awkward, hard-to-scale implementation. If there is some kind of fork in the flow of the VNFs (for example, load balancing or traffic classification), this implementation may become unwieldy and hard to manage and debug. This kind of topological dependency is something that this setup would not deal with.

alt text here

Figure 4: Extending the VNF chain.
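The scaling problem described above can be put in numbers with a quick back-of-the-envelope sketch (hypothetical Python; the resource counts are read off the topology in Figures 3 and 4):

```python
# Each extra VNF wired "back-to-back" in the chain needs its own
# internal (cable-replacement) network, a subnet for it, and a pair
# of ports, so the Neutron plumbing grows linearly with chain length.
def chain_resources(n_vnfs):
    """Extra Neutron resources needed to wire n_vnfs in a linear chain
    between the LAN and WAN provider networks."""
    internal_networks = n_vnfs - 1      # one internal net per VNF-to-VNF gap
    return {
        "internal_networks": internal_networks,
        "subnets": internal_networks,   # each internal net needs a subnet
        "vnf_ports": 2 * n_vnfs,        # each VNF has an in and an out port
    }


print(chain_resources(2))  # the two-VNF case of Figure 3
print(chain_resources(5))  # a modest chain already needs 4 extra networks
```

The counts themselves are small, but each network, subnet and port is a separately orchestrated object, which is what makes the approach clunky from an orchestration perspective.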

A technology known as service function chaining (SFC) is a good candidate to enable this more complex solution. With SFC, providers can chain numerous VNFs for traffic routing, WAN optimization, packet inspection, and so on with their vCPE offerings. Currently, SFC cannot be implemented in OpenStack without the use of an external network controller, such as OpenDaylight*, which would introduce additional dependencies into a deployment. The topological dependency in this setup is one of the key problems that the IETF SFC WG intends to tackle.

This post first appeared on the Intel Developer Zone blog. Superuser is always interested in community content, email:

Cover Photo // CC BY NC

by Emma L. Foley at August 17, 2016 06:13 PM

Making OpenStack easy: hosting provider offers public cloud ‘construction kits’

Cloud construction kits are the cornerstone of one company’s successful OpenStack strategy.

These easy-to-assemble public clouds first attracted small and medium businesses to Russian hosting provider Servionica’s MakeCloud. Four years after the launch, large retail and insurance companies are also constructing public clouds with it.

OpenStack functionality allowed Servionica to become not just another virtual machines (VMs) hosting provider but to offer its customers instruments enabling them to build a company's complete IT infrastructure in minutes, in other words IT "infrastructure construction sets."

Viacheslav Samarin, director of cloud services and products at Servionica, shared the story behind building the OpenStack-based public cloud and the customers who are using it.

What was the main purpose of building the OpenStack-based cloud?

Servionica is a part of i-Teco, one of the largest IT groups in Russia. Over the years, we have been building private clouds for enterprise customers based on the world’s leading vendors’ solutions and technologies. In 2009, we also launched our own VMware-based cloud platform and started to provide managed clouds to our enterprise customers, but we saw increasing demand for full customer self-service.

More customers wanted to manage their cloud resources themselves in a fully automated way, be able to order resources online, reconfigure them, view usage statistics and pay for consumed resources. Medium and small businesses were looking for an economical solution for their needs. It was also important for many corporate customers to get more than just a virtual server; they wanted an instrument that would allow them to build complete virtual IT infrastructure in minutes rather than days or even hours. So we decided we needed to build another cloud to satisfy all these needs. As a provider, we were looking for a platform that would ensure new functionality fast deployment to allow us to keep up with the changing market demand.

Why did you choose OpenStack for your cloud?

Thanks to our experience in building private clouds, we knew a lot about vendors’ platforms. In 2012, none of them were ready to be used to build a true public cloud without a lot of additional work, effort and customization put into them. We also did not like the idea of our new business depending on a platform vendor’s road map and not being able to influence any new functionality roll out.

So we decided to look at open source solutions. After playing with several of them (installing each on a couple of old laptops), our engineers and developers decided on OpenStack. We wanted to have good reasons behind our choice, so we looked at the community and found it somewhat similar to the Linux community. New OpenStack releases were coming out every six months, which made us believe there was good, dynamic work being done by the community. Documentation for the Diablo release and guides for deployment, administration and development were far from complete in 2012, but improved quickly with every release. Although we spent a lot of time discussing our decision almost four years ago, today we think it was quite intuitive – but a smart guess.

What was it like launching the pilot OpenStack cloud, your first experience with open source?

It took us six months, and we started with Essex. We studied documentation, but a lot of functionality described was not really working or was working not as described. We worked on fixing bugs, as well as writing new code to add functions and features for the services we wanted to launch.

There were modules missing among OpenStack projects that were essential for launching a fully automated public cloud. Therefore, we found a partner that provided billing, a personal account area and a front-end portal. We integrated with Velvica’s platform to get this functionality and still use it as a part of our MakeCloud infrastructure. Instead of the OpenStack dashboard, we developed our own control panel to implement the user scenarios we designed.

An important decision we had to make was on the storage to be used. We tried GlusterFS and Sheepdog, but both were quite immature and did not provide the performance needed. We launched our service using IBM Storwize V3700, which worked but had performance issues, and its Cinder driver also had security implications. Today we use NetApp FAS8020 and are very happy with its performance and ONTAP technologies.

It is worth mentioning that, after many years of experience with vendors’ software and hardware, learning to work with open source was quite special. Our management and business people initially set their expectations for the amount of R&D and in-house software development based on a quick assessment of the documentation available for the code and the projects. Very soon we learned there was much more work required. Fortunately, we had a team of very bright DevOps engineers working on our project. Today, after four years of experience, we have learned to assess the scope of work and to plan it much better.

In mid-December 2012 we went operational.

What problems did you face when going commercial, and how did you solve them?

It took us four months to go from pilot to commercial. Meanwhile, Grizzly was released and in April 2013 we started to provide commercial services based on it.

With the growth of our customer base and of the load on our platform, the need for additional development and optimization was obvious. To decrease latency, we optimized how Neutron worked with our database. We fully automated additional network functions on our platform so that our customers could order them as services with just a click of a mouse: VPN (based on Cloudpipe), DNS (based on Designate) and load balancing as a service (based on Equilibrium). To ease building IT infrastructure for our customers on our platform, we introduced additional functionality, such as choosing the default gateway when connecting a VM to several networks and floating IP auto allocation.

As we were not satisfied with how the Storwize plugin worked, we tried several other storage systems and eventually replaced Storwize with NetApp. Based on our tests and experience, NetApp’s performance and features, such as snapshots and WAFL, in conjunction with its OpenStack integration, make it the best choice for an OpenStack cloud provider.

What are your future plans for the public cloud?

Initially, we targeted small and medium size companies with our service. For example, we have been providing IT services to software development companies, IT infrastructure for CRM and commercial systems to wholesale companies, and mostly VMs to private users. Over time, some larger companies got interested. Our approach of providing a cloud “construction set” that allows a company to build complete IT infrastructure in minutes turned out to help not only SMBs but enterprise customers as well.

One of our large users is the widely known Russian retail company DA! («ДА!»), to which we provide a test zone. Another example is JSC «SOGAZ» (АО «СОГАЗ»), one of the largest Russian insurers at the federal level, which uses the OpenStack-based cloud to host its production information systems.

However, certain requirements of enterprise customers are definitely more advanced. For example, significant attention is paid to advanced monitoring and backup functionality. Our near-term plans and longer-term road map include satisfying customer requests for such advanced services. They will eventually be available on our public platform for all users.

We are also getting an increasing amount of requests to build private on-premise clouds for our customers based on our platform. Over the last year, we have completed three pilots and three commercial projects for our customers.

Cover photo // CC by NC

by Allison Price at August 17, 2016 04:39 PM

August 16, 2016


5 Minutes Stacks, episode 29 : GlusterFs multi Data center

Episode 29 : GlusterFs multi Data center


GlusterFS is a powerful network/cluster filesystem written in user space which uses FUSE to hook into the VFS layer. GlusterFS takes a layered approach to the file system, where features are added or removed as required. Though GlusterFS is a file system, it uses tried and tested disk file systems like ext3, ext4, xfs, etc. to store the data. It can easily scale up to petabytes of storage, which is available to users under a single mount point.

In this episode, we will create two GlusterFS servers that replicate between them, even though they are not in the same zone.


The version

  • Ubuntu Xenial 16.04
  • GlusterFS 3.7

The prerequisites to deploy this stack

Size of the instance

By default, the script deploys an instance of type “Standard 1” for the fr1 zone and “Standard 1” for fr2. Instances are charged by the minute and capped at their monthly price (you can find more details on the Pricing page of the Cloudwatt website). Obviously, you can adjust the stack parameters, particularly the default size.

By the way…

If you do not like command lines, you can go directly to the “run it through the console” section by clicking here

What will you find in the repository

Once you have cloned the GitHub repository, you will find in bundle-xenial-glusterfs-multi-dc/:

  • bundle-xenial-glusterfs-multi-dc-fr1.heat.yml: HEAT orchestration template. It will be used to deploy the necessary infrastructure in the fr1 zone.
  • bundle-xenial-glusterfs-multi-dc-fr2.heat.yml: HEAT orchestration template. It will be used to deploy the necessary infrastructure in the fr2 zone.

  • Stack launching script in the fr1 zone. This is a small script that will save you some copy-paste.
  • Stack launching script in the fr2 zone. This is a small script that will save you some copy-paste.

  • Floating IP recovery script.


Initialize the environment

Have your Cloudwatt credentials in hand and click HERE. If you are not logged in yet, you will go through the authentication screen, then the script download will start. Thanks to it, you will be able to set up shell access to the Cloudwatt APIs.

Source the downloaded file in your shell. Your password will be requested.

 $ source COMPUTE-[...]
 Please enter your OpenStack Password:

Once this is done, the OpenStack command line tools can interact with your Cloudwatt user account.

Adjust the parameters

In the bundle-xenial-glusterfs-multi-dc-fr2.heat.yml file, you will find at the top a section named parameters. The sole mandatory parameter to adjust is the one called keypair_name. Its default value must contain a valid keypair with regards to your Cloudwatt user account. It is within this same file that you can adjust the instance size by playing with the flavor parameter.

heat_template_version: 2013-05-23
description: All-in-one Glusterfs Multi DC

parameters:
  keypair_name:
    description: Keypair to inject in instance
    label: SSH Keypair
    type: string
    default: my_keypair_name        <-- Indicate here your keypair

  flavor:
    description: Flavor to use for the deployed instance
    type: string
    label: Instance Type (Flavor)
    constraints:
      - allowed_values:

In the bundle-xenial-glusterfs-multi-dc-fr1.heat.yml file, you will find at the top a section named parameters. Two parameters must be adjusted: the first, keypair_name, whose default value must contain a valid keypair with regards to your Cloudwatt user account, and the second, slave_public_ip, which must contain the floating IP of the fr2 stack. It is within this same file that you can adjust the instance size by playing with the flavor parameter.

heat_template_version: 2013-05-23
description: All-in-one Glusterfs Multi Dc

parameters:
  keypair_name:
    description: Keypair to inject in instance
    label: SSH Keypair
    type: string
    default: my-keypair-name                <-- Indicate here your keypair

  flavor:
    description: Flavor to use for the deployed instance
    type: string
    label: Instance Type (Flavor)
    constraints:
      - allowed_values:

  slave_public_ip:
    type: string
    label: slave public ip
    default:                      <-- Indicate here the floating IP of the GlusterFS DC 2 stack


Start stack

In a shell, run the script

$ export OS_REGION_NAME=fr2
$ ./ your_stack_name

Example:

$ ./ your_stack_name

| id                                   | stack_name      | stack_status       | creation_time        |
| ee873a3a-a306-4127-8647-4bc80469cec4 | your_stack_name       | CREATE_IN_PROGRESS |  |

Wait 5 minutes and the stack will be fully operational. (Use watch to see the status in real time.)

$ heat resource-list your_stack_name
| resource_name    | physical_resource_id                                | resource_type                   | resource_status | updated_time         |
| floating_ip      | 44dd841f-8570-4f02-a8cc-f21a125cc8aa                | OS::Neutron::FloatingIP         | CREATE_COMPLETE | 2016-06-22T11:03:51Z |
| security_group   | efead2a2-c91b-470e-a234-58746da6ac22                | OS::Neutron::SecurityGroup      | CREATE_COMPLETE | 2016-06-22T11:03:51Z |
| network          | 7e142d1b-f660-498d-961a-b03d0aee5cff                | OS::Neutron::Net                | CREATE_COMPLETE | 2016-06-22T11:03:51Z |
| subnet           | 442b31bf-0d3e-406b-8d5f-7b1b6181a381                | OS::Neutron::Subnet             | CREATE_COMPLETE | 2016-06-22T11:03:51Z |
| server           | f5b22d22-1cfe-41bb-9e30-4d089285e5e5                | OS::Nova::Server                | CREATE_COMPLETE | 2016-06-22T11:03:51Z |
| floating_ip_link | 44dd841f-8570-4f02-a8cc-f21a125cc8aa-`floating_ip_stack_fr2`  | OS::Nova::FloatingIPAssociation | CREATE_COMPLETE | 2016-06-22T11:03:51Z |

The script takes care of running the necessary API requests to execute the normal heat template, which:

  • Starts an Ubuntu Xenial based instance in the fr2 zone
  • Exposes it on the Internet via a floating IP.

After deploying the stack in the fr2 zone, you can launch the stack in the fr1 zone.

$ export OS_REGION_NAME=fr1
$ ./ your_stack_name
| id                                   | stack_name      | stack_status       | creation_time        |
| ee873a3a-a306-4127-8647-4bc80469cec4 | your_stack_name       | CREATE_IN_PROGRESS | 2016-06-22T11:03:51Z |

The script takes care of running the necessary API requests to execute the normal heat template, which:

  • Starts an Ubuntu Xenial based instance in the fr1 zone
  • Exposes it on the Internet via a floating IP.

All of this is fine,

but you do not have a way to create the stack from the console?

We do indeed! Using the console, you can deploy both GlusterFS servers:

  1. Go to the Cloudwatt Github in the applications/bundle-xenial-glusterfs-multi-dc repository.
  2. Click on the file named bundle-xenial-glusterfs-multi-dc-fr1(or 2).heat.yml (or bundle-xenial-glusterfs-multi-dc-fr1(or 2).restore.heat.yml to restore from backup)
  3. Click on RAW, a web page will appear containing purely the template
  4. Save the file to your PC. You can use the default name proposed by your browser (just remove the .txt)
  5. Go to the « Stacks » section of the console
  6. Click on « Launch stack », then « Template file » and select the file you just saved to your PC, and finally click on « NEXT »
  7. Name your stack in the « Stack name » field
  8. Enter the name of your keypair in the « SSH Keypair » field
  9. Write a passphrase that will be used for encrypting backups
  10. Choose your instance size using the « Instance Type » dropdown and click on « LAUNCH »

The stack will be automatically generated (you can see its progress by clicking on its name). When all modules become green, the creation will be complete. You can then go to the “Instances” menu to find the floating IP, or simply refresh the current page and check the Overview tab for a handy link.

If you’ve reached this point, you’re already done! Go enjoy GlusterFS!


In order to test the replication state between both servers, connect to the fr1 GlusterFS server, then type the following command.

# gluster vol geo-rep datastore your_stack_name-dc2::datastore status
MASTER NODE            MASTER VOL    MASTER BRICK     SLAVE USER    SLAVE                             SLAVE NODE             STATUS    CRAWL STATUS       LAST_SYNCED                  
your_stack_name-dc1    datastore     /brick/brick1    root          your_stack_name-dc2::datastore    your_stack_name-dc2    Active    Changelog Crawl    2016-06-23 10:35:56          
your_stack_name-dc1    datastore     /brick/brick2    root          your_stack_name-dc2::datastore    your_stack_name-dc2    Active    Changelog Crawl    2016-06-23 10:35:56    

You can mount the GlusterFS volume on a client machine connected to the same network as the server machine:

# apt-get -y install glusterfs-client
# mkdir /mnt/datastore
# mount -t glusterfs your_stack_name-dc1:datastore /mnt/datastore

To restart the glusterfs-server service

# service glusterfs-server restart


If the master cannot reach the slave, run the following commands on the master.

# gluster system:: execute gsec_create
# gluster vol geo-rep datastore your_stack_name-dc2::datastore create push-pem force
# gluster vol geo-rep datastore your_stack_name-dc2::datastore start

So watt?

On each GlusterFS server, in both fr1 and fr2, we created a replicated volume named datastore containing two bricks, /brick/brick1 and /brick/brick2. You can add more bricks; to learn how, click on this link.

If everything goes well, remember to tighten the security groups for each machine; don’t leave ports open to the public.

Other resources you could be interested in:

Have fun. Hack in peace.

by Mohamed-Ali at August 16, 2016 10:00 PM

Adam Young

Running Unit Tests on Old Versions of Keystone

Just because Icehouse is EOL does not mean no one is running it. One part of my job is back-porting patches to older versions of Keystone that my company supports.

A dirty secret is that we only package the code needed for the live deployment, though, not the unit tests. In this case, I needed to test a bug fix against a version of Keystone that was, essentially, upstream Icehouse.

Running the unit tests with Tox had some problems, mainly due to recent oslo components not being compatible that far back.

Here is what I did:

  • Cloned the  keystone repo
  • applied the patch to test
  • ran tox -r -epy27  to generate the virtual environment.  Note that the tests fail.
  • . .tox/py27/bin/activate
  • python -m unittest keystone.tests.test_v3_identity.IdentityTestCase
  • see that test fails due to:
    • AttributeError: ‘module’ object has no attribute ‘tests’
  • run python to get an interactive interpreter
    • import keystone.tests.test_v3_identity
    • Get the error below:
ImportError: No module named utils
>>> import oslo-utils
  File "<stdin>", line 1
    import oslo-utils
               ^
SyntaxError: invalid syntax
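As an aside, the interactive import above could never work even with the package installed: a hyphen is not a valid character in a Python identifier, so the parser rejects `import oslo-utils` before any module lookup happens. The package is imported as `oslo_utils`. A minimal demonstration:

```python
# Compiling "import oslo-utils" fails at parse time with a SyntaxError;
# no installed package can change that. The real module name is oslo_utils.
try:
    compile("import oslo-utils", "<stdin>", "exec")
except SyntaxError as err:
    print("SyntaxError:", err.msg)
```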

To deal with this:

  • Clone the oslo-utils repo
    • git clone
  • checkout out the tag that is closest to what I think we need.  A little trial and error showed I wanted kilo-eol
    • git checkout kilo-eol
  • Build and install in the venv (note that the venv is still activated)
    • cd oslo.utils/
    • python setup.py install

Try running the tests again.  Similar process shows that something is mismatched with oslo.serialization.  Clone, checkout, and build, this time the tag is also kilo-eol.
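Before rerunning the tests, it is worth sanity-checking that the venv can now resolve both rebuilt libraries. A generic sketch (pure stdlib, with no assumptions about what is actually installed):

```python
import importlib.util

# Report whether each module can be resolved on the current sys.path;
# find_spec returns None for modules that are not importable.
for mod in ("oslo_utils", "oslo_serialization"):
    spec = importlib.util.find_spec(mod)
    print(mod, "found" if spec else "missing")
```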

Running the unit test runs and shows:

Traceback (most recent call last):
  File "keystone/tests/", line 835, in test_delete_user_and_check_role_assignment_fails
    member_url, user = self._create_new_user_and_assign_role_on_project()
  File "keystone/tests/", line 807, in _create_new_user_and_assign_role_on_project
    user_ref = self.identity_api.create_user(new_user)
  File "keystone/", line 74, in wrapper
    result = f(*args, **kwargs)
  File "keystone/identity/", line 189, in wrapper
    return f(self, *args, **kwargs)
TypeError: create_user() takes exactly 3 arguments (2 given)

Other unit tests run successfully. I’m back in business.

by Adam Young at August 16, 2016 09:24 PM

RBAC Policy Updates in Tripleo

Policy files contain the access control rules for an OpenStack deployment. The upstream policy files are conservative and restrictive; they are designed to be customized on the end user’s system. However, poorly written changes can potentially break security, so their deployment should be carefully managed and monitored.

Since RBAC policy controls access to the Keystone server, the Keystone policy files themselves are not served from a database in the Keystone server. They are, instead, configuration files, managed via the deployment’s content management system. In a TripleO based deployment, none of the other services use the policy storage in Keystone, either.

In TripleO, the deployment of the overcloud is managed via Heat. The OpenStack TripleO Heat templates have support for deploying files at the end of the install, and this matches how we need to deploy policy.


  1. Create a directory structure that mimics the policy file layout in the overcloud.  For this example, I will limit it to just Keystone.  Create a directory called policy (making this a git repository is reasonable) and under it create etc/keystone.
  2. Inside that directory, copy either the default policy.json file or the overcloudv3sample.json to be named policy.json.
    1. keystone:keystone as the owner
    2. rw-r----- as the permissions
  3. Modify the policy files to reflect organizational rules
  4. Use the offline tool to check policy access control.  Confirm that the policy behaves as desired.
  5. create a tarball of the files.
    1. cd policy
    2. tar -zcf openstack-policy.tar.gz etc
  6. Use the upload-swift-artifacts script to upload to the undercloud Swift:
    1. . ./stackrc;  ./upload-swift-artifacts  openstack-policy.tar.gz
  7. Confirm the upload with swift list -l overcloud
    1. 1298 2016-08-04 16:34:22 application/x-tar openstack-policy.tar.gz
  8. Redeploy the overcloud
  9. Confirm that the policy file contains the modifications made in development
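To make step 3 concrete, a policy file is a JSON mapping from API actions to rule expressions. The fragment below uses the stock Keystone rule names (admin_required, identity:create_user) to restrict user creation to the admin role; parsing it with plain Python is a cheap first sanity check before running the offline tool:

```python
import json

# Excerpt in the style of etc/keystone/policy.json: "identity:create_user"
# is the v3 action name, "role:admin or is_admin:1" matches the token's roles.
policy_fragment = '''
{
    "admin_required": "role:admin or is_admin:1",
    "identity:create_user": "rule:admin_required"
}
'''

rules = json.loads(policy_fragment)  # raises ValueError on malformed JSON
print(rules["identity:create_user"])
```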

by Adam Young at August 16, 2016 04:01 PM

Tesora Corp

Why We Got Involved with OpenStack Days East

The first-ever OpenStack Days East event is coming to New York City on August 23-24, which our company and yours truly have been organizing, along with GigaSpaces and our steering committee that includes representatives from the east coast OpenStack Meetups.

It’s a big undertaking, let me tell you, and takes months of advance work to make happen. But it is happening and the event is coming together nicely with great participation by many, many sponsoring companies and more than 50 speakers. Those include Jonathan Bryce, Executive Director of the OpenStack Foundation, and our featured guest speaker, Donna Scott, Vice President and Distinguished Analyst at Gartner, who will keynote the event.

From our first thoughts about putting this together, we wanted this event to stand out by showing what is possible using OpenStack today. Our intent is to bring together people interested in learning more about OpenStack with the pioneers and those who have been on the cutting edge using OpenStack. In my view, the highlight of our program is the case studies we’ll hear from Bloomberg, Comcast, PayPal, EBSCO, NetApp and Walmart talking about their OpenStack deployments.

It was a bold move by our company and our management to invest in this event – in time and money. So why did we do it? We couldn’t help ourselves, I suppose. Everyone in our company and all of our time is wrapped up in OpenStack, and especially Trove Database as a Service. We’ve met tons of people and made many friends since Tesora got involved with OpenStack in early 2014. We wondered why there wasn’t an event like this on the east coast. For us, this is kind of an open house where we’re inviting everyone that shares this common interest in OpenStack.

Early on, we discovered that our enthusiasm was matched by every company that we talked with about the event – and they were willing to invest their time and money, too. So, here we are with one week to go. I like to think all the hard advance work is behind us and am really looking forward to seeing it all come together starting on August 23. I hope you’ll come join us. I promise that like every OpenStack event, it will be educational, fun and friendly.

The post Why We Got Involved with OpenStack Days East appeared first on Tesora.

by Frank Days at August 16, 2016 03:48 PM

Adam Young

Diagnosing Tripleo Failures Redux

Steven Hardy has provided an invaluable reference with his troubleshooting blog post. However, I recently had a problem that didn’t quite match what he was showing. Zane Bitter got me oriented.

Upon a redeploy, I got a failure.

$ openstack stack list
| ID                                   | Stack Name | Stack Status  | Creation Time       | Updated Time        |
| 816c67ab-d360-4f9b-8811-ed2a346dde01 | overcloud  | UPDATE_FAILED | 2016-08-16T13:38:46 | 2016-08-16T14:41:54 |

Listing the Failed resources:

$  heat resource-list --nested-depth 5 overcloud | grep FAILED
| ControllerNodesPostDeployment                 | 7ae99682-597f-4562-9e58-4acffaf7aaac          | OS::TripleO::ControllerPostDeployment                                           | UPDATE_FAILED   | 2016-08-16T14:44:42 | overcloud 

No deployment listed. How to display the error? We want to show the resource named ControllerNodesPostDeployment associated with the overcloud stack:

$ heat resource-show overcloud ControllerNodesPostDeployment
| Property               | Value                                                                                                                                                               |
| attributes             | {}                                                                                                                                                                  |
| creation_time          | 2016-08-16T13:38:46                                                                                                                                                 |
| description            |                                                                                                                                                                     |
| links                  | (self)      |
|                        | (stack)                                             |
|                        | (nested) |
| logical_resource_id    | ControllerNodesPostDeployment                                                                                                                                       |
| physical_resource_id   | 7ae99682-597f-4562-9e58-4acffaf7aaac                                                                                                                                |
| required_by            | BlockStorageNodesPostDeployment                                                                                                                                     |
|                        | CephStorageNodesPostDeployment                                                                                                                                      |
| resource_name          | ControllerNodesPostDeployment                                                                                                                                       |
| resource_status        | UPDATE_FAILED                                                                                                                                                       |
| resource_status_reason | Engine went down during resource UPDATE                                                                                                                             |
| resource_type          | OS::TripleO::ControllerPostDeployment                                                                                                                               |
| updated_time           | 2016-08-16T14:44:42                                                                                                                                                 |

Note this message:

Engine went down during resource UPDATE

Looking in the journal:

Aug 16 15:16:15 undercloud kernel: Out of memory: Kill process 17127 (heat-engine) score 60 or sacrifice child
Aug 16 15:16:15 undercloud kernel: Killed process 17127 (heat-engine) total-vm:834052kB, anon-rss:480936kB, file-rss:1384kB

Just like Brody said, we are going to need a bigger boat.

by Adam Young at August 16, 2016 03:35 PM

Daniel P. Berrangé

Improving QEMU security part 7: TLS support for migration

This blog is part 7 of a series I am writing about work I’ve completed over the past few releases to improve QEMU security related features.

The live migration feature in QEMU allows a running VM to be moved from one host to another with no noticeable interruption in service and minimal performance impact. The live migration data stream will contain a serialized copy of the state of all emulated devices, along with all the guest RAM. In some versions of QEMU it is also used to transfer disk image content, but in modern QEMU use of the NBD protocol is preferred for this purpose. The guest RAM in particular can contain sensitive data that needs to be protected against any would-be attackers on the network between source and target hosts.

There are a number of ways to provide such security using external tools/services including VPNs, IPsec, SSH/stunnel tunnelling. The libvirtd daemon often already has a secure connection between the source and destination hosts for its own purposes, so many years back support was added to libvirt to automatically tunnel the live migration data stream over libvirt’s own secure connection. This solved both the encryption and authentication problems at once, but there are some downsides to this approach.

Tunnelling the connection means extra data copies for the live migration traffic, and when we look at guests with RAM many GB in size, the number of data copies will start to matter. The libvirt tunnel only supports tunnelling of a single data connection, and in future QEMU may well wish to use multiple TCP connections for the migration data stream to improve performance of post-copy. The use of NBD for storage migration is not supported with tunnelling via libvirt, since it would require extra connections too. IOW, while tunnelling over libvirt was a useful short-term hack to provide security, it has outlived its practicality.

It is clear that QEMU needs to support TLS encryption natively on its live migration connections. The QEMU migration code has historically had its own distinct I/O layer called QEMUFile which mixes up tracking of migration state with the connection establishment and I/O transfer support. As mentioned in previous blog post, QEMU now has a general purpose I/O channel framework, so the bulk of the work involved converting the migration code over to use the QIOChannel classes and APIs, which greatly reduced the amount of code in the QEMU migration/ sub-folder as well as simplifying it somewhat. The TLS support involves the addition of two new parameters to the migration code. First the “tls-creds” parameter provides the ID of a previously created TLS credential object, thus enabling use of TLS on the migration channel. This must be set on both the source and target QEMU’s involved in the migration.

On the target host, QEMU would be launched with a set of TLS credentials for a server endpoint:

$ qemu-system-x86_64 -monitor stdio -incoming defer \
    -object tls-creds-x509,dir=/home/berrange/security/qemutls,endpoint=server,id=tls0 \
    ...other args...
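The directory named by dir= must already contain the PEM files that the tls-creds-x509 object loads; for a server endpoint the expected file names are:

/home/berrange/security/qemutls/
├── ca-cert.pem       # CA certificate (also used to validate clients when peer verification is on)
├── server-cert.pem   # this host's certificate, signed by the CA
└── server-key.pem    # this host's private key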

To enable incoming TLS migration 2 monitor commands are then used

(qemu) migrate_set_str_parameter tls-creds tls0
(qemu) migrate_incoming tcp:myhostname:9000

On the source host, QEMU is launched in a similar manner but using client endpoint credentials

$ qemu-system-x86_64 -monitor stdio \
    -object tls-creds-x509,dir=/home/berrange/security/qemutls,endpoint=client,id=tls0 \
    ...other args...

To enable outgoing TLS migration 2 monitor commands are then used

(qemu) migrate_set_str_parameter tls-creds tls0
(qemu) migrate tcp:otherhostname:9000

The migration code supports a number of different protocols besides just “tcp:“. In particular it allows an “fd:” protocol to tell QEMU to use a passed-in file descriptor, and an “exec:” protocol to tell QEMU to launch an external command to tunnel the connection. It is desirable to be able to use TLS with these protocols too, but when using TLS the client QEMU needs to know the hostname of the target QEMU in order to correctly validate the x509 certificate it receives. Thus, a second “tls-hostname” parameter was added to allow QEMU to be informed of the hostname to use for x509 certificate validation when using a non-tcp migration protocol. This can be set on the source QEMU prior to starting the migration using the “migrate_set_str_parameter” monitor command

(qemu) migrate_set_str_parameter tls-hostname myhost.mydomain

This feature has been under development for a while and finally merged into QEMU GIT early in the 2.7.0 development cycle, so will be available for use when 2.7.0 is released in a few weeks. With the arrival of the 2.7.0 release there will finally be TLS support across all QEMU host services where TCP connections are commonly used, namely VNC, SPICE, NBD, migration and character devices.

In this blog series:

by Daniel Berrange at August 16, 2016 01:00 PM

Improving QEMU security part 6: TLS support for character devices

This blog is part 6 of a series I am writing about work I’ve completed over the past few releases to improve QEMU security related features.

A number of QEMU device models and objects use character devices for providing connectivity with the outside world, including the QEMU monitor, serial ports, parallel ports, virtio serial channels, RNG EGD object, CCID smartcard passthrough, IPMI device, USB device redirection and vhost-user. While some of these will only ever need a character device configured with local connectivity, some will certainly need to make use of TCP connections to remote hosts. Historically these connections have always been entirely in clear text, which is unacceptable in the modern hostile network environment where even internal networks cannot be trusted. Clearly the QEMU character device code requires the ability to use TLS for encrypting sensitive data and providing some level of authentication on connections.

The QEMU character device code was mostly using GLib’s GIOChannel framework for doing I/O but this has a number of unsatisfactory limitations. It cannot do vectored I/O, is not easily extensible and does not concern itself at all with initial connection establishment. These are all reasons why the QIOChannel framework was added to QEMU. So the first step in supporting TLS on character devices was to convert the code over to use QIOChannel instead of GIOChannel. With that done, adding in support for TLS was quite straightforward, merely requiring addition of a new configuration property (“tls-creds“) to set the desired TLS credentials.

For example, to run a QEMU VM with a serial port listening on IP, port 9000, acting as a TLS server:

$ qemu-system-x86_64 \
      -object tls-creds-x509,id=tls0,endpoint=server,dir=/home/berrange/qemutls \
      -chardev socket,id=s0,host=,port=9000,tls-creds=tls0,server \
      -device isa-serial,chardev=s0
      ...other QEMU options...

It is possible test connectivity to this TLS server using the gnutls-cli tool

$ gnutls-cli --priority=NORMAL -p 9000 \
--x509cafile=/home/berrange/security/qemutls/ca-cert.pem \

In the above example, QEMU was running as a TCP server, and acting as the TLS server endpoint, but this matching is not required. It is valid to configure it to run as a TLS client if desired, though this would be somewhat uncommon.

Of course you can connect 2 QEMU VMs together, both using TLS. Assuming the above QEMU is still running, we can launch a second QEMU connecting to it with

$ qemu-system-x86_64 \
      -object tls-creds-x509,id=tls0,endpoint=client,dir=/home/berrange/qemutls \
      -chardev socket,id=s0,host=,port=9000,tls-creds=tls0 \
      -device isa-serial,chardev=s0
      ...other QEMU options...

Notice, we’ve changed the “endpoint” and removed the “server” option, so this second QEMU runs as a TCP client and acts as the TLS client endpoint.
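The client-side credentials directory referenced above follows the same convention as on the server, but the tls-creds-x509 object loads client file names for a client endpoint:

/home/berrange/qemutls/
├── ca-cert.pem       # the same CA certificate as on the server side
├── client-cert.pem   # this host's certificate, signed by the CA
└── client-key.pem    # this host's private key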

This feature is available since the QEMU 2.6.0 release a few months ago.

In this blog series:

by Daniel Berrange at August 16, 2016 12:11 PM


Rackspace said to be close to private equity buyout

The post Rackspace said to be close to private equity buyout appeared first on Mirantis | The Pure Play OpenStack Company.

Rackspace, one of the founding companies behind OpenStack, is said to be close to a deal with Apollo Global Management to bring the company private at a value of between $3.4 billion and $4 billion. Rumors swirled around the company in 2014, but the company reportedly couldn’t get the price it was looking for, said to be in the neighborhood of $6 billion.

Since then, Rackspace has changed its market strategy, exiting the commodity cloud business and focusing on “managed services”, in which customers pay for resources and for the “fanatical support” the company is known for. That “fanatical support” is now also offered for AWS and Azure. This week Rackspace also sold its Cloud Sites premium hosting business, which is separate from its cloud services and involves sites that start at $150/month, to Liquid Web for an undisclosed sum.


The post Rackspace said to be close to private equity buyout appeared first on Mirantis | The Pure Play OpenStack Company.

by Nick Chase at August 16, 2016 09:50 AM

HPE loses Bill Hilf, gains SGI, among buyout rumors

The post HPE loses Bill Hilf, gains SGI, among buyout rumors appeared first on Mirantis | The Pure Play OpenStack Company.

Yes, you read that right, HPE is also the subject of rumors of a private equity buyout.  In this case, Reuters reports that several firms, including Apollo Global Management, which is also said to be looking at Rackspace, are looking at buying either all of HPE, or just its software businesses such as Autonomy.   

The company announced August 1 that it will be reorganizing (again), this time moving the HP Helion OpenStack product under the Enterprise Group, as part of the Software-Defined & Cloud Group (SDCG), led by Ric Lewis.  As part of this restructuring, senior vice president and general manager of HP Cloud Bill Hilf will be leaving the company “to pursue other opportunities”.  

According to a classy farewell email reprinted in GeekWire, Hilf said, “Like many other companies, there comes a time where you naturally mainstream specific technologies into the broader product strategies, and now is the right time to integrate our Helion software assets more deeply into HPE SW and EG.  In 2016, Cloud is now part of every product strategy at HPE, so this makes good sense to integrate, versus maintain a stand-alone division.

“For me personally, when I joined, I made a commitment to build the cloud business here at HP for three years, and I’m now 3 years and one month in, and we have had very strong growth – just this past June HPE Helion was recognized as a leader in the private cloud market for the third year in a row.  So this is the right time for me to help move the technologies and teams more deeply into the company, and to pursue new opportunities.”

Meanwhile, in other news, HP has bought big data analytics firm SGI for $275 million to shore up its position in high performance computing (HPC) and data analytics. According to SDN Central, “HPE says the SGI acquisition helps it expand its presence in key verticals such as government, research, and life sciences.”


The post HPE loses Bill Hilf, gains SGI, among buyout rumors appeared first on Mirantis | The Pure Play OpenStack Company.

by Nick Chase at August 16, 2016 09:40 AM

Mirantis supports SUSE with Mirantis OpenStack, SUSE supports Mirantis OpenStack with RHEL

The post Mirantis supports SUSE with Mirantis OpenStack, SUSE supports Mirantis OpenStack with RHEL appeared first on Mirantis | The Pure Play OpenStack Company.

Events often include product or company announcements, and OpenStack Days Silicon Valley was no exception, with Mirantis following up its recent announcement that it would work with Intel and Google to rearchitect OpenStack for containers and continuous delivery with the news that it would partner with SUSE to ensure that Mirantis OpenStack can run on that distribution — and a couple of other important ones.

Mirantis CMO and Co-Founder Boris Renski joked that as MOS has always supported Ubuntu and had agreements for Oracle Linux support, the addition of SUSE meant that there was just one major Linux distribution missing, a gap that was about to be resolved.

SUSE, you see, in addition to providing SUSE Linux Enterprise Server (SLES), provides support for customers running Red Hat Enterprise Linux (RHEL) — and now they would do the same for Mirantis customers.

As you might imagine, Red Hat, which used to be an investor in Mirantis but had a falling out in recent years, was none-too-thrilled with the announcement.  “We aren’t clear what kind of support Mirantis and SUSE can claim to provide for another company’s offerings,” Margaret Dawson, senior director of product marketing at Red Hat, told CRN, “but this makes no sense to us, and it would certainly be confusing and potentially dangerous for customers.”

Red Hat has always maintained that the best way to create a solution such as OpenStack is through “co-engineering”, in which the operating system and OpenStack distribution are provided by the same vendor and that it’s “unheard of” for one vendor to support another vendor’s product.

“Enterprise support agreements have often seen vendors willing to take on the support of other vendors’ products,” wrote Ian Murphy in EnterpriseTimes. “This is nothing new, and in many cases, those agreements are not subject to any agreement between vendors. In the Open Source market where access to the source code is part of the deal, companies are often willing to do whatever it takes to ensure software runs smoothly.”

The issue here, of course, is that some companies take exception to the notion of being locked into a single vendor’s products and want flexibility.  That is, in fact, how SUSE wound up supporting RHEL in the first place — it’s a part of their offer for customers who are ready to move off RHEL onto SUSE but need support during the transition.

Mark Smith, a Global Products and Solutions Manager from SUSE, was quick to point out that the announcement in no way signals a pullback from OpenStack for the company. “There is no change in our commitment to OpenStack. This is just about allowing customers to choose which OpenStack they want to run on SUSE Enterprise Linux.”

According to the arrangement, Mirantis will offer Level 1 and Level 2 support for any RHEL-related issues and will call on SUSE for any Level 3 issues that arise with customers.  Mirantis offers packages that include 1-year and 3-year subscriptions, with up to 24x7x365 email and phone support, with a one-hour guaranteed response time.


The post Mirantis supports SUSE with Mirantis OpenStack, SUSE supports Mirantis OpenStack with RHEL appeared first on Mirantis | The Pure Play OpenStack Company.

by Nick Chase at August 16, 2016 09:28 AM


OpenStack Day Budapest – Bringing Summit Spirit to a Local Level

Ecosystem growth and adoption was the key message at the 4th annual OpenStack Budapest Day earlier this year. As part of this OpenStack event, two prominent ecosystem members Aptira and Ericsson co-organised a full-scale one day conference with free upstream University training to educate developers about the OpenStack contribution process and tools.

Held at the Royal Castle Garden’s Bazaar, this sold-out event attracted over 330 people from the greater CEE region, a marked increase over previous years. It featured 27 local and international speakers and 25 sessions about OpenStack deployment, operation and user stories. Jonathan Bryce from the OpenStack Foundation started the morning keynote sessions, followed by Rob McMahon from Red Hat and Simon Briggs from SUSE. The keynotes were closed by a very interesting user story presented by Péter Gerner from ITSH, who gave an overview of the Open Public Cloud effort of the Deutsche Telekom group, a public cloud provider project based on OpenStack operated partly from Hungary.

The original idea behind organising this event in the region was to bring the spirit of the Summits to the local level. It is important to note that we wanted to build a social space to connect the OpenStack community, as well as to provide valuable content to conference attendees.

Aptira is committed to supporting the OpenStack community world-wide, and organises many similar events and meetups in India, Taiwan and Australia. We are really excited about next year’s events, and are doing our best to attract diverse ecosystem players and connect them with the local audience.

The post OpenStack Day Budapest – Bringing Summit Spirit to a Local Level appeared first on Aptira OpenStack Services in Australia Asia Europe.

by Marton Kiss at August 16, 2016 05:35 AM

August 15, 2016

OpenStack Superuser

App developers: here’s more on what OpenStack can do for you

If you’re looking for reasons to attend the upcoming OpenStack Summit Barcelona, fellow app developers offered a host of reasons. (And they will help convince your boss, too.)


“I’m very excited that this container journey is here and that OpenStack is embracing it,” says Lachlan Evenson, best known to Superuser readers for his eight-minute upgrade. “I’m very excited that people are on this journey with us, it's only going to make the platform better, we can really tackle real business use cases.”

Internet of things

“IoT is not about proprietary technology or specific vendor solutions, it's about the power of open source,” says Jakub Pavlik, CTO at tcp cloud. Pavlik offered a smart city demo at the keynote that tracked where conference users were heading. “The power of the community is that you can choose different pieces of OpenStack and make them work.”

We are OpenStack

“We're trying to really build this community of application developers, help them get exposed to OpenStack, to understand the OpenStack client, all the different pieces,” says Mark Collier, OpenStack Foundation COO.

Getting you from zero to hero

In addition to hands-on sessions and training opportunities offered at the Summit, the keynotes in Austin featured winners of the first official OpenStack Hackathon in Taiwan. The winning team made a muscle-movement tracking app that could have uses that include physical therapy.

“Look at what people are doing with very little money and they are making things that can change the world,” Evenson says. “I got a glimpse into the future, I had this aha! moment in the keynotes, this is amazing.”

The entire four-minute video is available on the OpenStack Foundation YouTube channel.

Get involved

  • The next OpenStack Hackathon is in September in Mexico; a series of free workshops on OpenStack will be offered before the weekend.

  • The Application Ecosystem Working Group develops training materials to help prepare community members for hackathons. To get involved, please contact OpenStack Foundation community wrangler David Flanders through his website or over IRC where his nickname is dfflanders.

  • The OpenStack API Working Group works to improve the developer experience of API users by converging the OpenStack API to a consistent and pragmatic RESTful design. You can find out more on the wiki or ask questions using the [api] tag on the openstack-dev mailing list.

Cover Photo // CC BY NC

by Superuser at August 15, 2016 10:16 PM

Boden Russell

What's new with neutron-lib 0.3.0


OpenStack neutron-lib version 0.3.0 was recently released to PyPI and contains a number of updates to API validators, constants and hacking checks.

The complete list of public API changes are summarized below (and can be viewed on github):
New API Signatures
neutron_lib.api.validators.get_validator(validation_type, default=None)
neutron_lib.api.validators.validate_integer(data, valid_values=None)
neutron_lib.api.validators.validate_subports(data, valid_values=None)
neutron_lib.constants.DHCPV6_STATEFUL = dhcpv6-stateful
neutron_lib.constants.DHCPV6_STATELESS = dhcpv6-stateless
neutron_lib.constants.IPV6_MODES = [u'dhcpv6-stateful', u'dhcpv6-stateless', u'slaac']
neutron_lib.constants.IPV6_SLAAC = slaac
neutron_lib.constants.L3_AGENT_MODE = agent_mode
neutron_lib.constants.L3_AGENT_MODE_DVR = dvr
neutron_lib.constants.L3_AGENT_MODE_DVR_SNAT = dvr_snat
neutron_lib.constants.L3_AGENT_MODE_LEGACY = legacy
neutron_lib.hacking.translation_checks.check_log_warn_deprecated(logical_line, filename)
neutron_lib.hacking.translation_checks.check_raised_localized_exceptions(logical_line, filename)
neutron_lib.hacking.translation_checks.no_translate_debug_logs(logical_line, filename)
neutron_lib.hacking.translation_checks.validate_log_translations(logical_line, physical_line, filename)

Removed API Signatures

Changed API Signatures

The above public API changes were generated using a new tool we're looking to include with neutron-lib and eventually perhaps in the change summary for each neutron-lib release.

API Validators

Two new validators were added in neutron-lib 0.3.0: validate_integer and validate_subports.

As expected, validate_integer ensures a value is in fact an integer. The implementation includes smarts to detect if the value is a str, float or bool; which are often missing in common integer validation functions. In addition, the function supports passing a list of valid_values to check for value inclusion.

As with other validator functions, validate_integer returns None if the value is valid and a str message otherwise. The string message for an invalid value is a user friendly message as to why the value is bad.

Here's a sample python snippet to showcase validate_integer

from neutron_lib.api import validators

def test_validate(validator, val, valid_values=None):
    result = validator(val, valid_values)
    print("%s(%s, %s) --> %s" % (validator.__name__, val,
                                 valid_values, result))

print("Testing valid values...")
test_validate(validators.validate_integer, 1)
test_validate(validators.validate_integer, '-9')
test_validate(validators.validate_integer, 0)
test_validate(validators.validate_integer, 7, [9, 8, 7])
test_validate(validators.validate_integer, 7, [9, 8, 7])

print("\nTesting invalid values...")
test_validate(validators.validate_integer, True)
test_validate(validators.validate_integer, False)
test_validate(validators.validate_integer, '1.1')
test_validate(validators.validate_integer, -9.98933)
test_validate(validators.validate_integer, 7, [9, 8, 6])

When run, it outputs:

Testing valid values...
validate_integer(1, None) --> None
validate_integer(-9, None) --> None
validate_integer(0, None) --> None
validate_integer(7, [9, 8, 7]) --> None
validate_integer(7, [9, 8, 7]) --> None

Testing invalid values...
validate_integer(True, None) --> 'True' is not an integer:boolean
validate_integer(False, None) --> 'False' is not an integer:boolean
validate_integer(1.1, None) --> '1.1' is not an integer
validate_integer(-9.98933, None) --> '-9.98933' is not an integer
validate_integer(7, [9, 8, 6]) --> '7' is not in [9, 8, 6]

The validate_subports validator is also new in neutron-lib 0.3.0. This validator is used as part of the vlan-aware-vms workstream currently under development. Rather than diving into the details on this one, we'll defer to the blueprint and related change sets.

Finally in the API validators space, we have some work going on to remove direct access to the neutron_lib.api.validators.validators attribute. This attribute is a dict of the currently "registered" validators known to neutron_lib.

Today, consumers add a "local" (validator function defined outside of neutron-lib) validator by directly adding it to the dict.

For example:

validators.validators['type:my_validatable_type'] = my_validator_function

In general, this is bad practice and can cause complications if we ever decide to wrap API validator access with encapsulating logic. Consumers should now use the following accessors:

get_validator(validation_type, default=None)  
add_validator(validation_type, validator)

Both of which are defined in the neutron_lib.api.validators module. We've deprecated direct access to the validators dict and plan to remove it in the OpenStack "P" release.

For more information on related changes see review 350259.
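To illustrate the accessor pattern, here is a small sketch using a local stand-in registry rather than the real neutron_lib module (the 'even' validation type and validate_even function are made up for illustration):

```python
# Minimal stand-in for the registry behind neutron_lib.api.validators;
# the real module keeps a similar dict keyed by 'type:<name>'.
_validators = {}

def add_validator(validation_type, validator):
    # Register under the 'type:' key convention used by the registry.
    _validators['type:' + validation_type] = validator

def get_validator(validation_type, default=None):
    return _validators.get('type:' + validation_type, default)

def validate_even(data, valid_values=None):
    # Hypothetical local validator: None when valid, a message otherwise.
    if isinstance(data, int) and not isinstance(data, bool) and data % 2 == 0:
        return None
    return "'%s' is not an even integer" % data

add_validator('even', validate_even)
validator = get_validator('even')
print(validator(4))  # prints None
print(validator(3))  # prints '3' is not an even integer
```

The point of going through accessors rather than the dict itself is that the registry can later grow encapsulating logic (deprecation warnings, key normalization) without breaking callers.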


Constants

The only thing overly interesting about the changes in neutron_lib.constants is the addition of the Sentinel class, which allows you to create instances that don't change, even with deepcopy(). For example:

import copy

from neutron_lib.constants import Sentinel

singleton = Sentinel()
print("deepcopy() = %s" % (copy.deepcopy(singleton) == singleton))

When run outputs:

deepcopy() = True
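A class with this behavior can be sketched in a few lines (an illustration of the technique only, not neutron-lib's actual implementation):

```python
import copy

class Sentinel(object):
    """A constant marker object that survives copy and deepcopy unchanged."""

    def __deepcopy__(self, memo):
        # deepcopy() calls this hook; returning self preserves identity.
        return self

    def __copy__(self):
        # copy() likewise hands back the same instance.
        return self

singleton = Sentinel()
print(copy.deepcopy(singleton) is singleton)  # prints True
```

Preserving identity through deepcopy matters for sentinels because code typically compares them with `is`; a copied sentinel would silently fail such checks.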

Hacking checks

A handful of new translation hacking checks have been added in the 0.3.0 release:
  • [N531] Validate that LOG messages, except debug ones, have translations
  • [N532] Validate that LOG.warning is used instead of LOG.warn; the latter is deprecated
  • [N533] Validate that debug level logs are not translated
  • [N534] Exception messages should be translated

The behavior of these hacking checks should be evident from the descriptions shown above, so I won't belabor them. These checks are all registered via neutron_lib.hacking.checks.factory() and therefore will be active by default if your project uses the neutron_lib factory function in its tox.ini.

Looking forward in this space, we hope to further solidify the hacking check interface as well as the dev-ref for its intended usage. For example: 350723

by boden ( at August 15, 2016 12:12 PM

Carlos Camacho

TripleO deep dive session #6 (Overcloud - Physical network)

This is the sixth video from a series of “Deep Dive” sessions related to TripleO deployments.

In this session Dan Prince will dig into the physical overcloud networks.

So please, check the full session content on the TripleO YouTube channel.

Sessions index:

    * TripleO deep dive #1 (Quickstart deployment)

    * TripleO deep dive #2 (TripleO Heat Templates)

    * TripleO deep dive #3 (Overcloud deployment debugging)

    * TripleO deep dive #4 (Puppet modules)

    * TripleO deep dive #5 (Undercloud - Under the hood)

    * TripleO deep dive #6 (Overcloud - Physical network)

by Carlos Camacho at August 15, 2016 12:00 PM

Newton release previews, adoption trends, and more OpenStack news

Here's what's happening this week in the world of OpenStack, the open source cloud infrastructure project: August 15 - 21, 2016.

by Jason Baker at August 15, 2016 05:00 AM

Hugh Blemings



Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here.

Basic Stats for week 8 to 14 – August 2016 for openstack-dev:

  • ~581 Messages (up about 7% relative to last week)
  • ~169 Unique threads (down about 5% compared to last week)

Traffic and threads were pretty much steady this week relative to last – a little quieter thread-wise, a few more messages.

Notable Discussions – openstack-dev

Update on the API Reference and Guide Publishing process

Anne Gentle provided an update on this important effort – in short, it’s going well, but there is more to be done. There are a few projects she specifically notes as having some further work required; I believe this needs to be done within the project code bases or websites themselves.

So if you’re involved in one of Astara, Ceilometer, Cinder, Cloudkitty, Congress, Designate, Glance, Heat, Magnum, Mistral, Monasca, Sahara, Senlin, Solum, Swift, Tacker or Trove, please take a look at Anne’s email and see if you’re able to do what is required.

OS-Capabilities Library – Continued

As mentioned last week, Jay Pipes penned a rather low-key email announcing some work he’s done on creating a new os-capabilities Python library. The thread picked up a little this week with various positive comments, a question confirming its applicability across projects (yes), and some design discussions.

The code is here; please consider getting involved in this important endeavour.

Extra ATCs for Newton

Doug Hellman writes that it’s time to ensure we have all active technical contributors (ATCs) identified for Newton.  As he explains “…Project teams should identify contributors who have had a significant impact this cycle but who would not qualify for ATC status using the regular process because they have not submitted a patch.  Contributions might include, but aren’t limited to, bug triage, design work, and documentation — there is a lot of leeway in how teams define contribution for ATC status.”

The ATC list is approved by the TC on/around 25 August and in order to make the agenda for that meeting proposals need to be submitted by 16 August – later this week.  Please take a look at Doug’s email for more details on this process if you believe you, or someone you know should be considered.

Midcycle Summaries & Minutes

No new summaries this week that I could see – in case you missed them here is the list so far collated from the last few editions of Lwood – Cinder (Kendall Nelson), Freezer (Pierre Mathieu), Glance (Nikhil Komawar), Horizon (Rob Cresswell), Keystone (Steve Martinelli), Monasca (Fabio Giannetti) and Nova Pt.I and Pt.II (Matt Riedemann)

Notable Discussions – other OpenStack lists

Nothing particularly leapt out of the other lists this week :)

Upcoming OpenStack Events

As best I can tell, no OpenStack-related events were mentioned this week. Don’t forget the OpenStack Foundation’s Events Page for a frequently updated list of general events.

People and Projects

Requirements PTL Election result

  • Anita Kuno confirmed the results of the Requirements PTL Election welcoming Tony Breeds into the role and noting an impressive participation rate in this poll.  Full results are available here.

Core nominations & changes

New, Proposed and Changed OpenStack Projects

A new section I’m trying out this edition – a list of projects that are seeking formal OpenStack project status, projects that have been confirmed as such and/or projects that are changing names (!)

  • Proposed new project – [Storlets] – Eran Rom.  This follows Eran’s email from last week nominating for PTL of same
  • Project name change – [Smaug] is now [Karbor] – Saggi Mizrahi

Further Reading & Miscellanea

Don’t forget these excellent sources of OpenStack news – most recent ones linked in each case

No particular tunes involved in this edition of Lwood I’m afraid, though was fortunate enough to see both REO Speedwagon and Def Leppard live last night – a cracker of a show it was too :)

Last but by no means least, thanks, as always, to Rackspace :)

by hugh at August 15, 2016 01:09 AM

August 12, 2016

Steve Hardy

TripleO Deploy Artifacts (and puppet development workflow)

For a while now, TripleO has supported a "DeployArtifacts" interface, aimed at making it easier to deploy modified/additional files on your overcloud, without the overhead of frequently rebuilding images.

This started out as a way to enable faster iteration on puppet module development (the puppet modules are by default stored inside the images deployed by TripleO, and generally you'll want to do development in a git checkout on the undercloud node), but it is actually a generic interface that can be used for a variety of deployment time customizations.

Ok, how do I use it?

Let's start with a couple of usage examples, making use of some helper scripts that are maintained in the tripleo-common repo (in future similar helper interfaces may be added to the TripleO CLI/UI, but right now this is more targeted at developers and advanced operator usage).

First clone the tripleo-common repo (you can skip this step if you're running a packaged version which already contains the following scripts):

[stack@instack ~]$ git clone

There are two scripts of interest: firstly, a generic script that can be used to deploy any kind of file (aka artifact), tripleo-common/scripts/upload-swift-artifacts, and secondly a slightly modified version which optimizes the flow for deploying directories containing puppet modules, called tripleo-common/scripts/upload-puppet-modules.
To make using these easier, I append this to my .bashrc:

export PATH="$PATH:/home/stack/tripleo-common/scripts"


Example 1 - Deploy Artifacts "Hello World"

So, let's start with a really simple example. First, let's create a tarball containing a single /tmp/hello file:

[stack@instack ~]$ mkdir tmp
[stack@instack ~]$ echo "hello" > tmp/hello
[stack@instack ~]$ tar -cvzf hello.tgz tmp

Now, we simply run the upload-swift-artifacts script, accepting all the default options other than to pass a reference to hello.tgz

[stack@instack ~]$ upload-swift-artifacts -f hello.tgz
Creating heat environment file: /home/stack/.tripleo/environments/deployment-artifacts.yaml
Uploading file to swift: hello.tgz
Upload complete.

There are currently only two supported file types:

  •     A tarball (will be unpacked from / on all nodes)
  •     An RPM file (will be installed on all nodes)

Taking a look inside the environment file the script generated, we can see it's using the DeployArtifactURLs parameter, and passing a single URL (the parameter accepts a list of URLs).  This happens to be a swift tempurl, created by the upload-swift-artifacts script but it could be any URL accessible to the overcloud nodes at deployment time.

[stack@instack ~]$ cat /home/stack/.tripleo/environments/deployment-artifacts.yaml
# Heat environment to deploy artifacts via Swift Temp URL(s)
parameter_defaults:
  DeployArtifactURLs:
    - ''

This environment file is automatically generated by the upload-swift-artifacts script, and put into the special ~/.tripleo/environments directory.  This directory is read by tripleoclient and any environment files included here are always included automatically (no need for any -e options), but you can also pass a --environment option to upload-swift-artifacts if you prefer some different output location (e.g so it can be explicitly included in your overcloud deploy command).

Testing this example, you simply do an overcloud deployment, no additional arguments are needed if you use the default .tripleo/environments/deployment-artifacts.yaml environment path:

[stack@instack ~]$ openstack overcloud deploy --templates

Then check on one of the nodes for the expected file (note the tarball is unpacked from / in the filesystem):

[root@overcloud-controller-0 ~]# cat /tmp/hello
hello

Note that the deploy artifact files are written to all roles; currently there is no way to deploy, e.g., only to Controller nodes. We might consider an enhancement that allows role-specific artifact URL parameters in future should folks require it.

Hopefully despite the very simple example you can see that this is a very flexible interface - you can deploy a tarball containing anything, e.g even configuration files such as policy.json files to the nodes.

Note that you have to be careful though - most service configuration files are managed by puppet, so if you attempt to use the deploy artifacts interface to overwrite puppet-managed files it will not work - puppet runs after deploy artifacts are created (this is deliberate, as you will see in the next example), so you must use puppet hieradata to influence any configuration managed by puppet. (In the case of policy.json files, there is a puppet module that handles this, but currently TripleO does not use it - this may change in future though.)

Example 2 - Puppet development workflow

There is coupling between tripleo-heat-templates and the puppet modules it interfaces with (and in particular with the puppet profiles that exist in puppet-tripleo, as discussed in my composable services tutorial recently), so a common pattern for a developer is:

  1. Modify some puppet code
  2. Modify tripleo-heat-templates to match the new/modified puppet profile
  3. Deploy an overcloud
  4. *OH NO* it doesn't work!
  5. Debug the issue (hint, "openstack stack failures list overcloud" is a super-useful new heatclient command which helps a lot here, as it surfaces the puppet error in most cases)
  6. Make coffee; goto (1) :)

Traditionally for TripleO deployments, all puppet modules (including the puppet-tripleo profiles) have been built into the image we deploy (stored in Glance on the undercloud), so one missing step above is getting the modified puppet code into the image. There are a few options:

  • Rebuild the image every time (this is really slow)
  • Use virt-customize or virt-copy-in to copy some modifications into the image, then update the image in glance (this is faster, but it still means you must redeploy the nodes every time and it's easy to lose track of what modifications have been made).
  • Use DeployArtifactUrls to update the puppet modules on the fly during the deployment!

This last use-case is actually what prompted the implementation of the DeployArtifacts interface (thanks Dan!), and I'll show how it works below:

First, we clone one or more puppet modules to a local directory - note that the name of the repo, e.g. "puppet-tripleo", does not match the name of the deployed directory (on the nodes it's /etc/puppet/modules/tripleo), so you have to clone it to the "tripleo" directory.

mkdir puppet-modules
cd puppet-modules 
git clone tripleo 

Now you can make whatever edits are needed, pull under review code (or just do nothing if you want to deploy latest trunk of a given module).  When you're ready you run the upload-puppet-modules script:

upload-puppet-modules -d puppet-modules

This works a little differently to the previous upload-swift-artifacts script: it takes the directory and creates a tarball using tar's --transform option, which rewrites the prefix from /somewhere/puppet-modules to /etc/puppet/modules.

The process after we create the tarball is exactly the same - we upload it to swift, get a tempurl, and create a heat environment file which references the location of the tarball.  On deployment, the updated puppet modules will be untarred and this always happens before puppet runs, which makes the debug workflow above much faster, nice!
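The effect of that prefix rewrite can be sketched with Python's tarfile module (an illustration only; the helper name below is made up, and the actual script shells out to tar):

```python
import os
import tarfile

def make_puppet_tarball(src_dir, out_path, prefix="etc/puppet/modules"):
    # Re-root every module directory under etc/puppet/modules/ inside
    # the tarball, so untarring from / lands modules where puppet looks.
    with tarfile.open(out_path, "w:gz") as tar:
        for name in sorted(os.listdir(src_dir)):
            tar.add(os.path.join(src_dir, name),
                    arcname=os.path.join(prefix, name))

# Example: make_puppet_tarball("puppet-modules", "puppet-modules.tgz")
# produces a tarball whose members start with etc/puppet/modules/.
```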

NOTE: There is one gotcha here - upload-puppet-modules creates a differently named environment file ($HOME/.tripleo/environments/puppet-modules-url.yaml) to upload-swift-artifacts by default, and their content is conflicting - if both environment files exist, one will be ignored as they will get merged together. (This is something we can probably improve in future when this heat feature lands, but right now the only option is to stick to one script or the other, or accept manually merging the environment files to append rather than overwrite the DeployArtifactUrls parameter.)

So how does it work?

Deploy Artifacts Overview

So, it's actually pretty simple, as illustrated in the diagram above

  • A tarball is created containing the files you want to deploy to the nodes
  • This tarball is uploaded to swift on the undercloud
  • A Swift tempurl is created, so the tarball can be accessed using a signed URL (no credentials needed in the nodes to access)
  • A Heat environment passes the Swift tempurl to a nested stack "deploy-artifacts.yaml", which defines a DeployArtifactUrls parameter (which is a list)
  • deploy-artifacts.yaml defines a Heat SoftwareConfig resource, which references a shell script that can download files from a list of URLs, check the file type and do something (e.g in the case of a tarball, untar it!)
  • The deploy-artifacts SoftwareConfig is deployed inside the per-role "PostDeploy" template, which is where we perform the puppet steps (5 deployment passes which apply puppet in a series of steps). 
  • We use the heat depends_on directive to ensure that the DeployArtifacts deployment (ControllerArtifactsDeploy in the case of the Controller role) always runs before any of the puppet steps.
  • This pattern is replicated for all roles (not just the Controller as in the diagram above)

As you can see, there are a few steps to the process, but it's pretty simple and it leverages the exact same Heat SoftwareDeployment patterns we use throughout TripleO to deploy scripts (and apply puppet manifests, etc).
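The fetch-and-dispatch behaviour of that SoftwareConfig script can be sketched in Python (a sketch under assumptions: the function name is invented, and the real TripleO hook is a shell script that handles exactly these two file types):

```python
import os
import subprocess
import tarfile
import tempfile
import urllib.request

def deploy_artifact(url, dest="/"):
    # Download the artifact to a temporary file, then dispatch on type:
    # tarballs are unpacked from dest (/ on real nodes), RPMs installed.
    fd, local = tempfile.mkstemp()
    os.close(fd)
    try:
        urllib.request.urlretrieve(url, local)
        if tarfile.is_tarfile(local):
            with tarfile.open(local) as tar:
                tar.extractall(path=dest)
        elif url.endswith(".rpm"):
            subprocess.check_call(["rpm", "-Uvh", local])
        else:
            raise ValueError("unsupported artifact type: %s" % url)
    finally:
        os.unlink(local)
```

Signed Swift tempurls fit this model well: the node needs no credentials, just the ability to fetch the URL before puppet runs.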

by Steve Hardy ( at August 12, 2016 10:20 PM

Tesora Corp

Short Stack: The Benefits and Approach to OpenStack Cloud, How OpenStack Makes Python Better, and an Analysis of the State of OpenStack

Welcome to the Short Stack, our regular feature where we search for the most intriguing OpenStack news. These links may come from traditional publications or company blogs, but if it’s about OpenStack, we’ll find the best ones.

Here are our latest links:

The Benefits and Approach to OpenStack Cloud | OpenStack Superuser

Sorabh Saxena discussed several ways that AT&T is using OpenStack in their aggressive network transformation. Saxena outlined four main benefits in going the way of private or hybrid cloud: reduced cost, improved security, addressing unique business needs, and new products and services. The AT&T team presented these findings and more at this week’s OpenStack Silicon Valley event.

The Five-Minute CIO: Jonathan Bryce | Silicon Republic

John Kennedy interviewed Jonathan Bryce, the Executive Director of the OpenStack Foundation, in this week’s “Five-Minute CIO” installment. At a recent OpenStack Days event in Dublin, Bryce profiled the OpenStack community and offered his commentary on the roadmap for OpenStack. Bryce also addressed some of the biggest issues facing OpenStack today.

How OpenStack Makes Python Better and Vice Versa |

Doug Hellmann summarized his presentation at EuroPython 2016 with co-presenter Thierry Carrez, the chair of the OpenStack Technical Committee. The presentation, titled “How OpenStack Makes Python Better, and Vice-Versa”, explained the reasoning behind the decision to use Python and both Hellmann’s and Carrez’s perspectives on how the Python and OpenStack communities can benefit each other.

Midokura Enhances OpenStack Capabilities in SDN Offering | EWeek

Midokura announced this week that they will be offering the ability to connect multiple OpenStack-based clouds as well as support for container platforms in the latest version of their network virtualization offering. The upgrades to the Midokura Enterprise MidoNet (MEM) software-defined networking solution were designed specifically to meet the growing demands on networks.

Analysis of the State of OpenStack- OpenStack Silicon Valley | SiliconAngle

Before this week’s OpenStack Silicon Valley event, John Furrier discussed the state of OpenStack today and how far it has come since its inception. He also touched on some of the most recent user survey highlights, the OpenStack business model, and the most successful use cases.

The post Short Stack: The Benefits and Approach to OpenStack Cloud, How OpenStack Makes Python Better, and an Analysis of the State of OpenStack appeared first on Tesora.

by Alex Campanelli at August 12, 2016 07:44 PM

Andreas Jaeger

Document Binary Package Dependencies - not only for OpenStack Python Packages

Python developers record their dependencies on other Python packages in requirements.txt and test-requirements.txt. But some packages have dependencies outside of Python, and we should document these dependencies as well so that operators, developers, and CI systems know what needs to be available for their programs.

Bindep is a solution to this: it allows a repo to document binary dependencies in a single file. It even enables specification of which distribution a package belongs to - Debian, Fedora, Gentoo, openSUSE, RHEL, SLES and Ubuntu have different package names - and allows profiles, like a test profile.

Bindep is one of the tools the OpenStack Infrastructure team has written and maintains. It is already in use by over 130 repositories.

For better bindep adoption, in the just released bindep 2.1.0 we have changed the name of the default file used by bindep from other-requirements.txt to bindep.txt and have pushed changes to master branches of repositories for this.

Projects are encouraged to create their own bindep files. Besides documenting what is required, this also speeds up test runs, since you install only what you need rather than all the packages some other project might need, which are installed by default. Each test system comes with a basic installation, and then we either add the repo-defined package list or the large default list.

In the OpenStack CI infrastructure, we use the "test" profile for installation of packages. This allows projects to document their run time dependencies - the default packages - and the additional packages needed for testing.
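For illustration, a small bindep.txt might look like the following hedged sketch (the package names are examples, not a recommendation; the bracketed platform selectors and the test profile are part of bindep's syntax):

```
# Build dependency, with distro-specific package names
libffi-devel [platform:rpm]
libffi-dev [platform:dpkg]
# Needed everywhere
gettext
# Only installed when the "test" profile is requested
graphviz [test]
```

Running `bindep test` against such a file lists any missing packages for the test profile on the current platform.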

Be aware that bindep is not used by devstack based tests, those have their own way to document dependencies.

A side effect is that your tests run faster, since they have less packages to install. A Ubuntu Xenial test node installs 140 packages and that can take between 2 and 5 minutes. With a smaller bindep file, this can change.

Let's look at the log file for a normal installation with using the default dependencies:
2 upgraded, 139 newly installed, 0 to remove and 41 not upgraded
Need to get 148 MB of archives.
After this operation, 665 MB of additional disk space will be used.

Compare this with the openstack-manuals repository, which uses bindep - this run took 20 seconds rather than minutes:
0 upgraded, 17 newly installed, 0 to remove and 43 not upgraded.
Need to get 35.8 MB of archives.
After this operation, 128 MB of additional disk space will be used.

If you want to learn more about bindep, read the Infra Manual on package requirements.
If you have questions about bindep, feel free to ask the Infra team on #openstack-infra.
Thanks to Anita for reviewing and improving this blog post and to the OpenStack Infra team that maintains bindep, especially to Jeremy Stanley and Robert Collins.

by Andreas Jaeger ( at August 12, 2016 05:33 PM

Chris Dent

Remote Literacy

Someone recently pointed out Working remotely, a set of ideas from someone who works at Hypothesis where remote working is the norm. The first idea is Read, write everything down, repeat.

It reminds me of two things:

  • The single most cogent piece of advice I've ever seen on how to do remote collaboration well, especially in mixed environments (some colleagues are in an office, others are not): Behave as if at least one member of the team is not only not present in the office but is also deaf.

    I've lost the reference to this advice. If you know it, please leave a comment so I can link to it.

    This advice is the equivalent of "read, write everything down, repeat" but makes one of the reasons why rather stark: Unless you are using asynchronous and persistent media as your primary form of communication and memory, at least one person and probably more will be left out.

  • Different environments value, and thus encourage or discourage, reading and writing differently. I've learned that gauging the value an environment places on literacy is a useful metric for determining the health of some kinds of collaboration, the distribution of power, and whether I will find it easy to exercise my own power (which depends on the opportunity to spend time in reflective thought).

These issues are relatively easy to deal with in private work environments or small open source projects, compared with large open source projects like OpenStack (where I happen to spend all my remoting time lately). In large communities the diversity of skills, languages, and predilections makes enforcing "write everything down" pretty much impossible. Especially when people believe that IRC logs and the record of comments in gerrit count as writing.

They do not. In the context of the above guidelines, writing is a conscious and reflective act where time is taken to digest and then summarize what has come before. This takes more time and effort than reacting quickly (IRC) or responding to details (gerrit).

It's no wonder, however, that in OpenStack the preferred media are reactive: They are fast and there's both too much to do and too much danger of wandering into an infinite bikeshed to regularly warrant using something a bit more considered and considerate.

But that's a trap. One reason bikesheds are common is that reactive conversations can be less effective at moving knowledge forward (or to put it another way: less good at building shared understanding):

  • It's possible to generate a ton of information over and over again but unless it is digested via thoughtful reflection it doesn't turn into the internal knowledge that is the source of new ideas and other progress. Reactive communication requires so much engagement (to keep up and participate) that there is little time for reflection.

  • Without that reflection it is hard to have a proper dialog, one that becomes a real dialectic, that leads to progress and/or synthesis.

So, for at least some of the threatening bikesheds, the best way through is to pay the price of extensive, thoughtful communication; the most inclusive (because anyone at any time can "hear" it) and effective (because it engenders synthesis) form of that communication is writing.

(If your preferred answer to these problems in OpenStack is "that's why we have summits and mid-cycles", please read this again and remember vast numbers of participants can't or won't go to those.)

by Chris Dent at August 12, 2016 11:00 AM

August 11, 2016

Julio Villarreal Pelegrino

Architecting your first OpenStack cloud.


OpenStack is the most popular open-source cloud computing platform to build and manage infrastructure-as-a-service (IaaS) environments. Since its inception in 2010 by Rackspace Hosting and NASA, this project has gained popularity and positioned itself as a reliable alternative to proprietary IaaS platforms. Now under the management of the OpenStack Foundation, and with more than 30,000 individual members from over 170 countries around the world, OpenStack is thriving and increasing its presence in enterprise IT environments.

In this article, I will be sharing best practices for the adoption of OpenStack as your IaaS platform and the necessary steps to take while building your cloud. In my experience, thorough planning is an essential ingredient for a successful cloud initiative.

Identify your Business Objectives

Adoption in an enterprise implies an investment of people and money. A company should never adopt a new technology or software just because it is “cool” or “trendy”. I have seen projects fail because the decision was made without considering the use cases and business objectives.
As the first rule of adopting OpenStack, you should ask yourself two vital questions:

  • What are my business drivers for adopting OpenStack?
  • Which use cases will OpenStack address?

By doing this, you will know if it makes sense to perform the necessary investment to run an OpenStack cloud.

Identifying your workloads, the “Pets vs. Cattle” dilemma on OpenStack

After a decision has been made to adopt OpenStack, the next step should be to identify the workloads to run on it.
In the cloud computing world, the “Pets vs. Cattle” analogy is often used to classify workloads and decide whether they should run on a cloud platform.
In this analogy, “pets” are unique and valuable applications, usually monolithic. “Cattle” workloads, on the other hand, are mostly identical and commodity-like, with multiple “cattle” instances forming one application.
Some cloud architects will tell you that OpenStack is designed to run only “cattle” workloads. Even though this is mostly true, OpenStack has matured and evolved since its creation.
With the right architecture and the correct operational practices and procedures, you should be able to run both types of workloads.
Nevertheless, it is critical that you identify the type of workloads that will be running on your cluster, as this will dictate the architecture and operational practices to consider during and after implementation.


After you have identified the workloads to move to the OpenStack cloud, you should have a comprehensive list that includes: type, disk, CPU, memory, and criticality. An example is shown in the table below:

(Table image: example workload inventory with type, disk, CPU, memory, and criticality)

In OpenStack, as with other IaaS clouds, it is a good idea to start small and grow as needed. To do this, it is important to know, from a capacity planning point of view, what capacity is required on day one and what the expected growth rate is. This will determine the size of your initial cloud deployment and also when the cluster should be expanded.
Sizing should be considered for four main components of your cluster:

  • Compute requirements
  • Memory requirements
  • Storage requirements
  • Network requirements

These requirements will determine the number of compute nodes (CPU/memory), storage size, and network throughput.
But this is only half of the equation; you will also need to plan the size of your OpenStack control plane. This planning is influenced by the compute requirements, the number of users, the number of instances, the network type, and availability (SLA).
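As a rough illustration of this kind of capacity planning, the sketch below estimates a compute node count from aggregate demand. All figures here (overcommit ratios, reserved memory, and the example demand) are illustrative assumptions, not recommendations:

```python
import math

def compute_nodes_needed(total_vcpus, total_ram_gb,
                         cores_per_node, ram_per_node_gb,
                         cpu_overcommit=4.0, ram_overcommit=1.0,
                         reserved_ram_gb=8):
    """Estimate compute node count from aggregate workload demand.

    cpu_overcommit: virtual-to-physical CPU ratio (4:1 is a common default).
    reserved_ram_gb: memory held back per node for the hypervisor itself.
    """
    nodes_for_cpu = math.ceil(total_vcpus / (cores_per_node * cpu_overcommit))
    usable_ram = (ram_per_node_gb - reserved_ram_gb) * ram_overcommit
    nodes_for_ram = math.ceil(total_ram_gb / usable_ram)
    # The binding constraint (CPU or RAM) dictates the node count.
    return max(nodes_for_cpu, nodes_for_ram)

# Example: 400 vCPUs and 1.5 TB of RAM requested, on 32-core / 256 GB nodes.
print(compute_nodes_needed(400, 1536, 32, 256))  # RAM-bound: 7 nodes
```

In this example RAM, not CPU, is the binding constraint, which is a common outcome once CPU overcommit is applied.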

Hardware selection and storage backend

Part of the planning stage will include the selection of the hardware. Most of my customers use enterprise class servers from vendors like Dell, HP, Cisco, and Lenovo. It is important that you choose a provider that will give you an excellent balance of cost, features, and the support that you need.
My recommendation for your first OpenStack cloud is to select different server hardware based on cluster roles, e.g.,

  • Control plane: enterprise class servers.
  • Compute nodes: commodity servers.
  • Storage nodes: commodity servers.

Another important choice is the selection of the storage backend for your cluster. Two of the most used and traditional storage backends for OpenStack are Ceph and NFS. For both, you could use commodity hardware, or you could use your existing NAS/SAN if it has a supported Cinder driver. You could also use a combination of storage backends, based on different use cases and/or functions like:

  • Image storage (Glance)
  • Volume storage (Cinder)
  • Ephemeral instance storage (Nova)
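As a hedged sketch of how multiple backends can coexist for the volume service, the cinder.conf fragment below follows the standard multi-backend layout (the upstream RBD and NFS driver paths are real, but the pool and share names are illustrative):

```ini
[DEFAULT]
enabled_backends = ceph,nfs

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes

[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = nfs
nfs_shares_config = /etc/cinder/nfs_shares
```

Each backend is then exposed to users through a volume type tied to its volume_backend_name.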

Network planning

I like to divide network planning for OpenStack clouds into two parts:

  • Cluster infrastructure network
  • Tenant network

Cluster Infrastructure Network

This is the network that will connect the different cluster nodes and provide network isolation for your cloud. When planning this network there are three things to consider:

  • Number of networks and isolation type
  • Number of NICs per server and uplink speed
  • NIC layout (teaming or bonding requirements)

Here is an example that illustrates the cluster infrastructure network for a production-ready OpenStack cloud:

(Figure: cluster infrastructure network for a production-ready OpenStack cloud)

Tenant Network

Because OpenStack is a multi-tenant platform, multiple users should be able to share the same cluster, using shared compute, storage, and networking resources, without being aware that other users reside on the same platform.
In OpenStack, the tenant network provides network isolation between projects (tenants). This is possible because Neutron provides each tenant with its own networks, using either VLAN segregation or overlay networks based on VXLAN or GRE tunneling.
Selecting the technology used to provide tenant isolation is a major step in the design. This decision will impact not only the cluster configuration but also the network infrastructure design.
There are four different types of tenant networks:

  • Flat: Instances reside on the same network, which can also be shared with the hosts. No VLAN tagging or other network segregation takes place.
  • Local: Instances reside on the local compute host and are effectively isolated from any external networks.
  • VLAN: Allows users to create multiple provider or tenant networks using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical network.
  • Tunneling (VXLAN and GRE): Use network overlays to support private communication between instances. A Networking router is required to enable traffic to traverse outside of the GRE or VXLAN tenant network.

There is also another tenant network category named provider networks. These networks map to existing physical networks in the data center. Useful types in this category are flat (untagged) and VLAN (802.1Q tagged).

For more details visit:

Of the above, you should use VLAN or VXLAN/GRE tenant networks, and for your first OpenStack cluster I recommend tunneling-based tenant networks. This method simplifies the installation and configuration, and reduces the complexity of the cloud.
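A minimal, hedged sketch of what a tunneling-based tenant network setup looks like in the Neutron ML2 plugin configuration (ml2_conf.ini; the option names are the standard ML2 ones, but the VNI range is illustrative):

```ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan

[ml2_type_vxlan]
# Range of VXLAN network identifiers available to tenants
vni_ranges = 1:1000

[agent]
tunnel_types = vxlan
```

With this in place, networks created by tenants are backed by VXLAN overlays, while flat and VLAN type drivers remain available for provider networks.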

Basic OpenStack architecture

The diagram below illustrates a basic OpenStack architecture, which could comprise the following OpenStack services and cluster components.

OpenStack Services:

  • OpenStack Compute (Nova)
  • OpenStack Networking (Neutron) and Open vSwitch
  • OpenStack Image Service (Glance)
  • OpenStack Identity (Keystone)
  • OpenStack Dashboard (Horizon)
  • OpenStack Volume Service (Cinder)
  • MariaDB
  • RabbitMQ

Cluster components:

  • 1 x OpenStack Controller
  • 2 x Compute nodes
  • NFS backend for Cinder and Glance

(Figure: basic OpenStack architecture diagram)

Highly Available OpenStack architecture

The diagram below illustrates a Highly Available (HA) OpenStack architecture, which could comprise the following OpenStack services and cluster components.

OpenStack Services:

  • OpenStack Compute (Nova)
  • OpenStack Networking (Neutron) and Open vSwitch
  • OpenStack Image Service (Glance)
  • OpenStack Identity (Keystone)
  • OpenStack Dashboard (Horizon)
  • OpenStack Volume Service (Cinder)
  • MariaDB
  • RabbitMQ

Cluster components:

  • 1x OpenStack installer (Red Hat OpenStack Platform director)
  • 3 x OpenStack Controllers on a High Availability (HA) configuration
  • 2 x Compute nodes.

Ceph storage cluster as a backend for cinder, glance, and ephemeral storage.

  • 3 Ceph Storage Monitors
  • 3 Ceph Storage OSD Servers
  • 3 Storage pools:
    • Ephemeral (nova)
    • Image (Glance)
    • Volume (cinder)

In this case, we are using Ceph as the storage provider for OpenStack, but you could substitute NFS storage, and for the volume service (Cinder) any SAN/NAS that has a certified driver.

(Figure: highly available OpenStack architecture diagram)

Now you should be ready to architect and deploy your first OpenStack cloud. Good luck on your journey.

by Julio Villarreal Pelegrino at August 11, 2016 11:41 PM


OpenStack Days Silicon Valley 2016 (The Unlocked Infrastructure Conference) Day 2

The post OpenStack Days Silicon Valley 2016 (The Unlocked Infrastructure Conference) Day 2 appeared first on Mirantis | The Pure Play OpenStack Company.

The second day of  OpenStack Days Silicon Valley continued with conversations about containers and the processes of managing OpenStack. If you missed the event and the live stream, no worries; here are the highlights.

Christian Carrasco – When OpenStack Fails. (Hint: It’s not the Technology)

Christian, a cloud advisor at Tapjoy, started off the day by sharing what Tapjoy has learned from working with OpenStack. Tapjoy is a SaaS player that set its sights on OpenStack early on. The company has grown to be the leading player in the mobile app monetization space, with more than two million daily engagements, 270,000 active apps, and 500 million users.

However, interestingly, most of the lessons Christian has learned while working at Tapjoy have less to do with the technology or maturity underpinning OpenStack or the many components necessary to its deployment. Instead, they revolve around the people, process, and organizational choices that are necessary for your OpenStack cloud to succeed.

Christian urged the audience to stop focusing on building a better buggy, and to instead focus on making a better cloud—the next generation of cloud. Christian argued that before we can hyper-converge the cloud, we need interoperability standards, arguing that there were many industries that couldn’t have existed without standards, such as the internet, the automobile industry, and healthcare.

Luke Kanies – DevOps: Myths vs. Realities

Next up was Luke Kanies, the founder and CEO of Puppet. Luke spoke to the audience about the myths that surround DevOps in the enterprise, and argued that we need to leave behind the old way of delivering software to adopt the new world of DevOps practices.

Luke made it clear just why: top performing DevOps teams deploy 200 times more often and recover from failure 24 times faster, he said.

Luke argued that the fears companies have about adopting DevOps practices are due to two beliefs. First, that certain practices just won’t work for an organization, due to factors such as legacy environments, traditional enterprises, or hierarchical organizations. The second belief, he said, is that DevOps practices are simply unworkable when enterprises are subject to a host of external regulatory and compliance requirements.

Luke said that most organizations (97-98 percent) that had fears about introducing DevOps practices had legacy issues, but he argued that ignoring those legacy issues undermined their work.

Luke ended his talk by discussing how to overcome misconceptions by dispelling the most common myths. He said that adopting DevOps practices didn’t have to be all or nothing, they could be simpler than it appeared, and that often the largest returns come from unexpected areas. Ultimately, he argued, you have a choice—do you want to start using DevOps practices, or would you prefer for your competitors to beat you to it?

James Staten – Hybrid Cloud is About the Apps, Not the Infrastructure

James Staten, Microsoft’s Chief Strategist for the Cloud and Enterprise division, was next up on stage to talk about building and deploying true enterprise cloud apps.

James said the key to this is understanding how to blend your environments, as leading enterprise examples of cloud computing are not exclusively private or exclusively public cloud deployments, but are instead a mixture of both plus multiple public clouds. He said that even Microsoft runs on a hybrid cloud.

James argued that the hybrid cloud is here to stay, and not just because of the legacy code that can’t move anywhere (let alone to the cloud). He pointed to statistics that showed 74 percent of enterprises believe a hybrid cloud will enable business growth, and 82 percent have a hybrid cloud strategy (up from 74 percent a year ago).

He said that organizations used to be worried about application integration, security, and data sovereignty when considering moving apps to a public cloud. However, now organizations say they don’t use public clouds because of needing compute on premises, the Internet of Things, optimization of economics, and wanting to leverage the right resources in the right places.

James ended his session by outlining new hybrid models with many elements, including local resources, public clouds, and SAAS apps and microservices. He said that hybrid isn’t just about location, but the programming languages, devices, and operating systems. He said that the apps we are building need to have compute capability everywhere, because a hybrid cloud is about the apps you are designing.

Alex Williams, Frederic Lardinois, Craig Matsumoto, Mitch Wagner – Open Source and the News Media

The first panel discussion of the day was about Open Source and the News Media, with four technology journalists: Alex Williams (founder of The New Stack), Frederic Lardinois (writer for TechCrunch), Craig Matsumoto (Managing Editor at SDxCentral), and Mitch Wagner (Editor, Enterprise Cloud, for Light Reading).

Key takeaway: People are often confused by messages coming from open source projects and companies that build products and services using them, and acronyms, clever names, and not-for-profit foundations had further contributed to this confusion. Alex said that some people would say that cloud service providers are the greatest threat to open source.

In addition, the panelists discussed the difficulties they had with tracking and learning all of the players and their interests in the Open Source movement.

Kim Bannerman (Director, Advocacy & Community – Office of the CTO, Blue Box), Kenneth Hui (Senior Technical Marketing Manager, Rackspace), Patrick Reilly (Founder and Former CEO of Kismatic) — All Open Source Problems Solved in This Session

Next, Kim Bannerman, Kenneth Hui, and Patrick Reilly, a group of open source veterans, discussed critiques that are common for open source projects and looked at how to address them.

Patrick pointed out that OpenStack really is a community, and to have OpenStack work better you really need to participate. He argued that if you have a complaint, you should follow up and work to fix those issues.

Interestingly, the panel discussed the idea that often criticisms about open source projects, such as their governance, roadmap, and focus, are often just the downsides of advantages open source provides: transparency, inclusiveness, and agility.

Jonathan Donaldson (VP & GM, Software Defined Infrastructure at Intel) — The Future of OpenStack Clouds

Following the two panel discussions, Jonathan Donaldson of Intel and Craig McLuckie from Google talked to us about their collaboration and the future of OpenStack clouds.

Craig said that Google wants to be an enterprise software company, using OpenStack, because the market is too big to ignore. However, Craig said that Google is pretty behind.

During the talk, Jonathan discussed Intel’s Cloud for All initiative, which began last year when Intel started heavily investing in the OpenStack platform in an effort to improve OpenStack for the enterprise and to speed up its rate of adoption around the world. He said that Intel cares so much about a cloud for all because fostering innovation leads to use cases and creates value.

This has led to Intel and the broader community making OpenStack production-ready for enterprise workloads. He said that this has led to new features and significantly lower barriers for businesses that want to deploy private and hybrid clouds.

Randy Bias (VP of Technology, EMC), Sean Roberts (Director Technical Program Management, Walmart Labs), Mike Yang (GM of Quanta Cloud Technology) – The State of OpenStack on Commodity Hardware

The first discussion of the final session was about OpenStack and commodity hardware. In the early days of OpenStack, open cloud software with “open” or commodity hardware was seen as a perfect match.

One question the panel discussed was whether BOMs that mix commodity and proprietary components were the norm, or whether pre-integrated and fully commodity BOMs with components from one manufacturer were more popular.

Randy pointed out that the bottom line is that open or commodity hardware is not free, as it still takes skill to deploy. He argued that while it eventually will be easy to deploy open hardware, it’s not there yet.

Adrian Cockcroft (Battery Ventures Technology Fellow), Boris Renski (Mirantis Co-Founder and CMO) – Infrastructure Software is Dead… Or is it?

Next up Boris Renski from Mirantis and Adrian Cockcroft, a Battery Ventures Technology Fellow, conversed about Boris’ premise that Infrastructure software is dead.

As the two discussed the cloud revolution, Boris argued that the cloud revolution isn’t just about software, but also the delivery model, and that the delivery model for enterprise on-premises software has changed radically. Adrian agreed, adding that traditional hardware and software procurement cycles have collapsed with the cloud.

The two finished their talk by discussing the future of OpenStack. They said its future will not be in making the most “enterprise ready” software, but in building models for delivering customer outcomes that move the needle. Adrian said that he believed that unless you had very specialized or very large scale workloads, there is no competitive advantage to having your own data center.

Michael Miller (President of Strategy, Alliances and Marketing, SUSE) – OpenStack Past, Present and Future

To wrap up the conference, Michael Miller from SUSE discussed OpenStack’s journey from inception to the present and shared some thoughts on what to expect next, discussing just how quickly enterprise IT is now adopting OpenStack, despite initial apprehensions.

The Last Word

From hallway conversations, to expert commentary, to the swarms of people who were visiting sponsor booths, the OpenStack Days Silicon Valley conference was a great success in getting people talking not about whether OpenStack was a success—that part’s a given—but why. Users were talking about where OpenStack fits in, how it’s still important for enterprise workloads, and how to most efficiently leverage new technologies such as containers.

So here’s our question to you: what do you think we’ll be talking about next year?



by Jodi Smith at August 11, 2016 09:13 PM

OpenStack Superuser

Using OpenStack Shade to interact with all your clouds

Shade is a simple client library for interacting with OpenStack clouds.

To highlight how simple yet powerful it is, David Flanders, community wrangler at the OpenStack Foundation and Bruno Morel, software developer director at Internap put it to work in this video tutorial.

In under 50 minutes, they show how a group of researchers might access an application from different data centers around the world using Shade. For the tutorial, the pair use an app based on the My First App guide that creates fractals.

The video also shows how to create a security group applied to the instances launched on multiple clouds across the globe, from Amsterdam to San Jose, California.


The Application Ecosystem Working Group develops training materials to help prepare community members for hackathons. To get involved, please contact David Flanders through his website or over IRC where his nickname is dfflanders.

Cover Photo // CC by NC

by Superuser at August 11, 2016 08:02 PM

Adam Young

Tripleo HA Federation Proof-of-Concept

Keystone has supported identity federation for several releases. I have been working on a proof-of-concept integration of identity federation in a TripleO deployment. I was able to successfully login to Horizon via WebSSO, and want to share my notes.

A federation deployment requires changes to the network topology, Keystone, the HTTPD service, and Horizon. The various OpenStack deployment tools will have their own ways of applying these changes. While this proof-of-concept can’t be called production-ready, it does demonstrate that TripleO can support Federation using SAML. From this proof-of-concept, we should be able to deduce the steps needed for a production deployment.


  • Single physical node – Large enough to run multiple virtual machines.  I only ended up using 3, but scaled up to 6 at one point and ran out of resources.  Tested with 8 CPUs and 32 GB RAM.
  • Centos 7.2 – Running as the base operating system.
  • FreeIPA – Particularly, the CentOS repackage of Red Hat Identity Management. Running on the base OS.
  • Keycloak – Actually an alpha build of Red Hat SSO, running on the base OS. This was fronted by Apache HTTPD, and proxied through ajp://localhost:8109. This gave me HTTPS support using the CA Certificate from the IPA server.  This will be important later when the controller nodes need to talk to the identity provider to set up metadata.
  • Tripleo Quickstart – deployed in HA mode, using an undercloud.
    • ./ --config config/general_config/ha.yml ayoung-dell-t1700.test

In addition, I did some sanity checking of the cluster by deploying the overcloud using the quickstart helper script, and tore it down using heat stack-delete overcloud.

Reproducing Results

When doing development testing, you can expect to rebuild and teardown your cloud on a regular basis.  When you redeploy, you want to make sure that the changes are just the delta from what you tried last time.  As the number of artifacts grew, I found I needed to maintain a repository of files that included the environment passed to openstack overcloud deploy.  To manage these, I create a git repository in /home/stack/deployment. Inside that directory, I copied the and deploy_env.yml files generated by the overcloud, and modified them accordingly.

In my version of, I wanted to remove the deploy_env.yml generation, to avoid confusion during later deployments.  I also wanted to preserve the environment file across deployments (and did not want it in /tmp). This file has three parts: the Keystone configuration values, HTTPS/Network setup, and configuration for a single node deployment. This last part was essential for development, as chasing down fixes across three HA nodes was time-consuming and error prone. The DNS server value I used is particular to my deployment, and reflects the IPA server running on the base host.

For reference, I’ve included those files at the end of this post.

Identity Provider Registration and Metadata

While it would have been possible to run the registration of the identity provider on one of the nodes, the Heat-managed deployment process does not provide a clean way to gather those files and package them for deployment to other nodes.  While I deployed on a single node for development, it took me a while to realize that I could do that, and had already worked out an approach to call the registration from the undercloud node, and produce a tarball.

As a result, I created a script, again to allow for reproducing this in the future:


#!/bin/bash
# Note: $rhsso_master_admin_password is assumed to be set in the environment;
# it is passed to --keycloak-admin-password below.
basedir=$(dirname $0)
ipa_domain=`hostname -d`

keycloak-httpd-client-install \
   --client-originate-method registration \
   --force \
   --mellon-https-port 5000 \
   --mellon-hostname openstack.$ipa_domain  \
   --mellon-root '/v3' \
   --keycloak-server-url https://identity.$ipa_domain  \
   --keycloak-auth-role root-admin \
   --keycloak-admin-password  $rhsso_master_admin_password \
   --app-name v3 \
   --keycloak-realm openstack \
   --log-file $basedir/rhsso.log \
   --httpd-dir $basedir/rhsso/etc/httpd \
   -l "/v3/auth/OS-FEDERATION/websso/saml2" \
   -l "/v3/auth/OS-FEDERATION/identity_providers/rhsso/protocols/saml2/websso" \
   -l "/v3/OS-FEDERATION/identity_providers/rhsso/protocols/saml2/auth"

This does not quite generate the right paths, as it turns out that the $basename is not quite what we want, so I had to post-edit the generated file: rhsso/etc/httpd/conf.d/v3_mellon_keycloak_openstack.conf

Specifically, the path:

has to be changed to:

While I created a tarball that I then manually deployed, the preferred approach would be to use tripleo-heat-templates/puppet/deploy-artifacts.yaml to deploy them. The problem I faced is that the generated files include Apache module directives from mod_auth_mellon.  If mod_auth_mellon has not been installed into the controller, the Apache server won’t start, and the deployment will fail.

Federation Operations

The Federation setup requires a few calls. I documented them in Rippowam, and attempted to reproduce them locally using Ansible and the Rippowam code. I was not a purist though, as A) I needed to get this done and B) the end solution is not going to use Ansible anyway. The general steps I performed:

  • yum install mod_auth_mellon
  • Copy over the metadata tarball, expand it, and tweak the configuration (could be done prior to building the tarball).
  • Run the following commands.
openstack identity provider create --remote-id https://identity.{{ ipa_domain }}/auth/realms/openstack rhsso
openstack mapping create --rules ./mapping_rhsso_saml2.json rhsso_mapping
openstack federation protocol create --identity-provider rhsso --mapping rhsso_mapping saml2

The mapping file is the one from Rippowam.
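In general, a Keystone mapping file is a JSON list of rules pairing remote assertion attributes with local users and groups. The sketch below is a generic, hedged example (the group name and the remote attribute type are illustrative, not necessarily what Rippowam uses):

```json
[
  {
    "local": [
      {"user": {"name": "{0}"}},
      {"group": {"name": "federated_users", "domain": {"name": "Default"}}}
    ],
    "remote": [
      {"type": "MELLON_NAME_ID"}
    ]
  }
]
```

Each rule substitutes values from the "remote" matches (here, the SAML NameID exposed by mod_auth_mellon) into the "local" user and group assignments.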

The keystone service calls only need to be performed once, as they are stored in the database. The expansion of the tarball needs to be performed on every node.


As in previous Federation setups, I needed to modify the values used for WebSSO. The values I ended up setting in /etc/openstack-dashboard/local_settings resembled this:

OPENSTACK_KEYSTONE_URL = "https://openstack.ayoung-dell-t1700.test:5000/v3"
WEBSSO_ENABLED = True
WEBSSO_CHOICES = (
    ("saml2", _("Rhsso")),
    ("credentials", _("Keystone Credentials")),
)

Important: Make sure that the auth URL is using a FQDN name that matches the value in the signed certificate.

Redirect Support for SAML

Several differences between how httpd and HAProxy operate require configuration changes. Keystone runs internally over HTTP, not HTTPS. However, the SAML Identity Providers are public and transmit cryptographic data, so they need to be protected with HTTPS. As a result, HAProxy needs to expose an HTTPS-based endpoint for the Keystone public service. In addition, the redirects that come from mod_auth_mellon need to reflect the public protocol, hostname, and port.

The solution I ended up with involved changes on both sides:

In haproxy.cfg, I modified the keystone public stanza so it looks like this:

listen keystone_public
bind transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind transparent
redirect scheme https code 301 if { hdr(host) -i } !{ ssl_fc }
rsprep ^Location:\ http://(.*) Location:\ https://\1

While this was necessary, it also proved to be insufficient. When the signed assertion from the Identity Provider is posted to the Keystone server, mod_auth_mellon checks that the destination value matches what it expects the hostname should be. Consequently, in order to get this to match in the file:


I had to set the following:

ServerName https://openstack.ayoung-dell-t1700.test

Note that the protocol is set to https even though the Keystone server is handling HTTP. This might break elsewhere. If it does, the Keystone configuration in Apache may have to be duplicated.

Federation Mapping

For the WebSSO login to successfully complete, the user needs to have a role on at least one project. The Rippowam mapping file maps the user to the Member role in the demo group, so the most straightforward steps to complete are to add a demo group, add a demo project, and assign the Member role on the demo project to the demo group. All this should be done with a v3 token:

openstack group create demo
openstack role create Member
openstack project create demo
openstack role add --group demo --project demo Member

Complete helper files

Below are the complete files that were too long to put inline.

# Simple overcloud deploy script

set -eux

# Source in undercloud credentials.
source /home/stack/stackrc

# Wait until there are hypervisors available.
while true; do
    count=$(openstack hypervisor stats show -c count -f value)
    if [ $count -gt 0 ]; then
        break
    fi
    sleep 10
done


# Deploy the overcloud!
openstack overcloud deploy --debug --templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e $HOME/deployment/network-environment.yaml --control-scale 3 --neutron-network-type vxlan --neutron-tunnel-types vxlan -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server -e $HOME/deployment/deploy_env.yaml   --force-postconfig "$@"    || deploy_status=1

# We don't always get a useful error code from the openstack deploy command,
# so check `heat stack-list` for a CREATE_FAILED status.
if heat stack-list | grep -q 'CREATE_FAILED'; then
    deploy_status=1
    for failed in $(heat resource-list \
        --nested-depth 5 overcloud | grep FAILED |
        grep 'StructuredDeployment ' | cut -d '|' -f3)
    do heat deployment-show $failed > failed_deployment_$failed.log
    done
fi

exit $deploy_status


    keystone::using_domain_config: true
        value: true
        value: external,password,token,oauth1,saml2
        value: http://openstack.ayoung-dell-t1700.test/dashboard/auth/websso/
        value: /etc/keystone/sso_callback_template.html
        value: MELLON_IDP

    # In releases before Mitaka, HeatWorkers doesn't modify
    # num_engine_workers, so handle via heat::config 
        value: 1
    heat::api_cloudwatch::enabled: false
    heat::api_cfn::enabled: false
  HeatWorkers: 1
  CeilometerWorkers: 1
  CinderWorkers: 1
  GlanceWorkers: 1
  KeystoneWorkers: 1
  NeutronWorkers: 1
  NovaWorkers: 1
  SwiftWorkers: 1
  CloudName: openstack.ayoung-dell-t1700.test
  CloudDomain: ayoung-dell-t1700.test

  #TLS Setup from enable-tls.yaml
  PublicVirtualFixedIPs: [{'ip_address':''}]
  SSLCertificate: |
    #certificate removed for space
    -----END CERTIFICATE-----

    The contents of your certificate go here
  SSLIntermediateCertificate: ''
  SSLKey: |
    #key removed for space
    -----END RSA PRIVATE KEY-----

    AodhAdmin: {protocol: 'http', port: '8042', host: 'IP_ADDRESS'}
    AodhInternal: {protocol: 'http', port: '8042', host: 'IP_ADDRESS'}
    AodhPublic: {protocol: 'https', port: '13042', host: 'CLOUDNAME'}
    CeilometerAdmin: {protocol: 'http', port: '8777', host: 'IP_ADDRESS'}
    CeilometerInternal: {protocol: 'http', port: '8777', host: 'IP_ADDRESS'}
    CeilometerPublic: {protocol: 'https', port: '13777', host: 'CLOUDNAME'}
    CinderAdmin: {protocol: 'http', port: '8776', host: 'IP_ADDRESS'}
    CinderInternal: {protocol: 'http', port: '8776', host: 'IP_ADDRESS'}
    CinderPublic: {protocol: 'https', port: '13776', host: 'CLOUDNAME'}
    GlanceAdmin: {protocol: 'http', port: '9292', host: 'IP_ADDRESS'}
    GlanceInternal: {protocol: 'http', port: '9292', host: 'IP_ADDRESS'}
    GlancePublic: {protocol: 'https', port: '13292', host: 'CLOUDNAME'}
    GnocchiAdmin: {protocol: 'http', port: '8041', host: 'IP_ADDRESS'}
    GnocchiInternal: {protocol: 'http', port: '8041', host: 'IP_ADDRESS'}
    GnocchiPublic: {protocol: 'https', port: '13041', host: 'CLOUDNAME'}
    HeatAdmin: {protocol: 'http', port: '8004', host: 'IP_ADDRESS'}
    HeatInternal: {protocol: 'http', port: '8004', host: 'IP_ADDRESS'}
    HeatPublic: {protocol: 'https', port: '13004', host: 'CLOUDNAME'}
    HorizonPublic: {protocol: 'https', port: '443', host: 'CLOUDNAME'}
    KeystoneAdmin: {protocol: 'http', port: '35357', host: 'IP_ADDRESS'}
    KeystoneInternal: {protocol: 'http', port: '5000', host: 'IP_ADDRESS'}
    KeystonePublic: {protocol: 'https', port: '13000', host: 'CLOUDNAME'}
    NeutronAdmin: {protocol: 'http', port: '9696', host: 'IP_ADDRESS'}
    NeutronInternal: {protocol: 'http', port: '9696', host: 'IP_ADDRESS'}
    NeutronPublic: {protocol: 'https', port: '13696', host: 'CLOUDNAME'}
    NovaAdmin: {protocol: 'http', port: '8774', host: 'IP_ADDRESS'}
    NovaInternal: {protocol: 'http', port: '8774', host: 'IP_ADDRESS'}
    NovaPublic: {protocol: 'https', port: '13774', host: 'CLOUDNAME'}
    NovaEC2Admin: {protocol: 'http', port: '8773', host: 'IP_ADDRESS'}
    NovaEC2Internal: {protocol: 'http', port: '8773', host: 'IP_ADDRESS'}
    NovaEC2Public: {protocol: 'https', port: '13773', host: 'CLOUDNAME'}
    NovaVNCProxyAdmin: {protocol: 'http', port: '6080', host: 'IP_ADDRESS'}
    NovaVNCProxyInternal: {protocol: 'http', port: '6080', host: 'IP_ADDRESS'}
    NovaVNCProxyPublic: {protocol: 'https', port: '13080', host: 'CLOUDNAME'}
    SaharaAdmin: {protocol: 'http', port: '8386', host: 'IP_ADDRESS'}
    SaharaInternal: {protocol: 'http', port: '8386', host: 'IP_ADDRESS'}
    SaharaPublic: {protocol: 'https', port: '13386', host: 'CLOUDNAME'}
    SwiftAdmin: {protocol: 'http', port: '8080', host: 'IP_ADDRESS'}
    SwiftInternal: {protocol: 'http', port: '8080', host: 'IP_ADDRESS'}
    SwiftPublic: {protocol: 'https', port: '13808', host: 'CLOUDNAME'}

  OS::TripleO::NodeTLSData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/tls-cert-inject.yaml

   ControllerCount: 1 

by Adam Young at August 11, 2016 05:53 PM

The Official Rackspace Blog

Leading Experience and Expertise: 1 Billion OpenStack Hours Served

What can be accomplished in 1 billion hours? It’s a number so large it’s difficult to put into context. One billion hours, or 114,000+ years ago, humankind was in the early stages of its development, preceding even the earliest human civilizations by more than 100,000 years.

It would be a monumental understatement to say humankind has accomplished a great deal in 1 billion hours. But it is fair to say that in the past 1 billion hours, humans have leveraged our collective experience and acquired expertise to make advancements in every area of our existence.

This holds true even in the very recent world of cloud computing, where nothing can fully replace the experience and expertise that comes from operating a cloud at scale over a long period of time. That’s why we at Rackspace are proud to have reached the significant milestone of 1 billion server hours managing OpenStack clouds.

1 Billion Hours

It’s a milestone that demonstrates our commitment to the OpenStack project we helped start in 2010 and have made a centerpiece of our company’s strategy ever since. Six years later, that commitment and focus has made us:

  • The standard bearer and a leader in the OpenStack community.
  • The OpenStack market leader with more than 5x the revenue of our closest competitor.
  • Creator and operator of the world’s largest OpenStack powered public cloud.
  • Operator of some of the world’s largest OpenStack powered private clouds.
  • Operator of the world's largest developer cloud.

It’s also given us the opportunity to contribute back to the community and to the project in tangible ways, including:

  • Extending the scaling capabilities of OpenStack by creating Nova cells based on our public cloud experience.
  • Creating the OpenStack-Ansible project so that others in the community can leverage the best practices we’ve acquired over the years deploying OpenStack.
  • Creating the Tempest project which automates QA testing of OpenStack.
  • Creating the Magnum project for container orchestration and management.
  • Creating the Craton project to automate cloud management at scale based on tools we developed to manage our public cloud.

The experience and expertise that comes with having reached this milestone means Rackspace is best positioned to lead the OpenStack project into the future. Reaching this milestone also means no one has more to offer to help customers succeed with their private cloud deployments than us.

We’ve literally seen it all when it comes to operating and scaling OpenStack clouds and with our managed cloud approach, customers can rely on our 1,000+ OpenStack experts to operate their cloud while they focus on building and running revenue-generating applications.

What might the next 1 billion server hours look like for Rackspace and OpenStack? We want to continue pushing the project forward into new areas of advancement by sharing our expertise with the community and showing how we are able to extend the capabilities of OpenStack at large scale. We’re also committed to driving OpenStack adoption in the enterprise by continuing to improve on the best OpenStack private cloud offering in the industry.

If you are interested in finding out more about what we’ve learned from 1 billion server hours managing OpenStack, we invite you to request a free strategy session where you will have the opportunity to meet with our OpenStack Solutions Architects who can answer your OpenStack questions. You can sign up for a session at

The post Leading Experience and Expertise: 1 Billion OpenStack Hours Served appeared first on The Official Rackspace Blog.

by Kenneth Hui at August 11, 2016 05:13 PM

Andreas Jaeger

Testing OpenStack with always updating python package versions

With any software package, you will need additional packages to run it. Often, there's a tight coupling: The software package will only run with specific other package versions. This dependency information is sometimes found in README files, in code, or in package metadata. If you install the package, you need to figure out the dependency and
handle it properly.

The Python package installer pip uses a list of requirements to install dependent Python packages. This list not only contains the names of packages but also limits which versions to use, or not to use.
In OpenStack we handle these dependencies in a global requirements list and use it for most of the repositories. During initial testing a specific package version is used, but at a later point another one might be installed, and during deployment yet another one.

To document what was tested, give guidance for deployment, and help figure out breakage caused by upstream projects, the OpenStack requirements project maintains a set of constraints with packages pinned to specific versions that are known to work.
These are in the upper-constraints.txt file.

Devstack already handles upper-constraints.txt when installing packages, and I'm happy to say that tox, the Python testing framework used in OpenStack, can now handle upper-constraints everywhere as well.

Constraints for tox based jobs

To use constraints, change in tox.ini the install command to:

install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:} {opts} {packages}


Note that constraints are used for the installation of each package, so if you want to install a package from source while the constraints file pins that package to a specific version, the install will fail. This happens with some of the OpenStack python client packages: when they install their dependencies, those might have a dependency on the client package itself, and this then causes an error since the client package should be installed from source.

So, projects need to remove themselves from the constraints file if they run into this. Packages like python-novaclient and python-glanceclient therefore use a wrapper (tools/ as install command to edit the constraints file first and remove their own project from it.
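A minimal sketch of what such a wrapper might do, in Python. This is my illustration of the idea, not the actual wrapper script; the function name and sample data are assumptions:

```python
# Hypothetical sketch: filter a project's own pin out of an
# upper-constraints.txt-style list, so the project itself can still be
# installed from source while everything else stays pinned.

def filter_constraints(constraints_text, own_project):
    # Normalize - and _ the way pip does when comparing package names.
    own = own_project.lower().replace("_", "-")
    kept = []
    for line in constraints_text.splitlines():
        # Constraint lines look like "name===1.2.3".
        name = line.split("===")[0].strip().lower().replace("_", "-")
        if name != own:
            kept.append(line)
    return "\n".join(kept)

constraints = "\n".join([
    "oslo.context===2.7.0",
    "python-novaclient===5.0.0",
    "requests===2.10.0",
])
print(filter_constraints(constraints, "python-novaclient"))
```

The real wrappers also have to write the edited file somewhere and pass it on to pip; this only illustrates the filtering step.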

Also, be aware that this works only for those jobs that have been enabled for it in the project-config repository. It's done for all the generic tox-enabled targets and should be done for all custom tox targets as well. Some repositories, like project-config itself, do not use constraints, so those jobs are not set up.

Constraints for DevStack jobs

Devstack-gate takes care of using constraints; there is nothing a repository needs to do to honor them.

Check the devstacklog.txt file; if constraints are in use it will contain lines like:

Collecting oslo.context===2.7.0 (from -c /opt/stack/new/requirements/upper-constraints.txt (line 204))


To learn more about constraints, read the requirements documents. There is also a spec that explains all the steps that were needed for this.


As usual in OpenStack, such work is a team work of many people. I'd like to thank especially:

  • Robert Collins 'lifeless': For writing the initial spec, implementation work, and giving guidance on many of these details.
  • Sean Dague: He was bold enough to try using constraints everywhere and showing us where it failed.
  • Sachi King for making zuul-cloner usable in the post queue. This was a missing part in the last months.
  • The OpenStack infra team for many reviews and design discussions - especially to Jeremy Stanley and Jim Blair.

by Andreas Jaeger ( at August 11, 2016 05:00 PM

The Official Rackspace Blog

Cloud Predictions and Trends: Kicking Off Solve San Francisco

Rackspace::Solve kicks off today in downtown San Francisco with a full day of featured speakers and breakout sessions geared towards solving tough business and IT challenges using the cloud.

Solve conferences are excellent opportunities to learn how industry experts—including our most innovative customers and partners— are using the cloud to scale their businesses, speed time to market and reduce cost.

Throughout the day, attendees will get an inside look at the strategies and solutions being used by fast-growing companies. Speakers and session leaders will share practical, real-world solutions for challenges such as cloud security, big data and optimizing multiple cloud deployments.

In between sessions and during breaks, attendees can visit the Rackspace Solutions Pavilion, where they’ll be able to get one-on-one advice from Rackspace specialists and partners on topics such as OpenStack private cloud (as well as public and hybrid cloud), Fanatical Support for Amazon Web Services, Fanatical Support for Microsoft Azure and Cloud Office, just to name a few.

Speaker sessions will begin with Rackspace SVP of Strategy and Product Scott Crenshaw, who will deliver the keynote, followed by Shadrach Kisten, SVP of information technology and digital media engineering at Sesame Street, who will speak about ways his organization has leveraged Fanatical Support for AWS to help more children grow and learn important skills. Additional speakers will offer their predictions on where the cloud is going and the direction it’s taking businesses in.

The full schedule of speakers and breakout sessions can be viewed here, as well as more information about our partners and sponsors, who help make Solve possible.

In addition to helping companies harness the power of the cloud, Rackspace is also committed to fostering social empowerment, and we’ll be donating $15 per attendee (up to $5,000) to Girls Who Code, a national non-profit organization dedicated to closing the gender gap in technology. Representatives from Girls Who Code will also be available at the Solutions Pavilion.

Check out the video below for more information on Solve events. If you would like to become a sponsor for a future event or have a specific question, please email:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="" width="560"></iframe>


Golden Gate Bridge image courtesy of different2une.

The post Cloud Predictions and Trends: Kicking Off Solve San Francisco appeared first on The Official Rackspace Blog.

by Abe Selig at August 11, 2016 12:00 PM

Lars Kellogg-Stedman

Exploring YAQL Expressions

The Newton release of Heat adds support for a yaql intrinsic function, which allows you to evaluate yaql expressions in your Heat templates. Unfortunately, the existing yaql documentation is somewhat limited, and does not offer examples of many of yaql's more advanced features.

I am working on a Fluentd composable service for TripleO. I want to allow each service to specify a logging source configuration fragment, for example:

    type: json
    description: Fluentd logging configuration for nova-api.
    default:
      tag: openstack.nova.api
      type: tail
      format: |
        /(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d+) (?<pid>\d+) (?<priority>\S+) (?<message>.*)/
      path: /var/log/nova/nova-api.log
      pos_file: /var/run/fluentd/openstack.nova.api.pos

This generally works, but several parts of this fragment are going to be the same across all OpenStack services. I wanted to reduce the above to just the unique attributes, which would look something like:

    type: json
    description: Fluentd logging configuration for nova-api.
    default:
      tag: openstack.nova.api
      path: /var/log/nova/nova-api.log

This would ultimately give me a list of dictionaries of the form:

[
  {
    "tag": "openstack.nova.api",
    "path": "/var/log/nova/nova-api.log"
  },
  {
    "tag": "openstack.nova.scheduler",
    "path": "/var/log/nova/nova-scheduler.log"
  }
]

I want to iterate over this list, adding default values for attributes that are not explicitly provided.

The yaql language has a select function, somewhat analogous to the SQL select statement, that can be used to construct a new data structure from an existing one. For example, given the above data in a parameter called sources, I could write:

      data:
        sources: {get_param: sources}
      expression: >
        $.data.sources.select({
          'path' => $.path,
          'tag' => $.tag,
          'type' => $.get('type', 'tail')})

This makes use of the .get method to insert a default value of tail for the type attribute for items that don't specify it explicitly. This would produce a list that looks like:

[
  {
    "path": "/var/log/nova/nova-api.log",
    "tag": "openstack.nova.api",
    "type": "tail"
  },
  {
    "path": "/var/log/nova/nova-scheduler.log",
    "tag": "openstack.nova.scheduler",
    "type": "tail"
  }
]
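As a cross-check of what the select is doing, here is the same transformation in plain Python (my analogue, not part of the template):

```python
# Python analogue of the yaql select above: build a new list of
# dictionaries, filling in a default 'type' where none is given.
sources = [
    {"tag": "openstack.nova.api",
     "path": "/var/log/nova/nova-api.log"},
    {"tag": "openstack.nova.scheduler",
     "path": "/var/log/nova/nova-scheduler.log"},
]

transformed = [
    {"path": s["path"], "tag": s["tag"], "type": s.get("type", "tail")}
    for s in sources
]

for entry in transformed:
    print(entry["tag"], entry["type"])
```

Like yaql's `$.get('type', 'tail')`, Python's `dict.get` returns the default when the key is absent.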

That works fine, but what if I want to parameterize the default value such that it can be provided as part of the template? I wanted to be able to pass the yaql expression something like this...

      data:
        sources: {get_param: sources}
        default_type: tail

...and then within the yaql expression, insert the value of default_type into items that don't provide an explicit value for the type attribute.

This is trickier than it might sound at first because within the context of the select method, $ is bound to the local context, which will be an individual item from the list. So while I can ask for $.path, there's no way to refer to items from the top-level context. Or is there?

The operators documentation for yaql mentions the "context pass" operator, ->, but doesn't provide any examples of how it can be used. It turns out that this operator will be the key to our solution. But before we look at that in more detail, we need to introduce the let statement, which can be used to define variables. The let statement isn't mentioned in the documentation at all, but it looks like this:

let(var => value, ...)

By itself, this isn't particularly useful. In fact, if you were to type a bare let statement in a yaql evaluator, you would get an error:

yaql> let(foo => 10, bar => 20)
Execution exception: <yaql.language.contexts.Context object at 0x7fbaf9772e50> is not JSON serializable

This is where the -> operator comes into play. We use that to pass the context created by the let statement into a yaql expression. For example:

yaql> let(foo => 10, bar => 20) -> $foo
10
yaql> let(foo => 10, bar => 20) -> $bar
20

With that in mind, we can return to our earlier task, and rewrite the yaql expression like this:

      data:
        sources: {get_param: sources}
        default_type: tail
      expression: >
        let(default_type => $.data.default_type) ->
        $.data.sources.select({
          'path' => $.path,
          'tag' => $.tag,
          'type' => $.get('type', $default_type)})

Which will give us exactly what we want. This can of course be extended to support additional default values:

      data:
        sources: {get_param: sources}
        default_type: tail
        default_format: >
          /some regular expression/
      expression: >
        let(
          default_type => $.data.default_type,
          default_format => $.data.default_format
        ) ->
        $.data.sources.select({
          'path' => $.path,
          'tag' => $.tag,
          'type' => $.get('type', $default_type),
          'format' => $.get('format', $default_format)
        })

Going out on a bit of a tangent, there is another statement not mentioned in the documentation: the def statement lets you define a yaql function. The general format is:

def(func_name, func_body)

Where func_body is a yaql expression. For example:

def(upperpath, $.path.toUpper()) ->

Which would generate:


This obviously becomes more useful as you use user-defined functions to encapsulate more complex yaql expressions for re-use.

Thanks to sergmelikyan for his help figuring this out.

by Lars Kellogg-Stedman at August 11, 2016 04:00 AM

August 10, 2016

OpenStack Superuser

OpenStack-Salt for workflow management in private clouds

OpenStack-Salt became an official project under the Big Tent in May 2016.

The roots of this project date back three years, to when we were searching for a provisioning tool to automate deployment of our OpenStack Folsom cloud. I had already worked with SaltStack for a year at the time, so configuring and deploying OpenStack was a welcome challenge that pushed our use of SaltStack to its very limits. Fortunately, SaltStack proved to be the right tool to fulfill all of our requirements and helped us automate not only OpenStack but eventually all software systems, monitoring, continuous integration and deployment (CI/CD) pipelines and multiple application stacks.

This article presents a basic introduction to OpenStack-Salt and outlines our major goals. It’s structured to answer some of the most frequent questions that come to mind when looking at a configuration management project like this.

Is it just another config management tool?

“Why do we need any more config management tools? Aren’t there enough already? We have Chef, Ansible, Puppet-based solutions…” This is the most common reaction that we get from community members. The reality is that OpenStack-Salt is not really just another implementation of an OpenStack deployer based on SaltStack.

The main difference is the ability to set up and maintain operational workflows. As Lachlan Evenson and Jakub Pavlik pointed out last week, a deployment tool is not enough.

Well-defined topologies and related workflows are what we need. Our mission is not to create a tool suitable only for deploying OpenStack on a laptop in 30 minutes; that is what DevStack is designed for. We have built a solution that can scale environments to hundreds of nodes and provide life-cycle management, monitoring, backups and documentation. That’s what people really need.

The OpenStack-Salt project uses SaltStack as an essential piece of one big puzzle and integrates various other technologies, each playing its role, to form a complete ecosystem. Combining all of these services gives us the ability to create operational-level workflows that can deliver complete service development and deployment pipelines from source to production.

What about serialized know-how?

It does not aim to have the total configurability of DevStack but focuses more on implementing the best practices. We started the project on three production environments and it has changed my mindset over time, from a developer point of view to an operations-focused one. OpenStack-Salt is maintained by people who know how to run large environments and have good operational knowledge. The goal is not to parametrize all the possible options, but rather to present the "best practice" configuration setups.

Let me give you an example. Recently, a request came from the community to support Qpid alongside the RabbitMQ message bus. We wondered why, because Qpid is not widely used in the community and has high-availability issues. Will this feature ever be used in a production environment, or is it just development for development's sake? The goal is not to provide every option, but to help people in operations with tuned parameters. Supporting every possible option would end up in tooling so complex that no one would be able to use it.

What really makes me and everyone in the OpenStack-Salt community happy is the reaction from Thomas Hatch (founder of SaltStack) regarding the state of the project and its relation to the official Salt community:

"I REALLY like what you did with pillar, using pillar as a high level configuration is a great way to go when making reusable states, very nice!!

I am not a big fan of reclass, but you have abstracted it in the right way making it very clean, again, very nice.

... while we deploy OpenStack fairly often we end up deploying it in a more custom way per deployment, whereas your approach is a much better top down flexible design."

This brings us to the next question concerning our metadata model.

What about Reclass as infrastructure-as-code?

Reclass usually gets the most controversial feedback from the community. This project is maintained by Martin F. Krafft, who drives most of the development. What Reclass really brings to the project is modeling whole infrastructures: not just models of OpenStack services, but all infrastructure and support services (monitoring, logging, backups, firewalls, documentation), along with support for routers, switches and bare metal servers.

The metadata model relies on two basic principles: interpolation and merging. Interpolation gives us the ability to reference any other parameter in the model and thus reduce duplicate definitions. Deep parameter merging allows us to split systems and services into metadata fragments that are easy to manage. The final model is the result of many merged service metadata fragments with multiple interpolated parameters. There are several new options for managing the metadata for Salt, for example pillar stack, which also solves parameter interpolation and merging and may replace Reclass in the future. We plan to add the ability to use plain pillar trees in the near future.
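The two principles can be sketched in a few lines of Python. This is my minimal illustration, not the actual Reclass implementation; the `${a:b}` reference syntax follows Reclass, while the function names and sample fragments are mine:

```python
# Minimal sketch of the two metadata-model principles: deep merging of
# metadata fragments and ${...} parameter interpolation.
import re

def deep_merge(base, override):
    # Recursively merge 'override' into 'base', dict by dict.
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

def interpolate(params):
    # Resolve ${a:b:c} references against the merged parameter tree.
    def lookup(path):
        node = params
        for part in path.split(":"):
            node = node[part]
        return node

    def resolve(value):
        if isinstance(value, str):
            return re.sub(r"\$\{([^}]+)\}",
                          lambda m: str(lookup(m.group(1))), value)
        if isinstance(value, dict):
            return {k: resolve(v) for k, v in value.items()}
        return value

    return resolve(params)

fragment_a = {"cluster": {"domain": "example.com"}}
fragment_b = {"keystone": {"host": "identity.${cluster:domain}"}}
merged = interpolate(deep_merge(fragment_a, fragment_b))
print(merged["keystone"]["host"])  # identity.example.com
```

The real implementation handles lists, nested references and override semantics far more carefully; the point here is only how merging and interpolation compose.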

What service-oriented or container services can OpenStack-Salt support?

The big question is whether to use virtual machines or containers to run the services. With the introduction of Docker and Dockerfiles, the use of configuration management tools was abandoned and a new standard for defining container content was introduced. With all service configuration already covered by Salt, we felt the need to combine the two approaches. The Dockerfile is generated by a simple template that invokes a Salt configuration run to build the actual container image content. The images are stored in a local registry and used in Docker or Kubernetes clusters. We only had to add the container entry point and disable the service.

The transition to containers was a big test of the modularity and service decomposition of all our modules. Fortunately, we had chosen the right granularity of service composition and were able to model every scenario so far without overhauling any of the Salt formulas. Now we can freely combine host-based services with containerized micro-services within one application stack.

Get involved

I am really happy to see the project growing under the OpenStack Big Tent. If you're interested in any of the topics mentioned above, come to any of our weekly IRC meetings on Tuesdays 13:00 GMT on #openstack-meeting-4 channel or just write to our #openstack-salt channel anytime. We are always looking for new challenges and opportunities.

This article was contributed by Ales Komarek, PTL of OpenStack-Salt and a software architect at tcp cloud. Superuser is always interested in community contributions, get in touch:

Cover Photo // CC BY NC

by Ales Komarek at August 10, 2016 10:15 PM


5 Minutes Stacks, 31 episode : MyStart

Episode 31 : MyStart

This stack helps you initialize your tenant: it creates a keypair, a network and a security group. These resources are required to create instances in the cloud.


The prerequisites to deploy this stack

A one-click chat sounds really nice…

Go to the Apps page on the Cloudwatt website, choose the apps, press DEPLOY and follow the simple steps…

You do not have a way to create the stack from the console?

We do indeed! Using the console, you can deploy your starting kit:

  1. Go the Cloudwatt Github in the applications/blueprint-mystart repository
  2. Click on the file named blueprint-mystart.heat.yml (or blueprint-mystart.restore.heat.yml to restore from backup)
  3. Click on RAW, a web page will appear containing purely the template
  4. Save the file to your PC. You can use the default name proposed by your browser (just remove the .txt)
  5. Go to the « Stacks » section of the console
  6. Click on « Launch stack », then « Template file » and select the file you just saved to your PC, and finally click on « NEXT »
  7. Name your stack in the « Stack name » field
  8. Fill the two fields « Name prefix key » and « /24 CIDR of private network », then click « LAUNCH »

The stack will be automatically generated (you can see its progress by clicking on its name). When all modules become green, the creation will be complete. If you’ve reached this point, you’re already done!


To download your private key, consult this url, then click on Download key pair "prefix-your_stack_name".

Now you can launch your first instance

Other resources you could be interested in:

Have fun. Hack in peace.

by Julien DEPLAIX at August 10, 2016 10:00 PM


OpenStack Days Silicon Valley (The Unlocked Infrastructure Conference) Day 1

The post OpenStack Days Silicon Valley (The Unlocked Infrastructure Conference) Day 1 appeared first on Mirantis | The Pure Play OpenStack Company.

This year’s OpenStack Days Silicon Valley, held once again at the Computer History Museum, carried a theme of industry maturity; we’ve gone, as Mirantis CEO and co-Founder Alex Freedland said in his introductory remarks, from wondering if OpenStack was going to catch on to wondering where containers fit into the landscape to talking about production environments of both OpenStack and containers.

Here’s a look at what you missed.

OpenStack: What Next?

OpenStack Foundation Executive Director Jonathan Bryce started the day off talking about the future of OpenStack. He’s been traveling the globe visiting user groups and OpenStack Days events, watching as the technology takes hold in different parts of the world, but his predictions were less about what OpenStack could do and more about what people — and other projects — could do with it.

Standard frameworks, he said, provided the opportunity for large numbers of developers to create entirely new categories. For example, before the LAMP stack (Linux, Apache, MySQL and PHP) the web was largely made up of static pages, not the dynamic applications we have now. Android and iOS provided common frameworks that enable developers to release millions of apps a year, supplanting purpose-built machinery with a simple smartphone.

To make that happen, though, the community had to do two things: collaborate and scale. Just as the components of LAMP worked together, OpenStack needed to collaborate with other projects, such as Kubernetes, to reach its potential.

As for scaling, Jonathan pointed out that historically, OpenStack has been difficult to set up. It’s important to make success easier to duplicate. While there are incredible success stories out there, with some users using thousands of nodes, those users originally had to go through a lot of iterations and errors. For future developments, Jonathan felt it was important to share information about errors made, so that others can learn from those mistakes, making OpenStack easier to use.

To that end, the OpenStack Foundation is continuing to produce content to help with specific needs, ranging from explaining the business benefits to a manager to more complex topics such as security. He also talked about the need to grow the talent pool, and about the ability for students to take the Certified OpenStack Administrator exam (or others like it) to prove their capabilities in the market.

User talks

One thing that was refreshing about OpenStack Days Silicon Valley was the number of user-given talks. On day one we heard from Walmart, SAP, and AT&T, all of which have significantly transformed their organizations through the use of OpenStack.

OpenStack, Sean Roberts explained, enabled Walmart to build applications that can heal themselves, with failure scenarios that have rules about how to recover from those failures. WalmartLabs, the online end of the company, has been making great strides with OpenStack, in particular with a devops tool called OneOps. The tool makes it possible to manage their large number of nodes easily, and he suggested that it might do even better as an independent project under OpenStack.

Markus Riedinger talked about how SAP introduced OpenStack. After making 23 acquisitions in a short period of time, the company was faced with a diverse infrastructure that didn’t lend itself to collaboration. In the last few years it has begun to move toward cloud-based work, and in 2013 it started adopting OpenStack. Now the company has a container-based OpenStack structure based on Puppet, providing a clean separation of control and data, and a fully automatic system with embedded analytics and pre-manufactured PODs for capacity extension. Their approach means that 1-2 people can take a data center from commissioned bare metal to an operational, scalable Kubernetes cluster running a fully configured OpenStack platform in less than a day.

Greg Stiegler discussed AT&T’s cloud journey, and Open Source and OpenStack at AT&T. He said that rapid advancements in mobile data services have brought numerous benefits, and in turn network traffic has exploded, with traffic expected to grow 10 times by 2020. To facilitate this growth, AT&T needed a platform, with a goal of remaining as close to trunk as possible to reduce technical debt. The result is the AT&T Integrated Cloud. Sorabh Saxena spoke about it at the OpenStack Summit in Austin earlier this year, but new today was the notion that the community effort should have a unified roadmap leader, with a strategy around containers that needs to be fully developed, and a rock-solid core tent.

Greg finished up by saying that while AT&T doesn’t expect perfection, it does believe that OpenStack needs to be continually developed and strengthened. The company is grateful for what the community has always provided, and AT&T has provided an AT&T community team. Greg felt that the moral of his story was that by working together, community collaboration brings solutions at a faster rate, while weeding out mistakes through the experiences of others.

What venture capitalists think about open source

Well that got your attention, didn’t it? It got the audience’s attention too, as Martin Casado, a General Partner at Andreessen Horowitz, started the talk by saying that the current prevailing wisdom is that infrastructure is dead. Why? Partly because people don’t understand what the cloud is, and partly because they figure that if the cloud is free, “What else is there to invest in?” Having looked into it, he thinks that view is dead wrong, and even believes that newcomers now have an unfair advantage.

Martin (who in a former life helped create the “software defined” movement by co-founding SDN maker Nicira) said that for this talk, something is “software defined” if you can implement it in software and distribute it in software. For example, in the consumer space, GPS devices have largely been replaced by software applications like Waze, which can be distributed to millions of phones, which themselves run diverse apps to replace many functionalities that used to be “wrapped in sheet metal”.

He argued that infrastructure is following the same pattern. It used to be that the only common interface was internet or IP, but that we have seen a maturation of software that allows you to insert core infrastructure as software. Martin said that right now is one of these few times where there’s a market sufficient for building a company with a product that consists entirely of software.  (You still, however, need a sales team, sorry.)

The crux of the matter, though, is that the old model for Open Source has changed. Open Source companies used to be support companies; now many companies use Open Source to reach customers and build credibility, while the actual commercial offering is a service. Companies doing this, such as GitHub (which didn’t even invent Git), have been enormously successful.

And now a word from our sponsors…

The morning included several very short “sponsor moments”, two of which included mini tech talks.

The third was Michael Miller of Suse, who was joined onstage by Boris Renski from Mirantis. Together they announced that Mirantis and Suse would be collaborating with each other to provide support for SLES as both hosts and guests in Mirantis OpenStack, which already supports Ubuntu and Oracle Linux.

“At this point, there is only one conspicuous partner missing from this equation,” Renski said. Not to worry, he continued. SUSE has an expanded support offering, so in addition to supporting SUSE hosts, through the new partnership, Mirantis/SUSE customers with CentOS and RHEL hosts can also get support. “Mirantis is now a one-stop shop for supporting OpenStack.”

Meanwhile, Sujal Das, SVP of Marketing for Netronome, discussed networking and security, and the many industry reports that highlight the importance of zero-trust security, with each VM and application needing to be trusted. OpenStack enables centralized control and automation in these types of deployments, but there are challenges when using OVS and connection tracking, which affect VMs and the efficiency of the server. Ideally, you would like line-rate performance, but Netronome ran tests showing that you do not get that performance with zero-trust security and OpenStack. Netronome is working on enhancements and adaptations to assist with this.

Finally, Evan Mouzakitis of Datadog gave a great explanation of how you can look more closely at events that happen when you are using OpenStack, to see not only what happened, but why. Evan explained that OpenStack uses RabbitMQ by default for message passing, and that once you can listen to that, you know a lot more about what’s happening under the hood and about the events that are occurring. (Hint: go to
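As a sketch of what can be gleaned from those events: OpenStack services publish JSON notification envelopes (via oslo.messaging) carrying fields such as event_type, publisher_id, and payload. The message below is invented for illustration, not captured from a real deployment:

```python
import json

# Invented example of the kind of notification envelope OpenStack
# services emit onto RabbitMQ (real messages carry more fields,
# e.g. priority and timestamp).
raw = json.dumps({
    "event_type": "compute.instance.create.end",
    "publisher_id": "compute.host-1",
    "payload": {"instance_id": "abc-123", "state": "active"},
})

def summarize(message):
    """Reduce a notification envelope to a one-line summary."""
    note = json.loads(message)
    return "{} from {}: state={}".format(
        note["event_type"],
        note["publisher_id"],
        note["payload"].get("state", "?"),
    )

print(summarize(raw))
# -> compute.instance.create.end from compute.host-1: state=active
```

Binding a queue to a service's notification exchange and decoding messages like this is one way monitoring tools can surface the "why" behind an event.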

Containers, containers, containers

Of course, the main thrust was OpenStack and containers, and there was no shortage of content along those lines.

Craig McLuckie of Google and Brandon Philips of CoreOS sat down with Sumeet Singh of AppFormix to talk about the future of OpenStack, namely the integration of OpenStack and Kubernetes. Sumeet started this discussion swiftly, asking Craig and Brandon “If we have Kubernetes, why do we need OpenStack?”

Craig said that enterprises need hybrids of technologies, and that there is a lot of alignment between the two, so both can be useful. Brandon added that there’s a large incumbent base of virtual machine users, and they aren’t going to go away.

There’s a lot of integration work, but also a lot of other work to do as a community. Some of it sits at the next level of abstraction: one example is rallying together to give software vendors a set of common standards for describing packages. Craig also believed that there’s a good opportunity to think about brokering of services and lifecycle management.

Craig also mentioned that he felt we need to start thinking about how to bring the OpenStack and Cloud Native Computing foundations together, and how to create working groups that span the two foundations’ boundaries.

In terms of using the two together, Craig said that from his experience he found that enterprises usually ask what it looks like to use the two. As people start to understand the different capabilities they shift towards it, but it’s very new and so it’s quite speculative right now.

Finally, Florian Leibert of Mesosphere, Andrew Randall of Tigera, Ken Robertson of Apcera, and Amir Levy of Gigaspaces sat down with Jesse Proudman of IBM to discuss “The Next Container Standard”.

Jesse started off the discussion by talking about how rapidly OpenStack has developed, and how in two short years containers have penetrated the marketplace. He questioned why that might be.

Some of the participants suggested that a big reason for their uptake is that containers drive adoption and help with inefficiencies, so customers can easily see how dynamic this field is in providing for their requirements.

A number of participants felt that containers are another wonderful tool for getting the job done, and that they’ll see more innovations down the road. Florian pointed out that containers were around before Docker; what Docker has done is allow individuals to use containers on their own websites. Containers are just a part of an evolution.

As for Cloud Foundry vs. Mesos or Kubernetes, most of the participants agreed that standard orchestration has let us step up a level in the stack, and that the underlying tools can be used together, as long as you use the right models. Amir argued that there is no need to champion one specific technology; there will always be new technologies around the corner, and whatever we see today will be different tomorrow.

Of course there’s the question of whether these technologies are complementary or competitive. Florian argued that it came down to religion, and that over time companies will often evolve to be very similar to one another. But if it is a religious decision, then who was making that decision?

The panel agreed that it is often the developers themselves who make decisions, but that eventually companies will choose to deliberately use multiple platforms or they will make a decision to use just one.

Finally, Jesse asked the panel about how the wishes of companies for a strong ROI affects OpenStack, leading to a discussion about the importance of really strong use cases, and showing customers how OpenStack can improve speed or flexibility.

Coming up

So now we head into day 2 of the conference, where it’s all about thought leadership, community, and user stories. Look for commentary from users such as Tapjoy and thought leadership from voices such as James Staten of Microsoft, Luke Kanies of Puppet, and Adrian Cockcroft of Battery Ventures.



The post OpenStack Days Silicon Valley (The Unlocked Infrastructure Conference) Day 1 appeared first on Mirantis | The Pure Play OpenStack Company.

by Catherine Kim at August 10, 2016 09:31 PM


Get DreamHost Cloud Credit In Exchange For Your Knowledge

Here’s your chance to show off your cloud knowledge: share a tutorial on how you run applications on DreamHost Cloud and get your monthly bill down! Chances are your bill will go to $0… Sweet deal!

The DreamHost Cloud team is launching the “Cloud Documentation Bounty”, a program in which cloud users can submit documentation to the Cloud Knowledge Base and get free cloud resources in return. You can write an article on how to do a task with DreamHost DreamCompute or DreamObjects, and get up to $100 credit for your effort.

Doesn’t that sound great? The Cloud team keeps all its tutorials and documents in a convenient git repository; new articles written in Sphinx/reStructuredText format can easily be submitted as pull requests to that repository. We’ll review the articles submitted, and once the PR merges your Cloud bill will be lowered by $100. The article will then be published to the Knowledge Base by an automated Jenkins job (we love our Jenkins builder).

What kinds of articles fit in our knowledge base? Good question, glad you asked! We’re looking for articles that teach how to use DreamHost Cloud and OpenStack in creative and efficient ways; for example, launching a noSQL service with Ansible, running a self-hosted server, or how you use DreamObjects to store your backups and other files. Check the existing documents for cloud servers and object storage to get a feel for our knowledge base.

We are really excited to launch this program and hope to see your pull request on our github repository. Do you have more questions? Check the rules of the Cloud Documentation Bounty Program and start writing to get free cloud resources.

PS. You don’t have to necessarily use github: send us your articles via email or hand-written and it’ll be just fine.

The post Get DreamHost Cloud Credit In Exchange For Your Knowledge appeared first on Welcome to the Official DreamHost Blog.

by Stefano Maffulli at August 10, 2016 05:00 PM


Speedup Kolla with Ansible fact caching


Kolla’s master Ansible playbook is big and consists of many playbooks. Gathering facts before each playbook call takes time, and that time depends on the size of the environment, so it may not be negligible.

Fact Caching, introduced in Ansible 1.8, helps avoid the side effects of excessive coffee consumption while waiting for the execution of actions. Ansible ships with two cache plugins: JSON and Redis. The following example highlights the Redis configuration.

Configuring Fact Caching

To set up Redis fact caching, proceed as follows on the Kolla deployment host.

  1. Start a Redis container:
    $ docker run -d --name redis -p 6379:6379 redis
  2. Install the Python client for Redis:
    $ pip install redis
  3. Add the following to the [defaults] section of the Ansible configuration file, e.g. /etc/ansible/ansible.cfg:
    gathering = smart
    fact_caching = redis
    fact_caching_timeout = 3600
    fact_caching_connection =
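With gathering = smart, facts whose cached copy is still fresh are not re-gathered. The role of fact_caching_timeout can be sketched with a small Python stand-in (a plain dict instead of Redis; the class and method names here are illustrative, not Ansible's actual implementation):

```python
import time

class FactCache:
    """Toy timeout-based fact cache mimicking fact_caching_timeout."""

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self._store = {}  # host -> (stored_at, facts)

    def set(self, host, facts):
        self._store[host] = (time.time(), facts)

    def get(self, host):
        entry = self._store.get(host)
        if entry is None:
            return None  # never gathered: caller must gather facts
        stored_at, facts = entry
        if time.time() - stored_at > self.timeout:
            del self._store[host]
            return None  # expired: caller must gather again
        return facts

cache = FactCache(timeout_seconds=3600)
cache.set("node1", {"ansible_memtotal_mb": 3951})
print(cache.get("node1"))  # hit: within the timeout window
```

Redis simply plays the part of the dict shared across ansible-playbook invocations, with expiry handled per key.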

Testing Fact Caching

Use the following playbook for a quick test:

- hosts: all
  tasks:
    - debug: var=ansible_memtotal_mb

- hosts: all
  tasks:
    - debug: var=ansible_memtotal_mb

- hosts: all
  tasks:
    - debug: var=ansible_memtotal_mb

- hosts: all
  tasks:
    - debug: var=ansible_memtotal_mb

Save the playbook to testing.yml and run it with the inventory file of the Kolla environment. After the facts have been gathered and cached once, the remaining plays (and subsequent runs) should execute quickly.

$ ansible-playbook -i KOLLA-INVENTORY-FILE testing.yml

The post Speedup Kolla with Ansible fact caching appeared first on Betacloud.

by Christian Berendt at August 10, 2016 04:09 PM


Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.


Last updated:
August 27, 2016 08:03 AM
All times are UTC.

Powered by: