March 24, 2017

OpenStack Blog

OpenStack Developer Mailing List Digest March 18-24

SuccessBot Says

  • Yolanda [1]: Wiki problems have been fixed, it’s up and running
  • johnthetubaguy [2]: First few patches adding real docs for policy have now merged in Nova. A much improved sample file [3].
  • Tell us yours via OpenStack IRC channels with message “#success <message>”
  • All: [4]

Release Naming for R

  • It’s time to pick a name for our “R” release.
  • The associated summit will be in Vancouver, so the geographic location has been chosen as “British Columbia”.
  • Rules:
    • Each release name must start with the letter of the ISO basic Latin alphabet following the initial letter of the previous release, starting with the initial release of “Austin”. After “Z”, the next name should start with “A” again.
    • The name must be composed only of the 26 characters of the ISO basic Latin alphabet. Names which can be transliterated into this character set are also acceptable.
    • The name must refer to the physical or human geography of the region encompassing the location of the OpenStack design summit for the corresponding release. The exact boundaries of the geographic region under consideration must be declared before the opening of nominations, as part of the initiation of the selection process.
    • The name must be a single word with a maximum of 10 characters. Words that describe the feature should not be included, so “Foo City” or “Foo Peak” would both be eligible as “Foo”.
  • Full thread [5]
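
For illustration, the alphabet and length rules above can be checked mechanically (a hypothetical helper, not part of the official naming process; the geography rule still needs a human):

```python
import re

def valid_release_name(name, initial="R"):
    """Rough check of a candidate against the rules above: starts with
    the expected letter, uses only the 26 ISO basic Latin letters, and
    is a single word of at most 10 characters."""
    if not name[:1].upper() == initial.upper():
        return False
    return re.fullmatch(r"[A-Za-z]{1,10}", name) is not None

print(valid_release_name("Rocky"))       # True
print(valid_release_name("Rock City"))   # False: two words
print(valid_release_name("Queens"))      # False: wrong letter for "R"
```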

Moving Gnocchi out

  • The project Gnocchi, which has been tagged independent since its inception, has potential outside of OpenStack.
  • Being part of the big tent helped the project be built, but there is a belief that it restrains its adoption outside of OpenStack.
  • The team has decided to move it out of OpenStack [6].
    • In addition, it will move out of the OpenStack infrastructure.
  • Gnocchi will continue to thrive and be used by OpenStack projects such as Ceilometer.
  • Full thread [7]

POST /api-wg/news

  • Guides under review:
    • Define pagination guidelines (recently rebooted) [8]
    • Create a new set of api stability guidelines [9]
    • Microversions: add next_min_version field in version body [10]
    • Mention max length limit information for tags [11]
    • Add API capabilities discovery guideline [12]
    • WIP: microversion architecture archival doc (very early; not yet ready for review) [13]
  • Full thread [14]


by Mike Perez at March 24, 2017 09:11 PM

OSIC - The OpenStack Innovation Center

Making your OpenStack monitoring stack highly available using Open Source tools



By Ianeta Hutchinson and Nish Patwa

Operators tasked with maintaining production environments rely on monitoring stacks to provide insight into resource usage and a heads-up on threats of downtime. Perhaps the most critical function of a monitoring stack is providing alerts that trigger mitigation steps to ensure an environment stays up and running. Downtime of services can be business-critical, and often has extremely high cost ramifications. Operators working in cloud environments are especially reliant on monitoring stacks due to the increase in potential inefficiency and downtime that comes with greater resource usage. The constant visibility of resources and alerts that a monitoring stack provides makes it a fundamental component of any cloud.

The number of enterprises looking to utilize Open Source cloud environments, such as OpenStack, is increasing and has led to a need for an Open Source monitoring solution. Telegraf Influx Grafana Kapacitor (TIGK) is the solution utilized in OpenStack clouds with an Ansible deployment. Below is the combination of components that make up TIGK, allowing it to capture, store, display, and trigger alerts for metrics from the system's CPU, RAM, and I/O (network and disk):

T = Telegraf, a plugin-driven server agent for collecting and reporting metrics (installed in all hosts in the cloud).
I = InfluxDB, a time series database built from the ground up to handle high write and query loads.
G = Grafana, a web based dashboard that displays metric information.
K = Kapacitor, a data processing framework providing alerting, anomaly detection and action frameworks.
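
As a sketch of the collection layer, a minimal Telegraf configuration covering those metrics might look like this (the URL and database name are placeholders, not the OpenStack-Ansible defaults):

```toml
# Collect the system metrics mentioned above: CPU, RAM, disk and network
[[inputs.cpu]]
[[inputs.mem]]
[[inputs.disk]]
[[inputs.net]]

# Report them to InfluxDB, here via a load balancer in front of the
# logging hosts (address is a placeholder)
[[outputs.influxdb]]
  urls = ["http://monitoring.example:8086"]
  database = "telegraf"
```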

Having the monitoring stack highly available is the only way to ensure an operator has the visibility necessary for capacity planning, or the alerts needed to become aware of a potential threat of downtime. While TIGK is Open Source, the solution to making TIGK highly available was not… until now.

Telegraf, Grafana, and Kapacitor each have redundancy ensuring their resiliency. However, InfluxDB, without an architectural solution, acted as a potential single point of unrecoverable failure for the monitoring stack.

The OSIC DevOps team designed a highly available (HA) architecture by using the Open Source tool Influx Relay as an additional layer on the logging hosts. By including a relay mechanism which sends requests to all databases, multiple logging hosts are utilized, and resiliency of the TIGK stack is ensured. With this setup, regardless of a failure of an Influx Relay or InfluxDB, support can continue without a stoppage in taking writes or serving queries.

Below is our architectural diagram showing the inclusion of Influx Relay. The team demonstrates high availability by using OpenStack-Ansible as the deployment tool, and HAProxy as a load balancer to balance between Influx Relays for writes and InfluxDB instances for read queries.

This illustrates how HAProxy differentiates the routes of HTTP requests depending on their type:

If you follow the solid line, it shows that when HAProxy receives metrics sent from the Telegraf plugin, they are recognised as write requests. HAProxy uses a load balancing algorithm to route the requests to a single Influx Relay. The Influx Relay, which had been listening for write requests, forwards the request from HAProxy to every InfluxDB server to be stored. This allows the metric information to be consistent across all instances and therefore highly available.

If you follow the dashed line, it shows that when HAProxy determines a read query has been received, it uses a load balancing algorithm to route the read query directly to an InfluxDB. Once there, the query is processed and the appropriate results are provided in response.
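
The write/read split described above could be expressed in an HAProxy configuration along these lines (a hedged sketch with placeholder paths, ports, and addresses, not the exact OpenStack-Ansible configuration):

```
frontend influx
    mode http
    bind *:8086
    # InfluxDB writes arrive on /write; everything else is treated
    # as a read query
    acl is_write path_beg /write
    use_backend influx_relays if is_write
    default_backend influxdb_queries

backend influx_relays
    mode http
    balance roundrobin
    server relay1 192.0.2.11:9096 check
    server relay2 192.0.2.12:9096 check

backend influxdb_queries
    mode http
    balance roundrobin
    server influx1 192.0.2.21:8086 check
    server influx2 192.0.2.22:8086 check
```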

This architecture designed by the OSIC DevOps team has been implemented into the OpenStack-Ansible playbooks. The configuration for this highly available, Open Source monitoring stack can be found here.

by OSIC Team at March 24, 2017 04:17 PM


Zane Bitter - OpenStack Heat, OpenStack PTG, Atlanta

At the OpenStack PTG last month, Zane Bitter spoke about his work on OpenStack Heat in the Ocata cycle, and what comes next.


Rich: Tell us who you are and what you work on.

Zane: My name is Zane Bitter, and I work at Red Hat on Heat … mostly on Heat. I'm one of the original Heat developers. I've been working on the project since 2012 when it started.

Heat is the orchestration service for OpenStack. It's about managing how you create and maintain the resources that you're using in your OpenStack cloud over time. It manages dependencies between the various things you have to spin up, like servers, volumes, networks, ports, all those kinds of things. It allows you to define in a declarative way what resources you want, and it does the job of figuring out how to create them in the right order and do it reasonably efficiently. Not waiting too long between creating stuff, but also making sure you have all the dependencies, in the right order.
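
For example, a minimal HOT template for "a server attached to a port" might look like the sketch below (image, flavor, and network names are placeholders); from the get_resource reference, Heat infers that the port must be created before the server:

```yaml
heat_template_version: 2016-10-14

resources:
  app_port:
    type: OS::Neutron::Port
    properties:
      network: private          # assumes a network named "private"

  app_server:
    type: OS::Nova::Server
    properties:
      image: cirros             # placeholder image and flavor names
      flavor: m1.tiny
      networks:
        - port: { get_resource: app_port }
```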

And then it can manage those deployments over time as well. If you want to change your thing, it can figure out what you need to do to change it, whether you need to replace a resource, what it needs to do to replace a resource, and get everything pointed to the right things again.

Rich: What is new in Ocata? What have you been working on in this cycle?

Zane: What I've been working on in Ocata is having a way of auto-healing services. If your service dies for some reason, you'd like that to recover by itself, rather than having to page someone and say, hey, my service is down, and then go in there and manually fix things up. So I've been working on integration between a bunch of different services, some of which started during the previous cycle.

I was working with Fei Long Wang from Catalyst IT who is PTL of Zaqar, getting some integration work between Zaqar and Mistral, so you can now trigger a Mistral workflow from a message on the Zaqar queue. So if you set that up as a subscription in Zaqar, it can fire off a thing when it gets a message on that queue, saying, hey, Mistral, run this workflow.

That in turn is integrated with Aodh - ("A.O.D.H." as some people call it; I'm told the correct pronunciation is "Aodh") - which is the alarming service for OpenStack. It can …

Rich: For some reason, I thought it was an acronym.

Zane: No, it's an Irish name.

Rich: That's good to know.

Zane: Eoghan Glynn was responsible for that one.

You can set up the alarm action for an alarm in Aodh to be to post a message to this queue. When you combine these together, that means that when an alarm goes off, it posts a message to a queue, and that can trigger a workflow.

What I've been working on in Ocata is getting that all packaged up into Heat templates so we have all the resources to create the alarm in Aodh, hook it up with the subscription … hook up the Zaqar queue to a Mistral subscription, and have that all configured in a template along with the workflow action, which is going to call Heat, and say, this server is unhealthy now. We know from external to Heat, we know that this server is bad, and then kick off the action which is to mark the server unhealthy. We then create a replacement, and then when that service is back up, we remove the old one.

Rich: Is that done, or do you still have stuff to do in Pike.

Zane: It's done. It's all working. It's in the Heat templates repository; there's an example in there, so you can try that out. There are a couple of caveats. There's a misfeature in Aodh - there's a delay between when you create the alarm and when … there's a short period where, when an event comes in, it may not trigger an alarm. That's one caveat. But other than that, once it's up and working, it works pretty reliably.

The other thing I should mention is that you have to turn on event alarms in Aodh, which is basically triggering alarms off of events in the … on the Oslo messaging notification bus, which is not on by default, but it's a one line configuration change.

Rich: What can we look forward to in Pike, or is it too early in the week to say yet?

Zane: We have a few ideas for Pike. I'm planning to work on a template where … so, Zaqar has pre-signed URLs, so you can drop a pre-signed URL into an instance, and allow that instance … node server, in other words … to post to that Zaqar queue without having any Keystone credentials, and basically all it can do with that URL is post to that one queue. Similar to signed URLs in ____. What that should enable us to do is create a template where we're putting signed URLs, with an expiry, into a server, and then we can, before that expires, we can re-create it, so we can have updating credentials, and hook that up to a Mistral subscription, and that allows the service to kick off a Mistral workflow to do something the application needs to do, without having credentials for anything else in OpenStack. So you can let both Mistral and Heat use Keystone trusts, to say, I will offer it on behalf of the user who created this workflow. So if we can allow them to trigger that through Zaqar, there's a pretty secure way of giving applications access to modify stuff in the OpenStack cloud, but locking it down to only the stuff you want modified, and not risking that if someone breaks into your VM, they've got your Keystone credentials and can do whatever they want with your account.

That's one of the things I'm hoping to work on.

As well, we're continuing with Heat development. We've switched over to the new convergence architecture. In Newton, I think, was the first release to have that on by default. We're looking at improving performance with that now. We've got the right architecture for scaling out to a lot of Heat engines. Right now, it's a little heavy on database, a little heavy on memory, which is the tradeoff you make when you go from a monolithic architecture, which can be quite efficient, but doesn't scale out well, to, you scale out but there's potentially performance problems. I think there's some low-hanging fruit there, we should be able to crank up performance. Memory use, and database accesses. Look for better performance out of the convergence architecture in Heat, coming up in Pike.

by Rich Bowen at March 24, 2017 03:30 PM

The journey of a new OpenStack service in RDO

When new contributors join RDO, they ask for recommendations about how to add new services and help RDO users to adopt them. This post is neither an official policy document nor a detailed description of how to carry out some activities, but it provides some high-level recommendations to newcomers based on what I have learned and observed in the last year working in RDO.

Note that you are not required to follow all these steps, and you may even have your own ideas about it. If you want to discuss them, let us know your thoughts; we are always open to improvements.

1. Adding the package to RDO

The first step is to add the package(s) to RDO repositories as shown in RDO documentation. This typically includes the main service package, the client library and maybe a package with a plugin for Horizon.

In some cases new packages require some general purpose libraries. If they are not in CentOS base channels, RDO imports them from Fedora packages into a dependencies repository. If you need a new dependency which already exists in Fedora, just let us know and we'll import it into the repo. If it doesn't exist, you'll have to add the new package into Fedora following the existing process.

2. Create a puppet module

Although there are multiple deployment tools for OpenStack based on several frameworks, puppet is widely used by different tools or even directly by operators, so we recommend creating a puppet module to deploy your new service following the Puppet OpenStack Guide. Once the puppet module is ready, remember to follow the RDO new package process to get it packaged in the repos.
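
As a rough sketch of what such a module looks like (all names here are invented for illustration; the Puppet OpenStack Guide defines the real conventions):

```puppet
# Hypothetical class for an imaginary "myservice" API: install the
# RDO package, set a config value, and keep the service running.
class myservice::api (
  $enabled   = true,
  $bind_host = '0.0.0.0',
) {
  package { 'openstack-myservice-api':
    ensure => present,
  }

  # puppet-openstack modules typically ship a custom config type
  # like this one (invented name) backed by an ini-file provider
  myservice_config {
    'DEFAULT/bind_host': value => $bind_host;
  }

  service { 'openstack-myservice-api':
    ensure  => running,
    enable  => $enabled,
    require => Package['openstack-myservice-api'],
  }
}
```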

3. Make sure the new service is tested in RDO-CI

As explained in a previous post, we run several jobs in RDO CI to validate the content of our repos. Most of the time, the first way to get a new service tested is by adding it to one of the puppet-openstack-integration scenarios, which is also recommended to get the puppet module tested in the upstream gates. An example of how to add a new service to p-o-i is in this review.

4. Adding deployment support in Packstack

If you want to make it easier for RDO users to evaluate a new service, adding it to Packstack is a good idea. Packstack is a puppet-based deployment tool used by RDO users to deploy small proof of concept (PoC) environments to evaluate new services or configurations before deploying them in their production clouds. If you are interested, you can take a look at these two reviews, which added support for Panko and Magnum in the Ocata cycle.

5. Add it to TripleO

TripleO is a powerful OpenStack management tool able to provision and manage cloud environments with production-ready features, such as high availability, extended security, etc… Adding support for new services in TripleO will help users adopt it for their cloud deployments. The TripleO composable roles tutorial can guide you through how to do it.

6. Build containers for new services

Kolla is the upstream project providing container images and deployment tools to operate OpenStack clouds using container technologies. Kolla supports building images for the CentOS distro using the binary method, which uses packages from RDO. Operators using containers will have an easier time if you add containers for new services.

Other recommendations

Follow OpenStack governance policies

RDO methodology and tooling is conceived according to the OpenStack upstream release model, so following policies about release management and requirements is a big help to maintain packages in RDO. It's especially important to create branches and version tags as defined by the releases team.

Making potential users aware of the availability of new services or other improvements is a good practice. RDO provides several ways to do this, such as sending mails to our mailing lists, writing a post on the blog, adding references in our documentation, creating screencast demos, etc… You can also join the RDO weekly meeting to let us know about your work.

Join RDO Test Days

RDO organizes test days at several milestones during each OpenStack release cycle. Although we do Continuous Integration testing in RDO, it's good to verify that new services can be deployed following the instructions in the documentation. You can propose new services or configurations for the test matrix and add a link to the documented instructions about how to do it.

Upstream documentation

RDO relies on the upstream OpenStack Installation Guide for deployment instructions. Keeping it up to date is recommended.

by amoralej at March 24, 2017 03:15 PM

OpenStack Superuser

Automating OpenStack’s Gerrit commands with a CLI

Every OpenStack developer has to interact with the Gerrit code review system. Reviewers and core reviewers have to do this even more and project team leads do this a lot.

The web-based interface is not conducive to many of the more common things that one has to do while managing a project, so early on, I used the Gerrit query CLI.

Along the way, I started writing a simple CLI that I could use to automate more things, and recently, a few people asked about these tools and whether I’d share them.

I’m not claiming that this is unique, or that this hasn’t been done before; it evolved slowly and there may be a better set of tools out there that does all of this (and more). However, I don’t know about them, so, if you have similar tools, please do share (comment below).

I’ve cleaned up this tool a bit (removed things like my private key, username and password) and made them available here.

Full disclosure: they are kind of rough at the edges and you could cause yourself some grief if you aren’t quite sure of what you’re doing.

Here’s a quick introduction:


Installation should be nothing more than cloning the repository “” and running the install command. Note, I use Python 2.7 as my default Python on Ubuntu 16.04. If you use Python 3.x, your mileage may vary.

Simple Commands

The simplest command is ls, to list reviews:

gerrit-cli ls owner:self

As you can see, the search here is a standard Gerrit query search.

You don’t have to type complex queries every time; you can store and reuse queries. A very simple configuration file is used for this (a sample configuration file is also provided and gets installed by default).

amrith@amrith-work:~$ cat .gerrit-cli/gerrit-cli.json
{
    # global options
    "host": "",
    "port": 29418,

    # "dry-run": true,

    # user defined queries
    "queries": {
        # each query is necessarily a list, even if it is a single string
        "trove-filter": ["(project:openstack/trove-specs OR project:openstack/trove OR project:openstack/trove-dashboard OR project:openstack/python-troveclient OR project:openstack/trove-integration)"],

        # the simple filter uses the trove-filter and appends status:open and is therefore a list
        "simple": ["trove-filter", "status:open"],

        "review-list": ["trove-filter", "status:open", "NOT label:Code-Review>=-2,self"],

        "commitids": ["simple"],

        "older-than-two-weeks": ["simple", "age:2w"]
    },

    # user defined results
    "results": {
        # each result is necessarily a list, even if it is a single column
        "default": ["number:r", "project:l", "owner:l", "subject:l:80", "state", "age:r"],
        "simple": ["number:r", "project:l", "owner:l", "subject:l:80", "state", "age:r"],
        "commitids": [ "number:r", "subject:l:60", "owner:l", "commitid:l", "patchset:r" ],
        "review-list": [ "number:r", "project:l", "branch:c", "subject:l:80", "owner:l", "state", "age:r" ]
    }
}

The file is a simple JSON and you can comment lines just as you would in python (#…).
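
Stripping those comment lines before parsing is simple; here is a guess at how a tool might do it (illustrative only, not gerrit-cli's actual code):

```python
import json

def load_commented_json(text):
    """Drop full-line '#' comments, then parse the rest as JSON."""
    lines = [l for l in text.splitlines()
             if not l.lstrip().startswith("#")]
    return json.loads("\n".join(lines))

cfg = load_commented_json('''
{
    # the ssh port gerrit listens on
    "port": 29418,
    "queries": {
        "simple": ["trove-filter", "status:open"]
    }
}
''')
print(cfg["port"])               # 29418
print(cfg["queries"]["simple"])  # ['trove-filter', 'status:open']
```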

Don’t do anything, just --dry-run

The best way to see what’s going on is to use the --dry-run option (or, to be sure, uncomment the “dry-run” line in your configuration file).

amrith@amrith-work:~$ gerrit-cli --dry-run ls owner:self
ssh -p 29418 gerrit query --format=JSON --current-patch-set --patch-sets --all-approvals owner:self
| Number | Project | Owner | Subject | State | Age |

So the ls owner:self command makes a Gerrit query and formats and displays the output as shown above.

So, what columns are displayed? The configuration contains a section called “results” and a default result is defined there.

"default": ["number:r", "project:l", "owner:l", "subject:l:80", "state", "age:r"],

You can override the default and cause a different set of columns to be shown. If a default is not found, the code has a hard coded default as well.

Similarly, you could run the query

amrith@amrith-work:~$ gerrit-cli --dry-run ls
ssh -p 29418 gerrit query --format=JSON --current-patch-set --patch-sets --all-approvals owner:self status:open
| Number | Project | Owner | Subject | State | Age |

and a default query will be generated for you; that query is owner:self status:open.

You can nest these definitions as shown in the default configuration.

amrith@amrith-work:~$ gerrit-cli --dry-run ls commitids
ssh -p 29418 gerrit query --format=JSON --current-patch-set --patch-sets --all-approvals (project:openstack/trove-specs OR project:openstack/trove OR project:openstack/trove-dashboard OR project:openstack/python-troveclient OR project:openstack/trove-integration) status:open
| Number | Project | Owner | Subject | State | Age |

The query commitids is expanded as follows.

commitids -> simple
simple -> trove-filter, status:open
trove-filter -> (...)
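
That expansion can be sketched as a small recursive lookup (illustrative only, not gerrit-cli's actual implementation):

```python
# Abbreviated version of the user-defined queries from the config file
QUERIES = {
    "trove-filter": ["(project:openstack/trove OR project:openstack/trove-dashboard)"],
    "simple": ["trove-filter", "status:open"],
    "commitids": ["simple"],
}

def expand(term, queries):
    """Recursively replace named queries with their definitions,
    leaving plain Gerrit search terms as-is."""
    if term not in queries:
        return [term]
    expanded = []
    for part in queries[term]:
        expanded.extend(expand(part, queries))
    return expanded

print(" ".join(expand("commitids", QUERIES)))
# (project:openstack/trove OR project:openstack/trove-dashboard) status:open
```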

What else can I do?

You can do a lot more than just list reviews:

amrith@amrith-work:~$ gerrit-cli --help
usage: gerrit [-h] [--host HOST] [--port PORT] [--dry-run]
              [--config-file CONFIG_FILE] [-v]
              {ls,show,update,abandon,restore,recheck} ...

A simple gerrit command line interface

positional arguments:
    ls                  list reviews
    show                show review(s)
    update              update review(s)
    abandon             abandon review(s)
    restore             restore review(s)
    recheck             recheck review(s)

optional arguments:
  -h, --help            show this help message and exit
  --host HOST           The gerrit host. Default:
  --port PORT           The gerrit port. Default: 29418
  --dry-run             Whether or not to actually execute commands that
                        modify a review.
  --config-file CONFIG_FILE
                        The path to the gerrit-cli configuration file to use
                        for this session. (Default: ~/.gerrit-cli/gerrit-
  -v, --verbose         Provide additional (verbose) debug output.

Other things that I do quite often (and like to automate) are update, abandon, restore and recheck.

A word of caution: when you aren’t sure what the command will do, use --dry-run. Otherwise, you could end up in a world of hurt.

Like, when you accidentally abandon 100 reviews 🙂

And even if you know what your query should do, remember I’ve hidden some choice bugs in the code. You may hit those, too.


Amrith Kumar, a frequent contributor to Superuser, is CTO and co-founder of Tesora, has served as a project team lead (PTL) for Trove almost too many times to count and is co-author of “OpenStack Trove.” This post first appeared on his Hype Cycles blog.

Superuser is always interested in community content, email:

The post Automating OpenStack’s Gerrit commands with a CLI appeared first on OpenStack Superuser.

by Amrith Kumar at March 24, 2017 12:22 PM


OpenStack is Betamax to AWS VHS, and that’s totally ok, right?

OpenStack Betamax - Amazon Web Services AWS VHS

I woke up a bit too early this morning to yet another “OpenStack is dead, no it’s not” argument on the twitters. Generally, hurling a couple of incendiaries into the conversation would have me back to sleep in no time. Today though, it took me back to a conversation over lunch with @manpageman a couple of weeks ago where he said “OpenStack is Betamax to AWS VHS, and that’s totally ok, right?”. It’s probably the most philosophical OpenStack thing I’ve ever heard, but not atypical from our resident philosopher and amateur cultural marxist.

We all know about the Betamax VHS war that played out from the late seventies, and how VHS won. We also know that Betamax still made a significant amount of money for its owners, and that it was adopted for niche use cases where, for whatever reason, it was better suited for use than VHS.

That’s why, back in the 80s and 90s when I fancied myself as a purveyor of fine audio and a composer of not so fine musical works, I used Betamax tapes to record PCM digital audio because that was one of the niche use cases that Betamax then filled. And it made my average works sound amazing. The other niche use case was of course in video and broadcast where Betamax was omnipresent.

I said last year that OpenStack use is 85% telco, 10% research, and 5% everything else. This view is based on the 6 plus years of my company working on it, a view gained independently of vendor, marketing, community, survey or any other influences, a view gained from what we have designed, deployed, operated, and failed at.

The 85% telco in my opinion mirrors the experience of Betamax in audio and broadcast-land. OpenStack is the best tool for what telcos are trying to achieve. For now. Like Betamax, there is significant business to be had with OpenStack. This comes from looking not at what it can’t do or might have done, but at what it is doing. It will have a life that will likely be shorter than Betamax’s, but Betamax tapes were produced for 40 years, till 2015, so I think there are a few years left yet of servicing those niche use cases.

Oh, and by niche, I mean if global tier one telcos are right now basing massive amounts of their business on OpenStack, that’s quite a “niche”.

So is OpenStack Betamax and Amazon Web Services VHS, and is that totally ok?

The post OpenStack is Betamax to AWS VHS, and that’s totally ok, right? appeared first on Aptira Cloud Solutions.

by Tristan at March 24, 2017 12:01 AM

March 23, 2017

OpenStack Superuser

Moving OpenStack beyond borders (and possibly to Mars)

The inaugural OpenStack Days Poland event drew more than 300 users, upstream developers, operators and vendors to the Copernicus Science Center in the heart of Warsaw, Poland on March 22.

Although Warsaw is Poland’s capital city—and according to “Forbes,” a hotbed of startups and multinational tech companies’ European branches—this meetup traces its roots west to Wroclaw, the Silicon Valley of Poland, according to some event speakers.

Wroclaw’s large technology community formed an OpenStack group and held five local meetups before planning this Warsaw-based event designed to attract the attention of both users and developers from throughout Poland. The event is also considered the first meetup for the just-forming Warsaw- and Krakow-based OpenStack user groups.

<figure class="wp-caption aligncenter" id="attachment_5897" style="width: 605px">room<figcaption class="wp-caption-text"> The audience came from throughout Poland and beyond, with several dozen users and some upstream developers, including Dragonflow’s PTL Omer Anson. Photo: Heidi Joy Tretheway. </figcaption> </figure>

Indeed, there was good-natured arguing (over beers) about which city was best among those hailing from Krakow in the south, Gdansk in the north, Warsaw in the center and Wroclaw (say it VROT-swahv) in the west.

Sponsors and speakers hailed from all of these locations—and from both local companies and multinational vendors that are familiar names in OpenStack. A handful of attendees came from Ukraine and other neighboring countries, and a few presentations were in English, but most in Polish.

<figure class="wp-caption aligncenter" id="attachment_5898" style="width: 605px">pierogi<figcaption class="wp-caption-text"> If you can’t learn Polish, at least learn Polish cooking. Here, both potato-onion and strawberry pierogis are under construction at Polish Your Cooking. Photo: Heidi Joy Tretheway. </figcaption> </figure>

“I think many companies in Warsaw are adopting OpenStack rapidly now because there is a strong ecosystem of companies that can make it happen with a lesser learning curve,” a local representative from one OpenStack sponsor company said. “This year, we’re seeing much more rapid adoption and companies moving to large-scale deployments.”

Some of the organizations mentioned as running OpenStack are the browser Opera, the Central and Eastern European online auction site Allegro, and one of the largest Polish hosting providers Nazwa.

Cloud services provider OVH noted that it hosts more than 80 petabytes of data on Swift through its Dropbox-like infrastructure offering, and it will soon launch a new public cloud from a data center near Warsaw. OVH customers include Mailjet, Villeroy & Boch, and PrestaShop.

<figure class="wp-caption aligncenter" id="attachment_5901" style="width: 605px">hall<figcaption class="wp-caption-text"> The market hall of vendors and sponsors was busy nonstop, with the “hallway track” very popular throughout the event. Photo: Heidi Joy Tretheway. </figcaption> </figure>

Rob McMahon, director of cloud for Red Hat’s EMEA region, told the audience that they are processing exabytes (1 EB = 1,000 PB) of data on behalf of NASA’s Jet Propulsion Laboratory. “I’d like to think that Martian life can be found with an OpenStack platform,” he said.


This is Heidi Joy Tretheway’s first visit to Warsaw and her fourth cycle leading OpenStack’s User Survey. She was thrilled to see many other presenters quoting aspects of the User Survey and shared a sneak peek at the April 2017 statistics with attendees.


The post Moving OpenStack beyond borders (and possibly to Mars) appeared first on OpenStack Superuser.

by Heidi Joy Tretheway at March 23, 2017 09:00 AM

March 22, 2017

OpenStack Superuser

Open source management as a marathon, not a sprint

John Dickinson has been the project team lead (PTL) for Swift, OpenStack’s object storage service, pretty much since it took off in 2011. At the time he was working at Rackspace; since 2012 he’s been director of technology at San Francisco-based startup SwiftStack.

A frequent speaker at OpenStack Summits and meetups, you can also find him at the upcoming Boston Summit giving an update on Swift.

Dickinson offers Superuser advice for approaching PTLs and talks about how he’s lasted this long without getting winded. Spoiler alert: modesty appears to be the secret to his longevity.

You’re an example of a marathoner PTL – how do you do it?

The Swift community is great. I’ve never worked with such a talented and dedicated group of people and their support and passion is what keeps me going.

What are some of the biggest changes/advancements you’ve seen with Swift since you started?

Swift has grown from being a storage engine custom-built for a public cloud service provider into a more complete storage system that can be used for public cloud and private cloud at all scales. Swift is the best open-source object storage system available. I’m tremendously proud of what the community has produced over the last seven years.

My vision for Swift has always been for it to be used by everyone, every day. As more companies use Swift for more things, we get closer to that goal.

Significant features that we’ve written include global cluster support, storage policies, erasure coding and at-rest encryption. These are user-driven features not developed in a vacuum, but with actual use cases attached to them. That’s how we succeed in the vision. We prioritize work that users are asking for, making changes based on data, without compromising on the stability and reliability Swift is known for.


What advice do you have for new contributors approaching a PTL or project?

Being a PTL requires a different skill set than being a developer contributor. You won’t be great without practice, and you need great people around you to help out. So have a clear vision of what you want to see happen and surround yourself with good people.

How does the PTG affect your work with Swift?

Now that the PTG is over, I can reflect on the good things that happened. The best part of the PTG was spending time with my fellow Swift contributors. Not only did we get to discuss code changes in person, these gatherings are also a great time to spend together as friends.


From a feature/design perspective, we made great progress at the PTG discussing some exciting features and optimizations that have been in-progress for quite some time. Some of the biggest upcoming changes in Swift are focused on optimization for larger scale deployments and features for data migration.

The PTG/Forum split is a big change from how things have been organized in the past. The PTG was successful from my perspective, and I hope the upcoming Summit in Boston will be similarly productive. I’m looking forward to hearing from the ops teams that run Swift clusters and learning from them.

Get involved!
Use Ask OpenStack for general questions about Swift or OpenStack
For Swift road map or development issues, subscribe to the OpenStack development mailing list  and use the tag [swift]
Participate in the weekly Swift meetings: Wednesdays at 21:00UTC in #openstack-meeting on freenode IRC


The post Open source management as a marathon, not a sprint appeared first on OpenStack Superuser.

by Nicole Martinelli at March 22, 2017 11:46 AM


Sydney OpenStack Summit – The countdown is on!

The Sydney OpenStack Summit is drawing closer, and we’re excited to check out the new International Convention Center.

This must-attend Open Source event will feature a mix of open technologies building the modern infrastructure stack, including OpenStack, Kubernetes, Docker, Ansible, Ceph, OVS, OpenContrail, OPNFV and more. You’ll hear from some of the top Australian influencers, as well as industry leaders from around the globe.

If you’re planning to attend, make sure you visit our stand. Not only will you get to meet the Aptira team, but there will be prizes and giveaways, and we’ll be showcasing our new Managed Cloud offering.

There may even be a sneaky after party or two. Stay tuned for details!

The post Sydney OpenStack Summit – The countdown is on! appeared first on Aptira Cloud Solutions.

by Tristan at March 22, 2017 03:50 AM

March 21, 2017

OpenStack Superuser

Getting to know the essential OpenStack components better

If you skipped the first part of this series, I am not responsible for the technical confusion and emotional stress created while you read this one. I highly recommend reading part one before continuing. However, if you like to live dangerously, then please carry on.

Part 1 recap:

  • You were introduced to the main character, OpenStack.
  • You learned its whereabouts and became acquainted with the “neighborhood.”
  • You learned what it needs to thrive and how to set it up for yourself.

Here’s an overview of what we learned in the last post:

Basic setup for deploying OpenStack

The first step to implementing OpenStack is to set up the identity service, Keystone.

A. Keystone


Simply put, Keystone is a service that manages all identities. These identities can belong to your customers, to whom you offer services, and also to all the micro-services that make up OpenStack. Identities have usernames and passwords associated with them, in addition to information on who is allowed to do what. There is much more to it, but we will leave some detail out for now. If you work in tech, you’re already familiar with the concept of identity/access cards. These cards not only identify who you are but also control which doors you can and cannot open on company premises. So, to set up this modern-day OpenStack watchman, perform the following steps:

Log in to MariaDB:

sudo mysql -u root -p

Create a database for Keystone and give the user full Keystone privileges to the newly created database:

CREATE DATABASE keystone; 
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'MINE_PASS'; 
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'MINE_PASS'; 

Install the Keystone software:

sudo apt install keystone

Edit the Keystone configuration file:

sudo vi /etc/keystone/keystone.conf 
   #Tell keystone how to access the DB (comment out the existing connection entry) 
   connection = mysql+pymysql://keystone:MINE_PASS@controller/keystone 
   #Some token management magic. Just put it in, it's important 
   provider = fernet

This command will initialize your Keystone database using the configuration that you just did above:

sudo su -s /bin/sh -c "keystone-manage db_sync" keystone

Since we have no identity management yet (you are setting it up right now!), we need to bootstrap the identity management service to create an admin user for Keystone:

sudo keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone 
sudo keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Since OpenStack is composed of a lot of micro-services, each service that we define will need endpoint URLs. These are how other services will access this service (notice, in the sample below, that there are three URLs). Run the following:

sudo keystone-manage bootstrap --bootstrap-password MINE_PASS \ 
  --bootstrap-admin-url http://controller:35357/v3/ \ 
  --bootstrap-internal-url http://controller:35357/v3/ \ 
  --bootstrap-public-url http://controller:5000/v3/ \ 
  --bootstrap-region-id RegionOne
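The three URLs correspond to Keystone’s three endpoint interfaces (admin, internal and public). A tiny illustrative loop, reusing the ports from the bootstrap command above, prints the mapping at a glance:

```shell
# Illustrative only: print Keystone's three endpoint interfaces.
# Ports are taken from the bootstrap command (35357 for admin/internal,
# 5000 for public); "controller" is the hostname used throughout.
for iface in admin internal public; do
  case $iface in
    public) port=5000 ;;
    *)      port=35357 ;;
  esac
  echo "$iface endpoint: http://controller:$port/v3/"
done
```

Other services register the same three interfaces for themselves, as you will see with Glance, Nova and Neutron below.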

You need to configure Apache for Keystone. Keystone uses Apache to handle the requests it receives from its buddy services in OS. Let’s just say that Apache is like a good secretary that is better at handling and managing requests than Keystone would be on its own.

sudo vi /etc/apache2/apache2.conf  
    ServerName controller 
sudo service apache2 restart 
sudo rm -f /var/lib/keystone/keystone.db

One of the most useful ways to interact with OS is via the command line. Any interaction with OS needs to be authenticated and authorized, and an easy way to handle that is to create the following file and then source it in your command line.

sudo vi ~/keystonerc_admin 
    export OS_USERNAME=admin 
    export OS_PASSWORD=MINE_PASS 
    export OS_PROJECT_NAME=admin 
    export OS_USER_DOMAIN_NAME=default 
    export OS_PROJECT_DOMAIN_NAME=default 
    export OS_AUTH_URL=http://controller:35357/v3 
    export PS1='[\u@\h \W(keystonerc_admin)]$ '

To source it, just run the following command on the controller:

source ~/keystonerc_admin
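Sourcing the file is nothing magical: it simply exports environment variables that the openstack client reads on every invocation. A minimal stand-alone sketch (using a throwaway file in /tmp so it can run anywhere; the values are the placeholders from this article):

```shell
# Write a throwaway copy of the credentials file and source it.
cat > /tmp/keystonerc_demo <<'EOF'
export OS_USERNAME=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_DOMAIN_NAME=default
export OS_AUTH_URL=http://controller:35357/v3
EOF
. /tmp/keystonerc_demo

# Every openstack CLI call afterwards picks these variables up automatically.
echo "CLI will authenticate $OS_USERNAME against $OS_AUTH_URL"
```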

Before we proceed, we need to talk about a few additional terms. OpenStack utilizes the concepts of domains, projects and users.

  • Users are, well, just users of OpenStack.
  • Projects are similar to customers in an OpenStack environment. So, if I am using my OpenStack environment to host VMs for Customer ABC and Customer XYZ, then ABC and XYZ could be two projects.
  • Domains are a recent addition (as if things weren’t already complex enough) that gives you further granularity. If you wanted to have administrative divisions within OpenStack so that each division could manage its own environments, you would use domains. So you could put ABC and XYZ in different domains and have separate administration for both, or you could put them in the same domain and manage them with a single administration. It’s just an added level of granularity. And you thought your relationships were complex!

Create a special project to hold all the internal users (most micro-services in OpenStack have their own service users, and those are associated with this special project):

openstack project create --domain default \ 
  --description "Service Project" service

Verify Operation

Run the following command to request an authentication token using the admin user:

openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name admin --os-username admin token issue

| Field      | Value                                                           |
| expires    | 2016-11-30 13:05:15+00:00                                       |
| id         | gAAAAABYPsB7yua2kfIZhoDlm20y1i5IAHfXxIcqiKzhM9ac_MV4PU5OPiYf_   |
|            | m1SsUPOMSs4Bnf5A4o8i9B36c-gpxaUhtmzWx8WUVLpAtQDBgZ607ReW7cEYJGy |
|            | yTp54dskNkMji-uofna35ytrd2_VLIdMWk7Y1532HErA7phiq7hwKTKex-Y     |
| project_id | b1146434829a4b359528e1ddada519c0                                |
| user_id    | 97b1b7d8cb0d473c83094c795282b5cb                                |

Congratulations, you have the keys!

Now we meet Glance, the image service. Glance is essentially a store for all the different virtual machine images that you may want to offer to your customers. When a customer requests a particular type of virtual machine, Glance finds the correct image in its repository and hands it over to another service (which we will discuss later).

B. Glance

<figure class="wp-caption alignnone" id="attachment_349">ose2-2
<figcaption>Glance</figcaption> </figure>

To configure OpenStack’s precious image store, perform the following steps on the controller:

Log in to the DB:

sudo mysql -u root -p

Create the database and give full privileges to the Glance user:

CREATE DATABASE glance; 
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'MINE_PASS'; 
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'MINE_PASS'; 

Source the keystone_admin file to get command line access:

source ~/keystonerc_admin

Create the Glance user:

openstack user create --domain default --password-prompt glance

Give the user rights:

openstack role add --project service --user glance admin

Create the Glance service:

openstack service create --name glance \ 
  --description "OpenStack Image" image

Create the Glance endpoints:

openstack endpoint create --region RegionOne \ 
  image public http://controller:9292 
openstack endpoint create --region RegionOne \ 
  image internal http://controller:9292 
openstack endpoint create --region RegionOne \ 
  image admin http://controller:9292

Install the Glance software:

sudo apt install glance

Edit the Glance configuration file:

sudo vi /etc/glance/glance-api.conf
  #Configure the DB connection 
  connection = mysql+pymysql://glance:MINE_PASS@controller/glance 
  #Tell glance how to get authenticated via keystone. Every time a service needs to do something it needs to be authenticated via keystone. 
  auth_uri = http://controller:5000 
  auth_url = http://controller:35357 
  memcached_servers = controller:11211 
  auth_type = password 
  project_domain_name = default 
  user_domain_name = default 
  project_name = service 
  username = glance 
  password = MINE_PASS 
  #(Comment out or remove any other options in the [keystone_authtoken] section.) 
  flavor = keystone 
  #Glance can store images in different locations. We are using file for now 
  stores = file,http 
  default_store = file 
  filesystem_store_datadir = /var/lib/glance/images/

Edit another configuration file:

sudo vi /etc/glance/glance-registry.conf
  #Configure the DB connection 
  connection = mysql+pymysql://glance:MINE_PASS@controller/glance 
  #Tell glance-registry how to get authenticated via keystone. 
  auth_uri = http://controller:5000 
  auth_url = http://controller:35357 
  memcached_servers = controller:11211 
  auth_type = password 
  project_domain_name = default 
  user_domain_name = default 
  project_name = service 
  username = glance 
  password = MINE_PASS 
  #(Comment out or remove any other options in the [keystone_authtoken] section.) 
  #No idea, just use it. 
  flavor = keystone

This command will initialize the Glance database using the configuration files above:

sudo su -s /bin/sh -c "glance-manage db_sync" glance

Start the Glance services:

sudo service glance-registry restart 
sudo service glance-api restart

Verify Operation

Download a cirros cloud image:

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Log in to the command line:

source ~/keystonerc_admin

Create an OpenStack image using the command below:

openstack image create "cirros" \ 
  --file cirros-0.3.4-x86_64-disk.img \ 
  --disk-format qcow2 --container-format bare \ 
  --public

List the images and ensure that your image was created successfully:

openstack image list
| ID                                   | Name   | Status |
| d5edb2b0-ad3c-453a-b66d-5bf292dc2ee8 | cirros | active |

Are you noticing a pattern here? In part one, I mentioned that the OpenStack components are similar but subtly different. As you will continue to see, most OpenStack services follow a standard pattern in configuration. The pattern is as follows:

General set of steps for configuring OpenStack components:

  1. Create the service’s database in MariaDB and grant the service user full privileges on it.
  2. Source the admin credentials file for command-line access.
  3. Create the service user in Keystone and give it the admin role on the service project.
  4. Create the service and its public, internal and admin endpoints.
  5. Install the service’s packages.
  6. Edit the configuration file(s): database connection, message queue and Keystone credentials.
  7. Initialize the service’s database (db_sync or equivalent).
  8. Restart the service’s daemons and verify operation.

Most OpenStack components will follow the sequence above with minor deviations. So, if you’re having trouble configuring a component, it’s a good idea to refer to this list and see which steps you’re missing.
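To make the repetition concrete, here is a hypothetical helper that only echoes the commands for each step rather than executing them. The service name, API port and password are parameters; this is a sketch of the sequence, not a deployment script:

```shell
# Sketch of the repeating per-service pattern. Nothing is executed;
# each step is echoed so the sequence can be read at a glance.
setup_service() {
  name=$1; port=$2; pass=$3
  echo "CREATE DATABASE ${name};"
  echo "GRANT ALL PRIVILEGES ON ${name}.* TO '${name}'@'localhost' IDENTIFIED BY '${pass}';"
  echo "openstack user create --domain default --password-prompt ${name}"
  echo "openstack role add --project service --user ${name} admin"
  echo "openstack endpoint create --region RegionOne ${name} public http://controller:${port}"
  echo "sudo apt install ${name} && edit /etc/${name}/${name}.conf && db_sync && restart"
}

# Glance, for example, boils down to:
setup_service glance 9292 MINE_PASS
```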

What follows is probably one of the most important parts of OpenStack. It’s called Nova and, although it has nothing to do with stars and galaxies, it is still quite spectacular. At the end of the last section you got an operating system image (from here on out, references to “image” mean “Glance image”).

Now I’ll introduce another concept, called an instance. An instance is what is created out of an image. This is the virtual machine that you use to provide services to your customers. In simpler terms, let’s imagine you had a Windows CD and you use this CD to install Windows on a laptop. Then you use the same CD to install another Windows on another laptop. You input different license keys for both, create different users for each and they are then two individual and independent laptops running Windows from the same CD.

Using this analogy, an image is equivalent to the Windows CD, with the Windows running on each laptop acting as a different instance. Nova does the same thing: it takes the CD from Glance and creates, configures and manages instances in the cloud, which are then handed to customers.
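The CD analogy can be acted out with plain files. This is emphatically not what Nova does internally; the paths and names below are made up purely for the demo:

```shell
# The "image" is a read-only template; each "instance" starts as a copy
# of it and then diverges on its own.
mkdir -p /tmp/demo-glance /tmp/demo-instances
echo "base OS" > /tmp/demo-glance/cirros.img

cp /tmp/demo-glance/cirros.img /tmp/demo-instances/vm-1.disk
cp /tmp/demo-glance/cirros.img /tmp/demo-instances/vm-2.disk

# Only vm-1 changes; vm-2 and the original image are untouched.
echo "hostname=vm-1" >> /tmp/demo-instances/vm-1.disk
cmp -s /tmp/demo-instances/vm-1.disk /tmp/demo-instances/vm-2.disk \
  || echo "instances have diverged"
```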

C. Nova


Nova is one of the more complex components: it resides on more than one machine and does different things on each. One component of Nova sits on the controller and is responsible for overall management and communication with other OpenStack services and the external world. The second component sits on each compute node (yes, you can have multiple compute nodes, but we will look at those later). This service is primarily responsible for talking to the virtual machine monitors and making them launch and manage instances. Perform the following steps to configure Nova:


Create the database(s) and grant relevant privileges:

sudo mysql -u root -p 

CREATE DATABASE nova_api; 
CREATE DATABASE nova; 
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'MINE_PASS'; 
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'MINE_PASS'; 
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'MINE_PASS'; 
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'MINE_PASS'; 

Log in to the command line:

source ~/keystonerc_admin

Create the user and assign the roles:

openstack user create --domain default \ 
  --password-prompt nova 
openstack role add --project service --user nova admin 

Create the service and the corresponding endpoints:

openstack service create --name nova \ 
  --description "OpenStack Compute" compute 
openstack endpoint create --region RegionOne \ 
  compute public http://controller:8774/v2.1/%\(tenant_id\)s 
openstack endpoint create --region RegionOne \ 
  compute internal http://controller:8774/v2.1/%\(tenant_id\)s 
openstack endpoint create --region RegionOne \ 
  compute admin http://controller:8774/v2.1/%\(tenant_id\)s

Install the software:

sudo apt install nova-api nova-conductor nova-consoleauth \ 
  nova-novncproxy nova-scheduler

Edit the configuration file:

sudo vi /etc/nova/nova.conf 
  #Configure the DB-1 access 
  connection = mysql+pymysql://nova:MINE_PASS@controller/nova_api 
  #Configure the DB-2 access (nova has 2 DBs)
  connection = mysql+pymysql://nova:MINE_PASS@controller/nova 
  #Configure how to access RabbitMQ 
  transport_url = rabbit://openstack:MINE_PASS@controller 
  #Use the below. Some details will follow later 
  auth_strategy = keystone 
  my_ip = 
  use_neutron = True 
  firewall_driver = nova.virt.firewall.NoopFirewallDriver 
  #Tell Nova how to access keystone 
  auth_uri = http://controller:5000 
  auth_url = http://controller:35357 
  memcached_servers = controller:11211 
  auth_type = password 
  project_domain_name = default 
  user_domain_name = default 
  project_name = service 
  username = nova 
  password = MINE_PASS 
  #This is remote access to instance consoles (Just take it on faith. We will explore this in a much later episode) 
  vncserver_listen = $my_ip 
  vncserver_proxyclient_address = $my_ip 
  #Nova needs to talk to glance to get the images 
  api_servers = http://controller:9292 
  #Some locking mechanism for message queuing (Just use it.) 
  lock_path = /var/lib/nova/tmp

Initialize both of the databases using the configuration done above:

sudo su -s /bin/sh -c "nova-manage api_db sync" nova 
sudo su -s /bin/sh -c "nova-manage db sync" nova

Start all the Nova services:

sudo service nova-api restart 
sudo service nova-consoleauth restart 
sudo service nova-scheduler restart 
sudo service nova-conductor restart 
sudo service nova-novncproxy restart

Now switch to the compute node and install the software:

sudo apt install nova-compute

Edit the configuration file:

sudo vi /etc/nova/nova.conf
  #Define DB access 
  transport_url = rabbit://openstack:MINE_PASS@controller 
  #Take it on faith for now  
  auth_strategy = keystone 
  my_ip = 
  use_neutron = True 
  firewall_driver = nova.virt.firewall.NoopFirewallDriver 
  #Tell the nova-compute how to access keystone 
  auth_uri = http://controller:5000 
  auth_url = http://controller:35357 
  memcached_servers = controller:11211 
  auth_type = password 
  project_domain_name = default 
  user_domain_name = default 
  project_name = service 
  username = nova 
  password = MINE_PASS 
  #This is remote access to instance consoles (Just take it on faith. We will explore this in a much later episode) 
  enabled = True 
  vncserver_listen = 
  vncserver_proxyclient_address = $my_ip 
  novncproxy_base_url = http://controller:6080/vnc_auto.html 
  #Nova needs to talk to glance to get the images 
  api_servers = http://controller:9292 
  #Some locking mechanism for message queuing (Just use it.) 
  lock_path = /var/lib/nova/tmp

The following requires some explanation. In a production environment, your compute will be a physical machine, so the below steps will NOT be required. But since this is a lab, we need to set the virtualization type for KVM hypervisor to qemu (as opposed to KVM). This setting runs the hypervisor without looking for the hardware acceleration that is provided by KVM on a physical machine. So, you are going to run virtual machines inside a virtual machine in the lab and, trust me, it works.😉
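A quick way to check which side of this you are on (a common Linux heuristic, not an OpenStack command) is to count the vmx/svm flags in /proc/cpuinfo. Zero means the machine exposes no hardware acceleration, so virt_type = qemu is the safe choice; non-zero means plain KVM should work:

```shell
# Counts CPU cores advertising Intel VT-x (vmx) or AMD-V (svm).
# grep -c exits non-zero when the count is 0, hence the || true.
grep -E -c '(vmx|svm)' /proc/cpuinfo || true
```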

For a virtual compute:

sudo vi /etc/nova/nova-compute.conf 
  virt_type = qemu

Start the Nova service:

sudo service nova-compute restart

Verify Operation


Log in to the command line:

source ~/keystonerc_admin

Run the following command to list the Nova services. Ensure the state is up, as shown below:

openstack compute service list
| ID | Binary           | Host                  | Zone     | Status  | State | Updated At                 |
|  3 | nova-consoleauth | controller            | internal | enabled | up    | 2016-11-30T12:54:39.000000 |
|  4 | nova-scheduler   | controller            | internal | enabled | up    | 2016-11-30T12:54:36.000000 |
|  5 | nova-conductor   | controller            | internal | enabled | up    | 2016-11-30T12:54:34.000000 |
|  6 | nova-compute     | compute1              | nova     | enabled | up    | 2016-11-30T12:54:33.000000 |

Notice the similarity in the sequence of steps followed to configure Nova?

Time for some physics! Remember that our goal is to provide services to our customers. These services are in the form of virtual machines, or services running over these virtual machines. If we want to cater to many customers, then each of them will have their own set of services they are consuming. These services, like any other infrastructure, will require a network. You may need things like routers, firewalls, load balancers, VPN and so on. Now imagine setting these up manually for each customer. Yeah, that’s not happening. This is exactly what Neutron does for you.

D. Neutron


Of course, it’s never going to be easy. In my personal opinion, Neutron is the OpenStack personality with the worst temper, suffering from very frequent mood swings. In simpler terms, it’s complex. In our setup, the major Neutron services will reside on two servers, namely the controller and the neutron node. You could put everything on one machine and it would work; however, splitting is suggested by the official documentation. (My hunch is that it has something to do with easier scaling.) There will also be a Neutron component on the compute node, which I will explain later.

However, before we get into detailing the configuration, I need to explain a few minor terms:

  • When we talk about networks under Neutron, we will mainly come across two types. The first is the external network, which is usually configured once and represents the network OS uses to reach the external world. The second is tenant networks, the networks assigned to customers.
  • An OpenStack environment also requires a virtual switching (bridging) component in order to manage virtualized networking across the neutron and compute nodes. The two components most commonly used are Linux Bridge and Open vSwitch. (If you want to understand a bit more about Open vSwitch, you can refer to one of my other entries, Understanding Open vSwitch.) We will be using Open vSwitch for our environment. Please note that if you intend to use Linux Bridge, the configuration will be different.

To deploy Neutron, perform the following configuration:

Create the database and assign full rights to the Neutron user (yawn!):

sudo mysql -u root -p 
CREATE DATABASE neutron; 
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'MINE_PASS'; 
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'MINE_PASS'; 

Log in to command line:

source ~/keystonerc_admin

Create the Neutron user and add the role:

openstack user create --domain default --password-prompt neutron 
openstack role add --project service --user neutron admin

Create the Neutron service and the respective endpoints:

openstack service create --name neutron \ 
  --description "OpenStack Networking" network 
openstack endpoint create --region RegionOne \ 
  network public http://controller:9696 
openstack endpoint create --region RegionOne \ 
  network internal http://controller:9696 
openstack endpoint create --region RegionOne \ 
  network admin http://controller:9696

Install the software components:

sudo apt install neutron-server neutron-plugin-ml2

Configure the Neutron config file:

sudo vi /etc/neutron/neutron.conf 
  #ml2 is the Modular Layer 2 plugin 
  core_plugin = ml2 
  service_plugins = router 
  allow_overlapping_ips = True 
  notify_nova_on_port_status_changes = True 
  notify_nova_on_port_data_changes = True 
  #Configure the DB connection 
  connection = mysql+pymysql://neutron:MINE_PASS@controller/neutron 
  #Tell neutron how to talk to keystone 
  auth_uri = http://controller:5000 
  auth_url = http://controller:35357 
  memcached_servers = controller:11211 
  auth_type = password 
  project_domain_name = default 
  user_domain_name = default 
  project_name = service 
  username = neutron 
  password = MINE_PASS 
  #Tell neutron how to talk to nova to inform nova about changes in the network 
  auth_url = http://controller:35357 
  auth_type = password 
  project_domain_name = default 
  user_domain_name = default 
  region_name = RegionOne 
  project_name = service 
  username = nova 
  password = MINE_PASS

Configure the plugin file:

sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini 
  #In our environment we will use vlan networks so the below setting is sufficient. You could also use vxlan and gre, but that is for a later episode 
  type_drivers = flat,vlan 
  #Here we are telling neutron that all our customer networks will be based on vlans 
  tenant_network_types = vlan 
  #Our SDN type is openVSwitch 
  mechanism_drivers = openvswitch,l2population 
  extension_drivers = port_security 
  #External network is a flat network 
  flat_networks = external 
  #This is the range we want to use for vlans assigned to customer networks.  
  network_vlan_ranges = external,vlan:1381:1399 
  #Use Ip tables based firewall 
  firewall_driver = iptables_hybrid

Note that I tried to run the su command directly using sudo and, for some reason, it failed for me. An alternative is to use “sudo su” (to get root access) and then initialize the database using the config files above. Run the following sequence to initialize the database:

sudo su - 
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ 
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Start all the Neutron services:

sudo service neutron-* restart

Edit the Nova configuration file:

sudo vi /etc/nova/nova.conf 
  #Tell nova how to get in touch with neutron, to get network updates 
  url = http://controller:9696 
  auth_url = http://controller:35357 
  auth_type = password 
  project_domain_name = default 
  user_domain_name = default 
  region_name = RegionOne 
  project_name = service 
  username = neutron 
  password = MINE_PASS 
  service_metadata_proxy = True 
  metadata_proxy_shared_secret = MINE_PASS

Restart Nova services:

sudo service nova-* restart

Install the required services on the neutron node:

sudo apt install neutron-plugin-ml2 \ 
  neutron-l3-agent neutron-dhcp-agent \ 
  neutron-metadata-agent neutron-openvswitch-agent

Run the following Open vSwitch commands to create the required bridges.

Create a bridge named “br-ex”; this will connect OS to the external network:

sudo ovs-vsctl add-br br-ex

Add a port from the br-ex bridge to the ens10 interface. In my environment, ens10 is the interface connected on the External Network. You should change it as per your environment.

sudo ovs-vsctl add-port br-ex ens10

The next bridge is used by the VLAN-based customer networks in OS. Run the following command to create it:

sudo ovs-vsctl add-br br-vlan

Add a port from the br-vlan bridge to the ens9 interface. In my environment, ens9 is the interface connected on the tunnel network. You should change it as per your environment.

sudo ovs-vsctl add-port br-vlan ens9 

For our Open vSwitch configuration to persist beyond server reboots, we need to configure the interface file accordingly.

sudo vi /etc/network/interfaces 
  # This file describes the network interfaces available on your system 
  # and how to activate them. For more information, see interfaces(5). 
  source /etc/network/interfaces.d/* 
  # The loopback network interface 
  # No Change 
  auto lo 
  iface lo inet loopback 
  #No Change on management network 
  auto ens3 
  iface ens3 inet static 
  # Add the br-vlan bridge 
  auto br-vlan 
  iface br-vlan inet manual 
  up ifconfig br-vlan up 
  # Configure ens9 to work with OVS 
  auto ens9 
  iface ens9 inet manual 
  up ip link set dev $IFACE up 
  down ip link set dev $IFACE down 
  # Add the br-ex bridge and move the IP for the external network to the bridge 
  auto br-ex 
  iface br-ex inet static 

  # Configure ens10 to work with OVS. Remove the IP from this interface  
  auto ens10 
  iface ens10 inet manual 
  up ip link set dev $IFACE up 
  down ip link set dev $IFACE down

Reboot to ensure the new configuration is fully applied:

sudo reboot

Configure the neutron config file

sudo vi /etc/neutron/neutron.conf
  auth_strategy = keystone
  #Tell neutron how to access RabbitMQ
  transport_url = rabbit://openstack:MINE_PASS@controller

  #Tell neutron how to access keystone
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = neutron
  password = MINE_PASS

Configure the Open vSwitch agent config file

sudo vi /etc/neutron/plugins/ml2/openvswitch_agent.ini
  #Configure the section for OpenVSwitch
  #Note that we are mapping alias(es) to the bridges. Later we will use these aliases (vlan,external) to define networks inside OS.
  bridge_mappings = vlan:br-vlan,external:br-ex

  l2_population = True

  #Ip table based firewall
  firewall_driver = iptables_hybrid
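To make the mapping syntax concrete, here is a purely illustrative one-liner that splits a bridge_mappings value into its (alias, bridge) pairs. It is not Neutron’s actual parser, just the way you read the string by eye:

```shell
# Split "alias:bridge,alias:bridge" into one mapping per line.
mappings="vlan:br-vlan,external:br-ex"
echo "$mappings" | tr ',' '\n' | while IFS=: read -r alias bridge; do
  echo "physical network '$alias' -> OVS bridge '$bridge'"
done
```

Later, when you define networks inside OS, you refer to the alias (vlan, external) rather than to the bridge name itself.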

Configure the Layer 3 Agent configuration file:

sudo vi /etc/neutron/l3_agent.ini
  #Tell the agent to use the OVS driver
  interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
  #The official documentation requires this to be set to empty, as shown below (if you don’t, sometimes your router ports in OS will not become Active)
  external_network_bridge =

Configure the DHCP Agent config file:

sudo vi /etc/neutron/dhcp_agent.ini
  #Tell the agent to use the OVS driver
  interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
  enable_isolated_metadata = True

Configure the Metadata Agent config file:

sudo vi /etc/neutron/metadata_agent.ini

  nova_metadata_ip = controller
  metadata_proxy_shared_secret = MINE_PASS

Start all Neutron services:

sudo service neutron-* restart


Install the ml2 plugin and the openvswitch agent on the compute node:

sudo apt install neutron-plugin-ml2 neutron-openvswitch-agent

Create the Open vSwitch bridges for tenant VLANs:

sudo ovs-vsctl add-br br-vlan

Add a port on the br-vlan bridge to the ens9 interface. In my environment ens9 is the interface connected on the tunnel network. You should change it as per your environment.

sudo ovs-vsctl add-port br-vlan ens9

In order for our Open vSwitch configuration to persist beyond reboots we need to configure the interface file accordingly.

sudo vi /etc/network/interfaces

  # This file describes the network interfaces available on your system
  # and how to activate them. For more information, see interfaces(5).

  source /etc/network/interfaces.d/*

  # The loopback network interface
  #No Change
  auto lo
  iface lo inet loopback

  #No Change to management network
  auto ens3
  iface ens3 inet static

  # Add the br-vlan bridge interface
  auto br-vlan
  iface br-vlan inet manual
  up ifconfig br-vlan up

  #Configure ens9 to work with OVS
  auto ens9
  iface ens9 inet manual
  up ip link set dev $IFACE up
  down ip link set dev $IFACE down

Reboot to ensure the new network configuration is applied successfully:

sudo reboot

Configure the Neutron config file:

sudo vi /etc/neutron/neutron.conf
  auth_strategy = keystone
  #Tell neutron component how to access RabbitMQ
  transport_url = rabbit://openstack:MINE_PASS@controller

  #Configure access to keystone
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = neutron
  password = MINE_PASS

Configure the Nova config file:

sudo vi /etc/nova/nova.conf
  #Tell nova how to access neutron for network topology updates
  url = http://controller:9696
  auth_url = http://controller:35357
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  region_name = RegionOne
  project_name = service
  username = neutron
  password = MINE_PASS

Configure the Open vSwitch agent configuration:

sudo vi /etc/neutron/plugins/ml2/openvswitch_agent.ini

#Here we are mapping the alias vlan to the bridge br-vlan
 bridge_mappings = vlan:br-vlan

 l2_population = True

 firewall_driver = iptables_hybrid

It’s a good idea to reboot the compute node at this point. (I was getting connectivity issues without rebooting. Let me know how it goes for you.)

sudo reboot

Start all Neutron services:

sudo service neutron-* restart

Verify Operation


Log in to the command line:

source ~/keystonerc_admin

Run the following command to list the Neutron agents. Ensure the “Alive” status is “True” and “State” is “UP” as shown below:

openstack network agent list
| ID                                   | Agent Type         | Host                  | Availability Zone | Alive | State | Binary                    |
| 84d81304-1922-47ef-8b8e-c49f83cff911 | Metadata agent     | neutron               | None              | True  | UP    | neutron-metadata-agent    |
| 93741a55-54af-457e-b182-92e15d77b7ae | L3 agent           | neutron               | None              | True  | UP    | neutron-l3-agent          |
| a3c9c1e5-46c3-4649-81c6-dc4bb9f35158 | Open vSwitch agent | neutron               | None              | True  | UP    | neutron-openvswitch-agent |
| ba9ce5bb-6141-4fcc-9379-c9c20173c382 | DHCP agent         | neutron               | nova              | True  | UP    | neutron-dhcp-agent        |
| e458ba8a-8272-43bb-bb83-ca0aae48c22a | Open vSwitch agent | compute1              | None              | True  | UP    | neutron-openvswitch-agent |
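If you script this check, a small awk filter over the table output can flag dead agents. The sketch below feeds a saved sample of the listing; in practice you would pipe the `openstack network agent list` output in directly (column positions assumed from the table above).

```shell
# Flag any agent whose Alive column ($6 when splitting on "|") is not True.
# Two sample rows stand in for the real command output.
agents='| 93741a55 | L3 agent | neutron | None | True | UP | neutron-l3-agent |
| e458ba8a | Open vSwitch agent | compute1 | None | False | UP | neutron-openvswitch-agent |'
echo "$agents" | awk -F'|' '$6 !~ /True/ {gsub(/ /, "", $8); print "DOWN: " $8}'
```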

It’s becoming a bit of a drag, I know. But hang on: we’re almost there.

Up to this point, our interactions with OpenStack have taken place in a mostly black-and-white environment. It’s time to add some color to this relationship and see what we can achieve by pressing the right buttons.

E. Horizon


Horizon is the component that handles the graphical user interface for OpenStack. It’s simple and sweet — and so is configuring it.

Perform the following configuration to deploy Horizon:

Install the software:

sudo apt install openstack-dashboard

Update the configuration file. Search for these entries in the file and replace them in place to avoid duplicates:

sudo vi /etc/openstack-dashboard/
  OPENSTACK_HOST = "controller"
  ALLOWED_HOSTS = ['*', ]

  SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

  CACHES = {
      'default': {
          'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
          'LOCATION': 'controller:11211',
      }
  }

  OPENSTACK_API_VERSIONS = {
      "identity": 3,
      "image": 2,
      "volume": 2,
  }

Replace TIME_ZONE with an appropriate time zone identifier. For more information, see the list of supported time zones.
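For example, the setting takes an IANA zone identifier (the zone below is purely an illustration; pick your own):

```python
TIME_ZONE = "America/New_York"
```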

Start the dashboard service:

sudo service apache2 reload

Verify Operation

Using a computer that has access to the controller, open the following URL in a browser:


Please replace controller in the above URL with the controller IP if you cannot resolve the controller by its alias. So in my case the URL will become:

You should see a login screen similar to the one below.

OpenStack Dashboard login screen

Enter admin as the username and the password you used to set up the user. In my case, it is MINE_PASS. If you are successfully logged in and see an interface similar (not necessarily the same) to the one below then your Horizon dashboard is working just fine.

OpenStack Dashboard


Look at the diagram below. Here’s what we’ve achieved so far.

<figure class="wp-caption alignnone" id="attachment_350">ose2-7</figure>


  • We will learn about OpenStack’s block storage component…
  • …and its orchestration service
  • The fun continues!

Once again thanks for reading. If you have any questions/comments please comment below so everyone can benefit from the discussion.

This post first appeared on the WhatCloud blog. Superuser is always interested in community content, email:

Cover Photo // CC BY NC

The post Getting to know the essential OpenStack components better appeared first on OpenStack Superuser.

by Nooruddin Abbas at March 21, 2017 11:31 AM

March 20, 2017


5 Minutes Stacks, episode 55: Wordpress

Episode Two : Wordpress


In the open-source CMS galaxy, WordPress is the most widely used in terms of community, available features and user adoption. Automattic, the company that develops and distributes WordPress, provides a SaaS offering that lets a user create a blog in a few minutes. However, those who experiment with it know that the hosting capabilities quickly impose their limits.

Today, Cloudwatt provides the necessary toolset to start your Wordpress instance in a few minutes and to become its master.

The deployment base is an Ubuntu Xenial instance. The Apache and MariaDB servers are deployed on a single instance.


The versions

  • Ubuntu 16.04
  • Apache 2.4.18
  • Wordpress 4.7.1
  • MariaDB 10.0.28
  • PHP 7.0.13

A one-click deployment sounds really nice…

… Good! Go to the Apps page on the Cloudwatt website, choose the app, press DEPLOY and follow the simple steps… 2 minutes later, a green button appears… ACCESS: you have your WordPress!

All of this is fine, but is there a way to run the stack through the console?

Yes! Using the console, you can deploy a WordPress server:

  1. Go to the Cloudwatt GitHub in the applications/bundle-xenial-wordpress repository
  2. Click on the file named bundle-xenial-wordpress.heat.yml
  3. Click on RAW; a web page appears with the script details
  4. Save its content on your PC. You can use the default name proposed by your browser (just remove the .txt)
  5. Go to the « Stacks » section of the console
  6. Click on « Launch stack », then click on « Template file » and select the file you’ve just saved on your PC, then click on « NEXT »
  7. Name your stack in the « Stack name » field
  8. Enter your keypair in the « keypair_name » field
  9. Choose the instance size using the « flavor_name » popup menu and click on « LAUNCH »

The stack will be created automatically (you can watch its progress by clicking on its name). When all of its modules turn “green”, the creation is complete. You can then go to the “Instances” menu to find the floating IP that was generated automatically. Just open this IP address in your browser and enjoy!

It is (already) FINISHED!

Install cli

If you prefer the command line, you can go directly to the “CLI launch” version by clicking this link

For further

Install your Wordpress



Homepage + BackOffice


Configuration of the database

The configuration of the database is in the /data/wordpress/wp-config.php file.

So watt?

The goal of this tutorial is to accelerate your start. At this point you are the master of the stack. You have SSH access to your virtual machine through the floating IP and your private keypair (default user name cloud).

The interesting entry points are:

  • /data/wordpress : WordPress installation directory.
  • /data/mysql : MariaDB datadir, backed by a Cinder volume.

Resources you could be interested in:

Install cli

The prerequisites to deploy this stack

Size of the instance

By default, the script proposes a deployment on a “Small” instance type. Instances are charged by the minute and capped at their monthly price (you can find more details on the Tarifs page of the Cloudwatt website). Obviously, you can adjust the stack parameters, particularly the default instance size.

By the way…

If you do not like command lines, you can go directly to the “run it thru the console” section by clicking here

What will you find in the repository

Once you have cloned the GitHub repository, you will find in the bundle-xenial-wordpress/ directory:

  • bundle-xenial-wordpress.heat.yml : Heat orchestration template. It will be used to deploy the necessary infrastructure.
  • : Stack launching script. This is a small script that will save you some copy-paste.
  • : Floating IP recovery script.


Initialize the environment

Have your Cloudwatt credentials in hand and click HERE. If you are not logged in yet, you will go through the authentication screen, and then the script download will start. Thanks to it, you will be able to initiate shell access to the Cloudwatt APIs.

Source the downloaded file in your shell. Your password will be requested.

$ source COMPUTE-[...]
Please enter your OpenStack Password:

Once this is done, the OpenStack command-line tools can interact with your Cloudwatt user account.

Adjust the parameters

In the bundle-xenial-wordpress.heat.yml file, you will find at the top a section named parameters. The only mandatory parameter to adjust is keypair_name; its default value must contain a keypair that is valid for your Cloudwatt user account. Within this same file, you can adjust the instance size by playing with the flavor parameter.

heat_template_version: 2015-04-30

description: All-in-one Wordpress stack

parameters:
  keypair_name:
    default: amaury-ext-compute         <-- Indicate here your keypair
    description: Keypair to inject in instances
    type: string

  flavor_name:
    default:                            <-- Indicate here the flavor size
    description: Flavor to use for the deployed instance
    type: string
    constraints:
      - allowed_values:


Start up the stack

In a shell, run the launch script with the name you want to give your stack as a parameter:

$ ./ Wordpress
| id                                   | stack_name | stack_status       | creation_time        |
| ed4ac18a-4415-467e-928c-1bef193e4f38 | Wordpress  | CREATE_IN_PROGRESS | 2015-04-21T08:29:45Z |

Finally, wait five minutes until the deployment is complete.

At each new deployment of the stack, a password is generated directly in the /data/wordpress/config-default.php configuration file.


Once all of this is done, you can run the script.

./ Wordpress

It will gather the floating IP assigned to your stack. You can then paste this IP in your favorite browser and start configuring your WordPress instance.

In the background

The script takes care of running the necessary API requests to:

  • start an Ubuntu Xenial based instance
  • update the system packages
  • install Apache, PHP, MariaDB and Wordpress
  • configure MariaDB with a WordPress-dedicated user and database, with a generated password
  • expose a floating IP on the Internet

Have fun. Hack in peace.

by Mohamed-Ali at March 20, 2017 11:00 PM


Using Kubernetes Helm to install applications


After reading this introduction to Kubernetes Helm, you will know how to:

  • Install Helm
  • Configure Helm
  • Use Helm to determine available packages
  • Use Helm to install a software package
  • Retrieve a Kubernetes Secret
  • Use Helm to delete an application
  • Use Helm to roll back changes to an application

Difficulty is a relative thing. Deploying an application using containers can be much easier than trying to manage deployments of a traditional application over different environments, but trying to manage and scale multiple containers manually is much more difficult than orchestrating them using Kubernetes.  But even managing Kubernetes applications looks difficult compared to, say, “apt-get install mysql”. Fortunately, the container ecosystem has now evolved to that level of simplicity. Enter Helm.

Helm is a Kubernetes-based package installer. It manages Kubernetes “charts”, which are “preconfigured packages of Kubernetes resources.”  Helm enables you to easily install packages, make revisions, and even roll back complex changes.

Next week, my colleague Maciej Kwiek will be giving a talk at Kubecon about Boosting Helm with AppController, so we thought this might be a good time to give you an introduction to what it is and how it works.

Let’s take a quick look at how to install, configure, and utilize Helm.

Install Helm

Installing Helm is actually pretty straightforward.  Follow these steps:

  1. Download the latest version of Helm from  (Note that if you are using an older version of Kubernetes (1.4 or below) you might have to downgrade Helm due to breaking changes.)
  2. Unpack the archive:
    $ gunzip helm-v2.2.3-darwin-amd64.tar.gz
    $ tar -xvf helm-v2.2.3-darwin-amd64.tar
    x darwin-amd64/
    x darwin-amd64/helm
    x darwin-amd64/LICENSE
    x darwin-amd64/
  3. Next move the helm executable to your path:
    $ mv dar*/helm /usr/local/bin/.
  4. Finally, initialize helm to both set up the local environment and to install the server portion, Tiller, on your cluster.  (Helm will use the default cluster for Kubernetes, unless you tell it otherwise.)
    $ helm init
    Creating /Users/nchase/.helm 
    Creating /Users/nchase/.helm/repository 
    Creating /Users/nchase/.helm/repository/cache 
    Creating /Users/nchase/.helm/repository/local 
    Creating /Users/nchase/.helm/plugins 
    Creating /Users/nchase/.helm/starters 
    Creating /Users/nchase/.helm/repository/repositories.yaml 
    Writing to /Users/nchase/.helm/repository/cache/stable-index.yaml
    $HELM_HOME has been configured at /Users/nchase/.helm.
    Tiller (the helm server side component) has been installed into your Kubernetes Cluster.
    Happy Helming!

Note that you can also upgrade the Tiller component using:

helm init --upgrade

That’s all it takes to install Helm itself; now let’s look at using it to install an application.

Install an application with Helm

One of the things that Helm does is enable authors to create and distribute their own applications using charts; to get a full list of the charts that are available, you can simply ask:

$ helm search
NAME                          VERSION DESCRIPTION                                       
stable/aws-cluster-autoscaler 0.2.1   Scales worker nodes within autoscaling groups.    
stable/chaoskube              0.5.0   Chaoskube periodically kills random pods in you...
stable/chronograf             0.1.2   Open-source web application written in Go and R...

In our case, we’re going to install MySQL from the stable/mysql chart. Follow these steps:

  1. First update the repo, just as you’d do with apt-get update:
    $ helm repo update
    Hang tight while we grab the latest from your chart repositories...
    ...Skip local chart repository
    Writing to /Users/nchase/.helm/repository/cache/stable-index.yaml
    ...Successfully got an update from the "stable" chart repository
    Update Complete. ⎈ Happy Helming!⎈ 
  2. Next, we’ll do the actual install:
    $ helm install stable/mysql

    This command produces a lot of output, so let’s take it one step at a time.  First, we get information about the release that’s been deployed:

    NAME:   lucky-wildebeest
    LAST DEPLOYED: Thu Mar 16 16:13:50 2017
    NAMESPACE: default

    As you can see, it’s called lucky-wildebeest, and it’s been successfully DEPLOYED.

    Your release will, of course, have a different name. Next, we get the resources that were actually deployed by the stable/mysql chart:

    ==> v1/Secret
    NAME                    TYPE    DATA  AGE
    lucky-wildebeest-mysql  Opaque  2     0s
    ==> v1/PersistentVolumeClaim
    NAME                    STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
    lucky-wildebeest-mysql  Bound   pvc-11ebe330-0a85-11e7-9bb2-5ec65a93c5f1  8Gi       RWO          0s
    ==> v1/Service
    NAME                    CLUSTER-IP  EXTERNAL-IP  PORT(S)   AGE
    lucky-wildebeest-mysql   <none>       3306/TCP  0s
    ==> extensions/v1beta1/Deployment
    NAME                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    lucky-wildebeest-mysql  1        1        1           0          0s

    This is a good example because we can see that this chart configures multiple types of resources: a Secret (for passwords), a persistent volume (to store the actual data), a Service (to serve requests) and a Deployment (to manage it all).

    The chart also enables the developer to add notes:

    MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
    To get your root password run:
        kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
    To connect to your database:
     Run an Ubuntu pod that you can use as a client:
        kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
     Install the mysql client:
        $ apt-get update && apt-get install mysql-client -y
     Connect using the mysql cli, then provide your password:
    $ mysql -h lucky-wildebeest-mysql -p

These notes are the basic documentation a user needs to use the actual application. Now let’s see how we put it all to use.

Connect to mysql

The first lines of the notes make it seem deceptively simple to connect to MySQL:

MySQL can be accessed via port 3306 on the following DNS name from within your cluster:

Before you can do anything with that information, however, you need to do two things: get the root password for the database, and get a working client with network access to the pod hosting it.

Get the mysql password

Most of the time, you’ll be able to get the root password by simply executing the code the developer has left you:

$ kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

Some systems — notably MacOS — will give you an error:

$ kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
Invalid character in input stream.

This is because of an error in base64 that adds an extraneous character. In this case, you will have to extract the password manually.  Basically, we’re going to execute the same steps as this line of code, but one at a time.

Start by looking at the Secrets that Kubernetes is managing:

$ kubectl get secrets
NAME                     TYPE                                  DATA      AGE
default-token-0q3gy   3         145d
lucky-wildebeest-mysql   Opaque                                2         20m

It’s the second, lucky-wildebeest-mysql that we’re interested in. Let’s look at the information it contains:

$ kubectl get secret lucky-wildebeest-mysql -o yaml
apiVersion: v1
data:
  mysql-password: a1p1THdRcTVrNg==
  mysql-root-password: REJUem1iQWlrTw==
kind: Secret
metadata:
  creationTimestamp: 2017-03-16T20:13:50Z
  labels:
    app: lucky-wildebeest-mysql
    chart: mysql-0.2.5
    heritage: Tiller
    release: lucky-wildebeest
  name: lucky-wildebeest-mysql
  namespace: default
  resourceVersion: "43613"
  selfLink: /api/v1/namespaces/default/secrets/lucky-wildebeest-mysql
  uid: 11eb29ed-0a85-11e7-9bb2-5ec65a93c5f1
type: Opaque

You probably already figured out where to look, but the developer’s instructions told us the raw password data was here:


So we’re looking for this:

apiVersion: v1
data:
  mysql-password: a1p1THdRcTVrNg==
  mysql-root-password: REJUem1iQWlrTw==
kind: Secret

Now we just have to go ahead and decode it:

$ echo "REJUem1iQWlrTw==" | base64 --decode
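If your platform’s base64 is the one that complains about the input, stripping newlines before decoding usually sidesteps the problem (a workaround sketch, not part of the chart’s own instructions):

```shell
# Same decode as above, but with any stray newlines removed from the
# input first; this avoids the "invalid character" error some base64
# implementations raise.
echo "REJUem1iQWlrTw==" | tr -d '\n' | base64 --decode
```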

Finally!  So let’s go ahead and connect to the database.

Create the mysql client

Now we have the password, but if we try to just connect with the mysql client on any old machine, we’ll find that there’s no connectivity from outside the cluster.  For example, if I try to connect with my local mysql client, I get an error:

$ ./mysql -h lucky-wildebeest-mysql.default.svc.cluster.local -p
Enter password: 
ERROR 2005 (HY000): Unknown MySQL server host 'lucky-wildebeest-mysql.default.svc.cluster.local' (0)

So what we need to do is create a pod on which we can run the client.  Start by creating a new pod using the ubuntu:16.04 image:

$ kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never 

$ kubectl get pods
NAME                                      READY     STATUS             RESTARTS   AGE
hello-minikube-3015430129-43g6t           1/1       Running            0          1h
lucky-wildebeest-mysql-3326348642-b8kfc   1/1       Running            0          31m
ubuntu                                    1/1       Running            0          25s

When it’s running, go ahead and attach to it:

$ kubectl attach ubuntu -i -t

Hit enter for command prompt

Next install the mysql client:

root@ubuntu2:/# apt-get update && apt-get install mysql-client -y
Get:1 xenial InRelease [247 kB]
Get:2 xenial-updates InRelease [102 kB]
Setting up mysql-client-5.7 (5.7.17-0ubuntu0.16.04.1) ...
Setting up mysql-client (5.7.17-0ubuntu0.16.04.1) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...

Now we should be ready to actually connect. Remember to use the password we extracted in the previous step.

root@ubuntu2:/# mysql -h lucky-wildebeest-mysql -p
Enter password: 

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 410
Server version: 5.7.14 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Of course you can do what you want here, but for now we’ll go ahead and exit both the database and the container:

mysql> exit
root@ubuntu2:/# exit

So we’ve successfully installed an application — in this case, MySql, using Helm.  But what else can Helm do?

Working with revisions

So now that you’ve seen Helm in action, let’s take a quick look at what you can actually do with it.  Helm is designed to let you install, upgrade, delete, and roll back revisions. We’ll get into more details about upgrades in a later article on creating charts, but let’s quickly look at deleting and rolling back revisions:

First off, each time you make a change with Helm, you’re creating a Revision.  By deploying MySql, we created a Revision, which we can see in this list:

NAME              REVISION UPDATED                  STATUS   CHART         NAMESPACE
lucky-wildebeest  1        Sun Mar 19 22:07:56 2017 DEPLOYED mysql-0.2.5   default
operatic-starfish 2        Thu Mar 16 17:10:23 2017 DEPLOYED redmine-0.4.0 default

As you can see, we created a revision called lucky-wildebeest, based on the mysql-0.2.5 chart, and its status is DEPLOYED.

We could also get back the information we got when it was first deployed by getting the status of the revision:

$ helm status intended-mule
LAST DEPLOYED: Sun Mar 19 22:07:56 2017
NAMESPACE: default

==> v1/Secret
NAME                 TYPE    DATA  AGE
intended-mule-mysql  Opaque  2     43m

==> v1/PersistentVolumeClaim
NAME                 STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
intended-mule-mysql  Bound   pvc-08e0027a-0d12-11e7-833b-5ec65a93c5f1  8Gi       RWO          43m

Now, if we wanted to, we could go ahead and delete the revision:

$ helm delete lucky-wildebeest

Now if you list all of the active revisions, it’ll be gone.

$ helm ls

However, even though the revision is gone, you can still see the status:

$ helm status lucky-wildebeest
LAST DEPLOYED: Sun Mar 19 22:07:56 2017
NAMESPACE: default

MySQL can be accessed via port 3306 on the following DNS name from within your cluster:

To get your root password run:

    kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

To connect to your database:

 Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

 Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

 Connect using the mysql cli, then provide your password:

    $ mysql -h lucky-wildebeest-mysql -p

OK, so what if we decide that we’ve changed our mind and want to roll back that deletion?  Fortunately, Helm is designed for that.  We can specify that we want to roll back our application to a specific revision (in this case, 1).

$ helm rollback lucky-wildebeest 1
Rollback was a success! Happy Helming!

We can see that the application is back, and the revision has been incremented:

NAME              REVISION UPDATED                  STATUS   CHART         NAMESPACE
lucky-wildebeest  2        Sun Mar 19 23:46:52 2017 DEPLOYED mysql-0.2.5   default

We can also check the status:

$ helm status intended-mule
LAST DEPLOYED: Sun Mar 19 23:46:52 2017
NAMESPACE: default

==> v1/Secret
NAME                 TYPE    DATA  AGE
intended-mule-mysql  Opaque  2     21m

==> v1/PersistentVolumeClaim
NAME                 STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
intended-mule-mysql  Bound   pvc-dad1b896-0d1f-11e7-833b-5ec65a93c5f1  8Gi       RWO          21m

Next time, we’ll talk about how to create charts for Helm.  Meanwhile, if you’re going to be at Kubecon, don’t forget Maciej Kwiek’s talk on Boosting Helm with AppController.

The post Using Kubernetes Helm to install applications appeared first on Mirantis | Pure Play Open Cloud.

by Nick Chase at March 20, 2017 09:20 PM

Arie Bregman

InfraRed: Deploying and Testing Openstack just made easier!

Deploying and testing OpenStack is very easy. If you read the headline and your eyebrows raised, you are in the right place. I believe that most of us who have experienced at least one deployment of OpenStack will agree that deploying OpenStack can be a quite frustrating experience. It doesn’t matter if you are using it for […]

by bregman at March 20, 2017 08:05 PM

Alessandro Pilotti

Setting the Windows admin password in OpenStack

We’re getting quite a few questions about how to set the admin password in OpenStack Windows instances, so let’s clarify the available options.

nova get-password

The secure and proper way to set passwords in OpenStack Windows instances is by letting Cloudbase-Init generate a random password and post it encrypted on the Nova metadata service. The password can then be retrieved with:

nova get-password <instance> [<ssh_private_key_path>]

You need to boot your instance with an SSH keypair (exactly like you would on Linux for SSH public key authentication). In this case the public key is used to encrypt the password before posting it to the Nova HTTP metadata service. This way nobody will be able to decrypt it without the keypair’s private key.
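The mechanism can be illustrated locally with openssl. This is an analogy of the encrypt-with-public / decrypt-with-private flow, not the exact code path Nova and Cloudbase-Init use; the file names and sample password are made up.

```shell
# Generate a keypair, encrypt a sample password with the public half
# (as the instance side does before posting to the metadata service),
# then decrypt it with the private half (as `nova get-password` does
# with your SSH private key).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key.pem 2>/dev/null
openssl pkey -in key.pem -pubout -out pub.pem
printf 'Passw0rd!' > clear.txt
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in clear.txt -out enc.bin
openssl pkeyutl -decrypt -inkey key.pem -in enc.bin
```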

This option is also well supported in Horizon, but not enabled by default. To enable it, just edit openstack_dashboard/local/ and add:


To retrieve the password in Horizon, select “RETRIEVE PASSWORD” from the instance dropdown menu:

Horizon Retrieve password 1
Browse for your private key:

Horizon Retrieve password 3

Click “DECRYPT PASSWORD” (the decryption occurs in the browser; no data is sent to the server) and retrieve your password:

Horizon Retrieve password 4


nova boot --meta admin_pass

In case an automatically generated password is not suitable, there’s an option to provide a password via the command line. This is NOT RECOMMENDED due to the security implications of sharing clear-text passwords in the metadata content.

In this case the password is provided to the Nova instance via metadata service and assigned by Cloudbase-Init to the admin user:

nova boot --meta admin_pass="<password>" ...

Given the previously mentioned security concerns, this feature is disabled by default in Cloudbase-Init. In order to enable it, inject_user_password must be set to true in the cloudbase-init.conf and cloudbase-init-unattend.conf config files:

inject_user_password = true


Password change in userdata script

The userdata can contain any PowerShell content (note the starting #ps1 line identifying it as such), including commands for creating users or setting passwords, providing a much higher degree of flexibility. The same security concerns for clear-text content apply as above.
The main limitation is that it does not work with Heat or other solutions that already employ the userdata content for other purposes.
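As a sketch of the shape such userdata takes (the user name and password are placeholders, and the nova invocation is shown only as a comment):

```shell
# Write a PowerShell userdata file; the #ps1 marker on the first line is
# what identifies the content as PowerShell to Cloudbase-Init.
cat > userdata.ps1 <<'EOF'
#ps1
# Hypothetical example: set the password of an existing local user
net user Admin "Passw0rd!"
EOF
head -1 userdata.ps1
# It would then be passed at boot time, e.g.:
#   nova boot --user-data userdata.ps1 ...
```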

Passwordless authentication

Nova allows X509 keypairs to support passwordless authentication for Windows. This is highly recommended, as it does not require any password, similarly to SSH public key authentication on Linux. The limitation of this option is that it works only for remote PowerShell and WinRM, not for RDP.

The post Setting the Windows admin password in OpenStack appeared first on Cloudbase Solutions.

by Alessandro Pilotti at March 20, 2017 04:02 PM


Blog posts, week of March 20

Here's what the RDO community has been blogging about in the last week.

Joe Talerico and OpenStack Performance at the OpenStack PTG in Atlanta by Rich Bowen

Last month at the OpenStack PTG in Atlanta, Joe Talerico spoke about his work on OpenStack Performance in the Ocata cycle.


RDO CI promotion pipelines in a nutshell by amoralej

One of the key goals in RDO is to provide a set of well tested and up-to-date repositories that can be smoothly used by our users:


A tale of Tempest rpm with Installers by chandankumar

Tempest is a set of integration tests to run against an OpenStack cloud. Delivering a robust and working OpenStack cloud is always challenging. To make sure what we deliver in RDO is rock-solid, we use Tempest to perform a set of API and scenario tests against a running cloud using different installers like puppet-openstack-integration, packstack, and tripleo-quickstart. This is the story of how we integrated the RDO Tempest RPM package with installers so it can be consumed by various CI systems rather than using raw upstream sources.


An even better Ansible reporting interface with ARA 0.12 by dmsimard

Not even a month ago, I announced the release of ARA 0.11 with a bunch of new features and improvements.


Let rdopkg manage your RPM package by

rdopkg is an RPM packaging automation tool which was written to effortlessly keep packages in sync with (fast moving) upstream.


Using Software Factory to manage Red Hat OpenStack Platform lifecycle by Maria Bracho, Senior Product Manager OpenStack

by Nicolas Hicher, Senior Software Engineer – Continuous Integration and Delivery Software-Factory Software-Factory is a collection of services that provides a powerful platform to build software. It enables the same workflow used to develop OpenStack: using Gerrit for code reviews, Zuul/Nodepool/Jenkins as a CI system, and Storyboard for stories and issues tracker. Also, it ensures a reproducible test environment with ephemeral Jenkins slaves.


by Rich Bowen at March 20, 2017 01:53 PM

OpenStack Superuser

Pay it forward: Sign up for Speed Mentoring at the OpenStack Summit Boston

After a successful launch at the Austin Summit, Speed Mentoring is back in action in Boston.

Organized by the Women of OpenStack, it’s designed to be a lightweight mentoring initiative to provide technical or career guidance to beginners in the community. Mentees should already be part of the community; they should have gone through, or be familiar with Upstream Training.

Intel is sponsoring the Boston edition, which is now seeking 10-20 mentors interested in offering either technical or career advice.

Who should apply?

“We’re happy for any mentor of any gender from a technical or non-technical background who has worked in OpenStack long enough to know their way around — at least one year of experience,” Emily Hugenbruch, software engineer at IBM and driving force behind the project, tells Superuser. “They should be excited about OpenStack and interested in sharing their advice and expertise with others.”

In terms of what mentees are seeking, she says they’re looking to network, for advice on how to break into the community and how to advance their careers. “It’s up to mentors if they want to allow mentees to contact them after the Speed Mentoring event, although we encourage it.”

How does it work?

Mentors sign up here, filling out a survey about their areas of interest and expertise. That info is turned into special baseball cards, which are given out at the event by way of introduction. Mentors meet with small groups of mentees in 15-minute intervals and answer their questions about how to grow in the community.

It’s a fast-paced event and a great way to meet new people, introduce them to the Summit and welcome them into the OpenStack community, Hugenbruch says. Mentors are provided with mentee questions in advance and should plan to arrive at 7:15 a.m. (Breakfast and caffeine are provided!)

Mentors will be contacted ahead of time to go over logistics before the Summit. Hugenbruch says a call is planned on April 17 (it’ll be recorded for those who can’t make it) so that serves as the deadline for mentor applications.

In the Austin edition, 150 people attended and 66 matches were made.

The post Pay it forward: Sign up for Speed Mentoring at the OpenStack Summit Boston appeared first on OpenStack Superuser.

by Superuser at March 20, 2017 12:23 PM

Cisco Cloud Blog

Building A Cloud Community

Six months ago, I inherited a stagnant OpenStack San Diego user group and its dozen orphaned members. I had discovered the benefits of working with OpenStack the previous year when a client asked me to develop a cyber security solution for its OpenStack powered cloud. OpenStack was a breath of fresh air after my experience with closed, proprietary public cloud environments. I was motivated to ensure other people in the industry know its benefits. To really get people excited about OpenStack, I needed to include hands-on experience; give people some “stick time” using OpenStack. This user group needed an OpenStack cloud for its (no longer) orphaned members.

by John Studarus at March 20, 2017 11:00 AM

Community leadership planning, new board members, and more OpenStack news

Explore what's happening this week in OpenStack, the open source cloud computing project.

by Jason Baker at March 20, 2017 05:00 AM

March 19, 2017

OpenStack Blog

OpenStack Developer Mailing List Digest March 11-17

SuccessBot Says

  • Dims [1]: Nova now has a python35 based CI job in check queue running Tempest tests (everything running on py35)
  • jaypipes [2]: Finally got a good functional test created that stresses the Ironic and Nova integration and migration from Newton to Ocata.
  • Lbragstad [3]: the OpenStack-Ansible project has a test environment that automates rolling upgrade performance testing
  • annegentle [4]: Craig Sterrett and the App Dev Enablement WG: New links to more content for the appdev docs [5]
  • jlvillal [6]: Ironic team completed the multi-node grenade CI job
  • Tell us yours via OpenStack IRC channels with message “#success <message>”
  • All: [7]

Pike Release Management Communication

  • The release liaison is responsible for:
    • Coordinating with the release management team.
    • Validating your team's release requests.
    • Ensuring release cycle deadlines are met.
    • Teams are encouraged to nominate a release liaison; otherwise these tasks fall back to the PTL.
  • Ensure the release liaison has the time and ability to handle the necessary communication.
    • Failing to follow through on a needed process step may block you from meeting deadlines or releasing as our milestones are date-based, not feature-based.
  • Three primary communication tools:
    • Email for announcements and asynchronous communication
      • “[release]” topic tag on the openstack-dev mailing list.
      • This includes the weekly release countdown emails with details on focus, tasks, and upcoming dates.
    • IRC for time sensitive interactions
      • With more than 50 teams, the release team relies on your presence in the freenode #openstack-release channel.
    • Written documentation for relatively stable information
      • The release team has published the schedule for the Pike cycle [8]
      • You can add the schedule to your own calendar [9]
  • Things to do right now:
    • Update your release liaisons [10].
    • Make sure your IRC nick and email address are listed in projects.yaml [11].
  • Update your mail filters to look for “[release]” in the subject line.
  • Full thread [12]

OpenStack Summit Boston Schedule Now Live!

  • Main conference schedule [13]
  • Register now [14]
  • Hotel discount rates for attendees [15]
  • Stackcity party [16]
  • Take the certified OpenStack Administrator exam [17]
  • City guide of restaurants and must see sites [18]
  • Full thread [19]

Some Information About the Forum at the Summit in Boston

  • “Forum” proper
    • 3 medium sized fishbowl rooms for cross-community discussions.
    • Selected and scheduled by a committee formed of TC and UC members, facilitated by the Foundation staff members.
    • Brainstorming for topics [20]
  • “On-boarding” rooms
    • Two rooms setup classroom style for projects teams and workgroups who want to on-board new team members.
    • Examples include providing introduction to your codebase for prospective new contributors.
    • These should not be traditional “project intro” talks.
  • Free hacking/meetup spaces
    • Four to five rooms populated with roundtables for ad-hoc discussions and hacking.
  • Full thread [21]


The Future of the App Catalog

  • Created in early 2015 as a marketplace of pre-packaged applications [22] that you can deploy using Murano.
  • This has grown to 45 Glance images, 13 Heat templates and 6 TOSCA templates, but otherwise it did not pick up a lot of steam.
  • ~30% are just thin wrappers around Docker containers.
  • Traffic stats show 100 visits per week, 75% of which only read the index page.
  • In parallel, Docker developed a pretty successful containerized application marketplace (Docker Hub) with hundreds or thousands of regularly updated apps.
    • Keeping the catalog around makes us look like we are unsuccessfully trying to compete with that ecosystem, while OpenStack is in fact complementary.
  • In the past, we have retired projects that were dead upstream.
    • The app catalog, however, has an active maintenance team.
    • If we retire the app catalog, it would not be a reflection on that team's performance, but an acknowledgment that the beta was arguably not successful in building an active marketplace, nor a great fit from a strategy perspective.
  • Two approaches for users today to deploy docker apps in OpenStack:
    • Container-native approach: “docker run” on an instance provisioned by Nova, or on a Kubernetes cluster created with Magnum.
    • OpenStack-native approach: “zun create nginx”.
  • Full thread [23][24]

ZooKeeper vs etcd for Tooz/DLM

  • Devstack defaults to ZooKeeper and is opinionated about it.
  • Lots of container related projects are using etcd [25], so do we need to avoid both ZooKeeper and etcd?
  • For things like databases and message queues, it's more than time for us to settle on one solution.
    • For DLMs, ZooKeeper gives us the mature/featureful angle; etcd covers the Kubernetes-cooperation / non-Java angle.
  • OpenStack interacts with DLMs via the Tooz library. Tooz today only supports etcd v2, but v3 support is planned, which would use gRPC.
  • The OpenStack gate will begin to default to etcd with Tooz.
  • Full thread [26]
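To make the DLM discussion concrete, here is a simplified pure-Python sketch of the kind of abstraction Tooz provides: services code against one coordinator interface, and the backend (ZooKeeper, etcd, ...) is swappable. The class names and the in-memory backend below are illustrative stand-ins, not the real Tooz API.

```python
# Illustrative sketch of a DLM abstraction layer in the spirit of Tooz:
# one interface, pluggable backends. All names here are hypothetical.
import threading


class DistributedLock:
    """Backend-neutral lock handle usable as a context manager."""
    def __init__(self, backend, name):
        self._backend, self._name = backend, name

    def __enter__(self):
        self._backend.acquire(self._name)
        return self

    def __exit__(self, *exc):
        self._backend.release(self._name)


class InMemoryBackend:
    """Stand-in for a ZooKeeper or etcd driver; a real driver would
    create ephemeral znodes or etcd leases instead of local locks."""
    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()

    def acquire(self, name):
        with self._guard:
            lock = self._locks.setdefault(name, threading.Lock())
        lock.acquire()

    def release(self, name):
        self._locks[name].release()


class Coordinator:
    """What a Tooz-like library exposes to OpenStack services:
    get_lock() returns the same abstraction regardless of which
    backend URL was configured."""
    def __init__(self, backend):
        self._backend = backend

    def get_lock(self, name):
        return DistributedLock(self._backend, name)


coord = Coordinator(InMemoryBackend())
with coord.get_lock("resize-instance-42"):
    pass  # critical section: only one service process holds the lock
```

Swapping `InMemoryBackend` for a ZooKeeper- or etcd-backed driver changes nothing for the calling service, which is the property the thread is debating.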

Small Steps for Go

  • An etherpad [27] has been started to begin tackling the new language requirements [28] for Go.
  • A golang-commons repository exists [29]
  • Gophercloud versus having a golang-client project is being discussed in the etherpad. Regardless, we need support for os-client-config.
  • Full thread [30]

POST /api-wg/news

  • Guidelines under review:
    • Add API capabilities discovery guideline [31]
    • Refactor and re-validate API change guidelines [32]
    • Microversions: add next_min_version field in version body [33]
    • WIP: microversion architecture archival doc [34]
  • Full thread [35]

Proposal to Rename Castellan to oslo.keymanager

  • Castellan is a Python abstraction over different key manager solutions such as Barbican. Implementations like Vault could be supported, but currently are not.
  • The rename would emphasize that Castellan is an abstraction layer.
    • Similar to oslo.db supporting MySQL and PostgreSQL.
  • Instead of renaming to oslo.keymanager, it could be rolled into the Oslo umbrella without a rename; Tooz sets the precedent for this.
  • Full thread [36]
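The abstraction-layer argument can be illustrated with a short, hedged Python sketch: callers use one key manager interface, and a concrete backend (Barbican today, potentially Vault later) is chosen by configuration, much as oslo.db sits over MySQL and PostgreSQL. All names below are hypothetical, not Castellan's actual API.

```python
# Hypothetical sketch of a key-manager abstraction in the spirit of
# Castellan; class and function names are invented for illustration.
import abc
import uuid


class KeyManager(abc.ABC):
    """Backend-neutral interface callers code against."""
    @abc.abstractmethod
    def store(self, secret: bytes) -> str: ...

    @abc.abstractmethod
    def get(self, ref: str) -> bytes: ...


class InMemoryKeyManager(KeyManager):
    """Test double standing in for a Barbican- or Vault-backed driver."""
    def __init__(self):
        self._secrets = {}

    def store(self, secret):
        ref = str(uuid.uuid4())       # opaque reference, not the secret
        self._secrets[ref] = secret
        return ref

    def get(self, ref):
        return self._secrets[ref]


def get_key_manager(backend="memory"):
    # Real code would consult configuration (e.g. oslo.config) here
    # to pick barbican, vault, etc.
    return {"memory": InMemoryKeyManager}[backend]()


km = get_key_manager()
ref = km.store(b"aes-key-material")
assert km.get(ref) == b"aes-key-material"
```

The calling service never learns which backend holds the key material, which is exactly why a name emphasizing the abstraction (or Oslo membership) is being debated.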

Release Countdown for week R-23 and R-22

  • Focus:
    • Specification approval and implementation for priority features for this cycle.
  • Actions:
    • Teams should research how they can meet the Pike release goals [37][38].
    • Teams that want to change their release model should do so before end of Pike-1 [39].
  • Upcoming Deadlines and Dates
    • Boston Forum topic formal submission period: March 20 – April 2
    • Pike-1 milestone: April 13 (R-20 week)
    • Forum at OpenStack Summit in Boston: May 8-11
  • Full thread [40]

Deployment Working Group

  • Mission: To collaborate on best practices for deploying and configuring OpenStack in production environments.
  • Examples:
    • OpenStack Ansible and Puppet OpenStack have been collaborating on Continuous Integration scenarios and on Nova upgrade orchestration.
    • TripleO and Kolla share the same tool for container builds.
    • TripleO and Fuel share the same Puppet OpenStack modules.
    • OpenStack and Kubernetes are interested in collaborating on configuration management.
    • Most of the tools want to collect OpenStack parameters for configuration management in a common fashion.
  • Wiki [41] has been started to document how the group will work together. Also an etherpad [42] for brainstorming.


by Mike Perez at March 19, 2017 12:54 AM

March 17, 2017


Joe Talerico and OpenStack Performance at the OpenStack PTG in Atlanta

Last month at the OpenStack PTG in Atlanta, Joe Talerico spoke about his work on OpenStack Performance in the Ocata cycle.

Subscribe to our YouTube channel for more videos like this.


Joe: Hi, I'm Joe Talerico. I work on OpenStack at Red Hat, doing OpenStack performance. In Ocata, we're going to be looking at doing API and dataplane performance and performance CI. In Pike we're looking at doing mix/match workloads of Rally, Shaker, and perfkit benchmarker, and different styles, different workloads running concurrently. That's what we're looking forward to in Pike.

Rich: How long have you been working on this stuff?

Joe: OpenStack performance, probably right around four years now. I started with doing Spec Cloud development, and Spec Cloud development turned into doing performance work at Red Hat for OpenStack … actually, it was Spec Virt, then Spec Cloud, then performance at OpenStack.

Rich: What kind of things were in Ocata that you find interesting?

Joe: In Ocata … for us … well, in Newton, composable roles, but building upon that, in TripleO, being able to do … breaking out the control plane even further, being able to scale out our deployments to much larger clouds. In Ocata, we're looking to work with CNCF, and do a 500 node deployment, and then put OpenShift on top of that, and find some more potential performance issues, or performance gains, going from Newton to Ocata. We've done this previously with Newton, we're going to redo it with Ocata.

by Rich Bowen at March 17, 2017 06:59 PM

OpenStack Superuser

How a small team keeps Twitter’s Fail Whale at bay

Following the strong wake created by the Fail Whale, Twitter created a life raft in the form of stateless containerized micro services. In just a few years, they’ve scaled to hundreds of teams running thousands of services on tens of thousands of hosts and in hundreds of thousands of containers.

Ian Downes is engineering manager for the compute platform team at Twitter. His team of about 10 engineers and a few other staffers buoys a platform providing container infrastructure to many of the stateless services powering Twitter and its advertising business. Downes spoke recently at Container World on “Twitter’s Micro Services Architecture: Operational & Technical Challenges.”

When people talk about containerization, he says, it’s often about how it can enable scale and disruption, but that doesn’t interest Downes much.

“What I’m more interested in are scaleable operations — independent of what scale you’re at,” he says. “What happens when you increase in size, number of machines, number of VMs, number of customers, the number of services you’re running? If you double that, or quadruple, does your operational workload double or quadruple along with it, or does it basically stay the same?”

Thanks to concerted efforts in the last few years, Twitter has infrastructure that can scale very suddenly (doubling, if necessary) and, Downes says, has blown through several orders of magnitude of growth without a corresponding operations burden.


Early on, Twitter was a monolithic application, made infamous by Fail Whale outages in 2010-2012. The social media company was towed under by events including a cascading bug, the Summer Olympics and the FIFA World Cup. They now run thousands of services for hundreds of teams inside the company, all done on their own infrastructure, their own machines, including tens of thousands of hosts and hundreds of thousands of containers.


“What’s interesting, though, is that my team manages that infrastructure and provides that platform for those users with about 10 engineers. That’s a pretty amazing number.”

The journey from a monolithic application to micro services wasn’t a simple one. Before coming to the common platform, each of those micro services ran its own infrastructure and each ops team had its own way of doing things. Downes likens it to cat herding: customers have habits, are opinionated and want to do things in a particular way. That meant keeping a ratio of 10:1, that is, 10 customers or 10 machines for every engineer.

“For a platform to be successful at the scale we intended, we had to view customers more like sheep,” he says. The soft-spoken Downes underlines that the analogy isn’t intended in a disparaging way, but simply that, like in his native New Zealand, a whole herd of sheep can be managed with two or three skilled sheepdogs.  “It’s all nice and smooth and very low fuss.”

How they did it

It sounds like a tall order, but Downes says the solution is simple: a contract with customers that decouples availability and applications from operations. This contract asks users to architect for individual instances of their application being rescheduled and, in turn, his team promises to keep those services healthy. (A service may have 100 or 1,000 instances that are identical copies of the service, he adds.)

“Obviously, failures still happen,” he says. “It doesn’t matter whether you’re running on virtualized infrastructure or the cloud. You have to architect for it, but that’s not sufficient because it doesn’t give any leeway for operations and we don’t want them to architect and solve their problems in a different way.”


At any point in time, the compute team can reschedule instances of applications and move them around inside the cluster. Doing that will keep the service healthy, not merely running. “Running is not sufficient,” Downes emphasizes. However, his team doesn’t guarantee how many instances will run for each application. If a customer requests 100 instances, fewer may be running (due to failure or rescheduling), but the engineering team also doesn’t guarantee that it won’t exceed the requested number of instances.

“This seems a little strange, but it can happen with partitioning,” he says. For example, when the agent installed on the host running the instance loses connectivity. Because the compute team doesn’t know the state of those instances, they’ll spin up additional ones somewhere else in the cluster, so an application could exceed the number of instances. In general, the number of instances running at any given time doesn’t matter.

The compute team set a target of keeping 95 percent of instances healthy for at least five minutes as part of the contract. “We say ‘we won’t take out too much of the service at any given time and we won’t do it too quickly.’”
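As a rough illustration, the contract described above (keep at least 95 percent of a job's instances healthy, and leave a job alone for five minutes once it becomes healthy) could be expressed as an admission check like the following sketch; the function and parameter names are invented for illustration, not taken from Twitter's actual tooling.

```python
# Hedged sketch: decide whether an operation (killing/restarting some
# instances) may proceed under a contract like the one described:
# >= 95% of instances stay healthy, and a job gets a five-minute
# cooldown after its instances become healthy. Names are illustrative.
HEALTHY_FRACTION = 0.95
COOLDOWN_SECONDS = 5 * 60


def may_disrupt(total, healthy, to_disrupt, seconds_since_healthy):
    """True only if disrupting `to_disrupt` instances keeps the job
    within the healthy-fraction target and the cooldown since the job
    last became healthy has elapsed."""
    if seconds_since_healthy < COOLDOWN_SECONDS:
        return False
    return (healthy - to_disrupt) / total >= HEALTHY_FRACTION


# A 100-instance job, all healthy, untouched for 10 minutes:
assert may_disrupt(100, 100, 5, 600) is True    # 95/100 still healthy
assert may_disrupt(100, 100, 6, 600) is False   # would drop below 95%
assert may_disrupt(100, 100, 5, 60) is False    # cooldown not elapsed
```

A real scheduler would evaluate this per job before every kill/restart, which is what lets operations proceed cluster-wide without violating any one customer's contract.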


To make it work for customers, there are a few caveats in place. They make it easy for customers to scale without fretting over instances: the more the better. But customers are also required to express how sensitive those instances are to different failure rates.

Downes says it’s a way to chop up instances into groups that can be operated on. He offered an example of 20 instances distributed across five different racks. If a host goes down, they lose one out of 20 (only five percent); if a rack goes down, they lose 20 percent of capacity (obviously a large fraction), so customers are encouraged to scale well beyond 20… “It’s good for them, it’s good for failures and it’s good for us.” (In response to a follow-up question, Downes says Twitter uses Apache Aurora on Mesos, but details were beyond the scope of his 20-minute talk.)
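The failure-domain arithmetic in this example is easy to sketch, assuming instances are spread evenly across racks (an assumption of this sketch, not a claim about Twitter's placement policy):

```python
# Fraction of a job's capacity lost when one host or one rack fails,
# assuming one instance per host and an even spread across racks.
def capacity_lost(instances, racks, failed_unit="rack"):
    per_rack = instances / racks
    lost = per_rack if failed_unit == "rack" else 1
    return lost / instances


# 20 instances on 5 racks: one host down costs 5%, one rack down 20%.
assert capacity_lost(20, 5, "host") == 0.05
assert capacity_lost(20, 5, "rack") == 0.20
# Scaling out shrinks the blast radius: 100 instances on 10 racks
# means a whole rack failure costs only 10% of capacity.
assert capacity_lost(100, 10, "rack") == 0.10
```

This is why the contract pushes customers toward more, smaller instances: each failure domain then holds a smaller fraction of the job.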

<figure class="wp-caption alignnone" id="attachment_5828" style="width: 605px"><figcaption class="wp-caption-text"> Ian Downes of Twitter at Container World. Photo: Nicole Martinelli, OpenStack Foundation.
</figcaption> </figure>

Every operation in the cluster is aware of the contract for each job. “So if we want to migrate something – we actually kill and then restart – we’ll give it five minutes once it becomes healthy before we act on anything that might impact that job again.” A single host may run 10 different containers with 10 different user workloads, and those contracts have to be tracked across all the machines. “This means we can do actions on the cluster without impacting our customers.”

The next question is whether it’s sufficient to enable the compute team to manage the operational work load. The answer: Absolutely. Downes says they can roll an entire 30,000-node cluster running a full production workload — meaning they can reboot those nodes — in 24 hours with zero customer impact. While Downes calls that “aggressive” and not something they regularly do, they have done it for a kernel update and successfully, too. “There was no panic, no fires to put out, no alerts were triggered — we didn’t get a single customer request.”


 What’s next: Extreme cat makeover

Downes says the team is exploring whether the contract concept can be taken further. “We see ourselves almost as a public cloud. Other teams inside the company provide machines to us. If those machines fail, we go over the wall and say, ‘Can you give us a machine that’s healthy?’ It’s very much like running in the public cloud in that way.”

He admits that the timing is different — if a host fails and is taken offline, they may not get a replacement for a day or two. “We are, in effect, customers. We tell those providers, ‘We don’t care if you need to act on our machines, if it’s at the rack level, just go in and do it.’” For example, the operations team may want to take the top-of-rack switch offline and do some maintenance on it — they’re welcome to without any interference or notice from the compute team.

This means his team has decoupled itself from the infrastructure and decoupled users from the infrastructure. “It’s incredibly powerful, we have a resilient platform that we can poke and prod and perform operations on and it doesn’t affect our users.”


The picture that he painted of a flock of sheep expertly herded by a few dogs isn’t the whole picture, he admits. “The reality is a little different. We have a lot of sheep and a few cats in the mix as well.” The bulk of Twitter’s shared cluster is where the sheep run – the customers who accept that contract run their services there. On the side, running through the same scheduler etc., are the cats. These customers have special requirements — persistent storage services, special hardware requirements, churning,  etc. — that don’t fit into the contract but want to take advantage of the orchestration, containerization and tooling.

“The unfortunate thing is that they dominate our operation workload even though they’re only about 15 percent of our cluster capacity,” he says. “In theory, we had a hybrid contract, the ‘cats’ could bring machines into the hybrid cluster and we’ll run them through our infrastructure but you need to maintain the host, update them, etc.”

Those good intentions often go astray, however.  And, despite the agreement, Downes and team end up maintaining them. That’s where the yowling begins: “it’s very painful to manage these machines,” Downes says. “Our operations burden is dominated by these special cases.”

Converting those cats into sheep is what they’re working on now. It may entail extending that contract, loosening it — they’ll only restart or reschedule some applications over a few days rather than five minutes. Another solution may be that stateful services have a “best effort restart” on the same host (so customers can reattach to the same storage) or delegating more work to internal teams.

“These are all questions that we’re trying to answer,” Downes says. “We’ve been incredibly successful in scaling up the stateless infrastructure. The next question is whether we can take those existing customers already in the cluster who are causing us this burden, and take on new customers, while maintaining operational scalability.”

Stay tuned.

Cover Photo // CC BY NC


The post How a small team keeps Twitter’s Fail Whale at bay appeared first on OpenStack Superuser.

by Nicole Martinelli at March 17, 2017 12:04 PM


RDO CI promotion pipelines in a nutshell

One of the key goals in RDO is to provide a set of well tested and up-to-date repositories that can be smoothly used by our users:

  • Operators deploying OpenStack with any of the available tools.
  • Upstream projects using RDO repos to develop and test their patches, such as OpenStack puppet modules, TripleO or Kolla.

To include new patches in RDO packages as soon as possible, RDO Trunk repos build and publish new packages whenever commits are merged in upstream repositories. To ensure the content of these packages is trustworthy, we run different tests which help us identify any problems introduced by the committed changes.

This post provides an overview of how we test RDO repositories. If you are interested in collaborating with us on running and improving it, feel free to let us know in the #rdo channel on freenode or on the rdo-list mailing list.

Promotion Pipelines

Promotion pipelines are composed of a set of related CI jobs that are executed for each supported OpenStack release to test the content of a specific RDO repository. Currently, promotion pipelines are executed in different phases:

  1. Define the repository to be tested. RDO Trunk repositories are identified by a hash based on the upstream commit of the last built package. The content of these repos doesn't change over time. When a promotion pipeline is launched, it grabs the latest consistent hash repo and sets it to be tested in the following phases.

  2. Build TripleO images. TripleO is the recommended deployment tool for production usage in RDO and, as such, is tested in RDO CI jobs. Before actually deploying OpenStack using TripleO, the required images are built.

  3. Deploy and test RDO. We run a set of jobs which deploy and test OpenStack using different installers and scenarios to ensure they behave as expected. Currently, the following deployment tools and configurations are tested:
    • TripleO deployments. Using tripleo-quickstart we deploy two different configurations, minimal and minimal_pacemaker, which apply different settings that cover the most common options.
    • OpenStack Puppet scenarios. The puppet-openstack-integration project (a.k.a. p-o-i) maintains a set of puppet manifests to deploy different combinations and configurations of OpenStack services (scenarios) on a single server using the OpenStack puppet modules, and runs tempest smoke tests against the deployed services. The services tested in each scenario can be found in the p-o-i README. Scenarios 1, 2 and 3 are currently tested in RDO CI.
    • Packstack deployments. As part of its upstream testing, packstack defines three deployment scenarios to verify the correct behavior of the existing options. Tempest smoke tests are also executed in these jobs. In RDO CI we leverage those scenarios to test new packages built in RDO repos.
  4. Repository and image promotion. When all jobs in the previous phase succeed, the tested repository is considered good and is promoted so that users can consume these packages.
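The promotion gate in phase 4 boils down to an all-jobs-green check over a hash-identified repo. The sketch below is a simplification with invented job names, not the actual RDO CI code:

```python
# Simplified sketch of a promotion gate: a hash-identified RDO Trunk
# repo is promoted only when every pipeline job passed. Job names are
# made up for illustration.
def promote(repo_hash, job_results):
    """job_results maps job name -> bool (passed). Returns the hash to
    promote, or None if any job failed."""
    if all(job_results.values()):
        # e.g. repoint a 'current-passed-ci' symlink at the hash repo
        return repo_hash
    return None


results = {
    "build-tripleo-images": True,
    "tripleo-quickstart-minimal": True,
    "puppet-openstack-scenario1": True,
    "packstack-scenario1": False,   # one failure blocks promotion
}
assert promote("a1b2c3d", results) is None

results["packstack-scenario1"] = True
assert promote("a1b2c3d", results) == "a1b2c3d"
```

Because each hash repo is immutable, promoting it is just publishing a pointer; users who follow the promoted link always get a repo state that passed the whole pipeline.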

Tools used in RDO CI

  • Job definitions are managed using Jenkins Job Builder (JJB) via a gerrit review workflow.
  • weirdo is the tool we use to run the p-o-i and Packstack testing scenarios defined upstream inside RDO CI. It's composed of a set of ansible roles and playbooks that prepare the environment and then deploy and test the installers using the testing scripts provided by the projects.
  • TripleO Quickstart provides a set of scripts, ansible roles and pre-defined configurations to deploy an OpenStack cloud using TripleO in a simple and fully automated way.
  • ARA is used to store and visualize the results of ansible playbook runs, making it easier to analyze and troubleshoot them.


RDO is part of the CentOS Cloud Special Interest Group, so we run promotion pipelines in the CentOS CI infrastructure, where Jenkins is used as the continuous integration server.

Handling issues in RDO CI

An important aspect of running RDO CI is properly managing the errors found in the jobs included in the promotion pipelines. The root cause of these issues is sometimes in the upstream OpenStack projects:

  • Some problems are not caught by the devstack-based jobs running in upstream gates.
  • In some cases, new versions of OpenStack services require changes in the deployment tools (puppet modules, TripleO, etc…).

One of the contributions of RDO to upstream projects is to increase their test coverage and help identify problems as soon as possible. When we find them, we report them upstream as Launchpad bugs and propose fixes when possible.

Every time we find an issue, a new card is added to the TripleO and RDO CI Status Trello board, where we track the status and the activities carried out to get it fixed.

Status of promotion pipelines

If you are interested in the status of the promotion pipelines in RDO CI you can check:

  • CentOS CI RDO view can be used to see the result and status of the jobs for each OpenStack release.

  • RDO Dashboard shows the overall status of RDO packaging and CI.

More info

by amoralej at March 17, 2017 09:00 AM

March 16, 2017

NFVPE @ Red Hat

A (happy happy joy joy) ansible-container hello world!

Today we’re going to explore ansible-container, a project that gives you Ansible workflow for Docker. It provides a method of managing container images using ansible commands (so you can avoid a bunch of dirty bash-y Dockerfiles), and then provides a specification of “services” which is eerily similar (on purpose) to docker-compose. It also has paths forward for managing the instances of these containers on Kubernetes & OpenShift – that’s pretty tight. We’ll build two images “ren” and “stimpy”, which contain nginx and output some Ren & Stimpy quotes so we can get a grip on how it’s all put together. It’s better than bad – it’s good!

by Doug Smith at March 16, 2017 01:50 PM

OpenStack Superuser

Exploring the OpenStack neighborhood: An offbeat install guide

We’ve all seen the plethora of installation, configuration, best practices and not-so-best practices books, guides, articles and blogs on OpenStack. This series will not be one of them.

I don’t want to produce yet another installation guide. Rather, I want to share my experiences learning and understanding the OpenStack platform. (My fear is that, by the time I am done with these articles, they will sound like yet another guide, but I will try to make this series a bit more entertaining by adding my two cents where applicable.)

I believe that, sometimes, OpenStack is misunderstood. It’s not complicated; however, like all magnanimous things, it is complex. So what do you do when you encounter something complex? You take a 180-degree turn and run like your life depended on it! Just kidding.

However, if you’re like me and adore this nerdy misery in your life, you’ll try to make some sense out of the complexity. You will never understand it in its entirety and attempting to do so is futile. Rather, you need to ask yourself what you want from it, and then learn more about those aspects accordingly.

However, before we jump in,  let me introduce the main character of this series, OpenStack. What is it? Simply put, it’s a software platform that will allow you to embrace the goodies of the cloud revolution. In simpler terms, if you want to rent servers, VMs, containers, development platforms and middleware to customers while charging them for your services, OpenStack will allow you to do this. There are other, more complicated questions you may have, such as: Who are your customers? How are they paying? What are they getting? All of these will be answered in due time.

OpenStack has a great community that meets twice per year in wonderful locations across the globe, bringing the best brains of the business together to talk about technology and how it’s changing the world.

Like anything important in life, if you want something and you’re serious about it, you have got to define your goals and do the ground work. This platform is no exception. Right now, our goal is to perform a manual installation of OpenStack.

For the record: I think that all vendors that offer OpenStack are doing a wonderful job with their respective installers and, in a production environment, you may want to use these installers. Besides being automated and simple to setup, they also take care of high availability and upgrades among other things. But, if one day you have to troubleshoot an OpenStack environment (and believe me, you will), you will be thankful that you did the manual install in your lab to understand how things are configured. That’s precisely our goal here.

So let’s explore the neighborhood. Go through the diagram below:

OpenStack Base Environment

The above setup describes where OpenStack will reside in our setup. Below are the details:

Server 1: controller
  • Comment: Here is where it keeps all the important stuff
  • Server OS: Ubuntu 16.04
  • Resources: 1 CPU, 4GB RAM
  • IP Addresses:
  • Technical Brief: This is the main hub that hosts most of the fundamental services that form OpenStack

Server 2: neutron
  • Comment: This is how it interacts with the outside world
  • Server OS: Ubuntu 16.04
  • Resources: 1 CPU, 4GB RAM
  • IP Addresses:
  • Technical Brief: This is the networking component that handles all IP-based communications for OpenStack components and hosted guests

Server 3: compute1
  • Comment: This is where it entertains all its guests
  • Server OS: Ubuntu 16.04
  • Resources: 1 CPU, 4GB RAM
  • IP Addresses:
  • Technical Brief: This is the server that hosts the virtual machines rented to your customers

Networks:
  • Management Network (the inner voice): Used for all internal communication within OpenStack, when one OpenStack component wants to talk to another.
  • Tenant Networks (chatty guests): Used by customer virtual machines. When customers rent virtual machines, they can also be given networks of their own, in case they want to set up more than one machine and have them talk to each other.
  • External Network (let's call it overseas): Used by OpenStack to talk to the outside world. If a customer's virtual machine needs internet access, this is the network it uses.

On each of my three servers, the /etc/hosts file looks like this:

A. Host file configuration: /etc/hosts

<controller-management-ip>  controller

<compute1-management-ip>    compute1

<neutron-management-ip>     neutron

(Substitute the management IP of each node for the placeholders above.)

Note that I am assigning “controller”, “compute1” and “neutron” as aliases. If you change these, make sure to use the changed references when you use them in the configuration files later.
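Before moving on, it can save some head-scratching to confirm that every alias actually appears in the hosts file. Below is a minimal sketch of such a check; the 10.0.0.x addresses are placeholders I made up for the example — on your own nodes, substitute your real addresses and point HOSTS_FILE at /etc/hosts:

```shell
# Sanity check: verify each alias is present in a hosts-style file.
# The addresses below are made-up samples, not the real ones.
HOSTS_FILE=$(mktemp)
cat > "$HOSTS_FILE" <<'EOF'
10.0.0.11 controller
10.0.0.21 compute1
10.0.0.31 neutron
EOF
MISSING=0
for h in controller compute1 neutron; do
    if grep -qw "$h" "$HOSTS_FILE"; then
        echo "$h present"
    else
        echo "$h MISSING"
        MISSING=$((MISSING+1))
    fi
done
```

If any alias reports MISSING, fix /etc/hosts before continuing — several configuration files later in the series refer to the nodes by these aliases.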

Also, networking has given me some issues in the past, so below is a sample /etc/network/interfaces file for each of the servers:

B. Interface file configuration: /etc/network/interfaces

From this point on, whenever you see “@SERVERNAME,” it simply implies that the configuration that I am talking about needs to be done on that server. So “@controller” means, “please perform these configurations/installations on the controller server.”


@controller:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The management1 interface
auto ens3
iface ens3 inet static


@neutron:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The management1 interface
auto ens3
iface ens3 inet static

# The Tunnel interface
auto ens9
iface ens9 inet manual
up ifconfig ens9 up

# The external interface
auto ens10
iface ens10 inet static


@compute1:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The management1 interface
auto ens3
iface ens3 inet static

# The tunnel interface
auto ens9
iface ens9 inet manual
up ifconfig ens9 up

I’m not going into details for each interface, but there is one thing to keep in mind: the tunnel interface on both the neutron and compute nodes will not have any IP, since it is used as a trunk for multiple tenant (customer) networks in OpenStack. Also, if you are using VLAN tagging, make sure that this particular interface is untagged.
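Since the stanzas above omit the addressing lines, here is what a complete static stanza would look like. The address, netmask and gateway values are placeholders to substitute with your own (and a gateway normally belongs only on the interface that has external access):

```
auto ens3
iface ens3 inet static
    address <management-ip-of-this-node>
    netmask <your-netmask, e.g. 255.255.255.0>
    gateway <your-gateway>
```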


C. NTP configuration:

NTP is the Network Time Protocol. Since our protagonist (OpenStack) resides across three houses (servers), we need to make sure the clocks in all of them show the same time. What happens if they don’t? Ever get off a 15-hour flight? You are jet-lagged, completely out of sync with the local time, and useless until you have rested and readjusted. Something similar happens to distributed software working across systems with out-of-sync clocks. The symptoms are sometimes quite erratic, and it is not always straightforward to pinpoint the problem. So set up NTP as follows:

Configure @controller as the master clock:

  • Install the service:
sudo apt install chrony

NOTE: “sudo vi <file>” means that I want you to edit that file. The indented lines that follow are what need to be edited in the file.

  • Configure a global NTP server in the configuration file:
sudo vi /etc/chrony/chrony.conf
 server <ntp-server> iburst  #(substitute a reachable NTP pool or server)
  • Start the NTP service:
sudo service chrony restart

Configure @neutron and @compute1 to sync time with the controller:

Install the service:

 sudo apt install chrony

Configure controller as the master NTP server in the configuration file:

 sudo vi /etc/chrony/chrony.conf
   server controller iburst  #(Comment all other pool or server entries)

Start the NTP service:

sudo service chrony restart

D. OpenStack packages

Since we are performing the install on an Ubuntu Linux system, we need to add the corresponding repositories to get the OpenStack software. For the record, I’m working with the Newton release of OpenStack.

On @controller, @neutron and @compute1 perform the following configuration:  

sudo apt install software-properties-common
sudo add-apt-repository cloud-archive:newton

sudo apt update && sudo apt dist-upgrade
sudo reboot #(This seems to be a good idea to avoid surprises)
sudo apt install python-openstackclient #(Just the OS client tools)

E. Maria DB

Like most distributed systems, OpenStack requires a database to store configuration and data. In our case it resides on the controller, and almost all components will access it. Do note that the base configuration of most components resides in configuration files; however, what you create inside OpenStack is saved in the database. Consider it the intellectual capital of the OpenStack environment. Perform the following steps on the @controller node:

Install the software:

sudo apt install mariadb-server python-pymysql

Configure the database:

sudo vi /etc/mysql/mariadb.conf.d/99-openstack.cnf
  bind-address =
  #The bind-address needs to be set to the Management IP of the controller node.
  default-storage-engine = innodb
  max_connections = 4096
  collation-server = utf8_general_ci
  character-set-server = utf8

Restart the service:

sudo service mysql restart

Secure the installation:

sudo mysql_secure_installation

(Answer the questions that follow with information relevant to your environment.)

F. RabbitMQ

By now, you must have realized that OpenStack, our main character, is unusual. Its parts are not tightly knit together; rather, it is composed of a number of autonomous or semi-autonomous modules that perform their respective functions to form the greater whole. In order to do their parts, however, they need to communicate effectively and efficiently. A centralized message queuing system, RabbitMQ, is used to pass messages between components within OpenStack. Please do note that certain components sit on more than one system and perform different functions in different situations; we will leave that for a later episode. For now, perform the following configuration on the @controller node:

Install the software:

sudo apt install rabbitmq-server

Create a master user. Replace “MINE_PASS” with your own password:

sudo rabbitmqctl add_user openstack MINE_PASS

Allow full permissions for the user created above:

sudo rabbitmqctl set_permissions openstack ".*" ".*" ".*"
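For context, the user and password you just created will later be combined with the controller alias into a single transport URL that the OpenStack services point at. A hedged sketch of the format (the exact configuration keys that consume it come in later episodes):

```shell
# Compose the RabbitMQ transport URL that OpenStack services will reference.
# MINE_PASS is the placeholder password chosen above; "controller" is the
# /etc/hosts alias of the controller node.
RABBIT_USER=openstack
RABBIT_PASS=MINE_PASS
RABBIT_HOST=controller
TRANSPORT_URL="rabbit://${RABBIT_USER}:${RABBIT_PASS}@${RABBIT_HOST}"
echo "${TRANSPORT_URL}"   # prints rabbit://openstack:MINE_PASS@controller
```

Keep this pattern in mind: whenever a service configuration asks for the message broker, it is this one URL, not a per-service queue setup.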

G. Memcached:

Honestly, I’m not an expert on this one, but I do know that it caches data in memory so the application’s web pages load a bit faster (one way of giving OpenStack a better short-term memory, I suppose).

Install the software on the @controller node:

sudo apt install memcached python-memcache

Configure the IP of the controller management interface in the “conf” file:

sudo vi /etc/memcached.conf
  -l <controller-management-ip>  #(replace the existing “-l 127.0.0.1” line)

Start the service:

sudo service memcached restart


What we’ve achieved so far

  • You now know the central character’s name.
  • You now know its whereabouts.
  • You now know what things it needs to survive and how to set them up, if you wanted one for yourself.
  • You are now EXCITED for the next episode of these tutorials! (Hopefully.)

If you’ve gotten this far, thanks for your patience. If you have any questions or comments, please use the comments section below so that everyone can benefit from the discussion.

This post first appeared on the WhatCloud blog. Superuser is always interested in community content, email:

Cover Photo // CC BY NC

The post Exploring the OpenStack neighborhood: An offbeat install guide appeared first on OpenStack Superuser.

by Nooruddin Abbas at March 16, 2017 12:46 PM

March 15, 2017

OpenStack Superuser

Community leadership charts course for OpenStack

Last week, about 40 people from the OpenStack Technical Committee, User Committee, Board of Directors and Foundation Staff convened in Boston to talk about the future of OpenStack. We candidly discussed the challenges we face as a community, but also why our mission to deliver open infrastructure is more important than ever.

To kick things off, Mark Collier opened with a state of the union address, talking about the strength of our community, the number of users running OpenStack at scale across various industries and the progress we’ve made working across adjacent open source projects. OpenStack is one of the largest, global open source communities. In 2016 alone, we had 3,479 unique developers from dozens of countries and hundreds of organizations contribute to OpenStack, and the number of merged changes increased 26 percent year-over-year. The size and diversity of the OpenStack community is a huge strength, but like any large organization, scale presents its own set of challenges.

Allison Randal, who had done a lot of work beforehand to organize the strategy session, then laid out topical categories which present a unique set of challenges and opportunities for OpenStack. Each workshop participant was then challenged to define the first, most important action we should take to make progress in each category.

The five topical categories were: 1) how we communicate about ‘What is OpenStack,’ 2) unanswered requirements in OpenStack, 3) interacting with and supporting adjacent technologies, 4) changes to the technology and 5) community health. Throughout the day, we developed a plan of action for each category that will help focus community efforts and allow us to make significant progress over the next six months.

OpenStack Board / TC / UC Meeting

Six-month Community Roadmap TL;DR Version

Over the next six months, we’ll be focusing community efforts on:

  1. Better communicating and categorizing projects within the OpenStack landscape currently known as “the big tent” to help users understand what is OpenStack and the state of different projects
  2. Bringing together developers/users/product teams at the Forum in Boston to improve our process for turning requirements into code
  3. Making individual OpenStack projects like Cinder block storage or Keystone identity service easier to consume by adjacent technology communities like Kubernetes (breaking the perception that you must use all of OpenStack or nothing)
  4. Simplifying the existing projects by reducing the number of supported configurations and options
  5. Growing the next generation of community leaders and helping them rise up

Each action also has a specific owner, and will be fleshed out over the next few weeks so we can talk about progress in the April 11th board meeting. For now, I’ll dive into more detail around each category, including the context, conversations in the room and different ideas for those who want to dig in.

How we communicate about ‘What is OpenStack’

Today, any related open source project can add itself to the OpenStack git repositories, communication tools and testing infrastructure. The general community has witnessed this through an explosion of new programs and innovative ideas. While such growth and interest is an immensely positive result, retaining a process to define trademark use, official projects, core capabilities and code requirements has challenged the basis of what is OpenStack.

Over the last two years, there have been a series of changes in how we communicate about the projects that make up OpenStack. Previously, new projects would often start in Stackforge and once they wanted to become an official part of OpenStack, they would apply to the Technical Committee (TC) to become incubated until they met criteria to become part of the integrated release.

To solve growing pains, about two years ago the TC implemented two different policies: 1) adopted a new framework commonly referred to as “the big tent” (now a somewhat controversial name) and 2) stopped using the Stackforge branding. The combination of these changes essentially laid the groundwork for a two-tier model (official/unofficial projects) rather than three-tier model (Stackforge/incubated/integrated). The Interop Working Group defines “core” as capabilities and code requirements with testing to validate commercial products, and the concept of the “integrated release” no longer exists. To provide more visibility into the state of official projects, the TC and User Committee also define “tags” expressing varying states of maturity, development processes, etc.

Proposals to improve the current state included better communicating the value and position of different projects, defining “constellations” or deployment patterns consisting of groups of projects for different use cases (e.g. OpenStack for NFV), and better categorizing the existing OpenStack official projects. There was a lot of discussion about subjective versus objective judgments in how to achieve this goal. Ultimately, it was decided that better mapping projects within OpenStack is the first, most important step. Thierry Carrez, chairman of the TC, will be spearheading these important efforts with a cross-community team of volunteers.

Adjacent Technologies

We’ve been talking as a community about building the LAMP stack of the cloud, thinking of OpenStack as programmable infrastructure and recognizing the important technologies above, around and below it that people are combining for different use cases. How we better integrate and collaborate with these different technology communities was a key topic of conversation in Boston.

Proposals ranged from more focus on cross-community engagement, including upstream work and technical collaboration in adjacent communities to making sure we avoid “not-invented-here” syndrome and consume technologies outside of OpenStack. Ultimately, the group decided that cross-community work was critical and efforts were underway, but one of the first, most important things we need to do is make individual OpenStack services like Cinder block storage and Keystone identity service easily consumable on their own, alongside these other technologies. We need to change the mindset that you have to consume all of the common OpenStack services and demonstrate that each project is valuable on its own and will be combined with different technologies in unique and valuable ways. Chris Price, recently elected to the Board by the Gold Members as part of Ericsson and who also participates in OPNFV, will be coordinating these efforts. It was also one of the more popular teams for volunteers.

Unanswered Requirements

While we’ve done a great job building a community of users and operators who participate and contribute directly in OpenStack, optimizing the feedback loop for a project of this scale has been an ongoing challenge. We now have an elected User Committee that oversees 11 working groups, including a Product Working Group that helps create user stories and communicate the road map for key projects. However, the challenge discussed in the strategy workshop was bridging the user stories created by the Product Working Group to real blueprints (with applied resources) for the technical contributors, which require more in-depth gap analysis and community buy in.

Discussions ranged from how we prioritize requirements to how we reduce the number of new requirements and focus on refactoring / embracing adjacent technologies to focusing on scale, but the group ultimately decided to bring the primary stakeholders (User Committee/TC/Product WG) together at the Boston Summit Forum to collaborate/communicate around user stories, gap analysis, what fits in the current state of tech, prioritize what would have the greatest impact in reducing pain for users. Melvin Hillsman, a newly elected member of the User Committee, will be wrangling this effort.

Changes to the Technology

Changes to the technology was added as the fifth category after we realized some of the proposals for change didn’t quite fit into ‘Communicating about OpenStack’ or ‘Unanswered Requirements.’ In order to address user feedback around complexity, proposed ideas in this category included culling official projects that may not be strategic or meet our quality standards, welcoming competing implementations within the OpenStack umbrella to enable greater change and innovation, converging the number of deployment tools (especially container-based deployment tools) and recording tribal knowledge. Ultimately, the group decided the first, most important action we need to take is simplifying existing OpenStack projects, including reducing the number of configuration options. Mike Perez, who works as a cross-project development coordinator at the OpenStack Foundation and is also an elected member of the TC, will be taking the lead on this effort working closely with the TC.


Cultivating Community Health

Our goal is to create a sustainable and productive community where diversity is valued and leadership opportunities are successful. There were several different proposals around improving community health, including on-boarding efforts, improvements to processes and tools and recognizing relevant contributions to adjacent communities and growing leaders in the community. The vote was very close between on-boarding and growing leadership, but it was generally recognized there are a number of on-boarding efforts like Upstream University already in place, while growing leadership was a new important focal point. Steven Dake, who works at Cisco and is an individually elected member of the Board of Directors, volunteered to lead the group defining the next steps toward that goal.

Get Involved!

Throughout the day, there was lively participation and discussion about all of the topics. A broad consensus emerged that we’re entering an exciting and important phase for OpenStack that presents new challenges but also huge opportunities. If you want to help drive the future of OpenStack in any of these areas, please email the Foundation mailing list, contact one of the team leaders directly (their names are linked to their unique OpenStack profile in each section) or join the next open board meeting on April 11.

The post Community leadership charts course for OpenStack appeared first on OpenStack Superuser.

by Lauren Sell at March 15, 2017 04:34 PM


A tale of Tempest rpm with Installers

Tempest is a set of integration tests to run against an OpenStack cloud. Delivering a robust, working OpenStack cloud is always challenging. To make sure what we deliver in RDO is rock-solid, we use Tempest to run a set of API and scenario tests against clouds deployed with different installers: puppet-openstack-integration, packstack, and tripleo-quickstart. This is the story of how we integrated the RDO Tempest RPM package with those installers so it can be consumed by the various CIs rather than using raw upstream sources.

And the story begins from here:

In RDO, we deliver Tempest as an RPM that anyone can consume to test their cloud. Until the Newton release, we maintained a fork of Tempest which contained a script to auto-generate tempest.conf for your cloud, a set of helper scripts to run Tempest tests, and some backports for each release. From Ocata, we changed the source of the Tempest RPM from the forked Tempest to upstream Tempest, keeping the old source through Newton in RDO via rdoinfo. We are using the rdo-patches branch to maintain backported patches starting from the Ocata release.

With this change, we moved the script from the forked Tempest repository to a separate project, python-tempestconf, so that it can be used with vanilla Tempest to generate the Tempest configuration automatically.

What have we done to make a happy integration between Tempest rpm and the installers?

Currently, puppet-openstack-integration, packstack, and tripleo-quickstart all rely heavily on RDO packages, so using the Tempest RPM with these installers is the best match. Before starting the integration, we needed to prepare the ground: until the Newton release, all of these installers used Tempest from source in their respective CIs. puppet-openstack-integration and packstack consume puppet modules, so in order to consume the Tempest RPM, we first needed to fix puppet-tempest.


puppet-tempest is a puppet module that installs and configures Tempest, plus the Tempest plugins of the available OpenStack services, from source as well as from packages. We fixed puppet-tempest to install the Tempest RPM from the package and create a Tempest workspace. To use that feature through the puppet-tempest module, add install_from_source => 'false' and tempest_workspace => '<path to tempest workspace>' to tempest.pp and it will do the job for you. We now use the same feature in puppet-openstack-integration and packstack.
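Put together, a tempest.pp fragment along these lines selects the packaged install. This is a sketch on my part: the two parameter names are the ones quoted above, but the class invocation style and the workspace path are illustrative assumptions:

```puppet
class { '::tempest':
  install_from_source => 'false',
  tempest_workspace   => '/var/lib/tempest',
}
```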


puppet-openstack-integration is a collection of scripts and manifests for puppet module testing (which powers the OpenStack puppet CI). In the Ocata release, we added a TEMPEST_FROM_SOURCE flag to the run script. Set TEMPEST_FROM_SOURCE to false, and Tempest is then installed and configured from packages using puppet-tempest.


packstack is a utility that installs OpenStack on CentOS, Red Hat Enterprise Linux and other derivatives in proof-of-concept (PoC) environments. Until Newton, Tempest was installed and run by packstack from the upstream source, with puppet-tempest doing the job behind the scenes. From Ocata, this feature uses the Tempest RDO package instead. You can use it by running the following command:

$ sudo packstack --allinone --config-provision-tempest=y --run-tempest=y

It will perform a packstack all-in-one installation and afterwards install and configure Tempest and run smoke tests on the deployed cloud. We use the same in RDO CI.


tripleo-quickstart is an Ansible-based project for setting up TripleO virtual environments. It uses tripleo-quickstart-extras, where the validate-tempest role lives; that role installs, configures and runs Tempest on a TripleO deployment after installation. We improved the validate-tempest role to use the Tempest RPM package for all releases supported by OpenStack upstream, keeping the old workflow while using the Ocata Tempest RPM, running Tempest tests with ostestr for all releases, and generating tempest.conf with python-tempestconf through this patch.

To see it in action, run the following commands:

$ wget
$ bash --install-deps
$ bash -R master --tags all $VIRTHOST

So the integration of the Tempest RPM with the installers is finally done, and it is happily consumed in the different CIs. This will help us test and deliver a more robust OpenStack cloud in RDO, and catch issues between Tempest and the Tempest plugins early.

Thanks to apevec, jpena, amoralej, Haikel, dmsimard, dmellado, tosky, mkopec, arxcruz, sshnaidm, mwhahaha, EmilienM and many more on the #rdo channel for getting this work done in the last two and a half months. It was a great learning experience.

by chandankumar at March 15, 2017 11:41 AM

Cisco Cloud Blog

Why Millennials Don’t Think In Boxes

I was born in the mid-80s. I started breaking down computers quite early and I had PSTN internet by the time I was 16 (having frequent arguments with my parents when they regularly disconnected me in order to hold meaningless important conversations with other members of our extended Greek family).

by Kostas Roungeris at March 15, 2017 11:00 AM

SUSE Conversations

OpenStack Private Cloud is Doing Just Fine

Sometimes you have to dig beneath the surface of headlines to understand what’s really going on.  I guess that’s one thing most of us have learned in recent months. Sometimes you need some careful analysis to get to the real story. It seems like that’s as true in the IT world as it is with tabloid …

+read more

The post OpenStack Private Cloud is Doing Just Fine appeared first on SUSE Blog. Mark_Smith

by Mark_Smith at March 15, 2017 09:50 AM

March 14, 2017

StackHPC Team Blog

Logging Services for Guest Workloads: A Step Closer

How can we make a workload easier on cloud? In a previous article we presented the lay of the land for HPC workload management in an OpenStack environment. A substantial part of the work done to date focuses on automating the creation of a software-defined workload management environment - SLURM-as-a-Service. The projects that look at enriching the environment available to workload management services once they are up and running in the cloud appear to be less common.

One example that came along last week was the merge upstream of a new spec for multi-tenant log retrieval in Monasca. This proposal was made and seen through by StackHPC's Steve Simpson.

Monasca and Multi-Tenant Monitoring

Monasca monitors OpenStack, but it goes further than that.

From its inception, Monasca has been designed with the distinction of supporting multi-tenant telemetry. Any tenant host, service or workload can submit telemetry data to a Monasca API endpoint, and have it collected and salted away. Later, the user can log in to a dashboard (Grafana in many cases), and interactively explore the telemetry data that they collected about the operation of their instances.

Can your tenants do that?

The intention is that complex services like telemetry and monitoring are provided as a service, without requiring the users to create and deploy their own.

Adding Logging to the Mix

Time-series telemetry is certainly useful, but is only one part of a comprehensive solution. We also want to gather data on events that occur, and logs of activity from the services and operating systems that underpin our research computing platforms.

The Monasca project (led by the team from Fujitsu) has been working on logging support for a little while. They first presented their work at the Tokyo summit.

Logging for system and OpenStack services has been up and running in Monasca for a few releases.

What has been missing (until now) has been a way of providing multi-tenant access to log retrieval.

Reducing the Time to Science

It's clear our users have work to do, and our OpenStack projects exist to support that.

Using Monasca, we can already present log data inline with telemetry data for system administration use cases. For example, here's log and telemetry data collected from monitoring RabbitMQ services, drawn from Monasca and presented together on a Grafana dashboard:

A Grafana dashboard displaying telemetry and log messages for RabbitMQ

Once the new multi-tenant logging API is implemented, we'll be providing our users with the same services for telemetry and logging of their own infrastructure, platforms and workloads.

by Steve Simpson at March 14, 2017 12:30 PM

OpenStack Superuser

Carrots and sticks in an open source community

Leading a community group, it turns out, is completely different than being a manager. I learned this the hard way; I made mistakes, started (and continued) arguments, and sometimes, when faced with a large hole, simply continued to dig.

In late 2014 I had been working on OpenStack documentation for about a year. We were preparing the Kilo release, when the current project team lead (PTL) asked if I would be willing to put my name forward as a candidate for Liberty PTL. In early 2015, I was elected unopposed to lead the documentation team for the Liberty release and all of a sudden I realized: I had no idea how to run a community group.

At this point, I had managed docs teams of various sizes, across many time zones, for five years. I had a business management degree and an MBA to my name, had run my own business, seen a tech startup fail and a new corporate docs team flourish. I felt as though I understood what being a manager was all about. And I guess I did. But I didn’t know what being a PTL was all about. All of a sudden, I had a team where I couldn’t name each individual, couldn’t rely on any one person to come to work on any given day, couldn’t delegate tasks with any authority and couldn’t compensate team members for good work. The only tool I had in my arsenal to get work done was my own ability to convince people that they should.

If you’ve spent time as a manager in a corporate environment, you’ll be used to keeping secrets, actively managing your employees’ careers and pretending you know the answers to things (especially if you don’t). This is entirely the wrong way to go about managing a community and if your team decides you’re treating them like that, they will probably only give you a couple of chances before they eat you alive. If you get it right (even if it’s on the second go-round), the opportunities for growth and satisfaction are unbeatable. Not to mention the sense of achievement!

My first release as a PTL was basically me stumbling around in the dark and poking at the things I encountered. I relied heavily on the expertise of the existing members of the group to work out what needed to be done and I gradually started to learn that the key to getting things done was not just to talk and delegate, but to listen and collaborate. I had to not only tell people what to do, but also convince them that it was a good idea, help them to see it through, and pick up the pieces if they didn’t. You will end up stumbling around in front of your team a lot in open source. Here’s how to make it work for you:


Rewarding the good, punishing the bad

The phrase ‘carrots and sticks’ is often used to describe the process of rewarding good behavior (carrots) while punishing bad behavior (sticks). Rewards are everywhere in today’s corporations. From monetary incentives (bonuses, share schemes, gift cards and shopping vouchers), to various team activities (lunches, dinners, paintball, barista courses, and all manner of competitive activities), these awards are designed to make all your colleagues envy and despise you. Not to mention all those little branded tchotchkes (a stress ball in every imaginable shape!) we all seem to accumulate. There are so many carrots given out, that not getting any can often be a form of stick.

Of course, this is the first and most obvious thing you will notice missing when you start managing a community rather than a team in an organization. You don’t get to pay them. You don’t get to give bonuses. And the tchotchkes are harder to find and more jealously guarded.

The first thing you have to do is work out what you have got. Got access to stickers, t-shirts, or other branded merchandise? Give them out! What about privileges (and no, giving someone more responsibility is not a privilege, so don’t count things like granting core access)? Things like discounted tickets or travel to conferences, access to company-sponsored events like code sprints or meetups, or things like the ability to vote on leadership positions can all be considered perks of being part of a community.

Most of the people in your community already have a job and a career path and a manager who (hopefully) cares about those things. And that person isn’t you.

There are also other, less obvious, things that you can do to motivate people: always be willing to call out great work (or even mediocre work, especially if it came from someone surprising) in a public way, but keep any criticism private. Thank people, a lot, for everything. Make sure when people email you asking questions you add other people into the conversation, with a note like “this person is an expert on this topic, and I’d love to hear their opinion.” Flattery and thankfulness are some of the best tools you have to motivate people on your team, and they’re entirely free. Just try and keep your ego in check and let other people take credit wherever possible.


Performance measurement as a behavior management tool

One of the more popular performance measurement methods is referred to as a nine box, or a talent matrix. The idea of ranking employees can be quite distasteful, so it’s important to remember that it’s not about ranking staff against each other, but against themselves and their own past performance. You should be able to see each employee move from the lower left to the top right of the matrix as they improve in their role. Once they hit the top right box, you must be ready to promote them, at which point they drop to the center of the matrix, and start the journey up and to the right again. Of course, the opposite is also true: if an employee starts to track down and to the left, you need to be having some serious discussions about the role that the person is in, whether or not they’re having personal issues, or whether they’re the right fit for your team, or your company. The point is, there’s something of a science to this when you’re in a corporate environment.

That science becomes much more of a dark art when you’re leading a community. For starters, though, you need to remember that it’s not really your responsibility. Most of the people in your community already have a job, and a career path, and a manager who (hopefully) cares about those things. And that person isn’t you. Also, those things are (and should be) fairly opaque to you; it’s really none of your business.

But that doesn’t mean that you shouldn’t care about your team’s performance within the constraints of your group. Most communities have levels of responsibility within them. From leading sub-teams, to being involved in testing, sitting on advisory boards, or becoming core contributors with greater levels of trust and expectation, right up to leading the group itself, you need to be aware of the aspirations of your community members, and ensure you’re letting people know the options available to them should they wish to progress. An added complication is where these two things intersect. You never know if a team member is getting pressure from their corporate manager to achieve a certain role within your community, or what value (if any) companies might place on metrics, roles, or positions of trust within your community. You can’t always rely on team members to tell you what they’re trying to achieve, either; sometimes they make you guess.

The best way to ensure you’re helping individuals succeed in their performance goals is to make sure you understand what those goals are. This isn’t always easy and you can’t necessarily assume that all team members want to progress, either. This is even more true in communities than in companies, since a promotion often means more responsibility without any increase in benefits (you’re not paying them, after all). The best way to do this is to ask people, and the best way to get a good answer is to do it in private. Don’t be afraid, as a community leader, to reach out to people directly, saying something along the lines of ‘hey, I noticed you’re doing great work, and wanted to have a chat to you about the kinds of things you’re interested in working on, and how I can help you achieve your goals within our community.’ You won’t always get all the answers you might want, in the detail you might need to really help them, but at least you’ll have a better idea than just assuming everyone wants to become a team leader some day.


The performance art of trust

Perhaps more important than rewarding staff, or accurately recording their performance improvements, is proving that you trust them. A little trust goes a long way and can positively impact loyalty, morale and retention. As a manager, there are many occasions where you are privy to information you can’t share with staff: financial or company performance information, restructures, or even layoffs. The thing is, your team members are probably pretty smart and they probably know that this is a thing. You need to reassure them that as soon as you can tell them, you will. But there’s more to it than that; there’s something I like to call the performance art of trust, which is more about telling them when you don’t know something than when you do. If someone asks you a question, and you don’t know the answer, don’t make things up! Just come right out and say it: “I don’t know.” You might find it helpful to practice, because it’s not easy to do when you’re trying to be all manager-y and stuff. In fact, most of your training to become a manager has probably been about pretending you know the answers when you don’t, which is exactly the wrong way to go about it.

Of course, as a community leader, you almost never have access to information before anyone else, so you may be wondering why this is relevant. It’s because the principles of honesty and trust are just as important, if not more so, in community situations than they are in a workplace. Community members will work with you if they want to work with you, and they won’t work with you if you’re a twit. The best way to look like you’re someone worth working with is if you appear open, honest, communicative, trustworthy, and reasonable.

One of the hallmarks of many open-source communities is the somewhat impassioned email communications that occur. Flame wars on mailing lists are a feature of open source communities and they happen out in public. Treat them as performance art, where your audience is focused on one thing: is this person someone I trust to lead this group?

It’s easy to come out of a flame war looking like a buffoon, so here’s a couple of tips:

  • Always read the email you’re replying to thoroughly, several times, and try to understand the sender’s point of view. Work out what the question is (if there isn’t a question, then don’t send a reply).
  • Start your reply by thanking the sender: for bringing up a point you hadn’t considered, for asking questions, or even just for taking the time to put their thoughts into words. This forces you to assume good intent.
  • Answer the question, AND ONLY THE QUESTION. Give reasons for your answer, using dot points if necessary, but don’t bring other issues into it, and certainly don’t launch into personal attacks. Be prepared to question your own beliefs about things (you don’t have to change your mind, but you need to be open to other perspectives).
  • Ask a question of your own: Do they think this is reasonable? Can they think of something you might have missed? Do they have further comments?
  • Don’t hit send yet. Leave it as long as possible; at least a couple of hours, preferably overnight. This not only gives you time to calm down a little, but it also slows down the exchange on the mailing list, which will hopefully remove some heat.
  • Before you hit send, proofread, and take out all the emotional language. Work out if you can consolidate some points, reorganise the content to be clearer, or take out irrelevant information. Be as concise and to-the-point as possible.
  • If the thread has been dragging on and there’s no progress, take it offline. Admit that perhaps you don’t fully understand their viewpoint and offer a video or phone call, even if it’s an early morning or late night for you. Be the bigger person, take the hit. People are always much nicer on the phone, and if nothing else, it stops the flamewar.

To conclude, communities are definitely more forgiving than corporations, so you’ll probably get away with stumbling around a little bit before you find your feet. However, they’re also run almost entirely on trust and goodwill and you can erode that really quickly and sometimes without noticing. Be honest when you don’t know something, own up to your mistakes, never shift blame (even if you really didn’t do it), and always lift your team members up. If you can nail those things, then you’ll find a way through, and I’m willing to bet that you’ll become a better manager in the process as well.

Superuser is always interested in community content; get in touch at

Cover Photo // CC BY NC


The post Carrots and sticks in an open source community appeared first on OpenStack Superuser.

by Lana Brindley at March 14, 2017 12:20 PM

Mark McLoughlin

March 9, 2017 OpenStack Foundation Board Meeting

The OpenStack Foundation Board of Directors met in-person for two days in Boston last week.

The first day was a strategic planning workshop and the second day was a regular board meeting.

Executive Director Update

After a roll call and approving the minutes of the previous meeting, Jonathan gave a presentation outlining his perspective on where OpenStack is at.

He talked about perception challenges that the Foundation is working to address, emphasizing the messages of "costs less, does more" and "all apps need Open Infrastructure".

He described how the Foundation has been iterating on a "pitch deck" and had completed twenty or so interviews with journalists where they presented this deck in a concerted effort to "change the narrative". This begins by talking about some recent industry trends, including a slowdown in the rate of public cloud growth and the coverage of Snap's significant spending on public cloud services compared to their revenues.

It goes on to compare "Gen 1" and "Gen 2" private clouds, with the challenges moving from technology and people to culture and processes. The advantage of a multi-cloud strategy, with sophisticated workload placement was discussed as well as some detailed information backing up the claim that companies are achieving cost savings with OpenStack.

Jonathan described how this presentation had been very well received, and had resulted in some very positive coverage.

Jonathan briefly touched on key improvements in Ocata and information about the profile of our contributors. This resulted in some debate about the value of "vanity metrics", and there was general agreement that while there continues to be demand for this information, we should all be cautious about over-emphasizing it.

Finally, Jonathan reflected on feedback from the Project Team Gathering in Atlanta. Attendance was strong, and attendees reported fewer distractions and a greater ability to have important discussions because of the smaller venue and attendee count. While it was mostly seen as a great success, there are areas for improvement, including the approach to choosing hotels and committing to hotel room bookings, how scheduling is organized, the size of some rooms, etc. Jonathan also mentioned that the success of the Forum at the OpenStack Summit in Boston will be key.

User Committee and Compensation Committee

The board spent a fairly short time discussing two matters from its committees.

Firstly, a proposal from the User Committee Product Working Group for facilitating the organization of the schedule for The Forum in Boston was well received; the board encouraged all interested parties to work with the Foundation staff who are coordinating the planning, particularly Tom Fifield and Thierry Carrez.

Secondly, the board approved Jonathan's goals for 2017, which had been prepared by the Compensation Committee and sent to board members earlier for review. The board only sets Jonathan's goals, and empowers him to set the goals for the rest of the staff. Related to Jonathan's goals, the board also had some discussion later around documenting the goals of the board itself.

Membership Applications

Significant time during the day was given over to considering some corporate membership applications.

With HPE resigning its Platinum membership, we had invited applications to take over its slot and received applications from Ericsson and Huawei. Both companies had previously applied in November 2014, at the OpenStack Summit in Paris, to replace Nebula as a Platinum member, but Intel had been successful that time.

Anni Lai presented for Huawei and Chris Price presented for Ericsson. Both described their employer's vision for OpenStack, and their wide range of contributions to date. Both also talked about their position in the market, their key customers, and how they are growing the OpenStack ecosystem. The presentations were well received and, after an executive session, the board voted to approve Huawei as a Platinum member. The board expressed their gratitude for Ericsson's interest and preparation, observing that having multiple companies interested in Platinum membership opportunities is a sign of the strength of our community.

Representatives from H3C also presented their application for Gold membership. H3C is a prominent IT vendor in China's enterprise IT market, distributes an OpenStack based cloud product, has several very large reference customers, and is establishing itself as a technical contributor to the project. After executive session, the board voted to approve H3C as a Gold member.

Wrapping Up

One piece of good news wasn't public during the meeting, but has since been announced by Jonathan:

The Board also approved the promotion of Thierry Carrez to VP of Engineering for the OpenStack Foundation. Thierry has been a leader in the technical community since the beginning of OpenStack and has also built a team within the Foundation focused on upstream collaboration.

I think it's safe to say the board were warmly supportive of this change, wished Thierry every success in this new role, and looked forward to working more closely with Thierry than ever before.

And, with that, the board dispersed feeling pretty fried after an intense couple of days of discussions!

by markmc at March 14, 2017 11:00 AM

Hugh Blemings



Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here.

Basic Stats for the week 6 to 12 March for openstack-dev:

~447 Messages (down nearly 22% relative to the long term average)

~135 Unique threads (down just shy of 25% relative to the long term average)

Traffic about the same as last week, if anything up slightly.  A busy few days conspired against me so Lwood is a bit short and a bit late this week; apologies to those who I know set their clocks by its arrival… ;)

Notable Discussions – openstack-dev

OpenStack Summit Boston Schedule Available

Erin Disney writes that the schedule is now up for the Boston Summit later this year.

Call for Mentors at upcoming Summit

Emily Hugenbruch notes that the upcoming Boston summit will again provide an opportunity for Mentors to assist newcomers to OpenStack in getting up to speed.  If you’re interested, please follow the info in Emily’s email and sign up.

OpenStack PTG Atlanta summary of summaries

As mentioned in the previous couple of Lwoods, with the Atlanta event concluded, summaries of the event, mostly from a project standpoint, are rolling in. There were a few more this past week, listed below, and the original blog post has been updated too.

End of Week Wrap-ups, Summaries and Updates

Two this week; Ironic (Ruby Loo) and Nova (Balazs Gibizer)

People and Projects

Core nominations & changes


Further reading

Don’t forget these excellent sources of OpenStack news – most recent ones linked in each case


No tunes this week, was again working remotely and needed all the concentration I could muster despite the relative simplicity of the task at hand! :)

by hugh at March 14, 2017 09:37 AM

Mark McLoughlin

March 8, 2017 OpenStack Foundation Strategic Planning Workshop

The OpenStack Foundation Board of Directors met in-person for two days in Boston last week.

The first day was a joint workshop with the Technical Committee, User Committee, and Foundation staff. The workshop was planned in response to the "OpenStack Futures" discussion at our three previous board conference calls in November, December, and January.


We began the workshop with very brief personal introductions, followed by Alan, Thierry, and Edgar giving an overview of the roles and responsibilities of the Board of Directors, the Technical Committee, and the User Committee.

State of the Union

Next, Mark Collier presented his view of the state of OpenStack, with a particular emphasis on the four areas planned for discussion during the day. Mark began by talking about the exciting opportunity we had by having such breadth and depth of expertise in the room, and appealed to everyone to put aside their particular roles and work together as a single leadership team to talk about the work.

Evolving The Architecture

Mark spent some time trying to demystify the Big Tent change, describing how the previous Stackforge/Incubation/Integrated stages have been replaced by almost any project being welcome to use OpenStack infrastructure with the TC responsible for reviewing applications to join the set of official OpenStack projects known as "The Big Tent".

Mark then described one of the key changes happening in OpenStack right now as the containerization of the control plane, with projects like Kolla, openstack-helm, and TripleO all tackling this area. He also talked about the work happening around running containers on OpenStack itself with projects like Kuryr, Fuxi, Magnum, and Zun, but he also wondered aloud whether we're addressing all the right integration points. He also described some of the ongoing debates about the scope of OpenStack and our technology choices, with topics like the use of golang, the Gluon project, whether we welcome competition within the Big Tent, and community-wide goals.

Finally, Mark gave us a preview of the work happening around version 2 of the OpenStack Project Navigator and talked about how this will play a key role in helping people understand what OpenStack provides and how it can be used.

Unanswered Requirements

Mark talked briefly about the working groups under the User Committee and the transition from Design Summit to Project Team Gathering and Forum formats. These concepts are all important in understanding how OpenStack thinks about our requirements gathering, strategic long-term planning, and implementation planning.

Mark also gave a preview of some detractor quotes from our user survey, and emphasized a common theme - the perceived and actual complexity of OpenStack, both in terms of understanding and operating the software.

Adjacent Communities

Mark classified the various sets of adjacent communities that we are particularly interested in developing strong relationships with. Container technologies like Kubernetes, Docker, and Mesos. PaaS technologies like CloudFoundry and OpenShift. NFV projects like OPNFV and Cloudify. Provisioning technologies like Terraform, Puppet, and Saltstack. And specific ecosystem relationships, with companies like CoreOS.

Mark described the change in the Foundation's event strategy, targeting events like KubeCon, DockerCon, CoreOS Fest, etc. as key events where we should be positioning the OpenStack brand and developing relationships.

He also described particular focus areas of individual staff members which are relevant to the topic - Chris Hoge working with upstream Kubernetes and running OpenStack SIG meetings, David Flanders working on a report around the gaps when running platforms on OpenStack (like Cloud Foundry, OpenShift, Kubernetes, and Terraform), and how Ildiko Vancsa and Kathy Cacciatore are both working closely with OPNFV.

Finally, Mark talked about the Open Source Days event at the OpenStack Summit in Boston, as well as some very early stage discussions for an OpenDev event which would be a small, focused event around improving the integration between applications frameworks and open infrastructure.

Community Health

The final area of discussion was the subject of community health, and Mark first put out some statistics that he felt painted a very reassuring picture of the community's health. In 2016, we had 3,500 unique contributors, 1,850 of which were retained from 2015. In Ocata, we had fewer developers than Newton, most likely because it was a shorter cycle.

Mark contrasted challenges, with projects like Trove and Designate losing contributors while projects like Kuryr, Kolla, and Zun saw the greatest number of new contributors.

Similarly, Mark talked about HPE laying off upstream developers, Cisco killing off intercloud, and a small slowdown in Summit sponsorships, while we have also added 7 more Gold members, and many first-time corporate members and Summit sponsors.

Strategic Planning Exercise

The rest of the day was given over to a multi-stage strategic planning exercise prepared by Allison and Alan. The idea was to discuss these focus areas, gather everyone's ideas for improvement, summarize and categorize these ideas, vote on ideas in each focus area, and finally agree on how to proceed with concrete goals for the next 6-9 months.


The initial discussion covered a lot of ground. Allison introduced each focus area by describing the input we gathered via the etherpads and input she gathered through 1:1 interviews with a variety of people.

One topic of discussion related to how OpenStack can simplify how we describe OpenStack, particularly to reduce confusion introduced with the Big Tent change. Various ideas around categorization, tagging, vertical definitions, a concept of constellations, maturity ratings, and much more, were discussed.

We talked about the promise for the future that OpenStack provides. That there will be evolution over time, that we deliver the cloud solutions of today and will deliver the solutions of tomorrow. That the challenge of smooth upgrades is part of our challenge in delivering "future proof infrastructure".

We talked about the challenges of scalability, manageability, and complexity. The theme of containerized deployments, the need for vertically focused views of OpenStack, for example for Telco users. We discussed the need for OpenStack to be able to evolve over time, with refactoring or rewriting components being only one of the possible approaches we may see over time.

We talked at great length about how OpenStack could work more closely with adjacent communities. How the relationship with these communities should bring value to both communities. We particularly emphasized the need for a closer relationship with the CNCF and the Kubernetes community.

Gathering Ideas

Over lunch, everyone wrote their concrete, actionable ideas for improvement on sticky notes and put them on flipcharts for each of the areas of discussion. Later, Jonathan volunteered to group the ideas into themes, and summarized these themes for the group, facilitating further discussion before voting on which theme in each area we should particularly focus on.

On the subject of communicating about "what is OpenStack", the main themes were marketing activities, various categorization ideas, and an idea Allison talked about earlier referred to as "constellations". We later voted to focus on the categorization area and formed a group of interested parties:

Communicate about OpenStack: Categorize (objective data) and map (subjective approach) OpenStack projects as base versus optional (within a specific use case), integrated versus independent release, emerging versus mature, stability, adoption metrics, what works together, services versus consumption (operational tools/client libraries), and other criteria

Names: Thierry Carrez [lead], Alan Clark, Allison Randal, Jon Proulx, Melvin Hillsman, Lauren Sell, Tim Bell, Mark Baker, Kenji Kaneshige

For unanswered requirements, we discussed how to prioritize, ideas around a solution focus, scalability challenges, and a list of specific features that people felt were important. A counter-point was made that rather than focusing on any of these ideas, perhaps the focus should be on working with adjacent communities. Later, we discussed the need to grow the connection between the Product Working Group, the TC, and individual projects. The outcome and group for this was:

Requirements: Bring different groups (UC/technical/etc) together at Forum to collaborate/communicate around user stories, gap analysis, what fits in the current state of tech, prioritize what would have the greatest impact in reducing pain for users.

Names: Melvin Hillsman [lead], Yih Leong Sun, Jon Proulx, Rob Esker, Emilien Macchi, Doug Hellmann, Tim Bell, Shamail Tahir

On the topic of adjacent communities, we observed that by far the most dominant area of discussion was the need to create better connection with the Kubernetes community. The themes were community engagement, technical engagement, OpenStack consuming technology from the Kubernetes and containers world, and making OpenStack technology more consumable by Kubernetes. In the end, there was strong consensus to focus on the consumability of OpenStack technologies:

Adjacent Technologies: Make our technology more consumable (independently) by other communities/projects.

Names: Chris Price [lead], Alan Clark, Dims, Rob Esker, Mark Collier, Steven Dake, Mark McLoughlin, Shamail Tahir

For changes to the technology, we discussed simplifications, making containers first class citizens, recording tribal knowledge, culling failed efforts, converging deployment tools, and welcoming emerging or competing projects. The theme we voted to focus on was:

Changes to the Technology: Workstream to simplify existing projects, reduce dependency options, reduce config options.

Names: Mike Perez [lead], --> TC project

Finally, on the subject of community health, we talked about onboarding contributors, reworking our processes, community tools, growing leaders, corporate involvement in the project, and recognizing work with adjacent communities. We voted to focus on the leadership theme:

Community Health: Grow next generation of leadership/experts/cross-project devs within the community

Names: Steven Dake [lead], Chris Price, Jeremy Stanley, Dims, Alan Clark, Joseph Wang

Next Steps

For each of these focus areas, the lead person in the group committed to organizing a kick-off meeting by March 22nd. The real work will begin there!

by markmc at March 14, 2017 07:00 AM

March 13, 2017

Erasing complexity, submitting a summit talk, and more OpenStack news

Are you interested in keeping track of what is happening in the open source cloud? is your source for news in OpenStack, the open source cloud infrastructure project.

OpenStack around the web

From news sites to developer blogs, there's a lot being written about OpenStack every week. Here are a few highlights.

by Jason Baker at March 13, 2017 04:45 PM

OpenStack Superuser

How to set up your work environment to become an OpenStack developer

Although there are developer and wiki guides on how to get started with OpenStack, I have found them a bit overwhelming as a beginner. After reading various docs and asking for help from my mentor, who is a core contributor in OpenStack, I came up with the following easy-to-follow guide.  If you still face any problems while setting up your environment, feel free to reach out by commenting below.

1. Install Ubuntu in an Oracle Virtual Box

A virtual machine (VM) is recommended because usually we don’t want a ton of dependencies installed on our everyday environment. Also, if at any point we mess up things, it’s easier to start over from scratch using a VM.

  1. Download a suitable Oracle Virtual Box for your operating system.
  2. Download the desired Ubuntu iso file.
  3. Install and start Oracle Virtual Box.
  4. Click on the “New” button in the wizard, give the new virtual machine a name, check “Linux” in the “Type” area and check “Ubuntu” in the “Version” area (32 or 64-bit, depending on downloaded iso file).
  5. Set the amount of RAM (ideally not more than 50% of your total RAM). Something to keep in mind: DevStack will perform best with 4GB or more of RAM.
  6. Select “Create a Virtual Hard Disk Now,” check “VDI (VirtualBox Disk Image),” then select “Dynamically Allocated” and finally set the hard disk size (60–100 GB ideally).
  7. Double-click your new machine in the left menu and select the downloaded iso file.
  8. Next, click “Install Ubuntu”. Click “Continue” and then select “Erase Disk and Install Ubuntu”. (Note: this will not erase files on your local machine). Complete the rest of the wizard and, finally, you’ll have Ubuntu installed inside your VM.
  9. Before restarting the VM:
    • Select your machine and click on “Settings”.
    • Under the “Storage” tab, check if the installation iso file is still present; if it is, select and remove it.
  10. To work in full screen, install guest additions. To do so, restart your VM, click on “Devices” in the menu, select “Insert Guest Additions CD image” and press “Run”. After completion, press “Enter” and restart your VM.

NOTE: Press Ctrl+Alt+T to open the Terminal application. To distinguish commands from normal sentences, a $ sign has been added at the beginning of each command.  So, type the words after the ‘$’ into the terminal and then press “Enter” to run the command. To copy and paste commands, open this web page inside a web browser in your new VM. After selecting the command text, press Ctrl+C to copy, and then inside the terminal press Ctrl+Shift+V to paste. Also, words which have to be replaced by information specific to you are indicated by capitalized words, like YOUR_FIRST_NAME.

2. Set up a Stack user with superuser permissions

DevStack should be run as a non-root user with sudo enabled (standard logins to cloud images such as “ubuntu” or “cloud-user” are usually fine). Since this user will be making many changes to your system, it will need to have sudo privileges.

a) Create the group stack and add the user stack in it:

$sudo groupadd stack

$sudo useradd -g stack -s /bin/bash -d /opt/stack -m stack

b) Grant superuser permissions to the stack user:

$sudo su

$echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
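Incidentally, appending directly to /etc/sudoers works, but a common alternative (an aside on my part, not a step from the original guide) is to give the rule its own file under /etc/sudoers.d/, which sudo includes automatically on modern Ubuntu. That file would contain the single line:

```
stack ALL=(ALL) NOPASSWD: ALL
```

Either way, the effect is the same: the stack user can run any command via sudo without being prompted for a password.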

c) Logout as your default user:

d) Create a password for the stack user:

$sudo passwd stack

e) Now to login as the stack user:

$su stack

f) Go to the home directory of the stack user:

$cd ~

3. Set up SSH Keys

a) Check the present working directory; it should output “/opt/stack”:

$pwd
If it doesn’t output that, redo steps 2.e) and 2.f) before moving on.

b) Create a new SSH key, using your provided email as a label:
$ssh-keygen -t rsa -C ""

c) Press the “Enter” key to accept the default file location in which to save the key.

d) At the prompt, type a secure passphrase. You may keep it empty by directly pressing the “Enter” key for no passphrase.

e) Start the ssh-agent in the background
$eval "$(ssh-agent -s)"

f) Add your SSH key to ssh-agent.
$ssh-add ~/.ssh/id_rsa

g) Download and install xclip.
$sudo apt-get install xclip

h) Copy the SSH key (i.e. contents of the file) to your clipboard.

$sudo xclip -sel clip < ~/.ssh/

i) If you don’t have a GitHub account, first create one. Then log in to your GitHub account, go to “Settings”, click “SSH and GPG keys”, then select “New SSH key”. Write a description in “Title” and paste your key into the “Key” field (to paste, press Ctrl+V). Finally, press “Add SSH Key”.

j) Test your connection using ssh:

$ssh -T

Now if the fingerprint matches, type “yes.” If you now see your username in the message, you have successfully set up your SSH key!
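If you’d like to see what ssh-keygen produces without touching your real ~/.ssh directory, the same commands can be exercised against a throwaway key (the email, key size, and paths below are purely illustrative):

```shell
# Generate a disposable key pair in a temporary directory (no passphrase, quiet mode).
tmpdir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -C "demo@example.com" -f "$tmpdir/id_rsa"

# The public half is what you paste into the GitHub "Key" field:
cat "$tmpdir/id_rsa.pub"

# Show the key's length and fingerprint:
ssh-keygen -lf "$tmpdir/id_rsa.pub"
```

This is only a sandbox exercise; for the real setup, keep using the key created in ~/.ssh above.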

4. Set up Git

a) Check the present working directory; it should output “/opt/stack”:

$pwd
If it doesn’t output that, redo steps 2.e) and 2.f) before moving on.

b) Install git from terminal:

$sudo apt-get install git

c) Use config and write in your full name:

$git config --global user.name "YOUR_FIRSTNAME YOUR_LASTNAME"

d) Use config and write in your email address:

$git config --global user.email "YOUR_EMAIL_ADDRESS"

e) Check your git configuration

$git config --list   
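To convince yourself how these config commands behave without touching your global settings, you can try them against a throwaway repository (the directory, name, and email below are placeholders, not part of the guide):

```shell
# Create a disposable repository in a temporary directory.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo

# Repository-local config; adding --global would instead affect every repository.
git config user.name "Jane Doe"
git config user.email "jane@example.com"

# Reading a key back prints its current value:
git config user.name    # prints: Jane Doe
```

Values set with --global live in ~/.gitconfig, while the local values above live in the repository's .git/config and take precedence there.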

5. Set up DevStack

a) Check the present working directory; it should output “/opt/stack”:

$pwd
If it doesn’t output that, redo steps 2.e) and 2.f) before moving on.

b) Download DevStack

$git clone

c) Go to the devstack directory

$cd devstack

d) Copy the sample configurations into this current directory

$cp samples/local.conf .

e) Notice that the main projects (Keystone, Nova, etc.) are already downloaded, but the clients (like python-keystoneclient or python-novaclient) are not.

The following commands will download the source code of python-keystoneclient and let you modify it for your development. Then, you can test it against DevStack’s OpenStack cloud:

$sudo apt-get install vim
$vim local.conf
Press “i” to get into INSERT mode
At the bottom, add the line LIBS_FROM_GIT=python-keystoneclient
Press “Esc” to get back into COMMAND mode
Type :wq to save and quit
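
For orientation, the localrc part of the local.conf you copied in step d) generally looks something like the following. This is only a sketch based on DevStack's sample file, not your exact copy; values may differ:

```ini
[[local|localrc]]
; Passwords used by the services DevStack deploys
ADMIN_PASSWORD=nomoresecret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
; Clone this client from git so you can hack on it locally (the
; line you add in step e)
LIBS_FROM_GIT=python-keystoneclient
```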

f) Run the stack script:

$./stack.sh

You will get the following output after successful completion of the command.

The default users are: admin and demo
The password: nomoresecret

g) In any web browser open:


If you are able to establish a connection, then you have successfully setup your DevStack!

6. Set up Gerrit

Gerrit is the code review system used in OpenStack development. The git-review tool is a git subcommand that handles all the details of working with Gerrit.

a) Join the OpenStack Foundation

b) Create a LaunchPad account

c) Open a terminal window and check the present working directory; it should output “/opt/stack”:

$pwd

If it doesn’t output that, redo steps 2.e) and 2.f) before moving on.

d) Copy your SSH key:

$ sudo xclip -sel clip < ~/.ssh/

e) Open the Gerrit site, click “Sign In” (top-right corner) and log in with your Launchpad ID.

f) Now click on “Settings,” then select “SSH Public Keys” and press “Add Key”. Press Ctrl+V to paste the key and then click “Add.”

g) Install git review

$sudo apt-get install git-review

h) Check if git review works inside the Keystone (or any other project for that matter) directory of OpenStack:

$cd keystone

$git review -s

7. Install dependencies

a) Check the present working directory; it should output “/opt/stack”:

$pwd

If it doesn’t output that, redo steps 2.e) and 2.f) before moving on.

b) Install the dependencies:

$sudo apt-get install python-dev python3-dev libxml2-dev libsqlite3-dev libssl-dev libldap2-dev libffi-dev

8. Run the tox command

Each project, like Keystone, Nova or Cinder, has a tox.ini file defined in it. It defines the tox environments and the commands to run for each environment. Please note that subsequent runs of tox will be faster because everything fetched will already be in .tox.
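
As a rough illustration of what such a file contains (this is a minimal sketch, not Keystone's actual tox.ini; the environment list and commands are simplified):

```ini
[tox]
; Environments run by default when you type plain "tox"
envlist = py27,pep8

[testenv]
; Shared settings: install the project plus its test dependencies
usedevelop = True
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = python -m pytest {posargs}

[testenv:pep8]
; Style checks only
commands = flake8
```

Running `tox -e pep8` would execute only that one environment instead of the whole envlist.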

a) Check the present working directory; it should output “/opt/stack”:

$pwd

If it doesn’t output that, redo steps 2.e) and 2.f) before moving on.

b) Install tox and pbr:

$sudo apt-get install python-tox

$sudo pip install pbr

c) Update and upgrade:

$sudo apt-get update

$sudo apt-get upgrade

d) Go inside any project directory like Keystone, Cinder or Nova:

$cd keystone

e) Run tox:

$tox

f) If you get the following output on the terminal after entering the command, tox has run successfully.

____________________________summary ______________________________
py27: commands succeeded
pep8: commands succeeded
api-ref: commands succeeded
docs: commands succeeded
genconfig: commands succeeded
releasenotes: commands succeeded

Now that you have set up your work environment, you can start contributing as a developer. For more tips on that, stay tuned!

Nisha Yadav is a former OpenStack Outreachy intern and current OpenStack contributor. This post first appeared on her blog. Superuser is always interested in community content, email:

The post How to set up your work environment to become an OpenStack developer appeared first on OpenStack Superuser.

by Nisha Yadav at March 13, 2017 11:48 AM

Cloudify Engineering

Cloudify's TOSCA Journey - The Convergence of ARIA and TOSCA (An Infographic)

Cloudify have been early adopters and implementers of TOSCA, betting on TOSCA as early...

March 13, 2017 12:00 AM

March 12, 2017

David Moreau Simard

An even better Ansible reporting interface with ARA 0.12

Not even a month ago, I announced the release of ARA 0.11 with a bunch of new features and improvements.

Today, I’m back with some more great news and an awesome new release, ARA 0.12(.3) !

That’s right, 0.12.3!

Due to the nature of this new release, I wanted to be sure to get feedback from the users before getting the word out.

We got a lot of great input! This allowed us to fix some bugs and significantly improve the performance.

0.12 features a completely re-written and re-designed web application user interface. Let’s look at some of the highlights !

A new web application interface

I know what you’re most interested in is… WHAT DOES IT LOOK LIKE !?

What it looks like

Here’s some highlights of the new user interface — it doesn’t end here so please read on !

The home page now features the data recorded by ARA:


The core of the user interface now revolves around one single page where you’ll be able to find all the information about your playbooks:


Quickly have a glance at your playbook host summary:


Or dig into the host details to look at all the facts Ansible gathered for you:


Figure out which tasks took the longest just by sorting the table accordingly:


Or search to figure out which tasks failed:


Click on the action to get context on where this task ran:


Or click on the status to take a look at all the details Ansible has to offer:


The logic behind the UI changes

There were three main objectives with this refactor of the web interface.

Improve UX

A lot of effort was spent on the user experience. You need to be able to find what you want: intuitively, quickly and easily.

Data and result tables are now sortable and searchable by default and browsing tips were added to the interface to help you make the most of what it has to offer.

Scalability and performance

The interface must be fast, responsive, clutter-free, make sense and behave consistently across your use case scenarios, whether you are looking at reports which contain five tasks or ten thousand.

Pagination settings have been introduced in order to customize your browsing experience according to your needs.

Static report generation time and weight

Another objective of this user interface work was to optimize the static report generation performance and weight.

Static generation is one of the great features of ARA which is very heavily used in the context of continuous integration where the report is generated and attached to the artifacts of the job.

Here’s a real-life example of the same database being generated on ARA 0.11 and ARA 0.12:

ARA integration tests (5 playbooks, 59 tasks, 69 results):

  • Before: 5.4 seconds, 1.6MB (gzipped), 217 files
  • After: 2 seconds, 1.2MB (gzipped), 119 files

OpenStack-Ansible (1 playbook, 1547 tasks, 1667 results):

  • Before: 6m21s, 31MB (gzipped), 3710 files
  • After: 20 seconds, 8.9MB (gzipped), 1916 files

For larger scale playbooks, we’re looking at a generation performance that is over 19 times faster. I’m really happy about the results.

But wait, there’s more

If you thought the UI work was enough to warrant its own release, you’re right !

Some other changes sneaked their way into this release as well.

First party WSGI support

A lot of ARA users were interested in scaling their centralized deployment. This meant helping users deploy the ARA web interface through WSGI with a web server.

To help people get going, we now ship a WSGI script bundled with ARA and documented how you can set it up with Apache and mod_wsgi. The documentation is available here.
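
As a rough illustration of such a setup (this is not ARA's official configuration; server names and filesystem paths below are placeholders, and the shipped WSGI script's real location is in the linked documentation), an Apache virtual host serving a WSGI application generally looks like:

```apache
<VirtualHost *:80>
    # Placeholder hostname
    ServerName ara.example.org

    # Run the app in its own daemon process group
    WSGIDaemonProcess ara user=www-data group=www-data processes=4 threads=1
    # Placeholder path to the WSGI script bundled with ARA
    WSGIScriptAlias / /path/to/ara/wsgi.py

    <Directory /path/to/ara>
        WSGIProcessGroup ara
        Require all granted
    </Directory>
</VirtualHost>
```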

Other things

  • Fixed syntax highlighting when viewing files
  • Preparations for supporting the upcoming Ansible 2.3 release
  • Started working on full python 3 support
  • Various performance improvements

Well, that’s it for now

That was certainly a lot of stuff in one release !

I hope you’re enjoying ARA - if you’re not using it yet, it’s easy !

Have a look at the documentation to learn how to install ARA and how to configure Ansible to use it.

If you have any questions, feel free to drop by on IRC in #ara on the freenode server or hit me up on twitter: @dmsimard.

by dmsimard at March 12, 2017 03:00 PM

March 11, 2017

Sean Roberts

10 Steps for the Boss to Understand Your Upstream Project

Previous articles on Open Source First  have been more strategy than recipe. You need a clear, easy to understand plan for making the case for an upstream project to your manager. To help you with your boss, I have rewritten the How to use Public Projects to Build Products article into a list of ten steps. These …

The post 10 Steps for the Boss to Understand Your Upstream Project appeared first on sarob.

by sarob at March 11, 2017 09:12 PM

March 10, 2017

OpenStack Superuser

Containers are here to stay – until they’re not

Containers are the price of admission to the modern platform, says Google’s Kelsey Hightower, but they’re just the start.

At the recent Container World conference, Hightower, formerly of CoreOS and now a developer advocate at the company that created Kubernetes, talked about the huge gap between deploying applications and managing them in production and how to move beyond the single-machine programming model and start adopting API-driven distributed systems to unlock the true value of containers.

For starters, Hightower doesn’t believe the hype about them. “Containers are the thing we talk about today, that we’re excited about. Five years from now we’ll hate them all. Guarantee it, we’ll hate them all.”

He prefers to talk about what can be done with this new set of ideas and concepts — “moving up the stack of thinking” — considering the container as just a box that allows users to embrace the concept of platform-as-a-system.

“Containers are not platforms. When people say ‘We’re going to adopt containers,’ to my mind it’s like saying, ‘We’re going to adopt .zip files.’” What they mean, he says, is they are planning to adopt Mesos, Swarm, Kubernetes, etc. — they are considering adopting a platform for the first time. “The container thing isn’t that exciting anymore,” he says, soliciting applause from the audience for Docker “for making containers real.”

A lot of platforms look like Kubernetes now, he adds, and the interesting part about this platform is the API server.

The current platforms are like Linux distros: it’s a platform, but the APIs are all over the place — bash shell, cron jobs, package managers, some shell scripts, etc. Instead, if you look at Kubernetes, you see an API server, and if you give the user a container, all the components around it will do something fantastic with it, Hightower says.

“First we’ll place it onto a machine — step one, that’s the obvious thing, rolling updates — then once the app is deployed what happens next?” He then launched into a demo of iconic 80s video game Tetris, where the goal is to place the blocks in the right place, based on what shows up on the screen now.

“Users hit one button and blocks fall down stacking up randomly: it’s fully automated and hands-off completely. But you can see to the left and to the right, however, that CPU and memory are totally being lost. You’re just losing on this one, it’s totally automated but you have no resource awareness. Your scripts aren’t designed to handle any conditions that are real time.”


In the Kubernetes world, each workload is examined and instead of assigning it statically to a machine or relying on scripts, each workload gets evaluated and a decision is made. Bin packing is used, so as the workload “falls,” the user decides where it should land, moving it to the right place.

All of that looks great with new development but there’s a reason to curb container enthusiasm if you’re in a legacy environment. “[For] enterprise, the scenario is not so pretty: you start out in a different world, it’s not greenfield, you already have some things,” Hightower says. You can still use resource managers and get some benefit from them, he says. “One of those benefits is to fill in the blanks — you install these things right underneath your set-up — whether you’re running OpenStack or VMware you can carve out resources within that cluster. Over time, as you do more and more workloads you start to defrag your cluster…You get benefits even in the brownfield world.”

But If you’re in the real enterprise, you work with databases, then what can you do? “Absolutely nothing. You’re screwed.[laughs] I’ll be honest: this stuff doesn’t work in all environments,” he says. “That’s the thing most people don’t talk about. You just can’t put every workload in every operation…There are things to think about when you adopt one of these platforms.”

Since the best things happen with modern architectures, Hightower then ran through a demo deploying an app and presenting it with an SSL certificate to process it at run time. In this example with micro services, some details change (the certificate, IIRC) but the core APIs remain the same ‘automagically.’ “If I scale to 1,000 instances they’d all get the certs and the background would refresh the certificate without notifying the developers or changing any of the Kubernetes API. This is what makes these kinds of platforms really powerful. It’s not just about deploying applications, it’s about being able to build the tools like this and use all of the abstractions just to make it work.”

This is where the road splits from the platforms that try to do everything for the user. To answer the question of why a user would do what today is fairly tedious work — create all these YAML files, work with this whole system from the outside and have to learn how to deploy and configure every single app — Hightower says it’s important to look past containers.

The most important thing is the API server; without that API server, we just have all these Kubernetes pieces — the agents, the nodes, the scheduler, the proxy, etc. — that make up the user run time.

“Most people stop talking about these – they’re getting to the point where they’re not excited anymore — and this is a good thing,” he says. “Because we don’t want to be talking about container packaging forever. But these run times do provide value in communicating to the kernel so we don’t have to do this in every platform. Then they fade away.”


Cover Photo // CC BY NC

The post Containers are here to stay – until they’re not appeared first on OpenStack Superuser.

by Nicole Martinelli at March 10, 2017 02:02 PM


A Hard Road from Dev to Ops

Last June, in his provocative blog, Mirantis’ co-founder Boris Renski proclaimed to the world that infrastructure software was dead. That blog was a battle cry for us as a company, and the beginning of an organizational evolution away from our exclusive focus on delivering software and towards providing customers with a turnkey infrastructure experience.

It was clear that the future consumption model for infrastructure is defined by public clouds where everything is self-service, API-driven, fully managed and continuously delivered. It was also clear that most vendors, Mirantis included, had misinterpreted where the core of cloud disruption was, overemphasizing disruption in software capabilities around “self-service and API-driven,” while largely ignoring the disruption in delivery approach codified as “fully managed and continuously delivered.” Private cloud had become a label for the new type of software, whereas public cloud was a label for a combination of software and, most importantly, a new delivery model. Private cloud had failed and we needed to change.

As we started piercing the market with our new Build-Operate-Transfer delivery model for open cloud last year, we pulled the trigger on changing the company internally. Mirantis had to reinvent itself, re-examine every part of the company and ask if it was built correctly and/or was needed in order to deliver an awesome customer operations experience. Organically and through acquisition, we added new engineering and operations folks who brought with them the relentless focus on keeping things simple, and emphasized continuously integrating and managing change. We went away from using advanced computer science as the only means to avoid failures in favor of selecting simple configurations that are less likely to fail and investing heavily in the monitoring and intelligence that predicts failure before it occurs and proactively alerts the operator to avoid failures altogether.

In the meantime, and despite the challenges, things were picking up in the field. We weren’t alone in realizing that cloud operations are hard, so many OpenStack DIYers that had failed at operations got intrigued by our model. We started winning big managed cloud deals, and made meaningful strides in transitioning our existing marquee accounts like AT&T and VW toward managed open cloud. Most importantly, we weren’t just winning new deals; we were expanding existing ones – a much more important sign of delivering customer value. Today, some of the world’s most iconic companies are running their customer-facing businesses on our managed clouds without needing to pay much attention to how the cloud is run. They simply expect that it works.

Now we are staring at an explosion of new clouds in our sales pipeline. In order to scale and provide an awesome user experience, this week we’ve announced the final set of organizational changes that will complete our transformation, putting our 12 months of difficult transition behind:  

  • We are simplifying the services we offer in our portfolio, focusing less on one-off cloud infrastructure integration and more on strategy, site readiness and cloud tenant on-boarding and care.
  • We are combining our 24×7 software support team and our managed operations team into a single-focused customer success team.
  • Since many of our customers don’t accept managed services from Russia and Ukraine locations (due to regulatory, compliance and corporate security policies), we are shifting roughly 70 jobs from those locations to the U.S., Poland and Czech Republic.

As founders, we felt it was important to share this update publicly, not just because we want the world to know that Mirantis is changing, but also because this transformation is personal to us. We founded Mirantis back in 2000, originally as a small IT services firm, and following this change, some of our best friends and colleagues who have travelled with us for well over a decade will no longer be with the company. We want those who are leaving to know that we are humbled by your brilliance and eternally grateful to have worked alongside such committed and true friends.

As we look at the last twelve months, we’re proud of the change we persevered through as a company. Evolving a company is never easy – for management, employees, partners or customers. Many in our space will need to go through a similar evolution to stay relevant in the public cloud world, and not everybody will make it through. We are fully determined that Mirantis will be part of the pack that does.

Onwards and upwards!

The post A Hard Road from Dev to Ops appeared first on Mirantis | Pure Play Open Cloud.

by Alex Freedland at March 10, 2017 02:00 PM

StackHPC Team Blog

StackHPC at the Sanger Centre OpenStack Day

In the countryside on the outskirts of Cambridge, a very special gathering took place. At the invitation of the OpenStack team at the Wellcome Trust Sanger Institute, the regional Scientific OpenStack community got together for a day of presentations and discussion.

The Sanger Institute put on a great event, and a good deal of birds-of-a-feather discussion was stimulated.

Stig presenting ALaSKA

As part of a fascinating schedule including presentations from Sanger, the Francis Crick Institute, the European Bioinformatics Institute, RackSpace, Public Health England and Cambridge University, Stig presented StackHPC's recent work for the SKA telescope project.

This project is the Science Data Processor (SDP) Performance Prototype. The project is a technology exploration vehicle for evaluating various strategies for the extreme data challenges posed by the SKA.

ALaSKA compute rack

OpenStack is delivering multi-tenant access to a rich and diverse range of bare metal hardware, and cloud-native methodologies are being used to deliver an equally broad range of software stacks. We call it Alaska (A La SKA).

Alaska is a really exciting project that embodies our ethos of driving OpenStack development for the scientific use case. StackHPC is thrilled to be delivering the infrastructure to support it.

by Stig Telfer at March 10, 2017 12:40 PM

March 09, 2017


Let rdopkg manage your RPM package

rdopkg is an RPM packaging automation tool which was written to effortlessly keep packages in sync with (fast moving) upstream.

rdopkg is a little opinionated, but when you setup your environment right, most packaging tasks are reduced to a single rdopkg command:

  • Introduce/remove patches: rdopkg patch
  • Rebase patches on a new upstream version: rdopkg new-version

rdopkg builds upon the concept of distgit, which simply refers to maintaining RPM package source files in a git repository. For example, all Fedora and CentOS packages are maintained in distgit.

Using a version control system for packaging is great, so rdopkg extends this by requiring patches to also be maintained using git, as opposed to storing them as simple .patch files in distgit.

For this purpose, rdopkg introduces the concept of a patches branch, which is simply a git branch containing… yeah, patches. Specifically, a patches branch contains the upstream git tree with optional downstream patches on top.

In other words, patches are maintained as git commits, the same way they are managed upstream. To introduce a new patch to a package, just git cherry-pick it to the patches branch and let rdopkg patch do the rest. Patch files are generated from git, and the .spec file is changed automatically.

When a new version is released upstream, rdopkg can rebase the patches branch on the new version and update distgit automatically. Instead of hoping some .patch files apply to an ever-changing tarball, git can be used to rebase the patches, which brings many advantages, like automatically dropping patches already included in the new release, and more.
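
To make the patches-branch idea concrete, here is a tiny self-contained simulation of a downstream patch being rebased onto a new upstream release. The repository, file, branch and tag names are all made up for illustration, and the final git rebase step is roughly what rdopkg new-version automates (rdopkg additionally regenerates the .patch files and updates the .spec, which this sketch does not do):

```shell
# Toy demonstration of the patches-branch workflow (all names illustrative).
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q upstream && cd upstream
git config user.name packager && git config user.email packager@example.com
base=$(git symbolic-ref --short HEAD)   # upstream's default branch name

echo 'release one' > code && git add code
git commit -qm 'release 1.0' && git tag 1.0

# Patches branch: the upstream tree at 1.0 plus downstream patches on top
git checkout -qb el7-patches 1.0
echo 'downstream fix' > fix && git add fix
git commit -qm 'downstream patch'

# Upstream moves on and releases 2.0
git checkout -q "$base"
echo 'release two' >> code && git commit -qam 'release 2.0' && git tag 2.0

# Rebase the downstream patch onto the new release; this is the step
# that rdopkg new-version automates for you
git checkout -q el7-patches
git rebase -q 2.0
git log --format='%s'
```

After the rebase, the patches branch history is the 2.0 release with the downstream patch replayed on top, so a patch already merged upstream would simply disappear during this step.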


upstream repo requirements

Your project needs to be maintained in a git repository and use Semantic Versioning tags for its releases, such as 1.2.3 or v1.2.3.


Fedora packages already live in distgit repos which packagers can get with:

fedpkg clone package

If your package doesn't have a distgit yet, simply create a git repository and put all the files from .src.rpm SOURCES in there.

The el7 distgit branch is used in the following example.

patches branch

Finally, you need a repository to hold your patches branches. This can be the same repo as distgit or a different one. You can use various processes to manage your patches branches, the simplest being the packager maintaining them manually, as they would .patch files.

The el7-patches patches branch is used in the following example.

install rdopkg

rdopkg page contains installation instructions. Most likely, this will do:

dnf copr enable jruzicka/rdopkg
dnf install rdopkg

Initial setup

Start with cloning distgit:

git clone $DISTGIT

Add patches remote which contains/is going to contain patches branches (unless it's the same as origin):

git remote add -f patches $PATCHES_BRANCH_GIT

While optional, it's strongly recommended to also add upstream remote with project upstream to allow easy initial patches branch setup, cherry-picking and some extra rdopkg automagic detection:

git remote add -f upstream $UPSTREAM_GIT

Clean .spec

In this example we'll assume we're building a package for the EL 7 distribution and will use the el7 branch for our distgit:

git checkout el7

Clean the .spec file. Replace hardcoded version strings (especially in URL) with macros so that .spec is current when Version changes. Check rdopkg pkgenv to see what rdopkg thinks about your package:

editor foo.spec
rdopkg pkgenv
git commit -a

Prepare patches branch

By convention, rdopkg expects $BRANCH distgit branch to have appropriate $BRANCH-patches patches branch.

Thus, for our el7 distgit, we need to create el7-patches branch.

First, see the current Version:

rdopkg pkgenv | grep Version

Assume our package is at Version: 1.2.3.

The upstream remote should contain an associated 1.2.3 version tag, which should correspond to the 1.2.3 release tarball, so let's use that as a base for our new patches branch:

git checkout -b el7-patches 1.2.3

Finally, if you have some .patch files in your el7 distgit branch, you need to apply them on top of el7-patches now.

Some patches might be present in upstream remote (like backports) so you can git cherry-pick them.

Once happy with your patches on top of 1.2.3, push your patches branch into the patches remote:

git push patches el7-patches

Update distgit

With el7-patches patches branch in order, try updating your distgit:

git checkout el7
rdopkg patch

If this fails, you can try the lower-level rdopkg update-patches, which skips certain magic but isn't recommended for normal usage.

Once this succeeds, inspect newly created commit that updated the .spec file and .patch files from el7-patches patches branch.

Ready to rdopkg

After this, you should be able to manage your package using rdopkg.

Please note that both rdopkg patch and rdopkg new-version will reset local el7-patches to remote patches/el7-patches unless you supply -l/--local-patches option.

To introduce/remove patches, simply modify remote el7-patches patches branch and let rdopkg patch do the rest:

rdopkg patch

To update your package to new upstream version including patches rebase:

git fetch --all
rdopkg new-version

Finally, if you just want to fix your .spec file without touching patches:

rdopkg fix
# edit .spec
rdopkg -c

More information

List all rdopkg actions with:

rdopkg -h

Most rdopkg actions have some handy options, see them with

rdopkg $ACTION -h

Read the friendly manual:

man rdopkg

You can also read the RDO packaging guide, which contains some examples of rdopkg usage in RDO.

Happy packaging!

March 09, 2017 04:58 PM


Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options

The post Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options appeared first on Mirantis | Pure Play Open Cloud.

As a container management tool, Kubernetes was designed to orchestrate multiple containers and replication, and in fact there are currently several ways to do it. In this article, we’ll look at three options: Replication Controllers, Replica Sets, and Deployments.

What is Kubernetes replication for?

Before we go into how you would do replication, let’s talk about why.  Typically you would want to replicate your containers (and thereby your applications) for several reasons, including:
  • Reliability: By having multiple versions of an application, you prevent problems if one or more fails.  This is particularly true if the system replaces any containers that fail.
  • Load balancing: Having multiple versions of a container enables you to easily send traffic to different instances to prevent overloading of a single instance or node. This is something that Kubernetes does out of the box, making it extremely convenient.
  • Scaling: When load does become too much for the number of existing instances, Kubernetes enables you to easily scale up your application, adding additional instances as needed.
Replication is appropriate for numerous use cases, including:
  • Microservices-based applications: In these cases, multiple small applications provide very specific functionality.
  • Cloud native applications: Because cloud-native applications are based on the theory that any component can fail at any time, replication is a perfect environment for implementing them, as multiple instances are baked into the architecture.
  • Mobile applications: Mobile applications can often be architected so that the mobile client interacts with an isolated version of the server application.
Kubernetes has multiple ways in which you can implement replication.

Types of Kubernetes replication

In this article, we’ll discuss three different forms of replication: the Replication Controller, Replica Sets, and Deployments.

Replication Controller

The Replication Controller is the original form of replication in Kubernetes.  It’s being replaced by Replica Sets, but it’s still in wide use, so it’s worth understanding what it is and how it works.

A Replication Controller is a structure that enables you to easily create multiple pods, then make sure that that number of pods always exists. If a pod does crash, the Replication Controller replaces it. Replication Controllers also provide other benefits, such as the ability to scale the number of pods, and to update or delete multiple pods with a single command.

You can create a Replication Controller with an imperative command, or declaratively, from a file.  For example, create a new file called rc.yaml and add the following text:
apiVersion: v1
kind: ReplicationController
metadata:
  name: soaktestrc
spec:
  replicas: 3
  selector:
    app: soaktestrc
  template:
    metadata:
      name: soaktestrc
      labels:
        app: soaktestrc
    spec:
      containers:
      - name: soaktestrc
        image: nickchase/soaktest
        ports:
        - containerPort: 80
Most of this structure should look familiar from our discussion of Deployments; we’ve got the name of the actual Replication Controller (soaktestrc) and we’re designating that we should have 3 replicas, each of which are defined by the template.  The selector defines how we know which pods belong to this Replication Controller. Now tell Kubernetes to create the Replication Controller based on that file:
# kubectl create -f rc.yaml
replicationcontroller "soaktestrc" created
Let’s take a look at what we have using the describe command:
# kubectl describe rc soaktestrc
Name:           soaktestrc
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktestrc
Labels:         app=soaktestrc
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                  Message
  ---------     --------        -----   ----                            -------------   ----    ------                  -------
  1m            1m              1       {replication-controller }                       Normal  SuccessfulCreate        Created pod: soaktestrc-g5snq
  1m            1m              1       {replication-controller }                       Normal  SuccessfulCreate        Created pod: soaktestrc-cws05
  1m            1m              1       {replication-controller }                       Normal  SuccessfulCreate        Created pod: soaktestrc-ro2bl
As you can see, we’ve got the Replication Controller, and there are 3 replicas, of the 3 that we wanted.  All 3 of them are currently running.  You can also see the individual pods listed underneath, along with their names.  If you ask Kubernetes to show you the pods, you can see those same names show up:
# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrc-cws05   1/1       Running   0          3m
soaktestrc-g5snq   1/1       Running   0          3m
soaktestrc-ro2bl   1/1       Running   0          3m
Next we’ll look at Replica Sets, but first let’s clean up:
# kubectl delete rc soaktestrc
replicationcontroller "soaktestrc" deleted

# kubectl get pods
As you can see, when you delete the Replication Controller, you also delete all of the pods that it created.

Replica Sets

Replica Sets are a sort of hybrid, in that they are in some ways more powerful than Replication Controllers, and in others they are less powerful. Replica Sets are declared in essentially the same way as Replication Controllers, except that they have more options for the selector.  For example, we could create a Replica Set like this:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: soaktestrs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: soaktestrs
  template:
    metadata:
      labels:
        app: soaktestrs
        environment: dev
    spec:
      containers:
      - name: soaktestrs
        image: nickchase/soaktest
        ports:
        - containerPort: 80
In this case, it’s more or less the same as when we were creating the Replication Controller, except we’re using matchLabels instead of a plain label map.  But we could just as easily have said:
  replicas: 3
  selector:
    matchExpressions:
      - {key: app, operator: In, values: [soaktestrc, soaktestrs, soaktest]}
      - {key: tier, operator: NotIn, values: [production]}
In this case, we’re looking at two different conditions:
  1. The app label must be soaktestrc, soaktestrs, or soaktest
  2. The tier label (if it exists) must not be production
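A quick sketch of that matching logic in plain Python (illustrative only; the real implementation lives in the Kubernetes controllers — note that NotIn also matches pods that lack the key entirely, hence the “if it exists” above):

```python
# Evaluate matchExpressions against a pod's labels (toy implementation).
def matches(labels, expressions):
    for expr in expressions:
        value = labels.get(expr["key"])
        if expr["operator"] == "In" and value not in expr["values"]:
            return False
        if expr["operator"] == "NotIn" and value is not None \
                and value in expr["values"]:
            return False
    return True

selector = [
    {"key": "app", "operator": "In",
     "values": ["soaktestrc", "soaktestrs", "soaktest"]},
    {"key": "tier", "operator": "NotIn", "values": ["production"]},
]

print(matches({"app": "soaktestrs"}, selector))                        # True
print(matches({"app": "soaktestrs", "tier": "production"}, selector))  # False
```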
Let’s go ahead and create the Replica Set and get a look at it:
# kubectl create -f replicaset.yaml
replicaset "soaktestrs" created

# kubectl describe rs soaktestrs
Name:           soaktestrs
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app in (soaktest,soaktestrc,soaktestrs),tier notin (production)
Labels:         app=soaktestrs
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
  ---------     --------        -----   ----                            -------------   --------------                   -------
  1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-it2hf
  1m            1m              1       {replicaset-controller }                       Normal  SuccessfulCreate Created pod: soaktestrs-kimmm
  1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-8i4ra

# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrs-8i4ra   1/1       Running   0          1m
soaktestrs-it2hf   1/1       Running   0          1m
soaktestrs-kimmm   1/1       Running   0          1m
As you can see, the output is pretty much the same as for a Replication Controller (except for the selector), and for most intents and purposes, they are similar.  The major difference is that the rolling-update command works with Replication Controllers, but won’t work with a Replica Set.  This is because Replica Sets are meant to be used as the backend for Deployments. Let’s clean up before we move on.
# kubectl delete rs soaktestrs
replicaset "soaktestrs" deleted

# kubectl get pods
Again, the pods that were created are deleted when we delete the Replica Set.


Deployments

Deployments are intended to replace Replication Controllers.  They provide the same replication functions (through Replica Sets) and also the ability to roll out changes and roll them back if necessary. Let’s create a simple Deployment using the same image we’ve been using.  First create a new file, deployment.yaml, and add the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: soaktest
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: soaktest
    spec:
      containers:
      - name: soaktest
        image: nickchase/soaktest
        ports:
        - containerPort: 80
Now go ahead and create the Deployment:
# kubectl create -f deployment.yaml
deployment "soaktest" created
Now let’s go ahead and describe the Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 16:21:19 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3914185155 (5/5 replicas created)
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
  ---------     --------        -----   ----                            -------------   --------------                   -------
  38s           38s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 3
  36s           36s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 5
As you can see, rather than listing the individual pods, Kubernetes shows us the Replica Set.  Notice that the name of the Replica Set is the Deployment name plus a hash value. A complete discussion of updates is out of scope for this article — we’ll cover it in the future — but a couple of interesting things are worth noting here:
  • The StrategyType is RollingUpdate. This value can also be set to Recreate.
  • By default we have a minReadySeconds value of 0; we can change that value if we want pods to be up and running for a certain amount of time — say, to load resources — before they’re truly considered “ready”.
  • The RollingUpdateStrategy shows that we have a limit of 1 maxUnavailable — meaning that when we’re updating the Deployment, we can have up to 1 missing pod before it’s replaced, and 1 maxSurge, meaning we can have one extra pod as we scale the new pods back up.
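To make that arithmetic concrete, here is a tiny sketch (plain Python, not Kubernetes code) of the window those two settings define for our 5-replica Deployment:

```python
# During a RollingUpdate, the pod count stays within
# [replicas - maxUnavailable, replicas + maxSurge].
def rolling_update_window(replicas, max_unavailable, max_surge):
    fewest_available = replicas - max_unavailable  # pods that must stay up
    most_total = replicas + max_surge              # pods allowed at once
    return fewest_available, most_total

# Values from the describe output above: 5 replicas, 1 unavailable, 1 surge.
print(rolling_update_window(5, 1, 1))  # (4, 6)
```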
As you can see, the Deployment is backed, in this case, by Replica Set soaktest-3914185155. If we go ahead and look at the list of actual pods…
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3914185155-7gyja   1/1       Running   0          2m
soaktest-3914185155-lrm20   1/1       Running   0          2m
soaktest-3914185155-o28px   1/1       Running   0          2m
soaktest-3914185155-ojzn8   1/1       Running   0          2m
soaktest-3914185155-r2pt7   1/1       Running   0          2m
… you can see that their names consist of the Replica Set name and an additional identifier.

Passing environment information: identifying a specific pod

Before we look at the different ways that we can affect replicas, let’s set up our Deployment so that we can see which pod we’re actually hitting with a particular request.  To do that, the image we’ve been using is built to display the pod name when it produces output:
<?php
$limit = $_GET['limit'];
if (!isset($limit)) $limit = 250;
for ($i = 0; $i < $limit; $i++) {
    $d = tan(atan(tan(atan(tan(atan(tan(atan(tan(atan(123456789.123456789))))))))));
}
echo "Pod ".$_SERVER['POD_NAME']." has finished!\n";
?>
As you can see, we’re displaying an environment variable, POD_NAME.  Since each container is essentially its own server, this will display the name of the pod when we execute the PHP. Now we just have to pass that information to the pod. We do that through the use of the Kubernetes Downward API, which lets us pass environment variables into the containers:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: soaktest
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: soaktest
    spec:
      containers:
      - name: soaktest
        image: nickchase/soaktest
        ports:
        - containerPort: 80
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
As you can see, we’re passing an environment variable and assigning it a value from the pod’s own metadata (the pod name).  (You can find more information on metadata here.) So let’s go ahead and clean up the Deployment we created earlier…
# kubectl delete deployment soaktest
deployment "soaktest" deleted

# kubectl get pods
… and recreate it with the new definition:
# kubectl create -f deployment.yaml
deployment "soaktest" created
Next let’s go ahead and expose the pods to outside network requests so we can call the nginx server that is inside the containers:
# kubectl expose deployment soaktest --port=80 --target-port=80 --type=NodePort
service "soaktest" exposed
Now let’s describe the services we just created so we can find out what port the Deployment is listening on:
# kubectl describe services soaktest
Name:                   soaktest
Namespace:              default
Labels:                 app=soaktest
Selector:               app=soaktest
Type:                   NodePort
Port:                   <unset> 80/TCP
NodePort:               <unset> 30800/TCP
Endpoints:    ,, + 2 more...
Session Affinity:       None
No events.
As you can see, the NodePort is 30800 in this case; in your case it will be different, so make sure to check.  That means that each of the servers involved is listening on port 30800, and requests are forwarded to port 80 of the containers.  So we can call the PHP script at port 30800 on any of the cluster’s nodes.
In my case, I’ve set the IP for my Kubernetes hosts to hostnames to make my life easier, and the PHP file is the default for nginx, so I can simply call:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
So as you can see, this time the request was served by pod soaktest-3869910569-xnfme.

Recovering from crashes: Creating a fixed number of replicas

Now that we know everything is running, let’s take a look at some replication use cases. The first thing we think of when it comes to replication is recovering from crashes. If there are 5 (or 50, or 500) copies of an application running, and one or more crashes, it’s not a catastrophe.  Kubernetes improves the situation further by ensuring that if a pod goes down, it’s replaced. Let’s see this in action.  Start by refreshing our memory about the pods we’ve got running:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-qqwqc   1/1       Running   0          11m
soaktest-3869910569-qu8k7   1/1       Running   0          11m
soaktest-3869910569-uzjxu   1/1       Running   0          11m
soaktest-3869910569-x6vmp   1/1       Running   0          11m
soaktest-3869910569-xnfme   1/1       Running   0          11m
If we repeatedly call the Deployment, we can see that we get different pods on a random basis:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-qu8k7 has finished!
To simulate a pod crashing, let’s go ahead and delete one:
# kubectl delete pod soaktest-3869910569-x6vmp
pod "soaktest-3869910569-x6vmp" deleted

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-516kx   1/1       Running   0          18s
soaktest-3869910569-qqwqc   1/1       Running   0          27m
soaktest-3869910569-qu8k7   1/1       Running   0          27m
soaktest-3869910569-uzjxu   1/1       Running   0          27m
soaktest-3869910569-xnfme   1/1       Running   0          27m
As you can see, pod *x6vmp is gone, and it’s been replaced by *516kx.  (You can easily find the new pod by looking at the AGE column.) If we once again call the Deployment, we can (eventually) see the new pod:
# curl http://kube-2:30800
Pod soaktest-3869910569-516kx has finished!
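Conceptually, the replacement we just observed is a reconciliation loop: the controller compares the desired and observed replica counts and closes the gap. A toy sketch of that loop (illustrative Python; the generated pod name suffix is invented):

```python
import uuid

# Toy reconciliation loop: create or delete pods until the observed
# count matches the desired count (illustrative only, not Kubernetes code).
def reconcile(desired, pods):
    while len(pods) < desired:
        pods.append("soaktest-" + uuid.uuid4().hex[:5])  # hypothetical name
    while len(pods) > desired:
        pods.pop()
    return pods

pods = ["qqwqc", "qu8k7", "uzjxu", "x6vmp", "xnfme"]
pods.remove("x6vmp")        # simulate the crash/deletion
pods = reconcile(5, pods)   # the controller restores the count
print(len(pods))  # 5
```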
Now let’s look at changing the number of pods.

Scaling up or down: Manually changing the number of replicas

One common task is to scale up a Deployment in response to additional load. Kubernetes has autoscaling, but we’ll talk about that in another article.  For now, let’s look at how to do this task manually. The most straightforward way is to simply use the scale command:
# kubectl scale --replicas=7 deployment/soaktest
deployment "soaktest" scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-2w8i6   1/1       Running   0          6s
soaktest-3869910569-516kx   1/1       Running   0          11m
soaktest-3869910569-qqwqc   1/1       Running   0          39m
soaktest-3869910569-qu8k7   1/1       Running   0          39m
soaktest-3869910569-uzjxu   1/1       Running   0          39m
soaktest-3869910569-xnfme   1/1       Running   0          39m
soaktest-3869910569-z4rx9   1/1       Running   0          6s
In this case, we specify a new number of replicas, and Kubernetes adds enough to bring it to the desired level, as you can see. One thing to keep in mind is that Kubernetes isn’t going to scale the Deployment down to be below the level at which you first started it up.  For example, if we try to scale back down to 4…
# kubectl scale --replicas=4 -f deployment.yaml
deployment "soaktest" scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-l5wx8   1/1       Running   0          11s
soaktest-3869910569-qqwqc   1/1       Running   0          40m
soaktest-3869910569-qu8k7   1/1       Running   0          40m
soaktest-3869910569-uzjxu   1/1       Running   0          40m
soaktest-3869910569-xnfme   1/1       Running   0          40m
… Kubernetes only brings us back down to 5, because that’s what was specified by the original deployment.

Deploying a new version: Replacing replicas by changing their label

Another way you can use Deployments is to make use of the selector.  In other words, if a Deployment controls all the pods with a tier value of dev, changing a pod’s tier label to prod will remove it from the Deployment’s sphere of influence. This mechanism enables you to selectively replace individual pods. For example, you might move pods from a dev environment to a production environment, or you might do a manual rolling update: update the image, then remove some fraction of pods from the Deployment; when they’re replaced, it will be with the new image. If you’re happy with the changes, you can then replace the rest of the pods. Let’s see this in action.  As you recall, this is our Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 19:31:04 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3869910569 (3/3 replicas created)
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
  ---------     --------        -----   ----                            -------------   --------  ------                  -------
  50s           50s             1       {deployment-controller }                        Normal            ScalingReplicaSet       Scaled up replica set soaktest-3869910569 to 3
And these are our pods:
# kubectl describe replicaset soaktest-3869910569
Name:           soaktest-3869910569
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktest,pod-template-hash=3869910569
Labels:         app=soaktest
Replicas:       5 current / 5 desired
Pods Status:    5 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
  ---------     --------        -----   ----                            -------------   --------  ------                  -------
  2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-0577c
  2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-wje85
  2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-xuhwl
  1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-8cbo2
  1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-pwlm4
We can also get a list of pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          7m
soaktest-3869910569-8cbo2   1/1       Running   0          6m
soaktest-3869910569-pwlm4   1/1       Running   0          6m
soaktest-3869910569-wje85   1/1       Running   0          7m
soaktest-3869910569-xuhwl   1/1       Running   0          7m
So those are our original soaktest pods; what if we wanted to add a new label?  We can do that on the command line:
# kubectl label pods soaktest-3869910569-xuhwl experimental=true
pod "soaktest-3869910569-xuhwl" labeled

# kubectl get pods -l experimental=true
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-xuhwl   1/1       Running   0          14m
So now we have one experimental pod.  But since the experimental label has nothing to do with the selector for the Deployment, it doesn’t affect anything. So what if we change the value of the app label, which the Deployment is looking at?
# kubectl label pods soaktest-3869910569-wje85 app=notsoaktest --overwrite
pod "soaktest-3869910569-wje85" labeled
In this case, we need to use the overwrite flag because the app label already exists. Now let’s look at the existing pods.
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          4s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-wje85   1/1       Running   0          17m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
As you can see, we now have six pods instead of five, with a new pod having been created to replace *wje85, which was removed from the deployment. We can see the changes by requesting pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          20s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
Now, there is one wrinkle that you have to take into account; because we’ve removed this pod from the Deployment, the Deployment no longer manages it.  So if we were to delete the Deployment…
# kubectl delete deployment soaktest
deployment "soaktest" deleted
The pod remains:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-wje85   1/1       Running   0          19m
You can also easily replace all of the pods in a Deployment using the --all flag, as in:
# kubectl label pods --all app=notsoaktesteither --overwrite
But remember that you’ll have to delete them all manually!
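The label-based membership this section relies on can be sketched as follows (a toy model in Python, not Kubernetes code; the desired replica count and labels are taken from the example above):

```python
# A controller "owns" whichever pods currently match its selector, so
# relabeling a pod removes it from management, and the controller then
# creates a replacement to restore the desired replica count.
def owned(pods, selector):
    return [name for name, labels in pods.items()
            if all(labels.get(k) == v for k, v in selector.items())]

pods = {
    "soaktest-3869910569-wje85": {"app": "soaktest"},
    "soaktest-3869910569-0577c": {"app": "soaktest"},
}
selector = {"app": "soaktest"}

# Equivalent of: kubectl label pods soaktest-...-wje85 app=notsoaktest --overwrite
pods["soaktest-3869910569-wje85"]["app"] = "notsoaktest"

desired = 2
missing = desired - len(owned(pods, selector))
print(missing)  # 1: the Deployment will create one new pod
```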


Replication is a large part of Kubernetes’ purpose in life, so it’s no surprise that we’ve just scratched the surface of what it can do, and how to use it. It is useful for reliability purposes, for scalability, and even as a basis for your architecture. What do you anticipate using replication for, and what would you like to know more about? Let us know in the comments!

The post Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options appeared first on Mirantis | Pure Play Open Cloud.

by Nick Chase at March 09, 2017 01:40 PM

OpenStack Superuser

How a research group avoids Franken-infrastructure with OpenStack

The Van Andel Institute (VAI) is an independent biomedical research and science education organization based in Grand Rapids, Michigan. VAI hosts thirty individual research groups who use genomic sequencing analysis, molecular dynamics simulation and modeling to investigate epigenetics, cancer and neurodegenerative diseases.

Recently, VAI deployed a unique new hybrid infrastructure featuring Bright Computing OpenStack systems management software. The Bright OpenStack deployment wrapper lets VAI manage both high-performance compute (HPC) grid and cluster computing and cloud computing within the same infrastructure, greatly reducing the labor and effort needed for management and change control. Perhaps more importantly, it also helps VAI respond dynamically to the accelerating trend toward cloud computing they see coming down the highway. “Grid and cluster computing has been the standard for years, but we know that cloud computing is the wave of the future,” said Zack Ramjan, research computing architect at VAI. “The hybrid approach we are getting with Bright is providing a path that helps us transition from one to the other.”

The Challenge

Ramjan considers his main challenge to be finding a way to meld hardware, software and storage into a system that can handle the massive amount of data generated by VAI’s varied research groups. To solve that challenge, he sought an environment that could handle the massively parallel processing (MPP) and analysis involved. “That’s where Bright comes in,” says Ramjan. “We have the cluster and grid and will eventually have the cloud, so we can use a ‘divide and conquer’ approach, efficiently assigning tasks among 50 computers and the cloud resources.”


Another challenge he faces is that the 30 different research groups are working on highly varied projects, each with unique requirements. Some of the researchers are doing genomics and others are doing simulation – and they needed a solution that would work for all of them. Ramjan explains that access to a cloud approach puts the user in the driver seat, so he would not have to develop a single solution that makes everyone happy. Each user can have their own solution, carving off their own virtual piece of the pie. “Bright is helping us manage that variety of approaches. With both the legacy cluster mode and the cloud mode, we are creating an environment in which it is easy for users to come on board and do their specific work efficiently.”


“We know that cloud computing is the wave of the future. The hybrid approach we are getting with Bright is providing a path that helps us transition.” — Zack Ramjan, research computing architect at VAI

The Solution

The total solution includes three key components: Bright OpenStack software; 43 compute nodes, representing 1,100 CPU cores, provided by Silicon Mechanics; along with storage and data hosting supplied by Data Direct Network. Bright OpenStack is the brains of the system and integrates the hardware, managing the CPU units and the storage devices. The user interacts with the Bright dashboard, and Bright OpenStack interacts with the physical elements of the system.


An expert with many years of experience deploying HPC systems by hand, Ramjan decided to opt for the Bright solution because he didn’t have the time or the manpower in house to quickly set up a hybrid HPC-cloud system. He reasoned that engineering such a solution would take years. “We are actually kind of lucky that we started from nothing because we had no legacy baggage, so we could design the solution as we saw fit from the beginning, and we did that with the help of Bright and others.” Bright partner and system integrator for the project, Silicon Mechanics, was confident the Bright tool set would be a successful way to tackle the challenge. “As we worked with VAI to define their system architecture, we looked to the Bright OpenStack system management software, knowing its strength in managing complex, private cloud computing environments,” said Daniel Chow, COO/CTO at Silicon Mechanics. “We are excited to see the joint solution empower VAI researchers to accelerate their scientific findings.”

A key feature of the Bright OpenStack software is the ease of management and change control. Ramjan notes that it is very difficult and time consuming to scale up by managing many machines one by one. “With Bright you can see the entire resource, or see your hardware through a single pane of glass. Right now our HPC to cloud ratio may be 90-10, but tomorrow we know our end users are going to be more cloud-centric. With this solution, I can dynamically pull resources into the cloud portion, but if the next day it turns out there’s less cloud demand, we can pull it back. We can dynamically shift that ratio as we see fit without any down time.” According to Ramjan, several other institutions he has spoken to also have their eyes on transitioning to an OpenStack cloud. However, those larger institutions with significant investments in “old school” HPC infrastructure say they are probably three to five years away from doing so. “Because they have such a complex environment, everything has been custom designed in-house and changes must be done with their own labor, which makes them less flexible. With Bright, we bought the solution and everything came with the package. It definitely put us way ahead of the game.”

The Result

The HPC cluster went online in September 2015 and is already highly utilized, which Ramjan considers a good sign. Bright makes it easy for other team members to jump in to provide assistance to users without needing his intervention. He says that Linux can be quirky technically, so without Bright OpenStack, even basic tasks would have fallen solely in his lap, which would have slowed down deployment considerably.

“We know that scientific workloads do not get smaller every year, but are constantly expanding. From our experience, the size of data continues to grow exponentially. We have more than 40 compute nodes representing 1,100 CPU cores today – but what about next year when we get to 2,000 or 3,000 cores? We wanted an expandable and scalable solution – this is a core capability of Bright. The management tool makes it easy to buy new equipment, take it out of the box, put it on the shelf, and plug it in. Bright can pull it right into the existing environment.”


The cloud portion has recently gone live and is available to VAI’s users. Ramjan says that VAI’s early adopters, typically power-users who are quite savvy, are starting to appreciate its value. He expects use of the cloud to grow in popularity as others get more familiar with the resource. Although VAI’s existing workloads are 90 percent grid and cluster, they expect to move toward the cloud in the future. Bright is giving them an expandable and scalable turnkey solution that lets them combine HPC workloads with big data analytics workloads in the same infrastructure, and to have the choice of working in either a bare metal or virtualized infrastructure. It’s also providing a path that helps them transition from one to the other.

This case study first appeared elsewhere. Superuser is always interested in community content; get in touch.

Cover Photo // CC BY-NC

The post How a research group avoids Franken-infrastructure with OpenStack appeared first on OpenStack Superuser.

by Superuser at March 09, 2017 12:58 PM

Red Hat Stack

Using Software Factory to manage Red Hat OpenStack Platform lifecycle

by Nicolas Hicher, Senior Software Engineer – Continuous Integration and Delivery


Software-Factory is a collection of services that provides a powerful platform to build software. It enables the same workflow used to develop OpenStack: Gerrit for code review, Zuul/Nodepool/Jenkins as the CI system, and Storyboard as the story and issue tracker. It also ensures a reproducible test environment with ephemeral Jenkins slaves.

In this video, Nicolas Hicher will demonstrate how to use Software-Factory to manage a Red Hat OpenStack Platform 9 lifecycle. We will do a deployment and an update on a virtual environment (within an OpenStack tenant).



For this demo, we will do a deployment within an OpenStack tenant using python-tripleo-helper, a tool developed by the engineering team that builds DCI. With this tool, we can do a deployment within an OpenStack tenant following the same steps as a full deployment (boot servers via IPMI, discover nodes, run introspection, and deploy). We also patched python-tripleo-helper to add an update command that updates the OpenStack deployment (changing parameters, not performing a major upgrade).


The workflow is simple and robust:

  • Submit a review with the templates, the installation script and the tests scripts. A CI job validates the templates.
  • When the review is approved, the gate jobs are executed (installation or update).
  • After the deployment/update is completed, the review is merged.


For this demo, we will do a simple deployment (1 controller and 1 compute node) with Red Hat OpenStack Platform 9.0.


Since we do the deployment in a virtual environment, we can’t test some advanced features, especially for networking and storage. But other features of the deployed cloud can be validated using the appropriate environments.


We plan to continue to improve this workflow to be able to:

  • Do a major upgrade from Red Hat OpenStack Platform (X to X+1).
  • Manage a bare metal deployment.
  • Improve the Ceph deployment to be able to use more than one object storage device (OSD).
  • Use smoke jobs like tempest to validate the deployment before merging the review.

Also, it should be possible to manage pre-production and production environments within a single git repository: the check job will run the tasks on pre-production, and after receiving a peer’s validation, the same actions will be applied to production.

by Maria Bracho, Senior Product Manager OpenStack at March 09, 2017 01:59 AM

March 08, 2017


What is the best NFV Orchestration platform? A review of OSM, Open-O, CORD, and Cloudify

The post What is the best NFV Orchestration platform? A review of OSM, Open-O, CORD, and Cloudify appeared first on Mirantis | Pure Play Open Cloud.

As Network Functions Virtualization (NFV) technology matures, multiple NFV orchestration solutions have emerged, and 2016 was a busy year. While some commercial products were already available on the market, multiple open source initiatives were also launched, with most delivering initial code releases and others planning to roll out software artifacts later this year. With so much going on, we thought we’d provide a technical overview of some of the various NFV orchestration options, so you can get a feel for what’s right for you. In particular, we’ll cover OSM, OPEN-O, CORD, and Cloudify.

In addition, multiple NFV projects have been funded under European Union R&D programs. Projects such as OpenBaton, T-NOVA/TeNor and SONATA have their codebases available in public repos, but industry support, involvement of external contributors and further sustainability might be challenging for these projects, so for now we’ll consider them out of scope for this post, where we’ll review and compare orchestration projects across the following areas:
  • General overview and current project state
  • Compliance with NFV MANO reference architecture
  • Software architecture
  • NSD definition approach
  • VIM and VNFM support
  • Capabilities to provision End to End service
  • Interaction with relevant standardization bodies and communities

General overview and current project state

We’ll start with a general overview of each project, along with its ambitions, development approach, involved community, and related information.

OSM
The OpenSource MANO (OSM) project was officially launched at Mobile World Congress (MWC) in 2016. Starting with several founding members, including Mirantis, Telefónica, BT, Canonical, Intel, Telekom Austria Group and Telenor, the OSM community now includes 55 different organisations. The OSM project is hosted at ETSI facilities and targets delivering an open source management and orchestration (MANO) stack closely aligned with the ETSI NFV reference architecture. OSM issued two releases, Rel 0 and Rel 1, during 2016. The most recent at the time of this writing, OSM Rel. 1, has been publicly available since October 2016 and can be downloaded from the official website. Project governance is managed via several groups, including the Technical Steering group responsible for OSM’s technical aspects, the Leadership group, and the End User Advisory group. You can find more details about the OSM project at the official Wiki.

OPEN-O
The OPEN-O project is hosted by the Linux Foundation and was also formally announced at the 2016 MWC. Initial project advocates were mostly Asian companies, such as Huawei, ZTE and China Mobile. Eventually, the project got further support from Brocade, Ericsson, GigaSpaces, Intel and others. The main project objective is to enable end-to-end service agility across multiple domains using a unified platform for NFV and SDN orchestration. The OPEN-O project delivered its first release in November 2016 and plans to roll out future releases on a six-month cycle. Overall project governance is managed by the project Board, with technology-specific issues managed by the Technical Steering Committee. You can find more general details about the OPEN-O project at the project web site.

CORD
Originally, CORD (Central Office Re-architected as a Datacenter) was introduced as one of the use cases for the ONOS SDN Controller, but it grew into a separate project under ON.Lab governance. (ON.Lab recently merged with the Open Networking Foundation.) The ultimate project goal is to combine NFV, SDN and the elasticity of commodity clouds to bring datacenter economics and cloud agility to the Telco Central Office. The reference implementation of CORD combines commodity servers, white-box switches, and disaggregated access technologies with open source software to provide an extensible service delivery platform. CORD Rel.1 and Rel.2 integrate a number of open source projects, such as ONOS to manage SDN infrastructure, OpenStack to deploy NFV workloads, and XOS as a service orchestrator. To reflect the uniqueness of different use cases, CORD introduces a number of service profiles, such as Mobile (M-CORD), Residential (R-CORD), and Enterprise (E-CORD). You can find more details about the CORD project at the official project web site.

Gigaspaces Cloudify

Gigaspaces’ Cloudify is an open source TOSCA-based cloud orchestration software platform. Originally introduced as a pure cloud orchestration solution (similar to OpenStack Heat), the platform was later expanded to cover NFV-related use cases, and the Cloudify Telecom Edition emerged. Given its original purpose, Cloudify has an extensible architecture and can interact with multiple IaaS/PaaS providers such as AWS, OpenStack, Microsoft Azure and so on. Cloudify is open source under the Apache 2 license and the source code is hosted in a public repository. While the platform welcomes community contributions, the overall project roadmap is defined by Gigaspaces. You can find more details about the Cloudify platform at the official web site.

Compliance with ETSI NFV MANO reference architecture

While a number of alternatives and specific approaches, such as Lifecycle Service Orchestration (LSO) from the Metro Ethernet Forum, have emerged at the time of this writing, huge industry support and involvement has helped promote ETSI NFV Management and Orchestration (MANO) as the de-facto reference NFV architecture. From this standpoint, NFV MANO provides comprehensive guidance for the entities, reference points and workflows to be implemented by appropriate NFV platforms (fig. 1). Figure 1 – ETSI NFV MANO reference architecture

OSM
As the project is hosted by ETSI, the OSM community tries to be compliant with the ETSI NFV MANO reference architecture, respecting the appropriate IFA working group specifications. Key reference points, such as Or-Vnfm and Or-Vi, can be identified within OSM components. The VNF and Network Service (NS) catalogs are explicitly present in the OSM Service Orchestrator (SO) component. Meanwhile, significant further development is planned to reach parity with the currently specified features and interfaces.

OPEN-O
While the OPEN-O project in general has no objective to be compliant with NFV MANO, the NFVO component of OPEN-O is aligned with the ETSI reference model, and all key MANO elements, such as the VNFM and VIM, can be found in the NFVO architecture. Moreover, the scope of the OPEN-O project goes beyond NFV orchestration, and as a result beyond the scope identified by the ETSI NFV MANO reference architecture. One important piece of this project relates to SDN-based networking services provisioning and orchestration, which can be used either in conjunction with NFV services or as a standalone feature.

CORD
Since its inception, CORD has defined its own reference architecture and cross-component communication logic. The reference CORD implementation is very OpenFlow-centric, built around ONOS, the orchestration component (XOS), and whitebox hardware. Technically, most of the CORD building blocks can be mapped to the MANO-defined NFVI, VIM and VNFM, but this is incidental; the overall architectural approach defined by ETSI MANO, as well as the appropriate reference points and interfaces, were not considered in scope by the CORD community. Similar to OPEN-O, the scope of this project goes beyond NFV services provisioning; instead, NFV services provisioning is considered one of several possible use cases for the CORD platform.

Gigaspaces Cloudify

The original focus of the Cloudify platform was orchestration of application deployment in a cloud. Later, when the NFV use case emerged, the Telecom Edition of the Cloudify platform was delivered. This platform combines both the NFVO and generic VNFM components of the MANO-defined entities (fig. 2). Figure 2 – Cloudify in relation to the NFV MANO reference architecture. By their very nature, Cloudify Blueprints can be considered the NS and VNF catalog entities defined by MANO. Meanwhile, some interfaces and actions specified by the NFV IFA subgroup are not present or are considered out of scope for the Cloudify platform. From this standpoint, you could say that Cloudify is aligned with the MANO reference architecture but not fully compliant.

Software architecture and components  

As you might expect, all NFV Orchestration solutions are complex integrated software platforms combined from multiple components.

OSM
The Open Source MANO (OSM) project consists of three basic components (Figure 3 – OSM project architecture):
  • The Service Orchestrator (SO), responsible for end-to-end service orchestration and provisioning. The SO stores the VNF definitions and NS catalogs, manages workflow of the service deployment and can query the status of already deployed services. OSM integrates the orchestration engine as an SO.
  • The Resource Orchestrator (RO) is used to provision services over a particular IaaS provider in a given location. At the time of this writing, the RO component is capable of deploying networking services over OpenStack, VMware, and OpenVIM. The SO and RO components can be jointly mapped to the NFVO entity in the ETSI MANO architecture.
  • The VNF Configuration and Abstraction (VCA) module performs the initial VNF configuration using Juju Charms. Considering this purpose, the VCA module can be considered as a generic VNFM with a limited feature set.
Additionally, OSM hosts the OpenVIM project, which is a lightweight VIM layer implementation suitable for small NFV deployments as an alternative to the heavyweight OpenStack or VMware VIMs. Most of the software components are developed in Python, while the SO, as a user-facing entity, relies heavily on JavaScript and the NodeJS framework.

OPEN-O
From a general standpoint, the complete OPEN-O software architecture can be split into five component groups (Figure 4 – OPEN-O project software architecture):
  • Common service: Consists of shared services used by all other components.
  • Common TOSCA:  Provides TOSCA-related features such as NSD catalog management, NSD definition parsing, workflow execution, and so on; this component is based on the ARIA TOSCA project.
  • Global Service Orchestrator (GSO): As the name suggests, this group provides overall lifecycle management of the end-to-end service.
  • SDN Orchestrator (SDN-O): Provides abstraction and lifecycle management of SDN services; an essential piece of this block are the SDN drivers, which provide device-specific modules for communication with a particular device or SDN controller.
  • NFV Orchestrator (NFV-O): This group provides NFV services instantiation and lifecycle management.
The OPEN-O project uses a microservices-based architecture and consists of more than 20 microservices. The central platform element is the Microservice Bus, the core microservice of the Common Services group. Each platform component registers with this bus; during registration, each microservice specifies its exposed APIs and endpoint addresses. As a result, the overall software architecture is flexible and can be easily extended with additional modules. OPEN-O Rel. 1 consists of both Java- and Python-based microservices.
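The register-with-a-bus pattern described above can be sketched as a toy registry. Everything here (class name, payload fields, the example service name and URL) is illustrative only, not OPEN-O's actual Microservice Bus API:

```python
# Toy sketch of the "register with a central bus" pattern: each microservice
# announces its exposed APIs and endpoint address at startup, and other
# components discover it by name. Names and fields are hypothetical.

class MicroserviceBus:
    """Minimal in-memory service registry keyed by (name, version)."""

    def __init__(self):
        self._services = {}

    def register(self, name, version, base_url, apis):
        # A real bus would also health-check the endpoint and persist the entry.
        self._services[(name, version)] = {"url": base_url, "apis": list(apis)}

    def lookup(self, name, version):
        entry = self._services.get((name, version))
        if entry is None:
            raise KeyError(f"no such service: {name} {version}")
        return entry


bus = MicroserviceBus()
bus.register("nfvo-driver-openstack", "v1",
             "http://10.0.0.5:8484", ["/openoapi/nfvodriver/v1"])
print(bus.lookup("nfvo-driver-openstack", "v1")["url"])  # -> http://10.0.0.5:8484
```

Because every component goes through the same lookup step, adding a new module is just one more `register` call, which is what makes the architecture easy to extend.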

CORD
As mentioned above, CORD was introduced originally as an ONOS application, but grew into a standalone platform that covers both ONOS-managed SDN regions and service orchestration entities implemented by XOS. Both ONOS and XOS provide a service framework to enable the Everything-as-a-Service (XaaS) concept. Thus, the reference CORD implementation consists of both a hardware Pod (whitebox switches and servers) and a software platform (ONOS and XOS with the appropriate applications). From the software standpoint, the CORD platform implements an agent- or driver-based approach in which XOS ensures that each registered driver used for a particular service is in an operational state (Figure 5 – CORD platform architecture). The CORD reference implementation consists of Java (ONOS and its applications) and Python (XOS) software stacks. Additionally, Ansible is heavily used by CORD for automation and configuration management.

Gigaspaces Cloudify

From a high-level perspective, the Cloudify platform consists of several different pieces (Figure 6 – Cloudify platform architecture):
  • Cloudify Manager is the orchestrator that performs deployment and lifecycle management of the applications or NSDs described in the templates, called blueprints.
  • The Cloudify Agents are used to manage workflow execution via an appropriate plugin.
To provide overall lifecycle management, Cloudify integrates third-party components such as:
  • Elasticsearch, used as a data store for the deployment state, including runtime data and log data coming from various platform components.
  • Logstash, used to process log information coming from platform components and agents.
  • Riemann, used as a policy engine to process runtime decisions about availability, SLA and overall monitoring.
  • RabbitMQ, used as an async transport for communication among all platform components, including remote agents.
The orchestration functionality itself is provided by the ARIA TOSCA project, which defines the TOSCA-based blueprint format and deployment workflow engine. Cloudify “native” components and plugins are python applications.

Approach for NSD definition

The Network Service Descriptor (NSD) specifies components and the relations between them to be deployed on the top of the IaaS during the NFV service instantiation. Orchestration platforms typically use some templating language to define NSDs. While the industry in general considers TOSCA as a de-facto standard to define NSDs, alternative approaches are also available across various platforms.

OSM
OSM follows the official MANO specification, which has definitions for both NSDs and VNF Descriptors (VNFDs). NSD templates are defined as YAML-based documents. The NSD is processed by the OSM Service Orchestrator to instantiate a Network Service, which itself might include VNFs, Forwarding Graphs, and Links between them. A VNFD is a deployment template that specifies a VNF in terms of deployment and operational behaviour requirements. Additionally, the VNFD specifies connections between Virtual Deployment Units (VDUs) using internal Virtual Links (VLs). Each VDU in the OSM representation relates to a VM or a container. OSM uses an archive format for both NSDs and VNFDs; this archive consists of the service/VNF description, initial configuration scripts and other auxiliary details. You can find more information about the OSM NSD/VNFD structure at the official website.
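To make the NSD/VNFD relationship concrete, here is a small illustrative model in Python. The field names are simplified stand-ins, not the exact OSM descriptor schema; the check mirrors the idea that each VDU interface must attach to a declared internal VL, and that an NS references its constituent VNFDs:

```python
# Illustrative model of the descriptor hierarchy: an NSD references
# constituent VNFDs and virtual links; each VNFD groups VDUs whose
# interfaces attach to internal VLs. Field names are simplified.

vnfd = {
    "id": "firewall-vnfd",
    "vdus": [{"id": "fw-vm", "interfaces": ["mgmt", "data"]}],
    "internal-vls": ["mgmt", "data"],
}

nsd = {
    "id": "secure-chain-nsd",
    "constituent-vnfds": ["firewall-vnfd"],
    "vls": ["ns-mgmt-net"],
}

def validate_vnfd(descriptor):
    """Check that every VDU interface maps to a declared internal VL."""
    declared = set(descriptor["internal-vls"])
    for vdu in descriptor["vdus"]:
        for iface in vdu["interfaces"]:
            if iface not in declared:
                raise ValueError(f"{vdu['id']}: interface {iface} has no VL")
    return True

assert vnfd["id"] in nsd["constituent-vnfds"]  # NS references this VNFD
assert validate_vnfd(vnfd)
```

A real orchestrator performs the same kind of cross-reference validation when it on-boards a descriptor archive, before any VM is spawned.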

OPEN-O
In OPEN-O, TOSCA-based templates are used to describe the NS/VNF Package. Both the TOSCA general service profile and the more recent NFV profile can be used for the NSD/VNFD, which is further packaged according to the Cloud Service Archive (CSAR) format. A CSAR is a zip archive that contains at least two directories: TOSCA-Metadata and Definitions. The TOSCA-Metadata directory contains information that describes the content of the CSAR and is referred to as the TOSCA metafile. The Definitions directory contains one or more TOSCA Definitions documents, which define the cloud application to be deployed during CSAR processing. More details about OPEN-O NSD/VNFD definitions can be found at the official web site.
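Since a CSAR is just a zip archive with a known layout, a minimal one can be assembled with the standard library. The metafile keys follow the TOSCA CSAR convention; the actual file contents here are placeholders, not a deployable service:

```python
# Build a minimal CSAR in memory: a zip with a TOSCA-Metadata/TOSCA.meta
# metafile pointing at an entry definitions document under Definitions/.
import io
import zipfile

meta = """TOSCA-Meta-File-Version: 1.0
CSAR-Version: 1.1
Created-By: example
Entry-Definitions: Definitions/service.yaml
"""

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as csar:
    csar.writestr("TOSCA-Metadata/TOSCA.meta", meta)
    csar.writestr("Definitions/service.yaml",
                  "tosca_definitions_version: tosca_simple_yaml_1_0\n")

with zipfile.ZipFile(buf) as csar:
    print(sorted(csar.namelist()))
    # -> ['Definitions/service.yaml', 'TOSCA-Metadata/TOSCA.meta']
```

An orchestrator processing the archive reads `TOSCA.meta` first, follows `Entry-Definitions` to the main template, and resolves any further imports from the Definitions directory.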

CORD
To define a new CORD service, you need to provide both TOSCA-based templates and Python-based software components. In particular, when adding a new service, depending on its nature, you might alter one or several platform elements:
  • TOSCA service definition files and the appropriate models, specified as YAML text files
  • REST API models, specified in Python
  • XOS models, implemented as a Django application
  • Synchronizers, used to ensure the service is instantiated correctly and transitioned to the required state
The overall service definition format is based on the TOSCA Simple Profile language specification and presented in the YAML format.

Gigaspaces Cloudify

To instantiate a service or application, Cloudify uses templates called “Blueprints”, which are effectively orchestration and deployment plans. Blueprints are specified as TOSCA YAML files and describe the service topology as a set of nodes, relationships, dependencies, instantiation and configuration settings, monitoring, and maintenance. Beyond the YAML itself, a Blueprint can include multiple external resources such as configuration and installation scripts (or Puppet Manifests, Chef Recipes, and so on) and basically any other resource required to run the application. You can find more details about the structure of Blueprints here.

VNFM and VIM support

NFV service deployment is performed on an appropriate IaaS, which itself is a set of virtualized compute, network and storage resources. The ETSI MANO reference architecture identifies a component to manage these virtualized resources, referred to as the Virtual Infrastructure Manager (VIM). Traditionally, the open source community treats OpenStack/KVM as the de-facto standard VIM. However, an NFV service might span various VIM types and hypervisors, so multi-VIM support is a common requirement for an orchestration engine. Additionally, a separate element in the NFV MANO architecture is the VNF Manager (VNFM), which is responsible for lifecycle management of a particular VNF. The VNFM component might be generic, treating the VNF as a black box and performing similar operations for various VNFs, or vendor-specific, with unique capabilities for managing a given VNF. Both VIM and VNFM communication is performed via the appropriate reference points defined by the NFV MANO architecture.
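The multi-VIM requirement is usually met with a driver (plugin) layer: the orchestrator codes against an abstract VIM interface and selects a concrete driver per cloud type. A minimal sketch of that pattern follows; the class and method names are purely illustrative and no real Nova or vSphere calls are made:

```python
# Sketch of the multi-VIM driver pattern: one abstract interface, one
# concrete driver per cloud type, selected at deployment time.
from abc import ABC, abstractmethod

class VimDriver(ABC):
    @abstractmethod
    def create_vm(self, name, image, flavor):
        """Spawn a VNF component on this VIM; return an opaque instance id."""

class OpenStackDriver(VimDriver):
    def create_vm(self, name, image, flavor):
        # A real driver would call the Nova API; here we just fabricate an id.
        return f"openstack:{name}"

class VmwareDriver(VimDriver):
    def create_vm(self, name, image, flavor):
        return f"vmware:{name}"

DRIVERS = {"openstack": OpenStackDriver, "vmware": VmwareDriver}

def deploy(vim_type, name):
    driver = DRIVERS[vim_type]()  # the orchestrator picks the driver by VIM type
    return driver.create_vm(name, image="vnf-img", flavor="m1.small")

print(deploy("openstack", "fw-1"))  # -> openstack:fw-1
```

Adding support for a new VIM then means writing one more driver class and registering it, without touching the orchestration workflow itself, which is essentially how the platforms reviewed below approach the problem.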

OSM
The OSM project was conceived as a multi-VIM platform and, at the time of this writing, supports OpenStack, VMware and OpenVIM. OpenVIM is a lightweight VIM implementation that is effectively a Python wrapper around libvirt plus basic host networking configuration. At the time of this writing, the OSM VCA has limited capabilities but can still be considered a generic VNFM based on Juju Charms. Further, it is possible to introduce support for vendor-specific VNFMs, but additional development and integration effort might be required on the Service Orchestrator (SO) side.

OPEN-O
Release 1 of the OPEN-O project supports only OpenStack as a VIM. This support is available as a Java-based driver for the NFVO component. Support for VMware as a VIM is planned for future releases. The OPEN-O Rel.1 platform has a generic VNFM based on Juju Charms. Furthermore, the pluggable architecture of the OPEN-O platform can support any vendor-specific VNFM, but additional development and integration effort will be required.

CORD
At the time of this writing, the reference implementation of the CORD platform is architected around OpenStack as the platform to spawn NFV workloads. While there is no direct relationship to the NFV MANO architecture, the XOS orchestrator is responsible for VNF lifecycle management and thus can be thought of as the entity providing VNFM-like functions.

Gigaspaces Cloudify

When Cloudify was adapted for the NFV use case, it inherited plugins for OpenStack, VMware, Azure and others that were already available for general-purpose cloud deployments. So we can say that Cloudify has multi-VIM support, and support for an arbitrary VIM may be added via an appropriate plugin. Following Gigaspaces’ reference model for NFV, there is a generic VNFM that can be used with the Cloudify NFV orchestrator out of the box. Additional vendor-specific VNFMs can be onboarded, but appropriate plugin development is required.

Capabilities to provision end-to-end service

NFV service provisioning consists of multiple steps, such as VNF instantiation, configuration, underlay network provisioning, and so on.  Moreover, an NFV service might span multiple clouds and geographical locations. This kind of architecture requires complex workflow management by an NFV Orchestrator, and coordination and synchronisation between infrastructure entities. This section provides an overview of the various orchestrators’ abilities to provision end-to-end service.

OSM
The OSM orchestration platform supports NFV service deployment spanning multiple VIMs. In particular, the OSM RO component (openmano) stores information about all VIMs available for deployment, and the Service Orchestrator can use this information during the NSD instantiation process. Meanwhile, underlay networking between VIMs must be preconfigured. There are plans to enable end-to-end network provisioning in the future, but OSM Rel. 1 has no such capability.

OPEN-O
By design, the OPEN-O platform considers both NFV and SDN infrastructure regions that might be used to provision end-to-end services. So technically, you can say that a multisite NFV service can be provisioned by the OPEN-O platform. However, the OPEN-O Rel.1 platform implements just a couple of specific use cases, and at the time of this writing, you can’t use it to provision an arbitrary multisite NFV service.

CORD
The reference implementation of the CORD platform defines the provisioning of a service over a defined CORD Pod. To enable multisite NFV service instantiation, an additional orchestration level on top of CORD/XOS is required. So from this perspective, at the time of this writing, CORD is not capable of instantiating a multisite NFV service.

Gigaspaces Cloudify

As Cloudify originally supported application deployment over multiple IaaS providers, it is technically possible to create a blueprint that deploys an NFV service spanning multiple VIMs. However, underlay network provisioning might require specific plugin development.

Interaction with standardization bodies and relevant communities

Each of the reviewed projects has strong industry community support. Depending on the nature of each community and the priorities of the project, there is a different focus on collaboration with industry, other open source projects and standardization bodies.

OSM
Being hosted by ETSI, the OSM project closely collaborates with the ETSI NFV working group and follows the appropriate specifications, reference points and interfaces. At the time of this writing there is no formal collaboration between OSM and the OPNFV project, but it is under consideration by the OSM community. The same applies to other relevant open source projects, such as OpenStack and OpenDaylight; these projects are used as-is by the OSM platform without cross collaboration.

OPEN-O
The OPEN-O project aims to integrate both SDN and NFV solutions to provide end-to-end services, so there is formal communication with the ETSI NFV group, although the project itself doesn’t strictly follow the interfaces defined by the ETSI NFV IFA working group. On the other hand, there is a strong integration effort with the OPNFV community via the initiation of the OPERA project, which aims to integrate the OPEN-O platform as a MANO orchestrator for the OPNFV platform. Additionally, there is strong interaction between OPEN-O and the MEF as part of the OpenLSO platform, and with the ONOS project toward seamless integration and enabling end-to-end SDN orchestration.

CORD
Having originated at ON.Lab (recently merged with the ONF), this project follows the approach and technology stack defined by the ONF. As of the time of this writing, the CORD project has no formal presence in OPNFV. Meanwhile, there is communication with the MEF and ONF toward requirements gathering and use cases for the CORD project. In particular, the MEF explicitly refers to E-CORD and its applicability in defining their OpenCS MEF project.

Gigaspaces Cloudify

While the Cloudify platform is an open source product, it is mostly developed by a single company, so the overall roadmap and community strategy are defined by Gigaspaces. This also applies to collaboration with standardisation bodies: GigaSpaces participates in ETSI-approved NFV PoCs where Cloudify is used as a service orchestrator, and in an MEF-initiated LSO Proof of Concept, where Cloudify is used to provision an E-Line EVPL service, and so on. Additionally, the Cloudify platform is used by the OPNFV community in the FuncTest project for vIMS test cases, but this mostly reflects Cloudify use cases rather than vendor-initiated community collaboration.

Conclusion
Summarising the current state of the NFV orchestration platforms, we may conclude the following:

The OSM platform is already suitable for evaluation purposes and has a relatively simple and straightforward architecture. Several sample NSDs and VNFDs are available for evaluation in the public gerrit repo. As a result, the platform can be easily installed and integrated with an appropriate VIM to evaluate basic NFV capabilities, trial use cases and PoCs. The project is relatively young, however, and a number of features still require development and will appear in upcoming releases. Furthermore, the lack of support for end-to-end NFV service provisioning across multiple regions, including underlay network provisioning, should be weighed against your desired use case. Considering the mature OSM community and its close interaction with the ETSI NFV group, this project might emerge as a viable option for production-grade NFV orchestration.

At the time of this writing, the main visible benefit of the OPEN-O platform is its flexible and extendable microservices-based architecture. The OPEN-O approach has considered end-to-end service provisioning spanning multiple SDN and NFV regions from the very beginning. Additionally, the OPEN-O project actively collaborates with the OPNFV community toward tight integration of the orchestrator with the OPNFV platform. Unfortunately, at the time of this writing, the OPEN-O platform requires further development to be capable of arbitrary NFV service provisioning. Additionally, a lack of documentation makes it hard to understand the microservice logic and the interaction workflows. Meanwhile, the recent merge of OPEN-O and ECOMP under the ONAP project creates a powerful open source community with strong industry support, which may reshape the overall NFV orchestration market.

The CORD project is the right option when OpenFlow and whiteboxes are the primary option for computing and networking infrastructure. The platform considers multiple use cases, and a large community is involved in platform development. Meanwhile, at the time of this writing, the CORD platform is a relatively “niche” solution built around OpenFlow and related technologies pushed to the market by the ONF.

Gigaspaces Cloudify is a platform with a relatively long history and, at the time of this writing, emerges as the most mature orchestration solution among the reviewed platforms. While the NFV use case wasn’t part of Cloudify’s original design, its pluggable and extendable architecture and embedded workflow engine enable arbitrary NFV service provisioning. However, if you do consider Cloudify as an orchestration engine, be sure to weigh the risk that decision-making about the overall platform strategy is controlled solely by Gigaspaces.


  1. OSM official website
  2. OSM project wiki
  3. OPEN-O project official website
  4. CORD project official website
  5. Cloudify platform official website
  6. Network Functions Virtualisation (NFV); Management and Orchestration
  7. Cloudify approach for NFV Management & Orchestration
  8. ARIA TOSCA project
  9. TOSCA Simple Profile Specification
  10. TOSCA Simple Profile for Network Functions Virtualization
  11. OPNFV OPERA project
  12. OpenCS project   
  13. MEF OpenLSO and OpenCS projects
  14. OPNFV vIMS functional testing
  15. OSM Data Models; NSD and VNFD format
  16. Cloudify Blueprint overview


by Guest Post at March 08, 2017 07:36 PM

NFVPE @ Red Hat

Kuryr-Kubernetes will knock your socks off!

Seeing kuryr-kubernetes in action in my “Dr. Octagon NFV laboratory” has got me feeling that barefoot feeling – and henceforth has completely knocked my socks off. Kuryr-Kubernetes provides Kubernetes integration with OpenStack networking, and today we’ll walk through the steps so you can get your own instance of it up and running and check it out for yourself. We’ll spin up kuryr-kubernetes with devstack, create some pods and a VM, inspect Neutron and verify the networking is working a charm.

by Doug Smith at March 08, 2017 05:01 PM

OpenStack Superuser

How Kubernetes on OpenStack powers DreamHost’s new web builder

When it comes to establishing a web presence, many small businesses really just need simple, great-looking web pages.  They don’t have the time or the interest to learn how to use complicated site-building tools.

To address this need, DreamHost developed Remixer, an easy-to-use graphic interface to build websites quickly and easily. The application runs on a Kubernetes cluster deployed on top of DreamHost’s OpenStack-based cloud computing resource, DreamCompute.

Thanks to OpenStack, the Kubernetes cluster is deployed with Terraform, without extra plugins or weird extensions. Terraform uses the OpenStack APIs to create the virtual machines and network topology necessary for Kubernetes, including all security rules and the persistent volumes used for etcd and for storing the application’s logs. The whole cluster is created from a simple configuration script.


Kubernetes itself runs on custom Container Linux (previously known as CoreOS)-based images as a hyperkube single binary, the preferred deployment form of Kubernetes. A Python script downloads the basic Container Linux image, configures it for DreamCompute, adds the single hyperkube binary, bundles the image, and uploads it to the Glance image service.

Remixer runs a custom version of Kubernetes to work around a known issue dealing with OpenStack volumes. DreamHost developers sent a pull request to Kubernetes upstream to get that fixed, and are hopeful their contribution will improve things for all users in similar situations.

The Kubernetes cluster is made of three types of instances: master nodes, workers, and etcd instances. Many of these instances require persistent volumes from Cinder, and that’s the reason for the custom patch.


Master instances provide high availability for the main Kubernetes components and host the ingress controllers used to accept and route external traffic using declarative configuration. The resulting cluster can survive the loss of a master node. The ingress traffic is also managed by the master nodes, through an Nginx-based ingress controller. Modern browsers will request the next IP address in the pool if one doesn’t answer; as a result, ingress management is also redundant if all master IPs are added as DNS records for the serviced domain.
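The redundancy described above boils down to trying the next address in the pool when one doesn't answer. A toy sketch of that client-side failover logic (the addresses are from the documentation range, and the reachability probe is injected so the logic stays testable):

```python
# Toy model of DNS round-robin failover: walk the pool of master IPs and
# return the first one a probe reports as reachable. Purely illustrative.

def first_reachable(ip_pool, probe):
    """Return the first IP for which probe(ip) succeeds, else None."""
    for ip in ip_pool:
        if probe(ip):
            return ip
    return None

masters = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]
down = {"203.0.113.10"}  # pretend the first master node was lost

print(first_reachable(masters, lambda ip: ip not in down))
# -> 203.0.113.11
```

In the real cluster the browser plays the role of `first_reachable`, which is why publishing all master IPs as A records is enough to make ingress survive a node loss.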

Flannel is used to provide network services for the pods across node boundaries. Every node registers itself in etcd with the help of a register_node service and timer. Fleet is used to distribute services in the cluster, with services activated on a particular machine depending on its role.

The etcd cluster doesn’t use the etcd discovery service, as the initial hosts for the cluster are generated via Terraform. Etcd instances use volumes to store the cluster data, so the data won’t go away if an instance is lost. To communicate with the etcd cluster, every machine runs a proxy etcd instance connected to the cluster; the etcd cluster is effectively localhost for the other services on the machine.

Remixer’s Kubernetes cluster offers – right from the start – the dashboard, ingress support with automatic TLS certificate management via Let’s Encrypt, log collection and reporting, cluster health/performance metrics in InfluxDB shipped from Heapster, and Grafana as a UI. The size of the cluster is elastic, with worker nodes added or removed with customer demand.


On top of this infrastructure, everything runs Dockerized for the Remixer application itself. On a not-so-busy day there are over 25 pods in the cluster, each running a microservice with specific tasks. The Celery-based worker pods take care of more resource-intensive tasks, like publishing the website and building the page previews. Web pods take care of delivering the front-end application, based on Flux React, while the Nginx pods act as reverse proxies for the Python backend, to integrate with DreamHost APIs to create domains, manage billing, etc. Remixer also requires pods for cache (using Redis) and a RabbitMQ pod.

To complete the app, the DreamHost team designed from the beginning a state-of-the-art analytics service that monitors user experience. This analytics dashboard is designed to help product managers understand how the product is used, in order to minimize user interface pain points and gain visibility on front-end errors. Remixer users are at the center of the development cycle, which is based on hard data.

Without OpenStack and Kubernetes, Remixer would look very different today or would not exist at all. With the microservices distributed as Docker images, the team can focus on delivering features that add value to the customers instead of imaging bare-metal servers to keep up with demand.

Cover Photo // CC BY NC

The post How Kubernetes on OpenStack powers DreamHost’s new web builder appeared first on OpenStack Superuser.

by Stefano Maffulli at March 08, 2017 01:01 PM

SUSE Conversations

SUSE Expert Days już 4 kwietnia w Warszawie

Preparations for SUSE Expert Days are in full swing. The conference is one of the few opportunities in Poland to gain knowledge helpful in planning the future of your own data centers based on open source solutions built for business needs. This year, attendees will be able to learn how open source software can help maintain an edge in a competitive market in …

+read more

The post SUSE Expert Days już 4 kwietnia w Warszawie appeared first on SUSE Blog. Rafal Kruschewski

by Rafal Kruschewski at March 08, 2017 12:11 PM

Does your open source project need a president?

Recently I was lucky enough to be invited to attend the Linux Foundation Open Source Leadership Summit. The event was stacked with many of the people I consider mentors, friends, and definitely leaders in the various open source and free software communities that I participate in.

by SpamapS at March 08, 2017 08:00 AM

March 07, 2017

OpenStack Blog

Helping PTG attendees and other developers get to the OpenStack Summit

Although the OpenStack design events have changed, developers and operators still have a critical perspective to bring to the OpenStack Summits. At the PTG, a common whisper heard in the hallways was, “I really want to be at the Summit, but my [boss/HR/approver] doesn’t understand why I should be there.” To help you out, we took our original “Dear Boss” letter and made a few edits for the PTG crowd. If you’re a contributor or developer who wasn’t able to attend the PTG, with a few edits, this letter can also work for you. (Not great with words? Foundation wordsmith Anne can help you out–anne at


Dear [Boss],


I would like to attend the OpenStack Summit in Boston, May 8-11, 2017. At the Pike Project Team Gathering (PTG) in Atlanta, I was able to learn more about the new development event model for OpenStack. In the past I attended the Summit to participate in the Design Summit, which combined feedback gathering and planning with the design and development work of creating OpenStack releases. One challenge was that the Design Summit did not leave enough time for “head down” work within upstream project teams (some teams ended up traveling to team-specific mid-cycle sprints to compensate). At the Pike PTG, we were able to kickstart the Pike cycle development, working heads down for a full week. We made great progress on both single-project and OpenStack-wide goals, which will improve the software for all users, including our organization.


Originally, I (like many other devs) was under the impression that we no longer needed to attend the OpenStack Summit. However, after a week at the PTG, I see that I have a valuable role to play in the Summit’s “Forum” component. The Forum is where I can gather direct feedback and requirements from operators and users, and express my opinion and our organization’s views on OpenStack’s future direction. The Forum will also let me engage with other groups that have similar challenges, project goals, and solutions.


While our original intent may have been to send me only to the PTG, I strongly urge us to reconsider. The Summit is still an integral part of the OpenStack design process, and I think my attendance benefits both my professional development and our organization. Because of my participation in the PTG, I received a free pass to the Summit, which I must redeem by March 14.


Thank you for considering my request.
[Your Name]

by Anne Bertucio at March 07, 2017 08:00 PM

OpenStack Superuser

Upcoming OpenStack Community Leadership Training

Great community leaders are made, not born. With that idea in mind, OpenStack’s Stewardship Working Group is organizing a second edition of Community Leadership Training, April 11-13.

Who should attend?

“Anyone who wants to be involved in the OpenStack Community in any way should attend – that goes for users who want to be active, developers, or even members of any working group,” Colette Alexander, who is part of the SWG, tells Superuser.

These small, workshopped sessions are capped at 20 participants. The first edition was offered last year to technical committee members, Foundation staff and board members; sample sessions feature topics including “servant leadership” and “mindfulness in management,” while time is also set aside for debriefing and reflection. The sessions are held at ZingTrain in Ann Arbor, Michigan. ZingTrain has a long track record in areas of leadership work that closely align with the OpenStack community’s values of democracy, openness and consensus building.

If you can confirm your ability to attend, the sign-up is here:

The Etherpad also has further details about timing, place, recommended locations to stay, etc. You can also scroll down to read a sample itinerary of subjects covered in the training. If you have further questions, Alexander suggests asking questions in this mailing list thread, or in #openstack-swg as many folks who frequent that channel have attended.

A few other things to note:

– This is the exact same training that was done last year.
– There will be 1-2 attendees from the previous edition to give some context and continuity to the discussions.
– Training costs are fully funded by the Foundation. Attendees need to cover the cost of travel, lodging and some meals (breakfast and lunch during training are provided).
– Deadline for applications is March 24, so please start the work of getting travel approvals, etc., now.

Any more questions? You can also ping Alexander on IRC at gothicmindfood.

Cover Photo // CC BY NC

The post Upcoming OpenStack Community Leadership Training appeared first on OpenStack Superuser.

by Nicole Martinelli at March 07, 2017 01:03 PM


OpenStack Australia Day Melbourne – 3 months to go!

Aptira - OpenStack Australia Day - Melbourne

The countdown is officially on! Less than 3 months to go until OpenStack Australia Day comes to Melbourne.

This event will feature a range of sessions on the broader cloud and Software Defined Infrastructure ecosystem, including OpenStack, containers, PaaS and automation, with insights from some of the most talented members of the OpenStack community. Attendees will have the opportunity to hear real business cases, learn about new products and participate in hands-on workshops, plus a networking event offering a less formal opportunity to engage with the community.

OpenStack Australia Day Melbourne will be held at the Rydges Hotel in Melbourne’s CBD, situated in the heart of the city’s vibrant theatre district, with Chinatown, exclusive Collins Street boutiques and the world-famous Bourke Street Mall only moments away. The venue offers free event wi-fi for all attendees, a restaurant featuring the best of Victorian and Australian produce, contemporary bars and on-site accommodation. Accommodation discount codes are available upon request and are subject to availability.

The sponsor and catering area is truly unique. Amazing decor, 3D theming of a bygone era, food carts and a donut wall (yes a DONUT WALL!). Delegates will have the opportunity to interact with the community in a setting unlike any other OpenStack event.

For more information regarding sponsorship, speaker submissions and ticket sales, please visit:

I hope to see you all there!

The post OpenStack Australia Day Melbourne – 3 months to go! appeared first on Aptira Cloud Solutions.

by Jessica Field at March 07, 2017 03:35 AM

Hugh Blemings

OpenStack PTG Atlanta 2017 Summary of Summaries


The last couple of editions of Lwood included a list of links to mailing list posts from the preceding week where the writers have provided a summary of particular PTG sessions and/or commentary about the overall event.

Like the previous ones for Austin and Barcelona, the list below aggregates these weekly posts into one readily searchable list.

Summaries posted to the OpenStack-Dev mailing list

This list will be updated each week with any new summaries that are posted to the list.  Additions/corrections welcome.

Updates: 20170314 – Added Acceleration, Heat, Ironic, Mistral, Neutron and Octavia links; 20170313 – Corrected TripleO link;

by hugh at March 07, 2017 03:33 AM

March 06, 2017

Gal Sagie

Submitting a Talk To OpenStack Summit

I haven’t written a post for some time now; I’ve been busy creating something very special, which I hope to share really soon. I usually write about technical things in this blog, and I will continue to do so after this post :) but I wanted to share some of the insights I gained from being both a returning speaker and a track chair at the recent OpenStack summits.

I was fortunate to be a track chair at the last two OpenStack summits (and part of the team for OpenStack Israel 2016 and 2017), a task that I personally take very seriously and make sure to devote the needed time to. If you aren’t familiar with what a track chair is, I suggest you read this link, which describes how a talk is selected for the OpenStack summit.

In this post I wanted to share some of the points that I personally think are important for potential speakers to address and pay attention to. I believe at least some of this is shared by other track chairs and by committees for other conferences, and I hope these tips will also help future track chairs make better decisions.

Your Bio

It is extremely important for me to understand who you are; the speaker is part of my selection criteria. No matter what title you hold or what your experience is, there is absolutely no reason for your bio to consist of only one sentence.

I want to hear what your current role is, but also what you work on and how it relates to OpenStack (that’s already 2-3 sentences). If you are a contributor, or you write a blog, or you have presented before, please share those details.

I tend to look up the speakers to find this information, but you can make this process much easier if you just supply it :)

The One Speaker Syndrome

OpenStack, and I feel open source in general, is a team effort; it’s a community effort, and I personally like to see talks with more than one speaker, preferably from different companies.

It’s of course not mandatory, and not the highest criterion on this list, but if you are really presenting a topic that is interesting and that the community is working on, I suggest you devote some time to finding a co-speaker. From my personal experience, when you work in OpenStack it’s not that hard to find this person.

Talks with diverse speakers help us track chairs see that the subject is obviously of interest to more than one company, let us feel safer that it’s not going to be a marketing pitch (of course, I have been disappointed before…), and ensure we will get a few points of view on the subject.

The Topic

This is a personal preference, but to me topics should be engaging; they should trigger your interest and make you read the abstract.

Give some thought to your topic; try to make it unique and capture the essence of what you are going to present and how it might differ from all the others. You would be surprised how many topics I see that are recurring, that try to be an abstract (too long), or that are simply unclear…

Picking a good topic is probably an art; try to devote some thinking to it.

The Abstract

The abstract should give the potential audience a good, accurate summary of what is going to be presented.

Keeping an abstract “mysterious” can be appealing at times, but it makes the audience’s and the track chairs’ job difficult: how can we determine whether a talk is interesting and should be included if we don’t even understand what you plan to present?

If there is any supporting information about what you are going to present, such as project links, blog posts, code or previous presentations, include it in the extra sections available when submitting a talk.

Check whether what you are presenting has been presented before; it’s hard, but I think everyone prefers new and updated material.

Avoid marketing pitches; that’s what the marketplace is for.

Fixing The World Yesterday

Abstracts that sound unrealistic, or that promise to improve everything with nothing, are less appealing to me. There are always exceptions, but I think it’s important to show at least some potential feasibility (now or in the future) for whatever you are presenting.

Too Many Subtopics

Your time is limited and your audience may range from beginners to the creators of OpenStack, so you need to focus! Don’t try to cover too much; plan your time and presentation, and do this before you list all the subjects you are going to present in the abstract.

Filling the abstract with as many buzzwords as you can really doesn’t help you get selected, and it really doesn’t help us understand where you are going to focus the limited time you have.


I hope these tips will help you when you submit your next talk proposal. It’s important to note that all of these thoughts are only my own and in no way reflect any official statement from OpenStack or anyone else; feel free to disagree with and contradict them.

The selection process involves many other criteria that I didn’t list here; in the end, it’s everyone’s goal to have the most interesting and diverse agenda for the ENTIRE OpenStack community and the people who attend the summit. If you have any more suggestions or tips, I would love to hear them.

Until next time…

March 06, 2017 11:25 PM


Blogs, week of March 6th

There's lots of great blog posts this week from the RDO community.

RDO Ocata Release Behind The Scenes by Haïkel Guémar

I have been involved in 6 GA releases of RDO (From Juno to Ocata), and I wanted to share a glimpse of the preparation work. Since Juno, our process has tremendously evolved: we refocused RDO on EL7, joined the CentOS Cloud SIG, moved to Software Factory.


Developing Mistral workflows for TripleO by Steve Hardy

During the newton/ocata development cycles, TripleO made changes to the architecture so we make use of Mistral (the OpenStack workflow API project) to drive workflows required to deploy your OpenStack cloud.


Use a CI/CD workflow to manage TripleO life cycle by Nicolas Hicher

In this post, I will present how to use a CI/CD workflow to manage TripleO deployment life cycle within an OpenStack tenant.


Red Hat Knows OpenStack by Rich Bowen

Clips of some of my interviews from the OpenStack PTG last week. Many more to come.


OpenStack Pike PTG: TripleO, TripleO UI - Some highlights by jpichon

For the second part of the PTG (vertical projects), I mainly stayed in the TripleO room, moving around a couple of times to attend cross-project sessions related to i18n.


OpenStack PTG, trip report by rbowen

last week, I attended the OpenStack PTG (Project Teams Gathering) in Atlanta.


by Rich Bowen at March 06, 2017 09:51 PM

Blogs, week of Feb 27th

Here's what RDO enthusiasts have been blogging about in the last couple of weeks. I particularly encourage you to read Julie's excellent writeup of the OpenStack Pike PTG last week in Atlanta. And have a look at my video series from the PTG for other engineers' perspectives.

OpenStack Pike PTG: OpenStack Client - Tips and background for interested contributors by jpichon

Last week I went off to Atlanta for the first OpenStack Project Teams Gathering, for a productive week discussing all sort of issues and cross-projects concerns with fellow OpenStack contributors.


SDN with Red Hat OpenStack Platform: OpenDaylight Integration by Nir Yechiel, Senior Technical Product Manager at Red Hat

OpenDaylight is an open source project under the Linux Foundation with the goal of furthering the adoption and innovation of software-defined networking (SDN) through the creation of a common industry supported platform. Red Hat is a Platinum Founding member of OpenDaylight and part of the community alongside a list of participants that covers the gamut  from individual contributors to large network companies, making it a powerful and innovative engine that can cover many use-cases.


Installing TripleO Quickstart by Carlos Camacho

This is a brief recipe about how to manually install TripleO Quickstart in a remote 32GB RAM box and not dying trying it.


RDO Ocata released by jpena

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Ocata for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Ocata is the 15th release from the OpenStack project, which is the work of more than 2500 contributors from around the world (source).


OpenStack Project Team Gathering, Atlanta, 2017 by Rich Bowen

Over the last several years, OpenStack has conducted OpenStack Summit twice a year. One of these occurs in North America, and the other one alternates between Europe and Asia/Pacific.


Setting up a nested KVM guest for developing & testing PCI device assignment with NUMA by Daniel Berrange

Over the past few years OpenStack Nova project has gained support for managing VM usage of NUMA, huge pages and PCI device assignment. One of the more challenging aspects of this is availability of hardware to develop and test against. In the ideal world it would be possible to emulate everything we need using KVM, enabling developers / test infrastructure to exercise the code without needing access to bare metal hardware supporting these features.


ANNOUNCE: libosinfo 1.0.0 release by Daniel Berrange

NB, this blog post was intended to be published back in November last year, but got forgotten in draft stage. Publishing now in case anyone missed the release…


Containerizing Databases with Kubernetes and Stateful Sets by Andrew Beekhof

The canonical example for Stateful Sets with a replicated application in Kubernetes is a database.


Announcing the ARA 0.11 release by dmsimard

We’re on the road to version 1.0.0 and we’re getting closer: introducing the release of version 0.11!


by Rich Bowen at March 06, 2017 09:51 PM

OpenStack Superuser

Why open source is like a team sport

Heather Kirksey likes to call them as she sees them.

As director for Open Platform for NFV (OPNFV) — a role she alternatively describes as coach, nerd matchmaker and diplomat — she oversees and provides guidance for all aspects of the project, from technology to community and marketing. At the recent Linux Foundation Open Source Leadership Summit, she headed up a session titled “Open Source as a Team Sport” with OPNFV’s Chris Price and OpenStack’s Jonathan Bryce.

Kirksey put some thought into the proceedings — setting up a crackling fire video on a giant screen and producing a bottle of whiskey to facilitate a “fireside chat.” She kicked off the session with clips from one of her favorite movies, “Miracle.” Based on the true story of a player-turned-coach who brought the 1980 U.S. Olympic hockey team to victory over Russia, early scenes in the movie show how even successful team building can sometimes be a, well, contact sport.


Superuser sat down with Kirksey – in a busy hallway, minus the whiskey – to ask her more about the parallels between hockey and open source. She tells us why the brutality of hockey is a good metaphor for open source, about leveling the open source playing field for women and how you can get involved with OPNFV.

Of all the team sports, hockey is one of the most violent, right?

Why do you think I like hockey? I like my sports with a side of brutality…In most sports there are tensions that flare up and sometimes it can get raw and there are fisticuffs. At the end of the day, you need to come together because you’re trying to accomplish a goal.

In the first clip I showed, there’s the part where the coach starts talking about how to come together after the two guys started fighting. And he was like, ‘hockey is about flow and creativity.’ That’s really where you want to get to when you’re creating something new and innovating, like we are in open-source communities. And remembering that aspect of it, the joy of making things, and creating a space where you can find joy and have fun as a community, is really important.

So there’s still a Kumbaya at the end, but you may have to punch somebody in the face to get there.

Yes. It’s like après-ski — after you’ve like fallen down a mountain for a couple days, that hot tub and whiskey feel awfully nice.


Kirksey leads a “fireside chat” with Jonathan Bryce and Chris Price.

Do you see your role as coach, like the Kurt Russell character in the movie?

He [Herb Brooks] was one of the legendary coaches of hockey, so it’d be a little self-aggrandizing to make that comparison. But I do feel a large part of my role is to facilitate community, to remind people to have empathy…Sometimes it’s reminding people that we are on the same team and we are trying to solve the same problems.

How does your approach change on a mailing list or IRC versus in-person?

The in-person interaction can be more focused on relationship building and helping people get to know each other. Sometimes if the conflict is really getting intense, there’s no substitute for face-to-face or on-phone real-time interactions to talk through things.

Back when I chaired a standards committee, we would have all these disagreements over contributions. And sometimes people would be arguing different aspects and talking past each other. So I developed a framework, for example, if someone was proposing a solution to a problem: First, do we agree that there is a problem? And do we agree that *this* is the problem? All right, now do we agree on how to solve the problem? Are we arguing details of the approach, or are we arguing the entire approach itself? You learn to go through it in a structured way…

Earlier, you mentioned your other role as the “United Nations Secretary” between the OPNFV community and other communities. What’s your diplomatic strategy?

You have to learn where the understanding point is for the other community, which involves getting to know them. And then do some education. A lot of people don’t know how networks are deployed currently, nor what they’re trying to transform to. So giving a little bit of 101 — you have a core and an access piece and the places where mobile and fixed line come together and we’re trying to address these issues with those aspects. Or come up with analogies. I’m a big fan of analogies.

One of the things that I try to do a lot of at events, especially outside ones, is try to come up with activities at the event, or parties and things to facilitate people interacting. And trying to broker introductions: ‘You’re working on performance testing in this community.  And you’re working on performance testing in this community. Hey, why not talk — I’ll buy you a beer.’ I pick up a lot of beer tabs for developers. I’m kind of a professional beer-tab picker-upper.

Do you also see yourself as a matchmaker?

Yeah. A little bit sometimes, a matchmaker of nerds. But if you can find that common ground — say both people are doing the same thing but in different communities, then they immediately have a starting point. And generally people are excited to talk about what they’re working on.

Congratulations on the nomination for the Women in Open Source Award. How long do you think it’ll be before it’s *not* about women in open source— just people who are outstanding, period?

Well, I have been in the tech industry for 19 years now and the percentage of women hasn’t really gone up. Although I think we’re having better conversations about it now. We’re having more allies-oriented conversations, which I think are good.
I think a lot of folks  — anywhere on the gender spectrum — are realizing this shit has got to change…For example, folks are now measuring diversity — you’re not going to change what you don’t measure. The fact that a lot of tech companies are now measuring their diversity and setting diversity goals matters. And the fact that the Linux Foundation worked with National Center for Women & Information Technology (NCWIT) to create training for speakers and conferences matters. We’re seeing more structure around it now, for sure.

What else might make a difference?

Root out the structural inequalities of our modern world? [laughs.] That’s obviously a bigger thing… The issues in tech reflect broader societal issues… Unconscious bias, for example, is one of the big things… Part of it is human: we get used to the way things are, and change is difficult. We get used to people who are like us and share common background experiences, or even common cultural references. And so unconscious bias is a really hard thing to root out. Accepting that it exists, being aware of it, trying to catch yourself, and being intentionally thoughtful and self-reflective in your hiring decisions, your community processes and your conference panels helps.

Get Involved with OPNFV
Whether you’re an employee of a member company or just passionate about network transformation, you’ll find the basics of how to get started (creating an account, mailing lists, Wiki, projects) here.

To meet OPNFVers in person (and maybe get some free beer?) you’ll find them at the upcoming Open Networking Summit, OpenStack Summit and the OPNFV Summit.

For OpenStack individual contributors, what’s a good starting point at OPNFV?

If you’ve got any CI/CD knowledge, that would be great. Also our testing projects are a good place to start, because really what we focus on as a community is NFV stack testing, which spans a lot of different pieces from a lot of different communities. We never have enough tests or people writing test cases. That’s a need, and a lower barrier to entry: scripting a test case versus diving in to change a networking part of Neutron, for example. And then you get a lot more exposure to the software itself: how it’s configured, how it’s run, how it deploys, what it’s supposed to do, and what success and failure look like.

You can also reach out to the individual Project Team Leads (PTLs)…They’re great and they love to hear from people who want to contribute to the project.



Cover photo by: Martyn Hayes

The post Why open source is like a team sport appeared first on OpenStack Superuser.

by Nicole Martinelli at March 06, 2017 12:20 PM

Hugh Blemings



Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here.

Basic Stats for the week 27 February to 5 March for openstack-dev:

~424 Messages (down nearly 26% relative to the long term average)

~154 Unique threads (down a bit over 14% relative to the long term average)

Traffic picked up a bit relative to last week, though this is once again a fairly brief Lwood. The main thing of note to many will, I suspect, be the summary of summaries from the PTG.

Notable Discussions – openstack-dev

OpenStack Summit returns to Vancouver in 2018

Allison Price announced that the Summit is returning to the fair city of Vancouver in May 2018.

OpenStack PTG Atlanta summary of summaries

With the Atlanta PTG concluded, the summaries are starting to come in. As I’ve done previously, I’ll link them over the next few Lwoods and then put together an aggregated list.

OpenStack Community Leadership Training open to all

The opportunity to take the well regarded leadership training program that had previously been made available to the TC, Board and Foundation staff is now being extended to all Community members writes Colette Alexander.

End of Week Wrap-ups, Summaries and Updates

Three this week: Horizon (Rob Cresswell), Ironic (Ruby Loo) and Zuul (Robyn Bergeron).

People and Projects

Core nominations & changes


Further reading

Don’t forget these excellent sources of OpenStack news – most recent ones linked in each case


No tunes this week; I was working remotely and it wasn’t an appropriate setting for tunes (aka I forgot my headphones :)

by hugh at March 06, 2017 11:34 AM


Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.


Last updated:
March 25, 2017 03:24 PM
All times are UTC.

Powered by: