June 28, 2017

OpenStack Superuser

Catch these updates on OpenStack community goals, strategies

Superuser TV was on the ground at the OpenStack Summit Boston talking to community members about everything from edge computing to OPNFV.

A few highlights, in case you missed them:

You can check out all 26 interviews in a playlist on the OpenStack Foundation’s YouTube page.

The post Catch these updates on OpenStack community goals, strategies appeared first on OpenStack Superuser.

by Superuser at June 28, 2017 11:57 AM

June 27, 2017

Ben Nemec

TripleO Network Isolation Template Generator Update

Just a quick update on the TripleO Network Isolation Template Generator. A few new features have been added recently that may be of interest.

The first, and most broadly applicable, is that the tool can now generate either old-style os-apply-config based templates, or new-style tripleo-heat-templates native templates. The latter are an improvement because they allow for much better error handling, and if bugs are found it is much easier to fix them. If your deployment is using Ocata or newer TripleO then you'll want to use the version 2 templates. If you need to support older releases, select version 1.

In addition, support for some additional object types has been added. In particular, the tool can now generate templates for OVS DPDK deployments. I don't have any way to test these templates, unfortunately, so the output is based solely on the examples in the os-net-config repo. Hopefully those are accurate. :-)

If you do try any of the new (or old) features of the tool and have feedback don't hesitate to let me know. To my knowledge I'm still the primary user of the tool so it would be nice to know what, if anything, other people are doing with it.

by bnemec at June 27, 2017 08:52 PM

OpenStack Superuser

High availability of live migration with OpenStack: a report

As the number of companies moving application workloads into public and private clouds mushrooms, live migration becomes a key issue.

The OpenStack Innovation Center (OSIC) benchmark tested live migration to find the best way to move forward with non-impacting cloud maintenance. The team — Ala Raddaoui, Alexandra Settle, John Garbutt and Sarafraj Singh — deployed two 22-node OpenStack clouds using OpenStack-Ansible to test two types of live migration. The tests show that live migration works, both with and without the use of shared storage. You can read details about the test methods and results in a 14-page report they prepared.

“You can use it to avoid the downtime needed to reboot a host for maintenance,” the team states in the report. They recommend that if you decide to empty a host, one live migration at a time works best, as it takes less time to live migrate each VM and has less impact on VM downtime. The team recommends using shared storage where possible and setting the progress timeout to zero if you’re using a release prior to Newton.

“It’s worth noting we do not have a graph for the number of failed live-migrations, because there were none during these test runs,” the report states. “Initial test runs uncovered some bugs that were fixed upstream in the Ocata branch and backported where applicable. After the bugs were addressed, no failures were found.”

You can download the complete report here: High Availability of Live Migration



The post High availability of live migration with OpenStack: a report appeared first on OpenStack Superuser.

by Superuser at June 27, 2017 01:07 PM

James Page

Ubuntu OpenStack Development Summary – 27th June 2017

Welcome to the third Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

OpenStack Distribution

Stable Releases

The next cadence of stable fixes is undergoing testing:

Cinder: RBD calls block entire process (Kilo)

Cinder: Upload to image does not copy os_type property (Kilo)

Swift: swift-storage processes die if rsyslog is restarted (Kilo, Mitaka)

Neutron: Router HA creation race (Mitaka, Newton)

Mitaka Stable Point Releases

Newton Stable Point Releases

Ocata Stable Point Releases

Nova-LXD storage pool compatibility

You’ll notice some lag in the cadence flow at the moment; we’re working with the Ubuntu SRU team to see how the process can be optimised going forward.

Development Release

Builds for all architectures for Ceph 12.0.3 can be found in:


The first RC for Luminous was released last week, so expect that to appear soon in the same location; pending successful testing, 12.1.0 will also be uploaded to Artful and the Ubuntu Cloud Archive for OpenStack Pike.

You’ll also find updated GlusterFS 3.10.x packages in Ubuntu Artful and the Ubuntu Cloud Archive for OpenStack Pike.

OpenStack Pike Milestone 2 is now in the Ubuntu Cloud Archive for OpenStack Pike which can be added to Ubuntu 16.04 LTS installations using:

sudo add-apt-repository cloud-archive:pike

This milestone involved over 70 package updates and 5 new packages for new OpenStack dependencies!

OpenStack Snaps

After some review and testing, the snap sub-team decided to switch back to strict mode for all OpenStack snaps; classic mode was pushing complexity out of snapd and into every snap, which was becoming hard to maintain, so moving back to strict mode snaps made sense.

Alongside this work, we’ve been working on the first cut of ‘snapstack’, a tool to support testing of snaps as part of the gating process for development, and as part of the CI/CD process for migration of snaps across channels in the snap store.

If you want to give the current snaps a spin to see what’s possible, check out snap-test.

Nova LXD

Work on Nova-LXD in the last few weeks has focussed on moving the Tempest DevStack OpenvSwitch experimental gate into the actual blocking gate; this work is now complete for Ocata and Pike releases; tests are not executed against the older Newton stable branch due to a number of races in the VIF plugging part of the driver. This is a significant step forward in assuring the quality of the driver going forwards.

Work is also underway on refactoring the VIF plugging codebase to integrate better with os-vif and Neutron; this should improve the quality of the driver when used with the Linuxbridge mechanism driver in Neutron, and will make integration of other SDN choices easier in the future. This work will also resolve compatibility issues with the native Open vSwitch firewall driver and Nova-LXD.

OpenStack Charms

New Charms

Specifications are up for review for the proposed Gnocchi and GlusterFS charms. Please feel free to read through and provide any feedback on the proposed specifications!

Pike Updates

A few minor updates to support the Pike development release are working through review; these should be landed soon (the team aims to maintain deployability of development milestones alongside OpenStack development).

IRC (and meetings)

As always, you can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.


by JavaCruft at June 27, 2017 09:50 AM

Red Hat Stack

OpenStack Down Under – OpenStack Days Australia 2017

As OpenStack continues to grow and thrive around the world, the OpenStack Foundation continues to bring OpenStack events to all corners of the globe. From community-run meetups to more high-profile events like the larger Summits, there is probably an OpenStack event going on somewhere near you.

One of the increasingly popular events is the OpenStack Days series. OpenStack Days are regionally focussed events sponsored by local user groups and businesses in the OpenStack universe. They are intended to be formal events with a detailed structure, keynotes and sponsorship.

This year’s OpenStack Days – Australia was held June 1st in Melbourne, Australia and Red Hat was proud to be a sponsor with speakers in multiple tracks!


Despite being a chilly Melbourne winter’s day, there was an excellent turnout. Keynotes featured OpenStack Foundation Executive Director Jonathan Bryce presenting a recap of the recent Boston Summit and the overall state of OpenStack. He was joined by Heidi Joy Tretheway, Senior Marketing Manager for the OpenStack Foundation, who presented OpenStack User Survey results (you can even mine the data for yourself). Local community leaders also presented, sharing their excitement for the upcoming Sydney Summit, introducing a pre-summit Hackathon and sharing ideas to help get OpenStack into college curricula.

Just in: The latest OpenStack User Survey has just begun! Get your OpenStack Login ready and contribute today!

Red Hatters Down Under

Red Hatters in Australia and New Zealand (ANZ) are a busy bunch, but we were lucky enough to have two exceptional Red Hat-delivered presentations on the day.

Andrew Hatfield, Practice Lead – Cloud Storage and Big Data


The first, from Practice Lead Andrew Hatfield, kicked off the day’s Technical Track to an overflowing room. Andrew’s session, entitled “Hyperconverged Cloud – not just a toy anymore” introduced the technical audience to the idea of OpenStack infrastructure being co-located with Ceph storage on the same nodes. He discussed how combining services, such as Ceph OSDs onto Nova computes, is fully ready for production workloads in Red Hat OpenStack Platform Version 11. Andrew pointed out how hyperconvergence, when matched properly to workload, can result in both intelligent resource utilisation and cost savings. For industries such as Telco this is an important step forward for both NFV deployments and Edge computing, where a small footprint and highly tuned workloads are required.

Peter Jung, Business Development Executive | Cloud Transformation


In our second session of the day, Red Hat Business Development Executive Peter Jung presented “OpenStack and Red Hat: How we learned to adapt with our customers in a maturing market.” Peter discussed how Red Hat’s OpenStack journey has evolved over the last few years. With the release of Red Hat OpenStack Platform 7 a few short years ago and its introduction of the deployment lifecycle tool, Red Hat OpenStack Platform director, Red Hat has worked hard to solve real-world user issues around deployment and lifecycle management. Peter’s session covered a range of topics and customer examples, further demonstrating how Red Hat continues to evolve with the maturing OpenStack market.

Community led!

Other sessions on the day covered a wide range of topics across the three tracks providing something for everyone. From an amazing talk in the “Innovation Track” by David Perry of The University of Melbourne around running containers on HPC (with live demo!) to a cheeky “Technical Track” talk by Alexander Tsirel of Hivetec on creating a billing system running on OpenStack, the range, and depth, of the content was second to none. ANZ and Stackers really are a dynamic and exciting bunch! The day ended with an interesting panel covering topics ranging from the complexities of upgrading to the importance of diversity.

Keep an eye on OpenStack Australia’s video channel for session videos! And check out previous event videos from Canberra and Sydney for more OpenStack Down Under!

Of course, I’d be remiss if I were to not mention one of the most talked about non-OpenStack items of the day: The Donut Wall. You just have to see it to believe it, and here it is:

A big success Down Under

The day was a resounding success and the principal organisers, Aptira and the OpenStack Foundation, continue to do an exceptional job to keep the OpenStack presence strong in the ANZ region. We at Red Hat are proud and excited to continue to work with the businesses and community that make OpenStack in Australia and New Zealand amazing and are extremely excited to be the host region for the next OpenStack Summit, being held in Sydney November 6-8, 2017. Can’t wait to see you there!

Want to find out how Red Hat can help you plan, implement and run your OpenStack environment? Join Red Hat Architects Dave Costakos and Julio Villarreal Pelegrino in
“Don’t fail at scale: How to plan, build, and operate a successful OpenStack cloud” today.

Use OpenStack every day? Want to know more about how the experts run their clouds? Check out the Operationalising OpenStack series for real-world tips, tricks and stories from OpenStack experts just like you!

by August Simonelli, Technical Marketing Manager, Cloud at June 27, 2017 01:59 AM

Carlos Camacho

OpenStack versions - Upstream/Downstream

I’m adding this note as I’m prone to forget how upstream and downstream versions match up.

  • RHOS Version 0 = Diablo
  • RHOS Version 1 = Essex
  • RHOS Version 2 = Folsom
  • RHOS Version 3 = Grizzly
  • RHOS Version 4 = Havana
  • RHOS Version 5 = Icehouse
  • RHOS Version 6 = Juno
  • RHOS Version 7 = Kilo
  • RHOS Version 8 = Liberty
  • RHOS Version 9 = Mitaka
  • RHOS Version 10 = Newton
  • RHOS Version 11 = Ocata
  • RHOS Version 12 = Pike
  • RHOS Version 13 = Queens
  • RHOS Version 14 = R
  • RHOS Version 15 = S
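
For quick scripting, the table above can be expressed as a small lookup; a minimal Python sketch (the helper name is illustrative, not part of any Red Hat tooling):

```python
# Upstream OpenStack release codename for each downstream RHOS version,
# as listed in the table above (14 and 15 were still placeholders).
RHOS_TO_UPSTREAM = {
    0: "Diablo", 1: "Essex", 2: "Folsom", 3: "Grizzly",
    4: "Havana", 5: "Icehouse", 6: "Juno", 7: "Kilo",
    8: "Liberty", 9: "Mitaka", 10: "Newton", 11: "Ocata",
    12: "Pike", 13: "Queens", 14: "R", 15: "S",
}

def upstream_name(rhos_version):
    """Return the upstream codename for a RHOS version, or None if unknown."""
    return RHOS_TO_UPSTREAM.get(rhos_version)
```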

by Carlos Camacho at June 27, 2017 12:00 AM

June 26, 2017


Victoria Martinez de la Cruz: OpenStack Manila

Victoria Martinez de la Cruz talks Manila, Outreachy, at the OpenStack PTG in Atlanta

by Rich Bowen at June 26, 2017 07:43 PM

Ihar Hrachyshka - What's new in OpenStack Neutron for Ocata

Ihar Hrachyshka talks about his work on Neutron in Ocata, and what's coming in Pike.

by Rich Bowen at June 26, 2017 07:43 PM

OpenStack Superuser

A quick tip for the young, ambitious developer

Life, like surfing, is all about wave selection, and reading waves is a tough skill. If you are looking for the exponential wave to surf for career growth, then read on to unravel the mystery.

Have you ever pursued something difficult that seemed the logical next step in your career, and then, feeling motivated, went on to tackle the monster all by yourself, only to later find yourself almost crushed? If so, consider enrolling yourself in a mentorship program.

“There are options in life. It’s not necessary that high achievements can only be garnered by choosing difficult options,” from the movie “Dear Zindagi.”

Typically, a mentor is an older, more experienced person who works with the mentee to pursue the mentee’s best interests and goals. Mentor-mentee relationships can either be formal – organized through a mentoring program or informal – established through connections.

Most of the benefits of being mentored are commonly recognized, so here are the big ones as a refresher.

  • Structured learning can save you hours of struggle. The role your skill set will play in your career is pretty straightforward: The better you are at what you do, the more successful you will be.
  • Gain exposure from second-hand experiences. This reminds me of a quote by Otto von Bismarck – “Only a fool learns from his own mistakes. The wise man learns from the mistakes of others.”
  • Improve your performance.  We all know how easy it is to get lost in the woods on days of low self-motivation.
  • Build a network and make connections with peers that may otherwise take years to develop.

Remember not to be fooled by the impression that the outcome of your career is dependent on the actions and inputs of others. The truth is, you are in charge of the chemical reaction in which the mentor is merely a catalyst.

If you’re looking for a mentorship program to get on board as an upstream developer in OpenStack, there are plenty of ways. The weekend prior to each OpenStack Summit there’s a two-day Upstream Institute – an intensive program designed to share knowledge about the diverse ways of contributing to OpenStack. Also, the Women of OpenStack group runs a Speed Mentoring session at the OpenStack Summits, which is a great icebreaker to get to know new and experienced people in the OpenStack community.

Nonetheless, if you want to start by working remotely, keep an eye on the weekly meetings and discussions going on in the IRC channels. You may want to check out the Outreachy OpenStack internship, which helps people from groups underrepresented in free and open source software get involved, and runs twice every year. Outreachy has very good projects with awesome mentors. How do I know? Because I was also in your shoes once; kudos to all the contributors who helped me all along and made it worthwhile.

But the story of surfing the exponential wave doesn’t end here. Paying it forward at the Speed Mentoring session at the OpenStack Summit in Boston was an unmatchable experience. Another enriching journey which excites me to the core is about to begin with the upcoming Outreachy internship round (May-Aug 2017), for which I will be co-mentoring the Keystone documentation auditing project. It’s been said that you never really learn something until you teach it. So, take out some time to work and develop yourself after the mentee phase and, once you feel more confident, switch to the mentor phase.

We tend to think about a mentorship program only from the mentee’s perspective, but a little pondering will show that the returns from a mentor’s side are great career boosters as well. A few pointers worth noting are listed below.

  • Refresh your knowledge. Mentoring someone can provide a greater perspective and clarity about what you already know.
  • Hone your leadership, management and communication skills.
  • Build your credibility and reputation as a role model.

Mentoring does not necessarily mean that you must take out a lot of time from your already hectic schedule. All it means is that you spend quality time with the mentee because every interaction counts.

Whether you become a mentor or a mentee, it’s a classic win-win situation. Now that the facts are clear, choose a role that brings you a step closer to your goals. I would love to hear your takeaways from the journey in the comments section!


The post A quick tip for the young, ambitious developer appeared first on OpenStack Superuser.

by Nisha Yadav at June 26, 2017 03:23 PM

June 23, 2017

OpenStack Superuser

How to install Cloudify 4 on AWS and OpenStack


There are generally two paths to installing Cloudify Manager on your cloud – either via a Cloudify Manager Pre-baked Image or bootstrapping (which gives the user the option of choosing what is installed on the machine). In an effort to make the installation of Cloudify Manager as simple as possible, this tutorial will walk you through the more basic path to provisioning your AWS or OpenStack infrastructure and get your first Cloudify Manager up and configured – using a pre-baked image. In the next post, we will walk you through bootstrapping Cloudify Manager on your cloud of choice, including Azure (which we don’t have an image for yet) and we will follow that up with deploying your first demo web application.

Keep in mind that this method of installing Cloudify Manager is not necessarily ideal, but it will give you the best understanding of what is required to get your environment set up in about five minutes and should work the very first time you try it (barring unforeseen circumstances based on your environment or cloud setup).

One last note: We will be installing the Open Source Community Edition of Cloudify in this tutorial.

To ask a question or report an issue, please visit the Cloudify users group.


There are a few things to keep in mind and prepare before getting started, so please read through this post carefully at least once before beginning.

  • Python 2.7 installed on your computer.
  • Virtualenv installed on your computer.
  • IaaS Cloud provider and API credentials and sufficient permissions to provision network and compute resources (a new, clean environment is always best):
  • AWS Credentials
  • OpenStack Credentials (skip step 5 in those instructions; do not “source” the file).

Getting Started

It is highly suggested you run Cloudify in a virtual environment such as virtualenv. Once you have created your environment in the terminal (or command prompt or Powershell), continue with the Cloudify CLI installation.


1. Install Cloudify CLI on your computer by downloading the binary file for your operating system (Linux, Ubuntu, or Windows) and installing it. If you are using a Mac, just run pip install cloudify in the terminal and it will install for you.


2. Download and extract this blueprint archive to the directory (folder) of your choice AND make sure your terminal is currently in that directory.


3. Install your environment’s infrastructure by executing one of the example commands below, inserting your account credentials where indicated. The simplest way to do this is to copy the text to a text editor, add the details, and then paste them into the terminal.

NOTE: AWS users – this process will automatically be run on US-EAST-1 (N. Virginia). For slightly more advanced users who may want to use a different region, open the “aws-blueprint.yaml” file in a text editor to customize the inputs.


For AWS run:

$ cfy install cloudify-environment-setup-latest/aws-blueprint.yaml -i aws_access_key_id=[INSERT_YOUR_AWS_ACCESS_KEY] -i aws_secret_access_key=[INSERT_YOUR_AWS_SECRET_ACCESS_KEY] --task-retries=30 --task-retry-interval=5 --install-plugins

For Openstack run:

$ cfy install cloudify-environment-setup-latest/openstack-blueprint.yaml -i username=[INSERT_YOUR_OPENSTACK_USERNAME] -i password=[INSERT_YOUR_OPENSTACK_PASSWORD] -i tenant_name=[INSERT_YOUR_OPENSTACK_TENANT_NAME] -i auth_url=[INSERT_YOUR_OPENSTACK_V2.0AUTH_URL] -i region=[INSERT_YOUR_OPENSTACK_REGION] -i external_network_name=[INSERT_YOUR_OPENSTACK_EXTERNAL_NETWORK_NAME] -i cloudify_image_id=[INSERT_YOUR_OPENSTACK_CENTOS_OR_CLOUDIFY_IMAGE_ID] -i ubuntu_trusty_id_examples=[INSERT_YOUR_OPENSTACK_UBUNTU_TRUSTY_IMAGE_ID] -i small_openstack_image_flavor=[INSERT_YOUR_OPENSTACK_SMALL_IMAGE_FLAVOR_ID] -i large_openstack_image_flavor=[INSERT_YOUR_OPENSTACK_LARGE_IMAGE_FLAVOR_ID] --task-retries=30 --task-retry-interval=5 --install-plugins


4. Get info to configure Cloudify Manager by running ‘cfy deployments outputs’ in your terminal.

The output should look like this:


For the purpose of this tutorial, you will only need to follow the “Configuration” steps. Ignore the “Bootstrap” and “Demo” sections. This will ready our environment to run the webapp we will deploy in a future post.

5. Configure your manager:

At this stage, it is suggested to wait 5 minutes for all of the services to synchronize.

Initialize the manager CLI profile:

You need to initialize a manager profile in order to control your manager. Copy the text from your outputs in the previous step and paste it in your terminal. It will look like this:

$ cfy profiles use -s cfyuser -k ~/.ssh/cfy-manager-key -u admin -p admin -t default_tenant **.**.***.***


Upload the plugins for your manager:

Note: the exact plugins you need to upload will vary. For this example, you will be shown the plugins to upload in your outputs.


Create your secrets:

Adding secrets to your manager makes your deployments more secure. The exact secrets you add also vary by cloud. Again, copy the commands from your previous step outputs and paste them into your terminal.

Note that in the last command, the double-quotes are unescaped:

The deployment output was like this:

$ cfy secrets create agent_key_private -s \"$(<~/.ssh/cfy-agent-key)\"

But you will need to remove the \ on either side of the quotes so it looks like this:

$ cfy secrets create agent_key_private -s "$(<~/.ssh/cfy-agent-key)"


Your manager is now installed and configured!

6. When you are ready to uninstall your environment, run:

$ cfy profiles use local
$ cfy uninstall --allow-custom-parameters -p ignore_failure=true --task-retries=30 --task-retry-interval=5


Watch the video below to see the tutorial in action:

Luther Trammell is a cloud solutions architect at Cloudify, an open source TOSCA-based cloud management platform. This tutorial first appeared on Cloudify’s blog.

Superuser is always interested in community content, get in touch at editorATopenstack.org

The post How to install Cloudify 4 on AWS and OpenStack appeared first on OpenStack Superuser.

by Luther Trammell at June 23, 2017 12:21 PM


Introducing Software Factory - part 1

Introducing Software Factory

Software Factory is an open source software development forge with an emphasis on collaboration and ensuring code quality through Continuous Integration (CI). It is inspired by OpenStack's development workflow, which has proven reliable for fast-changing, interdependent projects driven by large communities.

Software Factory is batteries included, easy to deploy and fully leverages OpenStack clouds. Software Factory is an ideal solution for code hosting, feature and issue tracking, and Continuous Integration, Delivery and Deployment.

Why Software Factory?

OpenStack, as a very large collection of open source projects with thousands of contributors across the world, had to solve scale and interdependency problems to ensure the quality of its codebase; this led to designing new best practices and tools in the fields of software engineering, testing and delivery.

Unfortunately, until recently these tools were mostly custom-tailored for OpenStack, meaning that deploying and configuring all these tools together to set up a similar development forge would require a tremendous effort.

The purpose of Software Factory is to make these new tools easily available to development teams, and thus help to spread these new best practices as well. With a minimal effort in configuration, a complete forge can be deployed and customized in minutes, so that developers can focus on what matters most to them: delivering quality code!

Innovative features

Multiple repository project support

Software projects requiring multiple, interdependent repositories are very common. Among standard architectural patterns, we have:

  • Modular software that supports plugins, drivers, add-ons…
  • Clients and Servers
  • Distribution of packages
  • Micro-services

But being common does not mean being easy to deal with.

For starters, ensuring that code validation jobs are always built from the adequate combination of source code repositories at the right version can be headache-inducing.

Another common problem is that issue tracking and task management solutions must be aware that issues and tasks can and will span several repositories at a time.

Software Factory supports multiple repository projects natively at every step of the development workflow: code hosting, story tracking and CI.

Smart Gating system

Usually, a team working on new features will split the tasks among its members. Each member will work on his or her branch, and it will be up to one or more release managers to ensure that branches get merged back correctly. This can be a daunting task with long-lived branches, and prone to human error on fast-evolving projects. Isn't there a way to automate this?

Software Factory provides a CI system that ensures master branches are always "healthy" using a smart gate pipeline. Every change proposed on a repository is tested by the CI and must pass these tests before being eligible to land on the master branch. Nowadays this is quite common in modern CI systems, but Software Factory goes above and beyond to really make the life of code maintainers and release managers easier:

Automatic merging of patches into the repositories when all conditions of eligibility are satisfied

Software Factory will automatically merge mergeable patches that have been validated by the CI and at least one privileged repository maintainer (in Software Factory terminology, a "core developer"). This is configurable and can be adapted to any workflow or team size.

The Software Factory gating system takes care of some of the release manager's tasks, namely managing the merge order of patches, testing them, and integrating them or requiring further work from the author of the patch.
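The eligibility rule described above can be sketched as a simple predicate; this is an illustrative Python model only (the function and parameter names are assumptions, not Software Factory's actual API), with the approval threshold left configurable as the text describes:

```python
def is_mergeable(ci_passed, core_approvals, required_approvals=1):
    """A patch becomes eligible for the gate once the CI has validated it
    and enough core developers have approved it; the threshold is
    configurable per project and workflow."""
    return ci_passed and core_approvals >= required_approvals
```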

Speculative merging strategy

Actually, once a patch is deemed mergeable, it is not merged immediately into the code base. Instead, Software Factory will run another batch of (fully customizable) validation jobs on what the master code base would look like if that patch, plus other mergeable patches, were merged. In other words, the validation jobs are run on a code base consisting of:

  • the latest commit on the master branch, plus
  • any other mergeable patches for this repository (rebased on each other in approval order), plus
  • the mergeable patch on top

This ensures not only that the patch to merge is always compatible with the current code base, but also detects compatibility problems between patches before they can do any harm (if the validation jobs fail, the patch remains at the gate and will need to be reworked by its author).

This speculative strategy greatly minimizes the time needed to merge patches, because the CI assumes by default that all mergeable patches will pass their validation jobs successfully. And even if a patch in the middle of the currently tested chain of patches fails, the CI will discard only the failing patch and automatically rebase the others (those previously rebased on the failed one) onto the closest remaining one. This is really powerful, especially when integration tests take a long time to run.
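The queue behaviour described above can be modelled in a few lines of Python. This is a simplified sketch, not Software Factory's real gate implementation: `passes` stands in for the validation jobs, and a failing patch is simply evicted while the patches behind it are retested on the surviving chain.

```python
def gate(master, queue, passes):
    """Speculative gate sketch: each queued patch is tested on top of
    master plus every patch ahead of it in the queue. `passes(state)`
    represents the validation jobs run against that speculative state.
    Returns (merged history, rejected patches)."""
    merged, rejected = list(master), []
    pending = list(queue)
    while pending:
        chain, failed = [], None
        for patch in pending:
            # Test the patch on master + all patches ahead of it.
            if passes(merged + chain + [patch]):
                chain.append(patch)
            else:
                failed = patch
                break
        if failed is None:
            merged += chain          # whole chain validated: merge it all
            pending = []
        else:
            rejected.append(failed)  # evict only the failing patch...
            # ...and retest the survivors, now rebased without it.
            pending = [p for p in pending if p is not failed]
    return merged, rejected
```

For example, with three queued patches where the middle one breaks the build, only that patch is rejected and the other two still land.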

Jobs orchestration in parallel or chained

For developers, a fast CI feedback is critical to avoid context switching. Software Factory can run CI jobs in parallel for any given patch, thus reducing the time it takes to assess the quality of submissions.

Software Factory also supports chaining CI jobs, allowing artifacts from a job to be consumed by the next one (for example, making sure software can be built in a job, then running functional tests on that build in the next job).
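The two orchestration modes can be illustrated with a short Python sketch; the job callables and function names here are placeholders, not a real Software Factory API. Independent jobs run concurrently, while chained jobs pass each result (e.g. a build artifact) to the next stage:

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(patch, parallel_jobs, chained_jobs):
    """Run `parallel_jobs` concurrently against the patch, then run
    `chained_jobs` sequentially, feeding each job's output (artifact)
    into the next one."""
    # Parallel mode: every job gets the patch; results are collected together.
    with ThreadPoolExecutor() as pool:
        parallel_results = list(pool.map(lambda job: job(patch), parallel_jobs))
    # Chained mode: each job consumes the previous job's artifact.
    artifact = patch
    for job in chained_jobs:
        artifact = job(artifact)
    return parallel_results, artifact
```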

Complete control over job environments

One of the most common and annoying problems in Continuous Integration is dealing with job flakiness, or instability, meaning that successive runs of the same job in the same conditions might not have the same results. This is often due to running these jobs on the same long-lived worker nodes, which is prone to interference, especially if the environment is not correctly cleaned up between runs.

Despite all this, long-lived nodes are often used because recreating a test environment from scratch might be costly, in terms of time and resources.

The Software Factory CI system brings a simple solution to this problem by managing a pool of virtual machines on an OpenStack cloud, where jobs will be executed. This consists of:

  • Provisioning worker VMs according to developers' specifications
  • Ensuring that a minimal number of VMs is always ready for new jobs
  • Discarding and destroying VMs once a job has run its course
  • Keeping VMs up to date when their image definitions have changed
  • Abiding by cloud usage quotas as defined in Software Factory's configuration

A fresh VM will be selected from the pool as soon as a job must be executed.
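The pool-maintenance rule above (keep a minimum number of idle nodes ready, while respecting the cloud quota) can be sketched as a small Python function; this is an illustrative model under assumed parameters, not the actual pool manager's logic:

```python
def replenish(ready_vms, min_ready, quota, in_use):
    """Return how many fresh VMs to boot so that `min_ready` idle nodes
    are available, without letting ready + in-use VMs exceed `quota`."""
    wanted = max(0, min_ready - ready_vms)           # shortfall of idle nodes
    headroom = max(0, quota - (ready_vms + in_use))  # room left under quota
    return min(wanted, headroom)
```

For instance, with 2 idle VMs, a minimum of 5, a quota of 10 and 6 VMs busy running jobs, only 2 more can be booted before hitting the quota.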

This approach has several advantages:

  • The flakiness due to environment decay is completely eliminated
  • Great flexibility and liberty in the way you define your images
  • CI costs can be decreased by using computing resources only when needed

On top of this, Software Factory can support multiple OpenStack cloud providers at the same time, improving service availability via failover.

User-driven platform configuration, and configuration as code

In lots of organizations, development teams rely on operational teams to manage their resources, like adding contributors with the correct access credentials to a project, or provisioning a new test node. This can introduce long delays, for reasons ranging from legal constraints to human error, and be very frustrating for developers.

With Software Factory, everything can be managed by development teams themselves.

Software Factory's general configuration is stored in a Git repository, aptly named config, where the following resources among others can be defined:

  • Git projects/repositories/ACLs
  • job running VM images and provisioning
  • validation jobs and gating strategies

Users of a Software Factory instance can propose, review and approve (with the adequate accreditations) configuration changes that are validated by a built-in CI workflow. When a configuration change is approved and merged the platform re-configures itself automatically, and the configuration change is immediately applied.

This simplifies the work of the platform operator, and gives more freedom, flexibility and trust to the community of users when it comes to managing projects and resources like job running VMs.

Other features included in Software Factory

As we said at the beginning of this article, Software Factory is a complete development forge. Here is a list of its main features:

  • Code hosting
  • Code review
  • Issue tracker
  • Continuous Integration, Delivery and Deployment
  • Job logs storage
  • Job logs search engine
  • Notification system over IRC
  • Git repository statistics and reporting
  • Collaboration tools: etherpad, pasteboard, voip server

To sum up

Here is how Software Factory can improve your team's productivity. Software Factory will:

  • help ensure that your code base is always healthy, buildable and deployable
  • ease merging patches into the master branch
  • ease managing projects scattered across multiple Git repositories
  • improve community building and knowledge sharing on the code base
  • help reduce test flakiness
  • give developers more freedom towards their test environments
  • simplify projects creation and configuration
  • simplify switching between services (code review, CI, …) thanks to its navigation panel

Software Factory is also operators friendly:

  • 100% open source and self-hosted, i.e. no vendor lock-in or dependency on external providers
  • based on CentOS 7, benefiting from the stability of this Linux distribution
  • fast and easy to deploy and upgrade
  • simple configuration and documentation

This article is the first in a series that will introduce the components of Software Factory and explain how they provide those features.

In the meantime you may want to check out softwarefactory-project.io and review.rdoproject.org (both are Software Factory deployments) to explore the features we discussed in this article.

Thank you for reading and stay tuned !

by Software Factory Team at June 23, 2017 08:30 AM

June 22, 2017

NFVPE @ Red Hat

Let’s create a workflow for writing CNI plugins (including writing your first CNI plugin!)

In this tutorial, we’re going to write a CNI plugin, that is a “container network interface” plugin, which in this case we’ll specifically use in Kubernetes. A CNI plugin executes on start & stop of a container, and you use it to, generally, modify the infra container’s network namespace in order to configure networking for the pod. We can use this to customize how we set up networking. Today, we’ll both write a simple Go application to say “Hello, world!” to CNI to inspect how it works a little bit, and we’ll follow that up by looking at my CNI plugin Ratchet CNI (an implementation of koko in CNI) a little bit to grok the development workflow.
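The CNI contract itself is language-agnostic, so even though the post's examples are in Go, a hedged "hello world" sketch in Python shows the shape of the protocol: the container runtime executes the plugin binary, passes CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS and CNI_IFNAME in the environment, feeds the network config as JSON on stdin, and expects a result JSON on stdout. The `handle` function and its return values here are illustrative, not a complete implementation of the spec.

```python
# Sketch of the CNI plugin protocol: dispatch on CNI_COMMAND and
# return a result dict that would be serialized to stdout as JSON.

def handle(command, netconf):
    """command is the CNI_COMMAND env var; netconf is the parsed stdin JSON."""
    if command == "ADD":
        # A real plugin would configure an interface inside CNI_NETNS here.
        return {"cniVersion": netconf.get("cniVersion", "0.3.1"), "ips": []}
    if command == "DEL":
        # Undo whatever ADD configured; nothing to undo in this sketch.
        return {}
    raise ValueError(f"unsupported CNI_COMMAND: {command}")

# Wired up as a real plugin, the entry point would be roughly:
#   result = handle(os.environ["CNI_COMMAND"], json.load(sys.stdin))
#   json.dump(result, sys.stdout)
print(handle("ADD", {"cniVersion": "0.3.1", "name": "mynet"}))
# → {'cniVersion': '0.3.1', 'ips': []}
```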

by Doug Smith at June 22, 2017 10:30 PM

OpenStack Superuser

What makes OpenStack relevant in a container-driven world

Much has been said about the relationship between OpenStack and containers.

Thierry Carrez, OpenStack’s VP of engineering, provides his take with a nod to characters of the hit TV show “Silicon Valley.”  You can catch him talking more on this topic (and ask him questions!) at upcoming events, including OW2 and OpenStack Days in Taiwan, Japan, China, U.K. and Nordic region. Below is an edited transcript of his recent talk at the Boston Summit.

When a new technology appears, it creates confusion as people try to wrap their heads around it. OpenStack was becoming mainstream in 2011, then it was containers in 2014, Kubernetes in 2016 and every single time the same thing happens. People think the new technology is set to replace everything else —as if a single technology can solve all of the world’s problems. The rise of containers and container orchestration systems has created a lot of confusion, especially with respect to OpenStack, the previous “hot” technology.

“If containers are replacing VMs and OpenStack is VM-centric does that mean OpenStack is not relevant anymore?” You also might ask why deploy OpenStack, if all you want to do is cloud-native? Or, for people who actually understand that those aren’t different technologies, there’s still plenty of confusion about whether OpenStack runs containers, or is it running on containers, etc.

These aren’t hypothetical questions. They are questions I hear all the time.  To clarify the situation, I’d like to introduce a set of personas. Personas are a tool used in user experience studies — they put a face and profile on a typical user and describe his or her life to highlight their needs.

The Cast

Here are three — slightly overlapping — personas:

The first one, let’s call him Dinesh, is the application developer. Dinesh writes the applications that run your business. He cares mostly about speed: Speed to design, speed to develop, speed to deploy, speed to market. He also likes to use the latest tools, because they make him more efficient and keep him relevant on the job market. Today, Dinesh is looking into 12-factor applications and serverless technologies.
Dinesh doesn’t want to bother too much about infrastructure. He doesn’t want to deploy any servers, doesn’t want any difference between his development environment, his test environment and his production environments that could introduce bugs in his applications. Finally, Dinesh does not obsess too much about cost or vendor lock-in. He loves Amazon Web Services (AWS), it’s very convenient and he’s happy with it.

The second persona is Bertram, the application operator. Bertram handles the deployment, monitoring and scaling of the apps that Dinesh writes. (These personas may overlap.) In some companies, the Dineshs and Bertrams share the same desks and offices. But in a lot of companies they are slightly different roles, with different priorities — Bertram cares more about performance and reliability than about speed, for example. He’s the one on call, so he wants solid and proven tools because he doesn’t want to be caught at 10 p.m. on a Saturday because something caught fire in production. Bertram doesn’t want to micromanage infrastructure, but he still wants to look under the hood to understand how it works, and to understand enough to select the right technology. Finally, Bertram is concerned about lock-in, because he likes to pick the best technical tool, and being locked in reduces his options.

Our third persona is Erlich, the infrastructure provider — even in a serverless world, someone has to rack servers and that’s his job. Erlich might be operating a public cloud infrastructure and offering infrastructure resources to anyone around the world with a credit card, or he might be operating private cloud infrastructure and offering infrastructure services to people in a given organization; that doesn’t change his role much. Erlich doesn’t want to bother too much about specific workloads; he wants to provide generic programmable infrastructure so Dinesh and Bertram can do their jobs. His primary metric is cost. Erlich also cares about evolution — the ability to change his systems so they’re relevant for what comes along next.

The tech

So that’s our cast; now let’s talk about the tech: containers, Kubernetes and OpenStack. Containers are, at heart, a packaging format. They’re a convenient way to package applications with libraries and dependencies. They also offer pretty nifty deployment tooling, which you can use to deploy applications in relatively isolated environments. Docker’s contribution was bundling those namespace and control group kernel technologies together with that convenient tooling to make them accessible to everyone. With the success of Docker, we’ve also seen the rise of application marketplaces as more and more companies publish their applications under containerized formats. All of this makes containers extremely appealing to Dinesh.

Kubernetes is one abstraction level up from containers. It’s a way to describe your application using groups of containers, the role they have in the application, how to scale them, deploy them and maintain them semi-automatically. In a way, it’s a deployment platform for containerized applications. It’s great because it captures operational best practices from Google’s experience and embeds it in the way you describe those resources. It’s also good at managing application life cycle and scaling — scale up and down on demand, and handle things like rolling upgrades, etc. … Bertram loves Kubernetes because it encapsulates operational best practices and it’s also pretty solid. It’s open source, so he can look under the hood. It can be run on public or private clouds, so he’s basically free from lock-in. Containers are great for Dinesh, Kubernetes is great for Bertram, but neither of these solves problems for Erlich.

Erlich wants to provide programmable infrastructure for the Dineshs and Bertrams of the world. He has two choices: specific infrastructure or open infrastructure. Specific infrastructure works if Erlich is dead sure that everything Dinesh and Bertram will ever want is containers and a container orchestration system like Kubernetes, and that they’ll use that combination forever. If that’s the case, there’s little value in unnecessarily deploying on top of OpenStack resources: he can deploy a Kubernetes cluster on bare metal servers directly.

Open infrastructure is for when you want options. Bertram and Dinesh need access to containers and container orchestration systems, plus access to VMs, bare metal machines, Mesos clusters and Docker Swarm clusters, and those options must come with shared networking and storage. For example, if they want to combine VMs, containers and other things, those have to be able to communicate and to store and access data. You also want to provide advanced services, like object storage or database-as-a-service. These make Dinesh more efficient, since he doesn’t have to reinvent object storage, and Bertram more efficient, because he doesn’t have to micromanage databases.

You also want multi-tenancy and interoperability. Beyond that, you want burst capacity to a public cloud that can absorb it. You want scaling to millions of CPU cores. You want seamless operations — things like common log file formats or common configuration file formats. And you want whatever comes next. The framework you deploy must be capable of integrating with the technology that Dinesh and Bertram will want tomorrow. You don’t want to reinvent and reinstall everything for the next hot thing.

That’s what OpenStack provides. OpenStack’s goal is to give infrastructure providers a way to answer the needs of application developers and application operators. That means programmable infrastructure, VMs, bare metal machines, containers, container-orchestration engines (not just Kubernetes, but support for others, too), open infrastructure, the ability to plug-in services, interoperable infrastructure, compatible clouds that you can burst to and future-proof infrastructure.

You want the promise that the framework you’re deploying today will still be relevant tomorrow when new technology comes along — technology that today we don’t know what it will be — and you’ll be able to integrate it. Because, make no mistake, there will be something else.  Kubernetes and containers are not the end of infrastructure technology. OpenStack is an integration engine that will be able to reuse the same framework tomorrow.

How they work together

Here are practical examples. The first is raw resources.

To provide access to raw resources, VMs or bare metal machines, on which Kubernetes and then containerized applications can be deployed, you deploy a stack with Keystone for authentication, Cinder for block storage, Neutron for networking, Glance to store the disk images and Nova to provide the VMs you get those basic resources from.

If you want bare metal machines, then you would also deploy Ironic to drive access to those bare metal machines. This is OpenStack’s most basic use case: You provide raw infrastructures and it’s someone else’s job to deploy Kubernetes on top of it.

Now, if you want to have a Kubernetes cluster deployed directly, it’s more of a container-orchestration-engine-as-a-service. In that case, you’d directly get Kubernetes without having to deploy it yourself. You’d deploy the same stack with two more projects: Heat for orchestration and Magnum to provide the container orchestration system as a service.

But at that point you might say, “Well, I just want to run a container, why are you deploying this Kubernetes cluster for me?” or “There’s this thing on Docker Hub and I just want to run it. How do I do that? Do I have to instantiate to VM and then install Docker on it, and then run whatever command on whatever?”

OpenStack has a solution for that. Essentially, you want OpenStack to absorb your container and run it. For that we have a container management service project called Zun. It lets you run any container and will provision a bare metal machine through Nova and Ironic, run it for you, then kill it when you’re done. It’s really as simple as “Zun create” the name of the container on Docker Hub. And all those options are backed with shared networking and storage.

The last example: let’s say what you want to deploy is Kubernetes itself, but Kubernetes also needs identity management, access to block storage and access to networking, and you might want to leverage all the drivers and plugins that we’ve developed in the OpenStack community and give it access to them.

So to make sure Kubernetes doesn’t reinvent the wheel and leverage those projects to provide those functionalities, there’s a new project called Stackube, which has filed to become an official OpenStack project.

It bridges Kubernetes and provides plugins to Keystone, Cinder and Neutron. It’s also a truly multi-tenant Kubernetes installation using Hyper.sh technology to properly isolate the tenants — basically a Kubernetes multi-tenant distribution that reuses a number of OpenStack components.

How Erlich becomes Bertram

Now that you see how it all fits together, let’s switch things up. Why do people say we can run OpenStack on Kubernetes, then? At this point you probably realize that OpenStack is a complex application… The deployment is very complicated because of all the moving parts, and upgrades are difficult.

The idea of a deployment substrate to handle the complexity of upgrades and orchestration of the OpenStack application isn’t a new one. People have been running OpenStack on top of OpenStack with the TripleO project, for example. So we would run an OpenStack undercloud and use it to deploy the rest of the user accessible OpenStack overcloud instance.

Now if we get back to the technologies of what containers are — a packaging format, deployment tooling — it sounds like it could be useful to deploy OpenStack. You could use containers as a packaging format rather than relying on these 12 packages and use that convenient deployment tooling to simplify the deployment of OpenStack.

Those OpenStack packages can be published in a containerized format. A number of projects are exploring that space, including OpenStack-Ansible and the original Kolla… So, a deployment platform for containerized apps that encapsulates operational best practices and manages application lifecycle and scaling can be useful to deploy, upgrade and maintain that OpenStack application. And, especially to simplify scaling and rolling upgrades, we could run OpenStack on a Kubernetes substrate — something that a number of projects are exploring.

There’s also the OpenStack-Helm project, a new initiative to produce a collection of Helm charts to deploy OpenStack that you can deploy with the Helm client onto a Kubernetes substrate. (These are two slightly different approaches for the same problem, which is leveraging Kubernetes to deploy the OpenStack application.)

In conclusion: containers are a packaging format with good tooling, ensuring the needs of application developers. Kubernetes is a best-practice application deployment system ensuring the needs of app operators. OpenStack is an infrastructure framework enabling all sorts of infrastructure solutions, ensuring the needs of infrastructure providers.

Containers can be aligned with OpenStack, providing infrastructure, allowing them to share networking and storage with other types of computer resources in rich environments. Kubernetes clusters can be deployed manually, or through a provisioning API on OpenStack resources, giving their pods the same benefits of shared infrastructure.

Finally, OpenStack operators can leverage container and Kubernetes technologies to facilitate their deployment and management of OpenStack itself. They are different, complementary technologies.

Catch his entire 25-minute talk below.


The post What makes OpenStack relevant in a container-driven world appeared first on OpenStack Superuser.

by Superuser at June 22, 2017 12:07 PM

James Page

Ubuntu OpenStack Pike Milestone 2

The Ubuntu OpenStack team is pleased to announce the general availability of the OpenStack Pike b2 milestone in Ubuntu 17.10 and for Ubuntu 16.04 LTS via the Ubuntu Cloud Archive.

Ubuntu 16.04 LTS

You can enable the Ubuntu Cloud Archive for OpenStack Pike on Ubuntu 16.04 LTS installations by running the following commands:

sudo add-apt-repository cloud-archive:pike
sudo apt update

The Ubuntu Cloud Archive for Pike includes updates for Barbican, Ceilometer, Cinder, Congress, Designate, Glance, Heat, Horizon, Ironic, Keystone, Manila, Murano, Neutron, Neutron FWaaS, Neutron LBaaS, Neutron VPNaaS, Neutron Dynamic Routing, Networking OVN, Networking ODL, Networking BGPVPN, Networking Bagpipe, Networking SFC, Nova, Sahara, Senlin, Trove, Swift, Mistral, Zaqar, Watcher, Rally and Tempest.

We’ve also now included GlusterFS 3.10.3 in the Ubuntu Cloud Archive in order to provide new stable releases back to Ubuntu 16.04 LTS users in the context of OpenStack.

You can see the full list of packages and versions here.

Ubuntu 17.10

No extra steps required; just start installing OpenStack!

Branch Package Builds

If you want to try out the latest master branch updates, or updates to stable branches, we are maintaining continuous integrated packages in the following PPA’s:

sudo add-apt-repository ppa:openstack-ubuntu-testing/newton
sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata
sudo add-apt-repository ppa:openstack-ubuntu-testing/pike

Bear in mind these are built per commit (with checks for new commits every 30 minutes at the moment), so your mileage may vary from time to time.

Reporting bugs

Any issues please report bugs using the ‘ubuntu-bug’ tool:

sudo ubuntu-bug nova-conductor

This will ensure that bugs get logged in the right place in Launchpad.

Still to come…

In terms of general expectation for the OpenStack Pike release in August we’ll be aiming to include Ceph Luminous (the next stable Ceph release) and Open vSwitch 2.8.0 so long as the release schedule timing between projects works out OK.

And finally – if you’re interested in the general stats – Pike b2 involved 77 package uploads, including 4 new packages for new Python module dependencies!

Thanks and have fun!


by JavaCruft at June 22, 2017 10:00 AM

June 21, 2017

OpenStack Superuser

Certified OpenStack Administrator exam upgrades to Newton

The OpenStack Foundation has launched version two of the Certified OpenStack Administrator (COA) exam, the only vendor-independent OpenStack exam offered by the OpenStack Foundation and its training partners. Since the exam launched last year, over 1,000 people have taken the COA, with hundreds more scheduled to take their exams in the upcoming months.

The COA exam is a virtual, skills-based exam that can be taken anywhere in the world on either Ubuntu or SUSE. Version two of the exam, upgraded to OpenStack Newton, went live June 21 and offers new features for an improved user experience.

New features include a rapid question navigation menu and relocation of all exam resources and openrc files to the user’s home directory for simplified access. Test takers will also see more messaging encouraging them to use the OpenStack Documentation; after all, part of administration is navigating documentation to solve problems! And finally, given the global breadth of the OpenStack community and COA test takers—from Nigeria to China to Canada to Sweden to Brazil—improved exam speed and simplified question formats will provide a better test taking experience no matter where in the world testers are.

At the OpenStack Summit Boston, Jason Vervlied, a cloud engineer at Ultimate Software, said that the COA is “an invaluable resource. It shows who’s ready and who isn’t.” While openstack.org/coa recommends a minimum of six months of OpenStack experience, longtime Stackers are also taking the exam to test their own abilities. “In a skills-based test like the COA, you can’t guess; you have to have the knowledge to back it up,” said Amy Marrich, an OpenStack trainer with the Linux Academy.

Matt McEuen, an associate director at AT&T, emphasized the importance of the new features and upgrade. While not all deployments run Newton, the release update is an important part of the exam’s value. “We’re really eager to have our engineering staff be fluent in the state of the art of OpenStack. We don’t want them, in their own skills, to be a version or two behind; we want them to be up to date.”

For newcomers, be it new to OpenStack or students just starting careers, the COA has set them apart. “Coming out of college and not having a lot of cloud experience, preparing for the exam and passing it helped me be successful in my job,” Rick Bartra, a developer at AT&T, shared.

“For folks interested in a job, it really sets you apart,” explained McEuen. “One of the things that is most difficult to understand in an interview is someone’s drive and motivation. A certification like this shows that you are willing to go above and beyond. Even beyond the skills that it proves you have—which are incredibly valuable in the employees we are looking for—it also shows that the candidate is really driven to figure out what they need to and be self motivated.”

In the year since its inception, the COA has proven its value across roles. For developers, it’s the “real need to understand how the code that they’re writing fits into the big picture…to be a well-rounded engineer,” said McEuen. Bartra followed with an example: “Just in troubleshooting, we get production defects, and having taken the exam and having a more broad view of OpenStack, the exam prepared me for this.”

Ready to show your OpenStack expertise? Learn more about the COA exam, find a training partner to help you prepare or register for an exam at openstack.org/coa.

Cover Photo // CC BY NC

The post Certified OpenStack Administrator exam upgrades to Newton appeared first on OpenStack Superuser.

by Anne Bertucio at June 21, 2017 02:06 PM

StackHPC Team Blog

HPC Networking in OpenStack: Part 1

This post is the first in a series on HPC networking in OpenStack. In the series we'll discuss StackHPC's current and future work on integrating OpenStack with high performance network technologies. This post sets the scene and the varied networking capabilities of one of our recent OpenStack deployments, the Performance Prototype Platform (P3), built for the Square Kilometre Array (SKA) telescope's Science Data Processor (SDP).

(Not Too) Distant Cousins

There are many similarities between the cloud and HPC worlds, driving the adoption of OpenStack for scientific computing. Viewed from a networking perspective however, HPC clusters and modern cloud infrastructure can seem worlds apart.

OpenStack clouds tend to rely on overlay network technologies such as GRE and VXLAN tunnels to provide separation between tenants. These are often implemented in software, running atop a statically configured physical Ethernet fabric. Conversely, HPC clusters may feature a variety of physical networks, potentially including technologies such as Infiniband and Intel Omni-Path Architecture. Low-overhead access to these networks is crucial, with applications accessing the network directly in bare metal environments or via SR-IOV when running in virtual machines. Performance may be further enhanced by using NICs with support for Remote Direct Memory Access (RDMA).

Background: the SKA and its SDP

The SKA is an awe-inspiring project, to which any short description of ours is unlikely to do justice. Here's what the SKA website has to say:

The Square Kilometre Array (SKA) project is an international effort to build the world’s largest radio telescope, with eventually over a square kilometre (one million square metres) of collecting area. The scale of the SKA represents a huge leap forward in both engineering and research & development towards building and delivering a unique instrument, with the detailed design and preparation now well under way. As one of the largest scientific endeavours in history, the SKA will bring together a wealth of the world’s finest scientists, engineers and policy makers to bring the project to fruition.

The SDP Consortium forms part of the SKA project, aiming to build a supercomputer-scale computing facility to process and store the data generated by the SKA telescope. The data ingested by the SDP is expected to exceed the global Internet traffic per day. Phew!

Artist's impression of SKA dishes in South Africa

The SKA will use around 3000 dishes, each 15 m in diameter. Credit: SKA Organisation

Performance Prototype Platform: a High Performance Melting Pot

The SDP architecture is still being developed, but is expected to incorporate the concept of a compute island, a scalable unit of compute resources and associated network connectivity. The SDP workloads will be partitioned and scheduled across these compute islands.

During its development, a complex project such as the SDP has many variables and unknowns. For the SDP this includes a variety of workloads and an assortment of new hardware and software technologies which are becoming available.

The Performance Prototype Platform (P3) aims to provide a platform that roughly models a single compute island, and allows SDP engineers to evaluate a number of different technologies against the anticipated workloads. P3 provides a variety of interesting compute, storage and network technologies including GPUs, NVMe memory, SSDs, high speed Ethernet and Infiniband.

OpenStack offers a compelling solution for managing the diverse infrastructure in the P3 system, and StackHPC is proud to have built an OpenStack management plane that allows the SDP team to get the most out of the system. The compute plane is managed as a bare metal compute resource using Ironic. The Magnum and Sahara services allow the SDP team to explore workloads based on container and data processing technologies, taking advantage of the native performance provided by bare metal compute.

How Many Networks?

The P3 system features multiple physical networks with different properties:

  • 1GbE out of band management network for BMC management
  • 10GbE control and provisioning network for bare metal provisioning, private workload communication and external network access
  • 25/100GbE Bulk Data Network (BDN)
  • 100Gbit/s EDR Infiniband Low Latency Network (LLN)
Physical networks in the deployment

On this physical topology we provision a set of static VLANs for the control plane and external network access, and dynamic VLANs for use by workloads. Neutron manages the control/provisioning network switches, but due to current limitations in ironic it cannot also manage the BDN or LLN, so these are provided as a shared resource.

The complexity of the networking in the P3 system means that automation is crucial to making the system manageable. With the help of Ansible's network modules, the Kayobe deployment tool is able to configure the physical and virtual networks of the switches and control plane hosts using a declarative YAML format.
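To show the declarative idea in miniature, here is a toy sketch in which networks are described once as data and per-host VLAN sub-interface settings are derived from it. The schema and function below are invented for illustration and are not Kayobe's actual configuration format.

```python
# Hypothetical declarative network definitions: each network gets a
# CIDR and a VLAN ID, and interface configuration is derived from them.
networks = {
    "provision": {"cidr": "10.0.0.0/24", "vlan": 100},
    "bdn":       {"cidr": "10.1.0.0/24", "vlan": 200},
}

def render_interfaces(base_iface, nets):
    """Derive VLAN sub-interface settings from the network definitions."""
    return {
        name: {"interface": f"{base_iface}.{net['vlan']}", "cidr": net["cidr"]}
        for name, net in nets.items()
    }

print(render_interfaces("eth0", networks)["bdn"]["interface"])  # → eth0.200
```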

Ironic's networking capabilities are improving rapidly, adding features such as multi-tenant network isolation and port groups but still have a way to go to reach parity with VMs. In a later post we'll discuss the work being done upstream in ironic by StackHPC to support multiple physical networks.

Next Time

In the next article in this series we'll discuss how the Kayobe project uses Ansible's network modules to define physical and virtual network infrastructure as code.

by Mark Goddard at June 21, 2017 02:00 PM

June 20, 2017

Artom Lifshitz

Virtual device role tagging, better explained

Now that Nova’s device role tagging feature talked about in a previous blog post is getting some real world usage, I’m starting to realise that it’s woefully under-documented and folks are having some misconceptions about what it is and how to use it.

Let’s start with an example. You boot a VM with 4 network interfaces, each for a different purpose, each connected to a different virtual network:

nova boot --nic <public data nic spec> \
          --nic <private data nic spec> \
          --nic <management data nic spec> \
          --nic <Skynet uplink nic spec>

You SSH to your new VM, run ifconfig, and see:

eth0: flags=4163 mtu 1500
  ether 00:00:00:00:00:af txqueuelen 1000 (Ethernet)

eth1: flags=4163 mtu 1500
  inet netmask broadcast
  ether 00:00:00:00:00:01 txqueuelen 1000 (Ethernet)

eth2: flags=4163 mtu 1500
  ether 00:00:00:00:00:3d txqueuelen 1000 (Ethernet)

eth3: flags=4163 mtu 1500
  ether 00:00:00:00:00:5e txqueuelen 1000 (Ethernet)

Great, your 4 network interfaces are there. The second one has an IP address. Therefore, eth1 must be your management interface because that’s the only network you have DHCP running on. However, you can’t tell eth0, eth2, and eth3 apart and you don’t know which one is for public data, which one is for private data, and which one is for the robot apocalypse.

That’s an important point to reiterate: if you have a VM with multiple network interfaces, the order in which they appear in the guest OS does not necessarily reflect the order in which they were given in the server boot request. In our example, the management interface was given second to last, but ends up as the second interface in the guest OS, eth1.

So we’re back to our problem: how do we tell eth0, eth2 and eth3 apart – or more generally, how does the guest OS know which network interface is which if DHCP is not enabled for some of them? This is solved with device role tags.

Let’s go back to our example, and boot the same VM with device role tags applied to the network interfaces:

nova boot --nic <public data nic spec>,tag=public \
          --nic <private data nic spec>,tag=pvt \
          --nic <management nic spec>,tag=mgmt \
          --nic <Skynet uplink nic spec>

We haven’t tagged the Skynet uplink interface; this will be important a bit later.

Booting the VM with tags on the network interfaces lets us know which network interface is which because Nova transmits the tags to the guest operating system. It does so in 2 ways.

The first way is for the guest to query Nova’s metadata API. Let’s SSH into our example VM and query the metadata API with curl:

$ curl

This will give us a big JSON document. We’re looking for the following section:

"devices": [
    "type": "nic",
    "bus": "pci",
    "address": "00:01.0",
    "mac": "00:00:00:00:00:5e",
    "tags": ["public"]
    "type": "nic",
    "bus": "pci",
    "address": "00:02.0",
    "mac": "00:00:00:00:00:01",
    "tags": ["mgmt"]
    "type": "nic",
    "bus": "pci",
    "address": "00:03.0",
    "mac": "00:00:00:00:00:af",
    "tags": ["pvt"]

Each element in the devices array corresponds to one of the network interfaces that we’ve tagged when we booted the VM. Each device element contains our tag, but also other information about the device, such as PCI and MAC addresses. Using those, we can cross-reference with the output of ifconfig and figure out which network interface is which. For instance, we know that the network interface tagged with public has the 00:00:00:00:00:5e MAC address. Therefore, eth3 is the public data interface. Similarly, the interface tagged with pvt has the 00:00:00:00:00:af MAC address. Therefore, eth0 is the private data interface.

You’ve noticed by now that there are only 3 devices in the metadata we’ve received from the metadata API. Remember how we didn’t tag the Skynet uplink interface? Devices that aren’t tagged don’t appear in the metadata. The array is called devices but it should have been more accurately called tagged_devices. It makes sense to only include tagged devices since every other piece of information is already known to the guest OS. Let’s pretend we include the Skynet uplink interface in the devices array:

  "type": "nic",
  "bus": "pci",
  "address": "00:03.0",
  "mac": "00:00:00:00:00:3d"

There is nothing here that we can’t already find out with lspci or ifconfig, and it doesn’t help us in any way.

If the metadata API is not available, the config drive can be used. The JSON document returned by our previous curl command can also be found at openstack/latest/meta_data.json on the config drive.
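
The cross-referencing described above is easy to script. Here’s a minimal sketch (a hypothetical helper, not part of Nova or its clients) that maps each tag to a guest interface name, given the parsed meta_data.json and a MAC-to-interface mapping; on a real guest you’d build that mapping by reading /sys/class/net/*/address:

```python
def tags_by_interface(metadata, mac_to_iface):
    """Return {tag: interface_name} for tagged NICs in the metadata.

    metadata: the parsed meta_data.json document (a dict)
    mac_to_iface: {mac_address: interface_name}, e.g. read from
                  /sys/class/net/*/address on the guest
    """
    result = {}
    for dev in metadata.get("devices", []):
        if dev.get("type") != "nic":
            continue
        # Cross-reference the MAC address from the metadata with the
        # guest's own view of its interfaces.
        iface = mac_to_iface.get(dev.get("mac", "").lower())
        if iface:
            for tag in dev.get("tags", []):
                result[tag] = iface
    return result

# Using the values from this post's example:
metadata = {
    "devices": [
        {"type": "nic", "bus": "pci", "address": "00:01.0",
         "mac": "00:00:00:00:00:5e", "tags": ["public"]},
        {"type": "nic", "bus": "pci", "address": "00:02.0",
         "mac": "00:00:00:00:00:01", "tags": ["mgmt"]},
        {"type": "nic", "bus": "pci", "address": "00:03.0",
         "mac": "00:00:00:00:00:af", "tags": ["pvt"]},
    ]
}
macs = {"00:00:00:00:00:af": "eth0", "00:00:00:00:00:01": "eth1",
        "00:00:00:00:00:3d": "eth2", "00:00:00:00:00:5e": "eth3"}
print(tags_by_interface(metadata, macs))
# {'public': 'eth3', 'mgmt': 'eth1', 'pvt': 'eth0'}
```

Note that eth2, the untagged Skynet uplink, simply never appears in the result, matching the behaviour described above.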

Volumes (the --block-device parameter to the nova boot command) can be tagged in much the same way as network interfaces. The device tagging metadata is slightly different than for network interfaces, but the idea is the same: show the guest OS the device tag, along with other information it can use to figure out which disk the tag applies to. For example, we can boot a VM with two volumes:

nova boot --block-device <catpix volume>,tag=important \
          --block-device <database or whatever>,tag=db

Our devices section would then look something like this:

"devices": [
    "type": "disk",
    "bus": "ide",
    "address": "0:1",
    "tags": ["db"]
    "type": "disk",
    "bus": "ide",
    "address": "1:0",
    "tags": ["important"]

I hope this makes virtual device role tagging clearer. I’m hoping to merge tagged device attachment in Pike. If that happens, a blog post explaining how it works will follow.

by notartom at June 20, 2017 01:33 PM

OpenStack Superuser

How to craft a successful OpenStack Summit proposal

OpenStack Summits are conferences for developers, users and administrators of OpenStack cloud software.

For the upcoming Summit, sessions and tracks are organized into five categories: Business and strategy, architecture and operations, developers, OpenStack Academy and The Forum.

The deadline for proposals is July 14, 2017 at 11:59 p.m. Pacific Time (July 15, 2017 at 6:59 UTC). Find your time zone here. If you’ve applied to speak at the Summit before, take note that there are some new rules for 2017. At the Sydney Summit, the new high-level themes are: business and strategy, careers in the cloud, technical presentations, and forum and collaborative sessions. You can also submit your talk in tracks ranging from architecture to working groups — see the complete list here.

The OpenStack Foundation typically receives more than 1,500 submissions for the OpenStack Summit. Proposals go from an idea on the back of a napkin to center stage in a few steps. After you’ve submitted the proposal, the OpenStack community reviews and votes on all of them.

For each track, a group of subject matter experts examines the votes and orchestrates them into the final sessions. Track chairs can see where the votes come from, so if a company stuffs the virtual ballot box to bolster a pitch, they can correct that imbalance. They also keep an eye out for duplicate ideas, often combining them into panel discussions.

Standing tall in the room session featuring (from left to right) Beth Cohen, Nalee Jang, Shilla Saebi, Elizabeth K. Joseph, Radha Ratnaparkhi and Rainya Mosher

Find your audience

Rapid growth of the OpenStack community means that many Summit attendees are relative newcomers. At previous Summits, around 50-60 percent were first-time attendees.

Attendee data from the OpenStack Summit Austin

For each of those Summits, developers made up about 20 percent of attendees; product managers, strategists and architects made up roughly another quarter. Users, operators and sys admins were about 20 percent; CEOs, business development and marketers about 20 percent, with an “other” category coming in under 10 percent.

“Don’t make knowledge assumptions,” says Anne Gentle, who works at Cisco on OpenStack documentation and has powered through 13 Summits to date. But you don’t have to propose a talk for beginners, she adds, “be ready to tackle something deeply technical, don’t limit yourself.”

Consider the larger community, too. Your talk doesn’t necessarily have to be about code, says Niki Acosta of Cisco, adding that recent summit panels have explored gender diversity, community building and startup culture.

Set yourself up for success

There are some basic guidelines for getting your pitch noticed: use an informative title (catchy, but not cute — more below), set out a problem and a learning objective in the description, match the format of your talk to a type of session (hands-on, case study), make sure the outline mirrors what you can actually cover in the time allotted and, lastly, show your knowledge about the topic.

Be relevant

Remember that you’re pitching for an OpenStack Summit, not a sales meeting or embarking on a public relations tour. Be honest about who you work for and push your pitch beyond corporate puffery.

Diane Mueller, who works at Red Hat on OpenShift, spells it out this way. “I have corporate masters and we have agendas about getting visibility for our projects and the work we’re doing. But the Summit is all about OpenStack.” Instead of saying “give me an hour to talk about platform-as-a-service,” highlight an aspect of your business that directly relates to OpenStack. “It may be about how you deploy Heat or Docker,” she adds, but it’s not a vendor pitch.

While you want to keep the volume on corporate-speak low, all three speakers agreed that the bio is the place to get loud. Make sure to highlight your knowledge of OpenStack and any contributions you’ve made to the community. “Contributors get respect and priority,” Mueller says. “So whatever you’ve done — organizing, documentation, Q/A, volunteering at events — make sure you mention it.”

Be clear and complete

State your intent clearly in the abstract, title and description. The abstract should highlight what the “attendee will gain, rather than what you’re going to say,” Acosta says. “Focus on the voter and the attendee rather than making it all about you.” If English is your second language, proofread closely before submitting. If you’re struggling with the writing, make sure to add links for background, complete your bio and upload a photo.

Gentle notes that although the team regularly gets pitches from around the world and works with speakers whose native tongue isn’t English, making your proposal as clear as possible goes a long way to getting it accepted. For examples, check out the sample proposals at O’Reilly.

“I’ve read some really bad abstracts,” says Mueller. “The worst ones are just one line that says, ‘I’m going to talk about updates to Project X.’”

Nervous? Don’t fly solo

If you’ve got great ideas for a talk but hate the thought of standing up alone in front of an audience, there are a few workarounds. Try finding a co-presenter, bringing a customer or putting together a panel.

“Reach out to people who have the same role as you do at different companies,” says Acosta. “There’s nothing more exciting than a panel with competitors who have drastically different methodologies and strategies.”

Toot your own horn

Make your title captivating — but not too cute — and social-media ready. Voting for your proposal and attendance at your session often depend on the strength of the title.

“Tweet early, tweet often,” says Gentle. “I always get a little nervous around voting time, that’s natural. But trust in the process.”

Start stumping for your proposal as soon as you submit it. Your boss, the PR team and product manager should all be on board; letting your company know early may be key to getting travel approved. Network with your peers to get the word out, too. Finally, remember to vote for yourself. You don’t want to miss out by just one vote.

And, if you don’t get accepted this time, keep trying.

The rate of rejection is “quite high,” Acosta admits. “Don’t be discouraged. It doesn’t mean that your session topic wasn’t good. It just means that yours didn’t make it this time.”

Photos: lead CC-licensed, thanks M1ke-Skydive on Flickr; Standing tall in the room session at the Vancouver Summit courtesy of the OpenStack Foundation.

The post How to craft a successful OpenStack Summit proposal appeared first on OpenStack Superuser.

by Superuser at June 20, 2017 11:02 AM

Mark McLoughlin

June 20th OpenStack Foundation Board Meeting

The OpenStack Foundation Board of Directors met for a two hour conference call on Tuesday. The usual disclaimer applies - this my informal recollection of the meeting. It’s not an official record. Jonathan Bryce has posted an official summary of the meeting.

Interoperability Working Group Update

After the usual formalities of a roll call and approving the previous meeting minutes, the board heard from Egle Sigler, Mark Voelker, and Chris Hoge of the Interoperability Working Group.

The topics of the discussion are laid out in the working group's board report with Egle covering the upcoming 2017.08 guideline, Mark covering the proposal for extension programs, and Chris covering version 2.0 of the interop schema.

The extension programs proposal resulted in the most discussion. Mark described how the proposal explains how the OpenStack Powered trademark programs work today, the history of those programs, and how the additional programs would work.

The first type of program is a "vertical" program - examples given include "OpenStack Powered for NFV" or "OpenStack Powered for Containers". These would add requirements for additional capabilities specific to these use cases, provided those capabilities are already widely deployed in the context of those use cases.

The second type of program is an "add-on" program - for example, "OpenStack Powered DNS". This would require capabilities specific to that service, and ensure interoperability between implementations of a given service. It is anticipated that individual project teams would be responsible for defining the interop requirements.

Anni asked how these programs would relate to the idea of "constellations" as a way of describing OpenStack components, but the working group didn't see any immediate overlap with that idea.

I raised a concern that if obscure projects are free to define add-on programs of their own, it could dilute the value of the OpenStack Powered programs overall. However, it was clarified that, while the individual project teams could define interop requirements, each individual new program would require board approval.

Anni also raised a concern that vertical programs should not be exclusive - i.e. it should be possible for a single product to qualify for all vertical programs at once, so these programs need to not have conflicting requirements. The working group agreed with this, and explained that they had already taken this feedback into account.

Finally, the working group explained that their goal is for the board to approve this framework at our Fall meeting.

Membership Changes

The last topic for the board to consider was some membership changes and applications. Put simply, Canonical wished to move from being a Platinum member to being a Gold member, and Ericsson wished to apply for Canonical's Platinum member slot.

Chris Price presented Ericsson's case for Platinum membership. Interestingly, this was the second time that Ericsson had applied for Platinum membership in the past year. The previous time, at the March 9 board meeting, Ericsson and Huawei applied for the slot left vacant by HPE. Huawei was successful with their application that time around.

Chris explained Ericsson's vision for OpenStack, and how they plan to continue developing and driving forward the OpenStack ecosystem. He also explained Ericsson's role in working with adjacent communities like OpenDaylight and OPNFV, as well as their role in related standards bodies.

Next up, Mark Baker described how Canonical felt that, with several "industry giants" looking to take up Platinum membership, the right thing for a smaller entity like Canonical to do from a community perspective was to take a step back and allow others with greater resources to take their Platinum member slot. However, he also emphasized how OpenStack remains at the core of Canonical's business.

After some brief questions, the board went into executive session and, on return, both applications were approved.

Next Meeting

The board's next meeting is a 2 hour conference call on Tuesday, August 22. Our next in-person meeting will be in Denver on Sunday, September 10.

by markmc at June 20, 2017 10:00 AM

June 19, 2017

OpenStack Superuser

Clearing up why fog computing is important

Fog and edge computing score big on the tech jargon bingo boards of today.

SuperuserTV talks to Adrien Lebre, who co-chairs the OpenStack Fog, Edge and Distributed Computing Working Group, about why it’s important and how you can get involved. Lebre is a researcher at Inria, France’s national institute for computer science research and applied mathematics. Below is an edited partial transcript of the interview.

What is fog, edge and massively distributed computing?
Why is it important right now?

During the last couple of years, there’s been a trend towards building mega data centers by companies like Google, Microsoft, and Amazon. The idea is that you build a mega data center with thousands of servers to cope with the demands of cloud computing. Unfortunately, with the new usage trends — internet of things, the tactile internet — these mega data centers cannot satisfy the latency needs of these applications. So, we need to propose a new model that can satisfy all these latency-critical requirements. The idea is to deploy smaller data centers, but closer to the end users, leveraging the internet backbone. Each network point deploys a couple of servers, and those servers can satisfy the needs in terms of computation, storage, etc. for these new kinds of applications.

Can you give us a couple of examples of how it would be used, in layman’s terms?
A typical application: you have your smartphone, and it has some limitations in terms of capabilities. Let’s say you want to watch a TV show in 3D on it; the idea is to use the computation that we provide with this edge infrastructure, so instead of running the app on your smartphone, it runs on the closest edge cloud.

A second example is the medical field — think tele-surgery applications. Say you have a doctor who needs to perform surgery remotely. In this case the latency is critical, so you have to provide cloud computing capabilities much closer to these end users.

What role does OpenStack play?
When we started with this initiative, it was from scratch. We took a white paper and started thinking about what a massively distributed architecture deployed close to the end user would look like…
When we started thinking about developing a proof-of-concept, we discovered it’d require a lot of engineering work: enter OpenStack. Instead of re-inventing the wheel we chose to join the community to leverage the rich ecosystem and see how to build on it for our needs.
We created a dedicated working group — the Fog, Edge and Distributed Computing Group — in OpenStack to gather and federate all the developers working on these issues and with the goal of creating this new operating system.

Catch the entire interview with Lebre below.

Get involved!

Here are some ways you can shape what’s next in fog and edge computing:

  • Sign up to the openstack-dev mailing list and look for posts with “[Massively distributed]” in the subject line.
  • Take part in the bi-monthly meetings on IRC in #openstack-distributed; suggest your agenda items and take part in current discussions (meeting times are set according to participants’ time zones).
  • Share particular use-cases or superuser stories
  • Review specs and provide your input
  • Email Adrien Lebre <adrien.lebreATinria.fr> with your suggestions, questions, …
  • Check out past meetings: https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2016

The post Clearing up why fog computing is important appeared first on OpenStack Superuser.

by Superuser at June 19, 2017 03:15 PM

Red Hat Stack

Back to Boston! A recap of the 2017 OpenStack Summit

This year the OpenStack® Summit returned to Boston, Massachusetts. The Summit was held the week after the annual Red Hat® Summit, which was also held in Boston. The combination of the two events, back to back, made for an intense, exciting and extremely busy few weeks.

More than 5,000 attendees and 1,000 companies were in attendance for OpenStack Summit. Visitors came from over 60 countries and could choose from more than 750 sessions.

And of course all sessions and keynotes are now easily accessible for online viewing at your own leisure.


The Summit proved to be a joyful information overload and I’d like to share with you some of my personal favorite moments.

Keynotes: “Costs Less, Does More.”

As in previous years, the Summit kicked off its first two days with a lengthy set of keynotes. The keynote sessions featured a variety of companies using OpenStack in many different ways, highlighting the “Costs Less, Does More” theme. GE talked about using OpenStack in healthcare for compliance and Verizon discussed their use of Red Hat OpenStack Platform for NFV and edge computing. AT&T and DIRECTV showed how they are using OpenStack to deliver customers a highly interactive and customizable on-demand streaming service.
Throughout the keynotes it became quite clear, to me, that OpenStack is truly moving beyond its “newcomer status” and is now solving a wider range of industry use cases than in the past.

In his keynote, Red Hat’s Chief Technologist Chris Wright discussed Red Hat’s commitment and excitement in being part of the OpenStack community, and he shared some numbers from the recent user survey. Chris also shared an important collaboration between Red Hat, Boston University and the Children’s Hospital working to significantly decrease the time required to process and render 3D fetal imaging using OpenShift, GPUs and machine learning. Watch his keynote to learn more about this important research.

Image courtesy of the OpenStack Foundation

Another interesting keynote reinforcing the “Costs Less, Does More” theme was “The U.S. Army Cyber School: Saving Millions & Achieving Educational Freedom Through OpenStack” by Major Julianna Rodriguez and Captain Christopher W. Apsey. Starting just two short years ago, with almost no hardware, they now use OpenStack to enable their students to solve problems in a “warfare domain.” To do this they require instructors to react quickly to their students’ requirements and implement labs and solutions that reflect the ever-changing and evolving challenges they face in today’s global cyber domain. The school created an “everything as code for courseware” agile solution framework using OpenStack. Instructors can “go from idea, to code, to deployment, to classroom” in less than a day. And the school is able to do this with a significant cost savings, avoiding the “legacy model of using a highly licensed and costly solution.” Both their keynote and their session talk detail a very interesting and unexpected OpenStack solution.



Finally, of particular note and point of pride for those of us in the Red Hat community, we were thrilled to see two of our customers using Red Hat OpenStack Platform share this year’s Superuser Award. Both Paddy Power Betfair and UKCloud transformed their businesses while also contributing back in significant ways to the OpenStack community. We at Red Hat are proud to be partnered with these great, community-minded and leading edge organizations! You can watch the announcement here.

Community Strong!

Another recurring theme was the continuation, strength, and importance of the community behind OpenStack. Red Hat’s CEO Jim Whitehurst touched on this in his fireside chat with OpenStack Foundation Executive Director Jonathan Bryce. Jim and Jonathan discussed how OpenStack has a strong architecture and participation from vendors, users, and enterprises. Jim pointed out that having a strong community, governance structure and culture forms a context for great things to happen, suggesting, “You don’t have to worry about the roadmap; the roadmap will take care of itself.”

Image courtesy of the OpenStack Foundation

As to the state of OpenStack today, and where it is going to be in, say, five years, Jim’s thoughts really do reflect the strength of the community and the positive future of OpenStack. He noted that the OpenStack journey is unpredictable but has reacted well to the demands of the marketplace, and reminded us that “if you build … the right community, the right things will happen.” I think it’s safe to say this community remains on the right track!

The Big Surprise Guest.


There was also a surprise guest that was teased throughout the lead up to the Summit and not revealed until many of us arrived at the venue in the morning: Edward Snowden. Snowden spoke with OpenStack Foundation COO Mark Collier in a wide-ranging and interesting conversation. Topics included Snowden’s view around the importance in assuring the openness of underlying IaaS layers, warning that it is “fundamentally disempowering to sink costs into an infrastructure that you do not fully control.” He also issued a poignant piece of advice to computer scientists proclaiming “this is the atomic moment for computer scientists.”

I think any community that happily embraces a keynote from both the U.S. Army Cyber School and Edward Snowden in the same 24 hour period is very indicative of an incredibly diverse, intelligent and open community and is one I’m proud to be a part of!

So many great sessions!

As mentioned, with over 750 talks there was no way to attend all of them. Between the exciting Marketplace Hall filled with awesome vendor booths, networking and giveaways, and the many events around the convention center, choosing sessions was tough. Reviewing the full list of recorded sessions reveals just how spoiled for choice we were in Boston.

Even more exciting is that with over 60 talks, Red Hat saw its highest speaker participation level of any OpenStack Summit. Red Hatters covered topics across all areas of the technology and business spectrum. Red Hat speakers ranging from engineering all the way to senior management were out in force! Here’s a short sampling of some of the sessions.

Product management

Red Hat Principal Product Manager Steve Gordon’s “Kubernetes and OpenStack at scale” shared performance testing results when running a 2000+ node OpenShift Container Platform cluster running on a 300-node Red Hat OpenStack Platform cloud. He detailed ways to tune and run OpenShift, Kubernetes and OpenStack based on the results of the testing.


Security and User Access

For anyone who has ever wrestled with Keystone access control, or who simply wants to better understand how it works and where it could be going, check out Adam Young, Senior Software Engineer at Red Hat and Kristi Nikolla, Software Engineer with the Massachusetts Open Cloud team at Boston University’s session entitled Per API Role Based Access Control. Adam and Kristi discuss the challenges and limitations in the current Keystone implementation around access control and present their vision of its future in what they describe as “an overview of the mechanism, the method, and the madness of RBAC in OpenStack.” Watch to the end for an interesting Q&A session. For more information on Red Hat and the Massachusetts Open Cloud, check out the case study and press release.

Red Hat Services

Red Hat Services featured talks highlighting real world Red Hat OpenStack Platform installations. In “Don’t Fail at Scale- How to Plan for, Build, and Operate a Successful OpenStack Cloud” David Costakos, OpenStack Solutions Architect, and Julio Villarreal Pelegrino, Principal Architect, lightheartedly brought the audience through the real-world dos and don’ts of an OpenStack deployment.


And in “Red Hat – Best practices in the Deployment of a Network Function Virtualization Infrastructure” Julio and Stephane Lefrere, Cloud Infrastructure Practice Lead, discussed the intricacies and gotchas of one of the most complicated and sophisticated deployments in the OpenStack space: NFV. Don’t miss it!

Red Hat Technical Support

Red Hat Cloud Success Architect Sadique Puthen and Senior Technical Support Engineer Jaison Raju took a deep dive into networks in Mastering in Troubleshooting NFV Issues. Digging into the intricacies and picking apart a complex NFV-based deployment would scare even the best networking professionals, but Sadique and Jaison’s clear and detailed analysis, reflecting their real world customer experiences, is exceptional. I’ve been lucky enough to work with these gentlemen from the field side of the business and I can tell you the level of skills present in the support organization within Red Hat is second to none. Watch the talk and see for yourself, you won’t be disappointed!

Red Hat Management

Red Hat’s Senior Director of Product Marketing Margaret Dawson presented Red Hat – Cloud in the Era of Hybrid-Washing: What’s Real & What’s Not? Margaret’s session digs into the real-world decision making processes required to make the Digital Transformation journey a success. She highlights that “Hybrid Cloud” is not simply two clouds working together but rather a detailed and complex understanding and execution of shared processes across multiple environments.

As you can see, there was no shortage of Red Hat talent speaking at this year’s Summit.

To learn more about how Red Hat can help you in your Digital Transformation journey check out the full “Don’t Fail at Scale” Webinar!

See you in six months in Sydney!

Each year the OpenStack Summit seems to get bigger and better. But this year I really felt it was the beginning of a significant change. The community is clearly ready to move to the next level of OpenStack to meet the increasingly detailed enterprise demands. And with strong initiatives from the Foundation around key areas such as addressing complexity, growing the community through leadership and mentoring programs, and ensuring a strong commitment to diversity, the future is bright.


I’m really excited to see this progress showcased at the next Summit, being held in beautiful Sydney, Australia, November 6-8, 2017! Hope to see you there.

As Jim Whitehurst pointed out in his keynote, having a strong community, governance structure and culture really is propelling OpenStack into the future!

The next few years are going to be really, really exciting!

by August Simonelli, Technical Marketing Manager, Cloud at June 19, 2017 12:36 PM


Recent blog posts, June 19

Using Ansible Validations With Red Hat OpenStack Platform – Part 3 by August Simonelli, Technical Marketing Manager, Cloud

In the previous two blog posts (Part 1 and Part 2) we demonstrated how to create a dynamic Ansible inventory file for a running OpenStack cloud. We then used that inventory to run Ansible-based validations with the ansible-playbook command from the CLI.

Read more at http://redhatstackblog.redhat.com/2017/06/15/using-ansible-validations-with-red-hat-openstack-platform-part-3/

TripleO deep dive session index by Carlos Camacho

This is a brief index with all TripleO deep dive sessions, you can see all videos on the TripleO YouTube channel.

Read more at http://anstack.github.io/blog/2017/06/15/tripleo-deep-dive-session-index.html

TripleO deep dive session #10 (Containers) by Carlos Camacho

This is the 10th release of the TripleO “Deep Dive” sessions

Read more at http://anstack.github.io/blog/2017/06/15/tripleo-deep-dive-session-10.html

OpenStack, Containers, and Logging by Lars Kellogg-Stedman

I've been thinking about logging in the context of OpenStack and containerized service deployments. I'd like to lay out some of my thoughts on this topic and see if people think I am talking crazy or not.

Read more at http://blog.oddbit.com/2017/06/14/openstack-containers-and-logging/

John Trowbridge: TripleO in Ocata by Rich Bowen

John Trowbridge (Trown) talks about his work on TripleO in the OpenStack Ocata period, and what's coming in Pike.

Read more at http://rdoproject.org/blog/2017/06/john-trowbridge-tripleo-in-ocata/

Doug Hellmann: Release management in OpenStack Ocata by Rich Bowen

Doug Hellmann talks about release management in OpenStack Ocata, at the OpenStack PTG in Atlanta.

Read more at http://rdoproject.org/blog/2017/06/doug-hellmann-release-management-in-openstack-ocata/

Using Ansible Validations With Red Hat OpenStack Platform – Part 2 by August Simonelli, Technical Marketing Manager, Cloud

In Part 1 we demonstrated how to set up a Red Hat OpenStack Ansible environment by creating a dynamic Ansible inventory file (check it out if you’ve not read it yet!).

Read more at http://redhatstackblog.redhat.com/2017/06/12/using-ansible-validations-with-red-hat-openstack-platform-part-2/

by Rich Bowen at June 19, 2017 09:44 AM

June 16, 2017

OpenStack Blog

OpenStack Developer Mailing list Digest June 10-16


  • TC report 24 by Chris Dent [1]
  • Release countdown for week R-10 and R-9, June 16-30 by Thierry [2]
  • TC Status Update by Thierry [3]

Making Fuel a Hosted Project

  • Fuel originated from Mirantis as their OpenStack installer.
  • Approved as an official OpenStack project November 2015.
  • The goal was to get others involved to make one generic OpenStack installer.
  • In Mitaka and Newton it represented more commits than Nova.
  • While the Fuel team embraced open collaboration, it failed to attract other organizations.
  • Since October 2016, activity from Fuel’s main sponsor has dropped.
    • 68% drop between 2016 and 2017.
    • Project hasn’t held a meeting for three months.
    • Activity dropped from ~990 commits/month (April 2016, August 2016) to 52 commits in April 2017 and 25 commits May 2017.
  • Full thread: 4

Moving Away from “big tent” Terminology

  • Back in 2014 our integrated release was not really integrated, too big to be installed by everyone, yet too small to accommodate the growing interest in other forms of “open infrastructure”.
  • Incubation process created catch-22’s.
  • Project structure reform 4 discussions switched us to a simpler model: project teams would be approved based on how well they’d fit the overall OpenStack mission and community principles rather than maturity.
    • Nicknamed the big tent 5
  • It ended up mostly creating confusion due to various events and mixed messages which we’re still struggling with today.
  • This was discussed during a TC office hour in channel openstack-tc 6
  • There is still no agreement on how to distinguish official and unofficial projects. The feedback in this thread will be used to assist the TC+UC+Board sub group on better communicating what is OpenStack.
  • Full thread: 7


[1] – http://lists.openstack.org/pipermail/openstack-dev/2017-June/118314.html

[2] – http://lists.openstack.org/pipermail/openstack-dev/2017-June/118476.html

[3] – http://lists.openstack.org/pipermail/openstack-dev/2017-June/118480.html

[4] – https://governance.openstack.org/tc/resolutions/20141202-project-structure-reform-spec.html

[5] – http://inaugust.com/posts/big-tent.html

[6] – http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-06-15.log.html#t2017-06-15T13:00:53

[7] – http://lists.openstack.org/pipermail/openstack-dev/2017-June/thread.html#118368

#openstack #openstack-dev-digest

by Mike Perez at June 16, 2017 08:58 PM

OpenStack Superuser

Deploying OpenStack Designate with Kolla

During the Ocata release, OpenStack DNS-as-a-Service (Designate) support was implemented in the Kolla project.

This post will guide you through a basic deployment and tests of Designate.

Install required dependencies and tools for kolla-ansible and Designate.

# yum install -y epel-release
# yum install -y python-pip python-devel libffi-devel gcc openssl-devel ansible ntp wget bind-utils
# pip install -U pip

Install Docker and downgrade to 1.12.6. At the time of writing this post, libvirt had issues connecting to D-Bus due to SELinux issues with Docker 1.13.

# curl -sSL https://get.docker.io | bash
# yum downgrade docker-engine-1.12.6 docker-engine-selinux-1.12.6
# yum install -y python-docker-py

Configure Docker daemon to allow insecure-registry (use the IP where your remote registry will be located).

# mkdir -p /etc/systemd/system/docker.service.d
# tee /etc/systemd/system/docker.service.d/kolla.conf <<-'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --insecure-registry <registry_ip>:4000
EOF

Reload the systemd daemons and start, stop, disable and enable the following services:

# systemctl daemon-reload
# systemctl stop libvirtd
# systemctl disable libvirtd
# systemctl enable ntpd docker
# systemctl start ntpd docker

Download the Ocata registry tarball from tarballs.openstack.org and create a Kolla registry from it.

# wget https://tarballs.openstack.org/kolla/images/centos-binary-registry-ocata.tar.gz
# mkdir /opt/kolla_registry
# sudo tar xzf centos-binary-registry-ocata.tar.gz -C /opt/kolla_registry
# docker run -d -p 4000:5000 --restart=always -v /opt/kolla_registry/:/var/lib/registry --name registry registry:2

Install kolla-ansible.

# pip install kolla-ansible
# cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/kolla/
# cp /usr/share/kolla-ansible/ansible/inventory/* .

Configure the /etc/kolla/globals.yml configuration file with the following content, changing values (IP addresses, interface names) as necessary.
This is a sample minimal configuration:

# vi /etc/kolla/globals.yml
kolla_internal_vip_address: ""
kolla_base_distro: "centos"
kolla_install_type: "binary"
docker_registry: ""
docker_namespace: "lokolla"
network_interface: "enp0s8"
neutron_external_interface: "enp0s9"

Configure the Designate options in globals.yml.
The dns_interface must be reachable over the network from Nova instances if internal DNS resolution is needed.

enable_designate: "yes"
dns_interface: "enp0s8"
designate_backend: "bind9"
designate_ns_record: "sample.openstack.org"

Configure inventory, add the nodes in their respective groups.

# vi ~/multinode
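For reference, here is a minimal sketch of the group layout (the group names come from the stock kolla-ansible multinode inventory; the hostnames are placeholders for your own nodes):

```
[control]
controller01

[network]
controller01

[compute]
compute01

[monitoring]
controller01

[storage]
storage01
```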

Generate passwords.

# kolla-genpwd

Ensure the environment is ready to deploy by running prechecks.
Do not start the deployment until prechecks succeed; fix whatever is necessary first.

# kolla-ansible prechecks -i ~/multinode

Pull the Docker images onto the servers. This step can be skipped because it happens during the deploy step, but doing it first ensures all the nodes have the images they need and minimizes deployment time.

# kolla-ansible pull -i ~/multinode

Deploy kolla-ansible and do a woot for Kolla 😉

# kolla-ansible deploy -i ~/multinode

Create credentials file and source it.

# kolla-ansible post-deploy -i ~/multinode
# source /etc/kolla/admin-openrc.sh

Check that all containers are running and none of them are restarting or exiting.

# docker ps -a --filter status=exited --filter status=restarting
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Install required python clients

# pip install python-openstackclient python-designateclient python-neutronclient

Execute a base OpenStack configuration (public and internal networks, cirros image).
Do not execute this script if custom networks are going to be used.

# sh /usr/share/kolla-ansible/init-runonce

Create a sample Designate zone.

# openstack zone create --email admin@sample.openstack.org sample.openstack.org.
| Field          | Value                                |
| action         | CREATE                               |
| attributes     |                                      |
| created_at     | 2017-02-22T13:14:39.000000           |
| description    | None                                 |
| email          | admin@sample.openstack.org           |
| id             | 4a44b0c9-bd07-4f5c-8908-523f453f269d |
| masters        |                                      |
| name           | sample.openstack.org.                |
| pool_id        | 85d18aec-453e-45ae-9eb3-748841a1da12 |
| project_id     | 937d49af6cfe4ef080a79f9a833d7c7d     |
| serial         | 1487769279                           |
| status         | PENDING                              |
| transferred_at | None                                 |
| ttl            | 3600                                 |
| type           | PRIMARY                              |
| updated_at     | None                                 |
| version        | 1                                    |
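The serial value is worth a note: Designate's default serial appears to be the Unix timestamp of the zone's last change, which is why it lines up with the created_at field above. A quick sketch to confirm:

```python
from datetime import datetime, timezone

# The zone's SOA serial from the output above, interpreted as a Unix timestamp.
serial = 1487769279
print(datetime.fromtimestamp(serial, tz=timezone.utc).isoformat())
# → 2017-02-22T13:14:39+00:00
```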

Configure designate-sink to make use of the previously created zone; sink needs the zone_id to automatically create Neutron and Nova records in Designate.

# mkdir -p /etc/kolla/config/designate/designate-sink/
# vi /etc/kolla/config/designate/designate-sink.conf
[handler:nova_fixed]
zone_id = 4a44b0c9-bd07-4f5c-8908-523f453f269d
[handler:neutron_floatingip]
zone_id = 4a44b0c9-bd07-4f5c-8908-523f453f269d

After configuring designate-sink.conf, reconfigure Designate to apply this configuration.

# kolla-ansible reconfigure -i ~/multinode --tags designate

List networks.

# neutron net-list
| id                                   | name     | tenant_id                        | subnets                                          |
| 3b56c605-5a01-45be-9ed6-e4c3285e4366 | demo-net | 937d49af6cfe4ef080a79f9a833d7c7d | 7f28f050-77b2-426e-b963-35b682077993 |
| 6954d495-fb8c-4b0b-98a9-9672a7f65b7c | public1  | 937d49af6cfe4ef080a79f9a833d7c7d | 9bd9feca-40a7-4e82-b912-e51b726ad746 |

Update the network with a dns_domain.

# neutron net-update 3b56c605-5a01-45be-9ed6-e4c3285e4366 --dns_domain sample.openstack.org.
Updated network: 3b56c605-5a01-45be-9ed6-e4c3285e4366

Ensure dns_domain is properly applied.

# neutron net-show 3b56c605-5a01-45be-9ed6-e4c3285e4366
| Field                     | Value                                |
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2017-02-22T13:13:06Z                 |
| description               |                                      |
| dns_domain                | sample.openstack.org.                |
| id                        | 3b56c605-5a01-45be-9ed6-e4c3285e4366 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1450                                 |
| name                      | demo-net                             |
| port_security_enabled     | True                                 |
| project_id                | 937d49af6cfe4ef080a79f9a833d7c7d     |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 27                                   |
| revision_number           | 6                                    |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 7f28f050-77b2-426e-b963-35b682077993 |
| tags                      |                                      |
| tenant_id                 | 937d49af6cfe4ef080a79f9a833d7c7d     |
| updated_at                | 2017-02-22T13:25:16Z                 |

Create a new instance in the previously updated network.

# openstack server create \
    --image cirros \
    --flavor m1.tiny \
    --key-name mykey \
    --nic net-id=3b56c605-5a01-45be-9ed6-e4c3285e4366 \

Once the instance is “ACTIVE”, check the IP associated.

# openstack server list
| ID                                   | Name  | Status | Networks          | Image Name |
| d483e4ee-58c2-4e1e-9384-85174630428e | demo1 | ACTIVE | demo-net= | cirros     |

List records in the designate-zone.
As you can see there is a record in designate associated with the instance IP:

# openstack recordset list sample.openstack.org.
| id                                   | name                             | type | records                                   | status | action |
| 4f70531e-c325-4ffd-a8d3-8172bd5163b8 | sample.openstack.org.            | SOA  | sample.openstack.org.                     | ACTIVE | NONE   |
|                                      |                                  |      | admin.sample.openstack.org. 1487770304    |        |        |
|                                      |                                  |      | 3586 600 86400 3600                       |        |        |
| a9a09c5f-ccf1-4b52-8400-f36e8faa9549 | sample.openstack.org.            | NS   | sample.openstack.org.                     | ACTIVE | NONE   |
| aa6cd25d-186e-425b-9153-699d8b0811de | 10-0-0-3.sample.openstack.org.   | A    |                                  | ACTIVE | NONE   |
| 713650a5-a45e-470b-9539-74e110b15115 | demo1.None.sample.openstack.org. | A    |                                  | ACTIVE | NONE   |
| 6506e6f6-f535-45eb-9bfb-4ac1f16c5c9b | demo1.sample.openstack.org.      | A    |                                  | ACTIVE | NONE   |
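Note the two naming conventions visible above: designate-sink derives one A record from the instance name plus the dns_domain (demo1.sample.openstack.org.) and another from the fixed IP with the dots replaced by dashes (10-0-0-3.sample.openstack.org.). A rough sketch of the IP-based convention (the helper name is illustrative, not part of Designate):

```python
def ip_record_name(ip, dns_domain):
    """Build the dash-separated A record name seen in the recordset list."""
    return ip.replace(".", "-") + "." + dns_domain

print(ip_record_name("10.0.0.3", "sample.openstack.org."))
# → 10-0-0-3.sample.openstack.org.
```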

Validate that Designate resolves the DNS record.
You can query the Designate mDNS service (port 5354) or the bind9 servers (port 53) directly.

# dig +short -p 5354 @ demo1.sample.openstack.org. A
# dig +short -p 53 @ demo1.sample.openstack.org. A

If you find any issue with Designate in kolla-ansible or kolla, please file a bug at https://bugs.launchpad.net/kolla-ansible/+filebug

Eduardo Gonzalez

This post first appeared on Eduardo Gonzalez’s blog. Superuser is always interested in community content, email: editorATsuperuser.org.

The post Deploying OpenStack Designate with Kolla appeared first on OpenStack Superuser.

by Eduardo Gonzalez at June 16, 2017 11:01 AM

Loïc Dachary

Shrink an OpenStack image

After a while, openstack image create produces increasingly large files because blocks that were used and then freed are not trimmed, and it is not uncommon for hypervisors not to support fstrim. The image can be shrunk, and the virtual machine recreated from it, to reclaim the unused space.

$ openstack image save --file 2017-06-16-gitlab.qcow2 2017-06-16-gitlab
$ qemu-img convert 2017-06-16-gitlab.qcow2 -O raw work.img
$ sudo kpartx -av work.img
add map loop0p1 (252:0): 0 104855519 linear 7:0 2048
$ sudo e2fsck -f /dev/mapper/loop0p1
cloudimg-rootfs: 525796/6400000 files (0.1% non-contiguous), 2967491/13106939 blocks
$ sudo resize2fs -p -M /dev/mapper/loop0p1
The filesystem on /dev/mapper/loop0p1 is now 3190624 (4k) blocks long.
$ sudo kpartx -d work.img
loop deleted : /dev/loop0

Create a smaller image that is big enough for the resized file system.

$ sudo virt-df -h work.img
Filesystem                                Size       Used  Available  Use%
work.img:/dev/sda1                         12G       9.7G       2.0G   83%
$ qemu-img create -f raw smaller.img 13G
Formatting 'smaller.img', fmt=raw size=13958643712
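Why 13G? The new image must be at least as large as the resized filesystem reported by resize2fs (3190624 blocks of 4 KiB, roughly 12.2 GiB), rounded up to a whole number of GiB. A sketch of that arithmetic (the function is illustrative, not part of any tool):

```python
import math

def min_image_gib(fs_blocks, block_size=4096):
    """Smallest whole-GiB image that fits a filesystem of fs_blocks blocks."""
    return math.ceil(fs_blocks * block_size / 2**30)

print(min_image_gib(3190624))
# → 13
```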

Resize the large image into the smaller one:

$ sudo virt-resize --shrink /dev/sda1 work.img smaller.img
Resize operation completed with no errors.  Before deleting the old disk,
carefully check that the resized disk boots and works correctly.
$ ls -lh work.img smaller.img
-rw-r--r-- 1 ubuntu ubuntu 13G Jun 16 08:38 smaller.img
-rw-r--r-- 1 ubuntu ubuntu 50G Jun 16 08:31 work.img
$ qemu-img convert smaller.img -O qcow2 smaller.qcow2

Upload the smaller image:

$ time openstack image create --file smaller.qcow2 \
     --disk-format=qcow2 --container-format=bare 2017-06-16-gitlab-smaller

by Loic Dachary at June 16, 2017 08:49 AM

June 15, 2017


“Not Our Software” Is No Excuse for Forklift Upgrades: CI/CD Using MCP DriveTrain — Q&A

Last week we spoke to Ryan Day about using Continuous Integration/Continuous Deployment (CI/CD) to keep not just your own software, but also externally produced software, up to date.

by Ryan Day at June 15, 2017 05:58 PM

Red Hat Stack

Using Ansible Validations With Red Hat OpenStack Platform – Part 3

In the previous two blogposts (Part 1 and Part 2) we demonstrated how to create a dynamic Ansible inventory file for a running OpenStack cloud. We then used that inventory to run Ansible-based validations with the ansible-playbook command from the CLI.

In the final part of our series, we demonstrate how to run those same validations using two new methods: the OpenStack workflow service, Mistral, and the Red Hat OpenStack director UI.


Method 2: Mistral

Validations can be executed using the OpenStack Mistral unified CLI. Mistral is the workflow service on the director and can be used for everything from calling local scripts, as we are doing here, to launching instances.

You can easily find the available validations using Mistral from the openstack unified CLI. The command returns all the validations loaded on director, which can be a long list. Below we have run the command, but omitted all but the ceilometerdb-size check:

[stack@undercloud ansible]$ openstack action execution run tripleo.validations.list_validations | jq '.result[]'
{
  "name": "Ceilometer Database Size Check",
  "groups": [ ... ],
  "id": "ceilometerdb-size",
  "metadata": {},
  "description": "The undercloud's ceilometer database can grow to a substantial size if metering_time_to_live and event_time_to_live is set to a negative value (infinite limit). This validation checks each setting and fails if variables are set to a negative value or if they have no custom setting (their value is -1 by default).\n"
}

The next step is to execute this workflow by using the “id” value found in the Mistral output:

$ openstack workflow execution create tripleo.validations.v1.run_validation '{"validation_name": "ceilometerdb-size"}'

The example below is what it looks like when run on the director and it contains the final piece of information needed to execute our check:


Look for the “Workflow ID”, and once more run a Mistral command using it:

$ openstack workflow execution output show 4003541b-c52e-4403-b634-4f9987a326e1

The output on the director is below:


As expected, the negative value in metering_time_to_live has triggered the check and the returned output indicates it clearly.

Method 3: The Director GUI

The last way we will run a validation is via the director UI. The validations visible from within the UI depend on what playbooks are present in the /usr/share/openstack-tripleo-validations/validations/ directory on the director. Validations can be added and removed dynamically.

Here is a short (60-second) video which demonstrates adding the ceilometerdb-size validation to the director via the CLI and then running it from the UI:

Pretty cool, right?

Where to from here?

As you write your own validations you can submit them upstream and help grow the community. To learn more about the upstream validations, check out the project repository on GitHub.

And don’t forget, contributing an approved commit to an OpenStack project can gain you Active Technical Contributor (ATC) status for the release cycle. So, not only do you earn wicked OpenStack developer cred, but you may be eligible to attend a Project Teams Gathering (PTG) and receive discounted entry to the OpenStack Summit for that release.

With the availability of Ansible on Red Hat OpenStack Platform you can immediately access the power Ansible brings to IT automation and management. There are more than 20 TripleO validation playbooks supplied with Red Hat OpenStack Platform 11 director, and many more upstream.

Ansible validations are ready now. Try them out. Join the community. Keep your Cloud happy.


That’s the end of our series on Ansible validations. Don’t forget to read Part 1 and Part 2 if you haven’t already.

Thanks for reading!

Further info about Red Hat OpenStack Platform

For more information about Red Hat OpenStack Platform please visit the technology overview page, product documentation, and release notes.

Ready to go deeper with Ansible? Check out the latest collection of Ansible eBooks, including a free samples from every title!

And don’t forget you can evaluate Red Hat OpenStack Platform for free for 60 days!

The “Operationalizing OpenStack” series features real-world tips, advice and experiences from experts running and deploying OpenStack.


by August Simonelli, Technical Marketing Manager, Cloud at June 15, 2017 01:01 PM

OpenStack Superuser

Hanging in cloud city

Seattle is the city of coffee, Kurt Cobain and cloudy skies, so it was a fitting extension of our moody weather that the theme of GeekWire’s inaugural, recent one-day Cloud Tech Summit was “Seattle, Cloud Capital of the World.” While fascinating and crucial things are happening in the industry all over the world––and it’s critical not to lose that global perspective––it’s hard to deny that something cloudy is happening here.

GeekWire is a local tech publication that curated the day’s content, inviting speakers from Amazon Web Services, Microsoft Azure, Google Cloud Platform, Apptio, Cloud Foundry and Docker to take the stage. If you were following the sound bites on Twitter, you might have seen multiple cloud providers claiming the same customer: But wait, I thought that was a GCP user? And weren’t they an AWS user?…Hold on, how do BMW and Adobe Marketing use Azure, but are also OpenStack users?…What is going on here?!

The most powerful take away from the day is that all of those statements can be correct. You can be an AWS user and an OpenStack user. You can be an Azure user and an OpenStack user. It isn’t an exclusive relationship: it’s multi-cloud.

Multi-cloud was the giant elephant in the room, brought up by Apptio CEO Sunny Gupta and by Greg DeMichillie, Google Cloud director of product management in the office of the CTO, and one we could have, and maybe should have, spent all day talking about.

At the Barcelona OpenStack Summit in October 2016, Jonathan Bryce, OpenStack Foundation executive director, said, “The future is multi-cloud.” Months later, that’s not just the future, but what’s already happening.

Keeping an eye on Twitter hashtag #GWCloudTech, I could tell some audience members were confused. Yes, this means there will be multiple cloud providers in the marketplace, but that’s not really what we’re after here. What we’re after is the reality that different workloads belong in different homes for reasons like cost, performance, compliance and different business requirements. Some things need to go public; some things need to go private. What goes where for who is still a major question that organizations are figuring out for themselves, and in open environments like our OpenStack community, they’re happily sharing what they’ve learned.

But while we figure out the optimized “what” and “where,” it’s a matter of fact that none of it can happen without the “how,” and the “how” is open APIs. We can’t ignore the realities of the landscape and will be hurt if we ignore the technical demands of those realities: interoperability and compatibility.

Google’s DeMichillie drove this point home over and over again during his fireside chat with Fortune senior writer Barb Darrow: “There’s no doubt we’re going to live in this world where we have applications running on an on-prem environment and a cloud environment.”

Open APIs matter because, as DeMichillie reiterated, they allow you to find the right mix, running what you need to on-prem and in the cloud. As demands change they allow you to change your mind, move providers, or move technologies, without the costliness and headaches of migrating from scratch. How do you adapt your architecture to emerging technologies, like Kubernetes? With open APIs.

The open, cross-community collaboration in the Kubernetes and OpenStack Working Groups is a perfect example. Both groups have mutual users, and those users can benefit from access to emerging technologies on both their cloud platforms. As DeMichillie put it, “Cloud is a really interesting opportunity to not repeat mistakes of the past. Take the openness that Linux brought to the world. We need to make sure that we don’t take a step back and we don’t make that mistake.”

Want to join the conversation around openness and multi-cloud? You can join us at an upcoming OpenStack Seattle MeetUp, or find me for a cup of Seattle coffee at anne at openstack.org, IRC annabelleB, or Twitter @whyhiannabelle.

The post Hanging in cloud city appeared first on OpenStack Superuser.

by Anne Bertucio at June 15, 2017 11:23 AM

Cisco Cloud Blog

A Rich Partner Ecosystem for a Successful Cloud Strategy

I continue to bet on Cloud Computing (I am not referring to the horse that won the Preakness Stakes ahead of ‘Classic Empire’) but as I place that bet … my assumption … is that the various major industry players will continue to work together in a meaningful way toward the progressive standardization of cloud technologies.

by Enrico Fuiano at June 15, 2017 11:00 AM

Galera Cluster by Codership

China Mobile & Intel Deploy Galera for 1000-Node OpenStack Cloud

The white paper shares lessons learned from a careful, component level analysis of China Mobile’s 1000-node OpenStack cloud. They uncovered three major issues in the course of this study, all addressed through OpenStack configuration changes. The result: a more stable and performant China Mobile OpenStack cloud and insights into scale bottlenecks and areas for future work.

Read the full story: Analyzing and Tuning China Mobile’s OpenStack Production Cloud white paper below


by Sakari Keskitalo at June 15, 2017 08:13 AM

Carlos Camacho

TripleO deep dive session #10 (Containers)

This is the 10th release of the TripleO “Deep Dive” sessions

In this session we will have an update for the TripleO containers effort, thanks to Jiri Stransky.

So please, check the full session content on the TripleO YouTube channel.

Please check the sessions index to have access to all available content.

by Carlos Camacho at June 15, 2017 12:00 AM

June 14, 2017

Cisco Cloud Blog

Cloud Unfiltered Podcast, Episode 11: Ken Owens

When I think “bleeding edge,” in regards to technology I become anxious. Maybe I should get excited, but really, the feeling is one of vague panic. I think of products that I am probably not ready for and product engineers that are going to be frustrated with my inability to instantly grasp the full impact […]

by Ali Amagasu at June 14, 2017 05:22 PM

OpenStack Superuser

How OpenSwitch is democratizing the networking market

SuperuserTV sat down with Albert Fishman, marketing chair of OpenSwitch to find out more about the project.

Started by HP and now part of the Linux Foundation, it was one of 11 open source projects participating in the Open Source Days at the OpenStack Summit Boston. Born from the need to decouple networking software from hardware, OpenSwitch is an open source, Linux-based network operating system (NOS) designed to power enterprise-grade switches from multiple hardware vendors, enabling organizations to rapidly build datacenter networks customized for unique business needs.

“It was a very observable response to the market needs of disaggregation and switching,” Fishman says. “People are trying to decouple the software and the hardware. The refreshment cycles in hardware are driven by speeds and feeds needs while the software cadence is driven by a faster pace of innovation.” It’s a huge market, he adds, that in 2017 will reach almost a billion dollars in revenue — but one that’s still closed to all but a networking “elite” — Google, Facebook, Amazon etc. OpenSwitch aims to open the way for users outside those giants.

Catch the entire three-minute interview below for more on what to expect from the project in the next six to 12 months. “Our target is to see it being deployed, that would be a huge achievement,” he says. “We’ll also integrate with other technologies including OpenStack.”

The post How OpenSwitch is democratizing the networking market appeared first on OpenStack Superuser.

by Superuser at June 14, 2017 03:25 PM

Lars Kellogg-Stedman

OpenStack, Containers, and Logging

I've been thinking about logging in the context of OpenStack and containerized service deployments. I'd like to lay out some of my thoughts on this topic and see if people think I am talking crazy or not.

There are effectively three different mechanisms that an application can use to emit …

by Lars Kellogg-Stedman at June 14, 2017 04:00 AM

June 13, 2017

Chris Dent

TC Report 24

No meeting this week, but some motion on a variety of proposals and other changes. As usual, this document doesn't report on everything going on with the Technical Committee. Instead it tries to focus on those things which I subjectively believe may have impact on community members.

I will be taking some time off between now and the first week of July so there won't be another of these until July 11th unless someone else chooses to do one.

New Things

No recently merged changes in policy, plans, or behavior. The office hours announced in last week's report are happening, and the associated IRC channel, #openstack-tc, is gaining members and increased chatter.

Pending Stuff

Queens Community Goals

Progress continues on the discussion surrounding community goals for the Queens cycle. There are enough well defined goals that we'll have to pick from amongst the several that are available to narrow it down. I would guess that at some point in the not too distant future there will be some kind of aggregated presentation to help us all decide. I would guess that since I just said that, it will likely be me.

Managing Binary Artifacts

With the addition of a requirement to include architecture in the metadata associated with the artifact the Guidelines for managing releases of binary artifacts appears to be close to making everyone happy. This change will be especially useful for those projects that want to produce containers.


There was some difference of opinion on the next steps on documenting the state of PostgreSQL, but just in the last couple of hours today we seem to have reached some agreement to do only those things on which everyone agrees. Last week's report has a summary of the discussion that was held in a meeting that week. Dirk Mueller has taken on the probably not entirely pleasant task of consolidating the feedback. His latest work can be found at Declare plainly the current state of PostgreSQL in OpenStack. The briefest of summaries of the difference of opinion is that for a while the title of that review had "MySQL" where "PostgreSQL" is currently.

Integrating Feedback on the 2019 TC Vision

The agreed next step on the Draft technical committee vision for public feedback has been to create a version which integrates the most unambiguous feedback and edits the content to have more consistent tense, structure and style. That's now in progress at Begin integrating vision feedback and editing for style. The new version includes a few TODO markers for adding things like a preamble that explains what's going on. As the document evolves we'll be simultaneously discussing the ambiguous feedback and determining what we can use and how that should change the document.

Top 5 Help Wanted List

The vision document mentions a top ten hit list that will be used in 2019 to help orient contributors to stuff that matters. Here in 2017 the plan is to start smaller with a top 5 list of areas where new individuals and organizations can make contributions that will have immediate impact. The hope is that by having a concrete and highly visible list of stuff that matters people will be encouraged to participate in the most productive ways available. Introduce Top 5 help wanted list provides the framework for the concept. Once that framework merges anyone is empowered to propose an item for the list. That's the best part.

by Chris Dent at June 13, 2017 07:30 PM


Thiago da Silva and Christian Schwede: OpenStack Swift

Thiago da Silva and Christian Schwede discuss what's new in OpenStack Swift in the Ocata release, at the OpenStack PTG, 2017

by Rich Bowen at June 13, 2017 05:56 PM

John Trowbridge: TripleO in Ocata

John Trowbridge (Trown) talks about his work on TripleO in the OpenStack Ocata period, and what's coming in Pike.

by Rich Bowen at June 13, 2017 05:56 PM

Giulio Fidente: TripleO and Ceph in OpenStack Ocata

Giulio Fidente talks about TripleO and Ceph at the OpenStack PTG in Atlanta.

by Rich Bowen at June 13, 2017 05:56 PM

Doug Hellmann: Release management in OpenStack Ocata

Doug Hellmann talks about release management in OpenStack Ocata, at the OpenStack PTG in Atlanta.

by Rich Bowen at June 13, 2017 05:56 PM

SUSE Conversations

What Customers Really Think about OpenStack Cloud

According to the French philosopher René Descartes, to know what people really think, pay attention to what they do, rather than what they say. In the light of that wisdom, if you want to know what customers really think about OpenStack cloud software, you should pay attention to what they are doing with it.  Or …

+read more

The post What Customers Really Think about OpenStack Cloud appeared first on SUSE Blog. Mark_Smith

by Mark_Smith at June 13, 2017 02:37 PM

OpenStack Superuser

Containers are easy, running them in production is something else

SuperuserTV talks to David Aronchick, senior product manager for the Google Container Engine and lead product management on behalf of Google for Kubernetes.

“Running containers is easy but when it comes to running containers in production you need something more sophisticated. That’s what Kubernetes is designed to do – provide a way to run many thousands of containers simultaneously,” Aronchick says, adding that in the “box” it includes all the necessary components to run in production at scale — monitoring, logging, deployment, orchestration, etc.

Aronchick also talks about the Cloud Native Computing Foundation, the Open Container Initiative and Kubernetes day at the Summit – including why he likes when people stop talking about Kubernetes.

He also offers tips on how to ramp up on Kubernetes. Starting with  kubernetes.io and  the GitHub repository. “From there you’ll find an absolute litany of various ways to engage — Slack channels, email lists, Twitter accounts,” Aronchick says. “The entire community is out there listening and trying to make things better.”
You can catch the entire 4:30 interview below.

The post Containers are easy, running them in production is something else appeared first on OpenStack Superuser.

by Superuser at June 13, 2017 01:27 PM


Writing VNFs for OPNFV

The entire OPNFV stack, ultimately, serves one purpose: to run virtual network functions (VNFs) that in turn constitute network services.

by Amar Kapadia at June 13, 2017 12:12 AM

June 12, 2017

Red Hat Stack

Using Ansible Validations With Red Hat OpenStack Platform – Part 2

In Part 1 we demonstrated how to set up a Red Hat OpenStack Ansible environment by creating a dynamic Ansible inventory file (check it out if you’ve not read it yet!).

Next, in Part 2 we demonstrate how to use that dynamic inventory with included, pre-written Ansible validation playbooks from the command line.


Time to Validate!

The openstack-tripleo-validations RPM provides all the validations. You can find them in /usr/share/openstack-tripleo-validations/validations/ on the director host. Here’s a quick look, but check them out on your deployment as well.


With Red Hat OpenStack Platform we ship over 20 playbooks to try out, and there are many more upstream.  Check the community often as the list of validations is always changing. Unsupported validations can be downloaded and included in the validations directory as required.

A good first validation to try is the ceilometerdb-size validation. This playbook ensures that the ceilometer configuration on the Undercloud doesn’t allow data to be retained indefinitely. It checks the metering_time_to_live and event_time_to_live parameters in /etc/ceilometer/ceilometer.conf to see if they are either unset or set to a negative value (representing infinite retention). Ceilometer data retention can lead to decreased performance on the director node and degraded abilities for third party tools which rely on this data.

Now, let’s run this validation using the command line in an environment where we have one of the values it checks set correctly and the other incorrectly. For example:

[stack@undercloud ansible]$ sudo awk '/^metering_time_to_live|^event_time_to_live/' /etc/ceilometer/ceilometer.conf

metering_time_to_live = -1


Method 1: ansible-playbook

The easiest way is to run the validation using the standard ansible-playbook command:

$ ansible-playbook /usr/share/openstack-tripleo-validations/validations/ceilometerdb-size.yaml


So, what happened?

Ansible output is colored to help read it more easily. The green “OK” lines for the “setup” and “Get TTL setting values from ceilometer.conf” tasks represent Ansible successfully finding the metering and event values, as per this task:

  - name: Get TTL setting values from ceilometer.conf
    become: true
    ini: path=/etc/ceilometer/ceilometer.conf section=database key={{ item }} ignore_missing_file=True
    register: config_result
    with_items:
      - "{{ metering_ttl_check }}"
      - "{{ event_ttl_check }}"

And the red and blue outputs come from this task:

  - name: Check values
    fail: msg="Value of {{ item.item }} is set to {{ item.value or '-1' }}."
    when: item.value|int < 0 or item.value == None
    with_items: "{{ config_result.results }}"

Here, Ansible will issue a failed result (the red) if the “Check Values” task meets the conditional test (less than 0 or non-existent). So, in our case, since metering_time_to_live was set to -1 it met the condition and the task was run, resulting in the only possible outcome: failed.

With the blue output, Ansible is telling us it skipped the task. In this case this represents a good result. Consider that the event_time_to_live value is set to 259200. This value does not match the conditional in the task (item.value|int < 0 or item.value  == None). And since the task only runs when the conditional is met, and the task’s only output is to produce a failed result, it skips the task. So, a skip means we have passed for this value.
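The pass/fail logic just described can be sketched in a few lines of plain Python using the standard-library configparser. This is a hypothetical stand-in for the Ansible tasks, not part of the validation itself; it assumes the two TTL options live in the [database] section, as the playbook's ini task does:

```python
import configparser

def check_ceilometer_ttls(path="/etc/ceilometer/ceilometer.conf"):
    """Mirror the validation: 'failed' on unset or negative TTLs,
    'skipped' (i.e. passed) otherwise."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    results = {}
    for key in ("metering_time_to_live", "event_time_to_live"):
        raw = cfg.get("database", key, fallback=None)
        # Unset or negative means infinite retention -> the task fires and fails.
        if raw is None or int(raw) < 0:
            results[key] = "failed"
        else:
            results[key] = "skipped"  # conditional not met, task skipped
    return results
```

With the example values shown earlier (metering_time_to_live = -1, event_time_to_live = 259200) this returns "failed" for the metering setting and "skipped" for the event setting.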

For even more detail you can run ansible-playbook in a verbose mode, by adding -vvv to the command:

$ ansible-playbook -vvv /usr/share/openstack-tripleo-validations/validations/ceilometerdb-size.yaml

You’ll find an excellent and interesting amount of information is returned and worth the time to review. Give it a try on your own environment. You may also want to learn more about Ansible playbooks by reviewing the full documentation.

Now that you’ve seen your first validation you can see how powerful they are. But the CLI is not the only way to run the validations.

Ready to go deeper with Ansible? Check out the latest collection of Ansible eBooks, including free samples from every title!

In the final part of the series we introduce validations with both the OpenStack scheduling service, Mistral, and the director web UI. Check back soon!

The “Operationalizing OpenStack” series features real-world tips, advice and experiences from experts running and deploying OpenStack.

by August Simonelli, Technical Marketing Manager, Cloud at June 12, 2017 11:19 PM


Recent blog posts: June 12

Experiences with Cinder in Production by Arne Wiebalck

The CERN OpenStack cloud service is providing block storage via Cinder since Havana days in early 2014. Users can choose from seven different volume types, which offer different physical locations, different power feeds, and different performance characteristics. All volumes are backed by Ceph, deployed in three separate clusters across two data centres.

Read more at http://openstack-in-production.blogspot.com/2017/06/experiences-with-cinder-in-production.html

Using Ansible Validations With Red Hat OpenStack Platform – Part 1 by August Simonelli, Technical Marketing Manager, Cloud

Ansible is helping to change the way admins look after their infrastructure. It is flexible, simple to use, and powerful. Ansible uses a modular structure to deploy controlled pieces of code against infrastructure, utilizing thousands of available modules, providing everything from server management to network switch configuration.

Read more at http://redhatstackblog.redhat.com/2017/06/08/using-ansible-validations-with-red-hat-openstack-platform-part-1/

Upstream First…or Second? by Adam Young

From December 2011 until December 2016, my professional life was driven by OpenStack Keystone development. As I’ve made an effort to diversify myself a bit since then, I’ve also had the opportunity to reflect on our approach, and perhaps see somethings I would like to do differently in the future.

Read more at http://adam.younglogic.com/2017/06/upstream-first-or-second/

Accessing a Mistral Environment in a CLI workflow by John

Recently, with some help of the Mistral devs in freenode #openstack-mistral, I was able to create a simple environment and then write a workflow to access it. I will share my example below.

Read more at http://blog.johnlikesopenstack.com/2017/06/accessing-mistral-environment-in-cli.html

OpenStack papers community on Zenodo by Tim Bell

At the recent summit in Boston, Doug Hellmann and I were discussing research around OpenStack, both the software itself but also how it is used by applications. There are many papers being published in proceedings of conferences and PhD theses but finding out about these can be difficult. While these papers may not necessarily lead to open source code contribution, the results of this research is a valuable resource for the community.

Read more at http://openstack-in-production.blogspot.com/2017/06/openstack-papers-community-on-zenodo.html

Event report: Red Hat Summit, OpenStack Summit by rbowen

During the first two weeks of May, I attended Red Hat Summit, followed by OpenStack Summit. Since both events were in Boston (although not at the same venue), many aspects of them have run together.

Read more at http://drbacchus.com/event-report-red-hat-summit-openstack-summit/

by Rich Bowen at June 12, 2017 02:57 PM

James Page

Ubuntu OpenStack Dev Summary – 12th June 2017

devsumWelcome to the second Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

OpenStack Distribution

Stable Releases

The current set of OpenStack Newton point releases have been released:


The next cadence cycle of stable fixes is underway – the current candidate list includes:

Cinder: RBD calls block entire process (Kilo)

Cinder: Upload to image does not copy os_type property (Kilo)

Swift: swift-storage processes die if rsyslog is restarted (Kilo, Mitaka)

Neutron: Router HA creation race (Mitaka, Newton)

We’ll also sweep up any new stable point releases across OpenStack Mitaka, Newton and Ocata projects at the same time:




Development Release

x86_64, ppc64el and s390x builds of Ceph 12.0.3 (the current Luminous development release) are available for testing via PPA whilst misc build issues are resolved with i386 and armhf architectures:


OpenStack Pike b2 was out last week; dependency updates have been uploaded (including 5 new packages) and core project updates are being prepared this week pending processing of new packages in Ubuntu Artful development.

OpenStack Snaps

We’re really close to a functional all-in-one OpenStack cloud using the OpenStack snaps – work is underway on the nova-hypervisor snap to resolve some issues with use of sudo by the nova-compute and neutron-openvswitch daemons. Once this work has landed expect a more full update on efforts to-date on the OpenStack snaps, and how you can help out with snapping the rest of the OpenStack ecosystem!

If you want to give the current snaps a spin to see what’s possible checkout snap-test.

Nova LXD

Work on support for new LXD features to allow multiple storage backends has been landed into nova-lxd. Support for LXD using storage pools has also been added to the nova-compute and lxd charms.

The Tempest experimental gate is now functional again (hint: use ‘check experimental’ on a Gerrit review). Work is also underway to resolve issues with Neutron linuxbridge compatibility in OpenStack Pike (raised by the OpenStack-Ansible team – thanks!), including adding a new functional gate check for this particular networking option.

OpenStack Charms

Deployment Guide

The charms team will be starting work on the new OpenStack Charms deployment guide in the next week or so; if you’re an OpenStack Charm user and would like to help contribute to a best practice guide to cover all aspects of building an OpenStack cloud using MAAS, Juju and the OpenStack Charms we want to hear from you!  Ping jamespage in #openstack-charms on Freenode IRC or attend our weekly meeting to find out more.

Stable bug backports

If you have bugs that you’d like to see backported to the current stable charm set, please tag them with the ‘stable-backport’ tag (and they will pop-up in the right place in Launchpad) – you can see the current pipeline here.

We’ve had a flurry of stable backports over the last few weeks to fill in the release gap left when the project switched to a 6 month release cadence, so be sure to update and test out the latest versions of the OpenStack charms in the charm store.

IRC (and meetings)

You can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.


by JavaCruft at June 12, 2017 01:41 PM


How to Write a Cinder Driver

After too many hours of trial and error and searching for the right solution on how to properly write and integrate your own backend in cinder, here are all the steps and instructions necessary. So if you are looking for a guide on how to integrate your own cinder driver, look no further.

Why do we need a Cinder Driver, and why are we even using Cinder? We created Hera, a distributed storage system, based on ZFS, used in SESAME, a 5G project, in which we are project partners. To achieve an integration of Hera into SESAME, which uses OpenStack, we had to create a Cinder driver.

First of all, we have the Hera storage system with a RESTful API; all the logic and functionality is already available. We position the driver as a proxy between Cinder and Hera. To implement the driver methods, one does not have to look very far: there is a page in the OpenStack Cinder docs that explains which methods need to be implemented and what they do. For a basic Cinder driver skeleton check out this repository: Cinder Driver Example.

We decided on a normal volume driver, but you may also decide to write another kind of driver, in which case you need to inherit from another base driver, e.g. write your driver for SAN volumes (SanDriver) or for iSCSI volumes (ISCSIDriver). We also kept looking at other drivers (mainly the LVM driver) for guidance during the implementation.

These methods are necessary for a complete driver; while implementing them we wanted to try out individual methods as soon as they were written. Once the mandatory methods were implemented and we attempted to execute the driver’s code, nothing was happening! We quickly realised that the get_volume_stats method returns crucial information about the storage system to the Cinder scheduler. The scheduler will not know anything about the driver if no values are returned, so for a quick test we hardcoded this dict and the scheduler stopped complaining:

{
    'volume_backend_name': 'foo',
    'vendor_name': 'bar',
    'driver_version': '3.0.0',
    'storage_protocol': 'foobar',
    'total_capacity_gb': 42,
    'free_capacity_gb': 42
}
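Wired into a driver skeleton, the hardcoded stats would be returned from get_volume_stats, the method the scheduler polls. The following is a standalone sketch with placeholder names and values; a real driver inherits from cinder.volume.driver.VolumeDriver and queries the backend instead of hardcoding:

```python
class FooDriver(object):
    """Bare-bones stand-in for a Cinder volume driver (illustrative only)."""

    def get_volume_stats(self, refresh=False):
        # The scheduler polls this; without these keys it will
        # never consider the backend when placing volumes.
        return {
            'volume_backend_name': 'foo',
            'vendor_name': 'bar',
            'driver_version': '3.0.0',
            'storage_protocol': 'foobar',
            'total_capacity_gb': 42,
            'free_capacity_gb': 42,
        }
```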

In order to provide parameters to your driver, you can also add them in the following way, as part of the driver implementation. Here, we add a REST endpoint as a configuration option in the volume_opts list.

volume_opts = [
    cfg.StrOpt('foo_api_endpoint',
               help='the api endpoint at which the foo storage system sits')
]

All of the options that are defined can be overridden by setting them in the /etc/cinder/cinder.conf file under the configuration section of our own driver.

When you get to implement the functionality of the driver, you will want to know what Cinder passes to it; the volume dict parameter in particular is of great interest.

To test your methods quickly and easily, it is very important that the driver is in the correct directory, where all the Cinder drivers are installed, otherwise Cinder will, naturally, not find the driver. This differs depending on how OpenStack has been installed on your machine. With devstack the drivers are in: /opt/stack/cinder/cinder/volume/drivers . With packstack they are in: /usr/lib/python2.7/site-packages/cinder/volume/drivers .

There was one last headache that needed to be resolved to allow full integration of our Cinder driver. Once the driver is placed in the correct directory, we proceed to add the necessary options (as shown below) to the /etc/cinder/cinder.conf file.

# first we need to enable the backend (lvm is already set by default)
enabled_backends = lvmdriver-1,foo
# then add these options to your driver configuration at the end of the file
volume_backend_name = foo # this is super important!!!
volume_driver = cinder.volume.drivers.foo.FooDriver # path to your driver
# also add the options that you can set freely (volume_opts)
foo_api_endpoint = ''

You must set the volume_backend_name because it links Cinder to the correct backend; without it nothing will ever work (NOTHING!).

Finally, when you want to execute operations on it, you must create the volume type for your Cinder driver:

cinder type-create foo
cinder type-key foo set volume_backend_name=foo

Now restart the Cinder services (c-vol, c-sch, c-api) and you should be able to use your own storage system through cinder.

by anke at June 12, 2017 12:01 PM

OpenStack Superuser

Takeaways from 1,000 deployments a day: Paddy Power Betfair

Superuser caught up with Steven Armstrong, principal DevOps automation engineer, Paddy Power Betfair. Paddy Power Betfair is a recent winner of the Superuser Award — they tied with UKCloud at the Boston Summit.

One of the reasons the community and judges were so impressed was the eye-popping number of 1,000 deployments a day using OpenStack APIs. (Here’s the full nomination for details.) Armstrong says they manage that feat by bringing teams in for a sprint so they can onboard micro-service applications. “We try and create a self-service model so we teach them how to use the OpenStack infrastructure…” He says that the first time takes a bit longer, because teams are learning how to manage the infrastructure themselves, but after a while they deploy on their own. “That’s really the key here, training them so it’s a full-service model.” The system removes obstacles for developers to get new products to market, he adds.

His team of just eight engineers looks after the main OpenStack platform, networking with Nuage and upgrades. They’ve also got onboarding teams in each location — the London, Cluj, Dublin and Porto offices. “They’re small teams and because everything that we do is automated, so it’s really just an educational piece to get them to use the platform.”

What’s next at Paddy Power Betfair? Armstrong says they’re now in the middle of a migration project — moving all customer-facing applications to OpenStack — and they’re at about 25 percent of the total. They expect to complete the migration by the middle of 2018.

You can catch the entire 4:30 interview below.

The post Takeaways from 1,000 deployments a day: Paddy Power Betfair appeared first on OpenStack Superuser.

by Superuser at June 12, 2017 11:40 AM

June 10, 2017

OpenStack Superuser

Inside the first OpenStack Forum

Tom Fifield, OpenStack community manager, talks to SuperuserTV about the benefits of combining developers and operators in a new OpenStack Summit component called the Forum, which replaced the Design Summit that took place during previous Summits.

This new event removes the divide between developers and users, supporting more strategic conversations, such as how OpenStack engages with adjacent communities. Other sessions focused on topics such as cross-project collaboration and hierarchical quotas.

Overall, The Forum aims at bridging the gap between developers and operators, furthering the conversations that revolve around OpenStack’s principle of open design. Find out more from the four-minute interview below.

The post Inside the first OpenStack Forum appeared first on OpenStack Superuser.

by Ashlee Ferguson at June 10, 2017 01:34 PM

June 09, 2017

OpenStack Blog

OpenStack Developer Mailing List Digest June 3-9

SuccessBot Says

  • fungi 1: OpenStack general mailing list archives from Launchpad (July 2010 to July 2013) have been imported into the current general archive on lists.openstack.org.
  • andreaf 2: Tempest ssh validation running by default in the gate on master

etcd as a Base Service Update

  • Update to base service resolution from the TC 3.
  • Projects wanting to use the etcd v3 gRPC API 4.
  • Projects that depend on eventlet can use the etcd3 v3 alpha HTTP API 5.
  • If you use tooz, there are two driver choices 6 7.
  • Oslo.cache driver 8.
  • Devstack uses etcd3 by default 9.
  • Cinder points to it 10.
  • Keystone using etcd3 for caching 11.
  • oslo.config to store configurations in etcd3 12.
  • Full thread: 13

Global Request ID Progress

  • oslo.context / oslo.middleware – everything DONE
  • devstack logging additional globalrequestid – DONE
  • cinder:
    • client supports globalrequestid – DONE
    • Cinder calls Nova with globalrequestid – TODO (waiting on Novaclient release)
    • Cinder calls Glance with globalrequestid – TODO
  • neutron:
    • client supports globalrequestid – IN PROGRESS (this landed, released, but the neutron client release had to be blocked for unrelated issues).
    • Neutron calls Nova with globalrequestid – TODO (waiting on Novaclient release)
  • nova:
    • Convert to oslo.middleware (to accept globalrequestid) – DONE
    • client supports globalrequestid – IN PROGRESS (waiting for release here 14)
    • Nova calls cinder with globalrequestid – DONE
    • Nova calls neutron with globalrequestid – TODO (waiting on working neutronclient release)
    • Nova calls Glance with global request id – IN PROGRESS (review needs final +2 here 15)
  • glance:
    • client supports globalrequestid – DONE
    • Glance supports setting globalrequestid – IN REVIEW 16 *(some debate on this).
  • Full thread: 17

Unreleased Libraries

  • Several teams with library deliverables haven’t seen any release this cycle:
    • glance-store
    • instack
    • pycadf
    • python-barbicanclient
    • python-congressclient
    • python-designateclient
    • python-searchlightclient
    • python-swiftclient
    • python-tackerclient
    • requestsexceptions
  • Full thread 18

POST /api-wg/news

  • Guidelines proposed for freeze:
    • Add guideline about consuming endpoints from catalog 19.
    • Add support for historical service type aliases 20.
    • Describe the publication of service-types-authority data 21.
  • Guidelines Under Review
    • Microversions: add nextminversion field in version body 22.
    • A suite of several documents about doing version discovery 23
    • WIP: microversion architecture archival doc (very early; not yet ready for review) 24
  • Full thread: 25

TC Report 23

  • Chris Dent already does a wonderful summary 26.

Project Teams Gathering – Denver September 11-15th

  • What: Second Project Team Gathering
  • When: September 11-15
  • Where: Denver, Colorado, at the Renaissance Hotel 27
  • Schedule:
    • How long: PTG will run for 5 days Monday – Friday, September 11-15th
    • Inter-project team work: Monday – Tuesday
    • Single project meetings: Wednesday-Friday
  • Check with PTL’s before booking travel as some teams may not meet all three days.
  • Work in progress schedule 28
  • The OpenStack Foundation has reserved a block of discounted rooms at $149/night USD. Rooms will be available 27 until August 20 or until they sell out.
  • Check if you need a visa 29
  • Requests for invitation letters can be submitted here 30, and must be received by Friday, August 25, 2017.
  • Travel support program first round starts July 2nd. Apply now 31
  • Full thread: 32

[1] – http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-05-26.log.html

[2] – http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2017-05-28.log.html

[3] – https://governance.openstack.org/tc/reference/base-services.html

[4] – https://pypi.python.org/pypi/etcd3

[5] – https://pypi.python.org/pypi/etcd3gw

[6] – https://github.com/openstack/tooz/blob/master/setup.cfg#L29

[7] – https://github.com/openstack/tooz/blob/master/setup.cfg#L30

[8] – https://github.com/openstack/oslo.cache/blob/master/setup.cfg#L33

[9] – http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/etcd3

[10] – http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/cinder#n356

[11] – https://review.openstack.org/#/c/469621/

[12] – https://review.openstack.org/#/c/454897/

[13] – http://lists.openstack.org/pipermail/openstack-dev/2017-June/thread.html#117967

[14] – https://review.openstack.org/#/c/471323/

[15] – https://review.openstack.org/#/c/467242/

[16] – https://review.openstack.org/#/c/468443/

[17] – http://lists.openstack.org/pipermail/openstack-dev/2017-June/thread.html#117924

[18] – http://lists.openstack.org/pipermail/openstack-dev/2017-June/118146.html

[19] – https://review.openstack.org/#/c/462814/

[20] – https://review.openstack.org/#/c/460654/

[21] – https://review.openstack.org/#/c/462815/

[22] – https://review.openstack.org/#/c/446138/

[23] – https://review.openstack.org/#/c/459405/

[24] – https://review.openstack.org/#/c/444892/

[25] – http://lists.openstack.org/pipermail/openstack-dev/2017-June/118069.html

[26] – http://lists.openstack.org/pipermail/openstack-dev/2017-June/117950.html

[27] – http://www.marriott.com/meeting-event-hotels/group-corporate-travel/groupCorp.mi?resLinkData=the%20OpenStack%20Project%20Teams%20Gathering%5Edensa%60fntfnta%60149.00%60USD%60false%604%609/7/17%609/19/17%608/20/17&app=resvlink&stop_mobi=yes

[28] – https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/edit?usp=sharing

[29] – http://travel.state.gov/content/visas/en/general/visa-wizard.html

[30] – https://openstackfoundation.formstack.com/forms/visa_form_denver_ptg

[31] – https://openstackfoundation.formstack.com/forms/travelsupportptg_denver

[32] – http://lists.openstack.org/pipermail/openstack-dev/2017-June/118002.html

by Mike Perez at June 09, 2017 10:04 PM

OpenStack in Production

Experiences with Cinder in Production

The CERN OpenStack cloud service is providing block storage via Cinder since Havana days in early 2014.  Users can choose from seven different volume types, which offer different physical locations, different power feeds, and different performance characteristics. All volumes are backed by Ceph, deployed in three separate clusters across two data centres.

Due to its flexibility, the volume concept has become very popular with users and the service has hence grown during the past years to over 1PB of allocated quota, hosted in more than 4'000 volumes. In this post, we'd like to share some of the features we use and point out some of the pitfalls we've run into when running (a very stable and easy to maintain) Cinder service in production.

Avoiding sadness: Understanding the 'host' parameter

With the intent to increase the resiliency, we configured the service from the start to run on multiple hosts. The three controller nodes were set up in an identical way, so all of them ran the API ('c-api'), scheduler ('c-sched') and volume ('c-vol') services.

With the first upgrades, however, we realised that there was a coupling between a volume and the 'c-vol' service that had created it: each volume is associated with its creation host which, by default, is identified by hostname of the controller. So, when the first controller needed to be replaced, the 'c-sched' wasn't able to find the original 'c-vol' service which would be able to execute volume operations. At the time, we fixed this by changing the corresponding volume entries in the Cinder database to point to the new host that was added.

As the Cinder configuration allows the 'host' to be set directly in 'cinder.conf',  we set this parameter to be the same on all controllers with the idea to remove the coupling between the volume and the specific 'c-vol' which was used to create it. We ran like this for quite a while and although we never saw direct issues related to this setting, in hindsight it may explain some of the issues we had with volumes getting stuck in transitional states. The main problem here is the clean-up being done as the daemons start up: as they assume exclusive access to 'their' volumes, volumes in transient states will be "cleaned up", e.g. their state reset, when a daemon starts, so in a setup with identical 'host's, this may cause undesired interferences.

Taking this into account, our setup has been changed to keep the 'c-api' on all three controllers, but run the 'c-vol' and 'c-sched' services on one host only. Closely following the recent work of the Cinder team to improve the locking and allow for Active/Active HA, we're looking forward to having Active/Active HA 'c-vol' services fully available again.
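For reference, the interim shared-host setup amounted to a single line in 'cinder.conf' (the value here is a placeholder, not CERN's actual setting):

```ini
[DEFAULT]
# Identical on all three controllers: decouples volumes from a
# specific controller hostname, but lets c-vol daemons starting
# concurrently reset each other's in-flight volume states.
host = cinder-cluster-1
```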

Using multiple volume types: QoS and quota classes

The scarce resource on our Ceph backend is not space, but IOPS, and after we handed out the first volumes to users, we quickly realised that some resource management was needed. We achieved this by creating a QoS spec and associating it with the one volume type we had at the time:

# cinder qos-create std-iops write_iops_sec=100 read_iops_sec=100
# cinder qos-associate <std_iops_qos_id> <std_volume_type_id>

This setting does not only allow you to limit the amount of IOPS used on this volume type, but also to define different service levels. For instance, for more demanding use cases we added a high IOPS volume type to which access is granted on a per request basis:

# cinder type-create high-iops
# cinder qos-create high-iops write_iops_sec=500 read_iops_sec=500
# cinder qos-associate <high_iops_qos_id> <high_iops_volume_type_id>

Note that both types are provided by the same backend and physical hardware (which also allows for a conversion without data movement between these types using 'cinder retype')! Note also that for attached volumes a detach/re-attach cycle is needed to have QoS changes taking effect.

In order to manage the initial default quotas for these two (and the other five volume types the service offers), we use Cinder's support for quota classes. As apart from the std-iops volume type all other volume types are only available on request, the initial quota is usually set to '0'. So, in order to create the default quotas for a new type, we would hence update the default quota class by running a command like:

# cinder type-create new-type
# cinder quota-class-update --volume-type new-type --volumes 0 --snapshots 0 --gigabytes 0 default

Of course, this method can also be used to define different initial quotas for new volume types, but it is in any case a way to avoid setting the initial quotas explicitly after project creation.

Fixing a long-standing issue: Request timeouts and DB deadlocks

For quite some time, our Cinder deployment had suffered from request timeouts leading to volumes left in error states when doing parallel deletions. Though easily reproducible, this was infrequent (and subsequently received the corresponding attention ...). Recently, however, this became a much more severe issue with the increased use of Magnum and Kubernetes clusters (which use volumes and hence launch parallel volume deletions at larger scale when being removed). This affected the overall service availability (and, subsequently, received the corresponding attention here as well ...).

In these situations, the 'c-vol' logs showed lines like

"Deadlock detected when running 'reservation_commit': Retrying ..."

and hence indicated a locking problem. We weren't able to pinpoint in the code how a deadlock would occur, though. A first change that mitigated the situation was to reduce the 'innodb_lock_wait_timeout' from its default value of 50 seconds to 1 second: the client was less patient and exercised the retry logic that decorates the database interactions much earlier. Clearly, this did not address the underlying problem, but at least allowed the service to handle these parallel deletions in a much better way.
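On the database side, that mitigation is a single variable, shown here as a my.cnf fragment (the MySQL/MariaDB default is 50):

```ini
[mysqld]
# Fail lock waits after 1s so Cinder's retry decorator around the
# DB calls kicks in quickly, instead of clients waiting out the
# 50s default and timing out the whole request.
innodb_lock_wait_timeout = 1
```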

The real fix, suggested by a community member, was to change a setting we had carried forward since the initial setup of the service: the connection string in 'cinder.conf' did not specify a driver and hence used the mysql Python wrapper (rather than the recommended 'pymysql' Python implementation). After changing our connection from

connection = mysql://cinder:<pw>@<host>:<port>/cinder

to
connection = mysql+pymysql://cinder:<pw>@<host>:<port>/cinder

the problem basically disappeared!

So the underlying reason was the handling of green thread parallelism in the C-based wrapper vs. the native Python implementation: while the former blocks and hence enforces serialisation (eventually leading to deadlocks in SQLAlchemy), the latter allows for proper parallel execution of the requests to the database. The OpenStack oslo team is now looking into issuing a warning when it detects this obsolete setting.
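As a toy illustration (plain Python, not Cinder or oslo.db code), the only difference between the two settings is the driver component of the SQLAlchemy-style URL, which can be inspected with simple string handling:

```python
def mysql_driver(connection):
    """Return the DBAPI driver named in an SQLAlchemy-style connection URL.

    'mysql://...' names no explicit driver and falls back to the default
    C-based wrapper; 'mysql+pymysql://...' selects the pure-Python driver.
    """
    scheme = connection.split("://", 1)[0]      # e.g. 'mysql+pymysql'
    _dialect, _, driver = scheme.partition("+")  # split off the '+driver' part
    return driver or "(default C wrapper)"

# Illustrative URLs in the shape used above (credentials invented):
old = "mysql://cinder:pw@host:3306/cinder"
new = "mysql+pymysql://cinder:pw@host:3306/cinder"
```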

As using the 'pymysql' driver is generally recommended and, for instance, the default in devstack deployments, volunteers helping with this issue had a really hard time reproducing the problems we experienced ... another lesson learnt when keeping services running for a longer period :)

by Arne Wiebalck (noreply@blogger.com) at June 09, 2017 04:07 PM

OpenStack Superuser

Reduce (complexity) and deploy: a strategic focus for OpenStack

Chris Price, open source strategist for SDN, cloud and NFV at Ericsson, and Mike Perez, cross-project developer coordinator at the OpenStack Foundation, talk to Superuser TV about how they are working with adjacent technologies and on simplifying OpenStack.

Perez says that the variety of features in OpenStack with numerous configuration options has slowed things down for many potential users. His work on some aspects of this is so new that there’s been a lot of confusion.  “I’ve heard people say, ‘someone’s working on this new thing — this guy Mike, a guy called Perez and there’s someone who goes by Thingee — and those three people are all me.” But, he says, that hasn’t clogged the machine — they pushed 18 patches in one day — though there’s a lot of work left to do.

“We have options that are deprecated all the way from Folsom that are still existing in code,” Perez says. “But a lot of them are gone, as of today.”

Price says that so far they’ve learned there’s a lot of good capability and features in the OpenStack community on offer to other communities. “We’ve also learned that we have some work to do to make it relevant. There’s investment and activity into bringing those technologies forward and making them more consumable.”

As for what’s next, Perez says it may be more hands-off than on. “We have a lot of different initiatives, some tie into these adjacent technologies where we’re kind of competing with other communities where it makes sense for them to go ahead and develop what they’re good at and allow us to focus on what we’re good at with infrastructure.”

To get involved, you can reach out to Perez on IRC or Twitter, where his handle is Thingee or over email: mikeATopenstack.org

Catch the whole five-minute interview below.

The post Reduce (complexity) and deploy: a strategic focus for OpenStack appeared first on OpenStack Superuser.

by Superuser at June 09, 2017 11:33 AM

June 08, 2017

Red Hat Stack

Using Ansible Validations With Red Hat OpenStack Platform – Part 1

Ansible is helping to change the way admins look after their infrastructure. It is flexible, simple to use, and powerful. Ansible uses a modular structure to deploy controlled pieces of code against infrastructure, utilizing thousands of available modules, providing everything from server management to network switch configuration.

With recent releases of Red Hat OpenStack Platform access to Ansible is included directly within the Red Hat OpenStack Platform subscription and installed by default with Red Hat OpenStack Platform director.

In this three-part series you’ll learn ways to use Ansible to perform powerful pre- and post-deployment validations against your Red Hat OpenStack environment, utilizing the special validation scripts that ship with recent Red Hat OpenStack Platform releases.


Ansible, briefly …

Ansible modules are commonly grouped into concise, targeted actions called playbooks. Playbooks allow you to create complex orchestrations using simple syntax and execute them against a targeted set of hosts. Operations use SSH which removes the need for agents or complicated client installations. Ansible is easy to learn and allows you to replace most of your existing shell loops and one-off scripts with a structured language that is extensible and reusable.
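For illustration, a minimal playbook might look like the following. This is a generic sketch of the playbook syntax described above, not one of the shipped validations; the play name and task are invented:

```yaml
---
- name: Verify connectivity to all inventory hosts
  hosts: all
  gather_facts: false
  tasks:
    - name: Ping each host over SSH
      ping:
```

Running it with ansible-playbook against an inventory executes the ping module on every host over SSH, with no agent needed on the targets.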

Introducing … OpenStack TripleO Validations

Red Hat ships a collection of pre-written Ansible playbooks to make cloud validation easier. These playbooks come from the OpenStack TripleO Validations project (upstream, github). The project was created out of a desire to share a standard set of validations for TripleO-based OpenStack installs. Since most operators already have many of their own infrastructure tests, sharing them with the community in a uniform way was the next logical step.

On Red Hat OpenStack Platform director, the validations are provided by the openstack-tripleo-validations RPM installed during a director install. There are many different tests for all parts of a deployment: prep, pre-introspection, pre-deployment, post-deployment and so on. Validations can be run in three different ways: directly with ansible-playbook, via a Mistral workflow execution, and through the director UI.

Let’s Get Started!

Red Hat OpenStack Platform ships with an Ansible dynamic inventory creation script called tripleo-ansible-inventory. With it you can dynamically include all Undercloud and Overcloud hosts in your Ansible inventory. Dynamic inventory of hosts makes it easier to do administrative and troubleshooting tasks against infrastructure in a repeatable way. This helps manage things like server restarts, log gathering and environment validation. Here’s an example script, run on the director node, to get Ansible’s dynamic inventory setup quickly.


pushd /home/stack
# Create a directory for ansible
mkdir -p ansible/inventory
pushd ansible

# create ansible.cfg
cat << EOF > ansible.cfg
[defaults]
inventory = inventory
library = /usr/share/openstack-tripleo-validations/validations/library
EOF

# Create a dynamic inventory script
cat << EOF > inventory/hosts
#!/bin/bash
# Unset some things in case someone has a V3 environment loaded
source ~/stackrc
PLAN_NAME=\$(openstack stack list -f csv -c 'Stack Name' | tail -n 1 | sed -e 's/"//g')
/usr/bin/tripleo-ansible-inventory --plan \$PLAN_NAME \$*
EOF

chmod 755 inventory/hosts
# run inventory/hosts --list for example output

cat << EOF >> ~/.ssh/config
Host *
  StrictHostKeyChecking no
EOF
chmod 600 ~/.ssh/config

This script sets up a working directory for your Ansible commands and creates an Ansible configuration file called ansible.cfg, which includes the openstack-tripleo-validations playbooks in the Ansible library. This helps with running the playbooks easily. Next, the script creates the dynamic inventory file (~/ansible/inventory/hosts) by using /usr/bin/tripleo-ansible-inventory executed against the Overcloud’s Heat stack name.

You can run the inventory file with the --list flag to see what has been discovered:

[stack@undercloud inventory]$ /home/stack/ansible/inventory/hosts --list | jq '.'
  "compute": [
  "undercloud": {
    "vars": {
      "ansible_connection": "local",
      "overcloud_admin_password": "AAABBBCCCXXXYYYZZZ",
      "overcloud_horizon_url": ""
    "hosts": [
  "controller": [
  "overcloud": {
    "vars": {
      "ansible_ssh_user": "heat-admin"
    "children": [

We now have a dynamically generated inventory as required, including groups, using the director’s standard controller and compute node deployment roles.
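A script consuming that --list output could, for example, resolve a group to its hosts with a few lines of Python. This is a sketch against a cut-down sample of the structure shown above; the host addresses are invented, as the real output contains your actual node IPs:

```python
import json

# Cut-down sample of what `inventory/hosts --list` returns (values invented).
sample = json.loads("""
{
  "compute": ["192.0.2.11"],
  "controller": ["192.0.2.21"],
  "undercloud": {"hosts": ["localhost"],
                 "vars": {"ansible_connection": "local"}},
  "overcloud": {"children": ["compute", "controller"],
                "vars": {"ansible_ssh_user": "heat-admin"}}
}
""")

def hosts_in_group(inventory, group):
    """Resolve a group to its hosts, following 'children' groups recursively."""
    entry = inventory.get(group, [])
    if isinstance(entry, list):          # plain group: just a host list
        return entry
    hosts = list(entry.get("hosts", []))
    for child in entry.get("children", []):
        hosts.extend(hosts_in_group(inventory, child))
    return hosts
```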

We’re now ready to run the validations! 

Ready to go deeper with Ansible? Check out the latest collection of Ansible eBooks, including free samples from every title!

This is the end of the first part of our series. Check out Part 2 to learn how you can use this dynamic inventory file with the included validations playbooks!

The “Operationalizing OpenStack” series features real-world tips, advice, and experiences from experts running and deploying OpenStack.

by August Simonelli, Technical Marketing Manager, Cloud at June 08, 2017 12:16 PM

OpenStack Superuser

Making containers work in a siloed company: the athenahealth story

Growing healthy container ops in a siloed company isn’t easy — but they are in fine form at athenahealth.

Superuser TV talked to Ryan Wallner and Kevin Scheunemann, both on the technical staff of athenahealth, at the recent OpenStack Summit Boston on how they got there.

How did you get started deploying OpenStack on containers?

Scheunemann: We initially started by deploying OpenStack using Mirantis Fuel and we found some limitations on how the network was deployed. So we started looking at other options — Kolla seemed interesting because it was basically network agnostic. You could deploy your own network in any way you want and just put the containers on top and run the applications on OpenStack.

Wallner: As a company we’re moving toward containerizing a lot of things, not just infrastructure but applications, too. We have this grand vision for everything being containerized and even running OpenStack on other container systems.

How did you do it? What’s your high-level architecture?

Wallner: We started off experimenting with Kolla, so we wanted to get a sense of how it’s being used to deploy, what the Docker images are like, seeing how we want to modify it, etc. We run OpenStack on a bunch of Dell servers on bare metal so we have a lot of automation around deploying those servers. We were already using Ansible extensively to get those servers ready for OpenStack. So Ansible was a really good fit when we chose Kolla. After testing with Kolla for a while, we switched to deploying it on bare metal. Kolla gave us a lot of flexibility — it’s pretty much bare metal and containers running OpenStack and all of our work loads for developers running on OpenStack.

Scheunemann: We started with plain old VLAN networking and that was great to start with. When it was time to scale that up, we needed more of a spine-leaf architecture and most of the stock deployers don’t support that. That’s where we started working with containers.

What were some of the challenges?

Wallner: We moved from flat VLAN to spine-leaf architecture in the last few months and we wanted to move all the routing to the hosts…. Kolla was flexible but it was also “opinionated” in ways: the way Ansible reads IP addresses didn’t necessarily work with the way we put IP addresses on the loopback. There are a bunch of incremental challenges around getting the fabric to work… Once we found out how it worked it was nice that Kolla was flexible.

Scheunemann: We had the technical side and the processes side. Coming from a very siloed company, we have a whole dedicated networking team and bringing them on board to make sure that their vision and our vision are realized in the network for the infrastructure has been helpful. It’s also a challenge, because they do things differently. We learn from each other and hopefully we can grow and become one infrastructure.

Catch the entire eight-minute interview below for more on what they’re planning in the future.

The post Making containers work in a siloed company: the athenahealth story appeared first on OpenStack Superuser.

by Superuser at June 08, 2017 11:09 AM

June 07, 2017

Adam Young

Upstream First…or Second?

From December 2011 until December 2016, my professional life was driven by OpenStack Keystone development. As I’ve made an effort to diversify myself a bit since then, I’ve also had the opportunity to reflect on our approach, and perhaps see somethings I would like to do differently in the future.

OpenStack moves slowly, and Keystone moves slower than the average OpenStack project. As a security-sensitive project, it is very risk averse, and change requires overcoming a lot of inertia. Very different in pacing from the WebUI and application development I’ve done in the past. Even (mostly) internal projects like BProc had periods of rapid development. But Keystone stalled, and I think that is not healthy.

One aspect of the stall is the slow adoption of new technology, and this is, in part, due to the policy we had in place for downstream development that something has to be submitted upstream before it could be included in a midstream or downstream product. This is a very non-devops style code deployment, and I don’t blame people for being resistant to accepting code into the main product that has never been tested in anger.

When I started on Keystone, very, very little was core API. The V2 API did not even have a mechanism for changing passwords: it was all extensions. When I wrote the Trusts API, at the last minute, I was directed by several other people to make it into an extension, even though something that touched as deeply into so many parts of the code could not, realistically, be an extension. The result was only halfway done as an extension: it could neither be completely turned off nor ignored by core pieces, but it still had fragments of the namespace OS-TRUST floating around in it.

As Keystone pushed toward the V3 API, the idea of extensions was downplayed and then excised. Federation was probably the last major extension, and it has slowly been pulled into the main V3 API. There was no replacement for extensions. All features went into the main API into the next version.

Unfortunately, Keystone is lacking some pretty fundamental features. The inability to distinguish cluster-level from project-scoped authorization (good ole bug 968696) is very fundamentally coded into all of the remote services. Discoverability of features or URLs is very limited. Discoverability of authorization is non-existent.

How could I have worked on something for so long, felt so strongly about it, and yet had so little impact? Several things came in to play, but the realization struck me recently as I was looking at OpenShift origin.

Unlike RDO, which followed OpenStack, OpenShift as a project pre-dated the (more) upstream Kubernetes project on which it is now based. Due to the need to keep up with the state of container development, OpenShift actually shifted from a single-vendor-open-source-project approach to embrace Kubernetes. In doing so, however, it had the demands of real users and the need for a transition strategy. OpenShift-specific operations show up in the discovery page, just under their own top level URL. The oc and oadm commands are still used to perform operations that are not yet in the upstream kubectl. This ability to add on to upstream has proved to be Kubernetes’ strength.

The RBAC mechanism in Kubernetes has most of what I wanted from the RBAC mechanism in Keystone (the exception is Implied Roles). This was developed in Origin first, and then submitted to the upstream Kubernetes project without (AFAICT) any significant fanfare. One reason I think it was accepted so simply was that the downstream deployment had vetted the idea. Very little teaches software what it needs to do like having users.

PKI tokens are a pretty solid example of a feature that would have been much better with a rapid deployment and turnaround time. If I had put PKI tokens into production in a small way prior to submitting them upstream, the mechanism would have been significantly more efficient: we would have discovered the size issues with the X-Auth-Token headers and the issues with revocation, and built the mechanisms to minimize the token payload.

We probably would have used PKI token mechanism for K2K, not SAML, as originally envisioned.

We ended up with a competing implementation, Fernet, coming out of Rackspace, and that had the weight of “our operators need this.”

I understand why RDO pursued the Upstream First policy. We did not want to appear to be attempting a vendor lock-in. We wanted to be good community members. And I think we accomplished that. But having a midstream or downstream extension strategy to vet new ideas appears essential. Both upstream Keystone and midstream RDO could have worked to make this happen. It’s worth remembering for future development.

It is unlikely that a software spec will better cover the requirements than actually having code in production.

by Adam Young at June 07, 2017 01:58 PM


The reality of DevOps: ECommerce on OpenStack using Mirantis Cloud Platform

We can talk about the benefits of a DevOps environment all we want, but for the people who are directly involved, the reality is much more complicated.

by Nick Chase at June 07, 2017 12:00 PM

OpenStack Superuser

OpenStack Queens release: let your voice be heard

Mike Perez from the OpenStack Foundation discusses some of the community-wide goals for the Queens release, which include increasing consistency and improving reference API documentation for users trying to use OpenStack RESTful APIs.

Goals for the future release develop during Forum sessions at the Summit then make their way to the Technical Committee for final decisions.  You can get involved by joining the developer’s mailing list or proposing your goals on the Etherpad as well as keep track of deadlines for the Queens release here.

Some of the goals the community is currently focusing on for the Pike release (in development now and due at the end of August) include functional testing for Python 3.5 and supporting the Web Server Gateway Interface (WSGI) inside various projects that will allow these releases to operate as agnostic technologies with web servers including Apache and NGINX.

Another major goal includes efforts to support finer policies regarding admin roles within projects, which involve written specifications. These specifications currently exist as an actual implementation in Nova which will soon make its way out to the rest of the projects to provide a consistent experience, Perez explains.

There’s also a focus on facilitating version discovery through the deciphering of endpoints, which will benefit app developers who currently have to sift through mass amounts of code.

Check out the entire seven-minute interview — conducted with OpenStack’s youngest contributor Mila —  below.

The post OpenStack Queens release: let your voice be heard appeared first on OpenStack Superuser.

by Ashlee Ferguson at June 07, 2017 11:51 AM

OpenStack in Production

OpenStack papers community on Zenodo

At the recent summit in Boston, Doug Hellmann and I were discussing research around OpenStack, both the software itself and how it is used by applications. There are many papers being published in proceedings of conferences and PhD theses but finding out about these can be difficult. While these papers may not necessarily lead to open source code contribution, the results of this research are a valuable resource for the community.

Increasingly, publications are made under Open Access conditions, which are free of all restrictions on access. For example, all projects receiving European Union Horizon 2020 funding are required to make sure that any peer-reviewed journal article they publish is openly accessible, free of charge. In a review with the OpenStack scientific working group, open access was also felt to be consistent with OpenStack's Open principles of Open Source, Open Design, Open Development and Open Community.

There are a number of different repositories available where publications such as this can be made available. The OpenStack scientific working group are evaluating potential approaches and Zenodo looks like a good candidate as it is already widely used in the research community, is open source on GitHub, and the application also runs in the CERN Data Centre on OpenStack. Preservation of data is one of CERN's key missions and this is included in the service delivery for Zenodo.

The name Zenodo is derived from Zenodotus, the first librarian of the Ancient Library of Alexandria and father of the first recorded use of metadata, a landmark in library history.

Accessing the Repository

The list of papers can be seen at https://zenodo.org/communities/openstack-papers. Along with keywords, a dedicated search facility is available within the community so that relevant papers can be found quickly.

Submitting New Papers

Zenodo allows new papers to be submitted for inclusion into the OpenStack Papers repository. There are a number of steps to be performed.

Please ensure that these papers are available under open access conditions before submitting them to the repository if published elsewhere. Alternatively, if the papers can be published freely, they can be published in Zenodo for the first time and receive the DOI directly.
  1. Log in to Zenodo. This can be done using your github account if you have one or by registering a new account via the 'Sign Up' button.
  2. Once logged in, you can go to the openstack repository at https://zenodo.org/communities/openstack-papers and upload a new paper.
  3. The submission will then be verified before publishing.
To submit for this repository, you need to provide
  • Title of the paper
  • Author list
  • Description (the Abstract is often a good content)
  • Date of publication
If you know the information, please provide the following also
  • DOI (Digital Object Identifier) used to uniquely identify the object. In general, one will already be allocated to the paper, since the original publication will have allocated one. If none is specified, Zenodo will create one, which is good for new publications, but it is bad practice to generate duplicate DOIs for published works. So please try to find the original; this also helps with future cross-referencing.
  • There are optional fields at upload time for adding more metadata (to make it machine readable), such as “Journal” and “Conference”. Adding journal information improves the searching and collating of documents for the future so if this information is known, it is good to enter it.
Zenodo provides synchronisation facilities for repositories to exchange information (OAI 2.0). Planet OpenStack feeds using this would be an interesting enhancement to consider, and adding RSS support to Zenodo would be a welcome contribution.

by Tim Bell (noreply@blogger.com) at June 07, 2017 10:02 AM

June 06, 2017

Chris Dent

TC Report 23

This week had a TC meeting. Notes from that are in the last section below. Information about other pending or new changes that may impact readers are in the earlier sections.

New Things

The TC now has office hours and a dedicated IRC channel on freenode: #openstack-tc. Office hours will be for an hour at 09:00 UTC on Tuesdays, 01:00 UTC on Wednesdays, and 15:00 UTC on Thursdays. The idea is that some segment of the TC membership will be available during at least these times for unstructured discussion with whomever from the community wants to join in.

etcd has been approved as a base service. Emilien has posted an update on using etcd with configuration management.

Pending Stuff

Queens Community Goals

Last week's report has a bunch of links on community goals for Queens. There are enough of them that we'll have to pick and choose amongst them. A new one proposes policy and docs in code. The document has a long list of benefits of doing this. A big one is that you can end up with a much smaller policy file. Even one that doesn't exist at all if you choose to use the defaults.

The email version of the report spawned a series of subthreads on the efficacy and relative fairness of how we deal with plugins in tempest. It's unclear yet if there is any actionable followup from that. One option might be to propose an adjustment to the original split plugins goal to see how much or little support there is for the idea of all tests being managed via plugins.

Managing Binary Artifacts

With the increased interest in containers and integrating and interacting with other communities (where the norms and requirements for useful releases are sometimes different from those established in OpenStack) some clarity was required on the constraints a project must satisfy to do binary releases. Guidelines for managing releases of binary artifacts have been proposed. They are not particularly onerous but the details deserve wide review to make sure we're not painting ourselves into any corners or introducing unnecessary risks.

Meeting Stuff

This week had a scheduled TC meeting for the express purpose of discussing what to do about PostgreSQL. The remainder of this document has notes from that meeting.

There are several different concerns related to PostgreSQL, but the most pressing one is that much of the OpenStack documentation makes it appear that the volume of testing and other attention that MySQL receives is also applied to PostgreSQL. This is not the case (for a variety of reasons). There has been general agreement that something must be done, at least presenting warnings in the documentation.

A first proposal was created which provides a good deal of historical context and laid out some steps for making the current state of PostgreSQL in OpenStack more clear. After some disagreement over the extent and reach of the document I created a second proposal that tried to say less in general but specifically less about MySQL and tried to point to a future where PostgreSQL could be a legitimate option (in part because there are real people out there using it, in part because having two implementations is healthy in many ways).

This drew out a lot of discussion, including some about the philosophy of how we manage databases, but much of it only identified fairly tightly held differences and did not move us towards helping real people.

After some fatigue it was decided to have this meeting whereupon we decided (taking the entire hour to talk about it) that there were two things we could agree to, one we could not, and a need to have a discussion about some of the philosophical concerns at some other time.

The things we can agree about are:

  • The OpenStack documentation needs to indicate the lower attention that PostgreSQL currently receives from the upstream community.

  • We need better insight into the usage of OpenStack by people who are "behind" a vendor and may not care about or know about the user survey, and thus we need to work with the foundation board to improve that situation.

Where we don't agree is whether the resolution being proposed needs to include information about work being done to evaluate where on a scale of "no big deal" to "really hard" a transition to MySQL from PostgreSQL might be. This work is already planned or in progress by SUSE. I, and maybe a few others (not entirely clear), feel that while this is useful work, including it in a resolution about the current state of PostgreSQL support is at least irrelevant and at worst effectively a statement of a desire to kill support for PostgreSQL. Publishing such a statement, even if casually and without intent, could signal that effort to improve the attention PostgreSQL gets would be wasted effort.

Which leads to one of the philosophical concerns: Having even limited support for PostgreSQL means that OpenStack is demonstrating support for the idea that the database layer should be an abstraction and which RDBMS (or RDBMS interface-alike) is used in a deployment is a deployer's choice. For some this is a sign of quality and maturity (somewhat like being able to choose whichever WSGI server you feel is ideal for your situation). For others, not choosing a specific RDBMS builds in limitations that will prevent OpenStack from being able to scale and upgrade elegantly.

We were unable to agree on this point but at least some people felt it a topic we need to address in order to be able to fully resolve the PostgreSQL question. On the other side of the same coin: since there is as yet no resolution on the merit of a strong database abstraction layer it would be inappropriate to overstate the OpenStack commitment to MySQL.

The next step is that dirk has been volunteered to integrate the latest feedback on the first proposal. Once that is done, we will iterate there. People have committed to keeping their concerns and feedback focused around making the document be about those things on which we agree.

by Chris Dent at June 06, 2017 10:10 PM

John Likes OpenStack

Accessing a Mistral Environment in a CLI workflow

Recently, with some help from the Mistral devs in freenode #openstack-mistral, I was able to create a simple environment and then write a workflow to access it. I will share my example below.

You can define a mistral environment file in YAML:

(undercloud) [stack@undercloud 101]$ cat env.yaml
name: "my_env"
variables:
  foo: bar
  service_ips:
    ceph_mon_ctlplane_node_ips:
      - ""
      - ""
(undercloud) [stack@undercloud 101]$
You can then ask Mistral to store that environment:

(undercloud) [stack@undercloud 101]$ mistral environment-create -f yaml env.yaml
Name: my_env
Description: null
Variables: "{\n \"foo\": \"bar\", \n \"service_ips\": {\n \"ceph_mon_ctlplane_node_ips\"\
: [\n \"\", \n \"\"\n ]\n\
\ }\n}"
Scope: private
Created at: '2017-06-06 16:31:01'
Updated at: null
(undercloud) [stack@undercloud 101]$
Observe it in the environment list:

(undercloud) [stack@undercloud 101]$ mistral environment-list
| Name | Description | Scope | Created at | Updated at |
| tripleo.undercloud-config | None | private | 2017-06-02 21:24:12 | |
| overcloud | None | private | 2017-06-02 21:24:21 | 2017-06-02 23:32:53 |
| ssh_keys | SSH keys for TripleO validations | private | 2017-06-02 21:24:40 | |
| my_env | None | private | 2017-06-06 16:32:41 | |
(undercloud) [stack@undercloud 101]$
Look at it directly:

(undercloud) [stack@undercloud 101]$ mistral environment-get my_env
| Field | Value |
| Name | my_env |
| Description | |
| Variables | { |
| | "foo": "bar", |
| | "service_ips": { |
| | "ceph_mon_ctlplane_node_ips": [ |
| | "", |
| | "" |
| | ] |
| | } |
| | } |
| Scope | private |
| Created at | 2017-06-06 16:32:41 |
| Updated at | |
(undercloud) [stack@undercloud 101]$
You can define a workflow which can access the variables in the Mistral environment:

version: "2.0"

wf:
  tasks:
    show_env_synax1:
      action: std.echo output=<% $.get('__env') %>
      on-complete: show_env_synax2
    show_env_synax2:
      action: std.echo output=<% env() %>
      on-complete: show_ips
    show_ips:
      action: std.echo output=<% env().get('service_ips', {}).get('ceph_mon_ctlplane_node_ips', []) %>
You can then have a Mistral workflow use it by specifying it as a param as per the documentation.

mistral execution-create workflow_identifier [workflow_input] [params]
In [params] we specify the environment name. If your workflow has no [workflow_input], then pass '' to make it clear you are specifying the environment name with params as the second argument.

First we create (or update) our workflow:

(undercloud) [stack@undercloud 101]$ mistral workflow-update mistral-env.yaml
| ID | Name | Project ID | Tags | Input | Created at | Updated at |
| 18e9daee-06db-42bc-b0bf-228c19bf2c99 | wf | f282a331978146ce988911bc56435db4 | | | 2017-06-05 17:04:31 | 2017-06-06 19:04:06 |
(undercloud) [stack@undercloud 101]$
Next we execute our workflow and indicate that the [workflow_input] is empty by passing '' and after that we pass some JSON specifying that the "env" key should be "my_env" as defined above:

(undercloud) [stack@undercloud 101]$ mistral execution-create wf '' '{"env": "my_env"}'
| Field | Value |
| ID | f2c62c11-d5b6-4698-88af-3ef91240b837 |
| Workflow ID | 18e9daee-06db-42bc-b0bf-228c19bf2c99 |
| Workflow name | wf |
| Description | |
| Task Execution ID | |
| State | RUNNING |
| State info | None |
| Created at | 2017-06-06 19:05:17 |
| Updated at | 2017-06-06 19:05:17 |
(undercloud) [stack@undercloud 101]$
As a shortcut we save the UUID of the execution, and use it to get the IDs of the list of tasks:

(undercloud) [stack@undercloud 101]$ UUID=f2c62c11-d5b6-4698-88af-3ef91240b837
(undercloud) [stack@undercloud 101]$ mistral task-list $UUID | awk {'print $2'} | egrep -v 'ID|^$'
(undercloud) [stack@undercloud 101]$
Next we make sure our ID maps to the task we want to see the output for:

(undercloud) [stack@undercloud 101]$ mistral task-get edf9576b-e4b7-41c9-9d0d-2486e886ce96
| Field | Value |
| ID | edf9576b-e4b7-41c9-9d0d-2486e886ce96 |
| Name | show_env_synax1 |
| Workflow name | wf |
| Execution ID | f2c62c11-d5b6-4698-88af-3ef91240b837 |
| State | SUCCESS |
| State info | None |
| Created at | 2017-06-06 19:05:17 |
| Updated at | 2017-06-06 19:05:18 |
(undercloud) [stack@undercloud 101]$
So what was the result of using syntax1?

(undercloud) [stack@undercloud 101]$ mistral task-get-result edf9576b-e4b7-41c9-9d0d-2486e886ce96
"foo": "bar",
"service_ips": {
"ceph_mon_ctlplane_node_ips": [
(undercloud) [stack@undercloud 101]$
The environment we passed. Note that the more compact syntax2 does the same thing:

(undercloud) [stack@undercloud 101]$ mistral task-get-result 6a7f2793-41a4-4ef9-8366-4d59f936044d
"foo": "bar",
"service_ips": {
"ceph_mon_ctlplane_node_ips": [
(undercloud) [stack@undercloud 101]$
What's nice is that we can specifically pick items out with the env() dictionary as shown in the show_ips task.
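The YAQL expression in show_ips is essentially chained dictionary lookups with defaults. In plain Python terms (using the sample environment from above as an ordinary dict):

```python
# The variables stored in the 'my_env' Mistral environment, as a plain dict.
env = {
    "foo": "bar",
    "service_ips": {"ceph_mon_ctlplane_node_ips": ["", ""]},
}

# Equivalent of:
#   <% env().get('service_ips', {}).get('ceph_mon_ctlplane_node_ips', []) %>
mon_ips = env.get("service_ips", {}).get("ceph_mon_ctlplane_node_ips", [])

# The defaults ({} and []) mean the lookup degrades gracefully to an empty
# list if either key is missing, instead of failing the task.
```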

(undercloud) [stack@undercloud 101]$ mistral task-get-result 5e6559d0-d875-4f30-8567-dfd1dbf7ac32
(undercloud) [stack@undercloud 101]$
As a refresher, the output of the task above came from the following task:

action: std.echo output=<% env().get('service_ips', {}).get('ceph_mon_ctlplane_node_ips', []) %>

by John (noreply@blogger.com) at June 06, 2017 06:42 PM


Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.


Last updated:
June 28, 2017 06:50 PM
All times are UTC.
