March 18, 2019


One Small Step Just Won’t Cut It

Tristan Good Company Step Challenge Winner

As you may have already noticed, many of our Solutionauts like to run. We run in the City to Surf. We run around conferences. We run all up in ur Cloud.

So we recently set up a platform to host company step challenges. Aptira Solutionauts from all over the world stepped out from behind their computers to take the title of Aptira’s Greatest Stepper! Here’s how it went down.

Farzaneh and John were first out of the gates, with John taking part in a craft beer tour in Manly – racking up lots of steps in-between breweries and downing a few well deserved pints along the way. This is an excellent use for your Aptira bottle opener thongs by the way!

Tom got lost in the Taiwanese mountains while Bharat was traversing the Indian subcontinent and Jarryd hit the beach. Kat came in first among the girls, almost hitting 30k steps, and Jess conveniently broke her toe the day before the challenge, coming in last place. Excuses much??

Our winner took first place with an unbelievable 122,578 steps. Seriously Tristan, are you even human?

The next step challenge will be taking place soon. No Tristans allowed. Stay tuned to find out who wins!

The post One Small Step Just Won’t Cut It appeared first on Aptira.

by Aptira at March 18, 2019 10:48 PM

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Welcome to the latest edition of the OpenStack Foundation Open Infrastructure newsletter, a digest of the latest developments and activities across open infrastructure projects, events and users. Sign up to receive the newsletter, or email us to contribute.

Spotlight on… The Project Teams Gathering (PTG) in Denver

In open collaboration, it is important for contributors to regularly meet in person. It allows open source projects to build shared understandings, discuss common priorities, iterate quickly on solutions for complex problems, and make fast progress on critical issues. It is a major step to establish a project identity beyond each separate organization participating.
The Project Teams Gathering (PTG) is a work event for contributors to various open source projects, special interest groups or working groups, organized by the OpenStack Foundation. It provides meeting facilities allowing those various groups to meet face-to-face, exchange ideas and get work done in a productive setting. The co-location of those various meetings, combined with the dynamic scheduling of the event, makes it easy to cross-pollinate between groups, or participate in multiple team meetings.

Historically, the PTG was organized as a separate event, run at a different time and location from our other events. For the first time in Denver in May 2019, the PTG will be run just after the Summit, in the same venue. This should make it accessible to a wider set of contributors.

As the OpenStack Foundation evolved to more broadly support openly developing open infrastructure, the PTG is now open to a larger set of open source projects. In Denver we’ll obviously have various OpenStack project teams taking the opportunity to meet, but also OSF pilot projects like Kata Containers, StarlingX and Airship. Beyond that, the event is open to other open infrastructure projects: at the last event we welcomed a Tungsten Fabric developers meeting, and in Denver we’ll have Rust-VMM developers leveraging the event to meet in person. Rust-VMM is a nascent open collaboration to develop common Rust virtualization crates, reusable between CrosVM and Firecracker.

You can learn more about the upcoming PTG, and see the full list of teams that will meet there by visiting the PTG website. If you are a contributor to one of those projects, we’d really like to see you there!

OpenStack Foundation news

  • Here are the latest updates on the Open Infrastructure Summit in Denver, April 29 – May 1:
    • The schedule is live and registration is open. Check out the lineup of speakers and get your tickets now before prices increase on April 11 at 11:59 p.m. PT.
    • After Denver, the Open Infrastructure Summit heads to Shanghai, the week of November 4. Sponsor sales are now open; learn more here.
  • Last week, the OpenStack Foundation Board of Directors reviewed confirmation guidelines for new Open Infrastructure Projects under the Foundation. After reviewing the process by which the guidelines were drafted and their current state, the Board unanimously approved the guidelines.

OpenStack Foundation Project News




StarlingX

  • The community reached its first milestone to containerize the control plane services of StarlingX for the upcoming release. For details, check out the Wiki.
  • There will be a hands-on workshop at the Open Infrastructure Summit. If you’re interested in learning how to deploy StarlingX and trying out some of the cool features of the platform, sign up for the workshop in Denver.


Questions / feedback / contribute

This newsletter is edited by the OpenStack Foundation staff to highlight open infrastructure communities. We want to hear from you!
If you have feedback, news or stories that you want to share, reach out to us; to receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by OpenStack Foundation at March 18, 2019 12:26 PM

CERN Tech Blog

Splitting the CERN OpenStack Cloud into Two Regions

Overview The CERN Cloud Infrastructure has been available to all CERN users since 2013. During the last six years it has grown from a few hundred cores to more than 300,000. The Cloud Infrastructure is deployed in two data centres (Geneva, Switzerland and Budapest, Hungary). Back in 2013 we decided to have only one region across both data centres for simplicity: we wanted to offer an extremely simple solution that our users could adopt easily.

by CERN at March 18, 2019 12:00 PM

March 15, 2019

OpenStack Superuser

Strengthening open infrastructure: Integrating OpenStack and Kubernetes

OpenStack and Kubernetes are currently the most popular open infrastructure solutions, so it’s worthwhile to provide users access to a platform that provides both services, using a single personal account. Currently this is hardly possible, since the two systems provide different authentication mechanisms. OpenStack uses its own identity system, Keystone, while Kubernetes delegates authentication to external providers through a mechanism of plug-ins.

Previous attempts to integrate assumed password-based authentication for OpenStack and enabled Kubernetes users to authenticate with their OpenStack passwords through Keystone.

However, there are other means of authenticating to Keystone, for example federated authentication, a more secure and scalable solution that redirects to an external trusted identity provider. In particular, the federated GARR cloud platform uses federated authentication through EduGain, enabling SSO for all users of the worldwide research community.

To provide a general solution, we developed an innovative technique, based on a novel feature of OpenStack called application credentials that became fully available with the Rocky release.

The proposed solution requires a modified version of one of the official SDK libraries for OpenStack. This change has been approved by the project maintainers and will be released as part of the next official distribution of the library.

The implementation of Keystone authentication for Kubernetes relies on a WebHook, one of the authentication methods provided by Kubernetes. When WebHook authentication is enabled, the Kubernetes API redirects requests to a RESTful service; in our case, we use an OpenStack Keystone service as the authentication provider.

To simplify usage, we’ve extended the OpenStack dashboard by adding a button for downloading a config file, ready to use with kubectl, that includes the application credentials.

GARR deployed a multi-tenant Kubernetes cluster on bare metal to reduce management overhead, resource fragmentation and delays in cluster creation. We used the same declarative modeling tools for deploying the cluster with MaaS and Juju, side by side with our OpenStack infrastructure. This facilitates maintenance and scaling of the infrastructure with a single set of tools. However, multi-tenancy limits restrict the rights of users to access common extensions. Therefore, we provide a set of specific roles and bindings that give normal users, who have no privileges on the kube-system namespace, the rights to perform installations, for example through Helm.

What follows is the architecture of the solution, its components and their implementation in a real-world production environment, as well as an installation and configuration guide for users. We conclude with suggestions for future extensions to deal with role-based access control (RBAC).


We’ll start with how to integrate authentication between OpenStack, an infrastructure-as-a-service (IaaS) provider, and Kubernetes, a container deployment service, so that OpenStack users can seamlessly access Kubernetes services.

In Kubernetes, processing of a request goes through the following stages:

  • Authentication
  • Authorization
  • Admission control

Kubernetes supports several authentication strategies that may invoke external authenticator providers (e.g. LDAP or OpenID Connect) available through plug-ins, as shown in step one in the following diagram:

Kubernetes authentication architecture.

Each plug-in, which is invoked as an external command through the client-go library, implements its protocol-specific logic, then returns opaque credentials. Credential plug-ins typically require a server-side component with support for WebHook token authentication to interpret the credential format produced by the client plug-in.
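As an illustrative sketch of this contract (the actual kubectl-keystone-auth plug-in introduced later is written in Go), the document a credential plug-in hands back to client-go could be produced like this in Python:

```python
import json

# Illustrative sketch only: a credential plug-in prints an ExecCredential
# JSON document on stdout; client-go then uses status.token as the bearer
# token for subsequent API requests.
def make_exec_credential(token, expiry):
    doc = {
        "apiVersion": "client.authentication.k8s.io/v1beta1",
        "kind": "ExecCredential",
        "status": {
            "token": token,                 # the Keystone token to present
            "expirationTimestamp": expiry,  # lets kubectl cache the token
        },
    }
    return json.dumps(doc)
```

The server-side component described below is what turns this opaque token back into a verified user identity.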

The solution we developed to provide Keystone authentication for Kubernetes consists of the following modules:

  • A credential plug-in for Keystone authentication
  • A service implementing a WebHook for authentication
  • A service implementing a WebHook for authorization (currently the same as module two)

Workflow of Kubernetes authentication through Keystone.

Here are the steps for the authentication process:

  1. A user issues a kubectl command or an API call, which is handled by client-go.
  2. The credential plug-in obtains the user’s Keystone credentials, either from the kubeconfig file or by prompting the user, and requests a token from Keystone using those credentials.
  3. The token from Keystone returns to the client through the credential plug-in.
  4. The client uses this token as a bearer token against the Kubernetes API server.
  5. The Kubernetes API server uses the WebHook token authenticator to validate the token against the Keystone service.
  6. The Keystone service verifies the token and returns the user’s username and groups.

The solution we present is of general interest, since it allows cloud providers to offer both a container deployment platform based on Kubernetes and IaaS services provided by OpenStack, both accessible through a single set of credentials.

An earlier solution for integrating Kubernetes authentication with OpenStack relied on password authentication. But OpenStack can be configured to use federated authentication, like the one used in the GARR Federated Cloud Platform, provided by Idem or EduGain. Consequently, password authentication isn’t available for normal users in this scenario.

A development team from SWITCH and GARR worked jointly to find a more general solution. The Queens release of Keystone introduced the mechanism of application credentials. Through this mechanism, an application can request a token that can be used thereafter to validate user requests before performing operations on the user's behalf. Furthermore, the Rocky release of the Horizon dashboard added a panel allowing users to create application credentials.

The key idea of this solution is to use an application credential obtained from Keystone and pass it to Kubernetes for validating user requests. This requires exploiting the plug-in architecture provided by Kubernetes to insert suitable steps in the authentication process. In particular, Kubernetes needs to convert credentials into a token and later use that token whenever needed to validate each individual request before performing it.

The ability to obtain credentials directly from the dashboard allows users to be completely autonomous in setting up integrated Kubernetes/Keystone authentication. For example, the given credentials can be inserted in the user configuration file for kubectl, the standard command-line interface for operating on Kubernetes. Afterwards, the user can access Kubernetes without any further complications.

A limitation of the current solution is that it requires installing a plug-in on the user’s machine, which has these drawbacks:

  • Binary versions for each machine architecture and for each Kubernetes release must be maintained
  • Mobile devices are not supported

Keystone authentication with application credentials for Kubernetes

Since the Queens release of OpenStack, Keystone has supported application credentials. These can be used by applications to authenticate through Keystone with the privileges assigned by the user who created them. In particular, such credentials can be used by the Kubernetes API to authenticate and authorize operations.
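To illustrate the mechanism, here is a sketch in Python of the request body used to obtain a token from Keystone with an application credential (the field names follow the Identity v3 API; the id and secret values are placeholders):

```python
import json

# Body for POST /v3/auth/tokens using the "application_credential" auth
# method introduced in the Queens release of Keystone. The issued token
# is returned in the X-Subject-Token response header.
def app_credential_auth_body(cred_id, secret):
    body = {
        "auth": {
            "identity": {
                "methods": ["application_credential"],
                "application_credential": {
                    "id": cred_id,     # id of the application credential
                    "secret": secret,  # secret shown once at creation time
                },
            }
        }
    }
    return json.dumps(body)
```

Authenticating by credential name instead of id is also possible, but then the owning user must be identified as well.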

In the solution presented here, authentication is performed by a plugin (kubectl-keystone-auth), while authorization is delegated by the Kubernetes API through a WebHook to a RESTful web service (k8s-keystone-auth).

In the next section, we describe how to use Keystone application credentials to authenticate to Kubernetes and access Kubernetes services.

Create application credentials with Horizon

The following screenshots illustrate the steps needed to create an application credential through the OpenStack Horizon dashboard.

Select Application Credentials in the Identity Panel:

Fill out the form to create an application credential:

Download both an openrc file to set OpenStack environment variables for using the generated application credential and a configuration file for kubectl:

The button “Download kubeconfig file” is an extension that we developed for the Horizon dashboard, which creates a preconfigured ~/.kube/config file ready for working with Kubernetes. It contains the application credentials and other parameters for connecting to the Kubernetes API server.

The code for this extension is available on GitLab and mirrored on GitHub.

Enable Kubernetes authentication via application credentials

Once the application credential is created, you can download the kubectl config file with the “Download kubeconfig file” button.

The credential plugin kubectl-keystone-auth is required in order to enable authentication with application credentials. It can either be downloaded or compiled from source.

Download the credential plugin

Download kubectl-keystone-auth for your architecture from:

Install it in a folder accessible by kubectl, for example:

$ mkdir -p ~/.kube/bin

$ cp -p kubectl-keystone-auth ~/.kube/bin

Build the credential plugin

A working installation of Golang is needed to build the plugin. Follow the instructions at:

Clone the repository for cloud-provider-openstack:

$ git clone

$ cd $GOPATH/src/kubernetes/cloud-provider-openstack

Build the plugin with:

$ sudo make client-keystone-auth

Install it in a folder accessible by kubectl, for example:

$ mkdir -p ~/.kube/bin

$ cp -p client-keystone-auth ~/.kube/bin/kubectl-keystone-auth

Setting up Keystone authentication

This section describes the steps that a cloud administrator needs to perform to set up Keystone authentication in a Kubernetes cluster.

The Kubernetes API server must be configured with WebHook token authentication to invoke an authenticator service for validating tokens with Keystone. The service to be invoked cannot be Keystone itself, since the payload produced by the WebHook has a different format than the requests expected by the Keystone API for application credentials.

Here’s an example of a WebHook payload:

{
  "apiVersion": "authorization.k8s.io/v1beta1",
  "kind": "SubjectAccessReview",
  "spec": {
    "resourceAttributes": {
      "namespace": "kittensandponies",
      "verb": "get",
      "group": "",
      "resource": "pods"
    },
    "user": "jane",
    "group": ["group1"]
  }
}

By contrast, the token validation request to Keystone has an empty payload, and the parameters are passed in headers: the token of an authorized user in the X-Auth-Token request header and the token to validate in the X-Subject-Token request header. The response has the following form:

{
  "token": {
    "audit_ids": [
      "..."
    ],
    "expires_at": "2015-11-05T22:00:11.000000Z",
    "issued_at": "2015-11-05T21:00:33.819948Z",
    "methods": [
      "..."
    ],
    "user": {
      "domain": {
        "id": "default",
        "name": "Default"
      },
      "id": "10a2e6e717a245d9acad3e5f97aeca3d",
      "name": "admin",
      "password_expires_at": null
    }
  }
}

The program that implements the authenticator service is called k8s-keystone-auth. Steps to obtain it are described below.

Configure the Kubernetes API server

The Kubernetes API receives a request including a Keystone token; in Kubernetes terminology, this is a bearer token. To validate the Keystone token, the Kubernetes API server uses a WebHook. The service invoked through the WebHook will in turn contact the Keystone service that generated the token in order to validate it.

Here we describe how to configure the Kubernetes API server to invoke the k8s-keystone-auth authenticator through a WebHook.

Create the following file in /path/to/webhook.kubeconfig:

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: KEYSTONE_URL
  name: webhook
contexts:
- context:
    cluster: webhook
    user: webhook
  name: webhook
current-context: webhook
kind: Config
preferences: {}
users:
- name: webhook

where KEYSTONE_URL is the endpoint of the Keystone service.

Execute the following command in the master Kubernetes API node to configure it:

$ sudo snap set kube-apiserver authentication-token-webhook-config-file=/path/to/webhook.kubeconfig

If you do not use snap, edit the file /etc/kubernetes/manifests/kube-apiserver.yaml and add this line as a parameter to the kube-apiserver command:

- --authentication-token-webhook-config-file=webhook.kubeconfig

Install the Keystone authenticator service

The Keystone authenticator service is the component in charge of validating requests containing bearer tokens.

The Keystone authenticator service is implemented by the program k8s-keystone-auth. You can either download a pre-compiled version or build it from source.

Download the Keystone authenticator

You can find pre-compiled versions of k8s-keystone-auth for different architectures in the following repository:

Deploy via Juju

In order to deploy the Keystone authorization service on a cluster managed through Juju, we provide a charm that automates its deployment. The service will be automatically replicated on all the Kubernetes Master units, ensuring high availability. The charm is available on the public repository:

The k8s-keystone-auth service can be deployed by doing:

$ juju deploy cs:~csd-garr/kubernetes-keystone \
    --config keystone-url='KEYSTONE_URL' \
    --config k8s-keystone-auth-url='DOWNLOAD_URL' \
    --config authn-server-url='AUTHN_URL' \
    --config authz-server-url='AUTHZ_URL'

$ juju add-relation kubernetes-master kubernetes-keystone

The configuration parameters are:

  • KEYSTONE_URL: URL of the Keystone endpoint.
  • DOWNLOAD_URL: URL for downloading the Keystone authenticator server program.
  • AUTHN_URL: URL of the WebHook authentication service.
  • AUTHZ_URL: URL of the WebHook authorization service.

Configuration parameters can also be passed through a YAML file as explained here:

Alternatively, the WebHook authenticator service can be deployed as a Kubernetes pod. This requires building a Docker image for k8s-keystone-auth and running it in a container.

The steps for building the Docker image are described in the section “Build the Keystone authenticator,” including the following:

$ make image-k8s-keystone-auth

The following deployment file is used for deploying the WebHook authenticator service on Kubernetes itself.


kind: Deployment
apiVersion: apps/v1
metadata:
  name: k8s-keystone-auth
  namespace: kube-system
  labels:
    app: k8s-keystone-auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-keystone-auth
  template:
    metadata:
      labels:
        app: k8s-keystone-auth
    spec:
      hostNetwork: true
      nodeSelector:
        dedicated: k8s-master
      containers:
      - name: k8s-keystone-auth
        image: rdil/k8s-keystone-auth:latest
        imagePullPolicy: Always
        args:
        - ./bin/k8s-keystone-auth
        - --tls-cert-file
        - /etc/kubernetes/pki/apiserver.crt
        - --tls-private-key-file
        - /etc/kubernetes/pki/apiserver.key
        - --keystone-url
        - KEYSTONE_URL
        - --policy-configmap-name
        - k8s-auth-policy
        - --sync-config-file
        - /etc/kubernetes/pki/identity/keystone/syncconfig.yaml
        volumeMounts:
        - mountPath: /etc/kubernetes/pki
          name: k8s-certs
          readOnly: true
        - mountPath: /etc/ssl/certs
          name: ca-certs
          readOnly: true
        ports:
        - containerPort: 8443
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
      - name: ca-certs
        hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate
---
kind: Service
apiVersion: v1
metadata:
  name: k8s-keystone-auth-service
  namespace: kube-system
spec:
  selector:
    app: k8s-keystone-auth
  ports:
  - protocol: TCP
    port: 8443
    targetPort: 8443


where KEYSTONE_URL is the Keystone endpoint (in our case, that of the GARR Cloud Platform).

In order to deploy the component on the master node (the master node holds the sync config file, and the WebHook forwards the auth request to localhost), we labeled the master node (kubectl label nodes name_of_your_node dedicated=k8s-master).

The pod can then be scheduled on the master node (kubectl taint nodes --all …), and we added a section to the deployment specification to choose the master node as the node for the deployment.

We also added hostNetwork: true to put the pod on the same network as the master node, making communication with it possible.

OpenStack client using application credentials

Application credentials can also be used with the OpenStack client for authenticating to Keystone.

In order to use application credentials, the following variables must be set:

export OS_AUTH_TYPE=v3applicationcredential
export OS_AUTH_URL=KEYSTONE_URL
export OS_APPLICATION_CREDENTIAL_NAME=CREDENTIAL_NAME
export OS_APPLICATION_CREDENTIAL_SECRET=CREDENTIAL_SECRET

where KEYSTONE_URL is the Keystone endpoint (for the GARR Cloud Platform), while CREDENTIAL_NAME and CREDENTIAL_SECRET are the values from the application credential.

The openstack command, so configured, will then authenticate successfully:

$ openstack token issue


+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2018-09-04T09:58:33+0000         |
| id         | 99e74e7a8ec14b1bb945672662580ea7 |
| project_id | daadb4bcc9704054b108de8ed263dfc2 |
| user_id    | 4ae5e9b91b1446408523cb01e5da46d5 |
+------------+----------------------------------+

We’ve presented a solution for integrating Kubernetes with OpenStack, exploiting Keystone as a common authentication service and using application credentials provided by Keystone.

In a follow-up post, we’ll describe how we set up a multi-tenant Kubernetes cluster on bare metal using the automation tools MaaS and Juju. The cluster is shared among users of the GARR community, providing better performance and reduced costs. However, we needed to provide some common functionality in order to enable users to install services that are normally installed in the system namespace kube-system. In particular, we’ll show how to deal with the creation of the Kubernetes dashboard and how to exploit Helm for installing packaged containerized applications.

About the authors

The work was carried out by GARR‘s Giuseppe Attardi, Alberto Colla, Alex Barchiesi, Roberto Di Lallo, Fulvio Galeazzi and Claudio Pisa, and by Saverio Proto of SWITCH, as part of the GÉANT project (GN4-2).

The post Strengthening open infrastructure: Integrating OpenStack and Kubernetes appeared first on Superuser.

by Giuseppe Attardi, Alberto Colla, Alex Barchiesi, Roberto Di Lallo, Fulvio Galeazzi, Claudio Pisa, Saverio Proto at March 15, 2019 02:15 PM

Chris Dent

Placement Update 19-10

Placement update 19-10 is here. We're fast approaching placement's first official release.

Most Important

There are several tasks left before we can cut the release, mostly related to documentation and other writing related things. I've attempted to enumerate them in a "Prepping the RC" section below. These are things that need to be done before next Thursday, preferably sooner.

It's also important to be thinking about how placement would like to engage (as a group) with the PTG (the Forum is already decided: there will be an extraction related Forum session).

What's Changed

  • Oh, hey, I'm, like, the placement PTL. Mel and I decided early in the week that whatever the official timetable, I'll take the baton from here. Thanks to everyone who helped to get placement to where we are now.

  • The stack of code that got rid of the List classes in favor of module level methods, which also happened to move each "object" type to its own module, has merged. I'm glad we got this in before release as it ought to make debugging and digging around a bit easier.

  • Lots of little documentation tuneups (from story 2005190) have merged, including pointing to StoryBoard for bugs. These changes scratch the surface of what remains (listed below).

  • I wrote up a blog post on profiling wsgi apps which I'd been doing to confirm that the many refactorings that have happened recently weren't having a negative impact (they are not).

  • We decided to wait for Train for the negative-member-of functionality and the allocation ratio change in osc-placement.

  • Kolla has merged several changes for extracted placement. Thanks!


Skipping this section until after the release candidate(s) are done.


We've got a StoryBoard project group now. I've started using it, tagging bugs with a bug tag and also making use of a cleanup tag to indicate things that need to be cleaned up. There are worklists for both of these:

Please be prepared for these structures to evolve as we gain some understanding of how StoryBoard works.

There are still bugs in launchpad and we need to continue to watch there:

Many of these are about nova's use of placement. At some point after RC we should do a bug review, and port placement-only things to StoryBoard.


osc-placement is currently behind by 13 microversions.

Pending changes:

Prepping the RC

Things that need to happen so we can cut a placement release candidate:

  • Anything currently open that we want in. There are only 6 pending patches that might be options (everything else is either waiting for Train or already +W), so a quick look at them is worth the effort.

  • We've started a cycle-highlights etherpad, as announced by this email. We've probably got enough, but feel free to add to it if you think of something.

  • There's a story for preparing placement docs for stein. The story includes several tasks, many of which are already merged. Have a look and assign yourself a task if you can commit to having it done by early next week. There are some biggies:

    • Creating the canonical document on how to upgrade from placement-in-nova to placement-in-placement. As stated very well by Tetsuro, this is effectively translating the grenade upgrade script into English.

    • Ensuring the install docs are sane and complete. I have asked packaging-related people for their input, as they're the ones who know how their packages are (or will be) set up, but there's also an "install from-pypi" hole that needs to be filled.

  • The releasenotes need to be evaluated for correctness and effective annotation of upgrade concerns. They will also need a prelude, probably pointing to the "upgrading from nova" doc mentioned above. For a sample, see nova's rocky prelude.

Main Themes

We'll come back to themes once the RC is cut.

Other Placement

Other Service Users

We'll also hold off here until the RC is cut. In the future if you stick "placement" somewhere in your commit message I'll probably eventually find your in-progress placement-related changes.


Once the release is out, it will be time to start thinking about what we want Train to look like. There are pending Stein feature specs that we will want to do (and which will need to be put in our specs directory, once it exists), but other than the various ideas about ways to do multi-nova/cloud partitioning of resource providers and multi-service partitioning of allocations (both of which need much more well-defined use cases before we start thinking about the solutions) I've not heard a lot of clamouring from services and operators for features in Placement. If you have heard, or are clamouring, please make yourself known. I'd personally like us to focus on enabling existing services that use or want to use placement (nova, neutron, blazar, cyborg) and its existing features rather than new features. No need to have any immediate thoughts or decisions on this, but some background thinking is warranted.

Also, OMG, we need a logo. How about an Australian Magpie? They make a cool noise.

by Chris Dent at March 15, 2019 02:03 PM

March 14, 2019

Adam Young

Building the Kolla Keystone Container

Kolla has become the primary source of containers for running OpenStack services. Since it has been a while since I tried deliberately running just the Keystone container, I decided to build the Kolla version from scratch and run it.

UPDATE: Ozz wrote it already, and did it better:

I already had a clone of the Kolla repo, but if you need one, you can get it by cloning:

git clone git://

All of the dependencies you need to run the build process are handled by tox. Assuming you can run tox elsewhere, you can use that here, too:

tox -e py35

That will run through all the unit tests. They do not take that long.

To build all of the containers, you can activate the virtual environment and then use the build tool. That takes quite a while, since there are a lot of containers required to run OpenStack.

$ . .tox/py35/bin/activate
(py35) [ayoung@ayoungP40 kolla]$ tools/ 

If you want to build just the keystone containers….

 python tools/ keystone

Building this with no base containers cached took me 5 minutes. Delta builds should be much faster.

Once the build is complete, you will have a bunch of container images defined on your system:

kolla/centos-binary-keystone 7.0.2 69049739bad6 33 minutes ago 800 MB
kolla/centos-binary-keystone-fernet 7.0.2 89977265fcbb 33 minutes ago 800 MB
kolla/centos-binary-keystone-ssh 7.0.2 4b377e854980 33 minutes ago 819 MB
kolla/centos-binary-barbican-keystone-listener 7.0.2 6265d0acff16 33 minutes ago 732 MB
kolla/centos-binary-keystone-base 7.0.2 b6d78b9e0769 33 minutes ago 774 MB
kolla/centos-binary-barbican-base 7.0.2 ccd7b4ff311f 34 minutes ago 706 MB
kolla/centos-binary-openstack-base 7.0.2 38dbb3c57448 34 minutes ago 671 MB
kolla/centos-binary-base 7.0.2 177c786e9b01 36 minutes ago 419 MB
centos 7 1e1148e4cc2c 3 months ago 202 MB

Note that the build instructions live in the git repo under docs.

by Adam Young at March 14, 2019 03:43 PM

SUSE Conversations

Is 2019 the Year IoT and Edge Computing Comes of Age?

The Internet of Things (IoT) is one of the hottest technology topics of the moment – and for very good reasons.  We all know we’re living in an increasingly interconnected world and are constantly looking for new ways to take full advantage of it. On a more personal level, our mobile smart devices have become […]

The post Is 2019 the Year IoT and Edge Computing Comes of Age? appeared first on SUSE Communities.

by Terri Schlosser at March 14, 2019 01:00 PM

March 13, 2019


Introducing RabbitMQ to Machine Learning Algorithms

Aptira Machine Learning RabbitMQ

It has been an interesting week. Without any fanfare, I quietly introduced our Machine Learning code to the production RabbitMQ cluster for the first time. Hoping to improve our ability to detect problems with our internal OpenStack deployment, and without checking the state first, I nearly fell off my chair when it found the first real anomaly after only 50 seconds!

A little background is in order. Much has been written about the capabilities of Machine Learning in many exciting fields, from image categorization and natural language processing to game theory. A slightly less flashy and more mundane use is detecting anomalies in streams of data. This happens to be very useful for complex systems like Software Defined Networks and OpenStack, where telemetry is abundant but often in too great a quantity for easy use, or too complex to write simple threshold-based monitoring rules for.

We’ve spent the last few months working with Machine Learning technology, using open source switch telemetry data sets to prove out the anomaly detection concept in our areas of expertise, then leveraging a second type of ML to automatically categorize and fix issues in a controlled lab environment. 

Moving away from switch telemetry, our internal OpenStack presented itself as a great first candidate for live integration, with RabbitMQ chosen as the component to analyse due to its central role in the platform. After extracting several days of telemetry for each queue and processing the data into a usable format, a model was trained to recognise ‘normal’ behaviour. This involved over a hundred million data points and took several hours to get to a usable level of accuracy.

Trained model in hand, we were now ready for a live feed from Prometheus! The system now analyses more than 20 data points for nearly 300 queues on Rabbit every 5 seconds, detecting and reporting anomalies both as they arrive and when they are fixed.
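The post doesn't share Aptira's actual model, but to make the idea of per-queue anomaly detection concrete, here is a deliberately naive sketch using a rolling statistical window. This is exactly the kind of hand-written baseline the ML approach replaces; all names are illustrative, not Aptira's code.

```python
from collections import deque
import math

class RollingAnomalyDetector:
    """Flag samples that sit far outside the recent window's distribution."""

    def __init__(self, window=120, threshold=3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Return True if value is more than `threshold` standard
        deviations from the window mean, then record the sample."""
        anomalous = False
        if len(self.samples) >= 10:  # wait for a little history first
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        self.samples.append(value)
        return anomalous
```

A trained model does far better than this on seasonal or bursty queues, which is the point of the exercise, but the input/output shape is the same: one verdict per sample per queue.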

Exciting times! 

How can we make OpenStack work for you?
Find out what else we can do with OpenStack.

Find Out Here

The post Introducing RabbitMQ to Machine Learning Algorithms appeared first on Aptira.

by Simon Sellar at March 13, 2019 02:17 AM

March 12, 2019


Return of the Smesh (Spinnaker Shpinnaker and Istio Shmistio to make a Smesh! Part 2)

Bruce continues his discussion of service meshes with an overview of how they work and an introduction to Istio.

by Bruce Mathews at March 12, 2019 10:30 PM


Upcoming Webinar Thurs 3/14: Web Application Security – Why You Should Review Yours

Please join Percona’s Information Security Architect, David Busby, as he presents his talk Web Application Security – Why You Should Review Yours on March 14th, 2019 at 6:00 AM PDT (UTC-7) / 9:00 AM EDT (UTC-4).

Register Now

In this talk, we take a look at the whole stack and I don’t just mean LAMP.

We’ll cover what an attack surface is and some areas you may look to in order to ensure that you can reduce it.

For instance, what’s an attack surface?

Acronym Hell, what do they mean?

Vulnerability Naming, is this media naming stupidity or driving the message home?

Detection, Prevention and avoiding the boy who cried wolf are some further examples.

Additionally, we’ll cover emerging technologies to keep an eye on or even implement yourself to help improve your security posture.

There will also be a live compromise demo (or backup video if something fails) that covers compromising a PCI compliant network structure to reach the database system. Through this compromise you can ultimately exploit multiple failures to gain bash shell access over the MySQL protocol.

by David Busby at March 12, 2019 08:59 PM

Chris Dent

Profiling WSGI Apps

If you're making web apps or HTTP APIs with Python, WSGI remains a solid choice for the underlying framework for handling HTTP requests and responses. ASGI is getting a lot of attention these days (see, for example, starlette) but in many cases the concurrency model of WSGI, which can be roughly translated as "let the web server deal with that", is easier to deal with or more appropriate for the service being provided.
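For anyone who hasn't looked at the interface lately, a complete WSGI application is just a callable; this minimal example (hypothetical, stdlib only) runs under any WSGI server:

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """A complete WSGI application: a callable that takes the request
    environ and a start_response callback and returns body chunks."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]

# To serve it (blocks forever):
#   make_server("", 8000, app).serve_forever()
```

Everything that follows, including profiling middleware, composes around callables of this shape.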

Or it may be that you're maintaining an existing WSGI app, and switching to ASGI is not something you can or want to do soon.

In either case, you've got your WSGI app, and you'd like to make it faster. What do you do? Profile.

WSGI apps can be profiled by wrapping the application in middleware which starts and stops the Python profiler for each request and does something with the profile information. The one I prefer is part of Werkzeug: the ProfilerMiddleware.

The most convenient way to use it is to configure profile_dir to point to an (existing!) directory. For each individual request, a file will be created in that directory, named after the request method and URL. It will contain the profile stats for that request.
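Under the hood the pattern is straightforward. This stdlib-only sketch is not Werkzeug's implementation, but shows the shape of such middleware: profile the wrapped app, then dump per-request stats to a directory.

```python
import cProfile
import os
import time

class SimpleProfilerMiddleware:
    """Wrap a WSGI app and write one profile file per request."""

    def __init__(self, app, profile_dir):
        self.app = app
        self.profile_dir = profile_dir

    def __call__(self, environ, start_response):
        profiler = cProfile.Profile()
        # Run the wrapped app under the profiler, consuming the whole
        # response iterable so its work is captured too.
        body = profiler.runcall(
            lambda: b"".join(self.app(environ, start_response)))
        # Name the dump after the request method and URL path.
        path = environ.get("PATH_INFO", "/").strip("/").replace("/", ".")
        name = "%s.%s.%d.prof" % (
            environ.get("REQUEST_METHOD", "GET"),
            path or "root",
            int(time.time() * 1000))
        profiler.dump_stats(os.path.join(self.profile_dir, name))
        return [body]
```

Note the trade-off: buffering the response body defeats streaming, which is acceptable for profiling runs but not for production.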

There are lots of ways to interpret those stats. One is to use the pstats module but that can be cumbersome. Lately I've been using snakeviz. It provides a web-based interface to graphically navigate the profile data and see at a glance where python code is consuming time.
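If you want a quick look without snakeviz, the pstats route looks like this. The toy workload and filename are hypothetical; the file format is the same one the profiler middleware writes.

```python
import cProfile
import pstats

def busy():
    # A toy workload so there is something to measure.
    return sum(i * i for i in range(100000))

# Collect and dump stats to a file.
profiler = cProfile.Profile()
profiler.runcall(busy)
profiler.dump_stats("example.prof")

# Load and inspect, ordered by total time within each call.
stats = pstats.Stats("example.prof")
stats.sort_stats("tottime")
stats.print_stats(5)  # show the top five entries
```

This is the cumbersome-but-workable path; snakeviz reads exactly the same files and just navigates them graphically.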

snakeviz sunburst

Putting It Together

To provide a bit more insight into how this might be useful, I'm going to explain how I used it to find a section of the OpenStack placement code that might respond well to some optimization. Recently we gained a significant performance improvement in placement by removing oslo.versionedobjects (useful stuff, but not needed in placement) but it feels like there is plenty of room for more. Here's what I did to explore:

Updated my local repo of placement:

git clone git://
cd placement

Established a suitable virtual environment and added some additional libraries (werkzeug has the profiler, uwsgi will be the web server):

tox -epy37 --notest
.tox/py37/bin/pip install uwsgi werkzeug

Mangled placement/ to make it easy to control whether the profiling happens. This can be important because for some requests (such as loading all the masses of data done in a step below) you don't want to profile. Patch also available in gerrit.

diff --git a/placement/ b/placement/
index e442ec40..c9b44b8f 100644
--- a/placement/
+++ b/placement/
@@ -11,6 +11,8 @@
 #    under the License.
 """Deployment handling for Placmenent API."""

+import os
+
 from microversion_parse import middleware as mp_middleware
 import oslo_middleware
 from oslo_middleware import cors
@@ -28,6 +30,11 @@ from placement import resource_class_cache as rc_cache
 from placement import util

+# Conditionally import the profiler middleware.
+PROFILER_OUTPUT = os.environ.get('OS_WSGI_PROFILER')
+if PROFILER_OUTPUT:
+    from werkzeug.contrib import profiler
+
 # TODO(cdent): NAME points to the config project being used, so for
 # now this is "nova" but we probably want "placement" eventually.
 NAME = "nova"
@@ -61,6 +68,13 @@ def deploy(conf):
     request_log = requestlog.RequestLog

     application = handler.PlacementHandler(config=conf)
+
+    # If PROFILER_OUTPUT is set, generate per request profile reports
+    # to the directory named therein.
+    if PROFILER_OUTPUT:
+        application = profiler.ProfilerMiddleware(
+            application, profile_dir=PROFILER_OUTPUT)
+
     # configure microversion middleware in the old school way
     application = microversion_middleware(
         application, microversion.SERVICE_TYPE, microversion.VERSIONS,

What's happening here is that if the OS_WSGI_PROFILER environment variable is set, then the PlacementHandler application is wrapped by the ProfilerMiddleware. It will dump profile data to the directory named in OS_WSGI_PROFILER. This is a messy, but quick and convenient, way to manage things.

Created a short script to start the application with the right configuration settings (via the environment):


export OS_PLACEMENT_DATABASE__CONNECTION=postgresql+psycopg2://cdent@
export OS_API__AUTH_STRATEGY=noauth2

.tox/py37/bin/uwsgi -M --venv .tox/py37 --http :8000 --wsgi-file .tox/py37/bin/placement-api --processes 2 --threads 10

If you're following along at home, for DATABASE_CONNECTION you will need to establish the database (and access controls) yourself.

Loaded a bunch of data into the service. For this I used placeload, a tool I made for this kind of thing. Add 1000 resource providers with inventory, aggregates and traits with:

placeload http://ds1:8000 1000

(ds1 is the hostname of the VM where I have this stuff running, your name will be different.)

Made sure it's all there by getting a list of resource providers:

curl -H 'openstack-api-version: placement latest' \
     -H 'x-auth-token: admin' \

Restarted the WSGI app, ready to profile. Ctrl-c to kill the uwsgi server, then:

export OS_WSGI_PROFILER=/tmp/placeprof

Made a request, this one to get a list of all those resource providers again:

curl -H 'openstack-api-version: placement latest' \
     -H 'x-auth-token: admin' \

Ran snakeviz on the output:

snakeviz /tmp/placeprof/

This opens the default web browser in the local GUI, so if you were running your server remotely, you'll need to copy the *.prof file to somewhere local, and install snakeviz there. Or go back in time and use X11.

If the output style is set to 'Sunburst' (the other option is 'Icicle') you see a set of concentric circles like those in the image above. The length of an arc indicates how much time a particular call used up (total time across all calls). Mousing over will reveal more information. Clicking will zoom on that call, making it the center of the sunburst.

The initial output shows that the _serialize_links method in the resource provider handler (placement/handlers/ is using a lot of time. Looking at the table below the sunburst (which starts out the same as the output produced by the pstats module, ordered by total time in a call) we can see that though it uses only a relatively small amount of time per call, it is called many times (once per resource provider). There may be an opportunity for a micro-optimization here. The same three conditionals with the same results are performed every time we go into the call. We could instead pull that out, do it once, and save the results. It might not be worth the effort, but the profiling gives us something to look at. If we clear out that hump, we can then find the next one.
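The micro-optimization described above is simply hoisting loop-invariant conditionals. A hypothetical sketch of the pattern (names are illustrative, not placement's actual code):

```python
def serialize_links_slow(providers, want_aggregates, want_traits):
    """Re-evaluates the same conditionals once per provider."""
    out = []
    for p in providers:
        links = [{"rel": "self", "href": "/resource_providers/%s" % p}]
        if want_aggregates:  # checked again for every provider
            links.append({"rel": "aggregates", "href": "..."})
        if want_traits:
            links.append({"rel": "traits", "href": "..."})
        out.append(links)
    return out

def serialize_links_fast(providers, want_aggregates, want_traits):
    """Evaluates the request-invariant conditionals once, reuses the tail."""
    tail = []
    if want_aggregates:
        tail.append({"rel": "aggregates", "href": "..."})
    if want_traits:
        tail.append({"rel": "traits", "href": "..."})
    return [
        [{"rel": "self", "href": "/resource_providers/%s" % p}] + tail
        for p in providers
    ]
```

One caveat with the fast version: the tail dicts are shared between rows, which is fine for read-only serialization but not if callers mutate the result.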

There are plenty of other requests that could and should be profiled, notably GET /allocation_candidates and PUT /allocations/{uuid}. These are the two requests that matter most to a running cloud (once it is up and running stably).

Also, it is worth noting that there is a stack of changes already in progress which changes a lot of the code being measured here. Profiling those changes suggests it might be time to explore using generators instead of lists in many places. I didn't, however, want to make this post rely on unmerged code.

Profiling can be a useful tool. Even if it doesn't lead to dramatic changes in the code and spectacular performance improvements, doing the research is very revealing about what your code is getting up to. Like all tools, it is one of several. No silver bullets.


As is so often the case, profiling can also lead you down a hole. Moving the conditionals in the _serialize_links described above doesn't gain much of anything. So if there is an optimization to be had here, it will involve some more robust changes to how resource provider output is assembled. But now at least we know.

by Chris Dent at March 12, 2019 06:00 PM

OpenStack Superuser

Takeaways from the latest OpenStack Ops Meetup

BERLIN — Operators love stability and continuity, so it was fitting that the most recent OpenStack Operators meetup was held in a building that served as the main post office for over 90 years. Deutsche Telekom provided both the space and the sustenance for the recent OpenStack Operators Meetup in Berlin.

More than 30 people from Japan, Europe and the United States traveled to Germany’s capital for the two-day event where OpenStack operators and users shared experience in ops and innovation.

Chris Morgan, cloud services team lead at Bloomberg, acted as the main moderator and organized the agenda for the event. Day one started with containers (of course!) and the current landscape of deployment tools. The sessions were spread between different hosts and everyone was able to actively contribute.

Dmitry Tantsur, principal software engineer at Red Hat, continued in the next session with bare metal provisioning. This session was also a good example of seeing what others do and which tools are most commonly used. The coffee breaks were great for getting to know people or talking about the next local meetup, as Christoph Streit (ScaleUp Tech) and Nils Magnus (T-Systems) did.

Elvis Noudjeu, senior OpenStack operator at IQ-Optimize Software AG, moderated a perennial hot topic for ops: logging and monitoring. Many tools and repos are collected on the session’s Etherpad.

Masahito Muroi, a software engineer at NTT Japan, talked next about documentation and localization. Most of the operator docs like Operators Guide are integrated in and supplied with the translation platform Zanata. He stressed the importance of localization and also raised the use case for translating Japanese docs into English.

To kick off the afternoon sessions, Jens-Christian Fischer from Switch brought in an interesting session format that was voted the best session of the day: Architecture Show & Tell/Lightning Talks. Participants running OpenStack deployments could briefly share their experiences, comments and talk about what’s next. The same format was applied for the Ops War Stories Lightning Talks.

The second day was even faster paced and packed with sessions. Morgan started the day with a session called “All the Young Dudes,” dedicated to the latest OpenStack Foundation projects: Airship, StarlingX, Kata Containers and Zuul. Comments were made about the new Open Infrastructure Summit: will this align with an Open Infrastructure Ops Meetup? We’ll see.

Testing and tuning were the next topics on the second morning. Later that day, Colleen Murphy, software engineer at SUSE, offered insights into Keystone’s evolution and usage. It was a valuable feedback session from operators to developers.

The Etherpads — here’s a list of them from the event — are a great way to ensure that you can attend the next one.

“Show the Etherpad to your manager so they can see how important these in-person events are,” Morgan said.  “Ops meetups are for collaboration and sharing. Most of the content can be read later, but there’s nothing better than being there yourself.”

Photo // CC BY NC

The post Takeaways from the latest OpenStack Ops Meetup appeared first on Superuser.

by Frank Kloeker at March 12, 2019 04:02 PM

March 11, 2019

OpenStack Superuser

Inside private and hybrid cloud: Must-see sessions at the Open Infrastructure Summit

Join the people building and operating open infrastructure at the inaugural Open Infrastructure Summit.  The Summit schedule features over 300 sessions organized by use cases including: artificial intelligence and machine learning, continuous integration and deployment, containers, edge computing, network functions virtualization, security and public, private and multi-cloud strategies.

In this post we’re highlighting some of the sessions you’ll want to add to your schedule about private and hybrid cloud.  Check out all the sessions, workshops and lightning talks focusing on these topics here.

Open-source networking: The useful, the scrap heap, and the broken

Open-source networking is still top of mind in the networking industry and the open-source world. While some argue it reached critical mass in 2015 it’s continued to drive forward at an increasing pace, say Cisco’s Kyle Mestery and Ian Wells. In this intermediate-level talk, they’ll cover how to effectively work across many upstream open-source networking projects. They’ll offer a comparison of how these projects take on work, how they consume this work and how they fold it into their projects as well as a light-hearted look at strategies that work and those guaranteed to fail. Details here.

Improving resource availability in CERN’s private cloud

Teams at CERN are working hard to upgrade the Large Hadron Collider for the next set of experiments kicking off in 2021. During the next run, the computational needs required to process all the data produced in the LHC are expected to increase dramatically. To meet these upcoming needs, the CERN private cloud is always looking into ways of optimizing and making more resources available to researchers. Jose Castro Leon and Spyros Trigazis will review the tools, based on OpenStack services including Mistral and Watcher, that allow CERN to automate the distribution of workloads, increase their efficiency and optimize the cloud environment. In this intermediate-level talk, the pair will also show upcoming work to push the limits of their current service offering even further by optimizing workloads in Kubernetes clusters and preemptible instances. Details here.

How Blizzard Entertainment uses autoscaling for “Overwatch”

Blizzard Entertainment is a video game developer and publisher that has been using OpenStack as a private cloud to host its game services since 2012. This talk from Duc Truong and Jude Cross discusses the OpenStack autoscaling implementation at Blizzard to support its best-selling team-based shooter title “Overwatch.” The beginner-level case study will focus on the unique challenges of running video games in the cloud and presents the advantages of utilizing autoscaling. Details here.

Airskiff: Your on-ramp to Airship development

Airship is a platform that enables operators to reliably and repeatably provision production-grade cloud infrastructure using declarative configuration. Airship is a substantial project and it can be tough to know where to get started. Operator-scale target hardware, sizeable configuration documents and non-trivial deployment times challenge developers who want to quickly get started with Airship development.

This presentation by AT&T’s Matt McEuen and Drew Walters offers developers (as well as operators) a solution to quickly test their software workloads using Airship and to deploy their own code changes to Airship itself while demonstrating the flexibility and resiliency that Airship provides. Details here.

Self-healing on network failures with Vitrage, Mistral and Heat

A network failure in a complex system can have ripple effects throughout the system. For an application, especially in the world of NFV (but not only), quick recovery from failure is crucial.
In this session, Nokia’s Ifat Afek and Muhamad Najjarwe with EasyStack’s Rico Lin will show how Heat uses Vitrage and Mistral to guarantee self-healing of the Heat stack – either on the physical or virtual layer. Vitrage, the OpenStack Root Cause Analysis service, is used to analyze the system state and identify the affected resources. Mistral, the OpenStack workflow service, provides a workflow for healing and Heat automates the entire healing process.
They’ll demonstrate how these three projects work together to provide a new and easy way to self-heal your application with accuracy, and how to keep it unbroken and make sure everyone can use it. They’ll also talk about the self-healing SIG and discuss its future plans. Details here.

Giant leap upgrades

Deploying an OpenStack cloud in a production environment is a great achievement and sometimes overwhelming task in itself, says Jimmy McCrory of Box. Keeping that cloud up-to-date, even within a release cycle or two behind the latest, can sometimes feel impossible. His presentation will cover the steps that were taken to plan, test and perform OpenStack upgrades spanning four OpenStack releases (Mitaka to Queens) during single maintenance windows.
He’ll also be covering the mistakes made, lessons learned and tips for operators dreading their next, or even first, major upgrade.
Details here.

See you at the Open Infrastructure Summit in Denver, April 29-May 1! Register here.

Cover photo // CC BY NC

The post Inside private and hybrid cloud: Must-see sessions at the Open Infrastructure Summit appeared first on Superuser.

by Superuser at March 11, 2019 02:04 PM

March 08, 2019

OpenStack Superuser

Why you should master Rook for Ceph storage on Kubernetes

Rook is an open-source cloud native storage orchestrator for Kubernetes that one of its maintainers, Alexander Trost, says is simple to use. (Presumably easier than mastering chess, from which the project takes its name.)

Trost, a dev-ops engineer at Cloudibility, gave a talk about using Rook at the recent Free Open Source Developers’ European Meeting (FOSDEM) that ran through the architecture and advantages of using Rook. (There are so many that he ran out of room for the demo, but you can get the demo files on GitHub.)

Why would you want to take up this project? With Rook, ops teams can run software-defined storage (SDS) systems, such as Ceph, on top of Kubernetes. Developers can then use that storage to dynamically create persistent volumes (PV) in Kubernetes to deploy applications, such as Jenkins, WordPress and any other app that requires state. Ceph is a popular open-source SDS that can provide many popular types of storage systems, such as object, block and file system and runs on top of commodity hardware. Rook, currently an incubating-level project of the CNCF, can also be used with other storage providers including CockroachDB, EdgeFS, Minio and Cassandra.

As for what Rook can help you do better with Ceph, Trost says the main benefits are health checks for MONs with automatic failover, simple management of Ceph clusters, pools, filesystem and RGW through Kubernetes objects as well as offering storage selection in one central place.

Learn more

To get started, you can take a look at the Quick Start Guides, at the GitHub repo, join the forum or the Slack channel. Take a look at the FOSDEM slides or view the 39-minute talk here.

At the upcoming Open Infrastructure Summit, there are two sessions dedicated to the project: “Rook: A new and easy way to run your Ceph storage on Kubernetes,” with Blaine Gardner, a Rook-Ceph maintainer and software engineer at SUSE Enterprise Storage, and Dirk Müller, also at SUSE, and “Storage 101: Rook and Ceph” with Red Hat’s Federico Lucifredi, Sean Cohen and Sébastien Han.


Cover photo // CC BY NC

The post Why you should master Rook for Ceph storage on Kubernetes appeared first on Superuser.

by Superuser at March 08, 2019 03:08 PM

Chris Dent

Placement Update 19-09

Here's another placement update. It was feature freeze this week so people are probably a bit worn, I'll try to keep this light and quick.

Most Important

We need to decide if we want to do what amounts to a feature freeze exception for Tetsuro's negative member-of handling linked below and work out how/when to fit Mel's allocation ratio work on osc-placement (also below) into the world so people can use it. If people have placement related forum sessions they would like to see happen the deadline is either today (March 8th) or Monday, depending on which and how you read various pieces of info. Matt has already set up an extraction-related session.

It's also important to be thinking about how placement would like to engage (as a group) with the PTG.

If you want to run for PTL (of placement or anything else) the deadline is 23:45 UTC, 12 March.

What's Changed

  • The refactoring of the objects/ file into smaller, more single-purpose files, continues to merge. In case anyone is feeling like this is a lot of churn for little purpose, it does have purpose: This is making the code more accessible to people who aren't familiar with it. It is also providing a good review (a variety of warts and bugs have been found and fixed) of stale code.

  • pep8's whitespace handling has been turned back on in placement. We inherited the exceptions to the rules from nova, where fixing for those rules would have been messy. In placement it wasn't that much of a big deal.

  • A conf setting [placement_database]/sync_on_startup has been added. Defaults to False. If True, the web service process will attempt the equivalent of placement-manage db sync at startup.

  • Improved debug logging when requesting allocation candidates, so that operators can more accurately determine what requirement led to reduced or no available hosts.

  • 1.5.0 of osc-placement was released. It now goes up to microversion 1.18 (filter resource providers by required traits). (Note to everyone: We should make the info that shows up on PyPI more useful, by changing the README).

  • VGPU reshaping for libvirt merged!
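The [placement_database]/sync_on_startup option mentioned above would be set in placement's configuration file with a fragment like this (section and option names from the notes; the file path depends on your deployment):

```
[placement_database]
# Attempt the equivalent of `placement-manage db sync` when the
# web service process starts. Defaults to False.
sync_on_startup = True
```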


Near to Done

Not yet Done

These will get punted to Train.

Not yet Approved


There is a storyboard project group for placement projects. Through to the end of Stein we'll be paying attention to both it and Launchpad. The sole story in storyboard right now is updating docs to indicate we are using storyboard. So meta!

In launchpad:

The large drop in "in progress" is because I went through and caught up those bugs which had fixes committed but had not been automatically updated.


osc-placement is currently behind by 13 microversions.

Pending changes:

Main Themes


  • VGPU reshaping for libvirt merged!

  • The bandwidth-resource-provider topic has merged a vast amount of code but there is some left. Some of it may still merge in Stein, but the rest will be for Train. The microversion (2.72) provides the API-level bits to allow booting a server with a minimum bandwidth requirement.

    At least some of the remaining changes will be backported.

There were no nibbles on last week's plea, so I'll say again: If anyone reading this is in a position to provide third party CI with fancy hardware for NUMA, NFV, FPGA, and GPU related integration testing with nova, there's a significant need for that.


The work that removed oslo.versionedobjects then moved on to removing the List classes (e.g. AllocationList) in favor of using native Python lists and breaking up the file into smaller files. The first portion of that (scrub-Lists) merged. The next stage is on cd/de-list-phase2.

This will continue until each type has its own file.

Other Placement

Other Service Users

(If you stick "placement" somewhere in your commit message I'll probably eventually find your in-progress placement-related changes.)


Not Nova


Go forth and document and bug fix and raise a full and flavorful stein.

by Chris Dent at March 08, 2019 02:48 PM

March 07, 2019

OpenStack Superuser

Spectacular security failures: Experts share all in a book

Sharing horror stories is a part of dev ops culture. Who doesn’t like reliving some epic fail? One that hopefully didn’t cost you a job or threaten the survival of the company.

While sharing a beer and some war stories, Edwin Kwan, Stefan Streichsbier and DJ Schleen decided there should be a way to share knowledge in the larger community. So now there’s an entire book dedicated to them. The 180-page publication from Sonatype is titled “Epic failures in DevSecOps.” Over eight chapters, a host of experts share what they were trying to accomplish, what went wrong, how they tried to resolve it as well as the final outcome and lessons learned. You can download the book, free with email registration, here.

In chapter three, “The Problem with Success,” Schleen, currently a dev sec ops evangelist and architect at a large healthcare organization, shares a tale that may sound familiar. While implementing four security tools for the dev ops pipeline, he ran into a problem with staggering scanning queues.

“While trying to understand why scans were taking so long, we decided to take a deeper look at the source code to determine what was happening. What we uncovered was that our dev ops teams were not only scanning the code they were building themselves, but were also scanning all of the open-source software components that their application required…As third-party open source components go, many of them have quite a few vulnerabilities and some even critical. Scanning these unnecessary libraries resulted in higher defect densities and the additional volume of source code was responsible for clogging our engines.” Schleen admits that the failure was due to lack of planning for scaling — and not communicating to delivery teams what they needed to scan for.
“We got the culture and the technique right but missed the mark with the tools,” he concludes. Other stories in the book touch on open-source tech including Node JS, Postman and BDD-Security.

The book is clearly meant to be part therapeutic, part warning. “The stories presented here are not a roadmap. What they do is acknowledge failure as a part of the knowledge base of the DevSecOps Community,” says the book’s editor Mark Miller. This is only the first volume in a series: Miller invites readers to share their own horror stories for the next one.

Check out the full book here.

The post Spectacular security failures: Experts share all in a book appeared first on Superuser.

by Nicole Martinelli at March 07, 2019 03:35 PM

Trinh Nguyen

Searchlight at Stein-3 (R-8,7,6)

Yahoo!!! We reached the Stein-3 (Stein R-5) [13] which is a very important milestone [11]. During this week, a couple of Searchlight-related events will happen, including:
  • Feature freeze: we have some features still being developed that we expect to release at Stein R-1.
  • Final release for client libraries: at this point, I can tell there will be no more major changes to the python-searchlightclient
  • Stein community goals completed: Searchlight runs tests under Python 3 by default and has a basic framework for pre-upgrade checks.
  • Train PTL self-nomination: I would say that I will run for another term as Searchlight PTL in order to build a foundation for the multi-cloud vision of Searchlight.
Following are the major changes we made at Stein-3:
  • TC vision reflection [1]: this is a good practice to compare the Searchlight vision with the TC vision [12] to make sure the team is going in the direction designed by the OpenStack community.
  • Replace httplib2 with requests [2]: this is to make the functional tests more stable by tweaking the Elasticsearch setup.
  • Add python 3.7 unit test job [6] [7] [8]
And there are some ongoing tasks that we hope to finish in a couple more weeks.
  • Docker deployment sample [3]
  • Tacker plugin blueprint [4], and implementation [10]
  • Multiple OpenStack support [5]
Keep moving forward!!!



by Trinh Nguyen ( at March 07, 2019 01:32 AM

March 06, 2019

OpenStack Superuser

How you can influence the Open Infrastructure Forum

At the Forum, the open infrastructure community gathers to brainstorm the requirements for the next release, gather feedback on the past version and engage in strategic discussions that go beyond goals for the next release cycle.

Now’s the time for you to weigh in for the upcoming Denver Summit. You can add ideas for sessions at the Forum in a couple of ways. If you’re working on a project or have a specific interest, check out the list here. If you want to post an idea, but aren’t working with a specific team or working group, use the catch-all Etherpads for the Technical Committee and the User Committee. You can also check out which projects are already on tap for onboarding and updates at the Forum here.

Not sure what might work? Great forum sessions typically:

  • Involve both developers and users
  • Involve multiple projects/teams/working groups collaborating
  • Have a concrete outcome/a conclusion to work toward

Here’s more detail on the types of sessions that work for this event:

Project-specific sessions

Where developers can ask users specific questions about their experience, users can provide feedback from the last release and cross-community collaboration on the priorities and ‘blue sky’ ideas for the next release can occur.

Strategic, community-wide discussions

This is the time and place to think about the big picture, including beyond just one release cycle and new technologies.

Cross-project sessions

In a similar vein to what took place at past Design Summits, but with increased emphasis on issues that are relevant to all areas of the community.

One more idea: if you’ve organized or attended any open infrastructure events in the past year, you’ve probably heard talks or been in discussions that are perfect for the Forum. Pitch them!

A committee of Technical and User Committee representatives and Foundation Staff will then schedule the sessions into available times.

You have until March 10 to propose your session for this edition of the Forum which is co-located with the Open Infrastructure Summit.

Cover photo // CC BY NC

The post How you can influence the Open Infrastructure Forum appeared first on Superuser.

by Superuser at March 06, 2019 03:30 PM

Sean McGinnis

March 2019 OpenStack Board Notes

This is the second report in a hopefully ongoing series of notes capturing OpenStack Foundation Board Meeting activities.

As a reminder (or in case you weren’t aware) the planned OSF board meetings are published on the wiki and are open to everyone. Occasionally there is a need to have a private, board member only portion of the call to go over any legal affairs that can’t be discussed publicly, but that should be a rare occasion.

I would encourage anyone interested to listen in. Or join us at the next face-to-face “joint leadership” meeting with the Technical Committee and User Committee on April 28, the Sunday prior to the next Open Infrastructure Summit in Denver, CO.

March 5, 2019 OpenStack Foundation Board Meeting

The original agenda can be found here and the official minutes are here. Jonathan Bryce also usually sends out unofficial minutes to Foundation mailing list. The March 5th notes can be found here.

Board Member Changes

The Platinum and Gold board member positions are tied to the sponsor organizations, with the Platinum members being appointed and the Gold members being selected as part of an election amongst that group.

Due to shifting job responsibilities, career moves, etc., representatives from this group do change from time to time. Brian Stein (Rackspace) and Mark Baker (Canonical) have left the board, replaced by Andy Cathrow and Ryan Beisner, respectively.

Confirmation Guidelines

Allison Randall has been leading the effort of formalizing confirmation guidelines for new projects coming in under OpenStack Foundation governance. The draft of these guidelines has been available for some time, so this was more of a final review to see if there were any further details that needed to be worked through.

Personally, I find section 3 (“Technical best practices”) to be a little vague in practice, but not enough to raise it as an issue in the meeting. These are guidelines after all, not a legal rubric.

I am very happy with the detail in section 4 (“Open collaboration”). The items listed there were the main points of concern I had with how new projects will be handled. I believe things like following the Four Opens and adhering to the community code of conduct are very important to ensure there is some cohesion between the different groups that will make up the greater community. I’m also happy to see having an OSI license explicitly called out.

The main conversation around this topic ended up being the need (or not) for non-public discussions about incoming projects. There were some strong opinions as to whether it is even appropriate for an open source organization such as ours to have private board meetings to discuss these things, and whether that would create the appearance of making decisions on new projects behind closed doors.

Mark Radcliffe’s main point was that there could be times when legal reasons would require it. A good example: a director might work for a company with a legal restriction that would prevent them from making public comments about a project being led by a competing organization.

Following that line of reasoning further: if private board meetings are sometimes needed, would it be better to hold one for every project, so that it isn’t obvious when there actually are concerns about a particular new project?

I think it was Mark McLoughlin who raised the point that we do reserve the option to have an exclusive portion of our regular board meetings, so we could be discreet by letting Alan Clark know that we would like to use that time at the next board meeting for a private discussion, without making it too obvious that there are concerns about a project.

I could see some of the legal arguments for maybe needing this, but ultimately I think we need to stay open in these discussions. By being part of the board, we are taking on the obligation of publicly discussing matters of the OSF. If someone has a legal reason to not discuss something in the open, then I think it’s probably better they either recuse themselves from the discussion or have a direct conversation with the team or Alan to raise their concerns.

We are pushing for the Four Opens for the community. I believe the Board needs to operate as openly as possible too.

We will be planning another meeting in April to give everyone time to think through some of the concerns and then hopefully agree on how this will be handled going forward.

The good thing is that the confirmation guidelines themselves were approved and we now have something in place as we evaluate new OSF projects.

Compensation Committee Update

Alan presented the current draft of the compensation committee’s work on setting goals for the OpenStack Foundation staff for 2019. I am not able to find a public reference to that at the moment, but I will see about posting a future update.

Basically, this sets the goals for Jonathan Bryce in directing Foundation staff activities for the year. It includes things like the new project confirmations, promotion of OpenStack activities and other efforts for the staff to work toward.

India Events

Prakash Ramachandran raised this topic to build awareness of efforts he is leading to promote OpenStack and Open Infra events in India, and the desire to at least have a discussion about a potential future Open Infrastructure Summit in India.

Unfortunately we ran out of time. I would have liked to have had more discussion on this topic. We have a very large contributor base in India and many Indian companies using OpenStack. I think it would be great to help connect with this community by having a Summit there and I was glad to hear about some of the smaller events Prakash and team have been organizing in different regions.

(side note: I had a great experience going to an OpenStack Days event in Bangalore a few years ago.)

Hopefully we will hear more on this in the future.

by Sean McGinnis at March 06, 2019 12:00 AM

March 05, 2019

OpenStack Superuser

A new partnership aims to push forward artificial intelligence and machine learning with OpenStack

Cloud service provider Vault has teamed up with researchers from the University of Technology Sydney and Australia’s national science agency, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), for a $500,000, three-year research project centered on OpenStack.

“As the government invests in cloud migration, there will be significant value in being able to analyze data stored in the cloud,” Vault CEO and founder Rupert Taylor-Price said in a statement adding that enhancements made to the current OpenStack architecture will be subsequently built into Vault’s cloud.

The news comes ahead of a session at the Open Infrastructure Summit featuring Jacob Anders, CSIRO’s high-performance computing technical lead. At the session titled “It’s a Cloud.. it’s a SuperComputer.. no, it’s SuperCloud!“, he’ll explain how, building on open standards, the team is bringing the infrastructure-as-code methodology to bare metal. The resulting SuperCloud supports a vast array of DevOps tools, enabling users to programmatically request HPC resources, from compute to NVMe to InfiniBand networks. The cloud allows building HPC clusters, RDMA storage and containerized workloads quickly, with a simple playbook.

For the backstory on how these collaborations have evolved, check out this talk from the Sydney Summit by Taylor-Price. He shared his extensive knowledge of the use of OpenStack in the Australian Government, focusing on how Vault leveraged the simplicity and openness provided by OpenStack to build one of the world’s most secure cloud platforms exclusively for use by the Australian government. Vault was one of the first companies to gain Australian Signals Directorate certification for the storage and processing of classified data. Check out his half-hour session here.

For more about HPC, AI and OpenStack, take a look at the dedicated track at the upcoming Open Infrastructure Summit. It features case studies from the National Institute of Standards and Technology, Huawei and Lenovo. More on that track here.

Via Computer World

The post A new partnership aims to push forward artificial intelligence and machine learning with OpenStack appeared first on Superuser.

by Superuser at March 05, 2019 03:03 PM

March 04, 2019

OpenStack Superuser

How to rock your talk at the Open Infrastructure Summit

Kudos on getting your talk accepted for the first-ever Open Infrastructure Summit in Denver. The competition is fierce: typically only a fraction of pitches are accepted. Here are some thoughts on making it a great one.

Divorce your slides

Many of us are more comfortable over IRC or email than IRL. That’s why public speaking is challenging! Use the fact that you’ll be in person to your advantage: It’s the difference between adding a smiley face to your message and actually smiling (or dancing? How about wearing your code?) during your talk. You have more elements to work with – visual, sound, text. Where this most often falls down is over-reliance on slides. The slides should add something — more context, detail, say an outline of the architecture — not just repeat word for word the statements you’re making. If you are married to the text on them, folks will wonder why they just signed up for 40 minutes of you reading aloud. Yawn.

Mind the buzz words, spell out key acronyms

The flip side of the above point: Moving so fast with industry jargon and so many acronyms that only your closest colleagues have any idea what you’re talking about is also not advised. Even if you have pitched your session as appropriate for people with an advanced level of knowledge, it’s still worth your time to spell out terms for the ambitious intermediate people in the room.
If you’re prone to overdoing the jargon, let me introduce you to Unsuck-it. You’ll get a few laughs while weeding out terms like “best-of-breed” and “value proposition” from your talk. Otherwise, chances are high that you’ll spot a row of people in the back playing buzzword bingo. Who can blame them?

Shelve the sales pitch

This is probably the fastest way to empty a room: Make sure your talk is focused on selling your products and how bleeding edge your company is. The best case-study talks will include the challenges faced (whether with your technology or the client’s) and how you’re working with the community (or how you’d like to) to solve problems upstream. Think about sharing experience with your peers, not a room of potential clients.

Yeah, you really do need to practice

This doesn’t happen often, but do not be the person who is testing out the presentation (“Oh, hey, I’m winging it!”) in a full room at the Summit. You need to know the material well enough to use your slides as signposts, not a memory device.  If you took point number one about using the live environment to your advantage and are risking a demo – run it enough times so that you can be reasonably confident that it will work.

And if English isn’t your first language, here are some thoughts from my personal experience of public speaking in a second language. Practice in this case means you need to say the entire presentation out loud at least two or three times. If there are specific words you’re stumbling over, find synonyms, or check the audio pronunciation on sites like Forvo or in similar talks on YouTube. The community is global and really wants to share expertise from all corners; people will not be grading your accent or pronunciation — aim for clarity and comprehension.

Consider the jokes, mind your memes

Personally, my life would be complete if I never sat through another tech talk with any reference at all to “Star Wars.” And by that I mean quoting the movies, using a GIF, playing a clip, referencing the t-shirt you’re wearing. Seriously. It’s an obvious way to seek common geek ground with the audience and it’s been done.way.too.many.times. Enough with the member berries, already.
Of course, what some people consider wired and others consider tired is subjective. Just spare an extra thought or two about how much wordplay, inside jokes, pop culture references are really going to add value to your talk. Personality is great. Humor is also good. If you’re determined to add something special to your talk, consider Kelsey Hightower the bar. Here he is presenting “Diana” as part of a recent keynote. Aim that high.

What are some of your favorite talk techniques or pet peeves? Let us know in the comments!

The post How to rock your talk at the Open Infrastructure Summit appeared first on Superuser.

by Nicole Martinelli at March 04, 2019 03:09 PM

March 01, 2019

OpenStack Superuser

Meet the latest release of Kayobe: Even easier deployment of containerized OpenStack to bare metal

Kayobe is a free and open source deployment tool for containerized OpenStack control planes, based on Kolla and Kolla-Ansible and embodying current best practices. Kayobe is seeing broad adoption for research computing configurations and use cases.

After its beginnings with OpenStack Ocata, Kayobe is now onto its fourth major OpenStack release with support for Rocky. Admittedly, Rocky was finalized back in November 2018. StackHPC’s dedicated team (who drive much of the work on Kayobe) has been busy with some major pieces of work, both within StackHPC and around the OpenStack ecosystem. Thanks to growing strength and breadth, the team was actually quicker with this release than it was with Queens and expects to be quicker still with the forthcoming Stein release.

In addition to support for deploying and managing Rocky, the release notes describe many new features in this release.

Mark Goddard presented our work on Kayobe at the recent UKRI Cloud Workshop at the Francis Crick Institute in London.

Goddard speaking at the recent cloud workshop.

“The Kayobe 5.0.0 release includes a number of useful features. We now have a full upgrade path for the seed services from Ocata to Rocky. The Python package now includes the Ansible playbooks, meaning that you can now use Kayobe without a copy of the source code repository,” Goddard says.  “This sets us up for more reproducible and easy to install Kayobe control host environments.”

Get involved

The team is now working on the Stein version – get in touch on IRC at #openstack-kayobe or the openstack-discuss mailing list to help shape the next release.

Stig Telfer, Mark Goddard and John Garbutt are leading a hands-on workshop titled “Containerized OpenStack deployment using Kolla, Ansible and Kayobe” at the upcoming Open Infrastructure Summit.

This post first appeared on the StackHPC blog. Superuser is always interested in community content – get in touch at

Cover photo // CC BY NC

The post Meet the latest release of Kayobe: Even easier deployment of containerized OpenStack to bare metal appeared first on Superuser.

by Stig Telfer at March 01, 2019 05:01 PM

Chris Dent

Placement Update 19-08

Welcome back to the placement update. If I've read the signs correctly, I should now be back to this as a regular thing. Apologies for the gap, I had to attend to some other responsibilities.

Most Important

A lot has changed in the past few months, so it's hard to single out one most important item; it will depend on who is reading. Review What's Changed for a summary of the important stuff.

What's Changed

  • Placement is now its own official project. Until elections are held (it looks like nominations start this coming Tuesday), Mel is the PTL.

  • Setting up storyboard for placement-related projects is in progress. For the time being we are continuing to use launchpad for most tracking. See a related email thread.

  • Deleting placement code from nova has been put on hold until Train to make it easier for certain types of upgrades to happen. New installs should prefer the extracted code, as the nova-side is frozen, but the placement side is not.

  • A large stack of code to remove oslo.versionedobjects from placement has merged. This has resulted in a significant change in performance on the perfload test that runs in the gate. While not a complete representation of the entire system, it's enough to say "yeah, that was worth it": A request for allocation candidates that used to take around 2.5 seconds now takes 1.2. That refactoring continues (see below), seeking additional simplifications.

  • Microversion 1.31 adds in_tree and in_treeN query parameters to GET /allocation_candidates. This is useful in a variety of nested resource provider scenarios, including the big bandwidth QoS changes that are in progress in nova and neutron.

  • Placement is now publishing install docs, but it is important to note that those docs have not (as far as I'm aware) been validated by the packagers. That's a thing that still needs to happen.

  • os-resource-classes 0.3.0 has been released with a normalize_name function.

  • There are some pending specs from nova which are primarily placement feature specs. We'll continue with those as is (see below), but come the next cycle the plan is to manage specs in the placement repo, not have a separate repo, and not have separate spec cores.
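To make the microversion 1.31 change above more concrete, here is a small sketch of how a client might construct such a request. This is an illustration, not placement client code: the endpoint URL and provider UUID are made up, and a real request would also need auth and the `OpenStack-API-Version: placement 1.31` header to opt in to the microversion.

```python
from urllib.parse import urlencode

# Build the query string for a GET /allocation_candidates request that limits
# candidates to the resource provider tree rooted at a given provider UUID
# (the in_tree query parameter added in placement microversion 1.31).
def allocation_candidates_url(base, resources, tree_uuid):
    query = urlencode(
        {
            "resources": ",".join(
                "%s:%d" % (rc, amount) for rc, amount in resources.items()
            ),
            "in_tree": tree_uuid,
        },
        safe=":,",  # keep the ':' and ',' separators readable in the URL
    )
    return "%s/allocation_candidates?%s" % (base, query)

url = allocation_candidates_url(
    "https://placement.example.com",         # hypothetical endpoint
    {"VCPU": 2, "MEMORY_MB": 2048},
    "4e8e5957-649f-477b-9e5b-f1f75b21c03c",  # hypothetical provider UUID
)
```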
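And as a rough illustration of what the new normalize_name helper in os-resource-classes is for: custom resource class names follow the convention of being upper-case, underscore-separated and prefixed with CUSTOM_. The sketch below mimics that idea; it is not the library's actual implementation, and the regular expression here is an assumption about which characters get replaced.

```python
import re

# Sketch of normalizing an arbitrary string into a valid custom resource
# class name: upper-case it, replace anything that is not A-Z, 0-9 or "_"
# with an underscore, and add the CUSTOM_ prefix.
def normalize_name(name):
    return "CUSTOM_" + re.sub(r"[^A-Z0-9_]", "_", name.upper())

normalized = normalize_name("my ssd-disk")  # CUSTOM_MY_SSD_DISK
```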


Near to Done

Not yet Done

Not yet Approved



osc-placement is currently behind by 14 microversions.

Code for 1.18 is under review.

Main Themes

This section now overlaps a bit with the Specs/Features bit above. This will settle out with a bit more clarity as we move along.


  • Reshaper handling in nova keeps exposing additional things that need to be remembered on the nova side, so there are a few patches remaining related to vGPU reshaping, but it is mostly ready.

  • The bandwidth-resource-provider topic has merged a vast amount of code but there is still plenty left.

Related to all this nested stuff: The complex hardware models that drove the development of the nested resource provider system are challenging to test. The cloud hardware provided to OpenStack infrastructure does not expose the hardware that would allow real integration tests. If anyone reading this is in a position to provide third party CI with fancy hardware for NUMA, NFV, FPGA, and GPU related integration testing with nova, there's a significant need for that.


(I think refactoring should be a constant theme. To reflect that, I'm going to have a section here. Editorial privilege or something.)

There's a collection of patches in progress, currently under the topic scrub-Lists that is a follow up to the patches that removed oslo versioned objects. That work pointed out some opportunities to DRY-up the List classes (e.g., UsageList) to remove some duplication and simplify. Then, after looking at that, it became clear that entirely removing the List classes, in favor of using python native lists, would further simplify the code.
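The shape of that simplification can be sketched abstractly. These classes are illustrative, not placement's actual code: when a List class's only real job is to hold objects, a plain Python list does the same work with one less layer of indirection and no per-type boilerplate.

```python
class Usage:
    def __init__(self, resource_class, usage):
        self.resource_class = resource_class
        self.usage = usage

# Before: a dedicated wrapper class per object type, forwarding to a list.
class UsageList:
    def __init__(self, objects):
        self.objects = objects

    def __iter__(self):
        return iter(self.objects)

    def __len__(self):
        return len(self.objects)

# After: callers simply receive a native list of Usage objects; iteration
# and len() behave identically, and the wrapper class disappears.
usages = [Usage("VCPU", 4), Usage("MEMORY_MB", 1024)]
total = sum(u.usage for u in usages)
```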

Apart from the previously mentioned performance and simplicity benefits of these changes, the work has also managed to expose and fix a few bugs, simply because we were looking at things and moving them around. If you pick up rocks, you can see the bugs and squash them. If you don't, they breed.

Other Placement

Other Service Users


See also the several links above for more nova changes. Also, I'm a bit behind on my tracking in this area, so there is likely plenty of other stuff too. This will improve over time.

Not Nova


Though this is long, it doesn't really bring us fully up to date. If something is missing that you think is important please let me know. Once I'm back in the flow it should become increasingly complete.

by Chris Dent at March 01, 2019 01:16 PM

What's happening in the OpenStack community?

In many ways, 2018 was a transformative year for the OpenStack Foundation.

by Jonathan Bryce at March 01, 2019 08:00 AM

February 28, 2019

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Welcome to the latest edition of the OpenStack Foundation Open Infrastructure newsletter, a digest of the latest developments and activities across open infrastructure projects, events and users. Sign up to receive the newsletter and email to contribute.

Spotlight on… Denver Summit + Project Team Gathering update

You’re probably aware that the OpenStack Summit was renamed the Open Infrastructure Summit to reflect the wide array of open-source tools used to automate modern infrastructure and the challenges of integrating them to solve advanced use cases.

The agenda for the April 29–May 1 event is now live, featuring talks and training sessions that cover the implementation of more than 30 open-source projects, including four new projects hosted at the OpenStack Foundation (OSF): Kata Containers, Zuul, Airship and StarlingX. As in previous conferences, the agenda is organized into Tracks and tagged by project so you can easily find sessions.

Here’s an overview of the content and discussions you can expect to see at the Summit:

  • Breakout sessions around CI/CD, container infrastructure, private and hybrid cloud, public cloud, telecom and NFV, HPC/GPU/AI.
  • Featured speakers from ARM, AT&T, Baidu, Boeing, Blizzard Entertainment and Haitong Securities, sharing open infrastructure use cases featuring Ceph, Docker, Kata Containers, Kubernetes, OpenStack and 30 other open source technologies.
  • In addition to almost 100 sessions about OpenStack, the schedule includes project updates, onboarding sessions and demos across the OSF’s pilot projects—Airship, Kata Containers, StarlingX and Zuul.
  • Collaborative sessions will be offered at the Forum, where open infrastructure operators and upstream developers will gather to jointly chart the future of open source infrastructure, discussing topics ranging from upgrades to networking models and how to get started contributing.

Project Team Gathering

  • Check out the list of the teams already signed up for the May 2-4 event on the PTG site. They include many OpenStack projects and special interest groups (SIGs) in addition to several pilot projects.  Expect the final schedule soon.

OpenStack Foundation news

  • After receiving and incorporating feedback from the Board of Directors committee, OSF project leadership bodies and the broader community, a draft of the OSF Project Confirmation Guidelines will be reviewed by the Board during their March 5 meeting. We would like to give special thanks to Allison Randal for driving this effort.
  • The Diversity & Inclusion Working Group is conducting an anonymous survey to better understand the diversity and makeup of the community. More on these efforts here.

OpenStack Foundation project news

  • Registration is open for the Ops meetup in Berlin, March 6-7, 2019, a community-driven, collaborative event for people running OpenStack infrastructure.
  • Congratulations to Amy Marrich, Belmiro Moreira and John Studarus on their elections to the OpenStack User Committee.
  • Voting for the OpenStack Technical Committee started Feb 26, 2019 23:45 UTC and continues through Mar 05, 2019 23:45 UTC. Check out the election page to review the candidates’ platforms.
  • The OpenStack community just launched a new Bare Metal Special Interest Group, with a focus on highlighting the many ways that OpenStack Ironic is being used in production to manage hardware clusters. The SIG has just started a white paper covering the philosophy and use cases for managing hardware with Ironic. This is a community-based collaboration and you’re invited to participate in writing it!


  • The Airship team is now producing monthly releases of Treasure Map, the Airship documentation project that provides a complete, continuously tested sample configuration for deploying open infrastructure with Airship. If you’re ready to run Airship in production, Treasure Map will guide that journey.

Kata Containers

  • Samuel Ortiz and Xu Wang were reelected as returning members of the Kata Containers Architecture Committee. Congratulations and thank you for your continued leadership!
  • The Kata Content SIG holds monthly meetings to work on marketing content for the Kata Containers blog and other community channels. If you would like to get involved, please join the next meeting on March 13 at 7:00 a.m. PT.


  • There will be plenty of opportunities to meet up with the StarlingX community at the Open Infrastructure Summit in Denver; learn more about the project related sessions as well as plans for the Forum and PTG on the StarlingX blog.
  • Stay tuned for further announcements about project update and onboarding sessions as well as the RSVP link to the hands-on workshop where you can learn more about the platform by deploying it and playing with the latest features.


  • This week, Jim Blair provided a Zuul update on the mailing list, including specs around features such as an authenticated web API enabling tenant- or project-scoped privileged actions, web and AMQP triggers, Python 3-only node support and a URLTrigger driver.

OSF supported events

  • Join industry experts for a week at RSA Conference 2019 discovering better solutions, making better connections, and learning how to keep the digital world safe. Save on the price of a full conference pass with the OpenStack member discount.
  • OpenStack Days and Open Infra Days are regional events organized by the local user groups to support their community. Find an event in your community. Here are some upcoming events:
    • The Open Infrastructure Days UK early-bird pricing discount runs until the end of February.  With an amazing set of speakers and workshops already confirmed, register before the prices increase.
    • CERN is hosting an OpenStack Day event on May 27, 2019. It’s a great opportunity to learn more about how OpenStack is used in science and research.

Questions / feedback / contribute

This newsletter is edited by the OpenStack Foundation staff to highlight open infrastructure communities. We want to hear from you!
If you have feedback, news or stories that you want to share, reach us through and to receive the newsletter, sign up here.


The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by OpenStack Foundation at February 28, 2019 06:15 PM

Fleio Blog

Fleio 2019.02.1: OpenStack Swift object storage and floating IPs pricing rules

Here comes a brand new version of Fleio, OpenStack billing, and control panel – version 2019.02.1. We usually make one planned release per month, but we’re also trying to react quickly to customers’ needs. A customer asked to get some features as soon as possible, so we’re making the second release this month. Here are […]

by adrian at February 28, 2019 02:10 PM

SUSE Conversations

Transformation and Future Trends at SUSECON 2019

At the core of nearly all digital transformation initiatives is technology that helps companies move faster, drive innovation and fuel growth—all without missing a beat in day-to-day operations. This year at SUSECON 2019 in Nashville, we’re displaying how our open, open source approach to software-defined infrastructure and application delivery solutions makes it easier for organizations […]

The post Transformation and Future Trends at SUSECON 2019 appeared first on SUSE Communities.

by jkotzen at February 28, 2019 01:53 AM

February 27, 2019


Introduction to YAML: Deployments and other Kubernetes Objects Q&A

Recently we gave a webinar that gave an Introduction to YAML: Deployments and other Kubernetes Objects. Here's a look at some of the Q&As we covered (and some that we didn't have time for).

by Nick Chase at February 27, 2019 10:41 PM

February 26, 2019

OpenStack Superuser

Get your learning on at the next OpenStack Upstream Institute

Are you interested in learning about how to contribute upstream? Do you have questions about how releases work for OpenStack? How to get a feature implemented and merged? Or how to report a bug? Then OpenStack Upstream Institute is the place for you!

The weekend before the Open Infrastructure Summit begins (Saturday and Sunday), we host Upstream Institute (OUI). This training is a great opportunity to meet established members of the community and learn how to work with them upstream. We walk through the basics of contribution. We cover everything from how our community is structured and the release cadence to configuring your local development environment and the accounts you need to push code and file bugs.

Past students come from all kinds of backgrounds: university students, independent contractors and people representing companies big and small. With a dedicated team of mentors and our continually improved content, there is something for everyone to learn.

Mads Boye, an HPC specialist at Aalborg University, attended OUI at an OpenStack Day event in Stockholm last year. When asked what his favorite part of the training was, he said: “I liked the walk-through of the organization, the hands-on aspect of committing changes to the documentation and, of course, the sticker prizes!”

By the end of day two, you will have all the tools to push a patch and become an ATC (if you don’t know what that is, you will also learn that in the training).

The OpenStack Upstream Institute was designed by the OpenStack Foundation to share knowledge about the different ways to contribute to OpenStack. The program was built with the principle of open collaboration in mind and teaches attendees how to find information, as well as how to navigate the intricacies of the technical tools for each project. The training covers the many forms contribution can take, such as providing new features, writing documentation and participating in working groups.

Aimed at beginners, the trainers are all-star volunteers from the community. It’s broken down into modules – so whether you’re a developer, a project manager or interested in working groups, you can follow what most interests you. If you’re interested in mentoring at OUI, please email me at So come and join us!

Kendall Nelson is an upstream developer advocate at the OpenStack Foundation based in Seattle. She started working on Cinder and os-brick in the Liberty release and has since gotten involved in StoryBoard, the Women of OpenStack (WoO), WoO Mentoring and the OpenStack Upstream Institute.

The post Get your learning on at the next OpenStack Upstream Institute appeared first on Superuser.

by Kendall Nelson at February 26, 2019 05:05 PM

February 25, 2019

StackHPC Team Blog

Kayobe 5.0.0: The Rocky Release

Kayobe is a free and open source deployment tool for containerised OpenStack control planes, based on Kolla and Kolla-Ansible, and embodying current best practices. Kayobe is seeing broad adoption for research computing configurations and use cases.

After its beginnings with OpenStack Ocata, Kayobe is now onto its fourth major OpenStack release with support for Rocky.

Admittedly, Rocky was finalised back in November 2018. StackHPC's dedicated team (who drive much of the work on Kayobe) have been busy with some major pieces of work, both within StackHPC and around the OpenStack ecosystem. Thanks to growing strength and breadth, the team was actually quicker with this release than it was with Queens, and expects to be quicker still with the forthcoming Stein release.

In addition to support for deploying and managing Rocky, the release notes describe many new features in this release.

Mark Goddard presented our work on Kayobe at the recent UKRI Cloud Workshop at the Francis Crick Institute in London.

Mark Goddard at Cloud WG Workshop 2019

Mark says, "The Kayobe 5.0.0 release includes a number of useful features. We now have a full upgrade path for the seed services from Ocata to Rocky. The Python package now includes the Ansible playbooks, meaning that you can now use Kayobe without a copy of the source code repository. This sets us up for more reproducible and easy to install Kayobe control host environments. Thanks to everyone who contributed to the release. Now onto Stein - get in touch via #openstack-kayobe or the openstack-discuss mailing list to help shape the next release!"

by Stig Telfer at February 25, 2019 10:00 PM

February 21, 2019

Fleio Blog

Fleio 2019.02: LXC/LXD containers with console, OpenStack discounts, move instances

Fleio – OpenStack billing and control panel for service providers – version 2019.02 is now available. Some of the new features are unique to Fleio and not present in OpenStack Horizon:

  • LXD support with console
  • Move instances between clients (and OpenStack projects)
  • Staff users can boot instances from any image, regardless if the image is […]

by adrian at February 21, 2019 12:52 PM

February 14, 2019

SUSE Conversations

SUSE OpenStack Cloud 9 Release Candidate 1 is here!

We are happy to announce the release of SUSE OpenStack Cloud 9 Release Candidate 1! Cloud Lifecycle Manager is available: today we are releasing SUSE OpenStack Cloud 9 CLM along with SUSE OpenStack Cloud 9 Crowbar! You will now find it in the download area: SUSE-OPENSTACK-CLOUD-9-x86_64-RC1-DVD1.iso, the ISO to install Cloud 9 with Cloud Lifecycle […]

The post SUSE OpenStack Cloud 9 Release Candidate 1 is here! appeared first on SUSE Communities.

by Vincent Moutoussamy at February 14, 2019 03:49 PM

StackHPC Team Blog

StackHPC at the UKRI Cloud Workshop

We always enjoy attending the UKRI Cloud Working Group Workshop held annually at the awesome Francis Crick Institute. The sizeable crowd it draws and the high quality of content are both healthy signs of the vitality of cloud for research computing.

This year's workshop demonstrated a maturing approach to use of cloud, with some notable focus on various methods for harnessing hybrid and public clouds for dynamic and bursting workloads. Public cloud companies presented on new and forthcoming HPC-aware features, while research organisations presented on mobility to avoid lock-in to cloud vendors. How these two contrasting tensions play out will be interesting over the next few years.

There was also a welcome focus on operating and sustaining cloud-hosted infrastructure and platforms. In particular, Matt Pryor from STFC/JASMIN presented their current project on a user-friendly application portal, coupled with Cluster-as-a-Service deployments of Slurm and Kubernetes, with focus on both usability for scientists and day-2 operations for administrators. StackHPC is proud to be working with the JASMIN team on implementing this well-considered initiative and we hope to write more about it in due course.

We always participate as much as possible, and this year StackHPC was more involved than we have ever been before. Five members of our team attended, and in a one-day programme three presentations were delivered by the team - a real achievement for a ten-person company.

We presented three prominent areas of recent work. John Garbutt spoke about our recent work on storage for the software-defined supercomputer, in particular SKA SDP buffer prototyping and the Cambridge Data Accelerator.

John Garbutt at Cloud WG Workshop 2019

Pictured here with David Yuan of EMBL and Matt Pryor of STFC

Mark Goddard presented our work on Kayobe, a free and open source deployment tool for containerised OpenStack control planes, based on Kolla and Kolla-Ansible, and embodying current best practices. Kayobe is seeing broad adoption for research computing configurations and use cases.

Mark Goddard at Cloud WG Workshop 2019

Bharat Kunwar delivered a demonstration of Pangeo, the second of the day after Jacob Tomlinson presented the work of the Met Office Informatics Lab. With a focus on data-intensive analytics on private cloud infrastructure, Bharat demonstrated the deployment of Pangeo on a bare metal HPC OpenStack deployment, using Kubernetes deployed by Magnum. In addition to demonstrating containers running on bare metal, Bharat demonstrated storage attachments backed by Ceph and RDMA-enabled BeeGFS. All of that in ten minutes!

Bharat Kunwar at Cloud WG Workshop 2019

by Stig Telfer at February 14, 2019 02:00 PM

February 13, 2019

CERN Tech Blog

RadosGW Keystone Sync

We have recently enabled an S3 endpoint in the CERN private cloud. This service is offered by RadosGW on top of a Ceph cluster. This storage resource complements the cloud offering and allows our users to store object data using the S3 or Swift APIs. To enable the validation, you need to configure RadosGW to validate keys against the Identity service (Keystone) in OpenStack, and then create the new service and endpoint in the Identity API.
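As a rough illustration (not CERN's actual configuration), the Keystone integration on the RadosGW side is driven by a handful of ceph.conf options. The option names come from the Ceph RadosGW documentation; the client section name and all values below are placeholders:

```ini
; ceph.conf fragment for the RadosGW instance (values are examples only)
[client.rgw.gateway]
rgw keystone url =
rgw keystone admin user = rgw
rgw keystone admin password = secret
rgw keystone admin project = service
rgw keystone admin domain = default
rgw keystone api version = 3
rgw s3 auth use keystone = true
```

On the OpenStack side, the object-store service and its endpoint are then registered in Keystone with the usual `openstack service create` and `openstack endpoint create` commands.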

by CERN at February 13, 2019 01:05 PM

February 12, 2019

Emilien Macchi

OpenStack Containerization with Podman – Part 5 (Image Build)

For this fifth episode, we'll explain how we build containers with Buildah. Don't miss the first, second, third and fourth episodes, where we learnt how to deploy, operate, upgrade and monitor Podman containers.

In this post, we'll see how we can replace Docker with Buildah to build our container images.


In OpenStack TripleO, we have nearly 150 images (all layers included) for all the services that we can deploy. Of course you don’t need to build them all when deploying your OpenStack cloud, but in our production chain we build them all and push the images to a container registry, consumable by the community.

Historically, we have been using “kolla-build”, and the process used to build the TripleO images is documented here.


kolla-build only supports the Docker CLI at this time, and we recognized that changing its code to support something else would be painful, as Docker is hardcoded almost everywhere.

We decided to leverage kolla-build to generate the templates of the images, which is actually a tree of Dockerfiles, one per container.

The dependencies format generated by Kolla is JSON:

So what we do is that when running:

openstack overcloud container image build --use-buildah

We will call kolla-build with --list-dependencies, which generates a directory per image, containing a Dockerfile plus the other things needed during the builds.

Anyway, bottom line is: we still use Kolla to generate our templates but don’t want Docker to actually build the images.

In tripleo-common, we are implementing build and push commands that will leverage “buildah bud” and “buildah push”.

“buildah bud” is a good fit for us because it allows us to use the same logic and format as before with Docker (bud == build-using-dockerfile).

The main challenge for us is that our images aren't small, and we have a lot of images to build in our production chain. So we decided to parallelize the last layers of the images (those which don't have children).

For example, 2 images at the same layer level will be built together, and a child won't be built in parallel with its parent layer.

Here is a snippet of the code that will take the dependencies dictionary and build our containers:
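The actual snippet is not reproduced in this excerpt. As a hypothetical sketch (not the tripleo-common code; the `deps`, `build` and `build_all` names are made up for illustration), building level by level from a parent map might look like:

```python
import concurrent.futures

# Hypothetical dependency map (image -> parent image, None for the base
# layer); the real tree comes out of "kolla-build --list-dependencies".
deps = {
    "base": None,
    "openstack-base": "base",
    "nova-base": "openstack-base",
    "nova-api": "nova-base",
    "nova-compute": "nova-base",
    "keystone": "openstack-base",
}

def build(image):
    # The real code shells out to "buildah bud" for each image.
    return "built " + image

def build_all(deps, max_workers=8):
    built, results = set(), []
    while len(built) < len(deps):
        # An image is ready once its parent has already been built.
        ready = [img for img, parent in deps.items()
                 if img not in built and (parent is None or parent in built)]
        # Images at the same level are built in parallel; a child is
        # never built alongside its own parent.
        with concurrent.futures.ThreadPoolExecutor(max_workers) as pool:
            results.extend(pool.map(build, ready))
        built.update(ready)
    return results

print(build_all(deps))
```

Capping max_workers at 8 mirrors the worker cap described below.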

Without the “fat daemon” that is Docker, using Buildah brings some challenges: running multiple builds at the same time can be slow because of the locks used to avoid race conditions and database corruption. So we capped the number of workers at 8, so that Buildah doesn't lock too aggressively on the system.

What about performance? This question is still under investigation. We are still testing our code and measuring how much time it takes to build our images with Buildah. One thing is sure: you don't want to use the vfs storage backend; use overlayfs instead. To do so, you'll need to run at least Fedora 28 with a 4.18 kernel and install fuse-overlayfs, and Buildah should then use this backend by default.
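For reference, the storage backend is selected in the containers storage configuration; a sketch (the file location and the fuse-overlayfs path may vary by distribution):

```toml
# /etc/containers/storage.conf
[storage]
driver = "overlay"

[storage.options]
# fuse-overlayfs enables overlay without requiring native kernel support
mount_program = "/usr/bin/fuse-overlayfs"
```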


A recorded demo is embedded in the original post (best viewed full screen).

In the next episode, we’ll see how we are replacing the docker-registry by a simple web server. Stay tuned!

by Emilien at February 12, 2019 03:30 AM

February 11, 2019


python-tempestconf’s journey

For those who are not familiar with the python-tempestconf, it’s a tool for generating a tempest configuration file, which is required for running Tempest tests against a live OpenStack cluster. It queries a cloud and automatically discovers cloud settings, which weren’t provided by a user.

Internal project

In August 2016, the config_tempest tool was decoupled from the Red Hat Tempest fork and the python-tempestconf repository was created under the github redhat-openstack organization. The tool became an internal tool used for generating tempest.conf in downstream jobs which run Tempest.

Why we like `python-tempestconf`

The reason is quite simple. We at Red Hat were (and still are) running many different OpenStack jobs, with different configurations, that execute Tempest. That's where python-tempestconf stepped in. We didn't have to implement the logic for creating or modifying tempest.conf within each job configuration; we just used python-tempestconf, which did that for us. It's not only about generating the tempest.conf itself: the tool also creates basic users, uploads an image and creates basic flavors, all of which are required for running Tempest tests.

Usage of python-tempestconf was also beneficial for engineers who liked the idea of not struggling with creating a tempest.conf file from scratch but rather using the tool which was able to generate it for them. The generated tempest.conf was sufficient for running simple Tempest tests.

Imagine you have a fresh OpenStack deployment and you want to run some Tempest tests to make sure the deployment was successful. You can run python-tempestconf, which will do the basic configuration for you and generate a tempest.conf, and then execute Tempest. That's it. Isn't it easy?

I have to admit that when I joined Red Hat, and more specifically the OpenStack team, I kind of struggled with all the information about OpenStack and Tempest; it was too much new information. Therefore I really liked that I could generate a tempest.conf to use for running just the basic tests. If I had had to write the tempest.conf myself, my learning process would have been a little slower. I'm really grateful that we had the tool at that time.

Shipping in a package

At the beginning of 2017 we started to ship a python-tempestconf rpm package. It's available in RDO repositories from Ocata onwards. The python-tempestconf package is also installed as a dependency of the openstack-tempest package, so if a user installs openstack-tempest, python-tempestconf will be installed as well. At this time we also changed the entry point, and the tool is now executed via the discover-tempest-config command. However, you may have already read all about it in this article.

Upstream project

By the end of 2017 python-tempestconf became an upstream project and got under OpenStack organization.

We have significantly improved the tool since then, not only its code but also its documentation, which contains all the information a user needs; see here. In my opinion, every project designed for a wider audience of users (python-tempestconf is an upstream project, so this condition is fulfilled) should have proper documentation. Following python-tempestconf's documentation, any user should be able to execute it, set the desired arguments and set some special Tempest options without any major problems.

I would say that there are 3 big improvements. One of them is the user documentation, which I've already mentioned. The second and third are improvements to the code itself: the os-client-config integration, and refactoring of the code to simplify adding new OpenStack services the tool can generate config for.

os-client-config is a library for collecting client configuration for using an OpenStack cloud in a consistent way. By importing the library, a user can specify OpenStack credentials in 2 different ways:

  • Using OS_* environment variables, which is perhaps the most common way. It requires sourcing credentials before running python-tempestconf. In a packstack environment that's the keystonerc_admin/demo file, and in devstack there is the openrc script.
  • Using the --os-cloud parameter, which takes one argument: the name of the cloud which holds the required credentials. Those are stored in a clouds.yaml file.
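To illustrate the two lookup paths, here is a self-contained sketch, not the os-client-config implementation; the `resolve_credentials` function, the `CLOUDS` dict and the "devstack" entry are all made up for illustration:

```python
import os

# Stand-in for the contents of a clouds.yaml file (hypothetical values)
CLOUDS = {
    "devstack": {"auth_url": "", "username": "demo"},
}

def resolve_credentials(os_cloud=None, environ=None):
    environ = os.environ if environ is None else environ
    if os_cloud:
        # --os-cloud <name>: look the named cloud up in clouds.yaml
        return CLOUDS[os_cloud]
    # Otherwise fall back to previously sourced OS_* environment variables
    return {
        "auth_url": environ.get("OS_AUTH_URL"),
        "username": environ.get("OS_USERNAME"),
    }

print(resolve_credentials(os_cloud="devstack")["username"])
```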

The second code improvement was the simplification of adding new OpenStack services the tool can generate a tempest.conf for. If you want a service added, just create a story in our storyboard; see python-tempestconf's contributor guide. If you feel like it, you can also implement it yourself: adding a new service requires creating a new file representing the service and implementing a few required methods.

To conclude

The tool has gone through major refactoring and has improved significantly since it was moved to its own repository in August 2016. If you're a Tempest user, I'd recommend trying python-tempestconf if you haven't already.

by Martin Kopec at February 11, 2019 02:50 PM

Chris Dent

Placement Container Playground 9

This is the ninth in a series about running the OpenStack placement service in a container. The previous update was Playground 8.

The container playground series introduced running placement in Kubernetes in Playground 4 and then extended it in Playground 5 to add a Horizontal Pod Autoscaler.

But it was very clumsy. For some other work, I've needed to learn about Helm. I was struggling to get traction, so I figured the best way to learn how things worked was to make a Helm chart for placement. There already is one in openstack-helm, but it is embedded in the openstack-helm ecosystem and not very playgroundy. So I set out to play.

The result of that work is in a pull request to placedock (since merged). A relatively simple helm-chart is built from the starting points provided by helm create. It started out simply deploying a placement service with an internal database, but through iteration it will now set up ingress handling and the aforementioned autoscaler with:

helm install --set ingress.enabled=true \
             --set replicaCount=0 \
             --name placement

(replicaCount=0 is used to signal "make me some autoscaling, not a fixed set of replicas".)
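One hypothetical way a chart can implement that signal (not necessarily how placedock's chart does it) is to gate both the Deployment's replica count and the HPA's existence on the same value:

```yaml
# templates/deployment.yaml (fragment): only pin replicas when > 0
spec:
  {{- if gt (int .Values.replicaCount) 0 }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}

# templates/hpa.yaml (fragment): only render the autoscaler when 0
{{- if eq (int .Values.replicaCount) 0 }}
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
{{- end }}
```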

There's more info in the placedock README.

by Chris Dent at February 11, 2019 11:00 AM

Sean McGinnis

Jan 2019 OpenStack Board Notes

Chris Dent used to post regular, semi-subjective takes on what was going on in the OpenStack Technical Committee. Even though I was there for most of the conversations being recapped, I found these very valuable. There were many times when multiple things were discussed over the course of a week, often overlapping, and usually competing for mental attention. Having these regular recaps helped me remember and keep up with what was going on. I also found the subjective nature more useful than “just the facts” summaries, as a way to understand perspectives on topics that I sometimes agreed with, sometimes didn't, and often just hadn't taken into consideration.

That was all while I was actively involved in the TC. So I can imagine that for someone interested in what is going on, but not able to devote the significant time it takes to read IRC logs, mailing list posts and patch reviews to really stay on top of everything, these kinds of recaps are really the best way to get that type of information.

This is my first attempt to take that kind of communication and apply it to the OpenStack Foundation Board meetings. The Board of Directors meets several times a year, usually via conference calls, with a few face-to-face meetings where possible around Summits and other events. Hopefully these recaps will be useful for anyone interested in what is happening at the board level. I'm also hoping they help me be better about taking notes and keeping track of things, so hopefully it's a win-win.

January 29, 2019 OpenStack Foundation Board Meeting

The original agenda can be found here and the official minutes will be posted here. Jonathan Bryce also usually sends out unofficial minutes to the Foundation mailing list. The January 29th notes can be found here.


Alan Clark started things off with the typical procedural stuff for meetings like this. Roll call was taken to make sure there was quorum. Minutes from the last board meeting in December were voted on and officially approved.

These meetings just recently switched from WebEx to Zoom. Apparently, as a result, the normal meeting reminders that folks were used to were not sent out, so a few people were absent or late. But I think overall there was a pretty good showing among the 28 board members. I believe only six board members were not in attendance, which, considering time zones, seems reasonable.

Alan thanked the outgoing board members for their time on the board. Five of us, across Platinum, Gold and Individual members, were new to the board, and we each had a few minutes to introduce ourselves and give a little background before moving on to business.

Policy Reminders

Especially useful for the new folks like me, there was a reminder about some of the policies that board members need to follow. I was aware of some, but definitely not all of these. I think it’s useful to include here:

Reading the Transparency policy was useful, and I especially liked seeing there is a heading title “General Policy Favoring Transparency”.

Meeting Schedule

Two face-to-face meetings are planned this year: one before the next Open Infrastructure Summit in Denver in April, and another, date to be determined, before the following Summit in Shanghai, China in early November.

Speaking of which, up until that point we knew the Summit was planned in China but no official location was announced. It is now official that the Open Infrastructure Summit China will be held the week of November 4 in Shanghai.

Committee Updates

There are various committees within the board to work on different areas. The list of committees can be found on the Foundation wiki.

Most of the discussion was just making everyone aware of these. There are some listed on the wiki that are just there for historical purposes.

Since financial issues have a lot of impact on the community, especially now that there are fewer sponsors than there were a few years ago, I have decided to join the Compensation Committee. There are a few others that I find interesting and think are important, but I will wait a little bit before signing up for more. I know I have a tendency to want to sign up for anything I think I can help with without really thinking about the time commitment, so I am trying to be better about not raising my hand too much.

OpenStack Foundation Pilot Project Guidelines

Allison Randall gave a report on the effort to come up with a set of written guidelines for new projects coming in as pilot projects under the OpenStack Foundation.

As the Foundation expands its scope to include more projects beyond OpenStack to support “open infrastructure”, I think it's important that we are careful about what we include, to stay true to our existing community and our core identity. This group is working on writing a set of guidelines to help ensure that happens.

The current draft guidelines can be found here.

Some of these I would like to see made a little more specific, or reworded to be less subjective (what does following “Technical best practices” mean?), but I am happy to see things like “Open collaboration” called out. I am also a little concerned about how open governance is left.

To me, the Four Opens are what really defined the OpenStack community compared to other open source environments I had seen. I will have a hard time feeling comfortable with adding any new projects that do not at least strive to follow the Four Opens.

Foundation Update

Jonathan and team wrapped things up with a staff update. The slides can be found here.

It was great to see that there has actually been a 33% increase in community membership over the last year. But looking at the activity in the projects, this tells me that more and more involvement is casual or part-time. We have been working on making things more welcoming to new contributors and talking about ways to make contributing easier for those who aren't spending a significant part of their work week on OpenStack and related projects. I think these efforts are very important and will be critical to our ability to get more done going forward.

Following on from Allison’s update, there was a large part about the expanding role of the Foundation and the projects currently in the Pilot phase. Right now, these are Airship, Kata Containers, StarlingX, and Zuul.

There was some good coverage of where the Foundation can play a part in an expanded role of promoting open infrastructure. The main goals for this year were reported as strengthening OpenStack (brand and community), helping expand market opportunities for open infrastructure, and evolving the business model of the Foundation into this broader-scoped entity.

I’ll probably have more to say on some of those soon, but overall I think it was a good update and, all things considered, I think we are on the right track and at least paying attention to the things we need to going forward.

by Sean McGinnis at February 11, 2019 12:00 AM

February 08, 2019

StackHPC Team Blog

Scientific OpenStack Hackathon

This week we have been hosting a gathering of technical teams from a number of prominent UK scientific institutions affiliated to the IRIS consortium, including the Culham Centre for Fusion Energy, Manchester University, the Royal Observatory Edinburgh, Cambridge University, Rutherford Appleton Laboratory and the Diamond Light Source. We were also joined by our friends from Bristol is Open.

The group was gathered for a hackathon, aimed at helping to spread technical knowledge about Kolla, Kolla-Ansible and Kayobe, and how they can be used together to create OpenStack deployments optimised for scientific computing use cases - a concept we informally refer to as Scientific OpenStack.

Kayobe is a free and open source deployment tool for containerised OpenStack control planes, embodying current best practices. Kayobe is seeing broad adoption for research computing configurations and use cases.

Scientific OpenStack hackathon 2019

Aside from helping make progress with many new OpenStack projects, a secondary aim of the hackathon has been to cement a strong set of inter-institutional technical relationships, along with other users of Kayobe worldwide and the OpenStack Scientific SIG, enabling a self-supporting community to grow in this space.

by Stig Telfer at February 08, 2019 10:00 PM

February 06, 2019

Stephen Finucane

Updating the Firmware for a Mellanox ConnectX-3 NIC

In a previous post, I provided a guide on configuring SR-IOV for a Mellanox ConnectX-3 NIC. I've since picked up a second one of these and was attempting to follow through on the same guide. However, when I attempted to "query" the device, I saw the following:

$ sudo mstconfig -d 02:00.0 query
Device #1:
----------
Device type:    ConnectX3
PCI device:     02:00.0
-E- Failed to query device: 02:00.0. Unsupported FW (version 2.

February 06, 2019 03:41 PM

Trinh Nguyen

Searchlight weekly report - Stein R-12,11,10,9

For the last four weeks, we have been working on hardening our multi-cloud vision and preparing for the Open Infrastructure Summit in Denver this April [1]. The team submitted one session to discuss and showcase our progress on implementing the multi-cloud features [2] and is waiting for the voting results.

For the Denver summit, we decided to give a demonstration of Searchlight that has:

  • Search resources across multiple OpenStack Clouds [3]
  • Frontend UI that adds the views for multi-cloud search [4]

So, from now until the summit, we will focus on developing the [3] and [4] features for Searchlight. For more details about our multi-cloud vision for Searchlight, please have a look at [5].

Btw, It's the Lunar New Year now in Viet Nam. HAPPY NEW YEAR!!!



by Trinh Nguyen at February 06, 2019 07:32 AM

February 05, 2019

Carlos Camacho

TripleO - Deployment configurations

This post is a summary of the deployments I usually test for deploying TripleO using quickstart.

The following steps need to run in the Hypervisor node in order to deploy both the Undercloud and the Overcloud.

You need to execute them one after the other; the idea of this recipe is to have something ready for copying and pasting.

Once the last step ends, you should be able to connect to the Undercloud VM to start operating your Overcloud deployment.

The usual steps are:

01 - Prepare the hypervisor node.

Now, let’s install some dependencies. Same Hypervisor node, same root user.

# In this dev. env. /var is only 50GB, so I will create
# a sym link to another location with more capacity.
# It will easily take more than 50GB to deploy a 3+1 overcloud
sudo mkdir -p /home/libvirt/
sudo ln -sf /home/libvirt/ /var/lib/libvirt

# Disable IPv6 lookups
# sudo bash -c "cat >> /etc/sysctl.conf" << EOL
# net.ipv6.conf.all.disable_ipv6 = 1
# net.ipv6.conf.default.disable_ipv6 = 1
# sudo sysctl -p

# Enable IPv6 in kernel cmdline
# sed -i s/ipv6.disable=1/ipv6.disable=0/ /etc/default/grub
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot

sudo yum groupinstall "Virtualization Host" -y
sudo yum install git lvm2 lvm2-devel -y
sudo yum install libvirt-python python-lxml libvirt -y

02 - Create the toor user (from the Hypervisor node, as root).

sudo useradd toor
echo "toor:toor" | sudo chpasswd
echo "toor ALL=(root) NOPASSWD:ALL" \
  | sudo tee /etc/sudoers.d/toor
sudo chmod 0440 /etc/sudoers.d/toor
sudo su - toor

mkdir .ssh
ssh-keygen -t rsa -N "" -f .ssh/id_rsa
cat .ssh/ >> .ssh/authorized_keys
cat .ssh/ | sudo tee -a /root/.ssh/authorized_keys
echo '' | sudo tee -a /etc/hosts

export VIRTHOST=
ssh root@$VIRTHOST uname -a

Now, follow as the toor user and prepare the Hypervisor node for the deployment.

03 - Clone repos and install deps.

git clone \
chmod u+x ./tripleo-quickstart/
bash ./tripleo-quickstart/ \
sudo setenforce 0

Export some variables used in the deployment command.

04 - Export common variables.

export CONFIG=~/deploy-config.yaml
export VIRTHOST=

Now we will create the configuration file used for the deployment, depending on the file you choose you will deploy different environments.

05 - Choose one of the following environment recipes.

OpenStack [Containerized & HA] - 1 Controller, 1 Compute

cat > $CONFIG << EOF
  - name: control_0
    flavor: control
    virtualbmc_port: 6230
  - name: compute_0
    flavor: compute
    virtualbmc_port: 6231
node_count: 2
containerized_overcloud: true
delete_docker_cache: true
enable_pacemaker: true
run_tempest: false
extra_args: >-
  --libvirt-type qemu
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml
EOF
OpenStack [Containerized & HA] - 3 Controllers, 1 Compute

cat > $CONFIG << EOF
  - name: control_0
    flavor: control
    virtualbmc_port: 6230
  - name: control_1
    flavor: control
    virtualbmc_port: 6231
  - name: control_2
    flavor: control
    virtualbmc_port: 6232
  - name: compute_1
    flavor: compute
    virtualbmc_port: 6233
node_count: 4
containerized_overcloud: true
delete_docker_cache: true
enable_pacemaker: true
run_tempest: false
extra_args: >-
  --libvirt-type qemu
  --control-scale 3
  --compute-scale 1
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml
EOF
OpenShift [Containerized] - 1 Controller, 1 Compute

cat > $CONFIG << EOF
# Original from
composable_scenario: scenario009-multinode.yaml
deployed_server: true

network_isolation: false
enable_pacemaker: false
overcloud_ipv6: false
containerized_undercloud: true
containerized_overcloud: true

# This enables TLS for the undercloud which will also make haproxy bind to the
# configured public-vip and admin-vip.
undercloud_generate_service_certificate: false
undercloud_enable_validations: false

# This enables the deployment of the overcloud with SSL.
ssl_overcloud: false

# Centos Virt-SIG repo for atomic package
  # NOTE(trown) The atomic package from centos-extras does not work for
  # us but its version is higher than the one from the virt-sig. Hence,
  # using priorities to ensure we get the virt-sig package.
  - type: package
    pkg_name: yum-plugin-priorities
  - type: generic
    reponame: quickstart-centos-paas
    filename: quickstart-centos-paas.repo
  - type: generic
    reponame: quickstart-centos-virt-container
    filename: quickstart-centos-virt-container.repo
      - atomic
    priority: 1

extra_args: ''

container_args: >-
  # If Pike or Queens
  #-e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml
  # If Ocata, Pike, Queens or Rocky
  #-e /home/stack/containers-default-parameters.yaml
  # If >= Stein
  -e /home/stack/containers-prepare-parameter.yaml

  -e /usr/share/openstack-tripleo-heat-templates/openshift.yaml
# NOTE(mandre) use container images mirrored on the dockerhub to take advantage
# of the proxy setup by openstack infra
docker_openshift_cluster_monitoring_image: coreos-cluster-monitoring-operator
docker_openshift_configmap_reload_image: coreos-configmap-reload
docker_openshift_prometheus_operator_image: coreos-prometheus-operator
docker_openshift_prometheus_config_reload_image: coreos-prometheus-config-reloader
docker_openshift_kube_rbac_proxy_image: coreos-kube-rbac-proxy
docker_openshift_kube_state_metrics_image: coreos-kube-state-metrics

deploy_steps_ansible_workflow: true
config_download_args: >-
  -e /home/stack/config-download.yaml
composable_roles: true

  - name: Controller
    CountDefault: 1
      - primary
      - controller
      - External
      - InternalApi
      - Storage
      - StorageMgmt
      - Tenant
  - name: Compute
    CountDefault: 0
      - compute
      - External
      - InternalApi
      - Storage
      - StorageMgmt
      - Tenant

tempest_config: false
test_ping: false
run_tempest: false
EOF

From the Hypervisor, as the toor user run the deployment command to deploy both your Undercloud and Overcloud.

06 - Deploy TripleO.

bash ./tripleo-quickstart/ \
      --clean          \
      --release master \
      --teardown all   \
      --tags all       \
      -e @$CONFIG      \

Updated 2019/02/05: Initial version.

Updated 2019/02/05: TODO: Test the OpenShift deployment.

Updated 2019/02/06: Added some clarifications about where the commands should run.

by Carlos Camacho at February 05, 2019 12:00 AM

February 03, 2019

Sean McGinnis

Gofish - A golang client library for Redfish and Swordfish

One of the things I have been looking into for the OpenSDS project is adding support for the Swordfish API standard. I believe that by providing a standardized API, OpenSDS could both make it attractive for storage vendors to natively integrate with OpenSDS - with one integration they would pick up support for CSI, Cinder, Swordfish, and other integration points - and make OpenSDS an interesting option for those needing to manage a heterogeneous mix of storage devices in their data center in a common, single-pane-of-glass way.

We are still looking at how this could be integrated and how much focus it should have. But it became clear to me that we would need a good way to exercise this API if/when it is in place.

We are also considering southbound integration in OpenSDS to be able to manage Swordfish-enabled storage. This would enable OpenSDS support for storage without needing to write any custom code if they have already made the investment in exposing a Swordfish API. If we were to do that, we would need a client library that would allow us to easily interact with these devices.

So between the two approaches being considered, and wanting to get a little more time and experience with golang, I decided to start implementing an open library for Redfish and Swordfish. I've just pushed up a very rudimentary start with the Gofish library.

What is Swordfish

The DMTF released the first Redfish standard in 2015. The goal of Redfish was to replace IPMI and other proprietary APIs for managing data center devices with a simple, standard, RESTful API that would provide consistency and ease of use across physical, hybrid, and virtual IT devices.

For years, the Storage Network Industry Association (SNIA) had been working on SMI-S as a standard storage management protocol. There was industry adoption, but there was still a general sense that SMI-S was too big and too complex for both implementors and consumers of the API.

SNIA recognized the simplicity and ease of use of the Redfish specification and chose to extend the DMTF spec with storage related objects and capabilities. These additions can be seen (at a high level) with the purple objects in this model from a presentation by Richelle Ahlvers, the chair of the Scalable Storage Management Technical Working Group (SSM TWG):

Object model

Introducing Gofish

To start, I just used the API responses from the Swordfish API emulator as a reference to get a very basic client library working which I have called Gofish. Cause I’m just so clever and witty like that.

This is very, very rudimentary at this point. There is no authentication mechanism yet, something that any real Redfish or Swordfish provider will surely require. The object model is also very limited and incomplete. Luckily, all of the Redfish and Swordfish schemas are published in easy-to-consume YAML or JSON formats, which should make it easy to script the generation of the rest of the schema.
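Since Redfish is plain JSON over HTTP, a client library like this mostly walks `@odata.id` links and maps the payloads onto typed objects. A small illustration in Python (the payload below is a hand-written stand-in for a service root document, not real emulator output):

```python
import json

# Hand-written stand-in for a Redfish service root document; real
# services serve something like this at /redfish/v1 over HTTPS.
service_root = json.loads("""
{
    "@odata.id": "/redfish/v1",
    "Id": "RootService",
    "Name": "Root Service",
    "Systems": {"@odata.id": "/redfish/v1/Systems"}
}
""")

# A client library follows these @odata.id links to discover resources
# such as systems, chassis, and (for Swordfish) storage services.
systems_uri = service_root["Systems"]["@odata.id"]
```

The same link-walking pattern applies to every Redfish and Swordfish collection, which is why generating the object model from the published schemas is attractive.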

Even in its basic state, I did want to get this out there in case anyone else is interested. I’ve published this under the Apache 2 license in a public GitHub repo, so anyone that is interested in contributing, feel free to propose pull requests and try out the code.

I am hoping to work on this as time permits to make it more full-featured and robust. I am limited by my access to Redfish and Swordfish enabled devices, so I may be constrained until I can track down something more than the emulator to use for testing. But hopefully this project evolves into something useful as more devices implement this support and more management tools are developed to take advantage of it.

by Sean McGinnis at February 03, 2019 12:00 AM

January 31, 2019

SUSE Conversations

Why was 2018 a landmark year for Cloud computing? And what’s next on the horizon?

Let me start with an obvious statement: cloud computing is continuing to grow – and really fast. That’s hardly headline news anymore. “Cloud” has been around for nearly 20 years now and it’s a constant theme within the IT industry. A big year for cloud So, why was last year such a standout year for […]

The post Why was 2018 a landmark year for Cloud computing? And what’s next on the horizon? appeared first on SUSE Communities.

by Terri Schlosser at January 31, 2019 01:00 PM

January 29, 2019

John Likes OpenStack

How do I re-run only ceph-ansible when using tripleo config-download?

After config-download runs the first time, you may do the following:

cd /var/lib/mistral/config-download/
bash ansible-playbook-command.sh --tags external_deploy_steps

The above runs only the external deploy steps, which for the ceph-ansible integration means running the ansible that generates the inventory and then executing ceph-ansible.

More on this in TripleO config-download User’s Guide: Deploying with Ansible.

If you're using the standalone deployer, then config-download does not provide this for you. You can work around it by doing the following:

cd /root/undercloud-ansible-su_6px97
ansible -i inventory.yaml -m ping all
ansible-playbook -i inventory.yaml -b deploy_steps_playbook.yaml --tags external_deploy_steps

The above makes the following assumptions:

  • You ran standalone with `--output-dir=$HOME` as root and that undercloud-ansible-su_6px97 was created by config download and contains the downloaded playbooks. Use `ls -ltr` to find the latest version.
  • If you're using the newer python3-only versions you ran something like `ln -s $(which ansible-3) /usr/local/bin/ansible`
  • That config-download already generated the overcloud inventory.yaml (the second command above is just to test that the inventory is working)

by John ( at January 29, 2019 10:26 PM

January 28, 2019

SUSE Conversations

SUSE OpenStack Cloud 9 Beta 7 is out!

We are happy to announce the release of SUSE OpenStack Cloud 9 Beta 7! Cloud Lifecycle Manager is available Today we are releasing SUSE OpenStack Cloud 9 CLM along with SUSE OpenStack Cloud 9 Crowbar! You will now find it in the download area: SUSE-OPENSTACK-CLOUD-9-x86_64-Beta7-DVD1.iso the ISO to install Cloud 9 with Cloud Lifecycle Manager, […]

The post SUSE OpenStack Cloud 9 Beta 7 is out! appeared first on SUSE Communities.

by Vincent Moutoussamy at January 28, 2019 02:59 PM

January 25, 2019

Chris Dent

Placement Update 19-03

Hello, here's a quick placement update. This will be just a brief summary rather than the usual tome. I'm in the midst of some other work.

Most Important

Work to complete and review changes to deployment to support extracted placement is the main thing that matters.

The next placement extraction status checkin will be 17.00 UTC, February 6th.

What's Changed

  • Changes to allow database status checks in the placement-status upgrade check have either merged or will soon. These combine with online data migrations to ensure that an upgraded installation has healthy consumers and resource providers.
  • libvirt vgpu reshaper code is ready for review and has an associated functional test. When that stuff merges the main remaining extraction-related tasks are in the deployment tools.
  • os-resource-classes 0.2.0 was released, adding the PCPU class.


Main Themes



Deployment related changes:

Delete placement from nova.


Please refer to last week for lots of pending changes.

by Chris Dent at January 25, 2019 12:47 PM

January 23, 2019


Upgrade Cinder : what’s new for block storage service.

Dear Users,

The block storage service is now at the Pike version on our two regions. This version of the component offers some new features and improvements.

New Cinder API version

A new endpoint for block storage service, corresponding to the new API 3.0, has been added in the service catalog. This API is implemented using the microversion framework, allowing changes to the API while maintaining backwards compatibility.

The basic idea is that your request can be processed with a particular version of the API. This is done with an HTTP OpenStack-API-Version header, whose version number increases semantically from 3.0.

OpenStack-API-Version: volume <version> 

To use microversions you can export the OS_VOLUME_API_VERSION variable with the microversion you wish to use.

The 3.0 microversion includes all the main v2 APIs, and the /v3 URL is used to call the 3.0 APIs.
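Concretely, a client pinning a microversion just sets that header on each request. A minimal sketch using only the Python standard library (the endpoint URL is a placeholder, not one of our real endpoints):

```python
import urllib.request

# The microversion you would otherwise export as OS_VOLUME_API_VERSION.
OS_VOLUME_API_VERSION = "3.40"

# Placeholder endpoint; a real client reads the /v3 URL from the
# service catalog and adds an auth token header as well.
req = urllib.request.Request("https://volume.example.com/v3/volumes")
req.add_header("OpenStack-API-Version",
               "volume %s" % OS_VOLUME_API_VERSION)
```

The python-cinderclient CLI builds this header for you from the OS_VOLUME_API_VERSION environment variable.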

New features

Revert volume to snapshot

This feature allows you to revert a volume to the state it was in when you created its latest snapshot, without needing to create a new volume. To revert a volume, it has to be detached, and the revert is possible only to the latest snapshot taken of the volume.

The volume restoration will work even if the volume has been extended since the last snapshot.

The cinder API microversion to use for this feature is 3.40.

Full OpenStack specifications for this feature

Usage example:

cinder revert-to-snapshot <latest_snapshot> 

Volume and snapshot groups

This new feature allows you to:

  • group volumes from the same application to simplify their management

  • make snapshots of multiple volumes from the same group at the same time to ensure data consistency

The cinder API microversions for this functionality are 3.13, 3.14, and 3.15.

Full OpenStack documentation

Usage example:


To create a volume group you need a group type. To list available group types:

cinder group-type-list 

| ID                                   | Name                   | Description                                 |
|--------------------------------------|------------------------|---------------------------------------------|
| 56ce02a6-282e-444e-aded-619096303e36 | consistency_group_type | Default group type for consistent snapshots |


This group type has the extra spec: consistent_group_snapshot_enabled set to True.

Volume group creation:

cinder group-create 56ce02a6-282e-444e-aded-619096303e36 <volume-type> 

To add an existing volume to this group:

cinder group-update --add-volumes <volume_id> <group_id> 

Creation of a new volume in this group:

cinder create --group-id <group_id> --volume-type <volume-type> ... 

To see the consistency group a volume belongs to:

cinder show <volume-id> 

Snapshot creation of all the volumes belonging to the same group:

cinder group-snapshot-create <group_id> 

To see if a snapshot belongs to a group:

cinder snapshot-show <snapshot_id> 

Deletion of a snapshot group and all its snapshots:

cinder group-snapshot-delete <group_snapshot_id> 

Deletion of a volume group and all its volumes:

cinder group-delete --delete-volumes True <group_id> 

The Cloudwatt support team remains at your disposal to answer all your questions:

by Nolwenn Cauchois at January 23, 2019 12:00 AM

January 22, 2019

Open edX and OpenStack for complex learning environments

This combination can close the skills gap by enabling IT professionals to acquire critical skills in complex distributed systems technology from any location.

by fghaas at January 22, 2019 08:01 AM

January 19, 2019

OpenStack in Production

OpenStack In Production - moving to a new home

During 2011 and 2012, CERN IT took a new approach to how to manage the infrastructure for analysing the data from the LHC and other experiments. The Agile Infrastructure project was formed covering service provisioning, configuration management and monitoring by adopting commonly used open source solutions with active communities to replace the in house tool suite.

In 2019, the CERN cloud managed infrastructure has grown by a factor of 10 compared to the resources in 2013. This has been achieved in collaboration with the many open source communities we have worked with over the past years, including
  • OpenStack
  • RDO
  • CentOS
  • Puppet
  • Foreman
  • Elastic
  • Grafana
  • Collectd
  • Kubernetes
  • Ceph
These experiences have been shared in over 40 blog posts and more than 100 different talks at open source events during this time, from small user groups to large international conferences.

The OpenStack-In-Production blog has been covering the experiences, with a primary focus on the development of the CERN cloud service. However, the challenges of the open source world now cover many more projects, so it is time to move to a new blog covering not only work on OpenStack but other communities and the approaches to integrating these projects into the CERN production infrastructure.

Thus, this blog will be moving to its new home at, incorporating our experiences with these other technologies. For those who would like to follow a specific subset of our activities, there is also taxonomy-based content to select new OpenStack articles at and the legacy blog content at

One of the most significant benefits we've found from sharing is receiving the comments from other community members. These often help to guide us in further investigations on solving difficult problems and to identify common needs to work on together upstream.

Look forward to hearing from you all on the techblog web site.


by Tim Bell ( at January 19, 2019 09:31 AM

January 18, 2019

Chris Dent

Placement Update 19-02

Hi! It's a placement update! The main excitement this week is we had a meeting to check in on the state of extraction and figure out the areas that need the most attention. More on that in the extraction section within.

Most Important

Work to complete and review changes to deployment to support extracted placement is the main thing that matters.

What's Changed

  • Placement is now able to publish release notes.

  • Placement is running python 3.7 unit tests in the gate, but not functional (yet).

  • We had that meeting and Matt made some notes.



Last week was spec freeze so I'll not list all the specs here, but for reference, there were 16 specs listed last week and all 16 remain open: neither merged nor abandoned.

Main Themes

The reshaper work was restarted after discussion at the meeting surfaced its stalled nature. The libvirt side of things is due some refactoring while the xenapi side is waiting for a new owner to come up to speed. Gibi has proposed a related functional test. All of that at:

Also making use of nested is this spectacular stack of code at bandwidth-resource-provider:

Eric's in the process of doing lots of cleanups to how often the ProviderTree in the resource tracker is checked against placement, and a variety of other "let's make this more right" changes in the same neighborhood:

That stuff is very close to ready and will make lots of people happy when it merges. One of the main areas of concern is making sure it doesn't break things for Ironic.


As noted above, there was a meeting which resulted in Matt's Notes, an updated extraction etherpad, and an improved understanding of where things stand.

The critical work to ensure a healthy extraction is with getting deployment tools working. Here are some of the links to that work:

We also worked out that getting the online database migrations happening on the placement side of the world would help:

Documentation is mostly in-progress, but needs some review from packagers. A change to openstack-manuals depends on the initial placement install docs.

There is a patch to delete placement from nova on which we've put an administrative -2 until it is safe to do the delete.


There are 13 open changes in placement itself. Several of those are easy win cleanups.

Of those placement changes, the online-migration-related ones are the most important.

Outside of placement (I've decided to trim this list to just stuff that's seen a commit in the last two months):


Because I wanted to see what it might look like, I made a toy VM scheduler and placer, using etcd and placement. Then I wrote a blog post. I wish there was more time for this kind of educational and exploratory playing.

by Chris Dent at January 18, 2019 03:43 PM


One Man’s Crush on Technology: OpenKilda

Aptira Crush on Technology: OpenKilda

For good or bad, Technologists can be pretty passionate people. I mean, how many other professionals would happily describe an inanimate object, or worse, a virtual concept like software, as sexy? If you were to ask, the reasons for their love of one piece of technology or another would be as personal as, well, anything you might be passionate about.

For me, it’s the elegance and intelligence of the solution that excites me. Perhaps call it a professional acknowledgment for pragmatic and effective solutions. An appreciation for solutions that have been well thought out and provide opportunities for scale, growth and enhancement. 

It was late in the spring of ‘17 that I first became aware of OpenKilda. As part of an availability and performance assessment, I had spent some time thinking about what a unique Web-Scale SDN Controller should look like. How should it operate? What were the basic, functional building blocks that were needed? That was when the slides for OpenKilda crossed my desk.

The architecture slides were what had me enamoured; built from the ground up using mature, established components to support the challenges of transport and geographically diverse networks. Components that were, of themselves, known for their intelligent design. I’d like to think that if I was going to design an SD-WAN controller, it would look like this.

OpenKilda set itself apart in the SDN Controller market. It wasn’t trying to be a general SDN Controller, shoehorned into WAN and Transport applications. It was a true WAN and transport SDN solution from birth.

Still a little immature, was OpenKilda that diamond in the rough we were all looking for? To my eyes the solution was certainly elegant: Lean yet powerful. Simple, yet sophisticated.  But ultimately, there was one thing I could see that had me very excited: Opportunity. 

The value of a product or solution is not in what it does, but in the value it can create for others. OpenKilda’s make-up of mature, open source components like Kafka, Storm and Elastic is what presented that value.

Access to established communities, plug & play extensions and a wider pool of available talent meant OpenKilda was potentially more extensible than the others. Across those components, a diverse, already established ecosystem of vendors, service providers and integrators meant there were potentially more vested interests in its success.

What’s more, John Vestal and team (OpenKilda’s creators) were eager to share OpenKilda with the world. Hopefully building on, and building out, what they had already started.  Yes, it was fair to say I was excited.  Some birds are simply never meant to be caged. 

 …It would be nearly a year before I could broker a more intimate introduction. A short but deep exploration under the covers as we considered what lay on the road ahead. Telecommunications, Media, Finance; The opportunities are potentially wide and expansive.  Will OpenKilda be the key to unlocking them?  I think it just could be…  

Remove the complexity of networking at scale.
Learn more about our SDN & NFV solutions.

Learn More

The post One Man’s Crush on Technology: OpenKilda appeared first on Aptira.

by Craig Armour at January 18, 2019 04:35 AM

January 17, 2019

Trinh Nguyen

Viet OpenStack first webinar 5 Nov. 2018

Yesterday, 5 November 2018, at 21:00 UTC+7, about 25 Vietnamese developers attended the very first webinar of the Vietnam OpenStack User Group [1]. This is part of a series of Upstream Contribution Training based on the OpenStack Upstream Institute [2]. The topic is "How to contribute to OpenStack". Our target is to guide new and potential developers to understand the development process of OpenStack and how they are governed.

The webinar was planned for Google Hangouts, but with the free version only a maximum of 10 people can join the video call, so we decided to use Zoom [3]. But because Zoom limits free accounts to 45 minutes per meeting, we split the webinar into 2 sessions. Thanks to the proactive support of the Vietnam OpenStack User Group administrators, the webinar went very well. Whatever works.

I uploaded the training's content to GitHub [4] and will update it based on attendee feedback. A couple of pieces of feedback I got after the webinar were:
  • Should have exercises
  • Find a more stable webinar tool
  • The training should happen earlier
  • The topics should be simpler for new contributors to follow
You can find the recorded videos of the webinar here:

Session 1:

Session 2:

We continue to gather feedback from the attendees and plan for the second webinar next month.



by Trinh Nguyen ( at January 17, 2019 02:22 AM

Viet OpenStack (now renamed Viet OpenInfra) second webinar 10 Dec. 2018

Yes, we did it: the second OpenStack Upstream Contribution webinar. This time we focused on debugging tips and tricks for first-time developers. We also had time to introduce some great tools such as Zuul CI [1] (and how to use the Zuul status page [2] to keep track of running tasks), the ARA report [3], and tox [4]. During the session, attendees shared some great experience debugging OpenStack projects (e.g., how to read logs, use an IDE, etc.), and a lot of good questions were raised, such as how to use ipdb [7] to debug running services (using ipdb to debug is quite hardcore, I think :)). You can check out this GitHub link [5] for chat logs and other materials.

I want to say thanks to all the people at the Jitsi open source project [6], which provides a great conferencing platform for us. We were able to have a video discussion smoothly, without any limitation or interruption, and the experience was great.

Watch the recorded video here:



by Trinh Nguyen ( at January 17, 2019 02:22 AM

January 16, 2019

Ben Nemec

OpenStack Virtual Baremetal Imported to OpenStack Infra

As foretold in a previous post, OVB has been imported to OpenStack Infra. The repo can now be found at All future development will happen there so you should update any existing references you may have. In addition, changes will now be proposed via Gerrit instead of Github pull requests. \o/

For the moment, the core reviewer list is largely limited to the same people who had commit access to the Github repo. The TripleO PTL and one other have been added, but that will likely continue to change over time. The full list can be found here.

Because of the still-limited core list, not much about the approval process will change as a result of this import. I will continue to review and single-approve patches just like I did on Github. However, there are plans in the works to add CI gating to the repo (another benefit of the import) and once that has happened we will most likely open up the core reviewer list to a wider group.

Questions and comments via the usual channels.

by bnemec at January 16, 2019 06:18 PM

Trinh Nguyen

VietOpenInfra third webinar - 14th Jan. 2019

Yay, finally after the new year holiday we can organize the third upstream training webinar for OpenStack developers in Vietnam [1]. This time we invited Kendall Nelson [2], Upstream Developer Advocate for the OpenStack Foundation, to teach us about the Storyboard [3] and Launchpad [4] task management tools (she's also one of the core developers of the Storyboard project).

We first started with the Jitsi conferencing platform [5] but could not communicate with Kendall (in the US) for some reason, so we decided to switch back to Zoom [6] and everything went well after that. About 12 people attended the webinar and we had a good conversation with Kendall about some aspects of Storyboard, which is quite new to some users. You can check out the conversation (chat log) here [7]. Below is the recorded video:

We would like to thank Kendall Nelson for kindly agreeing to teach us this time even though the schedule was pretty early for her (6 AM her time). We learned a lot from her presentation, and some people in the audience even want to contribute to the Storyboard project (here is some low-hanging fruit to work on [9]).

P/S: You can follow this link [8] for the previous webinars.



by Trinh Nguyen ( at January 16, 2019 01:53 AM

January 14, 2019

SUSE Conversations

Looking for a reason to attend SUSECON? I’ve got 5!

In today’s business environment, every company is a digital company. IT infrastructure needs to not only keep pace but also move fast enough to accommodate strategic business and technology initiatives such as cloud, mobile and the Internet of Things. At SUSECON 2019, see how our open, open source approach helps our customers and partners transform […]

The post Looking for a reason to attend SUSECON? I’ve got 5! appeared first on SUSE Communities.

by Kent Wimmer at January 14, 2019 08:58 PM

January 13, 2019

Chris Dent

etcd + placement + virt-install → compute

I've had a few persistent complaints in my four and a half years of working on OpenStack, but two that stand out are:

  • The use of RPC—with large complicated objects being passed around on a message bus—to make things happen. It's fragile, noisy, over-complicated, hard to manage, hard to debug, easy to get wrong, and leads to workarounds ("raise the timeout") that don't fix the core problem.

  • It's hard, because of the many and diverse things to do in such a large community, to spend adequate time reflecting, learning how things work, and learning new stuff.

So I decided to try a little project to address both and talk about it before it is anything worth bragging about. I reasoned that if I use the placement service to manage resources and etcd to share state, I could model a scheduler talking to one or more compute nodes. Not to do something so huge as replace nova (which has so much complexity because it does many complex things), but to explore the problem space.

Most of the initial work involved getting some simple etcd clients speaking to etcd and placement and mocking out the creation of fake VMs. After that I dropped the work because of the many and diverse things to do, leaving a note to myself to investigate using virt-install.

It took me nine months to come back to it, but over the course of a couple of hours on two or three days I had it booting VMs on multiple compute nodes.

In my little environment a compute node starts up, learns about its environment, and creates a resource provider and associated inventories representing the virtual cpus, disk, and memory it has available. It then sits in a loop, watching an etcd key associated with itself.

Beside the compute process there's a faked out metadata server running.

A scheduler takes a resource request and asks placement for a list of allocation candidates. The first candidate is selected, an allocation is made for the resources, and the allocation and an image URL are put to the etcd key that the compute node is watching.

The compute sees the change on the watched key, fetches the image, resizes it to the allocated disk size, then boots it with virt-install using the allocated vcpus and memory. When the VM is up another key is set in etcd containing the IP of the created instance.

If the metadata server has been configured with an ssh public key, and the booted image looks for the metadata server, you can ssh into the booted VM using that key. For now this only works from the same host as the compute node. Real networking is left as an exercise for the reader.
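The scheduling half of that flow can be sketched with plain data structures (illustrative stand-ins only; this is not the etcd-compute code, and the function names are invented):

```python
# Sketch of the scheduler side: ask placement for allocation candidates,
# claim the first one, and signal the chosen compute node by writing to
# the etcd key it watches. get_allocation_candidates and put_key are
# stand-ins for the placement API and an etcd client.

def get_allocation_candidates(resources):
    # Stand-in for GET /allocation_candidates on the placement API.
    return [
        {"provider": "compute1", "allocations": resources},
        {"provider": "compute2", "allocations": resources},
    ]

def schedule(resources, image_url, put_key):
    chosen = get_allocation_candidates(resources)[0]  # naive first-fit
    # The compute node watching this key fetches the image, resizes it
    # to the allocated disk size, and boots it with virt-install.
    put_key("/compute/" + chosen["provider"],
            {"image": image_url, "allocations": chosen["allocations"]})
    return chosen["provider"]

store = {}  # a dict standing in for etcd
host = schedule({"VCPU": 2, "MEMORY_MB": 2048, "DISK_GB": 10},
                "https://example.com/image.qcow2", store.__setitem__)
```

The real thing replaces the dict with an etcd put, and the compute node's watch loop does the virt-install work on the other side of the key.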

In the course of the work described in those ↑ short paragraphs is more learning about some of the fundamentals of creating a virtual machine than a few years of reading and reviewing inscrutable nova code. I should have done this much sooner.

The really messy code is in etcd-compute on GitHub.

by Chris Dent at January 13, 2019 09:00 PM

January 11, 2019

Ben Nemec

Debugging a Segfault in oslo.privsep

I recently helped track down a bug exposed by a recent oslo.privsep release that added threading to allow parallel privileged calls. It was a segfault happening in the privsep daemon that was caused by a C call in a privileged Neutron module. This, as you might expect, was a little tricky to debug so I thought I'd document the process for posterity.

There were a couple of reasons this was tough. First, it was a segfault, which meant something went wrong in the underlying C code. Python debuggers need not apply. Second, there's a bunch of forking that happens to start the privsep daemon, which meant I couldn't just run Python in gdb. Well, maybe I could have, but my gdb skills are not strong enough to navigate through a bunch of different forks.

To get gdb attached to the correct process, I followed the debugging with gdb instructions from Python, specifically the ones to attach to an existing process. To make sure I had time to get it attached, I added a sleep to the startup of the privsep daemon installed in my Neutron tox venv. Essentially I would run the test:

tox -e dsvm-functional -- neutron.tests.functional.agent.linux.test_netlink_lib.NetlinkLibTestCase.test_list_entries

Find the privsep-helper process that was eventually started, then attach gdb to it with:

gdb python [pid]

I also needed to install some debuginfo packages on my system to get useful tracebacks from the libraries involved. Gdb gave me the install command to do so, which was handy. I believe the important part here was dnf debuginfo-install libnetfilter_conntrack, but that will vary depending on what you're debugging.

Once gdb was attached, I typed c to tell it to continue (gdb interrupts the process when you attach), then once the segfault happened I used commands like bt, list, and print to examine the code and state where the crash happened. This allowed me to determine that we were passing in a bad pointer as one of the parameters for the C call. It turned out we were truncating pointers because we hadn't specified the proper parameter and return types, so large memory addresses were being squeezed into ints that were too small to hold them. Why the oslo.privsep threading change exposed this I don't know, but my guess is that it has something to do with the address space changing when the calls were made from a thread instead of the main process.
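For illustration, this is the general ctypes pattern involved (a generic sketch, not the actual Neutron module): by default ctypes treats every return value as a C int, so a 64-bit pointer coming back from a call gets truncated unless the types are declared.

```python
import ctypes

# Load the C library via dlopen(NULL); works on Linux.
libc = ctypes.CDLL(None)

# Without these declarations, strdup's char* return value would be
# squeezed into a C int, yielding a bogus truncated pointer -- the
# same class of bug described above. (strdup leaks here; it's a sketch.)
libc.strdup.argtypes = [ctypes.c_char_p]
libc.strdup.restype = ctypes.c_char_p

copy = libc.strdup(b"hello")
```

Declaring `argtypes` and `restype` for every foreign function is the standard defense against this class of crash.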

In any case, after quite a bit of cooperative debugging in the OpenStack community and a fair amount of rust removal from my gdb skills, we were able to resolve this bug and unblock the use of threaded oslo.privsep. This should allow us to significantly reduce the attack surface for OpenStack services, resulting in much better security.

I hope this was useful, and as always if you have any questions or comments don't hesitate to contact me.

by bnemec at January 11, 2019 09:24 PM

Chris Dent

Placement Update 19-01

Hello! Here's placement update 19-01. Not a ton to report this week, so this will mostly be updating the lists provided last week.

Most Important

As mentioned last week, there will be a meeting next week to discuss what is left before we can pull the trigger on deleting the placement code from nova. Wednesday is looking like a good day, perhaps at 1700UTC, but we'll need to confirm that on Monday when more people are around. Feel free to respond on this thread if that won't work for you (and suggest an alternative).

Since deleting the code is dependent on deployment tooling being able to handle extracted placement (and upgrades to it), reviewing that work is important (see below).

What's Changed

  • It was nova's spec freeze this week, so a lot of effort was spent getting some specs reviewed and merged. That's reflected in the shorter specs section, below.

  • Placement had a release and was published to PyPI. This was a good excuse to write (yet another) blog post on how easy it is to play with.



With spec freeze this week, this will be the last time we'll see this section until near the end of this cycle. Only one of the specs listed last week merged (placement for counting quota).

Main Themes

Making Nested Useful

I've been saying for a few weeks that "progress continues on gpu-reshaping for libvirt and xen" but it looks like the work at:

is actually stalled. Anyone have some insight on the status of that work?

Also making use of nested is bandwidth-resource-provider:

There's a review guide for those patches.

Eric's in the process of doing lots of cleanups to how often the ProviderTree in the resource tracker is checked against placement, and a variety of other "let's make this more right" changes in the same neighborhood:


Besides the meeting mentioned above, I've refactored the extraction etherpad to make a new version that has less noise in it so the required actions are a bit more clear.

The tasks remain much the same as mentioned last week: the reshaper work mentioned above and the work to get deployment tools operating with an extracted placement:

Loci's change to have an extracted placement has merged.

Kolla has a patch to include the upgrade script. It raises the question of how, or if, the script should be distributed. Should it maybe end up in the PyPI distribution?

(The rest of this section is duplicated from last week.)

Documentation tuneups:


There are still 13 open changes in placement itself. Most of the time critical work is happening elsewhere (notably the deployment tool changes listed above).

Of those placement changes, the database-related ones from Tetsuro are the most important.

Outside of placement:


If anyone has submitted, or is planning to, a proposal for summit that is placement-related, it would be great to hear about it. I had thought about doing a resilient placement in kubernetes with cockroachdb for the edge sort of thing, but then realized my motivations were suspect and I have enough to do otherwise.

by Chris Dent at January 11, 2019 03:43 PM


Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.


Last updated:
March 19, 2019 08:07 AM
All times are UTC.
