February 20, 2019

OpenStack Superuser

Open Infrastructure Summit brings together people from over 30 open source projects

The Open Infrastructure Summit (formerly the OpenStack Summit) is the twice-yearly event supported by the OpenStack Foundation that gathers the people building and using open source technologies (more than 30 projects, in fact) to power their infrastructure services and to collaborate with one another. Last month, the Programming Committees shared their picks, and today the schedule went live, featuring new users such as Baidu, Boeing and Blizzard Entertainment, as well as technical deep dives into different open source projects.

The Summit agenda covers 11 tracks, ranging from use cases like artificial intelligence and machine learning to edge computing and container infrastructure, while also covering topics like security and getting started.

“Getting all of the awesome open source tools to play well together, from Tensorflow to Kubernetes to OpenStack, is the challenge infrastructure operators are rising to these days,” said Mark Collier, COO of the OpenStack Foundation. “Sharing that knowledge and advancing the state of the art in infrastructure automation is what the first Open Infrastructure Summit is all about, and we’ve got you covered with these amazing speakers!”

If you’re planning to attend the Summit, held April 29 through May 1 in Denver, here’s a taste of what you can expect:

  • Blizzard Entertainment, a premier developer and publisher of entertainment software, has been using OpenStack as a private cloud to host its game services since 2012. Blizzard Entertainment will demonstrate how it is autoscaling its best-selling game “Overwatch” in OpenStack.
  • Boeing, the world’s largest aerospace company, will describe its method for replacing engines in flight and share experiences and lessons learned with upgrades on mission-critical OpenStack systems.
  • Adobe Advertising Cloud undertook a rapid, seven-month journey to migrate from VMs to Kubernetes and reach a multi-cloud/multi-region deployment. The Adobe Advertising Cloud team will share lessons learned from seven real-world production challenges it faced with StatefulSets, GitOps, autoscaling for ML and auto-remediation.
  • ARM will provide an update on advancements being made in using Kata Containers on Arm’s 64-bit architecture. The session will include demonstrations of running Kata Containers on aarch64 processors with Docker and also with Kubernetes and CRI-O.
  • Members of the AT&T team behind OpenStack-Helm and Airship will brush off the crystal ball to look into the future of running critical infrastructure at scale in Kubernetes. Their session will show how Kubernetes native workflows such as Argo can solve day-two issues.

If this lineup has your attention, check out the full schedule and register before prices increase on February 27 at 11:59 p.m. PT. Need travel funding? Travel support applications are open until February 27.


Cover photo // CC BY NC

The post Open Infrastructure Summit brings together people from over 30 open source projects appeared first on Superuser.

by Allison Price at February 20, 2019 04:33 PM

February 19, 2019

OpenStack Superuser

Community Contributor Awards Open for Denver Summit

So many folks work tirelessly behind the scenes to make OpenStack great, whether they are fixing bugs, contributing code, helping newbies on IRC or just making everyone laugh at the right moment.

You can help them get recognized (with a very shiny medal!) by nominating them for the next Contributor Awards given out at the upcoming OpenStack Summit Denver. These are informal, quirky awards — winners in previous editions included the “Duct Tape” award and the “Don’t Stop Believin’ Cup” — that shine a light on the extremely valuable contributions to Kata, Airship, StarlingX, Zuul and OpenStack.

There are so many different areas worthy of celebration, but a few kinds of community members deserve a little extra love:

  • They are undervalued
  • They don’t know they are appreciated
  • They bind the community together
  • They keep it fun
  • They challenge the norm
  • Other: (write-in)

As before, rather than starting with a defined set of awards, the community is asked to submit names in those broad categories. The OSF community team then has a little bit of fun on the back end, massaging the award titles to devise something worthy of their underappreciated efforts.

“There are A LOT of people out there who could use a pat on the back and the affirmation that they do good work for the community,” says OSF’s Kendall Nelson. Please nominate anyone you think is deserving of an award! The deadline is April 14.


Awards will be presented during the feedback session at the Summit.

Cover photo // CC BY NC

The post Community Contributor Awards Open for Denver Summit appeared first on Superuser.

by Superuser at February 19, 2019 03:02 PM

February 15, 2019

OpenStack Superuser

Updates from the Kubernetes Special Interest Group: How to run K8s on OpenStack clouds

Bridging the two communities, the Kubernetes Special Interest Group (SIG-K8s) has been hard at work delivering OpenStack and Kubernetes integrations. There are several projects in the OpenStack and Kubernetes ecosystems that participate in the SIG, including:

The cloud-provider-openstack project published its Kubernetes-matched 1.13 release in December, followed by a 1.13.1 release in January, allowing users to make a Kubernetes installation hosted on an OpenStack cloud aware of the available resources and manage them directly. Features include creating load balancers (for example, behind ingress controllers) with Octavia, managing block storage devices through Cinder and having direct access to the status of nodes through Nova. In the latest release, the provider also supports Kubernetes key management with Barbican. When the Kubernetes API asks for a resource, cloud-provider-openstack is the fundamental layer that delivers it on OpenStack.
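As an illustration of the Octavia integration, a standard LoadBalancer Service on a cluster running cloud-provider-openstack is enough to have a load balancer provisioned (a minimal sketch; the names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative name
spec:
  type: LoadBalancer   # cloud-provider-openstack asks Octavia for a load balancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```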

There are several ways to run Kubernetes on OpenStack clouds:

  • One of the most mature and widely used is OpenStack Magnum, which offers a user-facing API to deploy managed OpenStack-hosted Kubernetes clusters. It’s in heavy production use today, including on public clouds like Vexxhost and Catalyst Cloud. Currently, CERN, the European Organization for Nuclear Research, operates more than 300 managed Kubernetes clusters with OpenStack Magnum.
  • The Kops project greatly simplifies the deployment of Kubernetes on OpenStack. With just a set of OpenStack cloud credentials, users can issue basic commands to create, update and delete Kubernetes clusters.
  • Looking ahead, SIG-OpenStack community members are also excited to be participating in the new Cluster-API project in the Kubernetes community. Cluster-API provides native Kubernetes APIs to allow Kubernetes to self-host and manage the entire cluster lifecycle from initial deployment through upgrades to cluster deletion. This work is in its early stages, with active development on the OpenStack implementation.

Get involved

If you’d like to learn more or get involved with OpenStack and Kubernetes integrations:

And if you’d like to take a deeper dive on OpenStack and container integrations, check out the white paper “Leveraging OpenStack and Containers: A Comprehensive Review,” written by the SIG-K8s community.

Cover photo // CC BY NC

The post Updates from the Kubernetes Special Interest Group: How to run K8s on OpenStack clouds appeared first on Superuser.

by Chris Hoge at February 15, 2019 03:09 PM

February 14, 2019

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Welcome to the latest edition of the OpenStack Foundation Open Infrastructure Newsletter, a digest of the latest developments and activities across open infrastructure projects, events and users. Sign up to receive the newsletter and email community@openstack.org to contribute.

OpenStack Foundation news

  • Last week, we published the OpenStack Foundation 2018 Annual Report, highlighting the incredible work and advancements achieved by the community. Read the latest on project updates, community initiatives, OSF event updates and more.
  • The Open Infrastructure Summit + PTG Denver, April 29-May 3, is fast approaching; registration is open and the schedule is expected to launch next week. In the meantime, nominations are open for two awards which will be handed out at the Summit: the Community Contributor Awards recognize community members who are making a big impact. Nominate community members, especially those who aren’t in the most visible roles, who do the dirty jobs, who step up to drive work or lend a helping hand. We also want to recognize new community members working hard from day one. Nominations close April 14 at 7:00 UTC.
    The Superuser Awards nominations are also open. If you know a team innovating with open infrastructure while contributing back to open source communities, nominate them by March 22.

OpenStack Foundation project news


  • Every cycle, common release goals are defined for the OpenStack community. The discussion is underway to define goals for the next cycle, with three potential candidates: Project clean-up, moving legacy clients to python-openstackclient and healthcheck middleware.
  • The election process to renew half of the Technical Committee seats has just started. You have until February 19 to nominate yourself. Learn more about the role of the TC.
  • Registration is open for the OpenStack Ops meetup in Berlin, March 6-7, 2019, a community-driven, collaborative event for people running OpenStack infrastructure.


Kata Containers

  • The Kata Containers Architecture Committee elections are now open. The Architecture Committee (AC) serves as the project’s leadership body responsible for technical decisions. The AC is composed of five members, elected by contributors. AC elections take place every six months, in February (two seats available) and in September (three seats available). Between now and February 17, the window is open for community members to declare their candidacy. From February 18-24, the candidates will enter a debate period where the community can ask them questions about their platform on the kata-dev mailing list. Voting is open from February 25 to March 3. Contributors (defined as anyone who has had code merged in the Kata Containers project in the last 12 months) are eligible to vote using the CIVS Condorcet voting method. Election results will be published on March 4. Full details about the election process are available here.
  • The Kata Containers community activity metrics are now being tracked using the Bitergia analytics monitoring platform. It currently tracks Git data, GitHub activity, and mailing-list activity. View the Kata community metrics dashboard here.
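As a side note, the Condorcet principle behind CIVS can be illustrated in a few lines of Python: a candidate wins if it beats every other candidate in pairwise comparisons across the ranked ballots. This is a simplified illustration, not the CIVS implementation:

```python
def condorcet_winner(ballots, candidates):
    """Return the candidate beating all others pairwise, or None.

    Each ballot is a list ranking candidates from most to least preferred.
    """
    def beats(a, b):
        # a beats b if a strict majority of ballots rank a above b
        a_over_b = sum(1 for ballot in ballots
                       if ballot.index(a) < ballot.index(b))
        return a_over_b > len(ballots) - a_over_b

    for candidate in candidates:
        if all(beats(candidate, other)
               for other in candidates if other != candidate):
            return candidate
    return None  # preferences form a cycle: no Condorcet winner

ballots = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(condorcet_winner(ballots, ["a", "b", "c"]))  # -> a
```

When preferences form a cycle there is no Condorcet winner, and systems like CIVS fall back on additional completion rules.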


StarlingX

  • The community is working on improving and restructuring the materials currently available on their documentation website to make it easier to find information as well as to better support new users and contributors.
  • StarlingX is already set up in one of the OPNFV Pharos labs and the communities are working on further collaboration on testing StarlingX as part of a full-stack environment.


OSF supported events

  • 42,000 people talking cybersecurity – from Bruce Schneier to Tina Fey – are heading to RSA Conference 2019, March 4-8 in San Francisco. From the latest trends to best practices, RSAC 2019 is your one-stop shop for cybersecurity intel.
  • The OpenStack Foundation is also attending the Open Compute Summit, March 14-15 in San Jose, as well as Service Mesh Day March 28-29 in Seattle. Around the world, you’ll also find OpenStack and OpenInfra Days hosted by local communities on the OSF events page.

Questions / feedback / contribute

This newsletter is edited by the OpenStack Foundation staff to highlight open infrastructure communities. We want to hear from you!
If you have feedback, news or stories that you want to share, reach us at community@openstack.org; to receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by OpenStack Foundation at February 14, 2019 05:14 PM

SUSE Conversations

SUSE OpenStack Cloud 9 Release Candidate 1 is here!

We are happy to announce the release of SUSE OpenStack Cloud 9 Release Candidate 1! Today we are releasing SUSE OpenStack Cloud 9 Cloud Lifecycle Manager (CLM) along with SUSE OpenStack Cloud 9 Crowbar! You will now find it in the download area: SUSE-OPENSTACK-CLOUD-9-x86_64-RC1-DVD1.iso, the ISO to install Cloud 9 with Cloud Lifecycle […]

The post SUSE OpenStack Cloud 9 Release Candidate 1 is here! appeared first on SUSE Communities.

by Vincent Moutoussamy at February 14, 2019 03:49 PM

StackHPC Team Blog

StackHPC at the UKRI Cloud Workshop

We always enjoy attending the UKRI Cloud Working Group Workshop held annually at the awesome Francis Crick Institute. The sizeable crowd it draws and the high quality of content are both healthy signs of the vitality of cloud for research computing.

This year's workshop demonstrated a maturing approach to use of cloud, with some notable focus on various methods for harnessing hybrid and public clouds for dynamic and bursting workloads. Public cloud companies presented on new and forthcoming HPC-aware features, while research organisations presented on mobility to avoid lock-in to cloud vendors. How these two contrasting tensions play out will be interesting over the next few years.

There was also a welcome focus on operating and sustaining cloud-hosted infrastructure and platforms. In particular, Matt Pryor from STFC/JASMIN presented their current project on a user-friendly application portal, coupled with Cluster-as-a-Service deployments of Slurm and Kubernetes, with focus on both usability for scientists and day-2 operations for administrators. StackHPC is proud to be working with the JASMIN team on implementing this well-considered initiative and we hope to write more about it in due course.

We always participate as much as possible, and this year StackHPC was more involved than ever before. Five members of our team attended, and in a one-day programme three presentations were delivered by the team - a real achievement for a ten-person company.

We presented three prominent areas of recent work. John Garbutt spoke about our recent work on storage for the software-defined supercomputer, in particular SKA SDP buffer prototyping and the Cambridge Data Accelerator.

John Garbutt at Cloud WG Workshop 2019

Pictured here with David Yuan of EMBL and Matt Pryor of STFC

Mark Goddard presented our work on Kayobe, a free and open source deployment tool for containerised OpenStack control planes, based on Kolla and Kolla-Ansible, and embodying current best practices. Kayobe is seeing broad adoption for research computing configurations and use cases.

Mark Goddard at Cloud WG Workshop 2019

Bharat Kunwar delivered a demonstration of Pangeo, the second of the day after Jacob Tomlinson presented the work of the Met Office Informatics Lab. With a focus on data-intensive analytics on private cloud infrastructure, Bharat demonstrated the deployment of Pangeo on a bare metal HPC OpenStack deployment, using Kubernetes deployed by Magnum. In addition to demonstrating containers running on bare metal, Bharat demonstrated storage attachments backed by Ceph and RDMA-enabled BeeGFS. All of that in ten minutes!

Bharat Kunwar at Cloud WG Workshop 2019

by Stig Telfer at February 14, 2019 02:00 PM

February 13, 2019

CERN Tech Blog

RadosGW Keystone Sync

We have recently enabled an S3 endpoint in the CERN private cloud. This service is offered by RadosGW on top of a Ceph cluster. This storage resource complements the cloud offering and allows our users to store object data using the S3 or Swift APIs. In order to enable key validation, you need to configure RadosGW to validate the keys against the Identity service (Keystone) in OpenStack and then create the new service and the endpoint in the Identity API.
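On the RadosGW side, that configuration is typically expressed in ceph.conf; a minimal sketch for Keystone v3, where the gateway name, endpoint and credentials are placeholders and option names should be checked against your Ceph release:

```ini
[client.rgw.gateway]
rgw keystone api version = 3
rgw keystone url = https://keystone.example.org:5000
rgw keystone admin user = rgw
rgw keystone admin password = secret
rgw keystone admin domain = default
rgw keystone admin project = service
rgw s3 auth use keystone = true
```

On the OpenStack side, the matching object-store service and its endpoints are created with the usual `openstack service create` and `openstack endpoint create` commands.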

by CERN (techblog-contact@cern.ch) at February 13, 2019 01:05 PM

February 12, 2019

OpenStack Superuser

Tips for applying for the Superuser Award

The Superuser Awards are open — you’ve got until March 22 to nominate your team for the Denver Summit.

Over the years, we’ve had a truly stellar group of winners (AT&T, CERN, China Mobile, Comcast, NTT Group) and finalists (GoDaddy, FICO, Walmart, Workday among many others). While only one wins the title, all the finalists get a shoutout with an overview of what makes them super from the keynote stage at the Denver Summit.

The Superuser Awards recognize teams using open infrastructure to meaningfully improve business and differentiate in a competitive industry, while also contributing back to the open-source community. They aim to cover the same mix of open technologies as our publication, namely OpenStack, Kubernetes, Kata Containers, Airship, StarlingX, Ceph, Cloud Foundry, OVS, OpenContrail, Open Switch, OPNFV and more.

When evaluating winners for the Superuser Award, nominees are judged on the unique nature of use case(s), as well as integrations and applications by a particular team.

Here are a few things that will make your application stand out:

  • Take a look at what works. You can browse the applications of previous finalists here and the winners here.
  • Take your time. The application seems short — seven questions — but most of those questions cover a lot of ground.
  • Boil it down. It’s a balancing act: you’ll want to include significant milestones and contributions but stay within the character count. (The space allotted for each answer is 800 characters, roughly 130 words.) Offers of libations in Denver will not persuade the judging committee or Foundation staff to accept supplemental materials!
  • Remember that you’re talking to your peers. Your nomination is first taken into consideration by the larger community and then by the Superuser Editorial Advisory Board which makes the final call. Most of them are technically-minded folks who tend to be more impressed with metrics than marketing speak.
  • Lead with your most impressive accomplishments. All of the applications from nominees are published on Superuser for community voting. Most likely they’ll see a mention on social media, scan through your post, then click to vote. Make sure they see the best bits first.
  • Proofread and fact check with your team before submitting. The Superuser editorial staff goes through every finalist application and edits them for grammar and house style, but do keep in mind that the information you submit in the application will be published.

We’re looking forward to hearing more about your accomplishments with open infrastructure – deadline for Superuser Awards for the Denver Summit is March 22. Apply now!

Cover photo // CC BY NC

The post Tips for applying for the Superuser Award appeared first on Superuser.

by Nicole Martinelli at February 12, 2019 03:06 PM

Emilien Macchi

OpenStack Containerization with Podman – Part 5 (Image Build)

For this fifth episode, we’ll explain how we build containers with Buildah. Don’t miss the first, second, third and fourth episodes, where we learned how to deploy, operate, upgrade and monitor Podman containers.

In this post, we’ll see how we can replace Docker with Buildah to build our container images.


In OpenStack TripleO, we have nearly 150 images (all layers included) for all the services that we can deploy. Of course you don’t need to build them all when deploying your OpenStack cloud, but in our production chain we build them all and push the images to a container registry, consumable by the community.

Historically, we have been using “kolla-build” and the process to leverage the TripleO images build is documented here.


kolla-build only supports the Docker CLI at this time, and we recognized that changing its code to support something else sounded like a painful plan, as Docker was hardcoded almost everywhere.

We decided to leverage kolla-build to generate the templates of the images, which is actually a tree of Dockerfiles, one per container.

The dependencies format generated by Kolla is a JSON tree describing which images are built on top of which.
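As an illustration, the dependency tree maps each parent image to the images built on top of it; a purely illustrative sketch (the exact structure kolla-build emits may differ):

```json
{
  "base": ["openstack-base"],
  "openstack-base": ["nova-base", "keystone"],
  "nova-base": ["nova-api", "nova-compute"]
}
```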

So, when running:

openstack overcloud container image build --use-buildah

we call kolla-build with --list-dependencies, which generates a directory per image containing a Dockerfile plus the other files needed during the build.

Anyway, bottom line is: we still use Kolla to generate our templates but don’t want Docker to actually build the images.

In tripleo-common, we are implementing a build and push that will leverage “buildah bud” and “buildah push”.

“buildah bud” is a good fit for us because it allows us to use the same logic and format as before with Docker (bud == build-using-dockerfile).

The main challenge for us is that our images aren’t small, and we have a lot of images to build in our production chain. So we decided to parallelize the last layers of the images (those which don’t have children).

For example, two images at the same layer level will be built together, but a child won’t be built in parallel with its parent layer.

The core of that code takes the dependencies dictionary and builds our containers level by level.

Without the “fat daemon” that is Docker, Buildah poses some challenges: running multiple builds at the same time can be slow because of the locks used to avoid race conditions and database corruption. So we capped the number of workers at eight, to keep Buildah from locking the system too aggressively.

What about performance? This question is still under investigation. We are still testing our code and measuring how long it takes to build our images with Buildah. One thing is sure: you don’t want to use the vfs storage backend; use overlayfs instead. To do so, you’ll need to run at least Fedora 28 with a 4.18 kernel and install fuse-overlayfs, and Buildah should use this backend by default.
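In practice that means pointing the container storage configuration at fuse-overlayfs; a sketch of /etc/containers/storage.conf (paths can vary by distribution):

```ini
[storage]
driver = "overlay"

[storage.options]
mount_program = "/usr/bin/fuse-overlayfs"
```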



In the next episode, we’ll see how we are replacing the docker-registry by a simple web server. Stay tuned!

by Emilien at February 12, 2019 03:30 AM

February 11, 2019

OpenStack Superuser

How the Vienna Biocenter powers HPC with OpenStack

OpenStack clouds for HPC environments are becoming an increasingly popular choice across the globe because of the obvious advantages promised by the framework, says Petar Forai of the Vienna Biocenter.

Forai, along with colleagues Erich Birngruber and Ümit Seren, recently gave a look into the “CLIP” (CLoud Infrastructure Project) project. The main goal of the project is to consolidate multiple independent computing environments and HPC infrastructures into one common platform suitable for a wide variety of academic computing use cases.

Forai is the deputy head of IT for the research institutes Institute of Molecular Pathology (IMP), Institute of Molecular Biotechnology (IMBA) and the Gregor Mendel Institute (GMI) at the Vienna Biocenter. The cloud platform engineering team of 14 is tasked with delivery and operations of IT infrastructure for 40 research groups, or about 500 scientists. The IT department delivers a full stack of services (workstations, networking, application hosting and development, among other things), including HPC for the campus.

The current infrastructure is a “nightmare to manage,” Forai says: an expanse of siloed islands of infrastructure that can’t talk to each other, with no way to automate across them. The future? A Slurm Workload Manager that adjusts to demand with a tightly connected grid of virtual machines and an OpenStack private cloud presiding over compute nodes. It took the team about two months to build a proof of concept, about eight to analyze how best to use OpenStack and about two months to move into production, though some of that work is still ongoing. Of the lessons learned, Forai says “OpenStack is not a product. It is a framework.” He also advises two or three OpenStack environments (development, staging and production, in their case) to practice and understand upgrades and updates. A final consideration, from Forai’s point of view, is that the out-of-box experience and scalability of certain OpenStack subcomponents is “not optimal,” and they should be treated more like a reference implementation.

The team details the deployment choices and offers an outline of the system architecture, taking a deep dive into:

Catch the hour-long session here.

Cover photo // CC BY NC

The post How the Vienna Biocenter powers HPC with OpenStack appeared first on Superuser.

by Superuser at February 11, 2019 03:05 PM


python-tempestconf’s journey

For those who are not familiar with python-tempestconf, it’s a tool for generating a Tempest configuration file, which is required for running Tempest tests against a live OpenStack cluster. It queries a cloud and automatically discovers cloud settings that weren’t provided by the user.

Internal project

In August 2016, the config_tempest tool was decoupled from the Red Hat Tempest fork and the python-tempestconf repository was created under the github redhat-openstack organization. The tool became an internal tool used for generating tempest.conf in downstream jobs which run Tempest.

Why we like `python-tempestconf`

The reason is quite simple. We at Red Hat were (and still are) running many different OpenStack jobs, with different configurations, which execute Tempest. That’s where python-tempestconf stepped in. We didn’t have to implement the logic for creating or modifying tempest.conf within the job configuration; we just used python-tempestconf, which did that for us. And it’s not only about generating tempest.conf itself: the tool also creates basic users, uploads an image and creates basic flavors, all of which are required for running Tempest tests.

python-tempestconf was also beneficial for engineers who liked the idea of not struggling to create a tempest.conf file from scratch, but rather using a tool able to generate it for them. The generated tempest.conf was sufficient for running simple Tempest tests.

Imagine you have a fresh OpenStack deployment and you want to run some Tempest tests to make sure that the deployment was successful. In order to do that, you can run python-tempestconf, which will do the basic configuration for you and generate a tempest.conf, and then execute Tempest. That’s it. Isn’t it easy?

I have to admit, when I joined Red Hat, and more specifically the OpenStack team, I kind of struggled with all the information about OpenStack and Tempest; it was too much new information. Therefore, I really liked that I could generate a tempest.conf to use for running just basic tests. If I had had to write the tempest.conf myself, my learning process would have been a little slower. I’m really grateful that we had the tool at that time.

Shipping in a package

At the beginning of 2017, we started to ship a python-tempestconf rpm package. It’s available in RDO repositories from Ocata onward. The python-tempestconf package is also installed as a dependency of the openstack-tempest package, so if a user installs openstack-tempest, python-tempestconf will be installed as well. At that time, we also changed the entry point, and the tool is now executed via the discover-tempest-config command. However, you may have already read all about it in this article.

Upstream project

By the end of 2017, python-tempestconf became an upstream project and moved under the OpenStack organization.

We have significantly improved the tool since then, not only its code but also its documentation, which contains all the required information for a user; see here. In my opinion, every project designed for a wider audience of users (python-tempestconf is an upstream project, so this condition is fulfilled) should have proper documentation. Following python-tempestconf’s documentation, any user should be able to execute it, set the desired arguments and set special tempest options without any major problems.

I would say there are three major improvements. One of them is the user documentation, which I’ve already mentioned. The second and third are improvements to the code itself: os-client-config integration, and a refactoring of the code to simplify adding new OpenStack services the tool can generate config for.

os-client-config is a library for collecting client configuration for using an OpenStack cloud in a consistent way. Thanks to the library, a user can specify OpenStack credentials in two different ways:

  • Using OS_* environment variables, which is probably the most common way. It requires sourcing credentials before running python-tempestconf. In the case of a packstack environment, that’s the keystonerc_admin/demo file; in the case of devstack, there’s the openrc script.
  • Using the --os-cloud parameter, which takes one argument: the name of the cloud which holds the required credentials. Those are stored in a clouds.yaml file.
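For the second option, the credentials live in a clouds.yaml file; a minimal sketch, where all values are placeholders:

```yaml
clouds:
  mycloud:
    auth:
      auth_url: https://keystone.example.org:5000/v3
      username: demo
      password: secret
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

Running discover-tempest-config --os-cloud mycloud would then pick the credentials up from this file.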

The second code improvement was the simplification of adding new OpenStack services the tool can generate a tempest.conf for. If you want a service added, just file a bug in our storyboard; see python-tempestconf’s contributor guide. If you feel like it, you can also implement it yourself: adding a new service requires creating a new file representing the service and implementing a few required methods.

To conclude

The tool has gone through major refactoring and has been significantly improved since it was moved to its own repository in August 2016. If you’re a Tempest user, I’d recommend trying python-tempestconf if you haven’t already.

by Martin Kopec at February 11, 2019 02:50 PM

Chris Dent

Placement Container Playground 9

This is the ninth in a series about running the OpenStack placement service in a container. The previous update was Playground 8.

The container playground series introduced running placement in Kubernetes in Playground 4 and then extended it in Playground 5 to add a Horizontal Pod Autoscaler.

But it was very clumsy. For some other work, I've needed to learn about Helm. I was struggling to get traction, so I figured the best way to learn how things worked was to make a Helm chart for placement. There already is one in openstack-helm, but it is embedded in the openstack-helm ecosystem and not very playgroundy. So I set out to play.

The result of that work is in a pull request to placedock (since merged). A relatively simple Helm chart is built from the starting points provided by helm create. It started out simply deploying a placement service with an internal database, but through iteration it will now set up ingress handling and the aforementioned autoscaler with:

helm install --set ingress.enabled=true \
             --set replicaCount=0 \
             --name placement

(replicaCount=0 is used to signal "make me some autoscaling, not a fixed set of replicas".)

There's more info in the placedock README.

by Chris Dent at February 11, 2019 11:00 AM

Sean McGinnis

Jan 2019 OpenStack Board Notes

Chris Dent used to post regular, semi-subjective takes on what was going on in the OpenStack Technical Committee. Even though I was there for most of the conversations being recapped, I found these very valuable. There were many times that multiple things were discussed over the course of the week, often overlapping and usually competing for mental attention. So having these regular recaps helped me remember and keep up with what was going on. I also found the subjective nature more useful than “just the facts” summaries, as a way to understand perspectives on topics that I sometimes agreed with, sometimes didn’t, and often just hadn’t taken into consideration.

That was all while being actively involved in the TC. So I can imagine that for someone interested in what is going on, but unable to devote the significant time it takes to read IRC logs, mailing list posts and patch reviews to really stay on top of everything, these kinds of recaps are the best way to get that information.

This is my first attempt to take that kind of communication and apply it to the OpenStack Foundation Board meetings. The Board of Directors meets several times a year, usually via conference call, with a few face-to-face meetings where possible around Summits and other events. Hopefully these will be useful for anyone interested in what is happening at the board level. I’m also hoping they help me be better about taking notes and keeping track of things, so hopefully it’s a win-win.

January 29, 2019 OpenStack Foundation Board Meeting

The original agenda can be found here and the official minutes will be posted here. Jonathan Bryce also usually sends out unofficial minutes to the Foundation mailing list. The January 29th notes can be found here.


Alan Clark started things off with the typical procedural stuff for meetings like this. Roll call was taken to make sure there was quorum. Minutes from the last board meeting in December were voted on and officially approved.

These meetings just recently switched from using WebEx to using Zoom. Apparently, as a result of that, the normal meeting reminders that folks were used to were not sent out, so there were a few absent and late. But I think overall there was a pretty good showing for the 28 board members. I believe there were only six board members not in attendance, which considering time zones, seems reasonable.

Alan thanked the outgoing board members for their time on the board. There were five of us, across Platinum, Gold, and Individual members, that were new to the board and we all had a few minutes to introduce ourselves and give a little background before moving on to business.

Policy Reminders

Especially useful for the new folks like me, there was a reminder about some of the policies that board members need to follow. I was aware of some, but definitely not all of these. I think it’s useful to include here:

Reading the Transparency policy was useful, and I especially liked seeing there is a heading titled “General Policy Favoring Transparency”.

Meeting Schedule

Two face-to-face meetings are planned this year: one before the next Open Infrastructure Summit in Denver in April, and another, date to be determined, before the following Summit in Shanghai, China in early November.

Speaking of which, up until that point we knew the Summit was planned in China but no official location had been announced. It is now official that the Open Infrastructure Summit China will be held the week of November 4 in Shanghai.

Committee Updates

There are various committees within the board to work on different areas. The list of committees can be found on the Foundation wiki.

Most of the discussion was just making everyone aware of these. There are some listed on the wiki that are just there for historical purposes.

Since financial issues have a lot of impact on the community, especially now that there are fewer sponsors than there were a few years ago, I have decided to join the Compensation Committee. There are a few others that I find interesting and think are important, but I will wait a little bit before signing up for more. I know I have a tendency to sign up for anything I think I can help with without really thinking through the time commitment, so I am trying to be better about not raising my hand too often.

OpenStack Foundation Pilot Project Guidelines

Allison Randall gave a report on the effort to come up with a set of written guidelines for new projects coming in as pilot projects under the OpenStack Foundation.

As the Foundation expands its scope to include more projects beyond OpenStack to support “open infrastructure”, I think it’s important that we are careful about what we include, to stay true to our existing community and our core identity. This group is working on writing a set of guidelines to help ensure that happens.

The current draft guidelines can be found here.

Some of these I would like to see made a little more specific, or reworded to be less subjective (what does following “Technical best practices” mean?), but I am happy to see things like “Open collaboration” called out. I am also a little concerned about how loosely open governance is addressed.

To me, the Four Opens are what really defined the OpenStack community compared to other open source environments I had seen. I will have a hard time feeling really comfortable with adding any new projects that do not at least strive towards following the four opens.

Foundation Update

Jonathan and team wrapped things up with a staff update. The slides can be found here.

It was great to see that there has actually been a 33% increase in community membership over the last year. But comparing that to the activity in the projects, it tells me that more and more involvement is casual or part-time. We have been working on making things more welcoming to new contributors and talking about ways to make contributing easier for those who aren’t spending a significant part of their work week on OpenStack and related projects. I think these efforts are very important and will be critical to our ability to get more done going forward.

Following on from Allison’s update, there was a large part about the expanding role of the Foundation and the projects currently in the Pilot phase. Right now, these are Airship, Kata Containers, StarlingX, and Zuul.

There was some good coverage of where the Foundation can play a part in an expanded role of promoting open infrastructure. The main goals for this year were reported as strengthening OpenStack (brand and community), helping expand market opportunities for open infrastructure, and evolving the business model of the Foundation into this broader-scoped entity.

I’ll probably have more to say on some of those soon, but overall I think it was a good update and, all things considered, I think we are on the right track and at least paying attention to the things we need to going forward.

by Sean McGinnis at February 11, 2019 12:00 AM

February 08, 2019

OpenStack Superuser

Walking the walk: Why it’s a crooked path for free software activists

At some point — and usually it’s early on — every free and open source software enthusiast bumps up against a big problem.
You believe in the principles of FOSS, but the world still essentially runs on proprietary software. Maybe it’s your employer. Your family and friends. Your bank. Every. Single. Airline. Your church, your club, your volunteer organization. Almost everything you do is tinged by it.

Bradley M. Kuhn and Karen Sandler feel your pain. Kuhn labored for years as executive director of the Free Software Foundation, where founder Richard Stallman shuns everything from mobile phones to loyalty cards on principle. Sandler needs a pacemaker and has been waging a long battle to access the software behind it.

The pair, who now work together at the Software Freedom Conservancy, say that while “ideally, it would be possible to live a software freedom lifestyle in the way a vegetarian lives a vegetarian lifestyle: minor inconveniences at some restaurants and stores, but generally most industrialized societies provide opportunity and resources to support that moral choice” they recognize we’re not there yet. Trying to be “purists” isn’t easy, they admit in a recent FOSDEM keynote.

The advent of network services mixing server-side secret software and proprietary JavaScript or apps is central to the decline in the ability to live a productive, convenient life in software freedom, they maintain. However, few in our community focus on the implications of this and related problems, and few now even try to resist. “We have tried to resist and while we have succeeded occasionally, we have failed overall to live life in software freedom,” they declare.

The SFC, for example, has a rule: people can use proprietary software if it helps them do their jobs. While he tried to use only FOSS, because he was in charge of the banking, Kuhn found himself with “this weird problem,” unable to sign in without JavaScript running in Firefox. So it came down to enabling it or “I’d be in the bank all day, every day, doing all of our transactions,” Kuhn says.

Sandler agrees. “If you’re functional in the world, if you need to take care of things yourself — book a flight or basically do anything on the web — you’ll find that same situation,” Sandler says. “This is the cruel reality.” They also share a few funny anecdotes about “outsourcing” proprietary software use  — getting other folks to call ride shares or use Yelp — but say they’ve since stopped the “farce.” At the SFC, if it’s a question of asking someone to interact with them using FOSS or staffers having to use proprietary software, they’ll take the onus of using proprietary “as a way to support other people’s software freedom.”

They did have some suggestions on how volunteer developers can most effectively focus efforts to build a world where everyone can live in software freedom: mindfulness minus obsession, getting out of your comfort zone, making small choices, spending the time to shine a light on the problem and not letting the paradox paralyze you. For more, check out their podcast (or oggcast, if you want to be precise) “Free as in Freedom” (FAIF.us).

Catch the whole 45-minute keynote from FOSDEM here.

H/T Thierry Carrez

The post Walking the walk: Why it’s a crooked path for free software activists appeared first on Superuser.

by Nicole Martinelli at February 08, 2019 04:51 PM

February 07, 2019

OpenStack Superuser

Private Storage-as-a-Service (STaaS) with OpenStack Cinder volumes for hybrid and multi-cloud

Private storage-as-a-service is billed as a way to easily scale cloud capacity and performance while keeping control over data and maintaining the freedom to connect to multiple public clouds.

In about seven minutes, you can get an overview of how to do it with OpenStack Cinder. The tutorial is the handiwork of students Shivangi Jadon, Ornella D’souza and Dhanaraj V Kidiyoor, supervised by faculty and research associate Sudheendra Harwalkar and Dr. Dinkar Sitaram at the Center for Cloud Computing and Big Data (CCBD) at PES University.
The basic steps:

  • Log in to three clouds (A, B, C)
  • Create volumes V1 and V2 in cloud A
  • Create instances in clouds B and C
  • Attach the volumes in cloud A to the instances in clouds B and C
  • Verify each attached volume by creating a file system and a file
  • Voilà: the file systems and files are created

The second half of the short video demos how to create a community STaaS:

  • Log in to three clouds (A, B, C)
  • Create an instance and a volume in all three
  • Attach the volume in one cloud to an instance in another cloud
  • Verify the volume attached to each instance
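Within a single cloud, the create/attach/verify steps map onto the standard OpenStack CLI. The transcript below is a hedged sketch of those steps, not the team's actual commands: instance and volume names are invented, and the cross-cloud attachment demoed in the video relies on the team's setup rather than the stock client.

```
$ openstack volume create --size 1 V1
$ openstack server add volume demo-instance V1
# then, on the instance, verify the new block device:
$ sudo mkfs.ext4 /dev/vdb
$ sudo mount /dev/vdb /mnt && sudo touch /mnt/hello
```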

The team also outlines a few alternative multi-cloud scenarios:

  • Move data storage to the cloud where the instance (VM) is running, for better performance
  • Attach multiple instances (VMs) to the same volume in a private/community cloud STaaS in a multi-cloud environment
  • Attach a private/community cloud STaaS volume to an instance (VM) in a public cloud (AWS, Azure, GCP, VMware, etc.)
  • Public cloud STaaS for user-specific needs

Get involved

Cinder is a block storage service for OpenStack. It virtualizes the management of block storage devices and provides end users with a self-service API to request and consume those resources without requiring any knowledge of where their storage is actually deployed or on what type of device. This is done through the use of either a reference implementation (LVM) or plugin drivers for other storage. You can learn more about Cinder (including documentation, code, bugs) on the wiki. If you’re interested in block storage for OpenStack, public meetings are held weekly in IRC on Wednesdays at 16:00 UTC.

What are you doing with open infrastructure? Superuser is always interested in community projects and knowledge sharing with tutorials. We cover OpenStack, Kubernetes, Kata Containers, AirShip, StarlingX, Ceph, Cloud Foundry, OVS, OpenContrail, Open Switch, OPNFV and more. Get in touch at editorATopenstack.org

The post Private Storage-as-a-Service (STaaS) with OpenStack Cinder volumes for hybrid and multi-cloud appeared first on Superuser.

by Superuser at February 07, 2019 04:52 PM

February 06, 2019

OpenStack Superuser

Continuous integration wanted: How a large online classified company uses Zuul

When you’re dominating the country’s online classified ads, you can’t afford continuous integration snags. That’s why France’s fifth most visited site, LeBonCoin, adopted Zuul for gating and changed their dev team structure to scale and move faster.

In a recent feature story on LeMagIt (Google translate here), LeBonCoin dev ops engineers Guillaume Chenuet, Sonia Ouchtar and Benoit Bayszczak outlined how they’re using Zuul, an open source CI/CD service.

Back in 2015, the site had only one team dedicated to continuous integration, with a system that relied on Gerrit for code reviews and Jenkins for test automation. A year later, they extended tooling infrastructure in order to collaborate more with other team members, Chenuet says. As the company reconfigured its architecture for micro-services, the team structure shifted from a more traditional siloed organization (front-end, back-end) to multi-disciplinary teams focused on site features. The reorganization helped to improve development cycles, but also created a bottleneck with the CI tools in place. That’s when they decided to take a page from the OpenStack Foundation playbook and adopt Zuul.

In the end, the move to Zuul was about optimizing processes at scale and parallel development. “We were already agile to a certain extent. But when version 2 of Zuul appeared we (realized) we’d reached the limit of the tools we had in place,” says Bayszczak.

With the previous monolithic infrastructure, build times were 45 minutes for each change, but with Zuul that was down to just five minutes, says Ouchtar. (They have since moved to Zuul v3.) The team deployed two OpenStack clusters to provide the infrastructure for running their tests. Their current CI setup consists of two clusters with three controllers, three storage components and five compute components, all in redundancy on two data centers with 100 VMs running on the clusters. The team expects to add more compute and additional OpenStack components in the future, perhaps making the services available to developers instead of reserving them only for the continuous integration teams.

Test it for yourself

Zuul is an open source CI/CD platform designed to tackle the complexity of open source integration by gating new code against multiple projects and systems before landing a single patch. Zuul currently supports Gerrit and GitHub and leverages the Ansible ecosystem for third-party modules. Zuul is a new top-level pilot project at the OSF but has been in development for six years and proven at scale supporting the OpenStack project.
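For a sense of what gating looks like to a project, here is a minimal, hypothetical .zuul.yaml (the job names are illustrative): changes run the check jobs when uploaded for review, and only merge once the gate jobs pass against the tip of the merge queue.

```yaml
# Hypothetical project configuration; job names are illustrative.
- project:
    check:
      jobs:
        - unit-tests
    gate:
      jobs:
        - unit-tests
        - integration-tests
```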

Get the Source
Zuul is Free and Open Source Software. Download the source from git.zuul-ci.org or install it from PyPI.

Read the Docs
Zuul offers extensive documentation.

Join the Mailing List
Zuul has mailing lists for announcements and discussions.

Chat on IRC
Join #zuul on FreeNode.

The post Continuous integration wanted: How a large online classified company uses Zuul appeared first on Superuser.

by Nicole Martinelli at February 06, 2019 05:19 PM

Stephen Finucane

Updating the Firmware for a Mellanox ConnectX-3 NIC

In a previous post, I provided a guide on configuring SR-IOV for a Mellanox ConnectX-3 NIC. I've since picked up a second one of these and was attempting to follow through on the same guide. However, when I attempted to "query" the device, I saw the following:

$ sudo mstconfig -d 02:00.0 query
Device #1:
----------
Device type:    ConnectX3
PCI device:     02:00.0
-E- Failed to query device: 02:00.0. Unsupported FW (version 2.
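When a card ships with firmware older than the tools support, the usual fix is to burn a newer image with mstflint. This is a hedged sketch rather than the post's actual resolution (the excerpt above is truncated); the firmware file name is illustrative and must match the board's exact PSID as reported by the query.

```
$ sudo mstflint -d 02:00.0 query
$ sudo mstflint -d 02:00.0 -i fw-ConnectX3-rel.bin burn
```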

February 06, 2019 03:41 PM

Trinh Nguyen

Searchlight weekly report - Stein R-12,11,10,9

For the last four weeks, we've been working on hardening our multi-cloud vision and preparing for the Open Infrastructure Summit in Denver this April [1]. The team submitted one session to discuss and showcase our progress on implementing the multi-cloud features [2] and is waiting for the voting results.

For the Denver summit, we decided to give a demonstration of Searchlight that has:

  • Search resources across multiple OpenStack Clouds [3]
  • Frontend UI that adds the views for multi-cloud search [4]

So, from now until the summit, we will focus on developing features [3] and [4] for Searchlight. For more details about our multi-cloud vision for Searchlight, please have a look at [5].

Btw, it's the Lunar New Year now in Viet Nam. HAPPY NEW YEAR!!!


[1] https://www.openstack.org/summit/denver-2019/
[2] https://etherpad.openstack.org/p/searchlight-denver-2019
[3] https://storyboard.openstack.org/#!/story/2004840
[4] https://storyboard.openstack.org/#!/story/2004872
[5] https://etherpad.openstack.org/p/searchlight-multi-cloud

by Trinh Nguyen (noreply@blogger.com) at February 06, 2019 07:32 AM

February 05, 2019

OpenStack Superuser

Diversity in tech: Make your voice heard

Help inform the OpenStack community on how to improve contributor diversity by taking the current survey.  It’s a speedy 16 questions total with queries about “Where do you identify on the gender spectrum?” (32 check boxes available), linguistic diversity, disability and a final fill-in-the-blank option to tell pollsters anything they didn’t ask but need to know about.

These surveys have been carried out periodically — along with independent research on gender in the OSF community, latest pre-print of that here. The first diversity survey was launched to sound out the community in the fall of 2015; the latest edition got started in August and is the first since the Women of OpenStack group folded into the Diversity & Inclusion Working Group.  That move was aimed at bringing in folks who don’t identify as women and making allies feel more welcome, says Amy Marrich, a former WOO member who now runs the D&I WG. “There have always been comments from allies about not attending things like the WOO networking lunch because they didn’t want to intrude. We definitely didn’t want to exclude anyone.”

This time the survey team is working with members of the CHAOSS project, a Linux Foundation project focused on creating analytics and metrics to help define community health. They’ll help analyze the results and feed them into the CHAOSS software and metrics development work so that other open source projects can benefit.

Please take the time to fill out the survey and share it with others in the community. You’ll find the survey at: https://www.surveymonkey.com/r/OpenStackDiversity

Get involved

Beyond the survey, you can participate in meetings of the Diversity & Inclusion WG, held every two weeks on Mondays at 17:00 UTC in the #openstack-diversity channel on IRC. You can also check out the current and previous agendas at the Etherpad.

And if you’re headed to the upcoming Open Infrastructure Summits, stay tuned. Superuser will feature updates on the workshops, lunches and events focusing on diversity.

The post Diversity in tech: Make your voice heard appeared first on Superuser.

by Superuser at February 05, 2019 05:08 PM

Carlos Camacho

TripleO - Deployment configurations

This post is a summary of the deployments I usually test for deploying TripleO using quickstart.

The following steps need to run in the Hypervisor node in order to deploy both the Undercloud and the Overcloud.

You need to execute them one after the other; the idea of this recipe is to have something you can just copy and paste.

Once the last step ends, you should be able to connect to the Undercloud VM to start operating your Overcloud deployment.

The usual steps are:

01 - Create the toor user (from the Hypervisor node, as root).

sudo useradd toor
echo "toor:toor" | sudo chpasswd
echo "toor ALL=(root) NOPASSWD:ALL" \
  | sudo tee /etc/sudoers.d/toor
sudo chmod 0440 /etc/sudoers.d/toor
sudo su - toor

Now, continue as the toor user and prepare the Hypervisor node for the deployment.

02 - Prepare the hypervisor node.

# Disable IPv6 lookups
sudo bash -c "cat >> /etc/sysctl.conf" << EOL
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOL

sudo sysctl -p

mkdir .ssh
ssh-keygen -t rsa -N "" -f .ssh/id_rsa
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
cat .ssh/id_rsa.pub | sudo tee -a /root/.ssh/authorized_keys
echo '' | sudo tee -a /etc/hosts

export VIRTHOST=
sudo yum groupinstall "Virtualization Host" -y
sudo yum install git lvm2 lvm2-devel -y
ssh root@$VIRTHOST uname -a

Now, let’s install some dependencies. Same Hypervisor node, same toor user.

03 - Clone repos and install deps.

git clone \
chmod u+x ./tripleo-quickstart/quickstart.sh
bash ./tripleo-quickstart/quickstart.sh \
sudo setenforce 0

Export some variables used in the deployment command.

04 - Export common variables.

export CONFIG=~/deploy-config.yaml
export VIRTHOST=

Now we will create the configuration file used for the deployment; depending on the file you choose, you will deploy different environments.

05 - Choose one of the following environment recipes.

OpenStack [Containerized & HA] - 1 Controller, 1 Compute

cat > $CONFIG << EOF
overcloud_nodes:
  - name: control_0
    flavor: control
    virtualbmc_port: 6230
  - name: compute_0
    flavor: compute
    virtualbmc_port: 6231
node_count: 2
containerized_overcloud: true
delete_docker_cache: true
enable_pacemaker: true
run_tempest: false
extra_args: >-
  --libvirt-type qemu
  --ntp-server pool.ntp.org
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml
EOF
OpenStack [Containerized & HA] - 3 Controllers, 1 Compute

cat > $CONFIG << EOF
overcloud_nodes:
  - name: control_0
    flavor: control
    virtualbmc_port: 6230
  - name: control_1
    flavor: control
    virtualbmc_port: 6231
  - name: control_2
    flavor: control
    virtualbmc_port: 6232
  - name: compute_1
    flavor: compute
    virtualbmc_port: 6233
node_count: 4
containerized_overcloud: true
delete_docker_cache: true
enable_pacemaker: true
run_tempest: false
extra_args: >-
  --libvirt-type qemu
  --ntp-server pool.ntp.org
  --control-scale 3
  --compute-scale 1
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml
EOF
OpenShift [Containerized] - 1 Controller, 1 Compute

cat > $CONFIG << EOF
# Original from https://github.com/openstack/tripleo-quickstart/blob/master/config/general_config/featureset033.yml
composable_scenario: scenario009-multinode.yaml
deployed_server: true

network_isolation: false
enable_pacemaker: false
overcloud_ipv6: false
containerized_undercloud: true
containerized_overcloud: true

# This enables TLS for the undercloud which will also make haproxy bind to the
# configured public-vip and admin-vip.
undercloud_generate_service_certificate: false
undercloud_enable_validations: false

# This enables the deployment of the overcloud with SSL.
ssl_overcloud: false

# Centos Virt-SIG repo for atomic package
  # NOTE(trown) The atomic package from centos-extras does not work for
  # us but its version is higher than the one from the virt-sig. Hence,
  # using priorities to ensure we get the virt-sig package.
  - type: package
    pkg_name: yum-plugin-priorities
  - type: generic
    reponame: quickstart-centos-paas
    filename: quickstart-centos-paas.repo
    baseurl: https://cbs.centos.org/repos/paas7-openshift-origin311-candidate/x86_64/os/
  - type: generic
    reponame: quickstart-centos-virt-container
    filename: quickstart-centos-virt-container.repo
    baseurl: https://cbs.centos.org/repos/virt7-container-common-candidate/x86_64/os/
      - atomic
    priority: 1

extra_args: ''

container_args: >-
  # If Pike or Queens
  #-e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml
  # If Ocata, Pike, Queens or Rocky
  #-e /home/stack/containers-default-parameters.yaml
  # If >= Stein
  -e /home/stack/containers-prepare-parameter.yaml

  -e /usr/share/openstack-tripleo-heat-templates/openshift.yaml
# NOTE(mandre) use container images mirrored on the dockerhub to take advantage
# of the proxy setup by openstack infra
docker_openshift_etcd_namespace: docker.io/
docker_openshift_cluster_monitoring_namespace: docker.io/tripleomaster
docker_openshift_cluster_monitoring_image: coreos-cluster-monitoring-operator
docker_openshift_configmap_reload_namespace: docker.io/tripleomaster
docker_openshift_configmap_reload_image: coreos-configmap-reload
docker_openshift_prometheus_operator_namespace: docker.io/tripleomaster
docker_openshift_prometheus_operator_image: coreos-prometheus-operator
docker_openshift_prometheus_config_reload_namespace: docker.io/tripleomaster
docker_openshift_prometheus_config_reload_image: coreos-prometheus-config-reloader
docker_openshift_kube_rbac_proxy_namespace: docker.io/tripleomaster
docker_openshift_kube_rbac_proxy_image: coreos-kube-rbac-proxy
docker_openshift_kube_state_metrics_namespace: docker.io/tripleomaster
docker_openshift_kube_state_metrics_image: coreos-kube-state-metrics

deploy_steps_ansible_workflow: true
config_download_args: >-
  -e /home/stack/config-download.yaml
composable_roles: true

  - name: Controller
    CountDefault: 1
      - primary
      - controller
      - External
      - InternalApi
      - Storage
      - StorageMgmt
      - Tenant
  - name: Compute
    CountDefault: 0
      - compute
      - External
      - InternalApi
      - Storage
      - StorageMgmt
      - Tenant

tempest_config: false
test_ping: false
run_tempest: false
EOF

From the Hypervisor, as the toor user, run the deployment command to deploy both your Undercloud and Overcloud.

06 - Deploy TripleO.

bash ./tripleo-quickstart/quickstart.sh \
      --clean          \
      --release master \
      --teardown all   \
      --tags all       \
      -e @$CONFIG      \

Updated 2019/02/05: Initial version.

Updated 2019/02/05: TODO: Test the OpenShift deployment.

Updated 2019/02/06: Added some clarifications about where the commands should run.

by Carlos Camacho at February 05, 2019 12:00 AM

February 04, 2019

OpenStack Superuser

How to get production-ready Kubernetes in OpenStack public clouds

Platform services have come a long way. Not only are they becoming more popular, but they’re also driving true multi-cloud interoperability, says Feilong Wang, head of R&D at Catalyst Cloud, a public cloud based in New Zealand built on OpenStack.

The combination of OpenStack and Kubernetes is becoming a standard option that allows users to benefit from both virtual machines and containers for their cloud-native applications, he adds. Wang and colleague Xingchao Yu talked about the company’s journey and shared a demo at the recent Linux Conf Australia.

Managed vs. metal

The first question is why you would use a managed Kubernetes service instead of just building something on bare metal. Wang cites two recent examples — Atlassian’s experience of Kubernetes “the hard way” and a GitHub discussion about the production readiness of Microsoft AKS — of the difficulties involved. “Don’t get me wrong, I’m just saying that building a production-ready Kubernetes service is not easy, that’s why we wanted to share our journey,” he clarifies.

What does production-ready mean, anyway?

There are as many opinions on this as engineers, but at Catalyst they define it using four factors: strong data security, high availability/resiliency, good performance and scalability, and ease of use. Here’s a breakdown of each:

Strong data security
• RBAC backed by Keystone
• Network policies
• Rolling upgrades and patching

Good performance/scalability
• Network performance
• Storage performance
• Time to deploy the cluster
• Horizontal scalability (auto-scaling)

High availability/resiliency
• Highly available master nodes
• Highly available worker nodes
• Auto-healing

And, finally, ease of use. Wang says most of Catalyst’s customers use Terraform or Ansible to talk to the OpenStack API; for container orchestration with Magnum, Catalyst also provides an API to manage the cluster. He also points out that they release all the work they’ve done so that the community can benefit from it. “We upstream everything – we don’t have any secret code,” he adds. As for what they’re working on upstream, the current list includes: health checks and auto-healing, rolling upgrades, Octavia ingress controller and an ingress controller for DNS service Designate.

It hasn’t always been smooth sailing, however. Wang talks about a lag — 15 minutes, or many cups of coffee in engineer time — in creating a production cluster. Users get two load balancers in front of multiple masters, so for the two load balancers under the hood, OpenStack needs to create at least four working machines to get the load balancer. That means creating three master nodes and two or three worker nodes to get into production. So it’s a “pretty big stack” but “we’re working on improving that,” he says. Another limitation, Wang notes, is getting Kubernetes Docker images from Docker Hub. When a user creates a cluster, OpenStack Magnum needs to talk to Docker Hub for those images, wherever they are stored — most likely outside New Zealand — creating latency. Catalyst plans to create a local container registry to resolve the issue.

Xingchao Yu, cloud engineer at Catalyst Cloud, offered this demo to show how users can create Kubernetes clusters in just a few clicks using two templates.

Catch the entire 45-minute session here.

The post How to get production-ready Kubernetes in OpenStack public clouds appeared first on Superuser.

by Superuser at February 04, 2019 03:08 PM

February 03, 2019

Sean McGinnis

Gofish - A golang client library for Redfish and Swordfish

One of the things I have been looking in to for the OpenSDS project is adding support for the Swordfish API standard. I believe that by providing a standardized API, OpenSDS could both make it attractive for storage vendors to natively integrate — with one integration they would pick up support for CSI, Cinder, Swordfish, and other integration points — and make OpenSDS an interesting option for those needing to manage a heterogeneous mix of storage devices in their data center in a common, single-pane-of-glass way.

We are still looking at how this could be integrated and how much focus it should have. But it became clear to me that we would need a good way to exercise this API if/when it is in place.

We are also considering southbound integration in OpenSDS to be able to manage Swordfish-enabled storage. This would enable OpenSDS support for storage without needing to write any custom code if they have already made the investment in exposing a Swordfish API. If we were to do that, we would need a client library that would allow us to easily interact with these devices.

So between the two approaches being considered, along with wanting to get a little more time and experience with golang, I decided to start implementation on an open library for Redfish and Swordfish. I’ve just pushed up a very rudimentary start with the Gofish library.

What is Swordfish

The DMTF released the first Redfish standard in 2015. The goal of Redfish was to replace IPMI and other proprietary APIs for managing data center devices with a simple, standard, RESTful API that would provide consistency and ease of use across physical, hybrid, and virtual IT devices.

For years, the Storage Network Industry Association (SNIA) had been working on SMI-S as a standard storage management protocol. There was industry adoption, but there was still a general sense that SMI-S was too big and too complex for both implementors and consumers of the API.

SNIA recognized the simplicity and ease of use of the Redfish specification and chose to extend the DMTF spec with storage related objects and capabilities. These additions can be seen (at a high level) with the purple objects in this model from a presentation by Richelle Ahlvers, the chair of the Scalable Storage Management Technical Working Group (SSM TWG):

Object model

Introducing Gofish

To start, I just used the API responses from the Swordfish API emulator as a reference to get a very basic client library working, which I have called Gofish. Cause I’m just so clever and witty like that.

This is very, very rudimentary at this point. There is no authentication mechanism yet, something that any real Redfish or Swordfish provider will surely require. The object model is also very limited and incomplete. Luckily, all of the Redfish and Swordfish schemas are published in easy-to-consume YAML or JSON formats, which should make it easy to script up the generation of the rest of the schema.
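As a rough sketch of what scripting that generation could look like, the snippet below maps a hand-abbreviated, hypothetical fragment in the style of a Redfish JSON schema to Go struct fields. The real schemas are far larger and richer, but the mechanics are the same.

```python
import json

# A hand-abbreviated fragment in the style of a Redfish/Swordfish JSON
# schema. This subset is illustrative, not a published schema.
SCHEMA_FRAGMENT = """
{
    "title": "Volume",
    "properties": {
        "Id": {"type": "string"},
        "Name": {"type": "string"},
        "CapacityBytes": {"type": "integer"}
    }
}
"""

# Map JSON-schema primitive types to Go types.
GO_TYPES = {"string": "string", "integer": "int64", "boolean": "bool"}

def to_go_struct(schema: str) -> str:
    """Render a JSON-schema fragment as a Go struct definition."""
    spec = json.loads(schema)
    fields = [
        f"\t{name} {GO_TYPES[prop['type']]} `json:\"{name}\"`"
        for name, prop in spec["properties"].items()
    ]
    return "type %s struct {\n%s\n}" % (spec["title"], "\n".join(fields))

print(to_go_struct(SCHEMA_FRAGMENT))
```

Running a generator like this over every published schema file would stamp out the bulk of the object model mechanically.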

Even in its basic state, I did want to get this out there in case anyone else is interested. I’ve published it under the Apache 2 license in a public GitHub repo, so if you’re interested in contributing, feel free to propose pull requests and try out the code.

I am hoping to work on this as time permits to get it more full-featured and robust. I am limited by my access to Redfish- and Swordfish-enabled devices, so I may be constrained until I can track down something more than the emulator to use for testing. But hopefully this project evolves into something useful as more devices implement this support and more management tools are developed to take advantage of it.

by Sean McGinnis at February 03, 2019 12:00 AM

February 01, 2019

OpenStack Superuser

Superuser Awards nominations open for Open Infrastructure Summit

Nominations for the Open Infrastructure Superuser Awards are open and will be accepted through midnight (Pacific Daylight Time) March 22, 2019.

All nominees will be reviewed by the community, and the Superuser editorial advisors will determine the winner, who will be announced onstage at the Summit in late April.

Open infrastructure means providing infrastructure resources to developers and users by integrating various open source solutions. The benefits are obvious, whether that infrastructure is provided in a private or a public context: the absence of lock-in, the power of interoperability opening up new possibilities, and the ability to look under the hood, fix things yourself, improve the software and contribute back your changes.

The Superuser Awards recognize teams using open infrastructure to meaningfully improve business and differentiate in a competitive industry, while also contributing back to the open-source community.  They aim to cover the same mix of open technologies as our publication, namely OpenStack, Kubernetes, Kata Containers, AirShip, StarlingX, Ceph, Cloud Foundry, OVS, OpenContrail, Open Switch, OPNFV and more.

Teams of all sizes are encouraged to apply. If you fit the bill, or know a team that does, we encourage you to submit a nomination here.

After the community has a chance to review all nominees, the Superuser editorial advisors will narrow it down to four finalists and select the winner.

When evaluating winners for the Superuser Award, judges take into account the unique nature of the use case(s), as well as integrations and applications by a particular team. Questions include: how is this team innovating with open infrastructure, for example by working with container technology, NFV or unique workloads?

Additional selection criteria include how the workload has transformed the company’s business, including quantitative and qualitative performance results as well as community impact in terms of code contributions, feedback and knowledge sharing.

Winners will take the stage at the Open Infrastructure Summit in Denver. Submissions are open now until March 22, 2019. You’re invited to nominate your team or someone you have worked with, too.

Since the awards launched at the Paris Summit in 2014, the community has honored winners at every Summit: users who show how open infrastructure is making a difference and providing strategic value in their organization. Past winners include CERN, Comcast, NTT Group, AT&T and Tencent TStack.

For more information about the Superuser Awards, please visit http://superuser.openstack.org/awards.

The post Superuser Awards nominations open for Open Infrastructure Summit appeared first on Superuser.

by Superuser at February 01, 2019 04:24 PM

January 31, 2019

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Welcome to the latest edition of the OpenStack Foundation Open Infrastructure Newsletter, a digest of the latest developments and activities across open infrastructure projects, events and users. Sign up to receive the newsletter and email community@openstack.org to contribute.

Spotlight on… StarlingX

StarlingX, a pilot project supported by the OpenStack Foundation (OSF), provides an integrated open source platform optimized for edge computing and IoT use cases. StarlingX integrates well-known open source components — Ceph, Kubernetes, relevant OpenStack services among others — taking them to the next level to meet strict requirements like providing low latency and high bandwidth for applications with a small footprint on the infrastructure layer. The platform is designed to be scalable and the community is continuously working on making it more flexible, reliable and feature rich.

The StarlingX community held its first contributor meetup January 15-16 in Chandler, Arizona to discuss topics relevant to design, development and testing activities, onboarding, and processes.

Earlier, the community had decided to switch to two project releases per year starting with the next release scheduled for May 2019, based on OpenStack Stein. Subsequent StarlingX software releases will align with the OpenStack release cadence. At the meetup, the community discussed the status of items on the release roadmap to agree on an execution plan and priorities. Discussion points included the status of items the community is working on to contribute to upstream projects such as OpenStack Nova and Neutron. Additional topics included items that can stretch to multiple releases including containerization of control plane services, multi operating system (multi-OS) support and further support for mixed workloads including VMs, containers and bare metal.

StarlingX is supported by a growing community and the members are prioritizing onboarding and documentation to help users and new contributors try out the software and get involved. The community is currently developing a hands-on workshop to guide attendees through the installation process and highlight some software features.

The next Technical Steering Committee (TSC) election will occur in the second quarter of this year with five seats open.

If you’d like to get involved in the community in general:

OpenStack Foundation news

  • After you register for the Open Infrastructure Summit Denver (by the February 27 early bird deadline!), set your sights on Shanghai, China the week of November 4, 2019. Registration and sponsorship opportunities will be available soon, but you can sign up for updates here.
  • Today, community voting for the Open Infrastructure Summit Denver opened! Vote for your favorite session ideas before February 4 to help the Programming Committees shape the final schedule. Let’s bring the best possible content to Denver!
  • This weekend, the OpenStack Foundation will have a booth at FOSDEM. Stop by the booth to meet members of the open infrastructure community and get involved.
  • The Diversity & Inclusion Working Group is conducting an anonymous survey to better understand the diversity and makeup of the community. Participation is appreciated so we can better understand and serve the community. Share any questions with working group chair, Amy Marrich (spotz on IRC).

OpenStack Foundation Project News


  • The next OpenStack Ops meetup will take place in Berlin, March 6 – 7. For more details, see the event planning Etherpad.
  • OpenStack SIGs (special interest groups) are groups of contributors interested in working on a specific problem space in OpenStack. The newest one (lucky number 13!) is the Auto-scaling SIG, dedicated to improving the user experience for auto-scaling and related features including metering, cluster scheduling and application lifecycle management.
  • Technical Committee elections are coming up. Nominations open February 12 and voting launches February 26.


Kata Containers

Questions / feedback / contribute

This newsletter is edited by the OpenStack Foundation staff to highlight open infrastructure communities. We want to hear from you!
If you have feedback, news or stories that you want to share, reach us through community@openstack.org and to receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by OpenStack Foundation at January 31, 2019 05:36 PM

SUSE Conversations

Why was 2018 a landmark year for Cloud computing? And what’s next on the horizon?

Let me start with an obvious statement: cloud computing is continuing to grow – and really fast. That’s hardly headline news anymore. “Cloud” has been around for nearly 20 years now and it’s a constant theme within the IT industry. A big year for cloud: so, why was last year such a standout year for […]

The post Why was 2018 a landmark year for Cloud computing? And what’s next on the horizon? appeared first on SUSE Communities.

by Terri Schlosser at January 31, 2019 01:00 PM

January 29, 2019

John Likes OpenStack

How do I re-run only ceph-ansible when using tripleo config-download?

After config-download runs the first time, you may do the following:

cd /var/lib/mistral/config-download/
bash ansible-playbook-command.sh --tags external_deploy_steps

The above runs only the external deploy steps which, for the ceph-ansible integration, means running the Ansible tasks that generate the inventory and then executing ceph-ansible.

More on this in TripleO config-download User’s Guide: Deploying with Ansible.

If you're using the standalone deployer, then config-download does not provide the ansible-playbook-command.sh script. You can work around this by doing the following:

cd /root/undercloud-ansible-su_6px97
ansible -i inventory.yaml -m ping all
ansible-playbook -i inventory.yaml -b deploy_steps_playbook.yaml --tags external_deploy_steps

The above makes the following assumptions:

  • You ran standalone with `--output-dir=$HOME` as root and that undercloud-ansible-su_6px97 was created by config download and contains the downloaded playbooks. Use `ls -ltr` to find the latest version.
  • If you're using the newer python3-only versions you ran something like `ln -s $(which ansible-3) /usr/local/bin/ansible`
  • That config-download already generated the overcloud inventory.yaml (the second command above is just to test that the inventory is working)

by John (noreply@blogger.com) at January 29, 2019 10:26 PM

OpenStack Superuser

Takeaways from the first StarlingX contributor meetup

As the StarlingX project is still in its infancy, it’s crucial to have face time that developers as well as leaders can use to discuss both technical and process-related matters to ensure the community has a balanced ecosystem and a clear roadmap to execute on. To start the year with a clear focus, the community held its first contributor meetup, hosted by Intel in Chandler, Arizona on January 15-16.

During the two-day event the attendees discussed – both in person and remotely – a wide range of topics from release planning through documentation and testing to onboarding. I’d like to give you a short summary of the main topics (as I remember them!) so that you can get up to speed and get involved.

Release planning

The StarlingX community is switching to two releases per year, with the next one targeted for the end of May 2019 — closer than you think. With this in mind, it’s very important to keep both the release timelines and goals under review, and to plan carefully to keep the roadmap on track.

I’d like to note here that the upcoming release will include the Stein version of the integrated OpenStack services. Along with this move, the release cadence will also align with OpenStack from there onwards to ensure that the StarlingX platform incorporates the latest features, fixes and enhancements of the upstream projects.

During the meetup, we went through the items planned for the release and also checked the milestone plans to see if there’s anything to fine tune. A few of the bigger items under discussion for the upcoming releases were containerization both for control plane services as well as supporting mixed workloads and multi-operating system (multi-OS) support. You can find more about the plan on the StarlingX Release Plan wiki page.


Documentation

For most people, this is one of the least interesting topics but it’s crucial to have good documentation for StarlingX to guide users as well as contributors about how the software and the community operate.

Here’s the team’s near-term priority list:

  • API Guide
  • Build Guide
  • Install Guide
  • Deployment Guide

Longer-term plans include documents such as an Operations Guide. Most of the above documents already exist but there’s always room for improvement and they also need to be kept up to date with regard to the feature development activity. Meetup attendees discussed review culture and ways to ensure documentation coverage along with the code changes. You can find existing guides on the StarlingX documentation web page and open tasks to work on in StoryBoard.

Compiler flags

This came up last year in connection to security hardening activities. The community agreed to decide on a per-item basis while also working on testing the performance impact of changes in compiler flag and similar settings.


Testing

The community is actively working on testing the software both at the function level and as an integrated package. While unit testing is a more straightforward activity to cover, some of the new sub-projects still need the framework to be set up. Functional and integration testing falls into a similar bucket, with some further DevStack integration in flight to ensure a flexible framework that makes it possible to test the APIs and the integration points between the different services. Similarly to the documentation activities, the community is working on integrating testing into the design and development process to avoid it becoming an afterthought.

Besides the code-level coverage, the Test team is also working on plans for sanity, stability, regression and performance testing. Because StarlingX provides a full-stack platform it’s very important to perform tests beyond checking the basic functionality which only ensures the expected behavior of the APIs. Having the full platform under test can be challenging from the point of view of available hardware resources as well as making sure that the pipeline feeds the information back to the development process as a continuous feedback loop.

One of the biggest challenges is running performance tests reliably in an open environment, since those tests are usually tied to specific hardware configurations and so forth. As high performance is in high demand for many edge computing use cases, we need to know if a change significantly affects the platform’s performance, which is why putting together a test plan to cover this path is one of the priorities.


Onboarding

While StarlingX is still a young community, it already has a large code base, many integrated services and established processes to follow. Providing easy entry points for newcomers who are interested in participating in the sub-teams’ activities is a high priority for the project. We discussed different ways to boost these onboarding efforts.

Documentation came up during this section as a base criteria to help people understand how StarlingX works both as software as well as a community. Furthermore, we talked about visualizing easy tasks or bugs to fix as potential starting points to contribute.

Beyond documenting better how to get started with the software and the community, we also discussed providing office hours where experts could answer any questions that newcomers have in a more synchronous way. The attendees voted for IRC as a platform to experiment with this idea by using the #starlingx channel on Freenode for the purpose. Stay tuned for further information on this while the community works out the hours.

For a more hands-on experience, the community is planning to provide a workshop during the Open Infrastructure Summit in Denver to show you how to deploy StarlingX and introduce a few of its features. Stay tuned for further information on the Denver Summit schedule.


Outreach

It’s a high-priority item for the community to reach out to new users as well as adjacent communities in edge computing to better understand their needs and provide better integration points. You’ll find community members participating in relevant industry events, workshops and meetups where you can ask questions about StarlingX and how to become part of the community.


Elections

As the community is setting up its governance models, members decided to choose leaders through an election process. The first election is scheduled for the second quarter of 2019, when five of the Technical Steering Committee (TSC) seats will be up for election. If you’re interested in running in the TSC elections, make sure that you are actively participating so the community gets to know you. We’re still working on the election process and exact dates; you can check the governance web page for updates.

That’s all for my summary. While it may seem like a long list, we had even more discussions and brainstorming sessions over the two days of the meetup. If you have any questions about the above items, or if you’re interested in trying out the software or getting involved in the community, please reach out on the mailing list or IRC, or join one of the weekly community meetings. You can find further details and pointers on the StarlingX website.

In addition to other events, you’ll find the StarlingX Community at the Open Infrastructure Summit and Project Teams Gathering (PTG) this April in Denver to either get started with the software and learn more about the community or participate in the next round of design and planning discussions.

Don’t miss it!


The post Takeaways from the first StarlingX contributor meetup appeared first on Superuser.

by Ildiko Vancsa at January 29, 2019 04:46 PM

January 28, 2019

OpenStack Superuser

Cloud native computing at scale: Avoid becoming the next crypto-mine for hackers

A couple of high-profile container breaches have shown that lax security can be a real gold mine for hackers.

Tesla’s breach, for starters, began with a Kubernetes misconfiguration. The team left the admin console control panel open — without a password.

“There’s a lot of hackers who get in then deploy crypto-mining tools. They’re basically using Tesla’s resources to mine for coins,” says Kashif Zaidi, principal consultant at Aqua Security. In a Jenkins breach, a misconfig was money in the bank. “They made three million dollars, then those guys could afford to buy a Tesla.”

Zaidi discussed these notable crypto-jacking breaches involving container technologies, along with general vulnerabilities in container environments, at a recent meetup hosted by Nebulaworks. The Docker Hub breach happened in a similar way: hackers found images that had backdoor access, so when people deployed those images, the hackers were notified and could then log on and deploy crypto-mining tools.

“These are major examples, but if you Google container breaches you’ll find there’s so many examples of people getting hacked and they’re all pretty much doing crypto mining tools,” Zaidi says. “It’s easy and it is really untraceable. The coin goes somewhere and someone makes money.”

Security is best introduced at build time, he says. “The best practice is to integrate with your CI/CD to introduce security early on.” That way you’ll avoid having to drop everything to fix it later.

What’s another way you can avoid becoming the next cash cow for hackers? Monitor, monitor, monitor. A survey found that 92 percent of IT and security professionals reported concerns about security risks due to misconfiguration. Despite that, fewer than a third are continuously monitoring for misconfiguration. And while 82 percent reported security and compliance incidents due to cloud infrastructure misconfiguration, few companies have automated remediation processes that can prevent them.

Tools you can use

Zaidi says that Kube Bench, a Go application available on GitHub and developed by Aqua, can help change that. The application checks whether Kubernetes is deployed according to security best practices. It runs the checks documented in the CIS Kubernetes Benchmark, and tests are configured with YAML files, making it easy to update as test specifications evolve.

MicroScanner is another tool Aqua developed and available on GitHub that lets you check your container images for vulnerabilities. If your image has any known high-severity issue, MicroScanner can fail the image build, making it easy to include as a step in your CI/CD pipeline.
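The fail-the-build pattern Zaidi describes can be sketched in a few lines of Python. The JSON report format below is invented for illustration (it is not MicroScanner's actual output); the idea is simply that a CI step parses the scanner's report and returns a nonzero exit code when high-severity findings appear.

```python
import json

# Hypothetical scanner report; real tools use their own formats, so
# treat this structure as illustrative only.
REPORT = """
{
    "image": "myapp:latest",
    "vulnerabilities": [
        {"id": "CVE-2019-0001", "severity": "low"},
        {"id": "CVE-2019-0002", "severity": "high"}
    ]
}
"""

def gate(report: str, fail_on: str = "high") -> int:
    """Return a shell-style exit code: 1 if any finding matches fail_on."""
    findings = json.loads(report)["vulnerabilities"]
    blocking = [v["id"] for v in findings if v["severity"] == fail_on]
    for cve in blocking:
        print(f"blocking finding: {cve}")
    return 1 if blocking else 0

exit_code = gate(REPORT)
print("exit code:", exit_code)  # exit code: 1
```

Wired into a pipeline, a nonzero return from a step like this is what stops a vulnerable image from ever being pushed.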

Catch the whole hour-long presentation below.

Cover photo // CC BY NC

The post Cloud native computing at scale: Avoid becoming the next crypto-mine for hackers appeared first on Superuser.

by Superuser at January 28, 2019 05:36 PM

SUSE Conversations

SUSE OpenStack Cloud 9 Beta 7 is out!

We are happy to announce the release of SUSE OpenStack Cloud 9 Beta 7! Cloud Lifecycle Manager is available: today we are releasing SUSE OpenStack Cloud 9 CLM along with SUSE OpenStack Cloud 9 Crowbar! You will now find it in the download area: SUSE-OPENSTACK-CLOUD-9-x86_64-Beta7-DVD1.iso, the ISO to install Cloud 9 with Cloud Lifecycle Manager, […]

The post SUSE OpenStack Cloud 9 Beta 7 is out! appeared first on SUSE Communities.

by Vincent Moutoussamy at January 28, 2019 02:59 PM

January 25, 2019

OpenStack Superuser

Acting fast: Disaster recovery-as-a-service with OpenStack Keystone

In less time than it takes some people to reverse out of a parking lot, you can get started on multi-cloud disaster recovery-as-a-service (DRaaS) using OpenStack Keystone Federation.

This video tutorial, complete with jaunty music that will presumably keep people in disaster-recovery mode upbeat, lasts under seven minutes.

Disaster Recovery-as-a-Service (DRaaS) has been called a key business workload for the cloud; Keystone is an OpenStack service that provides API client authentication, service discovery, and distributed multi-tenant authorization by implementing OpenStack’s Identity API. It supports LDAP, OAuth, OpenID Connect, SAML and SQL.

The tutorial is the handiwork of student Shivangi Jadon, supervised by faculty and research associate Sudheendra Harwalkar at the Center for Cloud Computing and Big Data (CCBD) at PES University. It first gives an overview of the federated cloud services framework architecture and a typical multi-cloud environment, then offers a demo of DRaaS on multiple OpenStack clouds.

The main steps:

  • Create instance and volume in cloud-A
  • Instance and volume backup from cloud-A to cloud-C
  • Recovery in cloud-C
  • Recovery in cloud-B using backed-up snapshots in cloud-C
  • Remote cloud B instance and volume backup and recovery from cloud-A

The tutorial runs just six minutes and fifty-two seconds, which is less time than it takes most people to fall asleep or finish one of those trendy workouts.

Check out the full video below.

Get involved

You can learn more about Keystone (including documentation, code, bugs) on the wiki.  If you’re interested in identity, authentication, authorization, and/or policy for OpenStack, public meetings are held weekly on IRC in #openstack-meeting, on Tuesdays at 18:00 UTC or check out more use cases here.

What are you doing with open infrastructure? Superuser is always interested in community projects and knowledge sharing with tutorials. We cover OpenStack, Kubernetes, Kata Containers, AirShip, StarlingX, Ceph, Cloud Foundry, OVS, OpenContrail, Open Switch, OPNFV and more. Get in touch at editorATopenstack.org

The post Acting fast: Disaster recovery-as-a-service with OpenStack Keystone appeared first on Superuser.

by Superuser at January 25, 2019 03:06 PM

Chris Dent

Placement Update 19-03

Hello, here's a quick placement update. This will be just a brief summary rather than the usual tome. I'm in the midst of some other work.

Most Important

Work to complete and review changes to deployment to support extracted placement is the main thing that matters.

The next placement extraction status checkin will be 17.00 UTC, February 6th.

What's Changed

  • Changes to allow database status checks in the placement-status upgrade check have either merged or will soon. These combine with online data migrations to ensure that an upgraded installation has healthy consumers and resource providers.
  • libvirt vgpu reshaper code is ready for review and has an associated functional test. When that stuff merges the main remaining extraction-related tasks are in the deployment tools.
  • os-resource-classes 0.2.0 was released, adding the PCPU class.


Main Themes



Deployment related changes:

Delete placement from nova.


Please refer to last week's update for lots of pending changes.

by Chris Dent at January 25, 2019 12:47 PM

January 24, 2019

OpenStack Superuser

Testing, one two three: How these OPNFV tools can help any open infrastructure project

As the number of open-source projects booms, so does the need for resiliency and interoperability testing.
The Open Platform for NFV (OPNFV) community spent about four years of collective brainpower developing testing tools that can come in handy for open-source projects.

These tools can be run against many different types of deployments as well as test different components of a system (VIM, Vswitch, storage, etc.). They can also perform different types of testing including infrastructure verification, feature validation, stress and resiliency testing, performance benchmarking and characterization.

OPNFV is a collaborative open-source platform for network functions virtualization. This niche specialty — even in the open-source community — explains why people from other communities weren’t aware of what was being developed in this “weird telco NFV domain,” says Georg Kunz, a senior systems designer at Ericsson, who gave a talk about the tools at the Berlin Summit. “That’s a little unfortunate. Most of the test cases and tools that we’ve developed and the methodology around them is actually valuable for everybody.”

Here’s a brief overview of the three areas in the OPNFV testing ecosystem. The first is functional testing, called func test, “a fairly evolved and fairly flexible framework” that offers pre-integrated upstream test tools including RefStack, Tempest, OPNFV-specific VNF tests and application-level Kubernetes tests.

Func test is built around an internal framework called cross testing that allows users to hook in different test components and describe how to call them. It focuses on the API layer of either your OpenStack or Kubernetes deployment and treats the system underneath as a black box, just talking to the API. Func test runs as a series of Docker containers split up according to test tier – health check, smoke testing and so on. The second area is non-functional testing to measure system performance and characteristics: Yardstick (NFVI and VNF performance), Bottlenecks (load tests, staging manager), Vsperf (NFVI dataplane performance) and StorPerf (NFVI storage performance). And, finally, the third is compliance verification with Dovetail.

Kunz also says that in addition to trying out these tools, the community hopes for input and feedback. Check out the OPNFV test working group, or reach the project through the mailing list using the project name as a hashtag.

For more, watch the whole presentation here or download the slides.

The post Testing, one two three: How these OPNFV tools can help any open infrastructure project appeared first on Superuser.

by Superuser at January 24, 2019 03:02 PM

January 23, 2019

OpenStack Superuser

How to get a travel grant to the Open Infrastructure Summit

Open infrastructure runs on the power of key contributors.

If you have helped out and want to attend the upcoming Project Team Gathering (PTG) or the  Open Infrastructure Summit but need funds for travel, lodging or a conference pass, the Travel Support Program is here for you.

For every Summit, the OpenStack Foundation funds around 30 dedicated contributors from the open infrastructure community to attend.  Contributors to projects including Kubernetes, Kata Containers, AirShip, StarlingX, Ceph, Cloud Foundry, OVS, OpenContrail, Open Switch, OPNFV are invited to apply by February 27.

You don’t have to be a code jockey, either. In addition to developers and reviewers, the Travel Support Program welcomes documentation writers, organizers of user groups around the world, translators and forum moderators. (The program doesn’t include university students, however, who are encouraged to apply for a discounted registration pass.)

Although applying is a quick process, remember to frame your request clearly. Spend some time answering the question about why you want to attend the Summit. If you make it all about how you’d like to network or visit the town where the summit is taking place, your request is likely to get voted down.

Applications are voted on by the Travel Committee, which is made up of three people each from the User Committee, Board of Directors, OpenStack Ambassadors, Project Team Leads/Technical Committee members and Foundation staff. The composition of the committee is refreshed for each Summit.

“The biggest common mistake people make is not conveying their value to the community,” says Allison Price, marketing coordinator at the OpenStack Foundation who also participates in voting on the applications. “Focus on what you can contribute to the discussions or pinpoint sessions that would be useful to your business and you have a much better chance.”

Asking your company to pay for part of the expenses or finding a buddy to room with won’t influence your chances of getting in. However, she does recommend at least asking if your company will cover some of the costs — because often your employer is happy to chip in, and it allows the Foundation to help more people.

Approved grantees to the Summit will be notified by March 12, 2019.

The post How to get a travel grant to the Open Infrastructure Summit appeared first on Superuser.

by Superuser at January 23, 2019 03:05 PM


Upgrade Cinder: what’s new for the block storage service

Dear Users,

The block storage service is now at the Pike version in our two regions. This version of the component offers some new features and improvements.

New Cinder API version

A new endpoint for block storage service, corresponding to the new API 3.0, has been added in the service catalog. This API is implemented using the microversion framework, allowing changes to the API while maintaining backwards compatibility.

The basic idea is that your request can be processed with a particular version of the API. This is done with an HTTP OpenStack-API-Version header, whose version number increases semantically from 3.0.

OpenStack-API-Version: volume <version> 

To use microversions you can export the OS_VOLUME_API_VERSION variable with the microversion you wish to use.

The 3.0 API microversion includes all the main v2 APIs, and the /v3 URL is used to call the 3.0 APIs.
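As an illustration of how a client supplies that header, here is a Python sketch using only the standard library. The endpoint URL is a placeholder, and the 3.40 microversion pinned here is the revert-to-snapshot one discussed later in this post.

```python
from urllib import request

def volume_request(url: str, microversion: str) -> request.Request:
    """Build a block-storage API request pinned to a given microversion."""
    return request.Request(
        url,
        headers={"OpenStack-API-Version": f"volume {microversion}"},
    )

# Pin to microversion 3.40 against a placeholder endpoint.
req = volume_request("https://volume.example.com/v3/volumes", "3.40")
# urllib normalizes stored header names to capitalized form.
print(req.get_header("Openstack-api-version"))  # volume 3.40
```

The OS_VOLUME_API_VERSION variable mentioned above does the same pinning for the cinder CLI, so you rarely need to set the header by hand.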

New features

Revert volume to snapshot

This feature allows you to revert a volume to the state it was in at the moment you created its latest snapshot, without the need to create a new volume. To revert a volume, it has to be detached, and the revert is possible only from the latest snapshot taken.

The volume restoration will work even if the volume has been extended since the last snapshot.

The cinder API microversion to use for this feature is 3.40.

Full OpenStack specifications for this feature

Usage example:

cinder revert-to-snapshot <latest_snapshot> 
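The two preconditions above (volume detached, most recent snapshot only) can be sketched client-side. The records below are hypothetical stand-ins for what a real client would fetch from the Cinder API:

```python
from datetime import datetime

# Illustrative sketch of the revert preconditions; the data structures are
# hypothetical, not real API responses.
def revert_target(volume, snapshots):
    if volume["status"] != "available":
        raise ValueError("volume must be detached before a revert")
    own = [s for s in snapshots if s["volume_id"] == volume["id"]]
    if not own:
        raise ValueError("no snapshot to revert to")
    # Only the most recent snapshot is a valid revert target.
    return max(own, key=lambda s: s["created_at"])["id"]

volume = {"id": "vol-1", "status": "available"}  # "available" means detached
snapshots = [
    {"id": "snap-1", "volume_id": "vol-1", "created_at": datetime(2019, 1, 1)},
    {"id": "snap-2", "volume_id": "vol-1", "created_at": datetime(2019, 1, 15)},
]
print(revert_target(volume, snapshots))  # snap-2
```

In practice the server enforces these rules itself; the sketch just makes the two constraints explicit.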

Volume and snapshot groups

This new feature allows you to:

  • group volumes from the same application to simplify their management

  • take snapshots of multiple volumes from the same group at the same time to ensure data consistency

The Cinder API microversions for this functionality are 3.13, 3.14 and 3.15.

Full OpenStack documentation

Usage example:


To create a volume group you need a group type. To list available group types:

cinder group-type-list 

| ID                                   | Name                   | Description                                 |
| 56ce02a6-282e-444e-aded-619096303e36 | consistency_group_type | Default group type for consistent snapshots |

This group type has the extra spec: consistent_group_snapshot_enabled set to True.

Volume group creation:

cinder group-create 56ce02a6-282e-444e-aded-619096303e36 <volume-type> 

To add an existing volume to this group:

cinder group-update --add-volumes <volume_id> <group_id> 

Creation of a new volume in this group:

cinder create --group-id <group_id> --volume-type <volume-type> ... 

To see the consistency group a volume belongs to:

cinder show <volume-id> 

Snapshot creation of all the volumes belonging to the same group:

cinder group-snapshot-create <group_id> 

To see if a snapshot belongs to a group:

cinder snapshot-show <snapshot_id> 

Deletion of a snapshot group and all its snapshots:

cinder group-snapshot-delete <group_snapshot_id> 

Deletion of a volume group and all its volumes:

cinder group-delete --delete-volumes True <group_id> 
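To illustrate what “consistent” buys you, here is a toy model of a group snapshot in which every member shares a single capture point. This is illustration only; in Cinder the consistency guarantee is enforced server-side by the storage backend:

```python
import uuid
from datetime import datetime, timezone

# Toy model of a consistent group snapshot: all member snapshots share one
# group id and one capture time (illustration only, not Cinder internals).
def group_snapshot(volume_ids):
    taken_at = datetime.now(timezone.utc)  # one point in time for the whole group
    gid = str(uuid.uuid4())
    return [{"group_snapshot_id": gid, "volume_id": v, "taken_at": taken_at}
            for v in volume_ids]

snaps = group_snapshot(["vol-a", "vol-b", "vol-c"])
# Every member shares the same group id and the same capture time.
assert len({s["group_snapshot_id"] for s in snaps}) == 1
assert len({s["taken_at"] for s in snaps}) == 1
```

Snapshotting the volumes one by one instead would give each a slightly different timestamp, which is exactly the inconsistency the group feature avoids for multi-volume applications.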

The Cloudwatt support team remains at your disposal to answer all your questions: support@cloudwatt.com

by Nolwenn Cauchois at January 23, 2019 12:00 AM

January 22, 2019

OpenStack Superuser

Check out these open infrastructure project updates

If you’re interested in getting up to speed on what’s next for open infrastructure software, the project update videos from the recent Summit Berlin are available now.

In them you’ll hear from the project team leaders (PTLs) and core contributors about what they’ve accomplished, where they’re heading for future releases plus how you can get involved and influence the roadmap.

You can find the complete list of them on the video page. You can also get a complete overview of the projects (and how to get involved) on the OpenStack project navigator. For more on projects independent from the OSF (Airship, Kata Containers and Zuul) follow the links to those sites.

Some project updates that you won’t want to miss, in alphabetical order:


Airship

Airship is a collection of loosely coupled, interoperable open-source tools that are nautically themed.


Ironic

The bare metal provisioning service implements services and associated libraries to provide massively scalable, on-demand, self-service access to compute resources, including bare metal, virtual machines and containers.

Kata Containers

Kata Containers bridges the gap between traditional VM security and the lightweight benefits of traditional Linux containers.


Keystone

Keystone is an OpenStack service that provides API client authentication, service discovery and distributed multi-tenant authorization by implementing OpenStack’s Identity API. It supports LDAP, OAuth, OpenID Connect, SAML and SQL.


Heat

Heat orchestrates the infrastructure resources for a cloud application based on templates in the form of text files that can be treated like code. Heat provides both an OpenStack-native REST API and a CloudFormation-compatible Query API.



Murano

Murano enables application developers and cloud administrators to publish various cloud-ready applications in a browsable catalog.


Nova

Nova aims to implement services and associated libraries to provide massively scalable, on-demand, self-service access to compute resources, including bare metal, virtual machines and containers.


Octavia

Octavia is an open source, operator-scale load balancing solution designed to work with OpenStack.


Swift

Swift is a highly available, distributed, eventually consistent object/blob store. Organizations can use Swift to store lots of data efficiently, safely and cheaply. It’s built for scale and optimized for durability, availability and concurrency across the entire data set. Swift is ideal for storing unstructured data that can grow without bound.


Zuul

Zuul drives continuous integration, delivery and deployment systems with a focus on project gating and interrelated projects.


The post Check out these open infrastructure project updates appeared first on Superuser.

by Superuser at January 22, 2019 03:06 PM


Open edX and OpenStack for complex learning environments

This combination can close the skills gap by enabling IT professionals to acquire critical skills in complex distributed systems technology from any location.

by fghaas at January 22, 2019 08:01 AM

January 21, 2019

OpenStack Superuser

EBSCO: A quick case study in OpenStack and managed Kubernetes

If you’ve checked out an ebook or trawled through an online newspaper archive at a school or public library, it was probably thanks to EBSCO Information Services.

Used by about five million people worldwide, every day, this division of EBSCO Industries is one of the leading providers of resources for libraries including discovery, resource management, databases and ebooks. Which probably also makes it one of the largest tech companies you use but don’t know by name.

Powering these Information Services is a private cloud in transition. “We built a private cloud and we’re also embracing the public cloud as well. So we’re in a transition right now between a very well established and productive application framework that we built on our own managed data centers. To make that environment more productive, we’ve invested in private cloud technologies like OpenStack. We’re using Platform9, we’re using AVI software, load balancers, we have a lot of technology going there,” says CIO Doug Jenkins.

They’re currently one of Platform9’s largest deployments: more than 1,000 sockets and a workload that involves deploying Heat stacks hundreds of times a day. “They’ve stressed OpenStack and Neutron and Heat to levels we’d not seen before,” Bich Le, Platform9 chief architect, tells Superuser in an interview at KubeCon + CloudNativeCon North America, where the San Francisco-based startup announced five new managed Kubernetes customers.

A client of Platform9’s for about a year and a half, EBSCO is now expanding to Kubernetes, making a “big push” as part of its modernization and digital transformation initiative, Le says. As part of the push, they looked into Amazon EKS, but ultimately, because of their multitude of data centers, the mix of private data center and cloud and a mix of workloads, they gravitated to Platform9. Le says they are just getting started with Kubernetes. In the short term, they’re redesigning many of their applications. Some of the databases are still kept external, so they’re running stateless apps against those, but Le says that with time they’re considering migrating the stateful ones as well. They’re currently in the process of architecting a new stack for their next-generation applications. They have, however, already chosen some components, including Istio for the network mesh.

One thing will remain constant: They plan to continue running everything on OpenStack infrastructure-as-a-service, “not bare metal, but Kubernetes on OpenStack,” Le explains. “It’s a very natural choice to make…if you have a mature IaaS, it’s just so much more attractive. The IaaS gives you a lot of flexibility, control and great utilization over your hardware.”

Especially if you have a mix of VMs and containers it makes sense to run them side-by-side, he adds. “If something goes wrong with a Kubernetes node, you can just kill it. It’s much more complicated when you’re dealing with bare metal.”



The Linux Foundation provided travel and accommodation to KubeCon.

Cover photo // CC BY NC

The post EBSCO: A quick case study in OpenStack and managed Kubernetes appeared first on Superuser.

by Nicole Martinelli at January 21, 2019 05:00 PM

January 19, 2019

OpenStack in Production

OpenStack In Production - moving to a new home

During 2011 and 2012, CERN IT took a new approach to managing the infrastructure for analysing the data from the LHC and other experiments. The Agile Infrastructure project was formed, covering service provisioning, configuration management and monitoring, by adopting commonly used open source solutions with active communities to replace the in-house tool suite.

In 2019, the CERN cloud managed infrastructure has grown by a factor of 10 compared to the resources in 2013. This has been achieved in collaboration with the many open source communities we have worked with over the past years, including
  • OpenStack
  • RDO
  • CentOS
  • Puppet
  • Foreman
  • Elastic
  • Grafana
  • Collectd
  • Kubernetes
  • Ceph
These experiences have been shared in over 40 blog posts and more than 100 different talks at open source events during this time, from small user groups to large international conferences.

The OpenStack-In-Production blog has been covering these experiences, with a primary focus on the development of the CERN cloud service. However, the challenges of the open source world now cover many more projects, so it is time to move to a new blog covering not only work on OpenStack but also other communities and the approaches to integrating these projects into the CERN production infrastructure.

Thus, this blog will be moving to its new home at https://techblog.web.cern.ch/techblog/, incorporating our experiences with these other technologies. For those who would like to follow a specific subset of our activities, there is also taxonomy-based content: new OpenStack articles at https://techblog.web.cern.ch/techblog/tags/openstack/ and the legacy blog content at https://techblog.web.cern.ch/techblog/tags/openinfra/.

One of the most significant benefits we've found from sharing is receiving comments from other community members. These often help guide us in further investigations when solving difficult problems and in identifying common needs to work on together upstream.

We look forward to hearing from you all on the techblog website.


by Tim Bell (noreply@blogger.com) at January 19, 2019 09:31 AM

January 18, 2019

OpenStack Superuser

Tips for the Open Infrastructure Summit

There are only three months until the first Open Infrastructure Summit—April 29-May 1 in Denver, Colorado—and only three days to get your sessions submitted. Typically, just 25 percent of submissions are chosen for the conference, but don’t worry, the Summit Programming Committees are here to help.

Superuser is talking to Programming Committee members of the Tracks for the Denver Summit to help shed light on what topics, takeaways and projects they’re hoping to see in sessions submitted to their Track.

The deadline for the call for presentations for the first Open Infrastructure Summit is this week: Wednesday, January 23 at 11:59 p.m. PT.

So far, we’ve covered the Open Development and Public Cloud tracks. In this article, you can find content tips for the Artificial Intelligence (AI) /Machine Learning / High-Performance Computing (HPC), Hands-On Workshops, Private and Hybrid Cloud, Security and Telecom and NFV Tracks.

AI / machine learning / HPC

Programming Committee Member: Armstrong Foundjem, research associate, MCIS Research Laboratory, École Polytechnique de Montréal.

I look forward to seeing current use cases that are solving real-world problems using AI and machine learning, with best practices applied to various industrial and academic settings. These might include, but aren’t limited to, the use of deep learning (e.g., self-driving cars), natural language processing (NLP), software engineering/development to predict defects, release engineering (DevOps)/automation, CI/CD, robotics, health care and financial models for predictive analytics. Also, I’d like to see work on performance analysis, scientific research, data mining and visualization techniques, IoT and computation.

Attendees can expect to take home applications of the most recent work in the industry, including cutting-edge research in AI, HPC, scientific research, data mining and visualization and IoT. I’d like attendees of this track to go home satisfied that they are in step with current advances in AI, with a sense of where we are today.

Hands-on workshops

Programming Committee Member: Stefano Canepa

A hands-on workshop needs to allow attendees to work on something they are not experts in: something they recognize as valuable for their cloud but do not feel comfortable implementing on their own. Attendees will be presented with step-by-step instructions that guide them to the expected result and leave them with an environment ready to operate. The presenter has to guide the attendees using slides and, ideally with the help of other subject experts, help every single attendee. Issues and solutions have to be shared with the whole audience.

Summit attendees should expect to take away practical knowledge they can apply to their day job immediately after the Summit.

Private and hybrid cloud

Programming Committee Member: Yih Leong Sun, OpenStack User Committee member

Topics for this Track include private and hybrid cloud deployment, success stories and use cases from different industry segments, including financial services, retail and other non-IT industries; hybrid cloud pain points and potential solutions; and best practices, including what type of workload is best proven for private vs. hybrid deployment.

Summit attendees should walk away understanding the private and hybrid cloud landscape by learning from other cloud practitioners.


Security

Programming Committee Members: Gage Hugo, member of technical staff, AT&T, and Josephine Seifert, innovation assistant, Cloud&Heat.

Many security issues plague modern infrastructure, and it’s the ongoing effort of those involved in security to help triage and fix them. The sessions will hopefully shed light on existing issues, explain how they were overcome and share what knowledge was gleaned from the process. Anything regarding the current security issues present in today’s environment would also be most welcome, as would presentations or workshops that describe best practices based on real-world experiences and currently available mechanisms in open infrastructure.

Attendees will become more aware of existing security-enhancing mechanisms as well as of security issues in the community: from tackling current problems and improving existing areas to how to contribute if they wish to do so.

Telecom and NFV

Programming Committee Member: Nate Johnston, principal software developer, OpenStack at Red Hat

I’d love to see talks describing the implementation challenges for NFV solutions and proposals on what’s next in the evolution of NFV, or ways to decrease time-to-market for NFV.  For some companies understanding usage and controlling spending on clouds is difficult; how can utilization metrics be integrated into financial models to create holistic showback/chargeback capability?  How can a telco with a traditional “zoned” model for network security integrate hybrid cloud into the model, or extend that model onto the cloud(s)?  How are micro-services and the “observability” paradigm driving the evolution of how service-level agreements are handled?

Telcos are driving some of the major changes in networking because, for them, the network is the product; making it able to serve more applications and uses helps them differentiate from their competitors and offer unique value to their customers.  But one way or another, networks are a part of every service and application these days!  So I hope attendees gain knowledge and inspiration from the evolution in the technology and practices of operating telco networks and clouds, so they can get more value out of their own networks and clouds.

Get your talk ideas in before Wednesday, January 23 at 11:59 p.m. PT.

The post Tips for the Open Infrastructure Summit appeared first on Superuser.

by Allison Price at January 18, 2019 05:06 PM

Chris Dent

Placement Update 19-02

Hi! It's a placement update! The main excitement this week is we had a meeting to check in on the state of extraction and figure out the areas that need the most attention. More on that in the extraction section within.

Most Important

Work to complete and review changes to deployment to support extracted placement is the main thing that matters.

What's Changed

  • Placement is now able to publish release notes.

  • Placement is running python 3.7 unit tests in the gate, but not functional (yet).

  • We had that meeting and Matt made some notes.



Last week was spec freeze, so I'll not list all the specs here; for reference, there were 16 specs listed last week, and all 16 of them remain neither merged nor abandoned.

Main Themes

The reshaper work was restarted after discussion at the meeting surfaced its stalled nature. The libvirt side of things is due some refactoring while the xenapi side is waiting for a new owner to come up to speed. Gibi has proposed a related functional test. All of that at:

Also making use of nested is this spectacular stack of code at bandwidth-resource-provider:

Eric's in the process of doing lots of cleanups to how often the ProviderTree in the resource tracker is checked against placement, and a variety of other "let's make this more right" changes in the same neighborhood:

That stuff is very close to ready and will make lots of people happy when it merges. One of the main areas of concern is making sure it doesn't break things for Ironic.


Extraction

As noted above, there was a meeting which resulted in Matt's notes, an updated extraction etherpad, and an improved understanding of where things stand.

The critical work to ensure a healthy extraction is with getting deployment tools working. Here are some of the links to that work:

We also worked out that getting the online database migrations happening on the placement side of the world would help:

Documentation is mostly in-progress, but needs some review from packagers. A change to openstack-manuals depends on the initial placement install docs.

There is a patch to delete placement from nova on which we've put an administrative -2 until it is safe to do the delete.


There are 13 open changes in placement itself. Several of those are easy win cleanups.

Of those placement changes, the online-migration-related ones are the most important.

Outside of placement (I've decided to trim this list to just stuff that's seen a commit in the last two months):


Because I wanted to see what it might look like, I made a toy VM scheduler and placer, using etcd and placement. Then I wrote a blog post. I wish there was more time for this kind of educational and exploratory playing.

by Chris Dent at January 18, 2019 03:43 PM


One Man’s Crush on Technology: OpenKilda

Aptira Crush on Technology: OpenKilda

For good or bad, technologists can be pretty passionate people. I mean, how many other professionals would happily describe an inanimate object, or worse, a virtual concept like software, as sexy? If you were to ask, the reasons for their love of one piece of technology or another would be as personal as, well, anything you might be passionate about.

For me, it’s the elegance and intelligence of the solution that excites me. Perhaps call it a professional acknowledgment for pragmatic and effective solutions. An appreciation for solutions that have been well thought out and provide opportunities for scale, growth and enhancement. 

It was late in the spring of ‘17 that I first became aware of OpenKilda.  As part of an availability and performance assessment, I had spent some time thinking about what a unique web-scale SDN controller should look like. How should it operate? What were the basic, functional building blocks that were needed? That was when the slides for OpenKilda crossed my desk.

The architecture slides were what had me enamoured: built from the ground up using mature, established components to support the challenges of transport and geographically diverse networks. Components that, of themselves, were known for their intelligent design. I’d like to think that if I were going to design an SD-WAN controller, it would look like this.

OpenKilda set itself apart in the SDN controller market. It wasn’t trying to be a general SDN controller, shoehorned into WAN and transport applications. It was a true WAN and transport SDN solution from birth.

Still a little immature, was OpenKilda that diamond in the rough we were all looking for? To my eyes the solution was certainly elegant: Lean yet powerful. Simple, yet sophisticated.  But ultimately, there was one thing I could see that had me very excited: Opportunity. 

The value of a product or solution is not in what it does, but the value it can create for others. OpenKilda’s make-up of mature, open sourced components like Kafka, Storm and Elastic, is what presented that value.  

Access to established communities, plug & play extensions and a wider pool of available talent, meant OpenKilda was potentially more extensible than the others. Across those components, a diverse, already established ecosystem of vendors, service providers and integrators, meant there were potentially more invested interests in its success.  

What’s more, John Vestal and team (OpenKilda’s creators) were eager to share OpenKilda with the world. Hopefully building on, and building out, what they had already started.  Yes, it was fair to say I was excited.  Some birds are simply never meant to be caged. 

 …It would be nearly a year before I could broker a more intimate introduction. A short but deep exploration under the covers as we considered what lay on the road ahead. Telecommunications, Media, Finance; The opportunities are potentially wide and expansive.  Will OpenKilda be the key to unlocking them?  I think it just could be…  

Remove the complexity of networking at scale.
Learn more about our SDN & NFV solutions.

Learn More

The post One Man’s Crush on Technology: OpenKilda appeared first on Aptira.

by Craig Armour at January 18, 2019 04:35 AM

January 17, 2019

OpenStack Superuser

Tips for talks on the Public Cloud Track for the Denver Summit

Have an open source public cloud story? It’s time to talk about it. The call for presentations for the first Open Infrastructure Summit is open until next Wednesday, January 23.

Typically, just 25 percent of submissions are chosen for the conference. In light of that fierce competition, Superuser is talking to Programming Committee members of the Tracks for the Denver Summit to help shed light on what topics, takeaways and projects they’re hoping to see in sessions submitted to their Track.

Here we’re featuring the Public Cloud Track with tips from Tobias Rydberg, chair of the OpenStack Public Cloud Working Group. He talked to Superuser about some of the content that should be submitted to this track as well as what attendees can expect. Want more help on your submission? Rydberg offered to help over Twitter direct message or in IRC (#open-infra-summit-cfp) before next week’s deadline.

Public Cloud Track topics

Architecture and hardware, economics, cloud portability, features and needs, federation, operations and upgrades, multi-tenancy, networking, performance, scale, security and compliance, service-level agreements (SLAs), storage, open-source platforms, tools and SDKs, UI/UX and user experience.

What content would you like to see submitted to this Track?

We’re looking for a broad variety of presentations in the Public Cloud Track, everything from the business perspective to technical talks. Summit attendees would love to hear more from operators who have delivered OpenStack as a public cloud: how you as a public cloud operator handle your daily business, what challenges you have and how you solve them. It’s also helpful to share what provisioning tools you’re using and how you manage upgrades.

What will Summit attendees take away from these sessions?

Attending the Public Cloud track at the OpenInfra Summit in Denver will give attendees a better understanding of the benefits and challenges of using open source in the public cloud sector, both as an operator and as an end user. We hope that attendees will leave with more knowledge and ideas how to evolve and improve their current operations and business.

Who’s the target audience for this track?

The potential audience of this track is pretty broad – it could be a developer wanting to get a better understanding of the challenges with OpenStack and open source in a public cloud environment, operators looking for ideas and solutions for their businesses, or potential end users interested in seeing the benefits of using open-source solutions.

Cover photo // CC BY NC

The post Tips for talks on the Public Cloud Track for the Denver Summit appeared first on Superuser.

by Allison Price at January 17, 2019 03:00 PM

Trinh Nguyen

Viet OpenStack first webinar 5 Nov. 2018

Yesterday, 5 November 2018, at 21:00 UTC+7, about 25 Vietnamese developers attended the very first webinar of the Vietnam OpenStack User Group [1]. This is part of a series of Upstream Contribution Training based on the OpenStack Upstream Institute [2]. The topic is "How to contribute to OpenStack". Our target is to guide new and potential developers to understand the development process of OpenStack and how they are governed.

The webinar was planned for Google Hangouts, but with the free version only a maximum of 10 people can join a video call. So we decided to use Zoom [3]. But because Zoom limits free-account meetings to 45 minutes, we split the webinar into two sessions. Thanks to the proactive support of the Vietnam OpenStack User Group administrators, the webinar went very well. Whatever works.

I uploaded the training content to GitHub [4] and will update it based on the attendees' feedback. A couple of pieces of feedback I got after the webinar:
  • Should have exercises
  • Find a more stable webinar tool
  • The training should happen earlier
  • The topics should be simpler for new contributors to follow
You can find the recorded videos of the webinar here:

Session 1: https://youtu.be/k3U7MjBNt-k

Session 2: https://youtu.be/nIkmIgTvfd4

We continue to gather feedback from the attendees and plan for the second webinar next month.


[1] https://www.meetup.com/VietOpenStack/events/hpcglqyxpbhb/
[2] https://docs.openstack.org/upstream-training/upstream-training-content.html
[3] https://zoom.us
[4] https://github.com/dangtrinhnt/vietstack-webinars

by Trinh Nguyen (noreply@blogger.com) at January 17, 2019 02:22 AM

Viet OpenStack (now renamed Viet OpenInfra) second webinar 10 Dec. 2018

Yes, we did it: the second OpenStack Upstream Contribution webinar. This time we focused on debugging tips and tricks for first-time developers. We also had time to introduce some great tools such as Zuul CI [1] (and how to use the Zuul status page [2] to keep track of running tasks), the ARA report [3] and tox [4]. During the session, attendees shared some great experience debugging OpenStack projects (e.g., how to read logs, use an IDE, etc.), and a lot of good questions were raised, such as how to use ipdb [7] to debug running services (using ipdb to debug is quite hardcore, I think :)). You can check out this GitHub link [5] for chat logs and other materials.

I want to say thanks to all the people at the Jitsi open source project [6], which provided a great conferencing platform for us. We were able to hold video discussions smoothly, without any limitation or interruption, and the experience was great.

Watch the recorded video here: https://youtu.be/rI2zPQYtX-g


[1] https://zuul-ci.org/
[2] http://zuul.openstack.org/status
[3] http://logs.openstack.org/78/570078/10/check/neutron-grenade-multinode/303521d/ara-report/reports/
[4] https://tox.readthedocs.io/en/latest/
[5] https://github.com/dangtrinhnt/vietstack-webinars/tree/master/second
[6] https://jitsi.org/
[7] https://pypi.org/project/ipdb/

by Trinh Nguyen (noreply@blogger.com) at January 17, 2019 02:22 AM

January 16, 2019

OpenStack Superuser

Inside open infrastructure: The latest from the OpenStack Foundation

Welcome to the latest edition of the OpenStack Foundation Open Infrastructure Newsletter, a digest of the latest developments and activities across open infrastructure projects, events and users. Sign up to receive the newsletter and email community@openstack.org to contribute.

Spotlight on… Zuul

Zuul, a pilot project supported by the OpenStack Foundation (OSF), is a suite of free/libre open source software that drives continuous integration, delivery and deployment (CI/CD) with a focus on project gating and coordinating changes across interrelated projects.

Zuul tests cross-project changes in parallel so users can easily validate changes to multiple systems together before landing a single patch.

Since 2012, Zuul has been proven at scale as a critical part of the OpenStack development process. In early 2018, version 3 was released and Zuul became an independently-governed effort, distinct from the OpenStack project. The third major release also marked a significant rewrite to improve general re-usability of Zuul outside of the OpenStack project and has seen adoption in organizations like BMW, leboncoin, GoDaddy and OpenLab. Many of Zuul’s users are also contributors, with development coming from the likes of Red Hat, BMW, GoDaddy, Huawei, GitHub and the OSF.

Zuul now supports code management through connection drivers for Gerrit, GitHub and generic Git remote repositories, with work underway to add Pagure. Since Zuul uses Ansible as its job definition language, it can run builds on any operating system that Ansible can manage. Zuul’s resource pool manager, Nodepool, can manage workloads on resources dynamically provisioned through APIs for OpenStack and Kubernetes, or on separately maintained “static” servers, and work is underway to add Red Hat OpenShift, Amazon Elastic Compute Cloud (AWS EC2), Microsoft Azure and VMware vSphere support to that list. Major Zuul design discussions currently underway include support for using multiple concurrent Ansible versions to run jobs from its executor, and distributing the resource scheduler to eliminate it as a single point of failure.
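Because the job content is Ansible and the wiring is YAML, a minimal Zuul job definition is compact. The fragment below is purely illustrative (the job and playbook names are made up, not from any real deployment):

```yaml
# Illustrative Zuul configuration: one job and the project stanza that runs it.
- job:
    name: example-unit-tests
    parent: base                    # inherit node setup and log publishing
    run: playbooks/unit-tests.yaml  # an Ansible playbook does the actual work

- project:
    check:                          # pipeline triggered on each proposed change
      jobs:
        - example-unit-tests
```

The gating behavior the paragraph describes comes from pipelines like check and gate: a change only merges once the jobs attached to it succeed, including jobs from co-dependent changes tested together.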

If you’re interested in trying out Zuul for yourself, check out these resources:

OpenStack Foundation news

  • The Call for Presentations is currently open for the Open Infrastructure Summit that is being held April 29-May 1 in Denver, Colorado. Check out the updated Summit tracks and submit your session by next week’s deadline: Wednesday, January 23 at 11:59 p.m. PT.
  • The Travel Support Program for the Denver Summit and PTG is also open. Submit your application before February 27 at 11:59 p.m. PT.
  • All OpenStack Foundation members received a link to vote in the Board of Directors Individual election and bylaws amendments this week. Check your email and cast your vote by this Friday, January 18, 2019 at 11:00 a.m. CST/1700 UTC.
  • The Diversity & Inclusion Working Group is conducting an anonymous survey to better understand the diversity and makeup of the community. Participation is appreciated so we can better understand and serve the community. Share any questions with working group chair, Amy Marrich (spotz on IRC).

OpenStack Foundation project news


  • The development of the upcoming OpenStack release reached the Stein-2 milestone last week. We now know what deliverables to expect in the final Stein release, planned for April 10.
  • It’s been a month since the OpenStack community switched back to using a single list for discussion, forming a single community of contributors. Please read Jeremy Stanley’s report to learn more.


  • The Airship Team continues to work towards the 1.0 release and invites comments and feedback. A developer and user feedback session to help new users become more engaged with the 1.0 release is in the works. Details to come on the Airship mailing list.
  • A specification for leveraging Ironic as a bare metal driver has merged. Catch up on the full discussion by watching the recording of the January 10 Airship Design Meeting. Want to get involved with Airship design and learn more? The team meets every Thursday at 11:00 a.m. CT for an open design meeting.

Kata Containers

  • Over the past several weeks the Kata team has been busy working on the 1.5 release, scheduled to land January 23. It will offer support for the containerd shim v2 API, Firecracker and live upgrades. The 1.5 release candidate is available now for preview.
  • The Kata community has formed a new Marketing Content special interest group that will begin monthly meetings on January 16. Details are available in the #kata-marketing channel in the Slack group.


  • The first StarlingX Contributor Meetup is underway in Chandler, Arizona, covering both technical and community-related topics.
  • The community set up an email report of StarlingX builds from CENGN so that any issue can be corrected immediately.

Questions / feedback / contribute

This newsletter is edited by the OpenStack Foundation staff to highlight open infrastructure communities. We want to hear from you!
If you have feedback, news or stories that you want to share, reach us at community@openstack.org. To receive the newsletter, sign up here.

The post Inside open infrastructure: The latest from the OpenStack Foundation appeared first on Superuser.

by OpenStack Foundation at January 16, 2019 08:03 PM

Ben Nemec

OpenStack Virtual Baremetal Imported to OpenStack Infra

As foretold in a previous post, OVB has been imported to OpenStack Infra. The repo can now be found at https://git.openstack.org/cgit/openstack/openstack-virtual-baremetal. All future development will happen there so you should update any existing references you may have. In addition, changes will now be proposed via Gerrit instead of Github pull requests. \o/

For the moment, the core reviewer list is largely limited to the same people who had commit access to the Github repo. The TripleO PTL and one other have been added, but that will likely continue to change over time. The full list can be found here.

Because of the still-limited core list, not much about the approval process will change as a result of this import. I will continue to review and single-approve patches just like I did on Github. However, there are plans in the works to add CI gating to the repo (another benefit of the import) and once that has happened we will most likely open up the core reviewer list to a wider group.

Questions and comments via the usual channels.

by bnemec at January 16, 2019 06:18 PM

Trinh Nguyen

VietOpenInfra third webinar - 14th Jan. 2019

Yay, finally after the new year holiday we can organize the third upstream training webinar for OpenStack developers in Vietnam [1]. This time we invited Kendall Nelson [2], Upstream Developer Advocate for the OpenStack Foundation, to teach us about the StoryBoard [3] and Launchpad [4] task management tools (she's also one of the core developers of the StoryBoard project).

We first tried the Jitsi conferencing platform [5], but for some reason we could not communicate with Kendall (in the US). So we switched back to Zoom [6] and everything went well after that. About 12 people attended the webinar, and we had a good conversation with Kendall about some aspects of StoryBoard, which is quite new to some users. You can check out the conversation (chat log) here [7]. Below is the recorded video:

We would like to thank Kendall Nelson for kindly agreeing to teach us this time, even though the schedule was pretty early for her (6 a.m. her time). We learned a lot from her presentation, and some attendees even want to contribute to the StoryBoard project (here is some low-hanging fruit to work on [9]).

P/S: You can follow this link [8] for the previous webinars.


[1] https://www.meetup.com/VietOpenStack/events/257860457
[2] https://twitter.com/knelson92
[3] https://storyboard.openstack.org/
[4] https://launchpad.net/openstack
[5] https://www.meetup.com/VietOpenStack/events/257860457
[6] https://zoom.us
[7] https://github.com/dangtrinhnt/vietstack-webinars/tree/master/third
[8] https://github.com/dangtrinhnt/vietstack-webinars
[9] https://storyboard.openstack.org/#!/board/115

by Trinh Nguyen (noreply@blogger.com) at January 16, 2019 01:53 AM

January 15, 2019

OpenStack Superuser

Going by the Ansible playbook: How eBay Classifieds survived Spectre and Meltdown

Just about a year ago, the security community got a nasty wake-up call: Spectre and Meltdown.

Considered “pretty catastrophic” by experts, they were a series of vulnerabilities discovered by various security researchers around performance optimization techniques built in modern CPUs. Those optimizations (involving superscalar capabilities, out-of-order execution, and speculative branch prediction) essentially created a side-channel that could be exploited to deduce the content of computer memory that should normally not be accessible.

For e-commerce giant eBay, keeping the nightmares away was a particularly complex project. The eBay Classifieds Group has a private cloud distributed in two geographical regions (with future plans in the works for a third), around 1,000 hypervisors and a capacity of 80,000 cores. The team needed to patch hypervisors on four availability zones for each region with the latest kernel, KVM version and BIOS updates. During these updates the zones were unavailable and all the instances restarted automatically.

Bruno Bompastor and Adrian Joian, from eBay’s cloud reliability team, shared how shoring up their system against these vulnerabilities stretched from January until July. One of the takeaways? First, that Ansible is a great tool for infrastructure automation. “We decided to use Ansible as our main tool and heavily relied on Ansible roles as a way to organize tasks,” Bompastor says. As an example, the team has OpenStack roles, hardware roles, update roles and — the most important one for this project — the checker role, to scan for these vulnerabilities. They ran a checker on the host (an open-source script that basically tests the variants they wanted to check). Available on GitHub, “it’s a very nice script that covers everything like this…”

They also gave an inside look at all the work the team had to perform to shut down, update and successfully boot a fully patched infrastructure without data loss. They discussed how the team managed the SDN (Juniper Contrail) and LBaaS (Avi Networks) when restarting this massive number of cores.

Catch the whole case study on YouTube or the slides here.

The post Going by the Ansible playbook: How eBay Classifieds survived Spectre and Meltdown appeared first on Superuser.

by Superuser at January 15, 2019 05:06 PM

January 14, 2019

SUSE Conversations

Looking for a reason to attend SUSECON? I’ve got 5!

In today’s business environment, every company is a digital company. IT infrastructure needs to not only keep pace but also move fast enough to accommodate strategic business and technology initiatives such as cloud, mobile and the Internet of Things. At SUSECON 2019, see how our open, open source approach helps our customers and partners transform […]

The post Looking for a reason to attend SUSECON? I’ve got 5! appeared first on SUSE Communities.

by Kent Wimmer at January 14, 2019 08:58 PM

OpenStack Superuser

TripleO networks: From simplest to not-so-simple

TripleO (OpenStack On OpenStack) is a program aimed at installing, upgrading and operating OpenStack clouds using OpenStack’s own cloud facilities as the foundations – building on Nova, Neutron and Heat to automate fleet management at datacenter scale.

If you read the TripleO setup for network isolation, it lists eight distinct networks. Why does TripleO need so many networks? Let’s take it from the ground up.

WiFi to the workstation


I run Red Hat OpenStack Platform (OSP) Director, which is the productized version of TripleO.  Everything I say here should apply equally well to the upstream and downstream variants.

My setup has OSP Director running in a virtual machine (VM). To get that virtual machine set up requires network connectivity. I perform this via wireless, as I move around the house with my laptop. The workstation has a built-in wireless card.

Let’s start here: Director runs inside a virtual machine on the workstation.  It has complete access to the network interface card (NIC) via macvtap.  This NIC is attached to a Cisco Catalyst switch.  A wired cable to my laptop is also attached to the switch.  This allows me to set up and test the first stage of network connectivity:  SSH access to the virtual machine running in the workstation.

Provisioning network

The blue network here is the provisioning network.  This reflects two of the networks from the TripleO document:

  • IPMI* (IPMI System controller, iLO, DRAC)
  • Provisioning* (Undercloud control plane for deployment and management)

These two distinct roles can be served by the same network in my setup, and, in fact, they must be.  Why?  Because my Dell servers have a NIC that acts as both the IPMI endpoint and is the only NIC that supports PXE.  Thus, unless I wanted to do some serious VLAN wizardry, and get the NIC to switch both (tough to debug during the setup stage), I’m better off with them both using untagged VLAN traffic.  This way, each server is allocated two static IPv4 addresses, one used for IPMI and one that will be assigned during the hardware provisioning.

Apologies for the acronym soup.  It bothers me, too.

Another way to think about the set of networks that you need is via DHCP traffic.  Since the IPMI cards are statically assigned their IP addresses, they don’t need a DHCP server.  But the hardware’s operating system will get its IP address from DHCP.  So it’s OK if these two functions share a network.

This doesn’t scale very well.  IPMI and iDRAC can both support DHCP and that would be the better way to go in the future, but it’s beyond the scope of what I’m willing to mess with in my lab.

Deploying the overcloud

In order to deploy the overcloud, the director machine needs to perform two classes of network calls:

  1. SSH calls to the bare metal OS to launch the services, almost all of which are containers.  This is on the blue network above.
  2. HTTPS calls to the services running in those containers.  These services also need to be able to talk to each other.  This is on the yellow internal API network above.  (I didn’t color code “yellow” as you can’t read it.)

Internal (not) versus external

You might notice that my diagram has an additional network; the external API network is shown in red.

Provisioning and calling services are two very different use cases.  The most common API call in OpenStack is POST https://identity/v3/auth/tokens.  This call is made prior to any other call.  The second most common is the call to validate a token.  The create token call needs to be accessible from everywhere that OpenStack is used; the validate token call does not.  But if the API server only listens on the same network that’s used for provisioning, that means the network is wide open: people who should only be able to access the OpenStack APIs have the capability to send network attacks against the IPMI cards.
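For a concrete picture of that create-token call, here is a minimal sketch of the JSON body for an Identity v3 password-scoped authentication request; the user, project and password values are placeholders:

```python
import json

# Keystone v3 password-scoped authentication request body.
# Sent as: POST https://identity/v3/auth/tokens
# (all names and credentials below are placeholders)
auth_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",
                    "domain": {"name": "Default"},
                    "password": "secret",
                }
            },
        },
        "scope": {
            "project": {
                "name": "demo-project",
                "domain": {"name": "Default"},
            }
        },
    }
}

print(json.dumps(auth_request, indent=2))
```

The token comes back in the `X-Subject-Token` response header and is then passed on every subsequent API call, which is why this endpoint must be reachable from everywhere OpenStack is used.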

To split this traffic, either the network APIs need to listen on both networks or the provisioning needs to happen on the external API network. Either way, both networks are going to be set up when the overcloud is deployed.

Thus, the red server represents the API servers that are running on the controller and the yellow server represents the internal agents that are running on the compute node.

Some Keystone history

When a user performs an action in the OpenStack system, they make an API call.  This request is processed by the web server running on the appropriate controller host.  There’s no difference between a Nova server requesting a token and a project member requesting a token, but these were historically seen as separate use cases and were put on separate network ports.  The internal traffic was on port 35357 and the project member traffic was on port 5000.

It turns out that running on two different ports of the same IP address doesn’t solve the problem people were trying to fix.  They wanted to limit API access via network, not by port.  Therefore, there really was no need for two different ports, but rather two different IP addresses.

This distinction still shows up in the Keystone service catalog, where endpoints are classified as external or internal.

Deploying and using a virtual machine

Now our diagram has become a little more complicated.  Let’s start with the newly added red laptop, attached to the external API network.  This system is used by our project member and is used to create the new virtual machine via the compute create_server API call.

Here’s the order of how it works:

  1. The API call comes from the outside, travels over the red external API network to the Nova server (shown in red)
  2. Nova posts messages to the queue, which are eventually picked up and processed by the compute agent (shown in yellow).
  3. The compute agent talks back to the other API servers (also shown in red) to fetch images, create network ports and connect to storage volumes.
  4. The new VM (shown in green) is created and connects via an internal, non-routable IP address to the metadata server to fetch configuration data.
  5. The new VM is connected to the provider network (also shown in green.)

At this point, the VM is up and running.  If an end user wants to connect to it they can do so.  Obviously, the provider network doesn’t run all the way through the router to the end user’s system, but this path is the open-for-business network pathway.

Note that this is an instance of a provider network as Assaf Muller defined in his post.

Tenant networks

Let’s say you’re not using a provider network.  How does that change the set up?  First, let’s re-label the green network as the external network.  Notice that the virtual machines don’t connect to it now.  Instead, they connect via the new purple networks.

Note that the purple networks connect to the external network in the network controller node, shown in purple on the bottom server.  This service plays the role of a router, converting the internal traffic on the tenant network to the external traffic.  This is where the floating IPs terminate and are mapped to an address on the internal network.

Wrap up

The TripleO network story has evolved to support a robust configuration that splits traffic into component segments.  The diagrams above attempt to pass along my understanding of how they work and why.

I’ve left off some of the story, as I don’t show the separate networks that can be used for storage.  I’ve collapsed the controllers and agents into a simple block to avoid confusing detail: my goal is accuracy, but here it sacrifices precision.  It also only shows a simple rack configuration, much like the one here in my office.  The concepts presented should allow you to understand how it would scale up to a larger deployment.  I expect to talk about that in the future as well.

I’ll be sure to update  this article with feedback. Please let me know what I got wrong and what I can state more clearly.

About the author

Adam Young is a cloud solutions architect at Red Hat, responsible for helping people develop their cloud strategies. He has been a long time core developer on Keystone, the authentication and authorization service for OpenStack. He’s also worked on various systems management tools, including the identity management component of Red Hat Enterprise Linux based on the FreeIPA technology. A 20-year industry veteran,  Young has contributed to multiple projects, products and solutions from Java based eCommerce web sites to Kernel modifications for Beowulf clustering. This post first appeared on his blog.

Cover photo // CC BY NC

The post TripleO networks: From simplest to not-so-simple appeared first on Superuser.

by Adam Young at January 14, 2019 03:01 PM

January 13, 2019

Chris Dent

etcd + placement + virt-install → compute

I've had a few persistent complaints in my four and a half years of working on OpenStack, but two that stand out are:

  • The use of RPC—with large complicated objects being passed around on a message bus—to make things happen. It's fragile, noisy, over-complicated, hard to manage, hard to debug, easy to get wrong, and leads to workarounds ("raise the timeout") that don't fix the core problem.

  • It's hard, because of the many and diverse things to do in such a large community, to spend adequate time reflecting, learning how things work, and learning new stuff.

So I decided to try a little project to address both and talk about it before it is anything worth bragging about. I reasoned that if I use the placement service to manage resources and etcd to share state, I could model a scheduler talking to one or more compute nodes. Not to do something so huge as replace nova (which has so much complexity because it does many complex things), but to explore the problem space.

Most of the initial work involved getting some simple etcd clients speaking to etcd and placement and mocking out the creation of fake VMs. After that I dropped the work because of the many and diverse things to do, leaving a note to myself to investigate using virt-install.

It took me nine months to come back to it, but over the course of a couple of hours on two or three days I had it booting VMs on multiple compute nodes.

In my little environment a compute node starts up, learns about its environment, and creates a resource provider and associated inventories representing the virtual cpus, disk, and memory it has available. It then sits in a loop, watching an etcd key associated with itself.

Beside the compute process there's a faked out metadata server running.

A scheduler takes a resource request and asks placement for a list of allocation candidates. The first candidate is selected, an allocation is made for the resources, and the allocation and an image URL are written to the etcd key that the compute node is watching.

The compute sees the change on the watched key, fetches the image, resizes it to the allocated disk size, then boots it with virt-install using the allocated vcpus and memory. When the VM is up another key is set in etcd containing the IP of the created instance.

If the metadata server has been configured with an ssh public key, and the booted image looks for the metadata server, you can ssh into the booted VM using that key. For now it is only from the same host as the compute-node. Real networking is left as an exercise to the reader.
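The scheduler/compute handshake described above can be modeled in a few lines of Python. Here a small in-memory class stands in for etcd's put/watch primitives, and all keys, names and values are illustrative rather than taken from the actual etcd-compute code:

```python
import threading

class FakeEtcd:
    """In-memory stand-in for etcd's put/watch primitives."""
    def __init__(self):
        self.data = {}
        self.cond = threading.Condition()

    def put(self, key, value):
        with self.cond:
            self.data[key] = value
            self.cond.notify_all()

    def watch(self, key):
        # Block until the key appears, then return its value.
        with self.cond:
            while key not in self.data:
                self.cond.wait()
            return self.data[key]

etcd = FakeEtcd()

def compute_node(name):
    # The compute node sits in a loop watching a key associated with
    # itself (a single iteration shown here).
    work = etcd.watch(f"/compute/{name}/work")
    # "Boot" the VM described by the allocation and image URL, then
    # publish the instance IP so clients can find it.
    etcd.put(f"/instance/{work['instance']}/ip", "192.0.2.10")

t = threading.Thread(target=compute_node, args=("node1",))
t.start()

# The scheduler selects the first allocation candidate (the placement
# call is elided) and writes the allocation plus an image URL to the
# key the compute node is watching.
etcd.put("/compute/node1/work",
         {"instance": "vm-1", "image": "https://example.com/img.qcow2",
          "vcpus": 2, "memory_mb": 2048, "disk_gb": 10})

t.join()
print(etcd.data["/instance/vm-1/ip"])  # the booted VM's address
```

In the real project a proper etcd cluster provides the shared state, which is what lets the scheduler and compute nodes coordinate without an RPC bus.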

In the course of the work described in those ↑ short paragraphs is more learning about some of the fundamentals of creating a virtual machine than a few years of reading and reviewing inscrutable nova code. I should have done this much sooner.

The really messy code is in etcd-compute on GitHub.

by Chris Dent at January 13, 2019 09:00 PM

January 11, 2019

Ben Nemec

Debugging a Segfault in oslo.privsep

I recently helped track down a bug exposed by a recent oslo.privsep release that added threading to allow parallel privileged calls. It was a segfault happening in the privsep daemon that was caused by a C call in a privileged Neutron module. This, as you might expect, was a little tricky to debug so I thought I'd document the process for posterity.

There were a couple of reasons this was tough. First, it was a segfault, which meant something went wrong in the underlying C code. Python debuggers need not apply. Second, there's a bunch of forking that happens to start the privsep daemon, which meant I couldn't just run Python in gdb. Well, maybe I could have, but my gdb skills are not strong enough to navigate through a bunch of different forks.

To get gdb attached to the correct process, I followed the debugging with gdb instructions from Python, specifically the ones to attach to an existing process. To make sure I had time to get it attached, I added a sleep to the startup of the privsep daemon installed in my Neutron tox venv. Essentially I would run the test:

tox -e dsvm-functional -- neutron.tests.functional.agent.linux.test_netlink_lib.NetlinkLibTestCase.test_list_entries

Find the privsep-helper process that was eventually started, then attach gdb to it with:

gdb python [pid]

I also needed to install some debuginfo packages on my system to get useful tracebacks from the libraries involved. Gdb gave me the install command to do so, which was handy. I believe the important part here was dnf debuginfo-install libnetfilter_conntrack, but that will vary depending on what you're debugging.

Once gdb was attached, I typed c to tell it to continue (gdb interrupts the process when you attach), then once the segfault happened I used commands like bt, list, and print to examine the code and state where the crash happened. This allowed me to determine that we were passing in a bad pointer as one of the parameters for the C call. It turned out we were truncating pointers because we hadn't specified the proper parameter and return types, so large memory addresses were being squeezed into ints that were too small to hold them. Why the oslo.privsep threading change exposed this I don't know, but my guess is that it has something to do with the address space changing when the calls were made from a thread instead of the main process.
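The underlying failure mode is easy to reproduce with Python's ctypes, which defaults to treating every argument and return value as a C int when no types are declared. A minimal sketch of the fix, using libc's malloc/free rather than the actual libnetfilter_conntrack calls:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Without these declarations ctypes assumes every argument and return
# value is a C int, so a 64-bit pointer returned by malloc can be
# silently truncated to 32 bits -- the failure mode described above.
libc.malloc.argtypes = [ctypes.c_size_t]
libc.malloc.restype = ctypes.c_void_p
libc.free.argtypes = [ctypes.c_void_p]
libc.free.restype = None

ptr = libc.malloc(64)
assert ptr is not None  # allocation succeeded; full pointer preserved
libc.free(ptr)
print("pointer handled without truncation")
```

Declaring `argtypes` and `restype` on every foreign function is cheap insurance: the truncation only bites when an address happens to exceed 32 bits, which is why bugs like this can lurk until something (here, threading) changes the address space layout.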

In any case, after quite a bit of cooperative debugging in the OpenStack community and a fair amount of rust removal from my gdb skills, we were able to resolve this bug and unblock the use of threaded oslo.privsep. This should allow us to significantly reduce the attack surface for OpenStack services, resulting in much better security.

I hope this was useful, and as always if you have any questions or comments don't hesitate to contact me.

by bnemec at January 11, 2019 09:24 PM

OpenStack Superuser

Buffer your open infrastructure knowledge: Upcoming training & webinars

It’s January. The time of year when the mind turns to calls for papers. Lifelong learning. And binge watching home organization shows rather than actually decluttering. (Maybe just me on that last one?)

Here are top picks for free or low cost upcoming learning opps. If you’ve recently done a webinar and want to share your takeaways about it for Superuser, remember that contributed posts earn you Active User Contributor status.

Intent-based network load balancer automation and Ansible

In this session and live demo from Red Hat, you will learn from actual customer use cases how to use Ansible automation modules for configuring network functions, automating load balancer clusters and creating L4-L7 configurations.
Details here.

Storyboard and Launchpad

The Vietnam Open Infrastructure Community’s first meetup (and webinar) of the year will feature the OpenStack Foundation’s Kendall Nelson talking about StoryBoard and Launchpad. StoryBoard is a web application for task tracking across inter-related projects. Launchpad is a web application and website that allows users to develop and maintain software.
Details here.

Using NetApp Trident with cloud volumes ONTAP for provisioning Kubernetes persistent volumes

The webinar will show you how to provision persistent volumes for Kubernetes using NetApp’s Trident and Cloud Volumes ONTAP. NetApp Trident is a dynamic storage provisioner that leverages Cloud Volumes ONTAP for Kubernetes persistent volume claims. Trident is a fully supported open source project maintained by NetApp. Details here.

Kubernetes and Istio Security

Gadi Naor, CTO and co-founder at startup Alcide, will cover basic Istio security features, managing and restricting traffic to external services with Kubernetes and Istio network policies, and spotting security anomalies, as well as some interesting use cases. Details here.

Modernize your data center through open source

Cumulus’ co-founder and current CTO, JR Rivers, will discuss the growth of the open source community and how Cumulus is helping bring open source principles into modern data center networks. He’ll dive into some of the company’s contributions to the open source community such as Open Network Install Environment (ONIE), ifupdown2, VRF for Linux and FRRouting. Details here.

Suse Expert Days 2019

Offered in more than 80 cities worldwide, the SUSE Expert Days tour offers a free day of technical discussions, presentations and demos. The theme? Open. Redefined. Participants will learn how to

  • Transform IT infrastructure
  • Create a more agile business
  • Make room for innovation

Events kick off in January for Europe and in February for Latin America and North America. Full list of events here.

The post Buffer your open infrastructure knowledge: Upcoming training & webinars appeared first on Superuser.

by Nicole Martinelli at January 11, 2019 05:03 PM

Chris Dent

Placement Update 19-01

Hello! Here's placement update 19-01. Not a ton to report this week, so this will mostly be updating the lists provided last week.

Most Important

As mentioned last week, there will be a meeting next week to discuss what is left before we can pull the trigger on deleting the placement code from nova. Wednesday is looking like a good day, perhaps at 1700UTC, but we'll need to confirm that on Monday when more people are around. Feel free to respond on this thread if that won't work for you (and suggest an alternative).

Since deleting the code is dependent on deployment tooling being able to handle extracted placement (and upgrades to it), reviewing that work is important (see below).

What's Changed

  • It was nova's spec freeze this week, so a lot of effort was spent getting some specs reviewed and merged. That's reflected in the shorter specs section, below.

  • Placement had a release and was published to pypi. This was a good excuse to write (yet another) blog post on how easy it is to play with.



With spec freeze this week, this will be the last time we'll see this section until near the end of this cycle. Only one of the specs listed last week merged (placement for counting quota).

Main Themes

Making Nested Useful

I've been saying for a few weeks that "progress continues on gpu-reshaping for libvirt and xen" but it looks like the work at:

is actually stalled. Anyone have some insight on the status of that work?

Also making use of nested is bandwidth-resource-provider:

There's a review guide for those patches.

Eric's in the process of doing lots of cleanups to how often the ProviderTree in the resource tracker is checked against placement, and a variety of other "let's make this more right" changes in the same neighborhood:


Besides the meeting mentioned above, I've refactored the extraction etherpad to make a new version that has less noise in it so the required actions are a bit more clear.

The tasks remain much the same as mentioned last week: the reshaper work mentioned above and the work to get deployment tools operating with an extracted placement:

Loci's change to have an extracted placement has merged.

Kolla has a patch to include the upgrade script. It raises the question of how, or whether, the mysql-migrate-db.sh script should be distributed. Should it maybe end up in the pypi distribution?

(The rest of this section is duplicated from last week.)

Documentation tuneups:


There are still 13 open changes in placement itself. Most of the time critical work is happening elsewhere (notably the deployment tool changes listed above).

Of those placement changes, the database-related ones from Tetsuro are the most important.

Outside of placement:


If anyone has submitted, or is planning to, a proposal for summit that is placement-related, it would be great to hear about it. I had thought about doing a resilient placement in kubernetes with cockroachdb for the edge sort of thing, but then realized my motivations were suspect and I have enough to do otherwise.

by Chris Dent at January 11, 2019 03:43 PM

January 10, 2019

Ben Nemec

OpenStack Virtual Baremetal Master is Now 2.0-dev

As promised in my previous update on OVB, the 2.0-dev branch has been merged to master. If this breaks you, switch to the stable/1.0 branch, which is the same as master was prior to the 2.0-dev merge. Note that this does not mean OVB is officially 2.0 yet. I've found a couple more deprecated things that need to be removed before we declare 2.0. That will likely happen soon though.

by bnemec at January 10, 2019 08:30 PM

OpenStack Superuser

Tips for talks on the Open Development Track for the Denver Summit

Time to harness those brainstorms: The call for presentations for the first Open Infrastructure Summit is open until January 23. OpenStack Summit veterans will notice some changes in the event beyond the name. In order to reflect the diversity of projects, use cases and software development in the ecosystem, several conference tracks have been added or renamed.

Historically, just 25 percent of submissions have been chosen for the conference. In light of that fierce competition, Superuser is talking to Programming Committee members of the Tracks for the Denver Summit to help shed light on what topics, takeaways and projects they are hoping to see covered in sessions submitted to their Track.
Here we’re featuring the Open Development Track, formerly the Open Source Community Track. Thierry Carrez, VP of engineering for the OpenStack Foundation and Allison Randal, board member of the OpenStack Foundation, are tasked with leading selections for this Track. They shared these insights ahead of the submission deadline.

Open Development Track topics

The 4 Opens, the future of free and open source software, challenges of open collaboration, open development best practices and tools, open source governance models, diversity and inclusion, mentoring and outreach, community management.

Describe the content you’d like to see submitted to this Track.

Today, open source licensing is not enough, we need to define standards on how open source software should be built. Models of open development come with their benefits and challenges, and with their best practices and tools. I’d like this track to be where those open development models, those standards on how open source should be built are discussed. Beyond that, this track will cover open source governance models, the challenges of diversity and inclusion, the need for mentoring and outreach, and community management topics.

What should Summit attendees take away from sessions in this Track?

Too much of open source software development today is closed, one way or another. Its development may be done behind closed doors, or its governance may be locked down to ensure lasting control by its main sponsor. I hope that this track will expose the benefits of open collaboration and help users tell the difference between different degrees of openness. I also hope this track will explain why diversity is critical to the success of open source projects and inspire attendees to participate in mentoring and outreach.

Who’s the target audience for this track?

This Track is broadly applicable to anyone who participates in an open source project, whether as a designer, operator, developer, community member or sponsor. No specialist background knowledge is required; you’ll gain value from the sessions even if you are completely new to open collaboration. Experienced community leaders will benefit from exchanging ideas and best practices across communities, while new community leaders, or anyone curious about getting involved in community leadership, will benefit from the experiences of those who have gone before them.

The post Tips for talks on the Open Development Track for the Denver Summit appeared first on Superuser.

by Allison Price at January 10, 2019 03:03 PM

Trinh Nguyen

Searchlight at Stein-2 (R-14 & R-13)

Finally, we have reached the Stein-2 milestone. Over the past three months, we have been working on clarifying the use cases of Searchlight as well as envisioning a sustainable future for the project. We decided to make Searchlight a multi-cloud application that can provide search capability across multiple cloud orchestration platforms (e.g., AWS, Azure, Kubernetes). The effort was made possible by the contributions of Thuy Dang (our newest core member) and Sa Pham [3]. You can check out the documents at [4].

The projects are versioned as follows:
  • searchlight:
  • searchlight-ui:
python-searchlightclient wasn't released in this milestone because there were no big changes.

Here are the major changes included in this release:
  • Searchlight use cases and vision [1]
  • A bug fix [2]
Let's get searching!!!


[1] https://review.openstack.org/#/c/629104/
[2] https://review.openstack.org/#/c/621996/
[3] https://review.openstack.org/#/c/629471/
[4] https://docs.openstack.org/searchlight/latest/user/usecases.html

by Trinh Nguyen (noreply@blogger.com) at January 10, 2019 06:31 AM

January 09, 2019

OpenStack Superuser

How open infrastructure drives the “empowered edge”

Edge computing is at the top of the tech bingo terms for yet another year. Gartner Inc. recently announced that it expects the “empowered edge” to shape everything from the internet of things to 5G over the next five years: “Cloud computing and edge computing will evolve as complementary models with cloud services being managed as a centralized service executing, not only on centralized servers, but in distributed servers on-premises and on the edge devices themselves.”

Here’s a roundup of Superuser articles pushing those edgy boundaries.

The last mile: Where edge meets containers

Learn more about StarlingX: The edge project taking flight

How open source projects are pushing the shift to edge computing

Where the cloud native approach is taking NFV architecture for 5G

Carnegie Mellon’s clear view on 5G cloudlets

Forecasting the future of cloud computing in China

Get involved

If you’d like to know more about edge initiatives, check out the Edge Working Group resources. You can dial in to the Edge WG weekly calls or the weekly use cases calls, check the StarlingX sub-project team calls, find further material on the website about how to contribute, or jump on IRC for OpenStack project team meetings in your area of interest. And if you’re working with edge and want to write about it (unlocking Active User Contributor status as well!), just email editorATopenstack.org

The post How open infrastructure drives the “empowered edge” appeared first on Superuser.

by Superuser at January 09, 2019 03:01 PM

January 08, 2019

OpenStack Superuser

Where to connect with OpenStack in 2019

Every year, thousands of Stackers connect in real life to share knowledge about open infrastructure at local meetups, OpenStack Days, the Summits and hackathons.

Start filling out your 2019 calendar at the events page or find your local folks on the OpenStack Foundation (OSF) Meetup page.

Right out of the gate in January, user groups from New Delhi to Munich are putting their heads together. You’ll also find community members answering questions from a booth at the fun-packed FOSDEM in Brussels, February 2-3. (For what to expect, check out our coverage here.) There are a host of spring events – consider checking out OpenInfraDays in London on April 1. It will cover all things cloud computing, from the latest developments in bare-metal hardware infrastructure through to scaling scientific computing workloads.

The next milestone: the first-ever Open Infrastructure Summit, taking place April 29-May 1 in Denver. Remember to plan your trip with an eye on the Project Teams Gathering (PTG), May 2-4, 2019, also in Denver. It’s an event for anyone who self-identifies as a member of a specific project team, as well as operators who specialize in a given project and are willing to spend their time giving feedback on their use cases and contributing their usage experience to the project team. Wondering what it’s like when the “zoo” of OpenStack projects gets together? Check out this write-up.

Then there’s an entire summer of global get-togethers that will be focusing on open infrastructure. (The calendar, much like the technology, is constantly evolving, so keep an eye on that events page as it fills up for the year.)

Once you’ve saved the date, remember that Superuser is always interested in community content – get in touch at editorATopenstack.org and send us your write-ups (and photos!) of events. Articles posted are eligible for Active User Contributor (AUC) status.


The post Where to connect with OpenStack in 2019 appeared first on Superuser.

by Superuser at January 08, 2019 03:09 PM

January 07, 2019

John Likes OpenStack

ceph-ansible podman with vagrant

These are just my notes on how I got Vagrant with libvirt working on CentOS 7 and then used ceph-ansible's Fedora 29 podman tests to deploy a containerized Ceph Nautilus preview cluster without Docker. I'm doing this in hopes of hooking Ceph into the new podman TripleO deploys.

Configure Vagrant with libvirt on CentOS7

I already have a CentOS7 machine I used for tripleo quickstart. I did the following to get vagrant working on it with libvirt.

1. Create a vagrant user

sudo useradd vagrant
sudo usermod -aG wheel vagrant
sudo usermod --append --groups libvirt vagrant
sudo su - vagrant
mkdir .ssh
chmod 700 .ssh/
cd .ssh/
curl https://github.com/fultonj.keys > authorized_keys
chmod 600 authorized_keys

Continue as the vagrant user.
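Before moving on, it's worth confirming that the group memberships granted by the usermod calls above actually took effect (a fresh login is needed for them to apply). Here's a small sanity-check sketch; the group names wheel and libvirt come from the commands above, while the check_vagrant_groups helper is just my own hypothetical wrapper:

```shell
# Print, for each group the usermod calls above granted, whether the
# given user is actually a member of it yet.
check_vagrant_groups() {
  user="$1"
  for grp in wheel libvirt; do
    # id -nG lists the user's group names; grep -qx matches a whole line
    if id -nG "$user" 2>/dev/null | tr ' ' '\n' | grep -qx "$grp"; then
      echo "OK: $user is in $grp"
    else
      echo "MISSING: $grp (fix with: sudo usermod -aG $grp $user, then re-login)"
    fi
  done
}

check_vagrant_groups "$(id -un)"
```

If either group shows up as missing after re-running usermod, log out and back in so the new membership is picked up by the session.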

2. Install the Vagrant RPM and other dependencies

Download the CentOS Vagrant RPM from https://www.vagrantup.com/downloads.html and install other RPMs needed for it to work with libvirt.

sudo yum install vagrant_2.2.2_x86_64.rpm
sudo yum install qemu libvirt libvirt-devel ruby-devel gcc qemu-kvm
vagrant plugin install vagrant-libvirt

Note that I already had many of the libvirt deps above from quickstart.

3. Get a CentOS7 box for verification

Download the centos/7 box.

[vagrant@hamfast ~]$ vagrant box add centos/7
==> box: Loading metadata for box 'centos/7'
box: URL: https://vagrantcloud.com/centos/7
This box can work with multiple providers! The providers that it
can work with are listed below. Please review the list and choose
the provider you will be working with.

1) hyperv
2) libvirt
3) virtualbox
4) vmware_desktop

Enter your choice: 2
==> box: Adding box 'centos/7' (v1811.02) for provider: libvirt
box: Downloading: https://vagrantcloud.com/centos/boxes/7/versions/1811.02/providers/libvirt.box
box: Download redirected to host: cloud.centos.org
==> box: Successfully added box 'centos/7' (v1811.02) for 'libvirt'!
[vagrant@hamfast ~]$
Create a Vagrantfile for it with `vagrant init centos/7`.

4. Configure Vagrant to use a custom storage pool (Optional)

Because I was already using libvirt directly with an images pool, Vagrant was unable to download the centos/7 box into it. That suits me, as I want to keep my images pool separate for when I use libvirt directly. To make Vagrant happy, I created a dedicated pool for it and added the following to my Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    libvirt.storage_pool_name = "vagrant_images"
  end
end

After doing the above `vagrant up` worked for me.
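For reference, a directory-backed pool like the vagrant_images one above can be defined with a small libvirt pool XML. This is a sketch, not the author's exact setup: the target path is an assumption, so point it at whatever directory you prefer.

```xml
<!-- vagrant_images_pool.xml: the path below is hypothetical, adjust to taste -->
<pool type='dir'>
  <name>vagrant_images</name>
  <target>
    <path>/var/lib/libvirt/vagrant-images</path>
  </target>
</pool>
```

After creating the directory, `virsh pool-define vagrant_images_pool.xml`, then `virsh pool-start vagrant_images` and `virsh pool-autostart vagrant_images` make it available. The pool name must match the libvirt.storage_pool_name value in the Vagrantfile.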

Run ceph-ansible's Fedora 29 podman tests

1. Clone ceph-ansible master

git clone git@github.com:ceph/ceph-ansible.git; cd ceph-ansible

2. Satisfy dependencies

sudo pip install -r requirements.txt
sudo pip install tox
cp vagrant_variables.yml.sample vagrant_variables.yml
cp site.yml.sample site.yml

Optionally: modify Vagrantfile for vagrant_images storage pool

3. Deploy with the container_podman scenario

tox -e dev-container_podman -- --provider=libvirt

The above will result in tox triggering Vagrant to create the test virtual machines, after which ceph-ansible will install Ceph on them.

4. Inspect Deployment

Verify the virtual machines are running:

[vagrant@hamfast ~]$ cd ~/ceph-ansible/tests/functional/fedora/29/container-podman
[vagrant@hamfast container-podman]$ cp vagrant_variables.yml.sample vagrant_variables.yml
[vagrant@hamfast container-podman]$ vagrant status
Current machine states:

mgr0 running (libvirt)
client0 running (libvirt)
client1 running (libvirt)
rgw0 running (libvirt)
mds0 running (libvirt)
rbd-mirror0 running (libvirt)
iscsi-gw0 running (libvirt)
mon0 running (libvirt)
mon1 running (libvirt)
mon2 running (libvirt)
osd0 running (libvirt)
osd1 running (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
[vagrant@hamfast container-podman]$

Connect to a monitor and see that it's running Ceph containers:

[vagrant@hamfast container-podman]$ vagrant ssh mon0
Last login: Mon Jan 7 17:11:28 2019 from
[vagrant@mon0 ~]$

[vagrant@mon0 ~]$ sudo podman ps
c494695eb0c2 docker.io/ceph/daemon:latest-master /opt/ceph-container... 4 hours ago Up 4 hours ago ceph-mgr-mon0
dbabf02df984 docker.io/ceph/daemon:latest-master /opt/ceph-container... 4 hours ago Up 4 hours ago ceph-mon-mon0
[vagrant@mon0 ~]$

[vagrant@mon0 ~]$ sudo podman images
docker.io/ceph/daemon latest-master 24fdc8c3cb3f 4 weeks ago 726MB
[vagrant@mon0 ~]$

[vagrant@mon0 ~]$ which docker
/usr/bin/which: no docker in (/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin)
[vagrant@mon0 ~]$
Observe the status of the Ceph cluster:

[vagrant@mon0 ~]$ sudo podman exec dbabf02df984 ceph -s
  cluster:
    id:     9d2599f2-aec7-4c7c-a88e-7a8d39ebb557
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

  services:
    mon:        3 daemons, quorum mon0,mon1,mon2 (age 71m)
    mgr:        mon1(active, since 70m), standbys: mon2, mon0
    mds:        cephfs-1/1/1 up {0=mds0=up:active}
    osd:        4 osds: 4 up (since 68m), 4 in (since 68m)
    rbd-mirror: 1 daemon active
    rgw:        1 daemon active

  data:
    pools:   13 pools, 124 pgs
    objects: 194 objects, 3.5 KiB
    usage:   54 GiB used, 71 GiB / 125 GiB avail
    pgs:     124 active+clean

[vagrant@mon0 ~]$

Observe the installed versions:

[vagrant@mon0 ~]$ sudo podman exec -ti dbabf02df984 /bin/bash
[root@mon0 /]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@mon0 /]#

[root@mon0 /]# ceph --version
ceph version 14.0.1-1496-gaf96e16 (af96e16271b620ab87570b1190585fffc06daeac) nautilus (dev)
[root@mon0 /]#

Observe the OSDs:

[vagrant@hamfast container-podman]$ vagrant ssh osd0
[vagrant@osd0 ~]$ sudo su -
[root@osd0 ~]# podman ps
4fe23502592c docker.io/ceph/daemon:latest-master /opt/ceph-container... About an hour ago Up About an hour ago ceph-osd-2
f582b4311076 docker.io/ceph/daemon:latest-master /opt/ceph-container... About an hour ago Up About an hour ago ceph-osd-0
[root@osd0 ~]# lsblk
sda 8:0 0 50G 0 disk
sdb 8:16 0 50G 0 disk
├─test_group-data--lv1 253:1 0 25G 0 lvm
└─test_group-data--lv2 253:2 0 12.5G 0 lvm
sdc 8:32 0 50G 0 disk
├─sdc1 8:33 0 25G 0 part
└─sdc2 8:34 0 25G 0 part
└─journals-journal1 253:3 0 25G 0 lvm
vda 252:0 0 41G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 40G 0 part
└─atomicos-root 253:0 0 40G 0 lvm /sysroot
[root@osd0 ~]#

[root@osd0 ~]# podman exec 4fe23502592c cat var/lib/ceph/osd/ceph-2/type
[root@osd0 ~]#

by John (noreply@blogger.com) at January 07, 2019 08:07 PM


Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology, you should add your OpenStack blog.


Last updated:
February 21, 2019 10:07 AM
All times are UTC.
