October 21, 2017

Rich Bowen

CERN CentOS Dojo, part 3 of 4: Friday Dojo

On Friday, I attended the CentOS Dojo at CERN, in Meyrin, Switzerland.

CentOS dojos are small(ish) gatherings of CentOS enthusiasts that happen all over the world. Each one has a different focus depending on where it is held and the people that plan and attend it.

You can read more about dojos on the CentOS wiki: https://wiki.centos.org/Events/Dojo/

On Friday, we had roughly 60-70 people in attendance, in a great auditorium provided by CERN. We had 97 people registered, and since 75% is a pretty standard turnout for free-to-register events, we were very pleased.

You can get a general idea of the size of the crowd in this video:

The full schedule of talks can be seen here: https://indico.cern.ch/event/649159/timetable/#20171020

There was an emphasis on large-scale computing, since that’s what CERN does. And the day started with an overview of the CERN cloud computing cluster. Every time I attend this talk (and I’ve seen it perhaps 6 times now) the numbers are bigger and more impressive.

CERN and Geneva

This time, they reported 279,000 cores in their cluster. That’s a lot. And it’s all running RDO. This makes me insanely proud to be a small part of that endeavor.

Other presentations included reports from various SIGs. SIGs are Special Interest Groups within CentOS. This is where the work is done to develop projects on top of CentOS, including packaging, testing, and promotion of those projects. You can read more about the SIGs here: https://wiki.centos.org/SpecialInterestGroup

If you want to see your project distributed in the CentOS distro, a SIG is the way to make this happen. Drop by the centos-devel mailing list to propose a SIG or join an existing one.

The entire day was recorded, so watch this space for the videos and slides from the various presentations.

The CERN folks appeared very pleased with the day, and stated their intention to do the event again on an annual basis, if all works out. These things aren’t free to produce, of course (even though we strive to make them always free to attend), so if your organization is interested in sponsoring future dojos, please contact me. I’ll also be publishing a blog post over on seven.centos.org in the coming days about what’s involved in doing one of these events, in case you’d like to host one at your own facility.

by rbowen at October 21, 2017 11:59 AM

CERN CentOS Dojo, event report: 2 of 4 – CERN tours

(This post is the second in a series of four. They are gathered here.)

The second half of Thursday was where we got to geek out and tour various parts of CERN.

I was a physics minor in college, many years ago, and had studied not just CERN, but many of the actual pieces of equipment we got to tour, so this was a great privilege.

We started by touring the data center where the data from all of the various physics experiments is crunched into useful information and discoveries. This was amazing for a number of reasons.

From the professional side, CERN is the largest installation of RDO – the project I work on – that we know of: 279,000 cores running RDO OpenStack.

For those not part of my geek world, that translates into hundreds of thousands of physical computers, arranged in racks, crunching data to unlock the secrets of the universe.

For those that are part of my geek world, you can understand why this was an exciting thing to see in person and walk through.

The full photo album is here, but I want to particularly show a couple of shots:

Visiting CERN

Here we have several members of the RDO and CentOS team standing in front of some of the systems that run RDO.

Visiting CERN

And here we have a photo that only a geek can love – this is the actual computer on which the very first website ran. Yes, boys and girls, that’s Tim Berners-Lee’s desktop computer from the very first days of the World Wide Web. It’s ok to be jealous.

There will also be some video over on my YouTube channel, but I haven’t yet had an opportunity to edit and post that stuff.

Next, we visited the exhibit about the Large Hadron Collider (LHC). This was stuff that I studied in college, and have geeked out about in the years since then.

There are pictures from this in the larger album, but I want to point out one particular picture of something that absolutely blew my mind.

Most of the experiments in the LHC involve accelerating sub-atomic particles (mostly protons) to very high speeds – very close to the speed of light – and then crashing them into something. When this happens, bits of it fly off in random directions, and the equipment has to detect those bits and learn things about them – their mass, speed, momentum, and so on.

In the early days, one of the ways that they did this was to build a large chamber and string very fine wires across it, so that when the particles hit those wires they would cause electrical impulses.

Those electrical impulses were captured by these:

CERN visit

Those are individual circuit boards – THOUSANDS of them – each one hand-soldered, with individual resistors, capacitors, and ICs attached by hand. The amount of work involved – the dedication, time, and attention to detail – is simply staggering. This photo shows perhaps 1/1000th of the total number of boards. If you’ve done any hand-soldering or electronics projects, you’ll have a small sense of the scale of this thing. I was absolutely floored by this device.

Outside on the lawn were several pieces of gigantic equipment that were used in the very early days of particle physics, and this was like having the pages of my college text book there in front of me. I think my colleagues thought I’d lost my mind a little.

College was a long time ago, and most of the stuff I learned has gone away, but I still have a sense of awe about it all. That an idea (let’s smash protons together!) resulted in all of this – and in more than 10,000 people working in one place to make it happen – is really a testament to the power of the human mind. I know some of my colleagues were bored by it all, but I am still reeling a little from being there, and seeing and touching these things. I am so grateful to Tim Bell and Thomas Oulevey for making this astonishing opportunity available to me.

Finally, we visited the ATLAS experiment, where they have turned the control room into a fish tank where you can watch the scientists at work.

CERN visit

What struck me particularly here was that most of the people in the room were so young. I hope they have a sense of the amazing opportunity that they have here. I expect that a lot of these kids will go on to change the world in ways that we haven’t even thought of yet. I am immensely jealous of them.

So, that was the geek chapter of our visit. Please read the rest of the series for the whole story.

by rbowen at October 21, 2017 11:13 AM

CERN CentOS Dojo 2017, Event report (0 of 4)

For the last few days I’ve been in Geneva for the CentOS dojo at CERN.

What’s CERN? – http://cern.ch/

What’s a dojo? – https://wiki.centos.org/Events/Dojo/

What’s CentOS? – http://centos.org/

A lot has happened that I want to write about, so I’ll be breaking this into several posts:

(As usual, if you’re attempting to follow along on Facebook, you’ll be missing all of the photos and videos, so you’ll really want to go directly to my blog, at http://drbacchus.com/)


by rbowen at October 21, 2017 10:21 AM

October 20, 2017

OpenStack @ NetApp

Introducing the New and Improved OpenStack Deployment and Operations Guide

By Chad Morgenstern and Jenny Yang. Have you visited NetApp’s OpenStack section of thePub lately? If not, you may have missed NetApp’s new and improved OpenStack Deployment and Operations Guide. Now a living document, easy for you to read and for us to maintain, the Guide is your one-stop source for all things NetApp ... Read more

The post Introducing the New and Improved OpenStack Deployment and Operations Guide appeared first on thePub.

by Jenny Yang at October 20, 2017 07:00 PM

OpenStack Superuser

OpenStack Project Teams Gathering: What gets done and why you should attend

DENVER–Over 400 developers gathered in Denver, Colorado at the most recent Project Teams Gathering (PTG) to collaborate on their respective OpenStack projects. There was a veritable zoo present (each project has an animal mascot) as over 40 teams worked tirelessly for five days thanks to an endless supply of coffee, as well as our sponsors, Fujitsu, IBM, Huawei, Red Hat and Verizon.

Missed this PTG? Save the date for the next one: it will take place in Dublin, Ireland the week of February 26, 2018.

Superuser talked to some folks present to find out what makes it worthwhile.


Rico Lin of EasyStack, Project Team Leader (PTL) of Heat.

What are your thoughts on the new PTG format?

“I think it’s very useful for developers. I can just grab team members and PTLs to sit down and work on patches together.”

He points out that if the team is considering something, meeting face-to-face is very useful because decisions get made, and goals are accomplished. In the past, he’s seen issues with video meet-ups because people don’t always show up, but at the PTG, people are very focused.

“People concentrate here, since they’re already in their respective teams.”

Do you think the PTG is more valuable than the Design Summit was?

“Yeah, definitely. I was always running somewhere at the Design Summit, because there were cross-project sessions going on, users were there, operators were there…it was almost impossible to write any lines of code as a developer, which wasn’t that productive.”

He points out that the reason for the separate gathering of the developers was to give them time first to discuss what they want to design at the Forum, and then to make something happen at the PTG.

Lin says that the separation of users and operators allows them to keep tuning the code to improve it for the real use cases.


Jaesuk Ahn, OpenStack Architect at SK Telecom.

How did the operator feedback from the Forum affect how you went into this PTG?

Ahn says it was nice to be able to meet vendors and get their thoughts on the project and their opinions about the issues.

“There, I was able to not only discuss technological problems, but also user stories and requirements. So based on that and the feedback I received at the Summit, I was able to tell my company about not only why it’s important for technological reasons, but why it’s important for the business aspect and what other vendors think of this project so that it will continue to succeed.”

Thierry Carrez, vice president of engineering at the OpenStack Foundation.

Now that we have had both a PTG and a Forum, how do you think the process is going?

“I think it’s going well. Staggering the two events lets us gather the feedback we should be working on for the next cycle a few months ahead, and then come to the PTG at the start of the dev cycle with a list of things to work on.

Previously, we had a lot of frustration in the Design Summit because, at the same time the devs were trying to organize the work for the upcoming cycle, we had new requirements coming from users in the room who were trying to get their priorities through, and it was frustrating for devs, who couldn’t really get their requirements through. They had to constantly rehash all the background of the things that we did, so I think splitting it made it a more pleasant experience for the various people involved, and a more productive set of events, by really focusing them on a specific stage of the development cycle.”

What has changed going into the second PTG?

“The PTG is trying to strike a balance between highly dynamic content where people discuss what they need to discuss and sometimes those priorities change based on who is in the room or what’s been recently discussed on the mailing list, so it’s really difficult to come up with a schedule in advance. So the flexibility of the event is really key to having productive meetings by spending the right amount of time on the right topics. At the same time, it’s a balance to have because you also need to make enough of that content that will be discussed emerge externally to the team in a way that others that are interested in participating can provide input to those discussions and know that those discussions are coming.

The big change that we’ve implemented between the two events is a dynamic scheduling system that shows what’s being discussed right now and what will be discussed next. There’s some flexibility in terms of timing, so you can see at a glance what’s going on at the PTG; what’s being discussed. So if you’re wondering where to go, you can just scan the list instead of checking all the rooms and all the different Etherpads. [The PTG bot] was pretty well adopted. I was a bit worried that no one would pick it up, but I feel like the attendees found it and used it. We obviously have a long list of improvements, but it’s made a lot of difference.”

The post OpenStack Project Teams Gathering: What gets done and why you should attend appeared first on OpenStack Superuser.

by Ashlee Ferguson at October 20, 2017 02:38 PM

October 19, 2017

OpenStack Superuser

How to avoid vendor lock-in with multi-cloud deployments

As businesses outsource their infrastructure to public cloud providers, they are in turn taking on major risks. In a recent piece by Financial News (gated), senior executives at Goldman Sachs and Standard Chartered warned that an over-reliance on a small band of cloud service providers could result in a major hack or outage wreaking havoc on the global banking system.

Lock-in is a global issue: Bain’s Cloud Computing Survey noted that the share of respondents citing vendor lock-in as a “top three concern” grew from 7 percent to 23 percent from 2012 to 2015. Of course, cloud vendor lock-in issues extend beyond uptime risk; they also include the regulatory risk of changes in data sovereignty policies or the financial risk of having to endure price hikes without any negotiating power; Dropbox went so far as to migrate off of AWS and onto their own system to get control of their costs.

Fortunately, CockroachDB can help eliminate these problems by enabling seamless multi-cloud deployments.

Multi-cloud deployments hedge risk

On stage at the 2017 OpenStack Summit, representatives from some of the top cloud providers, including IBM, VMware, and Red Hat, set up a multi-cloud CockroachDB cluster. As servers joined the network, their new peers continually shared information about the cluster and collectively guaranteed clients would get the exact same results no matter which node was queried.

A CockroachDB cluster operates as a self-healing, elastic data layer that can span private and public clouds alike, enabling services to survive a node, data center, or even an entire cloud provider going down without experiencing downtime or lost data. Using CockroachDB means shifting operational thinking from disaster recovery to disaster resilience, and it completely sidesteps the risks associated with vendor lock-in.

Retaining data control

Helping companies of all sizes avoid vendor lock-in is top of mind for us; that’s one of the reasons why we built CockroachDB from the ground up to run on commodity hardware, keeping the choice of where and how data is managed in the hands of IT teams.

In the Financial News article, a Standard Chartered executive pondered whether other players would emerge to help eliminate this systemic risk of cloud vendor lock-in. We think CockroachDB answers the call.

Nate Stewart is head of product at Cockroach Labs, this post first appeared on the company’s blog.

Cover Photo // CC BY NC

The post How to avoid vendor lock-in with multi-cloud deployments appeared first on OpenStack Superuser.

by Nate Stewart at October 19, 2017 02:05 PM

October 18, 2017

OpenStack Superuser

OpenStack Days: Coming to a city near you

OpenStack Days are a great way to get plugged into the community near you — whether you’re based in Bangalore or Mexico City.

There were around 20 of them spanning the globe in 2017, and many are already in the planning stages for next year.
You’ll find a list of upcoming OpenStack events – including meetup groups, hackathons and OpenStack Days – at https://www.openstack.org/community/events/. And if you’re interested in launching an OpenStack Day in your corner of the world, here’s where to find more information.

“People can’t always travel to every Summit…Being able to go to OpenStack Days is really important,” executive director Jonathan Bryce said, speaking at the recent OpenStack Days UK.

Participating companies at that event included Memset, UKCloud, DataCentred, Stack Evolution and Rackspace; there were also packed talks by CERN’s Tim Bell and busy sessions with the Public Cloud Working Group. Each event features a tailored agenda; the UK Days offered cross-cloud demos, interviews, keynotes and upstream office hours.

OpenStack Days typically feature a mix of local and international experts who can distill what the technology means in an immediate way.

“OpenStack now represents to many degrees an operating system for the data center,” says Mark Baker, OpenStack director at Ubuntu, who keynoted at the UK Days with a talk titled “OpenStack and the invisible gorilla.” “It’s really helping customers deliver scalable, agile applications on-premises in a way that they couldn’t do with any other technology.”

The OpenStack Days also act as a bridge to the larger community.

“In addition to the really high-value end users and suppliers that are here, we’re in a global conversation,” noted Colin Bridger, senior director of Northern Europe at Mellanox at the UK Days. “It’s a really important event for us.”

See more from the OpenStack UK Days in the roundup video below.

The post OpenStack Days: Coming to a city near you appeared first on OpenStack Superuser.

by Superuser at October 18, 2017 02:00 PM

October 17, 2017

Chris Dent

TC Report 42

we can't always hold the belief which we believe 5 years ago

After last week's report I got into an interesting conversation about it. The quote above, from flwang1, comes from that conversation. I thought it was worth highlighting: OpenStack continues to evolve and grow, and what was once true may no longer be.

TC elections are in progress. Near the end of last week, the candidates were given the opportunity to answer questions from the community. If you can vote and you haven't yet (or even if you have), there are some good questions and answers:

Voting continues until Oct 20, 2017 23:45 UTC. If you are eligible to vote, you should have received an email from "Kendall Nelson (CIVS poll supervisor)" <civs@cs.cornell.edu> with a subject of Poll: Queens TC Election. Several people have reported finding theirs in their spam folder, especially on Gmail.

Other Stuff

There's plenty of other stuff going on.

Take your manager to the TC day

On Wednesday, Josh Harlow showed up asking whether a meeting between the TC and managers of engineers committed to OpenStack might be fruitful. There were mixed reactions, mostly centered on "what would we do with the information gained?" and "how do you make it something other than a festival of complaint?"

If we can figure out good answers to those questions, I think it could be a useful engagement. There's an implicit assumption that there is straightforward proxying across the several boundaries between the happenings in the OpenStack community and what's going on within the confines of an employer. That assumption doesn't seem to be valid, and even if it is, having some conversations to re-align expectations (from all sides) could be useful.


The ongoing debate related to Glare's application to be an official project stalled today when the proposer chose to withdraw the application to give more time for some of the issues to resolve in a more obvious fashion. This is despite the application having crossed the requisite threshold for approval. There are some highlights from the discussion that are worth remembering for when this comes up again:

  • Thursday's office hour revisited some of the issues.
  • Review comments that represent some of the positions (all on patchset 6):
    • "We talk about encouraging innovation and a certain kind of competition in the project spaces, but it is clearly not a free-market, when it comes down to it we have more of a protectionist view in many areas."
    • "I think the Glare team is refocusing on the right things. I would like to observe that change for a while to see how it goes before approving."
    • The potential overlap with Glance and/or the image API or disruption thereof is a concern.
    • "lack of focus: storing binary blobs and associated metadata is not a goal unto itself"

Foundation Board Activities

In today's office hour, there were two topics related to the TC's interaction with the OpenStack Foundation Board. There will be another joint Board/TC/UC meeting, and while agenda items are being solicited, there's general agreement that the meeting in Denver feels like it was only yesterday.

The other topic concerned initial discussions around expanding the projects hosted by the Foundation to allow not-OpenStack, but related-to-OpenStack, projects to be in the domain of the Foundation. Examples include tools that enable NFV, edge computing, CI/CD, and containers – stuff that uses infrastructure. If this goes through, one of the potential wins is that existing OpenStack projects may get renewed focus and clarity of purpose.


If you can vote in the TC elections, please do.

by Chris Dent at October 17, 2017 06:15 PM

OpenStack Superuser

How to deploy multi-cloud serverless and Cloud Foundry APIs at scale

Ken Parmelee, who leads the API gateway for IBM and Big Blue’s open source projects, has a few ideas about open-source methods for “attacking” the API and how to create micro-services and make them scale.

“Micro-services and APIs are products and we need to be thinking about them that way,” Parmelee says. “As you start to put them up people rely on them as part of their business. That’s a key aspect of what you’re doing in this space.”

He took the stage at the recent Nordic APIs 2017 Platform Summit and challenged a few popular notions.

“Fail fast is not really a good concept. You want to get something out there that’s fantastic on the first run. That doesn’t mean you need to take tons of time, but make it fantastic then keep evolving and improving. If it’s really bad in the beginning, people aren’t going to want to stick with you.”

He runs through IBM’s modern serverless architectures, including OpenWhisk, an open source project developed by IBM and donated to the Apache Incubator. The cloud-first, distributed, event-based programming service is the result of focusing on this space for over two years; IBM is a leading contributor and it serves as the foundation of IBM Cloud Functions. It offers infrastructure-as-a-service, scales automatically, supports multiple languages, and users pay only for what they actually use. Challenges were also a part of this journey, as they discovered that serverless actions need securing and ease of use – anonymous access, missing usage trails, fixed URL schemes, etc.

Anyone can try out these serverless APIs in just 30 seconds at https://console.bluemix.net/openwhisk/. “This sounds very gimmicky, but it is that easy to do…We’re combining the work we’ve done with Cloud Foundry and released it in Bluemix under OpenWhisk to provide security and scalability.”
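To give a flavor of how small a serverless function can be, here is a minimal OpenWhisk-style Python action. It follows the platform's documented convention that an action exposes a `main` function taking and returning a dictionary; the greeting logic itself is purely illustrative and not from the talk:

```python
# Minimal OpenWhisk-style Python action (an illustrative sketch).
# OpenWhisk's convention: a Python action defines main(params), receives
# its input as a dict, and returns a JSON-serializable dict.
def main(params):
    name = params.get("name", "stranger")
    return {"greeting": f"Hello, {name}!"}

# Local invocation for illustration; on OpenWhisk the platform calls main
# with the parameters supplied at invoke time.
print(main({"name": "OpenStack"}))
```

Deployed, such a function would typically be registered with the `wsk` CLI (e.g. `wsk action create greeting greeting.py`, where the action and file names here are hypothetical) and then invoked over the platform's API.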

“Flexibility is also hugely important for micro-services,” he says. “When you’re working in the real world with APIs, you start to have to scale across clouds.” That means from your on-premise cloud to the public cloud, and “having an honest concept of how you’re going to do that is important.”

And when thinking about the “any cloud concept,” he warns that it’s not “throw it into any Docker container and it runs anywhere. That’s great but it needs to run effectively in those environments. Docker and Kubernetes help a lot, but you want to operationalize how you’re going to do it.” Think ahead on the consumption of your API – whether it’s running only internally or will expand to a public cloud and be publicly consumable – you need to have that “architectural view,” he adds.

“We all hope that what we create has value and gets consumed a lot,” Parmelee says. The more successful the API, the bigger the challenge of taking it to the next level. APIs are building blocks for micro-services or “inter-services.”

The future of APIs is cloud native — regardless of where you start, he says. Key factors are scalability, simplifying back-end management, cost and avoiding vendor lock-in.

You can catch his entire 23-minute talk below or on YouTube.

Cover Photo // CC BY NC

The post How to deploy multi-cloud serverless and Cloud Foundry APIs at scale appeared first on OpenStack Superuser.

by Superuser at October 17, 2017 02:03 PM

SUSE Conversations

New Research Shows Hybrid Cloud Takes Center Stage

It’s probably not news to anyone that cloud computing has upended traditional IT and has continued to grow unabated for years. Many commentators have suggested that the term “cloud” will disappear from our lexicon as it has become completely ubiquitous. IT professionals must now defend their choice to not put a new workload into …

+read more

The post New Research Shows Hybrid Cloud Takes Center Stage appeared first on SUSE Blog.

by jvonvoros at October 17, 2017 12:35 PM


Please allow me to introduce ourselves

Whether we’re people of wealth and taste is debatable, but we have been around for a long time enabling OpenStack on bare metal. As CEO of data infrastructure provider Host-Telecom.com, I’m pleased to announce our first sponsorship of the OpenStack Summit, taking place in Sydney from Nov. 6-8. We’re excited to officially join the community!

Headquartered in the EU-member country of the Czech Republic, Host-Telecom is committed to OpenStack as we support cloud users who want the flexibility and security of an open platform. With customers currently in both the U.S. and Europe, our goal is to extend the use of OpenStack, providing users with a powerful and adaptable platform that allows stability and growth without the exorbitant software licensing costs of commercial cloud solutions. So how do we do that?

Data backup and disaster recovery based on OpenStack

While Host-Telecom runs OpenStack deployments on bare metal, offering cloud storage and cloud Infrastructure as a Service (IaaS), we also provide OpenStack-based cloud backup and disaster recovery services with our partner, Hystax.

The architecture enabling our services adapts the user’s data and IT infrastructure to run on OpenStack in our data center, with replication available for VMware vSphere, Hyper-V, OpenStack, and Virtuozzo. Automated testing of disaster recovery plans provides failback to production within minutes. In case of data compromise or infrastructure failure, users continue running seamlessly on bare metal in an OpenStack environment, even if they don’t realize it.

An easy path to OpenStack migration

Our service modifications also place users on a fast, easy migration path to OpenStack cloud, which Host-Telecom recommends for increased technical capabilities as well as affordability. Free from the debilitating software licensing fees of commercial platforms, organizations in a bare metal OpenStack environment can grow with much greater stability and predictability. We currently enable VMware to OpenStack migration and are developing solutions for additional platforms as swiftly as possible.

OpenStack roots

Now that you know more about our offerings, let me tell you a bit about our OpenStack background. While you may be meeting us for the first time, we’ve been active in OpenStack for quite a while, with Mirantis selecting Host-Telecom from several candidates to build the first in a series of scalability test labs for its OpenStack platform in 2014.

Unable to get reliable test results using VMs, Mirantis developers knew that they needed to deploy on live hardware to see how OpenStack scaled in the real world. Not wanting the cost and maintenance of creating their own data center, Mirantis hired Host-Telecom to create an IT infrastructure with multiple customizations to test their OpenStack deployments. We ultimately built test labs to evaluate Mirantis OpenStack scalability on multiple hundreds of nodes, increasing our capabilities to match OpenStack’s rapid growth. Figure 1 shows the wired side of OpenStack on bare metal.

Man in wires

Fig. 1 – Getting down to building scalability test labs in a bare metal OpenStack environment with Mirantis.

Expanding the OpenStack user base

We continue to create solutions to facilitate the growth of OpenStack. At Host-Telecom, running OpenStack on bare metal and delivering services that ease migration to the platform is a priority, because users truly benefit from OpenStack’s power and flexibility as well as the savings of using open source software. My team and I look forward to meeting many of you soon for more in-depth discussions about OpenStack at the Sydney Summit. In the meantime, please contact our U.S. Senior Marketing & OpenStack Community Manager, Denise Boehm, to set up a time to chat in person at the Summit, or by email or phone. See you soon!


Pavel Chernobrov
CEO, Host-Telecom

The post Please allow me to introduce ourselves appeared first on Host-Telecom.com.

by Pavel Chernobrov at October 17, 2017 07:08 AM

October 16, 2017

OpenStack Superuser

OpenStack Hackathon comes to Sydney

Australia’s first OpenStack Application Hackathon takes place Nov. 3-5, 2017 at Doltone House in the Australian Technology Park, the weekend before the OpenStack Summit Sydney.

This three-day event is organized by the local OpenStack community and welcomes students and professionals to hack the stack using the most popular open infrastructure platforms and application orchestration tools.

Whether you’re inspired by the Internet of Things or scientific research challenges, or have a creative idea to bring to market, knowing the latest tech is how you’ll get there. Hackathon teams will pick a theme – edge computing, scientific research, or market viability – then pick one of three platforms to use as they build their app:

  • Agave, a “science-as-a-service” platform for research and big data

  • Cloudify, an orchestration framework that lets you put your apps on the cloud

  • OpenShift, a container management tool for using Docker containers

All projects will sit atop OpenStack, an open source cloud platform. You can learn more about OpenStack at the upcoming OpenStack Summit in Sydney, November 6-8.

Hackers are invited to join as individuals or as part of a team. Bring your tent/sleeping bag and camp out to take in the full Hackathon experience! All qualified mentors and participants who stay for the duration of the Hackathon will receive a free pass to the Sydney Summit.

Prizes include free tickets to the Vancouver OpenStack Summit for the winning team, all the food and coffee you need to fuel your hack, as well as pre-training and mentor support from renowned technology companies around the world.

Sign up now and get ready to compete for some big prizes!

Cover Photo // CC BY NC

The post OpenStack Hackathon comes to Sydney appeared first on OpenStack Superuser.

by Superuser at October 16, 2017 03:36 PM

NFVPE @ Red Hat

Customize OpenStack images for booting from ISCSI

When working with OpenStack Ironic and TripleO and using the boot-from-iSCSI feature, you may need to add some kernel parameters to the deployment image for that to work. With some specific hardware, the deployment image must contain particular kernel parameters at boot. For example, when trying to boot from iSCSI with iBFT NICs, you need to add the following kernel parameters: rd.iscsi.ibft=1 rd.iscsi.firmware=1. The TripleO image that is generated by default doesn’t contain those parameters, because they are very specific to the hardware you use. It is also not possible right now to send…
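As a rough sketch of the kind of customization the excerpt describes, the snippet below appends the iBFT parameters to a `GRUB_CMDLINE_LINUX` line such as you might find in an image's `/etc/default/grub`. The sample file contents and the surrounding image-build step are assumptions; the actual TripleO image customization workflow is not shown here:

```python
import re

# The kernel parameters needed for boot-from-iSCSI with iBFT NICs,
# as given in the post.
EXTRA_PARAMS = "rd.iscsi.ibft=1 rd.iscsi.firmware=1"

def add_kernel_params(grub_text, extra=EXTRA_PARAMS):
    """Append extra parameters to the GRUB_CMDLINE_LINUX="..." line."""
    return re.sub(
        r'^(GRUB_CMDLINE_LINUX=")(.*)(")$',
        lambda m: m.group(1) + m.group(2) + " " + extra + m.group(3),
        grub_text,
        flags=re.MULTILINE,
    )

# A hypothetical grub defaults line from a deployment image.
sample = 'GRUB_CMDLINE_LINUX="console=ttyS0 no_timer_check"\n'
print(add_kernel_params(sample))
```

In practice you would run an edit like this against the file inside the image (for example with an image-editing tool) and regenerate the grub configuration, rather than on a loose text sample as here.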

by Yolanda Robla Mota at October 16, 2017 10:00 AM

October 15, 2017

Amrith Kumar

Running for election to the OpenStack TC!

Last week I submitted my candidacy for election to the OpenStack Technical Committee[1], [2]. One thing that I like about this new election format is the email exchanges on the OpenStack mailing list to get a sense of candidates’ points of view on a variety of things. In a totally non-work, non-technical context, I … Continue reading "Running for election to the OpenStack TC!"

by amrith at October 15, 2017 02:20 AM

October 13, 2017

OpenStack Blog

Developer Mailing List Digest September 30 – October 6


Sydney Forum Schedule Available

TC Nomination Period Is Now Over

Prepping for the Stable/Newton EOL

  • The published timeline is:
    • Sep 29 : Final newton library releases
    • Oct 09 : stable/newton branches enter Phase III
    • Oct 11 : stable/newton branches get tagged EOL
  • Given that those key dates were a little disrupted, Tony Breeds is proposing adding a week to each so the new timeline looks like:
    • Oct 08 : Final newton library releases
    • Oct 16 : stable/newton branches enter Phase III
    • Oct 18 : stable/newton branches get tagged EOL
  • Thread

Policy Community Wide Goal Progress

Tempest Plugin Split Community Wide Goal Progress

  • The goal
  • The reviews
  • List of projects which have already completed the goal:
    • Barbican
    • Designate
    • Horizon
    • Keystone
    • Kuryr
    • Os-win
    • Sahara
    • Solum
    • Watcher
  • List of projects which are working on the goal:
    • Aodh
    • Cinder
    • Magnum
    • Manila
    • Murano
    • Neutron
    • Neutron L2GW
    • Octavia
    • Senlin
    • Zaqar
    • Zun
  • Message

by Mike Perez at October 13, 2017 10:22 PM

OpenStack Superuser

OpenStack Community Contributor Awards now open for nominations

So many folks work tirelessly behind the scenes to make OpenStack great, whether they are fixing bugs, contributing code, helping newbies on IRC or just making everyone laugh at the right moment.

You can help them get recognized (with a very shiny medal!) by nominating them for the next Contributor Awards given out at the upcoming OpenStack Summit Sydney. These are informal, quirky awards — winners in previous editions included the “Duct Tape” award and the “Don’t Stop Believin’ Cup” — that shine a light on the extremely valuable work that makes OpenStack excel.

There are so many different areas worthy of celebration, but there are a few kinds of community members who deserve a little extra love:

  • They are undervalued
  • They don’t know they are appreciated
  • They bind the community together
  • They keep it fun
  • They challenge the norm
  • Other: (write-in)

As before, rather than starting with a defined set of awards, the community is asked to submit names in those broad categories. The OpenStack Foundation community team then has a little bit of fun on the back end, massaging the award titles to devise something worthy of their underappreciated efforts.

The submission form is below, so please nominate anyone you think is deserving of an award! Deadline is October 27.


Awards will be presented during the feedback session at the Summit.

Cover Photo // CC BY NC

The post OpenStack Community Contributor Awards now open for nominations appeared first on OpenStack Superuser.

by Superuser at October 13, 2017 11:15 AM

October 12, 2017

OpenStack Superuser

OpenStack delivers services to thousands of Workday customers

Workday, a leader in human resources software-as-a-service (SaaS), has been active in the OpenStack community for many years. You may have read Superuser articles and watched past OpenStack Summit videos by Edgar Magana, senior principal software development engineer at Workday and a long-time member of the OpenStack User Committee. But how much do you know about how and why Workday uses OpenStack for both development and to run production applications and services?

In an in-depth case study, written in conjunction with members of the OpenStack Enterprise Working Group, Magana shares how Workday stays atop technology to bring new innovations to customers while achieving substantial benefits.

The team has:

  • Consolidated five deployment systems into one
  • Automated application and patch deployment across multiple environments, ensuring all users have exactly the same services at all times
  • Increased the ratio of nodes per operator from 500:1 to 10,000:1
  • Reduced the expense of scalability testing

The case study details Workday’s phased migration to OpenStack in every data center, their high availability architecture, CI/CD workflow, current applications and future plans. At the end of 2016, Workday had over 650 servers running OpenStack with more than 50,000 cores of total capacity. By the end of 2018, they will triple their capacity.

By shifting to OpenStack, Workday gains many operational improvements for development, deployment, scalability, usability, onboarding, network isolation, security and automated continuous improvement. The case study elaborates on how each benefit is achieved.

Workday feels strongly that open source is not just about using the software, but also about actively participating in the communities, as detailed by Magana in an article. Briefly, the team’s contributions to OpenStack range from coding and code reviews to governance. Many of their contributions have been focused on fulfilling security requirements for enterprises. Workday engineers speak regularly at the bi-annual OpenStack Summits, sharing their experiences with other users and collaborating on requirements with OpenStack developers, including the upcoming OpenStack Summit, November 6-8. In Sydney, Workday will be speaking on performance and containerizing the control plane.

OpenStack improvements and enhancements translate into cost-savings and significant business benefits that contribute to Workday’s continued growth.


The post OpenStack delivers services to thousands of Workday customers appeared first on OpenStack Superuser.

by Kathy Cacciatore at October 12, 2017 04:25 PM


5 benefits of contributing to open source projects

Open source was once seen as a risky bet for the enterprise. If open source software was used at all it was by small companies, or by larger firms in stealthy pockets by IT and development professionals who saw the value of the model but couldn't "sell" it upstream.

by Edgar Magana at October 12, 2017 07:00 AM

OpenStack Blog - Swapnil Kulkarni

TC Candidacy – Swapnil Kulkarni (coolsvap)


I am Swapnil Kulkarni (coolsvap). I have been an ATC since Icehouse and I wish
to take this opportunity to throw my hat in the ring for election to the
OpenStack Technical Committee this election cycle. I started contributing to
OpenStack after an introduction at a community event, and since then I have
always utilized every opportunity I had to contribute to OpenStack. I am a
core reviewer in the kolla and requirements groups. I have also been active in
efforts to improve overall participation in OpenStack, through meetups,
mentorship, and outreach to educational institutions, to name a few.

My focus of work during my TC term would be to make it easier for people to get
involved in, participate in, and contribute to OpenStack, to build the community.
I have had a few hiccups in the recent past with community engagement and
contribution activities, but my current employment gives me the flexibility
every ATC needs, and I would like to take full advantage of it and increase
my level of contribution.

Please consider this my application and thank you for your consideration.

[1] https://www.openstack.org/community/members/profile/7314/swapnil-kulkarni
[2] http://stackalytics.com/report/users/coolsvap
[3] https://review.openstack.org/510402

by Swapnil Kulkarni at October 12, 2017 12:00 AM

October 11, 2017

OpenStack Superuser

Careers in OpenStack: What employers want

Do you love working with OpenStack? Want to make it your career? You may be a relative newbie or a grizzled veteran, either way, there’s a path for you to land your dream job working with OpenStack. Let’s take a look at what employers want in a prospective OpenStack candidate.

What jobs are out there and what skills are employers looking for?

Today, there are many more jobs calling for OpenStack experience than there are candidates in the talent pool. The good news is that as of October 2017, LinkedIn listed almost 6,000 jobs, Indeed 2,500 jobs and Glassdoor 2,600 jobs with OpenStack as a title or a requirement. Want more good news? The jobs are at all skill levels. According to Glassdoor, OpenStack jobs have an average salary of about $90,000 per year, at a wide range of experience levels.

So, now that we’ve established there’s high demand, what kinds of positions are being advertised? Let’s break down the four most requested positions by common titles, responsibilities and requirements.

OpenStack developer (aka OpenStack engineer)

In the past, this title has been a catch-all for a position where the candidate would be responsible for all aspects of an OpenStack deployment (engineering, operations, infrastructure, onboarding, etc.). This candidate would touch almost every aspect of a production cloud, from planning and deploying to operating the company’s production and development clouds. However, as production OpenStack clouds have now scaled to very large enterprise levels and beyond, employers have started to create more specialized positions and distilled the responsibilities for the OpenStack engineer down to a subset of the original scope.

Today, some commonly advertised responsibilities of the OpenStack Engineer can include:

  • Ownership of internal OpenStack projects to extend or customize OpenStack code to satisfy business requirements
  • Working closely with QA and Support teams for bug triage, fix creation and resolution upstream
  • Working with upstream OpenStack projects to contribute bug reports and any new code back to the OpenStack Foundation

Employers are typically looking to hire someone with intimate knowledge of the OpenStack core projects in detail, down to the code level and the ability to create, modify and upstream bug fixes and enhancements. Some examples of requirements (other than prior OpenStack experience) include:

  • Skills in software design, problem solving, and object-oriented coding skills; familiarity with the OpenStack core projects and the OpenStack Foundation CI system
  • Strengths in coding (Python preferred), data structures, algorithms and designing for performance, scalability, availability, and security
  • Demonstrated experience in one or more static and dynamic languages – Java, Scala and/or C++ / Python, Ruby or Node.js

Based on the requirements above, this type of position is an obvious fit for someone with experience working on one or more OpenStack projects who is familiar with OpenStack development and bug fixing. It can also be a great fit for experienced Python programmers with a DevOps background who are looking to get involved with something new. If you love what you’re doing with OpenStack as a volunteer coder, why not get paid to do it as your job?

OpenStack operator (aka OpenStack operations administrator, OpenStack technical support, etc.)

Operational roles for OpenStack clouds are very similar to operational support roles in legacy infrastructure shops, with the addition of specialized OpenStack skills (to operate and troubleshoot the software).

These specialized skills include:

  • Technical troubleshooting of customer reported software issues with an OpenStack cloud
  • Helping customers with the Horizon interface, operating cloud management platforms and Heat templates
  • Identifying bugs in underlying OpenStack components and collaborating with engineering staff on root cause analysis

This type of position is often advertised at various skill levels. At higher levels, the position may include additional responsibilities like:

  • Reproducing customer-reported bugs in lab environments and using collaboration and reporting tools like JIRA, Confluence and ServiceNow to manage and report on them
  • Assisting engineering staff by reporting bugs, publishing patches and working with the development team to coordinate upstream patch and bug management
  • Providing backline support to customers by interpreting log files and python errors in OpenStack projects

To perform these functions, an employer would be looking for the following skills:

  • Senior level Linux OS proficiency in the flavors that the enterprise provides; knowing RHEL/CentOS, Ubuntu and SUSE well should cover most bases
  • Expert level proficiency in operating an OpenStack cloud via Horizon and/or CLI. The Certified OpenStack Administrator (COA) is a perfect certification and a great starting point for this career
  • Knowledge of some scripting language, the ability to read Python logs and excellent communications skills, since the candidate will be dealing with customers
  • Familiarity with OpenStack’s CI tools may be needed for anyone interested in the higher-level operations positions.

Some of the other prerequisites employers are looking for in an operations employee are a solid understanding of network and distributed computing and basic network concepts like routing, switching and firewalls. Bash/Ansible/Puppet/Chef scripting is always a plus for anyone interested in an operations position; it allows you to automate all the things!

OpenStack site reliability engineer (infrastructure architects/administrators, configuration manager, etc)

While the position of site reliability engineer has been around since 2003, when Google hired a team of seven software engineers to run a production environment, it’s a relatively new career path in the OpenStack realm. The position is about 50 percent traditional ops work such as incidents, on-call and break-fix intervention. The rest of the time, site reliability engineers are tasked with creating scalable and highly reliable software systems. Therefore, anyone considering this OpenStack career would spend about half of their time testing out new OpenStack features, scaling OpenStack and ensuring that the environment is highly reliable at scale.

Some common responsibilities may be:

  • Using DevOps processes to create and automate methods to scale OpenStack compute, control and storage within and across data centers
  • Automating the backup, failover and disaster-recovery procedures of an enterprise OpenStack environment
  • Developing, automating and managing patching procedures for underlying OSes, tools and OpenStack components

As you can see, this position has a very broad set of responsibilities, from infrastructure management to testing new OpenStack features and projects. Thus, the skills needed to perform these functions also span a large set of domains. An SRE would typically come from either a software development or systems administration background with very strong skills in configuration languages and automation. A strong operations background is also very desirable for tasks like high availability, disaster recovery, backups and scaling.

Some common skills requirements include:

  • Expert level Linux OS troubleshooting. Ability to troubleshoot issues with the underlying components of OpenStack when investigating incidents or testing new features and projects
  • Senior to expert level programming ability. Demonstrated ability to use configuration languages like Puppet, Chef, Ansible, Salt, Bash to create automations and manage systems
  • Senior level OpenStack experience. Must know architecture, operations and be able to troubleshoot bugs within OpenStack to achieve root cause analysis

OpenStack architect (cloud architect, cloud infrastructure architect, OpenStack solution architect, etc.)

So far, we’ve listed positions that can deploy, expand, operate and govern OpenStack clouds. What if an employer is looking to begin their OpenStack journey or looking to expand their cloud organically? What if they need someone to marry business strategy challenges with cloud based technical solutions? What kind of position could be hired to design and architect these comprehensive cloud solutions? This is where the OpenStack/Cloud Architect role fits in.

Cloud architects are typically responsible for the some of the following:

  • Leading strategy for cloud adoption, cloud application design (OpenStack / multi-cloud / hybrid), management and operations
  • Using established and new architectures to create tactical plans for cloud deployments using legacy and emerging compute, network and storage options
  • Designing and planning cloud architecture using least-cost, least-risk and most efficient solutions, and being able to communicate them to executive management

Similar to the positions above, this sampling of responsibilities could differ based on the employer’s requirements and how siloed the organization is. Other than having significant cloud experience in OpenStack, experience in one or more of the other cloud platforms is helpful. This is typically a senior level position due to experience being one of the main ingredients employers are looking for.

Some other skills employers are looking for in this type of position are:

  • Expert OpenStack/cloud architecture skills. Understanding not only the technical challenges of cloud, but being able to answer how certain cloud functionality will solve business challenges.
  • Demonstrated currency of cloud knowledge. Not only do architects have to know cloud platforms well, they must also be current on latest features, functionality and maturity of the latest technology in cloud. Experience is very important in this position.
  • Ninja level communications skills. Many times, OpenStack and cloud architects have to present ideas, designs and solutions all the way up to the executive level. Sometimes a candidate might even need to bring a customer up to speed from the very beginning of cloud technology. Trust me, it’s like speaking martian to a dolphin sometimes, but it’s the architect’s job to translate and be understood.

Other related positions – DevOps engineer, CI/CD engineer, cloud software architect

Some positions being offered by employers today may not even involve working on OpenStack itself, but are positions that need to know how to use OpenStack as a tool for infrastructure as code. If your current job requires developing application code in Java, Node.js, Python, Go or any number of other application programming languages used in the cloud today, there’s a good chance that you’ll have to learn how to interact with infrastructure APIs like OpenStack’s. Since half of the Fortune 100 is running OpenStack, it’s a safe bet for your career to skill up on OpenStack technology before your company installs a corporate OpenStack cloud.

As platforms-as-a-service evolve and containers become even more popular, developers working on distributed infrastructure will have to become even more knowledgeable and flexible about how they deploy code. Some of this knowledge involves DevOps and its myriad of tools. Knowing how these tools interact with OpenStack and other cloud platforms is invaluable now and will be increasingly valuable in the future.

Additionally, container technology (Docker, Kubernetes, etc.) shows up in a fair number of the job categories above, so I would be remiss if I didn’t recommend that all job seekers learn the fundamentals of containers and container management platforms.

Hopefully, this overview of what employers are looking for will help you evaluate your tool belt and make sure you’ve equipped yourself with the proper skills for an OpenStack career.

Now get out there and land that OpenStack dream job!

About the author

By day, Ben Silverman is a principal cloud architect for OnX. An international cloud activist, he’s also co-author of “OpenStack for Architects.”  He started his OpenStack career in 2013 by designing and delivering American Express’ first OpenStack environment, worked for Mirantis as a senior architect and has been a contributing member of the OpenStack Documentation team since 2014.

Cover Photo // CC BY NC

The post Careers in OpenStack: What employers want appeared first on OpenStack Superuser.

by Ben Silverman at October 11, 2017 03:47 PM

October 10, 2017

Arrfab's Blog

Using Ansible Openstack modules on CentOS 7

Suppose that you have an RDO/OpenStack cloud already in place, but you'd want to automate some operations: what can you do? On my side, I already mentioned that I used Puppet to deploy initial clouds, but I still prefer Ansible when having to launch ad-hoc tasks, or even change configuration[s]. It's particularly true for our CI environment, where we run "agentless", so all configuration changes happen through Ansible.

The good news is that Ansible already has some modules for OpenStack, but they have some requirements and need a little bit of understanding before you are able to use them.

First of all, all the Ansible os_ modules need "shade" on the host included in the play, which will be responsible for launching all the os_ module tasks. At the time of writing this post, it's not yet available on mirror.centos.org (a review is open, so it will soon be available directly), but you can find the pkg on our CBS builders.

Once installed, a simple os_image task failed immediately, despite the fact that auth: was present, and that's due to a simple reason: Ansible os_ modules still want to use the v2 API, while it now defaults to v3 in the Pike release. There is no way to force Ansible itself to use v3, but as it uses shade behind the scenes, there is a way to force this through os-client-config.

That means that you just have to use a .yaml file (does that sound familiar for Ansible?) that will contain everything you need to know about a specific cloud, and then just declare in Ansible which cloud you're configuring.

That clouds.yaml file can be under $current_directory, ~/.config/openstack or /etc/openstack, so it's up to you to decide where you want to temporarily host it, but I selected /etc/openstack/:

- name: Ensuring we have required pkgs for ansible/openstack
  yum:
    name: python2-shade
    state: installed

- name: Ensuring local directory to hold the os-client-config file
  file:
    path: /etc/openstack
    state: directory
    owner: root
    group: root

- name: Adding clouds.yaml for os-client-config for further actions
  template:
    src: clouds.yaml.j2
    dest: /etc/openstack/clouds.yaml
    owner: root
    group: root
    mode: 0700

Of course, such a clouds.yaml file is itself a Jinja2 template distributed by Ansible to the host in the play before using the os_* modules:

clouds:
  {{ cloud_name }}:
    auth:
      username: admin
      project_name: admin
      password: {{ openstack_admin_pass }}
      auth_url: http://{{ openstack_controller }}:5000/v3/
      user_domain_name: default
      project_domain_name: default
    identity_api_version: 3

You just have to adapt it to your needs (see the doc for this), but the interesting part is identity_api_version, which forces v3.

Then you can use all of that in a simple way through Ansible tasks, in this case adding users to a project:

- name: Configuring OpenStack user[s]
  os_user:
    cloud: "{{ cloud_name }}"
    default_project: "{{ item.0.name }}"
    domain: "{{ item.0.domain_id }}"
    name: "{{ item.1.login }}"
    email: "{{ item.1.email }}"
    password: "{{ item.1.password }}"
  with_subelements:
    - "{{ cloud_projects }}"
    - users
  no_log: True

From a variables point of view, I decided to have a simple structure to host projects/users/roles/quotas like this:

cloud_projects:
  - name: demo
    description: demo project
    domain_id: default
    quota_cores: 20
    quota_instances: 10
    quota_ram: 40960
    users:
      - login: demo_user
        email: demo@centos.org
        password: Ch@ngeM3
        role: admin # can be _member_ or admin
      - login: demo_user2
        email: demo2@centos.org
        password: Ch@ngeMe2

Now that it works, you can explore all the other os_* modules, and I'm already using those to:

  • Import cloud images in glance
  • Create networks and subnets in neutron
  • Create projects/users/roles in keystone
  • Change quotas for those projects
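
As an illustrative sketch of the first two of those follow-up uses (the network and subnet names and the CIDR below are made up; parameters are per the Ansible os_network/os_subnet module docs), a network plus subnet could be created like this:

```yaml
# Illustrative only: create a network and a subnet on the configured cloud,
# reusing the same cloud_name selection as the tasks above
- name: Creating a demo network in neutron
  os_network:
    cloud: "{{ cloud_name }}"
    name: demo-net
    state: present

- name: Creating a subnet on that network
  os_subnet:
    cloud: "{{ cloud_name }}"
    network_name: demo-net
    name: demo-subnet
    cidr: 192.168.100.0/24
    state: present
```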

I'm just discovering how powerful those tools are, so I'll probably discover many more interesting things to do with them later.

by Fabian Arrotin at October 10, 2017 10:00 PM

Chris Dent

TC Report 41

If there's a unifying theme in the mix of discussions that have happened in #openstack-tc this past week, it is power: who has it, what does it really mean, and how to exercise it.

This is probably because it is election season. After a slow start there is now a rather large number of candidates, a very diverse group. It's great to see.

On Wednesday, there were questions about the constellations idea mooted in the TC Vision Statement and the extent to which the TC has power to enforce integration between projects, especially between those considered core and those that are not (in this particular case, Zaqar). The precedent here is that the TC has minimal direct power in such cases (each project is fairly autonomous), whereas individuals, some of whom happen to be on the TC, do have power, by virtue of making specific changes in code. The role of the TC in these kinds of situations is in making ideas and approaches visible (like constellations) and drawing attention to needs (like the top 5 list).

Thursday's discussion provides an interesting counterpoint. There, some potential candidates expressed concern about running because they were interested in maintaining the good things that OpenStack has and had no specific agenda for drastic or dramatic change, while candidates often express what they'd like to change. This desire for stability is probably a good fit, because in some ways the main power of the TC is choosing which projects to let into the club and, in extreme cases, kicking bad projects out. The latter is effectively the nuclear option: since nobody wants to use it, the autonomy of projects is enhanced.

Continuing the rolling segues: on the same day, ttx provided access to the answers to two questions related to "developer satisfaction" that were added to the PTG survey. These aren't the original questions; they were adjusted to be considerably more open-ended than the originals, which were effectively yes-or-no questions. The questions:

  • What is the most important thing we should do to improve the OpenStack software over the next year?
  • What is one change that would make you happier as an OpenStack Upstream developer?

I intend to analyse and group these by theme when I have the time, but just reading them en masse is interesting too. One emerging theme is that some projects are perceived to have too much power.

Which brings us to today's office hours, where the power to say yes or no to a project was discussed again.

First up, Glare. There are a few different (sometimes overlapping) camps:

  • If we can't come up with reasons to not include them, we should include them.
  • If we can't come up with reasons to include them, we should not include them.
  • If they are going to cause difficulties for Glance or the stability of the images API, that's a risk.
  • If the Glare use case is abstract storage of stuff, and that's useful for everyone, why should Glare be an OpenStack project and not a more global or general open source project?

This needs to be resolved soon. It would be easier to figure out if there was already a small and clear use case being addressed by Glare with a clear audience.

Then Mogan, a bare metal compute service. There the camps are:

  • The overlaps with Nova and Ironic, especially at the API-level are a significant problem.
  • The overlaps with Nova and Ironic, especially at the API-level are a significant opportunity.

Straight into the log for more.

Finally, we landed on the topic of whether there's anything the TC can do to help with the extraction of placement from nova.

by Chris Dent at October 10, 2017 08:30 PM

OpenStack Superuser

Strengthening ties: OpenStack, the European Telecommunications Standards Institute and Network Functions Virtualization

Open source software and software development methods have become crucial building blocks in the industry. The APIs and interfaces created in an open environment are often the de facto standards as they get picked up and used by everyone from small start-ups to giant multinationals to reduce cost and increase innovation. At the same time, the demand for standardization is still necessary to ensure the compatibility and interoperability that are crucial to fields like telecommunications.

The advantages of combining the two are obvious, but we’re still in the early stages of strengthening these ties and working more closely together.

As luck would have it, Denver recently hosted both the OpenStack PTG and the ETSI NFV plenary, so we organized a joint workshop to explore more collaboration. Representatives from both groups were interested in gaining a better understanding of the processes and activities of each organization and also delved into technical discussions on topics of common interest.

Mind the gap

ETSI NFV has two specifications (IFA005 and IFA006) that describe the functional requirements of the API of the Virtual Infrastructure Management (VIM) component of the ETSI NFV architectural framework. Because OpenStack is considered the de facto open source VIM alternative, ETSI NFV performed gap analysis research (TST003 or STF530) to find the differences between the specifications and the APIs of the OpenStack Ocata release.

This investigation identified several gaps between the functionality defined by the IFA documents and what’s provided by the OpenStack projects marked with the ‘tc:approved-release’ tag. While the gap analysis also validated the IFA specs, during the workshop we discussed the OpenStack-related missing requirements with developers from the community.

It’s a starting point for further collaboration between the two groups. During the meeting, we discovered that the language of these requirements does not always translate directly into the development arena. A further to-do item is to create user stories for these gaps, offering vital context to ensure that the best solutions are identified for implementation.

ETSI plugtests

The ETSI Centre for Testing and Interoperability (CTI) regularly organizes an NFV Plugtest to test the interoperability of the different components implementing the ETSI NFV specifications, using open source-based and commercial products. ETSI CTI provides a neutral and coordinated environment for collaborative testing, including developing the test plan as an open and continuous process during the preparation phase. Registration is open for the second Plugtest, where OpenStack is one of the supporting open source organizations.

Testing activities

The importance of testing is recognized in standardization bodies as well; ETSI NFV has a dedicated team, called TST, formed to support the testing of the different NFV functional blocks, analyze the differences between NFV specifications and different open source software, and investigate the effects of different phenomena, like DevOps, on the ETSI NFV architecture. Representatives of this team also attended the joint workshop in Denver.

There are several areas for collaboration; the groups are currently discussing interoperability testing and considering looking into resiliency and robustness testing in the future as well. The relevant TST documents that describe the areas of common interest are: TST004 – Path implementation testing, TST005 – VNF Snapshot report, TST007 – Guidelines for interoperability, TST003 – Open source components.


Glare is a new project intended to provide a generic artifact repository implementation. As presented by the PTL, Mikhail Fedosin, this functionality can be valuable as a VNF artifact repository component of the NFV Orchestrator (NFVO) in the ETSI NFV architectural framework. To investigate the implementation possibilities, the project team will work together with members of ETSI NFV to clarify the functionality and technical details going forward.

Get involved

The afternoon together was so busy and successful that we skipped breaks in order to finish the ambitious agenda. Gergely Csatari, who is driving the ETSI NFV efforts, is also involved in several OpenStack projects, including OpenStack Manuals and Training Guides. He’ll be working with developers in the community to finalize the gap analysis and move items into the implementation phase where needed.

This collaboration will continue in the tools and communication channels of the two organizations as an open activity. The best way to get involved: follow the discussions on the OpenStack Developer mailing list and the IRC channels of related projects. You can also reach out to Gergely Csatari (email: gergely.csatari@nokia.com, IRC: csatari) or Ildiko Vancsa (email: ildiko@openstack.org, IRC: ildikov) for more information.

The post Strengthening ties: OpenStack, the European Telecommunications Standards Institute and Network Functions Virtualization appeared first on OpenStack Superuser.

by Ildiko Vancsa at October 10, 2017 02:02 PM


6 new guides and how-tos for OpenStack

Enjoy these great resources for keeping up with what's going on in the world of OpenStack.

by Jason Baker at October 10, 2017 07:00 AM


City Network shifts into high gear with hastexo acquisition

Vienna, Austria and Karlskrona, Sweden (joint press release)

October 10, 2017

City Network, a leading provider of infrastructure-as-a-service (IaaS) based on OpenStack, today announced the acquisition of hastexo, an independent professional services organization specializing in open source solutions for cloud computing, distributed storage, and self-paced on-line training. The move is a direct continuation of City Network’s strategy to build and strengthen its services and education portfolio around OpenStack, Ceph, containers, high availability, and infrastructure.

Based in Austria with consultants in India and Brazil, the highly experienced hastexo team and their open source training platform will be integrated into City Network and its City Cloud IaaS operation. This enables City Network customers to get a head start on deploying open and flexible IT infrastructure, and to take advantage of the hands-on training platform for rapid upskilling and efficient technology adoption.

“Today, speed and agility in innovation and deployments are key priorities for almost all organizations, and to enable this, development and operations need to work closely together to automate the process and ensure quality,” said Johan Christenson, CEO and founder of City Network. “To be successful in this process requires an open, flexible and scalable IT infrastructure. With hastexo’s dedicated training platform and professional services, we will further enhance our capabilities in helping organizations make this transformation.”

Through professional services and its training platform, hastexo is heavily involved in the OpenStack, Ceph, and Open edX communities. In 2015, the company launched the world's first self-guided interactive training platform for scalable, distributed IT systems, hastexo Academy (built on top of Open edX and OpenStack). hastexo Academy redefines professional education by allowing learners to master arbitrarily complex technology in realistic live environments, any time, any place, and without the need for complex hardware or instructors.

hastexo’s professional training platform will be a key part of City Network’s IaaS offering, City Cloud, with a particular focus on training and upskilling of its cloud operators and developers. Furthermore, City Network will meet private and public cloud customers’ educational demands with customized training services, in direct continuation of the services hastexo has been providing, with great success, to customers across the globe.

“My team and I are all multi-year contributors to the OpenStack, Ceph, and Open edX communities, and us becoming part of City Network means that as a global organization, we can help those communities grow even faster” said hastexo CEO Florian Haas, who is joining City Network’s executive team as VP of Professional Services and Education. “We’re thrilled to join the City Network organization and to help more companies leverage the advantages of an open infrastructure.”

Additional information on the acquisition is available in a blog post by hastexo founder Florian Haas.

City Network Founder and CEO Johan Christenson, and hastexo founder Florian Haas


About City Network

City Network is a leading European provider of IT infrastructure services.

The company provides public, private and hybrid cloud solutions based on OpenStack from more than 20 data centers around the world. Through its industry-specific IaaS City Cloud, it can ensure that customers comply with demands originating from specific laws and regulations concerning auditing, reputability, data handling and data security, such as Basel and Solvency. City Network is certified according to ISO 9001, 14001, 27001, 27015 and 27018 – internationally recognized standards for quality, sustainability and information security.

About hastexo

hastexo is an independent professional services organization specializing in open source solutions for cloud computing, distributed storage, and self-paced on-line training. The company is heavily involved in the OpenStack, Ceph, and Open edX communities and offers its services world-wide. hastexo currently serves customers on five continents, and continuously expands its customer base. hastexo’s web site is www.hastexo.com, and its self-paced learning courses are offered from academy.hastexo.com.

by hastexo at October 10, 2017 12:00 AM

October 09, 2017


Project Teams Gathering interviews

Several weeks ago I attended the Project Teams Gathering (PTG) in Denver, and conducted a number of interviews with project teams and a few of the PTLs (Project Technical Leads).

These interviews are now all up on the RDO YouTube channel. Please subscribe, as I'll be doing more interviews like this at OpenStack Summit in Sydney, as well as at future events.

I want to draw particular attention to my interview with the Swift crew about how they collaborate across company lines and across timezones. Very inspiring.

Watch all the videos now.

by Rich Bowen at October 09, 2017 06:09 PM

OpenStack Superuser

The five most common OpenStack questions, answered

At Loom Systems, we receive a continuous stream of questions about OpenStack and OpenStack monitoring, and we take the extra step of categorizing them. That’s given us a wealth of information about both the common and not-so-common issues that pop up. Since our goal is to be helpful and give back to both the OpenStack community specifically and the IT industry at large, we’ve put together general answers to the five most common OpenStack questions we receive. If you have a follow-up question, feel free to share it in the comments section and, of course, keep sending us your questions at support@loomsystems.com.

How can I find out which version of OpenStack I have installed?

We get this question frequently, and since the answer affects your entire OpenStack environment, it’s important to get it right. Here’s a quick way to find out:

  • SSH to your OpenStack hosts
  • Run openstack --version

If you want to know which version of a specific service you have installed, the approach is similar:

  • SSH to your OpenStack hosts
  • Run nova-manage --version
  • Run cinder-manage --version
  • Run glance-manage --version

Add it to a post-it or an ongoing cheatsheet. It’ll come in handy again and again.
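If you run the check across several services, a small script saves typing. The sketch below is hypothetical: the version strings are hard-coded stand-ins for real output from commands like `nova-manage --version 2>&1`, since formats vary slightly between releases.

```shell
# Hypothetical sketch: collect and report versions per service.
# The quoted lines stand in for real `*-manage --version` output.
for line in "nova 14.0.10" "cinder 9.1.4" "glance 13.0.0"; do
    svc="${line%% *}"     # service name: text before the first space
    ver="${line##* }"     # version: text after the last space
    echo "$svc is at version $ver"
done
```

In a real environment you would replace the quoted stand-ins with command substitutions over the actual `*-manage` tools.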

How do I start/stop OpenStack services manually through the command line?

This is another important one to add to your cheatsheet.

  • SSH to your OpenStack hosts
  • List all your OpenStack services by running systemctl
  • Run systemctl start SERVICE_NAME or systemctl stop SERVICE_NAME

You can use tab completion to finish service names in case you don’t remember them in full. Auto-complete is your friend. It could look like this:

  • systemctl stop openstack-glance-api
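When several services need restarting at once, a loop helps. This is only a sketch: the `openstack-` unit-name prefix is an assumption based on RDO-style packaging and may differ on your distribution, and the snippet prints the commands rather than running them so you can review first.

```shell
# Hypothetical helper: print (not run) restart commands for a few
# services. The "openstack-" prefix follows RDO's systemd unit naming
# and is an assumption; adjust for your distribution.
services="glance-api glance-registry nova-api"
for svc in $services; do
    echo "systemctl restart openstack-${svc}"
done
```

Pipe the reviewed output to a shell (as root) when you are happy with it.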

How do I manually configure a firewall to permit OpenStack service traffic?

On deployments that have restrictive firewalls in place, you may need to configure a firewall manually to permit OpenStack service traffic. Here’s a working list of default ports that OpenStack services respond to:

  • Block Storage (cinder): 8776 (publicurl and adminurl)
  • Compute (nova) endpoints: 8774 (publicurl and adminurl)
  • Compute API (nova-api): 8773, 8775
  • Compute ports for access to virtual machine consoles: 5900-5999
  • Compute VNC proxy for browsers (openstack-nova-novncproxy): 6080
  • Compute VNC proxy for traditional VNC clients (openstack-nova-xvpvncproxy): 6081
  • Identity service (keystone) administrative endpoint: 35357 (adminurl)
  • Identity service public endpoint: 5000 (publicurl)
  • Image Service (glance) API: 9292 (publicurl and adminurl)
  • Image Service registry: 9191
  • Networking (neutron): 9696 (publicurl and adminurl)
  • Object Storage (swift): 6000, 6001, 6002
  • Orchestration (heat) endpoint: 8004 (publicurl and adminurl)
  • Telemetry (ceilometer): 8777 (publicurl and adminurl)
  • HTTP: 80 (OpenStack dashboard, Horizon, when it is not configured to use secure access)
  • HTTP alternate: 8080 (OpenStack Object Storage, swift)
  • HTTPS: 443 (any OpenStack service that is enabled for SSL, especially the secure-access dashboard)
  • rsync: 873 (OpenStack Object Storage; required)
  • iSCSI target: 3260 (OpenStack Block Storage; required)
  • MySQL database service: 3306 (most OpenStack components)
  • Message broker (AMQP traffic): 5672 (OpenStack Block Storage, Networking, Orchestration, and Compute)
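On a CentOS/RHEL host running firewalld, opening API ports can be scripted. A minimal sketch follows; it only prints the firewall-cmd invocations so they can be reviewed before being run as root, and the three ports chosen (9292 for glance-api, 8774 for nova, 8776 for cinder) are the usual defaults, which you should verify against your own deployment.

```shell
# Hypothetical sketch: print firewalld commands to open a few common
# OpenStack API ports. Run the printed commands as root, then run
# "firewall-cmd --reload" to apply them.
for port in 9292 8774 8776; do
    echo "firewall-cmd --permanent --add-port=${port}/tcp"
done
```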

How do I properly reboot a machine running DevStack?

I frequently hear this: “I couldn’t find ./rejoin-stack.sh. How can I just reboot the server and bring it all back up?” I’ve gone through this a lot, and here’s the answer. Because DevStack was never meant to run a cloud or to support restoring a running stack after a reboot, rejoin-stack.sh was removed. Instead, you will need to run stack.sh again and create a new cloud. Remember to put the things you need (like your public key) into local.sh so they are available in the next deployment. And if you do need to run a real cloud and were relying on DevStack, please investigate one of the mainstream alternatives that are designed and tested for cloud operation.

How do I delete a Cinder volume stuck in “error_deleting”?

A frequent issue with Cinder volumes is failing to delete them. If you ran cinder delete $volume_id and got an “error_deleting” response, here’s what to do.

  1. Get the volume UUID by running the following command:
    [root@rdo-vm-2 devops]# cinder list
  2. Check the volume status and try resetting it. If it shows “error_deleting” or “detaching”, you can reset the state of the volume with:
    [root@rdo-vm-2 devops]# cinder reset-state --state available $volume_uuid
  3. If that also fails, log in to MySQL and switch to the Cinder database:
    mysql> use cinder;
  4. The following query sets the volume state back to “available”:
    mysql> update volumes set attach_status='detached', status='available' where id='$volume_uuid';
  5. If the above workflow does not help, the query below marks the volume as deleted:
    mysql> update volumes set deleted=1, status='deleted', deleted_at=now(), updated_at=now() where deleted=0 and id='$volume_uuid';
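Hand-typing SQL against the Cinder database is risky, so it can help to generate the statement from the UUID first and review it before piping it to mysql. A minimal sketch, where the UUID is a placeholder:

```shell
# Hypothetical sketch: build the step-4 cleanup SQL from a volume UUID
# so it can be eyeballed before being fed to "mysql cinder".
# The UUID below is a placeholder, not a real volume.
volume_uuid="3f0e8b1c-1234-5678-9abc-def012345678"
sql="update volumes set attach_status='detached', status='available' where id='${volume_uuid}';"
echo "$sql"
# After reviewing:  echo "$sql" | mysql cinder
```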

I hope that helps with some of the OpenStack issues you might frequently be facing. If not, send me an email at aviv@loomsystems.com. We’re big believers that OpenStack doesn’t have to be such a challenge, and I’d like to show you how.

Aviv Lichtigstein is the head of product evangelism at Loom Systems. This post first appeared on Loom System’s blog. Superuser is always interested in community content, get in touch at editorATopenstack.org.


The post The five most common OpenStack questions, answered appeared first on OpenStack Superuser.

by Aviv Lichtigstein at October 09, 2017 03:46 PM

James Page

Ubuntu OpenStack Dev Summary – 9th October 2017

Welcome to the seventh Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

OpenStack Distribution

Stable Releases

Current in-flight SRUs for OpenStack-related packages:

Ceph 10.2.9 point release

Ocata Stable Point Releases

Pike Stable Point Releases

Horizon Newton->Ocata upgrade fixes

Recently released SRUs for OpenStack-related packages:

Newton Stable Point Releases

Development Release

OpenStack Pike was released in August and is installable on Ubuntu 16.04 LTS using the Ubuntu Cloud Archive:

sudo add-apt-repository cloud-archive:pike

OpenStack Pike also forms part of the Ubuntu 17.10 release later this month; final charm testing is underway in preparation for full Artful support for the charm release in November.

We’ll be opening the Ubuntu Cloud Archive for OpenStack Queens in the next two weeks; the first uploads will be the first Queens milestones, which will coincide nicely with the opening of the next Ubuntu development release (which will become Ubuntu 18.04 LTS).

OpenStack Snaps

The main focus in the last few weeks has been on testing the gnocchi snap, which is currently installable from the edge channel:

sudo snap install --edge gnocchi

The gnocchi snap provides the gnocchi-api (nginx/uwsgi deployed) and gnocchi-metricd services. Due to some incompatibilities between gnocchi, cradox and python-rados, the snap is currently based on the 3.1.11 release; hopefully we’ll work through the issues with the 4.0.x release in the next week or so, as well as set up multiple tracks for this snap so you can consume a version known to be compatible with a specific OpenStack release.

Nova LXD

The team is currently planning work for the Queens development cycle; pylxd has received a couple of new features – specifically support for storage pools as provided in newer LXD versions, and streaming of image uploads to LXD which greatly reduces the memory footprint of client applications during uploads.

OpenStack Charms

Queens Planning

Out of the recent Queens PTG, we have a number of feature specs landed in the charms specification repository. There are a few more in the review queue; if you’re interested in the plans for the Queens release of the charms next year, this is a great place to get a preview and give the team feedback on the features planned for development.

Deployment Guide

The first version of the new Charm Deployment Guide has now been published to the OpenStack Docs website; we have a small piece of follow-up work to complete to ensure it’s published alongside the other deployment project guides, but that should wrap up in the next few days. Please give the guide a spin and log any bugs you find!


Over the last few weeks there has been an increased level of focus on the current bug triage queue for the charms: from a peak of 600 open bugs two weeks ago, with around 100 pending triage, we’ve closed out 70 bugs and brought the triage queue down to a much more manageable level. The recently introduced bug-triage rota has helped with this effort and should ensure we keep on top of incoming bugs in the future.


In the run-up to the August charm release, a number of test scenarios that previously required manual execution were automated as part of the release testing activity. This automation reduces the effort needed to produce a release and means that the majority of test scenarios can be run on a regular basis. As a result, we’re moving back to a three-month release cycle; the next charm release will land towards the end of November, after the OpenStack Summit in Sydney.

IRC (and meetings)

As always, you can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.



by JavaCruft at October 09, 2017 10:44 AM

OpenStack Blog

User Group Newsletter – September 2017

Sydney Summit news

Let’s get excited! The Sydney Summit is getting close, and is less than 30 days away!!

Get all your important information in this Summit guide. It includes suggestions about where to stay, featured speakers, a Summit timeline and much more.

The schedule is LIVE! Plan it via the website or on the go with the Summit app. Stuck on where to start with your schedule? See the best of what the Sydney Summit has to offer with this Superuser article.


*All* non-Australian residents will need a visa to travel to Australia (including United States citizens). Click here for more information


The Forum is where users and developers gather to brainstorm the requirements for the next release, gather feedback on the past version and have strategic discussions that go beyond just one release cycle.

The Forum schedule brainstorming is well underway! Check out the link below for key dates.

You can read more about the Forum here.

#HacktheStack – Cloud App Hackathon

Join us for Australia’s first OpenStack Application Hackathon Nov 3-5, 2017 at Doltone House in Australia’s Technology park, the weekend prior to the OpenStack Summit Sydney.

This 3-day event is organized by the local OpenStack community and welcomes students and professionals to hack the stack using the most popular open infrastructure platforms and application orchestration tools such as OpenShift (Kubernetes and Docker container orchestrator), Cloudify (TOSCA and Docker app orchestrator) and Agave (Science-as-a-Service gateway), in addition to the premier Open Source Infrastructure-as-a-Service: OpenStack!

There are great opportunities to get involved! You can sign up as a participant or share your expertise as a mentor. There are also some fantastic sponsorship opportunities available.

Click for more information here


2018 Summit news – Save the dates!

Back by popular demand, Vancouver is our first Summit destination in 2018. Mark your calendar for May 21-24, 2018.

Our second summit for 2018 will be heading to Berlin! Save the dates for November 13-15th!

New User Groups

We welcome our newest User Groups!

Looking for your local user group or want to start one in your area? Head to the groups portal.


Did you miss OpenDev? Read a great event summary here.

Catch up on all the talks with the event videos here.


OpenStack Days  

OpenStack Days bring together hundreds of IT executives, cloud operators and technology providers to discuss cloud computing and learn about OpenStack. The regional events are organized and hosted annually by local OpenStack user groups and companies in the ecosystem, and are often one or two-day events with keynotes, breakout sessions and even workshops. It’s a great opportunity to hear directly from prominent OpenStack leaders, learn from user stories, network and get plugged into your local community.

See when and where the upcoming OpenStack Days are happening.


OpenStack Marketing Portal

There is some fantastic OpenStack Foundation content available on the Marketing Portal.

This includes materials like:

  • OpenStack 101 slide deck
  • 2017 OpenStack Highlights & Stats presentation
  • Collateral for events (Sticker and T-Shirt designs)

Latest from Superuser

How to install the OpenStack Horizon Dashboard

How to get more involved in OpenStack

How to make your Summit talk a big success

How to deliver network services at the edge

Kickstart your OpenStack skills with an Outreachy Internship


Have you got a story for Superuser? Write to editor[at]openstack.org.


On a sad note…a farewell

Last week, an extremely valued member of our community, Tom Fifield, announced his departure as Community Manager from the OpenStack Foundation.

Tom, we thank you for your amazing, industrious efforts over the last five years! Your work has been central to building the healthy community we have today, spanning more than 160 countries, where users and developers collaborate to make clouds better for the work that matters.

Thank you Tom!!

Read his full announcement here


Contributing to the User Group Newsletter.

If you’d like to contribute a news item for next edition, please submit to this etherpad.

Items submitted may be edited down for length, style and suitability.

by Sonia Ramza at October 09, 2017 03:11 AM

October 07, 2017


Australia’s first OpenStack Cloud Application Hackathon is coming!

OpenStack Application Hackathon - 1 month to go

Join us for Australia’s first OpenStack Application Hackathon Nov 3-5, 2017 at Doltone House in Australia’s Technology park, the weekend prior to the OpenStack Summit Sydney.

This 3-day event is organised by the local OpenStack community and welcomes students and professionals to hack the stack using the most popular open infrastructure platforms and application orchestration tools.

Whether you’re inspired by Internet of Things, scientific research challenges, or have a creative idea to bring to market, knowing the latest tech is how you’ll get there. Hackathon teams will pick a theme–Edge computing, Scientific Research, or Market Viability–then pick one of three platforms to use as you build your app:

  • Agave, a “science-as-a-service” platform for research and big data

  • Cloudify, an orchestration framework that lets you put your apps on the cloud

  • OpenShift, a container management tool for using Docker containers

All projects will sit atop OpenStack, an open source cloud platform. You can learn more about OpenStack at the upcoming OpenStack Summit in Sydney, November 6-8.

Hackers are invited to join as individuals or as part of a team. Bring your tent/sleeping bag and camp out to take in the full Hackathon experience! All qualified mentors and participants who take part for the duration of the Hackathon will receive a free pass to the Sydney Summit. There are great prizes (including free tickets to the Vancouver OpenStack Summit for the winning team), all the food and coffee you need to fuel your hack, as well as pre-training and mentor support from renowned technology companies around the world.

Sign up now and get ready to compete for some big prizes!

The post Australia’s first OpenStack Cloud Application Hackathon is coming! appeared first on Aptira Cloud Solutions.

by Aptira at October 07, 2017 09:48 AM

October 06, 2017

OpenStack Blog

“Dear Boss, I want to attend OpenStack Summit Sydney”

Want to attend the OpenStack Summit Sydney but need help with the right words for getting your trip approved? While we won’t write the whole thing for you, here’s a template to get you going. It’s up to you to decide how the Summit will help your team, but with free workshops and trainings, technical sessions, strategy talks and the opportunity to meet thousands of likeminded Stackers, we don’t think you’ll have a hard time finding an answer.


Dear [Boss],

I would like to attend the OpenStack Summit in Sydney, November 6-8, 2017. The OpenStack Summit is the largest conference for the open source cloud platform OpenStack, and the only one where I can get free OpenStack training, learn how to contribute code upstream to the project, and meet with other users to learn how they’ve been using OpenStack in production. The Summit is an opportunity for me to bring back knowledge about [Why you want to attend! What are you hoping to learn? What would benefit your team?] and share it with our team, while helping us get to know similar OpenStack-minded teams from around the world.

Companies like Commonwealth Bank, Tencent and American Airlines will be presenting, and technical sessions will demonstrate how teams are integrating other open source projects, like Kubernetes with OpenStack, to optimize their infrastructure. I’ll also be able to give Project Teams feedback about OpenStack so our user needs can be incorporated into upcoming software releases.

You can browse past Summit content at openstack.org/videos to see a sample of the conference talks.

The OpenStack Summit is the opportunity for me to expand my OpenStack knowledge, network and skills. Thanks for considering my request.

[Your Name]


Learn more about the Summit and register at openstack.org/summit/sydney-2017/

by Anne Bertucio at October 06, 2017 02:44 PM

OpenStack Superuser

How to install the OpenStack Horizon Dashboard

Horizon is one of the most popular ways to interact with OpenStack.

LearnIT Guide recently published a tutorial (text-based, plus the 12-minute video below) on how to set up OpenStack’s dashboard. Horizon provides a web-based user interface to OpenStack services including Nova, Swift and Keystone. The tutorial requires CentOS 7.1 (64-bit), the OpenStack Liberty repositories and packages, as well as an installed and configured MariaDB.

The OpenStack Foundation also provides this Quick Start Guide to Horizon, tested for Horizon on Ubuntu 16.04 (64-bit) and RPM-based (RHEL 7.x) distributions. Also, if you’re looking for other beginner resources — documentation, setting up a dev environment — check out the Foundation’s “How to get started” section.

LearnIT Guide also offers other free tutorials on OpenStack including how to configure Neutron and Nova.

Superuser is always looking for community resources and how-tos, get in touch: editorATopenstack.org

Cover Photo // CC BY NC

The post How to install the OpenStack Horizon Dashboard appeared first on OpenStack Superuser.

by Superuser at October 06, 2017 02:33 PM

October 05, 2017

Julien Danjou

My interview with Cool Python Codes

A few days ago, I was contacted by Godson Rapture from Cool Python Codes to answer a few questions about what I work on in open source. Godson regularly interviews developers, and I invite you to check out his website!

Here's a copy of my original interview. Enjoy!

Good day, Julien Danjou, welcome to Cool Python Codes. Thanks for taking your precious time to be here.

You’re welcome!

Could you kindly tell us about yourself like your full name, hobbies, nationality, education, and experience in programming?

Sure. I’m Julien Danjou, I’m French and live in Paris, France. I studied computer science for 5 years, around 15 years ago, and have continued my career in that field since then, specializing in open source projects.

For the last few years, I’ve been working as a software engineer at Red Hat, and I’ve spent the last 10 years working with the Python programming language. Now I work on the Gnocchi project, which is a time series database.

When I’m not coding, I enjoy running half-marathons and playing FPS games.

Can you narrate your first programming experience and what got you to start learning to program?

I started programming around 2001, and my first serious programs were in Perl. I was contributing to a hosting platform for free software named VHFFS. It was a free software project itself, and I enjoyed being able to learn from other more experienced developers and being able to contribute back to it. That’s what got me stuck into that world of open source projects.

Which programming language do you know and which is your favorite?

I know quite a few, I’ve been doing serious programming in Perl, C, Lua, Common Lisp, Emacs Lisp and Python.

Obviously, my favorite is Common Lisp, but I was never able to use it for any serious project, for various reasons. So I spend most of my time hacking with Python, which I really enjoy as it is close to Lisp, in some ways. I see it as a small subset of Lisp.

What inspired you to venture into the world of programming and drove you to learn a handful of programming languages?

It was mostly scratching my own itches when I started. Each time I saw something I wanted to do or a feature I wanted in an existing software, I learned what I needed to get going and get it working.

I studied C and Lua while writing awesome, the window manager that I created 10 years ago and used for a while. I learned Emacs Lisp while writing extensions that I wanted to see in Emacs, etc. It’s the best way to start.

What is your blog about?

My blog is a platform where I write about what I work on most of the time. Nowadays, it’s mostly about Python and the main project I contribute to, Gnocchi.

When writing about Gnocchi, I usually try to explain what part of the project I worked on, what new features we achieved, etc.

On Python, I try to share solutions to common problems I encountered or identified while doing e.g. code reviews. Or presenting a new library I created!

Tell us more about your book, The Hacker’s Guide to Python.

It’s a compilation of everything I learned those last years building large Python applications. I spent the last 6 years developing on a large code base with thousands of other developers.

I’ve reviewed tons of code and identified the biggest issues, mistakes, and bad practices that developers tend to have. I decided to compile that into a guide, helping developers who have played a bit with Python learn the steps to become really productive with it.

OpenStack is the biggest open source project in Python. Can you tell us more about OpenStack?

OpenStack is a cloud computing platform, started 7 years ago now. Its goal is to provide a programmatic platform to manage your infrastructure while being open source and avoiding vendor lock-in.

Who uses OpenStack? Is it for programmers, website owners?

It’s used by a lot of different organizations – not really by individuals. It’s a big piece of software. You can find it at some famous public cloud providers (Dreamhost, Rackspace…), and also as a private cloud in a lot of different organizations, from Bloomberg to eBay or CERN in Switzerland, a big OpenStack user. Tons of telecom providers also leverage OpenStack for their own internal infrastructure.

Have you participated in any OpenStack conference? What did you speak on if you did?

I’ve attended the last 9 OpenStack summits and a few other OpenStack events around the world. I’ve been engaged in the upstream community for the last 6 years now.

My area of expertise is telemetry, the stack of software that is in charge of collecting and storing metrics from the various OpenStack components. This is what I regularly talk about during those events.

How can one join the OpenStack community?

There’s entire documentation about that, called the Developer’s Guide. It explains how to set up your environment to send patches and how to join the community using the mailing lists or IRC.

What makes your book, The Hacker’s Guide to Python stand out from other Python books? Also, who exactly did you write this book for?

I wrote the book that I always wanted to read about Python, but never found. It’s not a book for people who want to learn Python from scratch. It’s a great guide for those who know the language but don’t know the details that experienced developers know and that make the difference: the best practices, the elegant solutions to common problems, etc. That’s why it also includes interviews with prominent Python developers, so they can share their advice on different areas.

How can someone get your book?

I’ve decided to self-publish my book, so it does not have a publisher like you might be used to seeing. The best place to get it is online, where you can pick the format you want, electronic or paper.

What do you mean when you say you hack with Python?

Unfortunately, most people refer to hacking as the activity of bad guys trying to get access to whatever they’re not supposed to see. In the book title, I mean “hacking” as the elegant way of writing code and making things work smoothly, even in situations you did not anticipate.

You mentioned earlier that Gnocchi is a time series database. Can you please be more elaborate about Gnocchi? Is there also any documentation about Gnocchi?

So Gnocchi is a project I started a few years ago to store time series at large scale. Time series are basically series of tuples, each composed of a timestamp and a value.

Imagine you wanted to store the temperature of all the rooms of the world at any point of time. You’d need a dedicated database for that with the right data structure. This is what Gnocchi does: it provides this data structure storage at very, very large scale.

The primary use case is infrastructure monitoring, so most people use it to store tons of metrics about their hardware, software, etc. It’s fully documented on its website.
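The timestamp/value structure he describes can be sketched in a few lines. This is a toy illustration of the data structure only – `TimeSeries`, `insert`, and `window` are my own names, not Gnocchi's API; Gnocchi additionally aggregates points according to archive policies and scales across storage backends:

```python
import bisect

# A toy time series store: one always-sorted list of (timestamp, value)
# tuples per metric, with efficient range queries via binary search.
class TimeSeries:
    def __init__(self):
        self.points = []  # kept sorted by timestamp

    def insert(self, timestamp, value):
        bisect.insort(self.points, (timestamp, value))

    def window(self, start, stop):
        # All points with start <= timestamp <= stop.
        lo = bisect.bisect_left(self.points, (start,))
        hi = bisect.bisect_right(self.points, (stop, float("inf")))
        return self.points[lo:hi]

ts = TimeSeries()
for t, temp in [(1.0, 20.5), (3.0, 21.0), (2.0, 20.7)]:
    ts.insert(t, temp)
assert ts.window(1.5, 3.0) == [(2.0, 20.7), (3.0, 21.0)]
```

The "temperature of all the rooms of the world" example would simply be one such series per room, which is exactly why a dedicated storage layout matters at that scale.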

How can a programmer without much experience contribute to open source projects?

The best way to start is to try to fix something that irritates you in some way. It might be a bug, it might be a missing feature. Start small. Don’t try big things first or you could be discouraged.

Never stop.

Also, don’t plunge right away into the community and start poking random people or spamming them with questions. Do your homework, and listen to the community for a while to get a sense of how things are going. That can mean joining IRC and lurking, or following the mailing lists, for example.

Big open source communities have dedicated programs to help you become engaged. It might be worth a try. Generic programs like Outreachy or Google Summer of Code are a great way to start if you don’t feel confident enough to jump into a community by your own means.

Just out of curiosity, do you write code in French?

Never ever. I think it’s acceptable to write in your language if you are sure that your code will never be open sourced and that your whole team is talking in that language, no matter what – but it’s a ballsy assumption, clearly.

Truth is that if you do open source, English is the standard, so go with it. Be sad if you want, but please be pragmatic.

I’ve seen projects open sourced by companies where all the source code comments were in Korean. It was impossible for any non-Korean person to get a sense of what the code and the project were doing, so they just failed and disappeared.

How does a team of programmers handle bugs in a large open source project?

I wish there was some magic recipe, but I don’t think there is one. What you want is to have a place where your users can feel safe reporting bugs. Include a template so they don’t forget any details: how to reproduce the bug, what they expected, etc. The worst thing is to have users reporting “That does not work.” with no details. It’s a waste of time.


What tool to use to log all of that really depends on the team size and culture.

Once that works, the actual fixing of bugs doesn’t follow any rule. Most developers fix the bugs they encounter or the ones that are most critical for users. Smaller problems might not be fixed for a long time.

Can you tell us about the new book you are working on and when do we expect to get it?

That new book is entitled “Scaling Python” and it provides insight into how to build highly scalable, distributed applications using Python.

It is also based on my experience building this kind of software over the past years. The book also includes interviews with great Python hackers who work on scalable systems or know a thing or two about writing applications for performance – an important ingredient of scalable applications.

The book is in its final stage now, and it should be out at the beginning of 2018.

How can someone get in contact with you?

I’m reachable at julien@danjou.info by email or via Twitter, @juldanjou.

by Julien Danjou at October 05, 2017 07:39 PM


Is your own Equifax crisis hiding in your infrastructure?

Anybody who works in enterprise IT can tell you that even when you know about urgent updates, once your infrastructure reaches a certain degree of complexity, knowing can be easier than doing.

by Nick Chase at October 05, 2017 06:53 PM

OpenStack Superuser

How to get more involved with OpenStack

John Garbutt knows a lot about what it takes to be a vital member of the OpenStack community. His involvement stretches back to late 2010, he’s been a project team leader (PTL) for Nova (and survived the endeavor more than once!) and is currently a member of nova-core, nova-specs-core and nova-drivers.

He now works as principal engineer focused on the convergence of OpenStack and high-powered computing at StackHPC, in addition to playing the tuba as his Twitter handle suggests. A frequent speaker at OpenStack events, he recently gave back to the community at OpenStack Days UK by sharing his thoughts on what you can do to get more involved.

As a way to answer common questions about how to commit upstream, he shared his experience of working on bugs at the OpenStack Innovation Center (OSIC) and offered the following ways to get more involved:

● Come meet the OpenStack community
● Fix that confusing bit in the docs
● Submit that bug report
● Upload that little bug fix
● Help with reproduction, logs and testing fixes
● Review a Specification that interests you

You can check out the whole slide deck here.


The post How to get more involved with OpenStack appeared first on OpenStack Superuser.

by Superuser at October 05, 2017 02:47 PM

Red Hat Stack

Using Red Hat OpenStack Platform director to deploy co-located Ceph storage – Part Two

Previously we learned all about the benefits in placing Ceph storage services directly on compute nodes in a co-located fashion. This time, we dive deep into the deployment templates to see how an actual deployment comes together and then test the results!

Enabling Co-Location

This article assumes the director is installed and configured with nodes already registered. The default Heat deployment templates ship with an environment file for enabling Pure HCI. This environment file is:

/usr/share/openstack-tripleo-heat-templates/environments/hyperconverged-ceph.yaml

This file does two things:

  1. It redefines the composable service list for the Compute role to include both Compute and Ceph Storage services. The parameter that stores this list is ComputeServices.

  2. It enables a port on the Storage Management network for Compute nodes using the OS::TripleO::Compute::Ports::StorageMgmtPort resource. The default network isolation disables this port for standard Compute nodes. For our scenario, we must enable this port and its network so the Ceph services can communicate. If you are not using network isolation, you can leave the resource set to None to keep the port disabled.

Updating Network Templates

As mentioned, the Compute nodes need to be attached to the Storage Management network so Red Hat Ceph Storage can access the OSDs on them. This is not usually required in a standard deployment. To ensure the Compute node receives an IP address on the Storage Management network, you need to modify the NIC templates for your Compute node to include it. As a basic example, the following snippet adds the Storage Management network to the Compute node via an OVS bridge supporting multiple VLANs:

    - type: ovs_bridge
      name: br-vlans
      use_dhcp: false
      members:
        - type: interface
          name: nic3
          primary: false
        - type: vlan
          vlan_id:
            get_param: InternalApiNetworkVlanID
          addresses:
            - ip_netmask:
                get_param: InternalApiIpSubnet
        - type: vlan
          vlan_id:
            get_param: StorageNetworkVlanID
          addresses:
            - ip_netmask:
                get_param: StorageIpSubnet
        - type: vlan
          vlan_id:
            get_param: StorageMgmtNetworkVlanID
          addresses:
            - ip_netmask:
                get_param: StorageMgmtIpSubnet
        - type: vlan
          vlan_id:
            get_param: TenantNetworkVlanID
          addresses:
            - ip_netmask:
                get_param: TenantIpSubnet

The StorageMgmt VLAN entry (using StorageMgmtNetworkVlanID and StorageMgmtIpSubnet) is the additional interface for the Storage Management network we discussed.

Isolating Resources

We calculate the amount of memory to reserve for the host and Red Hat Ceph Storage services using the formula found in “Reserve CPU and Memory Resources for Compute”. Note that we accommodate for 2 OSDs so that we can potentially scale an extra OSD on the node in the future.

Our total instances:
32GB / (2GB per instance + 0.5GB per instance for host overhead) = ~12 instances

Total host memory to reserve:
(12 instances * 0.5GB overhead) + (2 OSDs * 3GB per OSD) = 12GB, or 12000MB

This means our reserved host memory is 12000MB.
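The arithmetic above can be checked with a short script. This is a hedged sketch of the same formula – `reserved_host_memory_mb` is my helper name, not a director tool – using the article's values of a 32GB node, 2GB flavors, 0.5GB per-instance overhead, and 3GB per Ceph OSD with 2 OSDs planned:

```python
# Reserved-memory calculation for a co-located Compute/Ceph node,
# following the "Reserve CPU and Memory Resources for Compute" formula
# as described in the text.
def reserved_host_memory_mb(total_gb, instance_gb, overhead_gb,
                            osd_count, gb_per_osd):
    # How many instances fit, counting per-instance host overhead.
    instances = int(total_gb / (instance_gb + overhead_gb))
    # Reserve the instance overhead plus memory for each planned OSD.
    reserved_gb = instances * overhead_gb + osd_count * gb_per_osd
    return instances, int(reserved_gb * 1000)  # article uses 1000MB per GB

instances, reserved_mb = reserved_host_memory_mb(32, 2, 0.5, 2, 3)
assert instances == 12       # 32 / 2.5 = 12.8 -> ~12 instances
assert reserved_mb == 12000  # (12 * 0.5) + (2 * 3) = 12GB
```

The result, 12000MB, is exactly the NovaReservedHostMemory value used in the environment file later in this article.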

We can also define how to isolate the CPU resources in two ways:

  • CPU Allocation Ratio – Estimate the CPU utilization of each instance and set the ratio of instances per CPU while taking into account Ceph service usage. This ensures a certain amount of CPU resources are available for the host and Ceph services. See the ”Reserve CPU and Memory Resources for Compute” documentation for more information on calculating this value.
  • CPU Pinning – Define which CPU cores are reserved for instances and use the remaining CPU cores for the host and Ceph services.

This example uses CPU pinning. We are reserving cores 1-7 and 9-15 of our Compute node for our instances. This leaves cores 0 and 8 (both on the same physical core) for the host and Ceph services. This provides one core for the current Ceph OSD and a second core in case we scale the OSDs. Note that we also need to isolate the host to these two cores. This is shown after deploying the overcloud. 


Using the configuration shown, we create an additional environment file that contains the resource isolation parameters defined above:

 parameter_defaults:
   NovaReservedHostMemory: 12000
   NovaVcpuPinSet: ['1-7,9-15']

Our example does not use NUMA pinning because our test hardware does not support multiple NUMA nodes. However, if you want to pin the Ceph OSDs to a specific NUMA node, you can do so by following “Configure Ceph NUMA Pinning”.

Deploying the configuration …

This example uses the following environment files in the overcloud deployment:

  • /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml – Enables network isolation for the default roles, including the standard Compute role.
  • /home/stack/templates/network.yaml – Custom file defining network parameters (see Updating Network Templates). This file also sets the OS::TripleO::Compute::Net::SoftwareConfig resource to use our custom NIC template containing the additional Storage Management VLAN we added to the Compute nodes above.
  • /usr/share/openstack-tripleo-heat-templates/environments/hyperconverged-ceph.yaml – Redefines the service list for Compute nodes to include the Ceph OSD service. Also adds a Storage Management port for this role. This file is provided with the director’s Heat template collection.
  • /home/stack/templates/hci-resource-isolation.yaml – Custom file with specific settings for resource isolation features such as memory reservation and CPU pinning (see Isolating Resources).

The following command deploys an overcloud with one Controller node and one co-located Compute/Storage node:

$ openstack overcloud deploy \
    --templates /usr/share/openstack-tripleo-heat-templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /home/stack/templates/network.yaml \
    -e /home/stack/templates/storage-environment.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/hyperconverged-ceph.yaml \
    -e /home/stack/templates/hci-resource-isolation.yaml \
    --ntp-server pool.ntp.org

Configuring Host CPU Isolation

As a final step, this scenario requires isolating the host from using the CPU cores reserved for instances. To do this, log into the Compute node and run the following commands:

$ sudo grubby --update-kernel=ALL --args="isolcpus=1,2,3,4,5,6,7,9,10,11,12,13,14,15"
$ sudo grub2-install /dev/sda

This updates the kernel to use the isolcpus parameter, preventing the kernel from using cores reserved for instances. The grub2-install command updates the boot record, which resides on /dev/sda for default locations. If using a custom disk layout for your overcloud nodes, this location might be different.

After setting this parameter, we reboot our Compute node:

$ sudo reboot


After the Compute node reboots, we can view the hypervisor details to see the isolated resources from the undercloud:

$ source ~/overcloudrc
$ openstack hypervisor show overcloud-compute-0.localdomain -c vcpus
+-------+-------+
| Field | Value |
+-------+-------+
| vcpus | 14    |
+-------+-------+
$ openstack hypervisor show overcloud-compute-0.localdomain -c free_ram_mb
+-------------+-------+
| Field       | Value |
+-------------+-------+
| free_ram_mb | 20543 |
+-------------+-------+

2 of the 16 CPU cores are reserved for the host and Ceph services, and only about 20GB out of 32GB is available for the host to use for instances.

So, let’s see if this really worked. To find out, we will run some Browbeat tests against the overcloud. Browbeat is a performance and analysis tool specifically for OpenStack. It allows you to analyse, tune, and automate the entire process.

For our test we have run a set of Browbeat benchmark tests showing the CPU activity for different cores. The following graph displays the activity for a host/Ceph CPU core (Core 0) during one of the tests:

[Graph: CPU activity on host/Ceph Core 0 during the Browbeat test]

The green line indicates the system processes and the yellow line indicates the user processes. Notice that the CPU core activity peaks during the beginning and end of the test, which is when the disks for the instances were created and deleted respectively. Also notice the CPU core activity is fairly low as a percentage.

The other available host/Ceph CPU core (Core 8) follows a similar pattern:

[Graph: CPU activity on host/Ceph Core 8 during the Browbeat test]

The peak activity for this CPU core occurs during instance creation and during three periods of high instance activity (the Browbeat tests). Also notice the activity percentages are significantly higher than the activity on Core 0.

Finally, the following is an unused CPU core (Core 2) during the same test:

[Graph: CPU activity on unused Core 2 during the Browbeat test]

As expected, the unused CPU core shows no activity during the test. However, if we create more instances and exceed the ratio of allowable instances on Core 1, then these instances would use another CPU core, such as Core 2.

These graphs indicate our resource isolation configuration works and the Ceph services will not overlap with our Compute services, and vice versa.


Co-locating storage on compute nodes provides a simple method to consolidate storage and compute resources. This can help when you want to maximize the hardware of each node and consolidate your overcloud. By adding tuning and resource isolation you can allocate dedicated resources to both storage and compute services, preventing both from starving each other of CPU and memory. And by doing this via Red Hat OpenStack Platform director and Red Hat Ceph Storage, you have a solution that is easy to deploy and maintain!

by Dan Macpherson, Principal Technical Writer at October 05, 2017 01:38 AM

October 04, 2017

OpenStack Superuser

OpenStack operator spotlight: Fastweb

We’re spotlighting users and operators who are on the front lines deploying OpenStack in their organizations to drive business success. These users are taking risks, contributing back to the community and working to secure the success of their organization in today’s software-defined economy. We want to hear from you, too: get in touch with editorATopenstack.org to share your story.

Today, we’re talking to Fastweb, an Italian telecommunications company that provides landline, broadband Internet and digital television services. Fastweb provides cloud-based information and communications technology (ICT) services to Italian businesses without the internal resources to independently develop and support them. As user uptake grew, Fastweb needed to improve its automation and self-service capabilities to keep up with highly dynamic customer needs. Last year, they migrated to the Red Hat OpenStack Platform to meet those market needs.

Here we catch up with cloud engineer Amedeo Salvati.

Describe how are you using OpenStack. What kinds of applications or workloads are you currently running on OpenStack?

At Fastweb, we are using OpenStack as the framework for our infrastructure-as-a-service (IaaS) public cloud offering (fastcloud.it), so our customers can run anything they need on it.

What business results, at a high level, have you seen from using OpenStack? What have been the biggest benefits to your organization as a result of using OpenStack? How are you measuring the impact?

The biggest benefit we’ve had using OpenStack was the freedom to customize everything while avoiding vendor lock-in.

What is a challenge that you’ve faced within your organization regarding OpenStack, and how did you overcome it?

A big challenge for us in the beginning was building the right team to manage engineering and operations of our OpenStack deployment, but after two years we have found the right combination of smart people to make it work.

The post OpenStack operator spotlight: Fastweb appeared first on OpenStack Superuser.

by Superuser at October 04, 2017 01:20 PM

October 03, 2017

Chris Dent

TC Report 40

This week opens OpenStack Technical Committee (TC) election season. There's an announcement email thread (note the followup with some corrections). Individuals in the OpenStack community may self-nominate up until 2017-10-08, 23:45 UTC. There are instructions for how to submit your candidacy.

If you are interested you should put yourself forward to run. The TC is better when it has a mixture of voices and experiences. The absolute time commitment is less than you probably think (you can make it much more if you like) and no one is expected to be a world leading expert in coding and deploying OpenStack. The required experience is being engaged in, with, and by the OpenStack community.

Election season inevitably leads to questions of:

  • what the TC is designed to do
  • what the TC should do
  • what the TC actually did lately

A year ago Thierry published What is the Role of the OpenStack Technical Committee:

Part of the reason why there are so many misconceptions about the role of the TC is that its name is pretty misleading. The Technical Committee is not primarily technical: most of the issues that the TC tackles are open source project governance issues.

Then this year he wrote Report on TC activity for the May-Oct 2017 membership.

Combined, these go some distance to answering the design and actuality questions.

The "should" question can be answered by the people who are able and choose to run for the TC. Throughout the years people have taken different approaches, some considering the TC a sort of reactive judiciary that mediates and adjudicates disagreements while others take the view that the TC should have a more active and executive leadership role.

Some of this came up in today's office hours where I reported participating in a few conversations with people who felt the TC was not relevant, so why run? The ensuing conversation may be of interest if you're curious about the intersection of economics, group dynamics, individualism versus consensualism in collaborative environments, perception versus reality, and the need for leadership and hard work.

Other Topics

Conversations on Wednesday and Thursday of last week hit a couple of other topics.


On Wednesday the topic of Long Term Support came up again. There are effectively two camps:

  • Those who wonder why this should be an upstream problem at all, as long as we are testing upgrades from N-1 we're doing what needs to be done.

  • Those who think that if multiple companies are going to be working on LTS solutions anyway, wouldn't it be great to not duplicate effort?

And we hear reports of organizations that want LTS to exist but are not willing to dedicate resources to make it happen, evidently still confusing large-scale open source with "yay! I get free stuff!".

Overlapping Projects

On Thursday we discussed some of the mechanics and challenges when dealing with overlapping projects in the form of Trove and a potential new database-related project with the working title of "Hoard". Amongst other things there's discussion of properly using the service types authority and effectively naming resources when there may be another thing that wants to use a similar name for not quite the same purpose.

by Chris Dent at October 03, 2017 07:00 PM

Doug Hellmann

git-os-job 1.1.1

The OpenStack project stores the logs for all of the test jobs related to a commit on http://logs.openstack.org organized by the commit hash. To review the logs after a job runs, most developers start with the message jenkins leaves on gerrit, and click through to the log files. Not all jenkins jobs are triggered by …

by doug at October 03, 2017 05:42 PM

OpenStack Superuser

How to make your OpenStack Summit talk a big success

You prepared, you submitted, you were accepted; congratulations! The OpenStack community is intelligent and engaged, so expectations are always high. Whether this is your 50th or first talk at an OpenStack Summit, here are five little ways to make sure your talk is a success.

Focus on the nonobvious

Assume your audience is smart and that they’ve heard a talk about your subject before. Even if it’s a 101 talk where your goal is educating about the basics, what can you say that will be unique to your presentation? What could they not find out by Googling your topic? Make sure to present something new and unexpected.

A good presentation sells better than a sales pitch

Unfortunately, the quickest way to empty a room—particularly in the OpenStack community—is to use talk time to push a service or product. This might conflict with company expectations––someone probably wants to see an ROI on your talk and maybe even sent over talking points. Instead, create interest in your company or product by being an outstanding representative and demonstrating smarts, innovation and the ability to overcome the inevitable challenges. The “sales pitch” is not what you say about a product, but it is you and how you present.

Shorten your career path story

It’s very common for talks to begin with “first, a little about me,” which often sounds like reading a resume. While this can create an audience connection, it eats up valuable presentation time and takes the focus off the topic. Instead, share only the relevant pieces of your career to set up your expertise and the audience’s expectations.

Take a look at the difference between these examples:

Frequently done: “My name is Anne and I’m currently a marketing coordinator at the OpenStack Foundation. I started off in renewable energy, focusing on national energy policy and community engagement; then I became a content writer for a major footwear brand; then worked at an international e-commerce startup; and now I’m here! In my free time I race bicycles and like riding motorcycles.”

The audience has learned a lot about me (probably too much!), but it doesn’t give them a single area of expertise to focus on. It distracts the audience from the topic of my talk.

Alternative: “My name is Anne and as the marketing coordinator at the OpenStack Foundation, I work on our social media team.”

I’ve established my professional connection to the topic, explained why they should listen and foreshadowed that we’ll be talking about social media marketing.

Conversation, not recitation

Memorizing a script and having the script in front of you (like on a phone) is a common device to try to soothe presentation nerves. Ironically, this makes your presentation more difficult and less enjoyable for the audience. When you trip up on a word (and we all do!), it can cause you to lose the paragraph that follows it. Reading off a device will make your presentation sound artificial.

Instead, rehearse your presentation but use slide graphics or brief bullets to keep you on message. Pretend you’re having a conversation with the audience; just a cup of coffee over a very large table.

P.S. Make sure you budget time for conversation with your audience, and bring a few thought-provoking questions of your own to get the discussion started.

Humor doesn’t always work in international audiences

OpenStack has a wonderfully international community, which means that many people in your audience may not be native or fluent in the language you are presenting in. Idioms, turns of phrase or plays on words can be particularly difficult to understand. Instead of leaning on humor, tell a story about how something came to be, or a critical error that we can all see the humor in.

Looking forward to the incredible talks slated for the upcoming Summit; good luck, presenters!

Cover Photo // CC BY NC

The post How to make your OpenStack Summit talk a big success appeared first on OpenStack Superuser.

by Anne Bertucio at October 03, 2017 11:02 AM


Beat ransomware attacks to the punch

Ransomware, a type of cyberattack tool that encrypts data on computers and networks and demands money to release the data or refrain from publishing it, has been on quite a rampage this summer, first with the global outbreak of WannaCry/WanaCrypt0r, followed quickly by the spread of the updated Petya malware. It has infected hundreds of thousands of machines around the world and halted operations in small businesses as well as large corporations such as FedEx and LG Electronics, and SMBs have proved to be a particularly vulnerable target for ransomware.

At first glance, the issue would seem to be the ransomware cost, but that is a secondary concern. By far the more serious problem is the downtime and subsequent revenue loss while you deal with the attack, whether or not you choose to pay the ransom. Lacking the resources of larger corporations, SMBs can feel the hit much more intensely.

Here we’ll discuss how to minimize your risk of data lockdown from ransomware and how you can use Backup to the Cloud and Disaster Recovery solutions from Host-Telecom to keep your data and your data infrastructure up and running. But first let’s talk more about the ransomware variants du jour, WannaCry and Petya.

The dreaded takeover “screen”

Malware, including ransomware, has been around about as long as computers have, but ransomware seems to be enjoying a special surge of popularity at the moment. According to Malwarebytes’ Cybercrime Tactics and Techniques Q2 2017 report, ransomware grew from no less than 50% of total malware attacks in January to a high of more than 70% in June. Appearing on May 12, WannaCry, which is just one of many ransomware families, began its worldwide spread, affecting about 300,000 systems. Figure 1 shows the takeover screen.

WannaCry GUI

Figure 1. WannaCry GUI (credit: Malwarebytes, Cybercrime Tactics and Techniques Q2 2017)

Petya followed WannaCry on June 27, its lockdown potential greatly exacerbated with updated worm capabilities that enable the spread of ransomware from a single unpatched machine across the network, taking the whole thing down.

Payment demands

Ransomware demands vary, with WannaCry and Petya not being terribly exorbitant, but the sheer number of computers affected reaches pandemic proportions so quickly that the crime tends to pay off. Targeting smaller companies without sophisticated data backup solutions who may then respond more quickly is lucrative, although there’s no guarantee your files will necessarily be released. In addition, some ransomware variants are beginning to demand payment in Bitcoin as seen in the WannaCry screen in Figure 1, making it much harder to trace payments and therefore emboldening attackers.

Backups – Shoulda, Coulda, Woulda

As is often typical of malware, a patch that would have subverted WannaCry attacks was developed before cyber criminals unleashed it. Unfortunately and also typically, the fix was extremely sparsely deployed, resulting in widespread shutdowns with a huge economic impact. Malware is nothing new, but unfortunately few organizations are setting records with timely updates.

Yeah, you’ll be down awhile, and yeah, it’ll cost a lot

How bad could ransomware really be? Well, the sick feeling that the takeover screen evokes is completely secondary to the ongoing trauma of lost revenue as you resolve the issue. Figures estimate those costs at $325M a few years ago and increasing at a blazing rate. By May of this year, Cybersecurity Ventures editor-in-chief Steve Morgan reported that global damages from ransomware had multiplied 15 times in two years. He predicted 2017 losses at $5B, with attacks on healthcare organizations quadrupling by 2020.

With the threat increasing in both frequency and cost, at a minimum your organization should follow rudimentary policies to reduce infection, which we now discuss.

Basic protection

To avoid ransomware, establishing a strict patch update schedule and enforcing it is absolutely critical. As noted, security measures are often available months before data takeover attempts, as was the case with WannaCry and Petya. For example, a patch for WannaCry was available before it started spreading in May, but LG was infected by WannaCry in August.

Keeping staff aware and vigilant about opening suspicious email and attachments is also a must. Even technically savvy people can let their guard down when faced with a huge workload, a full inbox, and Internet pop-up screens. However, neither patching nor employee vigilance is enough.

The inevitable problem with basic protection

While basic network hygiene can mitigate the risks of ransomware, the human factor ensures fallibility. In fact, despite the importance of employee vigilance about suspicious email and similar phishing ploys, Fortinet security states:

“THIS IS CRITICAL: Do NOT count on your employees to keep you safe. While it is still important to up-level your user awareness training so employees are taught to not download files, click on email attachments, or follow unsolicited web links in emails, human beings are the most vulnerable link in your security chain, and you need to plan around them.”

The advice not to count on your employees inevitably extends to strict patch updates where enforcement depends on a human for timely application. Despite the imperfection of these protection measures, you should still do what you can to reduce your risks and start evaluating a higher level of protection.

We’ll address data security and infrastructure solutions now, including how to get a meaningful risk assessment for your business that helps you to plan and budget for what you may encounter in ransomware and other malware exploits.

Calculate real world risk

Two highly useful methods to evaluate the resiliency of your business in case of a hit by ransomware or some other type of disaster are Recovery Time Objective (RTO) and Recovery Point Objective (RPO). RTO is an estimate of how long your organization can survive with systems down while RPO is a measure of how well your company operates when faced with data loss.

RTO may stop you dead in your tracks as it can bring down your entire IT infrastructure, but RPO can vary if you have a solid data backup that the ransomware has not encrypted. A good way to think about RPO is as the longest you can continue to operate on your latest data backup. Depending on what you use the data for, the backup may suffice for a few days or more before business suffers. However, if your business requires the latest updates, RPO could be zero. To calculate it, determine the time between data backups, the data lost in between, and when that lost data becomes critical to business operations.
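The calculation described above reduces to a simple comparison. This is my own toy illustration, not a Host-Telecom tool – `rpo_met` and its parameter names are assumptions made for the sketch:

```python
# RPO sanity check: given how often you back up and how long the
# business can run on stale data, decide whether the backup schedule
# meets the recovery point objective.
def rpo_met(backup_interval_hours, tolerable_data_loss_hours):
    # Worst case, an attack lands just before the next backup runs,
    # so the maximum data loss equals one full backup interval.
    worst_case_loss = backup_interval_hours
    return worst_case_loss <= tolerable_data_loss_hours

# Backing up every 4 hours meets a 24-hour tolerance...
assert rpo_met(backup_interval_hours=4, tolerable_data_loss_hours=24)
# ...but backing up every 48 hours does not.
assert not rpo_met(backup_interval_hours=48, tolerable_data_loss_hours=24)
```

A business whose RPO is effectively zero (it always needs the latest updates) fails this check for any non-zero backup interval, which is the case where continuous replication rather than periodic backup is required.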

Risk assessment based on RTO and RPO gives you a much more accurate idea of the appropriate tools, preparation, and associated budget requirements for ensuring ongoing business operations when facing unanticipated shutdowns due to ransomware and other malicious system breaches. Now let’s take a look at some of those tools and their costs as well as cost-benefits.

Host-Telecom vs Ransomware

Host-Telecom together with partner Hystax offers Backup to the Cloud and Disaster Recovery, both of which ensure that your latest data are immediately available in case of ransomware lockdown. If a cyberattack causes total system failure, Disaster Recovery can restore your entire operating environment in addition to your data within minutes.

Fast, highly reliable backup recovery at lower costs

Both Backup to the Cloud and Disaster Recovery are based on OpenStack cloud, which we use because of its speed and technical advantages. In addition, you achieve significant savings when operating in an OpenStack environment, which unlike commercial platforms, does not charge software licensing fees.

And while you may harbor concerns about the security of the cloud, you should think again. Although heightened security risk is often cited as a factor in the slow pace of SMB cloud adoption, research does not support this concern. Analysis shows that the risk of malware infection from using cloud applications is low, with no correlation between usage and threat levels.

Now that you understand the cost savings of a dependable environment, let’s get into the technical details of our solutions, beginning with Backup to the Cloud.

Backup to the Cloud

To enable Backup to the Cloud, Host-Telecom installs a small, pre-configured virtual machine on your side that transmits your data to our OpenStack cloud in the Host-Telecom data center, where Backup to the Cloud compresses, deduplicates, and encrypts your data, creating a complete snapshot. You can immediately access the continually refreshed version if your data is compromised.
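A minimal sketch of the compress-then-encrypt ordering implied above (the XOR "cipher" is a stand-in purely for illustration; a real backup service would use an authenticated cipher such as AES-GCM):

```python
import zlib
from itertools import cycle

def backup_transform(data: bytes, key: bytes) -> bytes:
    """Compress first, then encrypt: the reverse order is useless
    because ciphertext looks random and will not compress."""
    compressed = zlib.compress(data, level=9)
    # Placeholder XOR "cipher" for illustration only -- a real backup
    # service would use an authenticated cipher such as AES-GCM.
    return bytes(b ^ k for b, k in zip(compressed, cycle(key)))

def restore_transform(blob: bytes, key: bytes) -> bytes:
    decrypted = bytes(b ^ k for b, k in zip(blob, cycle(key)))
    return zlib.decompress(decrypted)

original = b"customer-records " * 1000
blob = backup_transform(original, b"secret")
assert restore_transform(blob, b"secret") == original
assert len(blob) < len(original)  # compression happened before encryption
```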

WannaCry attacks your computer and network through a specific port (TCP 445, used by SMB file sharing). If you have it open for any reason, the ransomware has easy access. We close this port in the Host-Telecom data center, keeping your workloads secure.

In addition, with Backup to the Cloud you avoid the time and expense of creating your own backup system or spending ever-growing sums for Backup as a Service as your storage needs increase. Check out our prices and particulars.

Disaster Recovery

Disaster Recovery replicates your data as well as your entire IT infrastructure in the OpenStack cloud deployed in the Host-Telecom data center. It accommodates VMware vSphere, Hyper-V, OpenStack, Virtuozzo, and bare metal workloads, so you can access your data and/or redeploy your operations in the OpenStack cloud environment immediately in case of system failure. Minimizing RTO and RPO, Disaster Recovery includes plan testing and powerful failback to production at very competitive and affordable prices.

Ransomware personalized

You need affordable solutions and you need them to operate at the highest level. At Host-Telecom we genuinely care about your well-being and have exacting standards for the solutions we provide, testing and using them in our own workspace to make continuing improvements. The following statement by Jean-Philippe Taggart, Senior Security Researcher at Malwarebytes Labs, in their Q2 2017 report echoes our sentiments as he describes the biggest disaster failure he’s seen:

“Witnessing a small business owner having to pay for a ransomware decryption key. This particular individual had no disaster recovery plan and would have had to put the key under the door and close the business as critical data was encrypted. Never in my life was buying bitcoins and acquiring the decryption key a more depressing event. This is the worst-case scenario and the worst possible outcome. Not only did such an event demonstrates (sic) the viability of a ransomware attacks (sic) to criminals, it is something I never want to have to do again. Needless to say, this particular victim now has multiple backup solutions, as well as a strictly enforced work-only machine policy. I was profoundly uneasy in providing assistance with this ransomware infection, as I am a strong advocate in never paying, but in this case they saw no other solution. Despite successfully recovering most of the data, it felt like a defeat.”

Like Taggart, Host-Telecom wants you to operate with state-of-the-art security tools at prices you can afford. No one should have to face ransomware or system failure without excellent protection. Contact us today to learn how we can work together to ensure the security of your company’s data and infrastructure.



Denise Boehm

The post Beat ransomware attacks to the punch appeared first on Host-Telecom.com.

by Denise Boehm at October 03, 2017 10:02 AM

October 02, 2017

Red Hat Stack

Using Red Hat OpenStack Platform director to deploy co-located Ceph storage – Part One

An exciting new feature in Red Hat OpenStack Platform 11 is full Red Hat OpenStack Platform director support for deploying Red Hat Ceph storage directly on your overcloud compute nodes. Often called hyperconverged, or HCI (for Hyperconverged Infrastructure), this deployment model places the Red Hat Ceph Storage Object Storage Daemons (OSDs) and storage pools directly on the compute nodes.

Co-locating Red Hat Ceph Storage in this way can significantly reduce both the physical and financial footprint of your deployment without requiring any compromise on storage.


Red Hat OpenStack Platform director is the deployment and lifecycle management tool for Red Hat OpenStack Platform. With director, operators can deploy and manage OpenStack from within the same convenient and powerful lifecycle tool.

There are two primary ways to deploy this type of storage, which we currently refer to as pure HCI and mixed HCI.

In this two-part blog series we are going to focus on the Pure HCI scenario demonstrating how to deploy an overcloud with all compute nodes supporting Ceph. We do this using the Red Hat OpenStack Platform director. In this example we also implement resource isolation so that the Compute and Ceph services have their own dedicated resources and do not conflict with each other. We then show the results in action with a set of Browbeat benchmark tests.

But first …

Before we get into the actual deployment, let’s take a look at some of the benefits around co-locating storage and compute resources.

  • Smaller deployment footprint: When you perform the initial deployment, you co-locate more services on single nodes, which simplifies the architecture and requires fewer physical servers.

  • Easier to plan, cheaper to start out: co-location provides a decent option when your resources are limited. For example, instead of using six nodes, three for Compute and three for Ceph Storage, you can just co-locate the storage and use only three nodes.

  • More efficient capacity usage: You can utilize the same hardware resources for both Compute and Ceph services. For example, the Ceph OSDs and the compute services can take advantage of the same CPU, RAM, and solid-state drive (SSD). Many commodity hardware options provide decent resources that can accommodate both services on the same node.

  • Resource isolation: Red Hat addresses the noisy neighbor effect through resource isolation, which you orchestrate through Red Hat OpenStack Platform director.

However, while co-location realizes many benefits there are some considerations to be aware of with this deployment model. Co-location does not necessarily offer reduced latency in storage I/O. This is due to the distributed nature of Ceph storage: storage data is spread across different OSDs, and OSDs will be spread across several hyper-converged nodes. An instance on one node might need to access storage data from OSDs spread across several other nodes.

The Lab

Now that we fully understand the benefits and considerations for using co-located storage, let’s take a look at a deployment scenario to see it in action. 


We have developed a scenario using Red Hat OpenStack Platform 11 that deploys and demonstrates a simple “Pure HCI” environment. Here are the details.

We are using three nodes for simplicity:

  • 1 director node
  • 1 Controller node
  • 1 Compute node (Compute + Ceph)

Each of these nodes has the same specifications:

  • Dell PowerEdge R530
  • Intel Xeon CPU E5-2630 v3 @ 2.40GHz – This contains 8 cores, each with hyper-threading, providing a total of 16 hardware threads.
  • 32 GB RAM
  • 278 GB SSD

Of course for production installs you would need a much more detailed architecture; this scenario simply allows us to quickly and easily demonstrate the advantages of co-located storage. 

This scenario follows these resource isolation guidelines:

  • Reserve enough resources for 1 Ceph OSD on the Compute node
  • Reserve enough resources to potentially scale an extra OSD on the same Compute node
  • Plan for instances to use 2GB on average but reserve 0.5GB per instance on the Compute node for overhead.
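Applying these guidelines to the 32 GB node, a back-of-the-envelope plan looks like this (the 3 GB and 1 core per OSD figures are assumptions for the example, not numbers from this deployment):

```python
def plan_hci_node(total_ram_gb=32, total_cores=16,
                  osds_planned=2, gb_per_osd=3, cores_per_osd=1,
                  avg_instance_gb=2.0, overhead_per_instance_gb=0.5):
    """Reserve room for the planned OSD count (the current OSD plus one
    for scaling), then see how many average instances fit in the rest."""
    ram_for_instances = total_ram_gb - osds_planned * gb_per_osd
    per_instance = avg_instance_gb + overhead_per_instance_gb
    return {
        "ram_for_instances_gb": ram_for_instances,
        "cores_for_instances": total_cores - osds_planned * cores_per_osd,
        "max_instances": int(ram_for_instances // per_instance),
    }

# Reserving 2 x 3 GB for OSDs leaves 26 GB: room for 10 instances at
# 2.5 GB (2 GB average + 0.5 GB overhead) apiece.
print(plan_hci_node())
```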

This scenario implements network isolation using VLANs:

  • The default Compute node deployment templates shipped with tripleo-heat-templates do not attach the Storage Management network to the compute nodes, so we need to change that. They require a simple modification to accommodate the Storage Management network, which is illustrated later.

Now that we have everything ready, we are set to deploy our hyperconverged solution! But you’ll have to wait for next time for that so check back soon to see the deployment in action in Part Two of the series!

Want to find out how Red Hat can help you plan, implement and run your OpenStack environment? Join Red Hat Architects Dave Costakos and Julio Villarreal Pelegrino in “Don’t fail at scale: How to plan, build, and operate a successful OpenStack cloud” today.

For full details on architecting your own Red Hat OpenStack Platform deployment check out the official Architecture Guide. And for details about Red Hat OpenStack Platform networking see the detailed Networking Guide.

by Dan Macpherson, Principal Technical Writer at October 02, 2017 10:31 PM


Don’t confuse OpenStack with infrastructure

At its most fundamental, OpenStack is a common API abstraction layer for infrastructure. But what does that actually mean, and why should you care?

by Shaun O'Meara at October 02, 2017 08:06 PM


Recent blog posts

Here's what the RDO community has been blogging about recently:

OpenStack 3rd Party CI with Software Factory by jpena

Introduction When developing for an OpenStack project, one of the most important aspects to cover is to ensure proper CI coverage of our code. Each OpenStack project runs a number of CI jobs on each commit to test its validity, so thousands of jobs are run every day in the upstream infrastructure.

Read more at http://rdoproject.org/blog/2017/09/openstack-3rd-party-ci-with-software-factory/

OpenStack Days UK by Steve Hardy

Yesterday I attended the OpenStack Days UK event, held in London. It was a very good day and there were a number of interesting talks, and it provided a great opportunity to chat with folks about OpenStack. I gave a talk, titled "Deploying OpenStack at scale, with TripleO, Ansible and Containers", where I gave an update on the recent rework in the TripleO project to make more use of Ansible and enable containerized deployments. I'm planning some future blog posts with more detail on this topic, but for now here's a copy of the slide deck I used, also available on github.

Read more at http://hardysteven.blogspot.com/2017/09/openstack-days-uk-yesterday-i-attended.html

OpenStack Client in Queens - Notes from the PTG by jpichon

Here are a couple of notes about the OpenStack Client, taken while dropping in and out of the room during the OpenStack PTG in Denver, a couple of weeks ago.

Read more at http://www.jpichon.net/blog/2017/09/openstack-client-queens-notes-ptg/

Event report: OpenStack PTG by rbowen

Last week I attended the second OpenStack PTG, in Denver. The first one was held in Atlanta back in February.

Read more at http://drbacchus.com/event-report-openstack-ptg/

by Rich Bowen at October 02, 2017 04:48 PM

Doug Hellmann

git-os-job 1.1.0

The OpenStack project stores the logs for all of the test jobs related to a commit on http://logs.openstack.org organized by the commit hash. To review the logs after a job runs, most developers start with the message jenkins leaves on gerrit, and click through to the log files. Not all jenkins jobs are triggered by …

by doug at October 02, 2017 04:05 PM

OpenStack Superuser

How to deliver network services at the edge

SAN FRANCISCO — It’s been about 18 months since Verizon started on their edge computing journey. Beth Cohen, new product strategist at the telecoms giant, took the keynote stage at OpenDev to share takeaways. “Literally, we were starting from nothing — massively distributed cloud wasn’t even a term when we came up with these ideas.”

“We needed to do software-defined networking, we knew we had to embrace it, that’s the future,” she says. “But what is software-defined networking — how does that translate into actual products? That’s the trick…” (She goes into more detail about Verizon’s “cloud in a box,” a product that’s about the size of your home router, in this Boston Summit keynote.)

In the 15-minute talk for OpenDev, which includes an appropriately “edgy” live demo, she touches on:
• Product goals and objectives
• The challenges of building products from scratch (spoiler alert: it’s hard!)
• The architecture for massively distributed OpenStack
• “Orchestration magic” (Making it work is truly magic because it’s not just orchestration within the data center, but across the network. “Latency becomes a serious problem when you have your core information sitting in a data center in Texas and you’re trying to deploy to an edge device in Sydney, Australia. There’s some physics involved.”)
• Virtual network services

OpenDev was a two-day event sponsored by Ericsson, Intel, and the OpenStack Foundation. The event was a welcome forum for talking about how to push the boundaries even further, Cohen said. “This is a fantastic opportunity for us to dig into what’s needed to do edge computing right.” She recalls attending one of the early OpenStack Summits as one of the only users present and asking, “‘Um, where’s the documentation?’ This is a great way to get the users and developers together from the start.”

You can check out her talk below or Etherpads from the individual sessions.




The post How to deliver network services at the edge appeared first on OpenStack Superuser.

by Superuser at October 02, 2017 03:21 PM

Chris Dent

OpenStack Denver PTG

Because it's a thing, here's my summary from the most recent OpenStack PTG in Denver. This was the second such event. A PTG is a "Project Teams Gathering". This means it is a time for contributors to the various OpenStack projects to do detailed planning for the coming development cycle, without the conference obligations normally associated with summit.

Denver was the second PTG. The first was in Atlanta (I wrote a post about that one too). It's pretty clear that the organizers took the feedback from Atlanta to heart when orchestrating Denver:

  • In the entire six days of OpenStack work (the first day there was a board meeting) I experienced a coffee drought only once. This is a huge improvement over Atlanta (and the intervening summit in Boston).

  • The food was much better. I looked forward to lunch.

Despite the trains, the facilities at the hotel worked very well for what we were there to do: sit in a room and talk.

The week was divided up into two sections: The first two days oriented towards cross-project or horizontal teams; the latter three days more project specific. In the most recent TC report I've already reported on the TC-related work. The rest of this discusses API SIG and Nova.


In Atlanta the API room ended up being something of a destination. In part this was because people didn't know where else to go, but it was also because we were talking about formalizing the interoperability guidelines, and thus about microversions, and that tends to draw the crowds.

(Etherpad from the API room.)

This time around microversions were not on the agenda. Instead we chatted about capabilities, the API working group becoming a SIG, and reaching out to developers of SDKs that happen to support OpenStack.

Capabilities is an overloaded term. In the context of APIs it has at least four meanings:

  • What can be done with this cloud?
  • What can be done with this service (in this cloud)?
  • What can be done with this type of resource (in this service)?
  • What can be done with this instance (of this type of resource)?

Each of these can change according to the authorization of the requesting user.
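One way to picture the four scopes is as a lookup that resolves a capability at the most specific level available; the catalogue and key names below are illustrative, not an actual OpenStack API:

```python
# Hypothetical capability catalogue, scoped cloud -> service ->
# resource type -> instance. None of these keys come from a real API.
CATALOGUE = {
    "cloud": {"multi-region": True},
    "compute": {
        "capabilities": {"live-migration": True},
        "server": {
            "capabilities": {"resize": True},
            "instances": {"server-42": {"resize": False}},  # e.g. pinned host
        },
    },
}

def capability(service, resource, instance, name):
    """Resolve a capability at the most specific scope that defines it:
    instance overrides resource type, which overrides service, which
    overrides the cloud-wide default."""
    svc = CATALOGUE[service]
    res = svc[resource]
    inst = res["instances"].get(instance, {})
    for scope in (inst, res["capabilities"], svc["capabilities"]):
        if name in scope:
            return scope[name]
    return CATALOGUE["cloud"].get(name, False)

print(capability("compute", "server", "server-42", "resize"))  # False
print(capability("compute", "server", "server-99", "resize"))  # True
```

In a real deployment the same lookup would additionally be filtered by the requesting user's authorization at every level.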

For the first, there's work in progress on a cloud profile document which will, in broad strokes, describe the bones of a single OpenStack deployment. This is useful to prescribe because openstack-infra, which uses multiple clouds, already has lots of experience needing it.

For the other meanings we are less sure of the needs, so some exploration is needed. The Cinder project has committed to exploring service-level capabilities. From there a standard will evolve. The hope is that we will be able to have something consistent such that discovery of capabilities follows the same pattern in each service. Otherwise we're doing a huge disservice to API consumers.

Becoming a SIG is an effort to ensure that all people who are interested in OpenStack APIs have a place to collaborate. In the past, because the vast majority of working group effort was in improving or correcting the implementations of service APIs, there was a perception that the group was for developers only. This has never been the intent and the hope is that now anyone who uses, makes, or makes tools for, OpenStack APIs can use the SIG as a meeting place.

One major audience is developers of SDKs (such as gophercloud) that consume OpenStack APIs. We discussed strategies for implementing microversions (the topic got in after all!) in clients. One strategy is to be opinionated about which microversion for any given request is "best" but also allow the caller to declare otherwise. Supporting this kind of thing in clients is complex ("Does this mean we need to support every single microversion ever?") but is the real cost of making clients interoperate with different deployments that may be at different points in apparent time ("Yes, mostly.").
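The "opinionated default with caller override" strategy can be sketched as follows; the call names and microversion numbers are made up for the illustration:

```python
# The SDK's opinion of the "best" microversion per call; both the call
# names and the version numbers are invented for this sketch.
PREFERRED = {
    "create_server": "2.52",
    "list_flavors": "2.61",
}

def _key(version):
    # "2.10" must sort after "2.9", so compare numerically, not as strings.
    major, minor = version.split(".")
    return (int(major), int(minor))

def pick_microversion(call, server_max, override=None):
    """Use the caller's override if given; otherwise use the SDK's
    preferred version, capped at what the deployment supports."""
    wanted = override or PREFERRED.get(call, "2.1")
    return wanted if _key(wanted) <= _key(server_max) else server_max

print(pick_microversion("create_server", server_max="2.60"))        # 2.52
print(pick_microversion("create_server", server_max="2.38"))        # 2.38
print(pick_microversion("create_server", "2.60", override="2.10"))  # 2.10
```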

There was also discussion about coming up with straightforward ways to evaluate existing client implementations and bless them as preferred solutions, not because we like them, but because they work correctly. To make that most effective a plan is underway to provide testing resources to SDK developers so they have known good OpenStack clouds to test against.


I traditionally dread any in-person Nova planning event. I like seeing the people, and it's great to share ideas but the gap between the amount of stuff we need to talk about and actually get around to talking about is so astronomically huge that it is difficult to not become frustrated, depressed and even angry.

A cycle retrospective was the first item on the three day agenda. It wasn't really a retrospective. People are sufficiently burnt from lack of change, too much work, or inability to really talk in earnest about the issues that only token gestures were made to addressing anything. The most concrete statements were "try harder to not get into nit-picky arguments" and "merge sooner in the cycle so we can catch global breakages and avoid end-of-cycle chaos". Both of these are reasonable things if taken at face value, but if our behavior since the PTG is any indicator we're going to struggle to live up to those plans.

A great deal of the time at the PTG was devoted to placement and placement-related topics. Part of this is because placement is in the middle of everything, part of it is because lots of "enhanced platform" work is blocked on placement, and part of it is because those of us who are placement people are prolix. Our main accomplishment was to limit placement priorities to something potentially realistic:

  • implementing alternate selected hosts to support retries in cellsv2
  • getting allocation handling for migrations using migration uuids
  • implementing nested resource providers

Typically, these are all bigger than they sound. There's some additional information in one of my recent resource provider updates.

Another large topic, which will have implications in placement, is coming up with new ways to generically support device management that can deal with PCI devices, FPGA, GPUs and such things in libvirt environments as well as in other hypervisor setups (PowerVM, VMWare) where such devices are not locally visible in the filesystem. The outcome is we're not going to work on that yet, but we made sure that work happening now with nested resource providers will not limit our options later. And that we're going to stop using the PCI whitelist for everything related to PCI devices and take it back to only being a whitelist. Eventually.


The next in person gathering will be in Sydney for the next summit, including the "Forum" which is oriented towards engaging and extracting feedback from users, operators and other developers to make long term plans. I'm hoping that we can limit the amount of "placement, placement, placement" at the forum as we've got enough to work with for now and there are plenty of other topics that need attention.

As reported in the last TC report I'm on the hook to report at summit on the community health issues related to "developer happiness". I feel pretty strongly that some of the issues not addressed in the Nova retrospective are strong factors in these community health issues but I'm as yet unsure how to express them or address them. This is not as simple as "Nova needs more cores" (something that has been stated in the past). The unhealthiness is just as much an issue for cores (who are overstretched, too often pre-disposed to heroics) as non-cores. If you have ideas, please let me know, and I'll work to integrate them. I'll be writing more about this.

by Chris Dent at October 02, 2017 01:00 PM

October 01, 2017

Dragonflow Team

Pike Release - What have we done of late?


We would like to present the work, changes, and features we have done this cycle. We have worked very hard, and we would like to show off some of our work.

Birds-Eye View

What Did We Do?

In the past half a year, we have finished the following features:
  • IPv6
  • Trunk ports (VLAN aware VMs)
  • SFC
  • Service Health Report
  • BGP
  • Distributed SNAT
We also made a big improvement to our NorthBound API.

What's Next?

This is what we plan to do for Queens. If you want something that isn't here, let us know!
  • Kuryr Integration
  • RPM and DEB Packaging
  • LBaaS (Dragonflow native and distributed)
  • L3 Flavour
  • Troubleshooting Tools
  • FWaaS
  • TAPaaS
  • Multicast and IGMP
Of course, the best laid plans of mice and men...



Contribution by Companies (Resolved bugs)



Lines of Code by Contributors


Filed Bugs, by Company:


What Was Done



IPv6

Dragonflow now supports IPv6. This means that virtual machines that are configured on IPv6 subnets can communicate with each other. IPv6 routing also works, so they don't even have to be on the same network. Security groups also work, so you can create IPv6-based micro-segmentation.

If there is also an IPv6 provider network, these VMs can communicate with the outside world.

Note that since NAT is supposed to be less used in IPv6, it was not implemented.

Trunk Ports (VLAN aware VMs) 


In container scenarios, a virtual machine with a single virtual network card hosts multiple containers. It tags each container's traffic with a VLAN tag, and then Dragonflow can know to which (virtual) port the traffic belongs. Incoming traffic tagged with the container's VLAN tag is forwarded to the container.

Dragonflow now supports this scenario. With trunk ports, virtual sub-ports can be defined to have a segmentation type and ID, and a completely different network and subnet than their parent.

Since now anything can sit on the network (virtual machine, container, or even an application inside a network namespace), we will refer to all of them as Network Elements.

A few words regarding implementation: Every port in Dragonflow is tagged in an OpenFlow register. Specifically, reg6 contains an internal ID belonging to the source port.

Dragonflow detects the VLAN tag on the packet from a relevant logical port. It untags the packet, and changes reg6 to the logical subport.

To emphasise the container networking angle, see the Kuryr-Kubernetes blog post: http://www.dragonflow.net/2017/09/kubernetes-container-services-at-scale.html. It discusses the Kuryr integration as well.



SFC

Service Function Chaining allows the tenant to place Service Functions along the network path between network elements. A service function can be anything - e.g. firewall, deep packet inspection device, or VOIP codec.

A blog post specifically for SFC has been published here:


Service Health Report


It is very important to know which running services are still... running. This is implemented in the Service Health Report feature. Every Dragonflow service now reports its health to the database.

This way you can tell if a service process has died. There isn't a user interface for this yet, but the underlying data is in place.
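A heartbeat scheme of this kind, where each service periodically writes a timestamp and a monitor flags stale entries, might be sketched like this (the in-memory dict stands in for the Dragonflow database, and the service names are invented; this is not the project's actual schema):

```python
import time

HEALTH = {}  # stand-in for the Dragonflow database table

def report_alive(service, now=None):
    """Each service calls this periodically to record its heartbeat."""
    HEALTH[service] = time.time() if now is None else now

def dead_services(timeout_s=30, now=None):
    """Anything that has not reported within the timeout is presumed dead."""
    now = time.time() if now is None else now
    return sorted(s for s, ts in HEALTH.items() if now - ts > timeout_s)

report_alive("df-l3-agent", now=100.0)
report_alive("df-metadata-service", now=125.0)
print(dead_services(timeout_s=30, now=140.0))  # ['df-l3-agent']
```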



BGP

Border Gateway Protocol (BGP) is a standardized gateway protocol designed to exchange routing and reachability information among autonomous systems.

BGP dynamic routing in OpenStack enables advertisement of self-service network prefixes to physical network devices that support BGP, thus removing the conventional dependency on static routes.

Distributed SNAT


Distributed SNAT is a novel implementation for SNAT that allows it to be fully distributed. This means that no network node is needed. The entire feature is contained within the compute node relevant to the network element using it.

For more information on exactly how it works, see this post: http://www.dragonflow.net/2017/06/distributed-snat-examining-alternatives.html

What Is Yet To Come

Kuryr Integration

Kuryr allows container networking (e.g. Docker, Kubernetes) to be defined using the Neutron API. We want to make sure Dragonflow supports being deployed and used this way.

It is worth mentioning that external LBaaS solutions such as HA Proxy and Octavia already work with Dragonflow. They are used to support Kubernetes services.

RPM and DEB Packaging

Currently, the only installation method is pip. This is not good enough for production. RPM (for Red Hat-based distributions) and DEB (for Debian-based distributions) packages are a must-have for any mature project.



LBaaS (Dragonflow native and distributed)

Dragonflow's motto is that everything should be distributed. Bearing that in mind, we believe that we can improve LBaaS performance by implementing it in a distributed manner. Actual implementation should be pushed as far down as it would go (i.e. implement using OpenFlow first if possible, and push it higher-up only if necessary).

L3 Flavour


Dragonflow is becoming very feature rich, but not all the features are wanted in every deployment. In some cases, only Dragonflow's L3 features (e.g. Distributed SNAT) are needed. Allowing Dragonflow to be deployed as an L3 agent allows Dragonflow to be used with greater flexibility, letting deployers take exactly the features they need.

Troubleshooting Tools


Troubleshooting cloud networking is a known pain point. It pains developers and operators alike. If there is a problem in the network, you need to quickly find the source of the problem, and quickly identify who can fix it the fastest.

We want to answer this need, by developing troubleshooting tools that will be able to visually show where the network fails, and why.



FWaaS

Sometimes, security groups are just not enough. In some cases, the user wants to define a firewall inside their virtual network. The most logical place to put the firewall implementation is on the wire, i.e. directly in the pipeline the packet passes.



TAPaaS

tcpdump is usually the first tool I go to when I don't understand why my network application isn't working. Sometimes even before ping. TAPaaS allows cloud users to have similar functionality. Implementing this service will go a long way to help users understand why their application isn't working right.

Multicast and IGMP


Multicast communication has many uses. Its power is that it's both efficient and specific. However, neither of these strengths comes into play in the current multicast implementation. This can be improved greatly.


As you can see, we have a lot planned for the next cycle. It would be great if you could join us to suggest features, priorities, or even patches!

Stay tuned for information regarding the vPTG for Queens.

In the meantime, you can find us on the IRC, on Freenode, in #openstack-dragonflow !

by Omer Anson (noreply@blogger.com) at October 01, 2017 01:35 PM

September 29, 2017

OpenStack Blog

Developer Mailing List Digest September 23-29 2017


Sydney Forum

General Links

Etherpads (copied from Sydney Forum wiki)


If you want to post an idea, but aren’t working with a specific team or working group, you can use these:

Etherpads from Teams and Working Groups

Garbage Patches for Simple Typo Fixes

  • There is some agreement that we as a community have to do something beyond mentoring new developers.
    • Others have mentioned that some companies are doing this to game the system in other communities besides OpenStack.
      • Gain: showing a high contribution level via “low quality” patches.
    • Some people in the community want to put a stop to this, figuratively with a stop sign; otherwise, things will never improve. If we don’t do something now we are hurting everyone, including those developers who could have made more meaningful contributions.
    • Others would like us, before we create harsh processes, to collect data showing that earlier attempts to provide guidance have not worked.
    • We have a lot of anecdotal information right now that we need to collect and summarize.
    • If the results show that there are clear abuses, rather than misunderstandings, then we can use that data to design effective blocks without hurting other contributors or creating a reputation that our community is not welcoming.
  • Some are unclear why there is so much outrage about these patches, to begin with. They are fixing real things.
    • Maybe there is a CI cost, but the faster they are merged the less likely someone is to propose it in the future which keeps the CI cost down.
    • If people are deeply concerned about CI resources, step one is to give us a better accounting into their existing system to see where resources are currently spent.
  • Thread

Status of the Stewardship Working Group

  • The stewardship working group was created after the first session of leadership training that the Technical Committee, User Committee, Board and other community members were invited to participate in 2016.
  • Follow-up on what we learned at ZingTrain and push adoption of the tools we discovered there.
  • While we did (and continue to do) some of this:
    • The activity of the workgroup mostly died when we decided to experiment getting rid of weekly meetings for greater inclusion.
    • Lost original leadership.
  • The workgroup is dormant until someone steps up and leads it again.
  • Join us on IRC Freenode in channel openstack-swg if interested.
  • Message

Improving the Process for Release Marketing

  • Release marketing is a critical part of sharing what’s new with each release.
  • Let’s work together on reworking how the marketing community and projects work together to make the release communications happen.
  • Having multiple, repetitive demands to summarize “top features” at release time can be a nuisance, and having to re-collect the information each time isn’t an effective use of time.
    • Being asked to craft a polished, press-friendly message out of a release can feel far outside a PTL’s focus areas or skills.
    • Technical content marketers have to dig the key features out of release notes, mailing lists, specifications, and roadmaps, which means interesting features are sometimes overlooked.
  • To address this gap, the release team and foundation marketing team proposed collecting information as part of the release tagging process.
    • Highlights for the series (about three items) will be collected in the deliverable files.
    • The text will be used to build a landing page on release.openstack.org that shows the “key features” flagged by PTLs that the marketing teams should be looking at during release communication times.
      • This page will link to the release notes so marketers can gather additional information.
  • Message

Simplification in OpenStack

  • Two camps appear: people that want to see OpenStack as a product with a way of doing deployments and the people who want to focus on configuration management tools.
    • One person gives an example of using both Ubuntu MAAS and Puppet; unlike MAAS, the Puppet solution allowed reusing existing deployment methodologies.
  • We should start promoting and using a single solution for the bulk of the community efforts. Right now we do that with Devstack as a reference implementation that nobody should use for anything but dev/test.
    • This sort of idea could make other deployment efforts relevant.
  • Kolla came up at the PTG: scenario-based testing and documentation based on different “constellations” or use cases.
    • Puppet and TripleO have both been doing this.
    • If you break down actual use cases, most people want Nova (QEMU+KVM), Neutron (VXLAN, potentially VLAN), and Cinder (Ceph).
      • If we agreed to cover 90% of users, that would boil down to 4 to 5 different “constellations.”
    • Someone has been working on a local testing environment, and it boils down to this.
  • Thread

by Mike Perez at September 29, 2017 11:46 PM

OpenStack Superuser

How to upgrade to Pike using Kolla and Kayobe

We have previously described a new kind of OpenStack infrastructure, built to combine polymorphic flexibility with HPC levels of performance, in the context of our project with the Square Kilometre Array. To take advantage of OpenStack’s latest capabilities, we recently upgraded that infrastructure from Ocata to Pike.

Early on, we took a design decision to base our deployments on Kolla, which uses Docker to containerize the OpenStack control plane, transforming it into something approximating a microservice architecture.

Kolla is in reality several projects. There is the project to define the composition of the Docker containers for each OpenStack service, and then there are the projects to orchestrate the deployment of Docker containers across one or more control plane hosts. This could be done using Kolla-Kubernetes, but our preference is for Kolla-Ansible.

Kolla-Ansible builds upon a set of hosts already deployed and configured up to a baseline level where Ansible can drive the Docker deployment. Given we are typically starting from pallets of new servers in a loading dock, there is a gap to be filled to get from one to the other. For that role, we created Kayobe, loosely defined as “Kolla on Bifrost”, intended to perform a similar role to TripleO but using only Ironic for the undercloud seed and driven by Ansible throughout. This approach has enabled us to incorporate some compelling features, such as Ansible-driven configuration of BIOS and RAID firmware parameters and network switch configuration.

There is no doubt that Kayobe has been a huge enabler for us, but what about Kolla? One of the advantages claimed for a containerized control plane is how it simplifies the upgrade process by severing the interlocking package dependencies of different services. This week we put this to the test, by upgrading a number of systems from Ocata to Pike.

This is a short guide to how we did it, and how it worked out…

Have a working test plan

It may seem obvious, but it isn’t always treated as the obvious starting point. Make a set of tests to ensure that your OpenStack system is working before you start, then repeat these tests at any convenient point. By starting with a test plan that you know passes, you’ll know for sure if you’ve broken something.

Otherwise in the depths of troubleshooting you’ll have a lingering doubt that perhaps your cloud was broken in this way all along…
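As a starting point, a minimal smoke test along these lines exercises the main services; the flavor, image and network names (m1.small, cirros, demo-net) are placeholders for whatever exists in your cloud:

```shell
#!/bin/bash
# Hypothetical smoke test; resource names are placeholders.
set -e

openstack token issue > /dev/null           # Keystone answers
openstack network list > /dev/null          # Neutron answers
openstack volume create --size 1 smoke-vol  # Cinder can create a volume
openstack server create --flavor m1.small --image cirros \
    --network demo-net --wait smoke-vm      # Nova can boot a VM
openstack server add volume smoke-vm smoke-vol  # volume attach works

# Clean up
openstack server delete --wait smoke-vm
openstack volume delete smoke-vol
echo "Smoke test passed"
```

Running the same script before and after each step gives a consistent pass/fail signal.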

Preparing the system for upgrade

We brought the system to the latest on the stable/ocata branch. This in itself shakes out a number of issues. Just how healthy are the kernel and OS on the controller hosts? Are the Neutron agent containers spinning, looking for lost namespaces? Is the kernel blocking on most cores, spewing out reams of kernel:NMI watchdog: BUG: soft lockup - CPU#2 stuck for 23s!?

A host in this state is unlikely to succeed in moving one patchset forward, let alone a major OpenStack release.

One of Kolla’s strengths is the elimination of dependencies between services. It makes it possible to deploy different versions of OpenStack services without worrying about dependency conflicts. This can be a very powerful advantage.

The ability to update a Kolla container forward along the same stable release branch establishes that the basic procedure is working as expected. Getting the control plane migrated to the tip of the current release branch is a good precursor to making the version upgrade.
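For a stock Kolla-Ansible setup, the same-branch refresh followed by the major upgrade looks roughly like the following sketch (the inventory path is illustrative, and Kayobe wraps these steps in its own commands):

```shell
# 1. Move to the tip of the current stable branch first:
kolla-ansible -i /etc/kolla/inventory pull      # fetch the latest ocata images
kolla-ansible -i /etc/kolla/inventory deploy    # roll them out

# 2. Then set openstack_release to pike in globals.yml and upgrade:
kolla-ansible -i /etc/kolla/inventory pull
kolla-ansible -i /etc/kolla/inventory upgrade
```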

Staging the upgrade

Take the leap on a staging or development system and you’ll be more confident of landing in one piece on the other side. In tests on a development system, we identified and fixed a number of issues that would each have become a major problem on the production system upgrade.

Even a single-node staging system will find problems for you.

For example:

  • During the Pike upgrade, the Docker Python bindings package is renamed from docker_py to docker. The two are mutually exclusive: the Python environment we use for Kolla-Ansible must start the process with docker_py installed and, at the appropriate point, transition to docker. We found a way through and extended Kayobe to perform this orchestration.
  • We carried forward a piece of work to enable our Kolla logs to flow via Fluentd to Monasca, which has just made its way upstream.
  • We hit a problem with Kolla-Ansible’s RabbitMQ containers generating duplicate entries in /etc/hosts, which we worked around while the root cause is investigated.
  • We found and fixed some more issues with Kolla-Ansible pre-checks for both Ironic and Murano.
  • We hit this bug with generating config for mariadb – easily fixed once the problem was identified.
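The docker_py-to-docker transition mentioned above boils down to something like this inside the virtualenv used by Kolla-Ansible (a sketch; in our case Kayobe orchestrates the ordering, and the version pin is illustrative):

```shell
# The two packages install into the same module path, so remove the old
# bindings before installing the new ones:
pip uninstall -y docker-py
pip install 'docker>=2.0.0'
```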

Performing the upgrade

On the day, at a production scale, new problems can occur that were not exposed at the scale of a staging system.

In a production upgrade, the best results come from bringing all the technical stakeholders together while the upgrade progresses. This enables a team to draw on all the expertise it needs to work through issues encountered.

In production upgrades, we worked through new issues:

That final point should have been found by our test plan, but was not covered (this time). Arguably it should have been found by Kolla-Ansible’s CI testing too.

The early bird gets the worm

Being an early adopter has both benefits and drawbacks. Kolla, Ansible and Kayobe have made it possible to do what we did – successfully – with a small but talented team.

Our users have scientific work to do, and our OpenStack projects exist to support that.

We are working to deliver infrastructure with cutting-edge capabilities that exploit OpenStack’s latest features. We are proud to take some credit for our upstream contributions, and excited to make the most of these new powers in Pike.

This post first appeared on StackHPC. Superuser is always interested in community content; get in touch: editorATopenstack.org

The post How to upgrade to Pike using Kolla and Kayobe appeared first on OpenStack Superuser.

by Stig Telfer at September 29, 2017 04:14 PM


OpenStack 3rd Party CI with Software Factory


When developing for an OpenStack project, one of the most important aspects to cover is to ensure proper CI coverage of our code. Each OpenStack project runs a number of CI jobs on each commit to test its validity, so thousands of jobs are run every day in the upstream infrastructure.

In some cases, we will want to set up an external CI system, and make it report as a 3rd Party CI on certain OpenStack projects. This may be because we want to cover specific software/hardware combinations that are not available in the upstream infrastructure, or want to extend test coverage beyond what is feasible upstream, or any other reason you can think of.

While the process to set up a 3rd Party CI is documented, some implementation details are missing. In the RDO Community, we have been using Software Factory to power our 3rd Party CI for OpenStack, and it has worked very reliably over several cycles.

The main advantage of Software Factory is that it integrates all the pieces of the OpenStack CI infrastructure in an easy to consume package, so let's have a look at how to build a 3rd party CI from the ground up.


You will need the following:

  • An OpenStack-based cloud, which will be used by Nodepool to create temporary VMs where the CI jobs will run. It is important to make sure that the default security group in the tenant accepts SSH connections from the Software Factory instance.
  • A CentOS 7 system for the Software Factory instance, with at least 8 GB of RAM and 80 GB of disk. It can run on the OpenStack cloud used for Nodepool; just make sure it runs in a separate project.
  • DNS resolution for the Software Factory system.
  • A 3rd Party CI user on review.openstack.org. Follow this guide to configure it.
  • Some previous knowledge on how Gerrit and Zuul work is advisable, as it will help during the configuration process.

Basic Software Factory installation

For a detailed installation walkthrough, refer to the Software Factory documentation. We will highlight here how we set it up on a test VM.

Software installation

On the CentOS 7 instance, run the following commands to install the latest release of Software Factory (2.6 at the time of this article):

$ sudo yum install -y https://softwarefactory-project.io/repos/sf-release-2.6.rpm
$ sudo yum update -y
$ sudo yum install -y sf-config

Define the architecture

Software Factory has several optional components, and can be set up to run them on more than one system. In our setup, we will install the minimum required components for a 3rd party CI system, all on a single system.

$ sudo vi /etc/software-factory/arch.yaml

Make sure the nodepool-builder role is included. Our file will look like:

description: "OpenStack 3rd Party CI deployment"
inventory:
  - name: managesf
    roles:
      - install-server
      - mysql
      - gateway
      - cauth
      - managesf
      - gitweb
      - gerrit
      - logserver
      - zuul-server
      - zuul-launcher
      - zuul-merger
      - nodepool-launcher
      - nodepool-builder
      - jenkins

In this setup, we are using Jenkins to run our jobs, so we need to create an additional file:

$ sudo vi /etc/software-factory/custom-vars.yaml

And add the following content

nodepool_zuul_launcher_target: False

Note: As an alternative, we could use zuul-launcher to run our jobs and drop Jenkins. In that case, there is no need to create this file. However, later when defining our jobs we will need to use the jobs-zuul directory instead of jobs in the config repo.

Edit Software Factory configuration

$ sudo vi /etc/software-factory/sfconfig.yaml

This file contains all the configuration data used by the sfconfig script. Make sure you set the following values:

  • Password for the default admin user.
    admin_password: supersecurepassword
  • The fully qualified domain name for your system.
    fqdn: sftests.com
  • The OpenStack cloud configuration required by Nodepool.
  - auth_url:
    name: microservers
    password: cloudsecurepassword
    project_name: mytestci
    region_name: RegionOne
    regions: []
    username: ciuser
  • The authentication options if you want other users to be able to log into your instance of Software Factory using OAuth providers like GitHub. This is not mandatory for a 3rd party CI. See this part of the documentation for details.

  • If you want to use LetsEncrypt to get a proper SSL certificate, set:

  use_letsencrypt: true

Run the configuration script

You are now ready to complete the configuration and get your basic Software Factory installation running.

$ sudo sfconfig

After the script finishes, just point your browser to https:// and you can see the Software Factory interface.

SF interface

Configure SF to connect to the OpenStack Gerrit

Once we have a basic Software Factory environment running, and our service account set up in review.openstack.org, we just need to connect both together. The process is quite simple:

  • First, make sure the local Zuul user SSH key, found at /var/lib/zuul/.ssh/id_rsa.pub, is added to the service account at review.openstack.org.

  • Then, edit /etc/software-factory/sfconfig.yaml again, and edit the zuul section to look like:

  default_log_site: sflogs
  external_logservers: []
  gerrit_connections:
    - name: openstack
      hostname: review.openstack.org
      port: 29418
      puburl: https://review.openstack.org/r/
      username: mythirdpartyciuser
  • Finally, run sfconfig again. Log information will start flowing in /var/log/zuul/server.log, and you will see a connection to review.openstack.org port 29418.
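A quick way to confirm that Gerrit accepts the Zuul key is an SSH probe with the service account (the username below is the example one from sfconfig.yaml):

```shell
# Should print the Gerrit version if the key and username are accepted
ssh -i /var/lib/zuul/.ssh/id_rsa -p 29418 \
    mythirdpartyciuser@review.openstack.org gerrit version
```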

Create a test job

In Software Factory 2.6, a special project named config is automatically created on the internal Gerrit instance. This project holds the user-defined configuration, and changes to the project must go through Gerrit.

Configure images for nodepool

All CI jobs will use a predefined image, created by Nodepool. Before creating any CI job, we need to prepare this image.

  • As a first step, add your SSH public key to the admin user in your Software Factory Gerrit instance.

Add SSH Key

  • Then, clone the config repo on your computer and edit the nodepool configuration file:
$ git clone ssh://admin@sftests.com:29418/config sf-config
$ cd sf-config
$ vi nodepool/nodepool.yaml
  • Define the disk image and assign it to the OpenStack cloud defined previously:
diskimages:
  - name: dib-centos-7
    elements:
      - centos-minimal
      - nodepool-minimal
      - simple-init
      - sf-jenkins-worker
      - sf-zuul-worker
    env-vars:
      DIB_CHECKSUM: '1'
      QEMU_IMG_OPTIONS: compat=0.10

labels:
  - name: dib-centos-7
    image: dib-centos-7
    min-ready: 1
    providers:
      - name: microservers

providers:
  - name: microservers
    cloud: microservers
    clean-floating-ips: true
    image-type: raw
    max-servers: 10
    boot-timeout: 120
    pool: public
    rate: 2.0
    networks:
      - name: private
    images:
      - name: dib-centos-7
        diskimage: dib-centos-7
        username: jenkins
        min-ram: 1024
        name-filter: m1.medium

First, we are defining the diskimage-builder elements that will create our image, named dib-centos-7.

Then, we are assigning that image to our microservers cloud provider, and specifying that we want to have at least 1 VM ready to use.

Finally we define some specific parameters about how Nodepool will use our cloud provider: the internal (private) and external (public) networks, the flavor for the virtual machines to create (m1.medium), how many seconds to wait between operations (2.0 seconds), etc.

  • Now we can submit the change for review:
$ git add nodepool/nodepool.yaml
$ git commit -m "Nodepool configuration"
$ git review
  • In the Software Factory Gerrit interface, we can then check the open change. The config repo has some predefined CI jobs, so you can check if your syntax was correct. Once the CI jobs show a Verified +1 vote, you can approve it (Code Review +2, Workflow +1), and the change will be merged in the repository.

  • After the change is merged in the repository, you can check the logs at /var/log/nodepool and see the image being created, then uploaded to your OpenStack cloud.

Define test job

There is a special project in OpenStack meant to be used to test 3rd Party CIs, openstack-dev/ci-sandbox. We will now define a CI job to "check" any new commit being reviewed there.

  • Assign the nodepool image to the test job
$ vi jobs/projects.yaml

We are going to use a pre-installed job named demo-job. All we have to do is to ensure it uses the image we just created in Nodepool.

- job:
    name: 'demo-job'
    defaults: global
    builders:
      - prepare-workspace
      - shell: |
          cd $ZUUL_PROJECT
          echo "This is a demo job"
    wrappers:
      - zuul
    node: dib-centos-7
  • Define a Zuul pipeline and a job for the ci-sandbox project
$ vi zuul/upstream.yaml

We are creating a specific Zuul pipeline for changes coming from the OpenStack Gerrit, and specifying that we want to run a CI job for commits to the ci-sandbox project:

pipelines:
  - name: openstack-check
    description: Newly uploaded patchsets enter this pipeline to receive an initial +/-1 Verified vote from Jenkins.
    manager: IndependentPipelineManager
    source: openstack
    precedence: normal
    require:
      open: True
      current-patchset: True
    trigger:
      openstack:
        - event: patchset-created
        - event: change-restored
        - event: comment-added
          comment: (?i)^(Patch Set [0-9]+:)?( [\w\\+-]*)*(\n\n)?\s*(recheck|reverify)
    success:
      openstack:
        verified: 0
    failure:
      openstack:
        verified: 0

projects:
  - name: openstack-dev/ci-sandbox
    openstack-check:
      - demo-job

Note that we are telling our job not to send a vote for now (verified: 0). We can change that later if we want to make our job voting.
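The comment trigger relies on a regular expression to catch “recheck”/“reverify” comments; a quick way to sanity-check it (here in Python, purely illustrative) is:

```python
import re

# The trigger regex from the Zuul pipeline configuration
pattern = r'(?i)^(Patch Set [0-9]+:)?( [\w\\+-]*)*(\n\n)?\s*(recheck|reverify)'

# Comments that should re-trigger the job...
assert re.match(pattern, "recheck")
assert re.match(pattern, "Patch Set 3:\n\nreverify")
# ...and one that should not
assert re.match(pattern, "looks good to me") is None
```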

  • Apply configuration change
$ git add zuul/upstream.yaml jobs/projects.yaml
$ git commit -m "Zuul configuration for 3rd Party CI"
$ git review

Once the change is merged, Software Factory's Zuul process will be listening for changes to the ci-sandbox project. Just try creating a change and see if everything works as expected!


If something does not work as expected, here are some troubleshooting tips:

Log files

You can find the Zuul log files in /var/log/zuul. Zuul has several components, so start with checking server.log and launcher.log, the log files for the main server and the process that launches CI jobs.

The Nodepool log files are located in /var/log/nodepool. builder.log contains the log from image builds, while nodepool.log has the log for the main process.

Nodepool commands

You can check the status of the virtual machines created by nodepool with:

$ sudo nodepool list

Also, you can check the status of the disk images with:

$ sudo nodepool image-list

Jenkins status

You can see the Jenkins status from the GUI, at https:///jenkins/, if logged in as the admin user. If no machines show up in the 'Build Executor Status' pane, either Nodepool could not launch a VM, or there was an issue in the connection between Zuul and Jenkins. In that case, check the Jenkins logs at `/var/log/jenkins`, or restart the service if there are errors.

Next steps

For now, we have only run a test job against a test project. The real power comes when you create a proper CI job for a project you are interested in. You should now:

  • Create a file under jobs/ with the JJB definition for your new job.

  • Edit zuul/upstream.yaml to add the project(s) you want your 3rd Party CI system to watch.
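As a sketch, a new job definition under jobs/ could follow the same shape as demo-job; the job name and test command below are placeholders:

```yaml
# jobs/myproject.yaml -- hypothetical job name and commands
- job:
    name: 'myproject-unit-tests'
    defaults: global
    builders:
      - prepare-workspace
      - shell: |
          cd $ZUUL_PROJECT
          tox -e py27
    wrappers:
      - zuul
    node: dib-centos-7
```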

by jpena at September 29, 2017 12:37 PM


Aptira’s Growing Army of Solutionauts: Welcome Michael, Hugh and Craig!

Our plan to take over the world continues, with the introduction of three new solutionauts to the Aptira army.

Michael Still

Principal Solutionaut

Michael has spent the last decade working on “web scale” systems for companies like Google, Canonical and Rackspace. Since 2011 he has been an active contributor to the OpenStack Compute project, including serving as a core reviewer and the Project Technical Lead. In his spare time Michael bushwalks, runs archery events, and enjoys medium distance running.

Hugh Blemings

Principal Consultanaut

Hugh joined Aptira with over 15 years of experience in Free and Open Source software/hardware, most recently working on OpenStack at Rackspace. Prior to being involved in the OpenStack project, he worked on everything from Linux kernel code for supercomputers, to open hardware for microcontrollers (at one extreme) and hyperscale data centres (at the other) to bespoke network security systems.  Hugh is based in Melbourne, Australia and when not tinkering with technology enjoys playing rock/funk/blues keyboards.

Craig Armour

Solutionaut of Technology

Craig has been a consultant and technologist for over 20 years, having worked with a wide range of organisations from emerging startups, to market leading multinationals (Including a number of years with Roland!). Craig is looking to put a strong background in highly available, high performing systems, to work on the performance and availability analysis for some of our key customers.  Outside of work, Craig is a passionate sailor, and while you wouldn’t know it now, was a keen ultra endurance mountain biker, having competed in the world championships for both sports.

Our entire team of solutionauts will be taking over the OpenStack Summit in Sydney, and some will also be at the Hackathon. Come see us, buy us a beer and learn more about our plans for world domination.

The post Aptira’s Growing Army of Solutionauts: Welcome Michael, Hugh and Craig! appeared first on Aptira Cloud Solutions.

by Aptira at September 29, 2017 01:24 AM

September 28, 2017

SWITCH Cloud Blog

SWITCHdrive Over IPv6

When we built the SWITCHdrive service on the OpenStack platform that was to become SWITCHengines, that platform didn’t really support IPv6 yet. But since Spring 2016 it does. This week, we enabled IPv6 in SWITCHdrive and performed some internal tests. Today around noon, we published its IPv6 address (“AAAA record”) in the DNS. We quickly saw around 5% of accesses use IPv6 instead of IPv4.

Screen Shot 2017-09-28 at 22.10.45

In the evening, this percentage climbed to about 14%. This shows the relatively good support for IPv6 on Swiss broadband (home) networks, notably by the good folks at Swisscom.

The lower percentage during office (and lecture, etc.) hours shows that the IPv6 roll-out to higher education campuses still has some way to go. Our SWITCHlan backbone has been running “dual-stack” (IPv4 and IPv6 in parallel) in production for more than 10 years, and most institutions have added IPv6 configuration to their connections to us. But campus networks are wonderfully complex, so getting IPv6 deployed to every network plug and every wireless access point is a daunting task. Some schools are almost there, including some large ones that don’t use SWITCHdrive—yet!?—so the 5% may underestimate the extent of the roll-out for the overall SWITCH community. The others will follow in their footsteps. They can count on the help of the community and benefit from IPv6 training courses organized by our colleagues in the security and network teams. Contact us if you need help!
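Checking whether a service publishes an AAAA record is a one-liner; the hostname below is a placeholder for the service's actual DNS name:

```shell
# Prints the IPv6 address(es) if an AAAA record exists
dig +short AAAA drive.example.org
```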

Filed under: IPv6, openstack, SWITCHdrive, SWITCHengines

by Simon Leinen at September 28, 2017 09:55 PM

OpenStack Superuser

Kickstart your OpenStack skills with an Outreachy internship

Outreachy offers three-month internships with stipends for people from groups that are traditionally underrepresented in tech.

Outreachy interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science. The deadline for applications for the December-March internships is Oct. 23, 2017.

OpenStack has three projects that need help this time around:

  • Work on Go and Container related projects in OpenStack.
    Required skills: Python, Go
  • Add introspection HTTP REST points to the Kubernetes API watchers.
    Required skills: Python
    Optional skills: API design
  • Improve Keystone testing jobs in Jenkins to cover new Keystone features, such as fernet tokens, the v3 API and functional tests
    Required skills: Python
    Optional skills: Tests on CI, but we’ll teach you

It’s a good way to get started — just ask Victoria Martinez de la Cruz, once an Outreachy mentee, now a software engineer at Red Hat. She eventually went on to become a Zaqar and Trove core reviewer and is currently working on Manila. She’s active in Outreachy and Google Summer of Code as a coordinator and frequently speaks about OpenStack. (You can catch her talking about CephFS at the upcoming Sydney Summit.) Selected applicants are paired with a mentor, usually a full-time contributor to the project, and will spend three months learning what it’s like to work in the open source world, she tells Superuser.

You can find out more about OpenStack’s Outreachy program here.




The post Kickstart your OpenStack skills with an Outreachy internship appeared first on OpenStack Superuser.

by Superuser at September 28, 2017 01:18 PM

Dragonflow Team

Bare-Metal networking in OpenStack-ironic

Ironic is an OpenStack project for provisioning bare-metal machines as part of an OpenStack deployment. Ironic manages those servers using common management protocols (e.g. PXE and IPMI) as well as vendor-specific management protocols. (More information about Ironic and how it works can be found in the Ironic documentation.)

In this post I want to focus on the networking aspects of Ironic. Ironic uses Neutron (the networking API of OpenStack) for configuring the network. A bare-metal deployment is a little different from a VM deployment, and Ironic has some extra requirements for the Neutron ML2 implementation. (All operations mentioned in this post (e.g. create-network, create-port, bind-port, etc.) should be implemented by a Neutron ML2 driver.)

This post is an introduction to a follow-up post that will describe how we plan to implement these networking requirements in Dragonflow.

Ironic networking overview         

What does Ironic require from the Neutron implementation?

  • Ironic defines three different network types for bare metal (as documented in the spec and docs):
    • Cleaning network - a network used to clean the bare-metal server and make sure the node is ready for a new workload. It is recommended to create this network as a provider VLAN network, for separation from the tenant VLAN ranges.
    • Provisioning network - a network used for regular management of the node (tear-down, reboot, PXE boot, etc.). This network is also recommended to be a provider VLAN network, for the same reasons as the cleaning network. (The operator can use the same network for provisioning and cleaning, but Ironic defines both types to allow separating the new/clean nodes waiting for deployment from the dirty nodes waiting to be cleaned.)
    • Tenant networks - networks used to access the bare-metal server for any other purpose; these should be managed like any other network in the cloud. When a bare-metal node is connected to a tenant network, it should not be connected to the provisioning network, for security reasons (the same provisioning network is used for all bare-metal servers, which breaks isolation requirements).
  • Supporting port groups - bare-metal deployments often need to treat a group of physical ports as one logical port (e.g. a BOND/LAG). These port groups need to be managed by Neutron.
  • Supporting PXE boot with DHCP - the most common way to boot a bare-metal server is PXE boot. The PXE boot procedure uses DHCP to retrieve the boot file name and TFTP server address. Ironic passes the values of those parameters to Neutron (using the neutron extra_dhcp_opt attribute), and the DHCP server implementation in Neutron should use them to answer PXE DHCP requests.

The networking building blocks of a bare-metal deployment

There are several components involved in the networking of a bare-metal deployment:
  1. The bare-metal server itself.
  2. Ironic conductor - the software component of Ironic that actually controls the bare-metal server (this includes the TFTP server for PXE boot).
  3. DHCP server - assigns IP addresses to the bare-metal server, and supports the PXE boot parameters as well.
  4. Top-of-rack switch - we assume that the bare-metal server is physically connected to it, along with all the other components (compute nodes, the Ironic conductor node, etc.).
  5. Tenant networks - can be dynamically attached to and detached from the bare-metal node.
  6. Provider networks - for cleaning and provisioning, and for any other needs.

Example of deployment:

Bare-metal machine life cycle (from the networking side):
(The full state machine of an Ironic node can be found here.)
  1. Cleaning - make the node ready for a new job (uses the cleaning network).
  2. Provisioning - the Ironic conductor uses IPMI on the provisioning network to start the machine, and uses PXE to boot it with the desired image. The PXE boot process includes the following steps (all done on the provisioning network):
    1. Use DHCP to obtain the TFTP server address.
    2. Download the boot file from the TFTP server.
    3. Boot from the downloaded file.
  3. Connect to a tenant network - once the machine is up and running, it can be connected to a tenant network and managed like any VM. At this phase, traffic from the bare-metal server interacts with all the other components in the deployment (e.g. VMs, SNAT, DNAT, etc.).
    1. Ironic can rebind the physical ports that were used for the provisioning network to the tenant network. In that case the bare-metal server loses connectivity with the Ironic conductor and with bare-metal provisioning.
  4. Cleaning - back to step 1.


How Neutron learns about the bare-metal topology:

Neutron port configuration:
To notify Neutron about bare-metal ports, Ironic uses its own mechanisms to inspect the hardware, and forwards that information as part of the Neutron port configuration.
For that, two new fields were introduced in the Neutron lport (spec):
  • local_link_information - this field is located in the lport binding profile and is used to inform Neutron how the port is connected to the TOR switch. It includes three parameters:
    • switch_id - an identifier of the switch the port is connected to. It can be the switch MAC address or an OpenFlow-based datapath_id.
    • port_id - a physical port identifier on the switch.
    • switch_info - other information about the switch (an optional parameter).
  • port-groups - a list of parameters for configuring the LAG/BOND on the TOR.
The Neutron mechanism drivers should use this information when binding the lport.
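As an illustration, the body of a port-create request carrying this information might look like the following; this is a sketch of the JSON payload only, and every value (network ID, switch MAC, port name) is a hypothetical placeholder:

```python
import json

# Sketch of a Neutron port body for a bare-metal port; all values are
# hypothetical placeholders.
port_body = {
    "port": {
        "network_id": "11111111-2222-3333-4444-555555555555",
        "binding:vnic_type": "baremetal",
        "binding:profile": {
            "local_link_information": [
                {
                    "switch_id": "aa:bb:cc:dd:ee:ff",  # switch MAC or datapath_id
                    "port_id": "Ethernet1/10",         # physical port on the TOR
                    "switch_info": "tor-rack-3",       # optional free-form info
                }
            ]
        },
    }
}

# The three parameters described above are all present
lli = port_body["port"]["binding:profile"]["local_link_information"][0]
assert {"switch_id", "port_id", "switch_info"} <= set(lli)
print(json.dumps(port_body, indent=2))
```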

DHCP configuration:

Ironic uses the extra_dhcp_opt attribute on the Neutron port to configure the DHCP server to support PXE boot (DHCP options: boot-file-name and tftp-server-address). The Neutron ML2 driver should configure the DHCP server to answer with these values upon request.
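A port update that sets the PXE-related DHCP options could carry a body like the following sketch; the boot file name and server address are placeholder values:

```python
# Hypothetical extra_dhcp_opts payload for PXE boot; values are placeholders.
port_update = {
    "port": {
        "extra_dhcp_opts": [
            {"opt_name": "bootfile-name", "opt_value": "pxelinux.0"},
            {"opt_name": "tftp-server", "opt_value": "192.0.2.10"},
        ]
    }
}

# The DHCP server implementation should hand these back to a PXE client
opts = {o["opt_name"]: o["opt_value"]
        for o in port_update["port"]["extra_dhcp_opts"]}
assert opts["bootfile-name"] == "pxelinux.0"
assert opts["tftp-server"] == "192.0.2.10"
```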

by Eyal Leshem (noreply@blogger.com) at September 28, 2017 07:34 AM

September 27, 2017

OpenStack Superuser

Navigating Kubernetes and edge computing

Often the best path to working with two vanguard technologies is unclear. That’s why OpenDev, a recent two-day event sponsored by Ericsson, Intel and the OpenStack Foundation, dedicated a session to folks navigating Kubernetes and edge computing. Both technologies appear to be here to stay. “Containers are what the developers are using,” said Jeremy Huylebroeck of Orange Silicon Valley. “It’s way more convenient for them to actually publish their code and try things faster.”

OpenDev was devised as more of a workshop than a traditional conference; you can also check the event schedule for Etherpads from the individual sessions.

Moderated by Walmart’s Andrew Mitry, participants ranged from telecoms to large technology multinationals. The 54-minute working session touches on the following topics:

  • Deployment models
  • Infrastructure management at the edge
  • Edge evangelism/best practices
    “How do we convince vendors that converting to containers is a good idea?” asked Verizon’s Beth Cohen, adding that her company is working with a number of application security vendors who have VMs but none are containerized.
  • Bare metal
  • Robustness of applications under Kubernetes at the edge
  • Supporting stateful workloads in Kubernetes
    “Let me paint a picture of the challenge we see,” said Mitry. “Right now the Kubernetes community recommends that you run your stateful workloads outside Kubernetes.” That means your database, for example, might be running on a VM or a managed service outside Kubernetes. “That’s the prevailing advice. But if we’re talking about a relatively small edge platform, I don’t want to be running multiple types and flavors of infrastructure because one is better supported than the other, I’d like to standardize on one.”

You can check out the session’s Etherpad or catch the entire session on video here or below.


Stay tuned to Superuser for more on edge computing and Kubernetes.

Cover Photo // CC BY NC

The post Navigating Kubernetes and edge computing appeared first on OpenStack Superuser.

by Superuser at September 27, 2017 02:07 PM

Steve Hardy

OpenStack Days UK

OpenStack Days UK

Yesterday I attended the OpenStack Days UK event, held in London.  It was a very good day and there were a number of interesting talks, and it provided a great opportunity to chat with folks about OpenStack.

I gave a talk, titled "Deploying OpenStack at scale, with TripleO, Ansible and Containers", where I gave an update of the recent rework in the TripleO project to make more use of Ansible and enable containerized deployments.

I'm planning some future blog posts with more detail on this topic, but for now here's a copy of the slide deck I used, also available on github.

by Steve Hardy (noreply@blogger.com) at September 27, 2017 11:18 AM

Julie Pichon

OpenStack Client in Queens - Notes from the PTG

Here are a couple of notes about the OpenStack Client, taken while dropping in and out of the room during the OpenStack PTG in Denver, a couple of weeks ago.


The original plan was to simply get rid of deprecated stuff, change a few names here and there, and have few compatibility-breaking changes.

However, now shade may adopt the SDK and move some of its contents into it. Then shade would consume the SDK, and OSC would consume it as well. It would be pretty clean and easy to use, but would mean major breaking changes for OSC4. OSC would become a shim layer over osc-lib. The plugin interface is going to change, as the loading time is long - every command requires loading all of the plugins which takes over half of the loading time even though the commands themselves load quickly. (There will be more communication once we understand what the new plugin interface will look like.) OSC4 would rip out global argument processing and adopt os-client-config (breaking change). It would adopt the SDK and stop using the client libraries.

Note that this may all change depending on how the SDK situation evolves.

From the end-user perspective, some option names will change. There is some old cruft left around for compatibility reasons that will disappear (e.g. "ip floating" will be gone, it changed a year ago to "floating ip"). The column output will handle structured data better and some of this is already committed to the osc4 feature branch.

The order of commands will not be changed.

For authentication, the behaviour may change a bit between the CLI and clouds.yaml. os-client-config came along and changed a few things, notably with regard to precedence. The OSC way of doing things will be removed and replaced with OCC.

Best effort will be made not to break scripts. The "configuration show" command shows your current configuration but not where it comes from - it's a bit hard to do because of all the merging of parameters going on.

The conversation continued about auth, how shade uses adapters and may change the SDK to use them as well: would sessions or adapters make the most sense? I had to attend another session and missed the discussion and conclusions.

Command aliases

There was a long discussion around command aliases, as some commands are very long to type (e.g. healthmonitor). It was very clear that OSC does not want to get into the business of managing aliases itself (maintaining a master list of collisions, etc.), so it would be up to individual plugins. There could be an individual .osc config file that would do the short-to-long name mapping, similar to a shell alias. It shouldn't be part of the official plugin (otherwise, "why don't we just use those names to begin with?") but it could be another plugin that sets up alias mappings to the short name, or a second set of entry points, or a "list of shortcuts we found handy" in the documentation. Perhaps there should be a community-wide discussion about this.

Collisions are to be managed by users, not by OSC. Having one master list to manage the initial set of keywords is already an unfortunate compromise.
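As a toy sketch of the idea (the file format, alias names, and resolver below are entirely hypothetical — nothing like this exists in OSC today), an alias map could be as simple as:

```python
# Hypothetical short-to-long alias map, resolved before command dispatch.
# Collisions are the user's problem, as the room concluded.
ALIASES = {
    "hm": "loadbalancer healthmonitor",
    "fip": "floating ip",
}

def resolve(command: str) -> str:
    """Expand a leading alias token to its full command name."""
    head, _, rest = command.partition(" ")
    full = ALIASES.get(head, head)
    return f"{full} {rest}".strip()

print(resolve("hm list"))   # → loadbalancer healthmonitor list
print(resolve("fip list"))  # → floating ip list
```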

Filtering and others

It's not possible to do filtering on lists or any kind of complex filtering at the moment. The recommendation, or what people currently do, is to output to --json and pipe the output to jq to do what they need. The documentation should be extended to show how to do this.
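The pipeline the room described can be mimicked in a few lines; this sketch filters a JSON server list the way a `--json`-to-`jq` pipe would (the sample data and the equivalent shell command shown in the comment are illustrative):

```python
import json

# Hypothetical output of `openstack server list -f json`. The rough shell
# equivalent of the filter below would be:
#   openstack server list -f json | jq '.[] | select(.Status == "ACTIVE") | .Name'
servers_json = json.dumps([
    {"Name": "web-1", "Status": "ACTIVE"},
    {"Name": "db-1", "Status": "SHUTOFF"},
])

active = [s["Name"] for s in json.loads(servers_json) if s["Status"] == "ACTIVE"]
print(active)  # → ['web-1']
```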

At the moment filtering varies wildly between APIs and none of them are very expressive, so there isn't a lot OSC can do.

Tagged with: events, openstack

by jpichon at September 27, 2017 08:40 AM

September 26, 2017

Chris Dent

TC Report 39

It has been a while since the last one of these that had any substance. The run up to the PTG and travel to and fro meant either that not much was happening or I didn't have time to write. This week I'll attempt to catch up with TC activities (that I'm aware of) from the PTG and this past week.

Board Meeting

The Sunday before the PTG there was an all day meeting of the Foundation Board, the Technical Committee, the User Committee and members of the Interop and Product working groups. The agenda was oriented towards updates on the current strategic focus areas:

  • Better communicate about OpenStack
  • Community Health
  • Requirements: Close the feedback loop
  • Increase complementarity with adjacent technologies
  • Simplify OpenStack

Each group gave an overview of the progress they've made since Boston. Mark McLoughlin has a good overview of most of the topics covered.

I was on the hook to discuss what might be missing from the strategic areas. In the "Community Health" section we often discuss making the community inviting to new people, especially to under-represented groups and making sure the community is capable of creating new leaders. Both of these are very important (especially the first) but what I felt was missing was attention to the experience of the regular contributor to OpenStack who has been around for a while. A topic we might call "developer happiness". There are a lot of dimensions to that happiness, not all of which OpenStack is great at balancing.

It turns out that this was already a topic within the domain of Community Health but had been set aside while progress was being made on other topics. So now I've been drafted to be a member of that group. I will start writing about it soon.


The PTG was five days long. I intend to write a separate update about the days in the API and Nova rooms; what follows are notes from the TC-related sessions that I was able to attend.

As is the norm, there was an etherpad for the whole week, which for at least some things has relatively good notes. There's too much to report all that happened, so here are some interesting highlights:

  • To encourage community diversity and accept the reality of less-than-full time contributors it will become necessary to have more cores, even if they don't know everything there is to know about a project.
  • Before the next TC election (coming soon: nominations start 29 September) a report will be made on the progress made by the TC in the last 12 months, especially with regard to the goals expressed in the vision statement. We should have been doing this all along, but it is perhaps an especially good idea now that regular meetings have stopped.
  • The TC will take greater action to make sure that strategic priorities (in the sense of "these are some of the things the TC observes that OpenStack should care about") are effectively publicised. These are themes that fit neither in the urgency of the Top 5 list nor in the concreteness of OpenStack-wide Goals. One idea is to prepare a short list before each PTG to set the tone. Work remains to flesh this one out.

The Past Week

The week after the PTG it's hard to get rolling, so there's not a great deal to report from office hours or otherwise. The busiest day in #openstack-tc was Thursday where the discussion was mostly about Glare's application to be official. This has raised a lot of questions, many of which are in the IRC log or on the review. As is often the case with contentious project applications, the questions frequently reflect (as they should) the biases and goals the reviewers have for OpenStack as a whole. For example I asked "Why should Glare be an OpenStack project rather than a more global project (that happens to have support for keystone)?" while others expressed concern for any overlap (or perception thereof) between Glance and Glare and still others said the equivalent of "come on, enough with this, let's just get on with it, there's enough work to go around."

And with that I must end this for this week, as there's plenty of other work to do.

by Chris Dent at September 26, 2017 09:35 PM



Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.

