November 29, 2015

OpenStack in Production

Our cloud in Kilo

Following on from previous upgrades, CERN migrated the OpenStack cloud to Kilo between September and November. Along with the bug fixes, we plan to exploit the significant number of new features, especially those related to performance tuning. The overall cloud architecture was covered in a video from the Tokyo OpenStack summit.

As the LHC continues to run 24x7, these upgrades were done while the cloud was running and virtual machines were untouched.

Previous upgrades have been described in earlier posts.
The staged approach was used again. While most of the steps went smoothly, a few problems were encountered:
  • Cinder - we encountered a bug which led to a foreign key error. The cause appears to be related to UTF8. The patch was not completed in time, so it did not get included into the release. More details are in the mailing list thread.
  • Keystone - one of the configuration parameters for caches had changed syntax and this was not reflected in the configuration generated by Puppet. The symptoms were high load on the Keystone servers since caching was not enabled.
  • Glance - given the rolling upgrade on Glance, we took advantage of having virtualised the majority of the Glance server pool. This allows new resources to be brought online with a Juno configuration and the old ones deleted.
  • Nova
    • Following the upgrade, we had an outage of the metadata service for the OpenStack-specific metadata; the EC2 metadata continued to work fine. This appears to be a cells-related issue.
    • The VM resize functions give errors during execution. We're tracking this with the upstream developers.
    • We wanted to use the latest Nova NUMA features. We encountered a problem with cells and this feature, although it worked well in a non-cells cloud; this is being tracked upstream. We will use the new features for performance optimisation once these problems are resolved.
Catching these cases with cells early is part of the scope of the Cells V2 project, to which we are contributing along with the BARC centre in Mumbai, so that the cells configuration becomes the default (with only a single cell) and the upstream test cases are enhanced to validate the multi-cell configuration.

As some of the hypervisors are still running Scientific Linux 6, we used the approach from GoDaddy of packaging the components using software collections. We used this for nova and ceilometer, the agents installed on the hypervisors. The controllers were upgraded to CentOS 7 as part of the upgrade to Kilo.
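
For illustration, running a service from a software collection wraps the command with scl; the collection name below is hypothetical and depends on how the packages were built:

    $ scl enable python27 'nova-compute --config-file /etc/nova/nova.conf'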

Overall, getting to Kilo enables new features and includes bug fixes which reduce administration effort. Keeping up with new releases requires careful planning and sharing upstream activities such as the Puppet modules, but it has proven to be the best approach. With many of the CERN OpenStack team at the summit in Tokyo, we did not complete the upgrade before Liberty was released, but it was completed soon afterwards.

With the Kilo base in production, we are now ready to start work on the Nova network to Neutron migration, deployment of the new EC2 API project and enabling Magnum for container native applications.

by Tim Bell at November 29, 2015 03:46 PM

November 28, 2015

OpenStack Blog

Technical Committee Highlights November 27, 2015

Welcome back from Tokyo. While there, I did not realize that a three-dimensional subway map of Tokyo exists, but I sure loved traveling on the subway.

Welcoming the latest projects to OpenStack

Speaking of amazing cities and their subway maps, we should mention the growing list of OpenStack projects. We welcome these projects, which have joined OpenStack governance since the OpenStack Summit:

    • Monitoring – both OpenStack and its resources: monasca
    • Backups for file systems using OpenStack: freezer
    • Deployment for OpenStack: fuel
    • Cluster management service for Compute and Orchestration: senlin
    • Integrate Hyper-V, Windows and related components into OpenStack: winstackers

During these last weeks, the TC also received other project review requests that were put on hold until those projects and/or teams are more mature and ready to join the Big Tent.

Reports from TC Working Groups

Project Team Guide

The Project Team Guide team held a session back in Tokyo to discuss the next steps for this project. As a result of that session, more content will be created (or moved from the wiki): add community best practices, detail the benefits and trade-offs of the various release models, introduce deliverables and tags (as maintained in the governance repo’s projects.yaml), detail what common infrastructure projects can build on, and so on.

Communications Group

The communications working group (the one that brings these blog posts to you) will continue to operate under the same model. Announcements, summaries and communications will be sent out as they have been during the last cycle. Remember that feedback is always welcome and the group is looking for ways to improve. Talk back to us, we’re listening!

Project Tags

These are the latest new project tags created by the Technical Committee.

    • team:single-vendor: A new tag was added to communicate when a project team is currently driven by a single organization. We had some discussion about using the term “vendor” versus “organization,” but the intent is to signal the opposite of diversity in the team’s makeup.
    • assert:supports-upgrade: A new tag has been added to communicate when a project supports upgrades. Teams should apply this tag to their project if they assert they intend to support ongoing upgrades.
    • assert:supports-rolling-upgrade: A new tag has been added to communicate when a project supports rolling upgrades. Teams should apply this tag to their project if they assert that operators can expect to perform rolling upgrades of their project, where the service can remain running while the upgrade is performed. (A sketch of how these tags are applied follows this list.)
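
For illustration, a team asserts these tags against its deliverables in the governance repository's projects.yaml; a simplified, hypothetical entry might look like this:

    Nova:
      ptl: ...
      deliverables:
        nova:
          repos:
            - openstack/nova
          tags:
            - assert:supports-upgrade
            - assert:supports-rolling-upgrade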

by Anne Gentle at November 28, 2015 10:36 PM

November 27, 2015

OpenStack Blog

OpenStack Developer Mailing List Digest November 21-27

Success Bot Says

  • vkmc: We got 7 new interns for the Outreachy program December 2015-March 2016 round.
  • bauzas: Reno in place for Nova release notes.
  • AJaeger: We now have Japanese Install Guides published for Liberty [1].
  • robcresswell: Horizon had a bug day! We made good progress on categorizing new bugs and removing old ones, with many members of the community stepping up to help.
  • AJaeger: The OpenStack Architecture Design Guide has been converted to RST [2].
  • AJaeger: The Virtual Machine Image guide has been converted to RST [3].
  • Ajaeger: Japanese Networking Guide is published as draft [4].
  • Tell us yours via IRC with a message “#success [insert success]”.

Release countdown for week R-18, Nov 30 – Dec 4

  • All projects following the cycle-with-milestones release model should be preparing for the milestone tag.
  • Release Actions:
    • All deliverables must have Reno configured before adding a Mitaka-1 milestone tag.
    • Use the openstack/releases repository to manage the Mitaka-1 milestone tags (a sketch of a deliverable file follows this list).
    • As a one-time change, we will simplify how we specify versions for projects by moving to using only tags instead of the version entry in setup.cfg.
  • Stable release actions: Review stable/liberty branches for patches that have landed since the last release and determine if your deliverables need new tags.
  • Important dates:
    • Deadline for requesting a Mitaka-1 milestone tag: December 3rd
    • Mitaka-2: Jan 19-21
    • Mitaka release schedule [5]
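
For reference, a milestone tag request is a patch adding an entry to a deliverable file in the openstack/releases repository; a simplified, hypothetical entry (the hash is a placeholder) might look like this:

    launchpad: nova
    releases:
      - version: 13.0.0.0b1
        projects:
          - repo: openstack/nova
            hash: 0123456789abcdef0123456789abcdef01234567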

Common OpenStack ‘Third-Party’ CI Solution – DONE!

  • Ramy Asselin, who has been spearheading the work on a common third-party CI solution, announces that it is done!
    • This solution uses the same tools and scripts as the upstream Jenkins CI solution.
    • The documentation for setting up a 3rd party ci system on 2 VMs (1 private that runs the CI jobs, and 1 public that hosts the log files) is now available here [6] or [7].
    • There are a number of companies today using this solution for their third-party CI needs.

Process Change For Closing Bugs When Patches Merge

  • Today when a patch merges with ‘Closes-Bug’ in the commit message, the associated bug is marked as ‘Fix Committed’ to indicate fixed, but not yet in a release.
  • The release team uses automated tools to mark bugs from ‘Fix Committed’ to ‘Fix Released’, but they’re not reliable due to Launchpad issues.
  • Proposal to improve reliability of the automated tools: patches with ‘Closes-Bug’ in the commit message will mark the associated bug as ‘Fix Released’ instead of ‘Fix Committed’ (an example commit message follows this list).
  • Doug would like to have this be in effect next week.
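
For anyone unfamiliar with the convention, a commit message that triggers this automation looks like the following (summary and bug number are hypothetical):

    Fix flavor pagination marker handling

    Guard against a missing marker record before building the next
    page of results.

    Closes-Bug: #1234567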

Move From Active Distrusting Model to Trusting Model

  • Morgan Fainberg writes most projects have a distrusting policy that prevents the following scenario:
    • Employee from Company A writes code
    • Other Employee from Company A reviews code
    • Third Employee from Company A reviews and approves code.
  • Proposal for a trusting model:
    • Code reviews still need 2x Core Reviewers (no change)
    • Code can be developed by a member of the same company as both core reviewers (and approvers).
    • If the trust that is being given via this new policy is violated, the code can, if needed, be reverted (we are using git here), the actors in question can lose core status (PTL discretion), and the policy can be changed back to the “distrustful” model described above.
  • Dolph Mathews provides scenarios where the “distrusting” model either did or would have helped:
    • Employee is reprimanded by management for not positively reviewing & approving a coworkers patch.
    • A team of employees is pressured to land a feature as fast as possible. Minimal community involvement means a faster path to “merged,” right?
    • A large group of reviewers from the author’s organization repeatedly throwing *many* careless +1s at a single patch. (These happened to not be cores, but it’s a related organizational behavior taken to an extreme.)

Stable Team PTL Nominations Are Open

  • Following the discussions [8][9] on setting up a standalone stable maintenance team, we’ll be organizing PTL elections over the coming weeks.
  • Stable team’s mission:
    • Define and enforce the common stable branch policy
    • Educate and accompany projects as they use stable branches
    • Keep CI working on stable branches
    • Mentoring/growing the stable maintenance team
    • Create and improve stable tooling/automation
  • Anyone who successfully contributed to a stable branch backport over the last year can vote in the stable PTL election.
  • If interested, reply to thread with your self-nomination.
  • Deadline is 23:59 UTC Monday, November 30.
  • Election will be the week after.
  • Current candidates:
    • Matt Riedmann [10]
    • Erno Kuvaja [11]

Using Reno For Libraries

  • Libraries have two audiences for release notes:
    • Developers consuming the library.
    • Deployers pushing out new versions of the libraries.
  • To separate the notes from the two audiences and avoid doing manually something that we have been doing automatically, we can use Reno for just deployer release notes.
  • Library repositories that need Reno should have it configured like service projects, with separate jobs and a publishing location different from their developer documentation [12]. (A sample note file is sketched below.)
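
As a reminder of how reno works: running "reno new <slug>" creates a YAML file under releasenotes/notes/, with notes grouped into sections. A minimal, hypothetical note file:

    ---
    features:
      - Added an option to control connection pooling.
    upgrade:
      - The deprecated pool_size option has been removed.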

Releases VS Development Cycle

  • Thierry writes that as more projects enter the Big Tent, there have recently been questions about release models and development cycle.
  • Some projects want to be independent of the common release cycle and dates, but still keep some adherence to the development cycle. Examples:
    • Gnocchi wants to be completely independent of the common development cycle, but still maintain stable branches.
    • Fuel traditionally makes its releases a few months behind the OpenStack release to integrate all the functionality there.
  • All projects above should currently be release:independent, until they are able (and willing) to coordinate their own releases with the projects following the common release cycle.
  • The release team currently supports 3 models:
    • release:cycle-with-milestones is the traditional Nova model with one release at the end of a 6-month dev cycle and a stable branch derived from that.
    • release:cycle-with-intermediary is the traditional Swift model, with releases as-needed during the 6-month development cycle, and a stable branch created from the last release in the cycle.
    • release:independent, for projects that don’t follow the release cycle at all.
      • Can make a release supporting the Mitaka release (including stable updates).
        • Can call the branch stable/mitaka even after the Mitaka release cycle, as long as the branch is stable and not development.
      • Should clearly document that their release is *compatible with* the OpenStack Mitaka release, rather than *part of* the Mitaka release.

by Mike Perez at November 27, 2015 08:27 PM

The Official Rackspace Blog

The Download: Nov. 27, 2015 — The Thanksgiving Edition!

Happy Friday everyone! As we head into the Thanksgiving weekend, there are many things we have to be thankful for this year. Friends, family and endless pounds of turkey are probably high on the list, but there are other, more nuanced things we have to be thankful for too.

If you’re an online retailer, you’re (hopefully) thankful that your site is buzzing along today, handling Black Friday traffic with ease.

Or, maybe things haven’t gone as smoothly as you had planned. Your site had trouble loading or crashed altogether — not the greatest outcome, but don’t lose hope! You can be thankful that Rackspace has been working with ecommerce customers for years, and we’ve gained a lot of experience along the way.

Earlier this year, Racker Alan Bush compiled a roundup of solid Black Friday tips and advice specifically for folks who were struggling to optimize their ecommerce operations around big retail holidays. It can really be a make or break scenario for businesses, as Alan puts it:

The math is simple: an underperforming site will turn away many would-be buyers; a completely down site means no sales at all (and in both cases, businesses also stand to lose potential future buyers, who simply won’t come back). And if your site isn’t ready for mobile shopping, you’re leaving sales on the table.

Check out all of the helpful tips he’s posted there, as well as the great services Rackspace provides for ecommerce customers, 24x7x365.

We’re thankful for all of the opportunities we get to help businesses thrive, but we’re also thankful that Rackspace has earned a reputation as such a great place to work — worldwide! Just this week, Rackspace was named as one of the top ten coolest companies for Women in Australia. This accolade joins similar recognition we’ve received here in the states, and we couldn’t be more proud.

But this year’s big takeaway for Rackspace, the thing we’re truly thankful for, is our growing base of customers and the increasing number of ways we can help them.

In addition to the support we offer our ecommerce customers, we’re also arming them (and others) with data analytics so they can gain greater insight into their business. We’ve expanded our cloud expertise to encompass leading technologies like Microsoft Azure and Amazon Web Services, and we’ve strengthened our commitment to OpenStack with the addition of Carina and our partnership with Intel in the OpenStack Innovation Center.

Overall, we’re thankful that we get to meet customers where they are and outfit them with the right technologies for their business. With the wide array of expertise Rackers have to offer, we’re able to be technology agnostic and then provide fanatical support on top of whichever solutions you choose.

Offering these capabilities is part of our vision to be the world’s leading service company on top of the world’s leading technologies, and we’re thankful that we get to do it every day.

Thanks for reading this week, have a great weekend and Happy Thanksgiving!

by Abe Selig at November 27, 2015 05:40 PM

November 25, 2015

OpenStack Superuser

OpenStack Mitaka release: what’s next for OpenStackClient and Documentation

Each release cycle, OpenStack project team leads (PTLs) introduce themselves, talk about upcoming features for the OpenStack projects they manage, plus how you can get involved and influence the roadmap.

Superuser will feature these summaries of the videos weekly; you can also catch them on the OpenStack Foundation YouTube channel. This round of interviews covers OpenStackClient and Documentation.



What: OpenStackClient (OSC) is a command-line client for OpenStack that brings the command set for Compute, Identity, Image, Object Store and Volume APIs together in a single shell with a uniform command structure.
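
That uniform structure means every command follows the same openstack <resource> <verb> pattern; for example (the resource names below are hypothetical):

    $ openstack server list
    $ openstack image show cirros
    $ openstack volume create --size 10 test-volume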

Who: Dean Troyer, PTL. Formerly with NASA, Troyer worked on OpenStack before it formally existed. Now he’s a senior cloud software engineer at Intel Corp.

Burning issues


“One of the things we talked about the most [in Tokyo] was the 'help' command,” he said. “There’s a long list of things we need to address and we’re still sorting it out; that’s one of our goals for the next cycle.”

What’s next


Smoothing out the user experience is also a priority, he added, and he hopes to work with OpenStack UX to get user feedback this cycle.

What matters in Mitaka: Performance improvements. “The place we see this most is developers in DevStack, [we’re] trying to reduce the time it takes to load,” he said. "You should be able to type 'openstack flavor list' and not have to wait four seconds to get a response.”

Get involved!
“Like every project out there, we could use new contributors...especially on the newer projects that don’t have legacy client issues…Getting involved with some of those newer projects is helpful.”
Use Ask OpenStack for general questions
For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the tag [OpenStackClient]



What: OpenStack Docs, which provides documentation for most of OpenStack’s projects. Some two-thirds of all OpenStack users refer to the docs once a week, if not more.

Who: Lana Brindley, PTL. Day job: Senior manager, information development, Rackspace.

Burning issues


About these, she added: “We want to make sure we’re good community citizens; we want to increase our support for teams who want to include information about their products in the official docs…Big Tent has changed the way we look at projects and we want to make sure we’re supporting each product team in the best way possible.”

What improvements are you working on?

"We're working on feedback that the Web Application Description Language (WADL) system isn't working so well for API docs, so we're working on that. Anne Gentle has been working on an ID since Vancouver that we're now ready to implement, that will switch us to Swagger instead," she said.

"We also got a lot feedback on release notes, it was surprising for me. So we’ve been working with a few different people to understand that problem better and we have some exciting ideas for improvements...”

What's planned for Mitaka?


Navigation of the docs takes priority, she said. Some of the books are easier to move from role-based to task-based data typing, so "we're going to start with one of the easier ones, the user guides, to see how that works.”

The overall theme for work on the next release is manageability with an additional focus on collaboration, Brindley added.

"Because Liberty was my first release as PTL, I'm still learning what makes a good release," Brindley said. "It was great to hear feedback from the docs team on what went well and what didn't go so well, so we can implement those changes and hopefully improve in Mitaka..."

Get involved!
Brindley underlined that the docs team is always looking for technical writers and developers (even those who don't think they're good writers). She also added that docs are developed just like code, so OpenStack active technical contributors (ATCs) already have the skills to help out.
Join the OpenStack Docs mailing list.
Participate in the weekly meetings: held on #openstack-meeting every Wednesday at alternating times for different time zones:
APAC: Wednesday, 00:30:00 UTC #openstack-meeting
US: Wednesday, 14:00:00 UTC


by Nicole Martinelli at November 25, 2015 04:25 PM

SUSE Conversations

SUSE Linux and OpenStack: Concentrated Strength for SAP Offerings from the Cloud

Why should SAP customers who want to obtain their applications from the cloud rely on the administration and orchestration solution SUSE® OpenStack Cloud? A new SUSE whitepaper provides the answers. As the leading open-source cloud infrastructure solution, OpenStack offers numerous advantages over proprietary cloud solutions – above all, better interoperability with other systems, less vendor …

+read more

The post SUSE Linux and OpenStack: Concentrated Strength for SAP Offerings from the Cloud appeared first on SUSE Blog.

by SaSoe at November 25, 2015 10:31 AM

November 24, 2015

OpenStack Superuser

Powering one billion page views and 170 million unique visitors per month with OpenStack

When locals need to perform search-based queries on topics including news, healthcare and finance, they often turn to goo, a Japanese web portal and Internet search engine that has been around for 18 years. Goo is a web service brand of NTT Group encompassing web search, blogs, the Q&A service Oshiete!goo, smartphone applications and more, and it generates one billion page views and 170 million unique visitors per month.

Online activity is particularly high in Japan, which ranks as the country with the fourth highest percentage of its population surfing the web. But what are the locals searching for? The trends suggest that online shopping, which racked up ¥8.5 trillion ($79.3 billion USD) last year, is a popular pastime.

Seasons also influence online behavior, the team at NTT Resonant notes. Popular search terms range from hay fever (花粉症) during peak cherry blossom season - February through April - to “zip code” or “post code” (郵便番号) in December, when locals celebrate the new year, the biggest holiday in Japan, by sending postcards to family and friends.

Goo is based on the Q&A service Oshiete!goo and Google’s provision of basic search functions, including web databases. The NTT Resonant team set out to streamline operations for its popular web portal by adopting OpenStack for its private cloud infrastructure in October 2014.

Approximately 90 percent of Japan’s 126.8 million residents own more than one device, accounting for each unique goo visit. NTT Resonant provides more than 80 services including goo and web services for the enterprise.

The deployment, operated by NTT Resonant, consists of 400 hypervisors and 4,800 physical cores, accommodating more than 1,800 virtual servers in production. It provides about 60 types of goo-related services, including an Internet search business featuring a search engine service developed by goo that searches for mobiles, images and blogs and services offering a wide variety of information including weather, news, healthcare, finance and business.

Committed to open source and the OpenStack community

NTT Resonant’s team has embraced the culture of open source, citing the learning opportunities from the trial and error phases as a key benefit. Toshikazu Ichikawa, a senior research engineer at the NTT Software Innovation Center (SIC), says that the community and open-by-design concept were key factors when deciding to work with OpenStack.

“Choosing OpenStack helps us avoid unnecessary vendor lock-in and gives us choices of vendor plugins and integration when needed,” he says. “Applying OpenStack to our production triggered us to join the OpenStack community deeply.”

The team is rooted in the community, contributing to Summits, meetups and the OpenStack project itself. Koji Iida, a senior research engineer supervisor at the NTT SIC, says that identifying bugs, many related to nova-cell, and contributing fixes were mandatory for the success of this deployment.

One particular bug was a show-stopper for NTT Resonant’s team, Iida said, until they were able to push a fix upstream, which also benefits the broader user community.

“The shelve function didn't work in the Icehouse release with a nova-cell deployment,” he said. “For hypervisor maintenance, our workflow relies on our users shelving/unshelving their virtual servers to migrate them off the hypervisor; therefore, it was mandatory for us to fix.”
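
For context, shelving is driven through the standard Nova CLI; the instance name below is hypothetical:

    $ nova shelve my-instance
    $ nova unshelve my-instance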

Creating efficiencies for operators and application developers

With a nimble team of 15 (six members working on server software including OpenStack and nine working on the physical servers, network and data center), NTT Resonant has leveraged OpenStack to increase productivity among its key team members.

Members of the NTT Resonant team (from left to right): Tomoya Hashimoto, Munetaka Komazaki and Kazuhiro Tooriyama

The team deployed KVM virtualization in 2010 to shift away from each team procuring and managing an infrastructure per application. However, even with KVM, Tomoya Hashimoto, a senior manager at NTT Resonant, says that a lot of tasks, including configuring and assigning virtual servers, remained highly manual, and coordinating between application teams and the infrastructure team consumed too much of the project’s time.

By moving to a cloud model with OpenStack and KVM in 2014, many of the steps to provide a virtual server were automated.

“We could automate most of the related tasks, such as monitoring system configuration, besides the installation of OpenStack,” says Hashimoto. “This contributed to the reduction of tasks and burdens for the infrastructure operators and application developers.”

Once this was implemented, application developers created and deleted virtual servers more frequently than before, increasing the speed of business overall.

“The time to provide a virtual server for application developers was shortened from five business days to five minutes,” says Kazuhiro Tooriyama, an engineer at NTT Resonant. “Moreover, a migration of virtual servers between physical servers became much easier, making it easier for our operators to handle a server failure.”

Tooriyama also says that since virtual servers can now be easily scrapped and built, this has increased the opportunity for technical evaluation among the team’s application developers.

What’s next?

Currently, the deployment is running on Icehouse in a single datacenter, but at the August operators' mid-cycle meetup in Palo Alto, Ichikawa led a lightning talk saying that the team intends to upgrade from Icehouse soon, with the goal of continuing to upgrade to the newest release of OpenStack.

The team also has plans for expansion, including increasing the number of hypervisors and launching a second datacenter.

This article was originally published in the print Superuser publication that was distributed at the OpenStack Summit Tokyo, where the NTT Group was also given the Superuser Awards based on a community vote.

Learn more about the NTT Group's different use cases, including NTT Resonant with this episode of Superuser TV.


by Allison Price at November 24, 2015 07:12 PM

Connect the dots on OpenStack Neutron with this updated manual

There's a new book to speed up your knowledge of OpenStack Neutron. The second edition of "Learning OpenStack Networking (Neutron)" is coming down the pike with updates and additional chapters.

Author James Denton, principal architect at Rackspace, has had his work cut out for him since the October 2014 edition launched. This 329-page manual, available for December preorder from Packt Publishing, promises to help readers "wield the power" of networking in OpenStack.

Superuser talked to Denton about why networking is taking center stage now, eliminating pain points in the Neutron project and his favorite resources for learning about OpenStack.

alt text here

What are the most significant updates to this edition of the book?

The second edition is based on features available as of the Kilo release and includes a few new chapters that cover:

• L3 High Availability using Virtual Router Redundancy Protocol (VRRP)
• Distributed Virtual Routing
• VPN as-a-service

In addition, Security Groups and firewall-as-a-service have been broken out into their own chapters, each with new and/or enhanced diagrams and figures to better explain their respective concepts and functionality. The concepts described in the book apply to the Liberty release as well, but there may be minor differences in implementation along with additional functionality compared to Kilo.


What was your reaction when Neutron was called out in the Tokyo Summit keynote as the most active project in OpenStack?

Neutron has been called out numerous times over the last two years as a major pain point in architecting and operating an OpenStack cloud.

It’s no surprise that Neutron was noted as the most active project this time around, considering all of the focus and resources that have been poured into stabilizing core features and functionality.

Also, as the project has matured, you’re seeing more vendors take notice and begin developing plugins to bring their technologies into the fold. Networking is the foundation of the cloud, and the work that the team has put into the project in the last two to three release cycles has really paid off.

Any thoughts on why "The time is now for networking to have its day," as Mark Collier said?

Server virtualization technologies have really matured, and the focus now is to bring the network stack into the fold. It makes sense that the next logical step is to virtualize network appliances in the same way servers were virtualized years ago. Network vendors are making it easier to bring firewalls, load balancers, and more in as virtual appliances that can be treated like any other virtual machine. A lot of work has been done in Neutron over the last couple of releases to ensure that the network plumbing and security model can support virtualized network appliances.

Now that the barriers are being eliminated, we should start seeing more and more network administrators embrace the idea of network virtualization and its benefits, much like server administrators did a decade ago. In addition, containerization technologies have really turned traditional networking implementations on their head, so I think we’ll see a shift towards software-defined networking (SDN) and other non-traditional ways of connecting devices to allow for large scale networking.

There’s a lot of work to be done!

Neutron has been criticized for its complexity on several OpenStack user surveys -- what's the best way to tackle that for an operator/administrator?

Networking is complicated, and we as users have been fortunate in recent years that vendors have simplified the process of configuring network devices and even networking within operating systems. Think back 15-20 years though, and things weren’t so easy. The underlying network functions are as complex as ever, but when those functions and configurations are abstracted from the user, one can take for granted how ‘easy’ it is.

Neutron is complex because networking is complex. No one system or environment is the same, and Neutron has to allow for numerous combinations and configurations. I think it’s important to have a solid foundation in networking to understand how to configure and implement Neutron features; even more so if you’re responsible for maintaining and troubleshooting them. Many operators may have a strong system administration or development background, but lack foundational network knowledge that would benefit them in standing up and maintaining an OpenStack cloud. Work is underway to provide better documentation on simple network configurations, but the truth is, anything other than simple is going to require some work to get right for your environment and use-case.


There's so much you can learn online -- why buy a physical book (or even an e-book)?

There is a ton of useful information regarding OpenStack and Neutron on the internet. The problem is finding what’s relevant to you. When you’re new to a subject, it’s hard to know what to search for, and even harder to weed through information without context or experience. Most blogs and snippets cover a particular issue or feature, and while extremely useful, are just one piece of a much larger puzzle.

The goal with this book is to provide an end-to-end experience for the reader, beginning with architecting the physical network, installing OpenStack and Neutron, and laying a foundation that enables the creation of networks, subnets, routers, and advanced network devices. I think readers can appreciate having all of that information in a centralized location.

What books or materials got you started with OpenStack?

When I started with OpenStack, we were deploying Essex-based clouds using nova-network. When Grizzly was introduced, we decided to adopt Quantum (Neutron) and found there was little information available other than the code itself.

I spent a lot of time testing various configurations until I found one that provided some kind of connectivity. I read up on the Open vSwitch manual to figure out how flows worked and were implemented, and spent some time reverse-engineering various OpenStack code files to see what was going on under the hood. Manually creating bridges and flows, creating and attaching VMs, and breaking things really helped me figure out how everything fit together and was orchestrated. Assaf Muller, a core Neutron developer, has an excellent blog where he breaks down various Neutron components and technologies. I highly recommend his blog for anyone looking to immerse themselves in the nuts and bolts of Neutron.
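
That kind of hands-on poking remains a great way to learn how the pieces fit together. Creating a bridge and a simple flow manually looks like this (bridge name and port numbers are illustrative):

    $ ovs-vsctl add-br br-test
    $ ovs-ofctl add-flow br-test "priority=10,in_port=1,actions=output:2"
    $ ovs-ofctl dump-flows br-test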


by Nicole Martinelli at November 24, 2015 05:23 PM


OpenStack, Cloud Foundry and a turnkey app environment


On November 17, Mirantis’ Kamesh Pemmaraju and John Jainschigg and Pivotal’s Ryan Pei got together to provide a joint webinar explaining how Pivotal Cloud Foundry and OpenStack can come together to provide a turnkey application environment. You can view the entire webinar here, but we also had some great questions, so we wanted to answer them in the blog.

Q: What do you mean by “not locked in”? How portable is a Cloud Foundry application?

Ryan: Seamlessly portable :) Cloud Foundry works on many IaaSs and the way you create and deploy your apps does not change from one IaaS to another.

Q: What are the application dependencies on Cloud Foundry? For example, how easy/hard is to move away from an app done in the Cloud Foundry environment?

Ryan: It’s not difficult at all. For instance, with a Java app, if you wanted to port the app off of Cloud Foundry, you would just need to ensure that the dependencies in the Cloud Foundry buildpacks are provided in the new environment, such as a JVM. There shouldn’t be any configuration changes required.

Q: Can I do the same with Ops Manager or Apps Manager ?

John: Ops Manager is basically a configuration and deployment tool that manages interactions with BOSH (the actual deployment engine) — it’s an easy way for users to configure and deploy Pivotal Cloud Foundry and related products on an IaaS, without having to deal directly with the YAML files that BOSH consumes. Apps Manager is a management UI for an installed PCF environment that provides a web UI to (for admins) manage user accounts, create organizations, spaces and other ‘tenant-like’ permission/resource subdivisions, etc., and (for admins and end user developers) to manage orgs, spaces and apps they own. (So, yes.)

Q: How do you segregate your workloads (security, network, traffic management)? Do you build multiple PaaS (for Dev, for Prod, etc.) or you have a single large PaaS that is segmented for the various uses?

Ryan: Cloud Foundry is inherently a very secure environment, because it limits network traffic to a very small number of entrypoints (the customer-provided load balancer upstream from the CF router, plus VMs used to manage the PCF cluster). PCF deploys apps on secure containers with their own strict memory allocations and private read/write file systems. It uses Application Security Groups, role-based authentication and other abstractions to let operators configure portable security and access policy on an app-by-app basis. The software (buildpacks, service APIs and components) against which applications run impose security on ‘operating system’ calls. And since all these components can be updated non-disruptively, it should be relatively easy to keep pace with patches. This includes VM stemcells providing the guest OS underlying major PCF components, which can be rolled back and out with BOSH. Underneath Cloud Foundry, there are all sorts of ways of segregating and securing VM workloads and tenant networks on OpenStack, including stuff like using Availability Zones, affinity/anti-affinity groups, Intel TXT, etc., to direct VM workloads (e.g., Cloud Foundry components) by tenant and other characteristics to particular compute and storage resources, and stuff like NetScaler and Contrail to build any-to-any high-capacity mesh networks and isolate different traffic types and tenant-by-tenant/app-by-app traffic on VLANs or tunnels. Finally, the extraordinary speed and ease of deploying Mirantis OpenStack and Pivotal Cloud Foundry makes brute-force ‘air-gap’ strategies operationally efficient — you can have, not just separate PaaSs in separate tenants, but separate PaaSs on separate clouds for Dev/Test and Production.

Q: Patching/Ownership; Application Framework (Spring)/Runtime/Infra Auto (BOSH), etc. Who owns the patching of these?

Ryan:  The Cloud Foundry Foundation is patching and supporting these components on behalf of the community at large, and Pivotal is doing the same on behalf of its customers across all the Pivotal Cloud Foundry products, including Ops Manager, Apps Manager, Spring Cloud Services, etc.

Q: Hi, I would like to know whether we can install JBoss as an Application Catalog item in Mirantis OpenStack?

A: Mirantis has published a Murano app that deploys JBoss in a Docker image.

Q: Can we use OpenStack’s Trove to provide Databases as a Service?

John: It’s not immediately clear what advantage one might gain from this, since PCF provides DBaaS to many flavors of database already. However, in principle, if you were, for example, using PCF to build an app that provided authorized users (e.g., operators) with a tool to set up databases and provide DB credentials on the underlying OpenStack, it should be possible to use the CF service broker API to create a service broker to Trove.
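
As a rough illustration of how small such a broker can be, here is a hypothetical Python/Flask sketch of the two core CF service broker API (v2) endpoints; the service definition is made up and the actual Trove call is left as a placeholder:

    # Hypothetical sketch: a minimal CF service broker fronting OpenStack Trove.
    # Service and plan IDs are illustrative; a real broker must also implement
    # binding, deprovisioning and authentication.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route('/v2/catalog')
    def catalog():
        # Advertise one Trove-backed database service with a single plan.
        return jsonify({'services': [{
            'id': 'trove-mysql-service',
            'name': 'trove-mysql',
            'description': 'MySQL databases provisioned by OpenStack Trove',
            'bindable': True,
            'plans': [{'id': 'small', 'name': 'small',
                       'description': 'A small database instance'}],
        }]})

    @app.route('/v2/service_instances/<instance_id>', methods=['PUT'])
    def provision(instance_id):
        # A real broker would call Trove here (e.g. via python-troveclient)
        # to create a database instance for this service instance id.
        return jsonify({}), 201

    if __name__ == '__main__':
        app.run(port=8080)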

Q: Hi, can any of this be deployed behind a corporate internet proxy?

A: Yes, many people do this.

Q: What is the core difference between using Ops Manager to install CF and using community CF via the Murano package?

John: There are many differences. Pivotal Cloud Foundry, installed with Ops Manager, includes a distribution of Community Cloud Foundry, plus additional components from Pivotal. If you would prefer to work with Community Cloud Foundry, at this point, we recommend installing a recent version using the community-provided BOSH installer. Alternatively, the Murano app, which deploys version 1.4 of community Cloud Foundry in a small, PoC configuration, may suffice for initial evaluations.

Q: Will you be covering the differences between using a buildpack (e.g. python/django) vs. using docker, particularly in the case where a buildpack exists for your dev environment? What are the pros and cons of using one vs. the other and what different CF “features” set these two uses apart (i.e. docker vs. app buildpack)?

Ryan: Buildpacks come with good guidelines on how to create your applications for a specific language and/or framework. Docker is very open-ended in terms of what you can add to Docker containers, so that gives you a lot of freedom but then you must be careful that this freedom is not abused. It may not be as easy, for instance, to restage every app in your system with a security or maintenance patch if not everyone is using standardized filesystems.

Q: In the Infra as Code “spirit” how are the configurations (tenants, admin, credentials, etc.) kept so that if someone were to mess up the setup of CF it can be recreated from “code”? Is backing up the way to achieve such redundancy or is there a concept of “coding” that setup (checking in to source control)?

Ryan: PCF Ops Manager has a feature that allows you to “Export settings”, or download a zip file that contains all of your deployment settings. We suggest downloading this package regularly and whenever changes are being made.

Q: When auto-scaling down your app instances, how does the platform determine which instance should be shut down?

Ryan: The platform shuts down instances starting with the highest index values in Cloud Controller.

Q: What are the key points to port a legacy app to a CF-compatible version?

Ryan: Any app can be seamlessly ported over to CF, but if you do have time to refactor, we would recommend following the Twelve Factor principles when designing your application. Some more specifics are available here in the CF docs as well.

Q: Hi. Where can I read about CF integrated logging, and about multi-cloud – what it means to have a VM evacuation?

Ryan: You can find out more in the Cloud Foundry documentation.

Q: As of now, CF buildpacks do not support Dynatrace for monitoring. Will it be possible in the near future? I can see New Relic has been added as an extension.

Ryan: Custom buildpacks are always an option. New Relic actually released a service for Pivotal Cloud Foundry earlier this year, and we’re evaluating whether this is a good solution for the long-term.

Q: Can end users deploy microservices as well, just like pushing apps?

John: Microservices are essentially just very simple, single-process applications. You can take full advantage of this architecture by using a framework to wire together these microservices into a cohesive app. This is where a tool like Spring Cloud Services can help you to compose something really great with Java Spring microservices.

Q: How much programming skill is required to create custom PCF services? Is this a task for Dev or Ops?

Ryan: It’s easy to do. Either Dev or Ops can create them for their own use.

If you’re interested in using Cloud Foundry and OpenStack to create a better application environment, go ahead and download the runbook that explains exactly how to deploy Cloud Foundry with OpenStack.

The post OpenStack, Cloud Foundry and a turnkey app environment appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Guest Post at November 24, 2015 03:25 AM

November 23, 2015

Gal Sagie

OVN L3 Deep Dive

As you might have noticed, L3 support has been added to OVN. In this post I am going to create a simple OpenStack topology - two networks connected by a router, with one VM on each network - and describe how this translates to the OpenFlow pipeline in OVN.

To better understand this pipeline, first familiarize yourself with my earlier post where I did an OVN L2 deep dive; some things have changed since I wrote it (like the broadcast/multicast handling) but the high level concept is the same.

The following diagram shows the OpenStack setup I have configured:

How does OVN implement L3? In a nutshell, it uses OVS patch ports to simulate the connection between a logical switch (OpenStack network) and a logical router (OpenStack router); to do this, it creates a pair of patch ports for every logical router port.

If you are interested in understanding more about patch ports, you can read this post; it describes how patch ports are used to connect two different OVS bridges. In our example both ends are connected to the same bridge and used as a modeling construct, but the concept is the same.
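
ovn-controller creates this plumbing automatically, but for reference, wiring up such a peered pair by hand would look like this (port names illustrative):

    $ ovs-vsctl add-port br-int patch-ls -- set Interface patch-ls type=patch options:peer=patch-lr
    $ ovs-vsctl add-port br-int patch-lr -- set Interface patch-lr type=patch options:peer=patch-ls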

If we print the OVS ports on our bridge (with ovs-ofctl dump-ports-desc br-int) we get the following list:

OFPST_PORT_DESC reply (xid=0x2):
 1(tapfee83ab5-5f): addr:62:8c:72:4c:26:df
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 16(tapdf2a478a-5b): addr:00:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 17(tap071b1892-67): addr:00:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 19(patch-10f594b4-): addr:4e:d2:16:a9:83:17
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 20(patch-5de69aae-): addr:66:a6:a1:a0:12:e6
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 22(patch-d77ee94a-): addr:76:8b:05:66:a3:38
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 23(patch-7581ba34-): addr:fa:b7:bf:7a:9a:8a
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 24(tapaacde712-12): addr:fe:16:3e:a4:d1:b5
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
 LOCAL(br-int): addr:c6:75:4f:5e:50:47
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max

We have four patch ports and four tap ports: two tap ports for our VMs and two for the DHCP namespaces (the OpenStack reference DHCP implementation creates a namespace with dnsmasq per network, and since we are in a single-node setup they are all connected to our bridge).

As we can see, OVN also created four patch ports for the two logical router legs. Patch ports come in pairs: ports 19 and 20 are connected as peers, and the same goes for ports 22 and 23.

For every pair, one port represents the logical switch side and the other represents the logical router side. To identify which is which, we can check the id in the name.

Following is a partial dump of OVN Northbound DB:

Logical_Router_Port table
_uuid                                enabled external_ids mac                 name                                   network       peer
------------------------------------ ------- ------------ ------------------- -------------------------------------- ------------- ----
5de69aae-545d-4aca-93a7-36685b60fa06 []      {}           "fa:16:3e:6c:c0:33" "10f594b4-a762-4ba0-86fb-a76fe7ad0538" "" []
7581ba34-2926-482c-b95e-5b56e863adb9 []      {}           "fa:16:3e:70:41:c5" "d77ee94a-b57c-48a4-a300-7093f02f49a1" "" []

We can see that the first logical router port's name starts with 10f594b4, the same as the name of patch port 19. This means that this port is basically the logical switch's default gateway, with IP 10.1.0.1; it is treated as part of that logical switch in the pipeline (as we will soon see).

Its peer port (20) is basically its endpoint on the logical router side and is later used in the pipeline for the routing process.

The following diagram tries to explain this better:

The Pipeline

Let's now see how this modeling is actually implemented in the OpenFlow pipeline in the local OVS. I will describe the main parts as I see them; note that there are additional parts of the pipeline used for security and RFC compliance which I will not describe, as I don't want to make this too long.

Ports Classification

If we look at the first table, we can see how the patch ports are treated:

 cookie=0x0, duration=10428.153s, table=0, n_packets=3, n_bytes=987, priority=100,in_port=20 actions=set_field:0x6->metadata,set_field:0x1->reg6,resubmit(,16)
 cookie=0x0, duration=10428.153s, table=0, n_packets=0, n_bytes=0, priority=100,in_port=19 actions=set_field:0x4->metadata,set_field:0x1->reg6,resubmit(,16)
 cookie=0x0, duration=7295.704s, table=0, n_packets=2, n_bytes=658, priority=100,in_port=23 actions=set_field:0x6->metadata,set_field:0x2->reg6,resubmit(,16)
 cookie=0x0, duration=7295.704s, table=0, n_packets=0, n_bytes=0, priority=100,in_port=22 actions=set_field:0x5->metadata,set_field:0x1->reg6,resubmit(,16)
 cookie=0x0, duration=5325.304s, table=0, n_packets=9, n_bytes=1455, priority=100,in_port=24 actions=set_field:0x4->reg5,set_field:0x4->metadata,set_field:0x2->reg6,resubmit(,16)
 cookie=0x0, duration=66.190s, table=0, n_packets=8, n_bytes=1126, priority=100,in_port=25 actions=set_field:0x5->reg5,set_field:0x5->metadata,set_field:0x2->reg6,resubmit(,16)

We can see that patch ports 19 and 22 represent the default gateway ports (IPs 10.1.0.1 and 10.2.0.1) and are classified with metadata numbers that fit their logical switches (the metadata represents the logical switch/network; every local controller allocates running numbers that are unique per logical switch).

We can also see that patch ports 20 and 23 are classified with the same metadata number (6), which represents all the ports attached to the logical router; this is later used for routing.

ARP Responders and Ping

Next in the pipeline we can see that there are ARP responders installed for all the router ports, along with ping responders (ICMP echo request):

 cookie=0x0, duration=10428.269s, table=17, n_packets=0, n_bytes=0, priority=90,icmp,reg6=0x1,metadata=0x6,nw_dst=,icmp_type=8,icmp_code=0 actions=move:NXM_OF_IP_SRC[]->NXM_OF_IP_DST[],set_field:>ip_src,set_field:255->nw_ttl,set_field:0->icmp_type,set_field:0->reg6,set_field:0->in_port,resubmit(,18)

 cookie=0x0, duration=10428.268s, table=17, n_packets=0, n_bytes=0, priority=90,icmp,reg6=0x1,metadata=0x6,nw_dst=,icmp_type=8,icmp_code=0 actions=move:NXM_OF_IP_SRC[]->NXM_OF_IP_DST[],set_field:>ip_src,set_field:255->nw_ttl,set_field:0->icmp_type,set_field:0->reg6,set_field:0->in_port,resubmit(,18)

 cookie=0x0, duration=7295.816s, table=17, n_packets=0, n_bytes=0, priority=90,icmp,reg6=0x2,metadata=0x6,nw_dst=,icmp_type=8,icmp_code=0 actions=move:NXM_OF_IP_SRC[]->NXM_OF_IP_DST[],set_field:>ip_src,set_field:255->nw_ttl,set_field:0->icmp_type,set_field:0->reg6,set_field:0->in_port,resubmit(,18)

 cookie=0x0, duration=7295.816s, table=17, n_packets=0, n_bytes=0, priority=90,icmp,reg6=0x2,metadata=0x6,nw_dst=,icmp_type=8,icmp_code=0 actions=move:NXM_OF_IP_SRC[]->NXM_OF_IP_DST[],set_field:>ip_src,set_field:255->nw_ttl,set_field:0->icmp_type,set_field:0->reg6,set_field:0->in_port,resubmit(,18)

 cookie=0x0, duration=10428.268s, table=17, n_packets=0, n_bytes=0, priority=90,arp,reg6=0x1,metadata=0x6,arp_tpa=,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:6c:c0:33->eth_src,set_field:2->arp_op,move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],set_field:fa:16:3e:6c:c0:33->arp_sha,move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],set_field:>arp_spa,set_field:0x1->reg7,set_field:0->reg6,set_field:0->in_port,resubmit(,32)

 cookie=0x0, duration=7295.816s, table=17, n_packets=0, n_bytes=0, priority=90,arp,reg6=0x2,metadata=0x6,arp_tpa=,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:70:41:c5->eth_src,set_field:2->arp_op,move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],set_field:fa:16:3e:70:41:c5->arp_sha,move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],set_field:>arp_spa,set_field:0x2->reg7,set_field:0->reg6,set_field:0->in_port,resubmit(,32)

These are in charge of replying to ARPs or pings addressed to the default gateway of each logical switch (the router ports, IPs 10.1.0.1 and 10.2.0.1).

Routing and Destination Lookup

In this section we are going to see the interesting part of how distributed routing is done. Following are the steps when VM1 tries to reach VM2 in our setup:

1) VM1 tries to find its default gateway's MAC address by sending an ARP request for 10.1.0.1.

2) The ARP responder mentioned above creates and sends a reply (all done in flows) containing patch port 19's MAC address.

3) VM1 sends the packet with patch port 19's MAC address; it takes the same route as any other L2 traffic and in the end (egress table) is sent to this port.

4) Since patch port 20 is a peer of patch port 19, the traffic re-enters the pipeline on table 0, but the in-port is now port 20, so it is classified and tagged with metadata=6.

5) Then comes the interesting part in tables 18 and 19:

 cookie=0x0, duration=10428.268s, table=18, n_packets=0, n_bytes=0, priority=24,ip,metadata=0x6,nw_dst= actions=dec_ttl(),move:NXM_OF_IP_DST[]->NXM_NX_REG0[],resubmit(,19)

 cookie=0x0, duration=7295.816s, table=18, n_packets=0, n_bytes=0, priority=24,ip,metadata=0x6,nw_dst= actions=dec_ttl(),move:NXM_OF_IP_DST[]->NXM_NX_REG0[],resubmit(,19)

We can see that we match on the metadata (which is 6, indicating the group of all router ports) and on the destination subnet. Since the destination is VM2 (IP 10.2.0.3), the second flow is matched and the packet is sent to table 19; we also copy the final destination IP into reg0, and we will soon see why.

We can also see that we decrement the TTL (similar to what a router would do at this point).

If we look at table 19 we can see the following flows:

 cookie=0x0, duration=10428.162s, table=19, n_packets=0, n_bytes=0, priority=200,reg0=0xa010001,metadata=0x6 actions=set_field:fa:16:3e:6c:c0:33->eth_src,set_field:fa:16:3e:6c:c0:33->eth_dst,set_field:0x1->reg7,resubmit(,32)

 cookie=0x0, duration=7295.710s, table=19, n_packets=0, n_bytes=0, priority=200,reg0=0xa020001,metadata=0x6 actions=set_field:fa:16:3e:70:41:c5->eth_src,set_field:fa:16:3e:70:41:c5->eth_dst,set_field:0x2->reg7,resubmit(,32)

 cookie=0x0, duration=5326.167s, table=19, n_packets=0, n_bytes=0, priority=200,reg0=0xa010003,metadata=0x6 actions=set_field:fa:16:3e:6c:c0:33->eth_src,set_field:fa:16:3e:a4:d1:b5->eth_dst,set_field:0x1->reg7,resubmit(,32)

 cookie=0x0, duration=67.313s, table=19, n_packets=0, n_bytes=0, priority=200,reg0=0xa020003,metadata=0x6 actions=set_field:fa:16:3e:70:41:c5->eth_src,set_field:fa:16:3e:fe:37:f2->eth_dst,set_field:0x2->reg7,resubmit(,32)

As we can see, we match on reg0 and the metadata. reg0 holds all the possible IPs that the router can reach: the router port IPs (10.1.0.1 and 10.2.0.1) and both of our VMs' IPs (10.1.0.3 and 10.2.0.3). In our case the last flow is matched (the destination IP is VM2's).

We can also see that we rewrite the source and destination MAC addresses according to the destination VM's MAC and the router port's MAC address (again, similar to what a router would do).

The following diagram depicts OVS in this setup:

The following steps summarize the routing process in OVN:

1) The idea is that when a VM tries to communicate with another subnet (L3), OVN first sends the traffic to the patch port which represents the default gateway of the VM's subnet.

2) The packet then re-enters the pipeline with an in-port that represents the router side (the patch port's peer) and is classified with the router's metadata group number.

3) Routing happens: there are matching flows for all the possible destination subnets this packet can reach (depending on the router ports).

4) After OVN identifies that the destination is routable, the next step is to identify the exact destination according to the destination IP (which is now in reg0 as a hex value).

5) The MAC addresses are replaced according to the destination and the router port, and the packet is sent to its final destination.

It's important to note that traffic traversing the patch ports should have a relatively light performance impact, as they are only simulated in the user-space OpenFlow pipeline process. Once the kernel flows are installed the patch ports shouldn't exist, which makes them more of a modeling construct than a real port in the system.
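
If you want to watch this pipeline in action, ovs-appctl can trace a synthetic packet through the flow tables. A hypothetical trace of a packet from VM1 (tap port 24, using the MAC addresses from the dumps above and the IPs recovered from the reg0 values) to VM2 would be:

    $ ovs-appctl ofproto/trace br-int in_port=24,ip,dl_src=fe:16:3e:a4:d1:b5,dl_dst=fa:16:3e:6c:c0:33,nw_src=10.1.0.3,nw_dst=10.2.0.3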

I hope this post gave you a high-level insight into how distributed L3 is implemented in OVN. There are many little details which weren't mentioned (like how all this is combined with security and connection tracking, and so on) and I hope to cover them in future posts.

November 23, 2015 11:25 PM


RDO blog roundup, week of November 23

Here's what RDO engineers have been blogging about lately:

Automated API testing workflow by Tristan Cacqueray

Services exposed to a network of any sort are at risk of security exploits. The API is the primary target of such attacks, and it is often abused by input that developers did not anticipate.

…

RDO Community Day @ FOSDEM by Rich Bowen

We're pleased to announce that we'll be holding an RDO Community Day in conjunction with the CentOS Dojo on the day before FOSDEM. This event will be held at the IBM Client Center in Brussels, Belgium, on Friday, January 29th, 2016.

…

Translating Between RDO/RHOS and Upstream OpenStack releases by Adam Young

There is a straight forward mapping between the version numbers used for RDO and Red Hat Enterprise Linux OpenStack Platform release numbers, and the upstream releases of OpenStack. I can never keep them straight. So, I write code.

…

Does cloud-native have to mean all-in? by Gordon Haff

Cloud-native application architectures promise improved business agility and the ability to innovate more rapidly than ever before. However, many existing conventional applications will provide important business value for many years. Does an organization have to commit 100% to one architecture versus another to realize true business benefits?

…

by Rich Bowen at November 23, 2015 08:33 PM

OpenStack Superuser

OpenStack and OPNFV strengthen collaboration for telcos

BURLINGAME, CA. -- Over 700 people, including several OpenStack community members, global telecommunications companies (telcos) and Foundation staff, attended the inaugural OPNFV Summit.

Jonathan Bryce participated in the keynote Strategic Technologies panel, and the Women of OPNFV and Allies happy hour was a great way to meet women and men alike in an informal setting.

OpenStack was front and center at every session we attended. Results of an OPNFV global survey clearly support the project and OpenStack’s role in network functions virtualization (NFV). Key findings include:

  • Over half the respondents said OPNFV is poised to accelerate NFV adoption.
  • 62 percent believe OPNFV will lead to more rapid development.
  • 68 percent of respondents cited OpenStack as very important to the success of OPNFV.

OPNFV is a reference implementation of the European Telecommunications Standards Institute (ETSI) NFV specification. OpenStack is a large component of both OPNFV and ETSI NFV. Almost all telcos, cable companies and enterprises that operate large networks are closely following and implementing NFV from the ETSI spec or OPNFV releases. NFV is of tremendous value for agility and cost reduction.

OPNFV’s first release, Arno, is available and includes OpenStack as the Virtualized Infrastructure Manager (VIM). Brahmaputra, the second release planned for February 2016, will update OpenStack to the Liberty release, which includes many NFV-supporting features and related projects.

Diagram of the OPNFV Arno release

In scheduled meetings and the “hallway track,” the collaboration between OpenStack and OPNFV was discussed, particularly around OPNFV’s "upstream first” practice for requirements and development. The OPNFV community identifies gaps, submits blueprints to OpenStack, then works on those blueprints. This expands the number of OpenStack contributors helping meet key telco requirements. The OPNFV contributors are skilled developers that may be new to OpenStack processes and schedules and are welcomed by the OpenStack community!

These are a few of the OpenStack features that were included in Liberty. Features developed to meet telco needs for resiliency, performance and scaling are also valuable to all OpenStack users.

  • Nova features such as a new API call to mark compute nodes as down.
  • A new alarm evaluator has been added to Ceilometer to realize immediate alarm notification.
  • Service function chaining capabilities have been added to the OpenStack Tacker project.

OpenStack users and vendors also bring feature requests to the table, in addition to OPNFV. These Liberty-implemented examples are valuable to telcos and enterprise users alike:

  • Support for CPU pinning and SR-IOV passthrough in Nova's new extensible CPU scheduler
  • Improvements in the scalability of Neutron's advanced services
  • Cells v2 support for horizontal scaling
  • Improved support for IPv6 in Neutron and Nova

The OPNFV OpenStack Community wiki has been updated for the OpenStack Mitaka release -- with the new release schedule, project governance and blueprint process and assistance. Community members from both organizations will advocate OPNFV requirements within OpenStack projects through a streamlined process detailed on the wiki.

We’re excited to continue and enhance the collaboration with telcos and OPNFV. NFV will be prominent at the upcoming OpenStack Summit Austin.

Stay tuned for more information and telco NFV user stories!

Cover Photo // CC BY NC

by Kathy Cacciatore at November 23, 2015 06:52 PM

Upgrades in Nova: Database Migrations

Dan Smith is a principal software engineer at Red Hat. He works primarily on Nova, is a member of the core team and generally focuses on topics relating to live upgrade. You can follow him on Twitter @get_offmylawn.

In the previous post on objects, I explained how Nova uses objects to maintain a consistent schema for services expecting different versions, in the face of changing persistence. That’s an important part of the strategy, as it eliminates the need to take everything down while running a set of data migrations that could take a long time to apply on even a modest data set.

Additive Schema-only Migrations

In recent cycles, Nova has enforced a requirement on all of our database migrations. They must be additive-only, and change only schema, not data. Previously, it was common for a migration to add a column, move data there, and then drop the old column. Imagine that my justification for adding the foobars field to the Flavor object was that I wanted to rename memory_mb. A typical offline schema/data migration might look something like this:

meta = MetaData(bind=migrate_engine)
flavors = Table('flavors', meta, autoload=True)
flavors.create_column(Column('foobars', Integer))
for flavor in flavors.select().execute():
    flavors.update().\
        where(flavors.c.id == flavor.id).\
        values(foobars=flavor.memory_mb).execute()
flavors.drop_column(Column('memory_mb', Integer))

If you have a lot of flavors, this could take quite a while. That is a big problem because migrations like this need to be run with nothing else accessing the database — which means downtime for your Nova deployment. Imagine the pain of doing a migration like this on your instances table, which could be extremely large. Our operators have been reporting for some time that large atomic data migrations are things we just cannot keep doing. Large clouds being down for extended periods of time simply because we’re chugging through converting every record in the database is just terrible pain to inflict on deployers and users.

Instead of doing the schema change and data manipulation in a database migration like this, we only do the schema bit and save the data part for runtime. But, that means we must also separate the schema expansion (adding the new column) and contraction (removing the old column). So, the first (expansion) part of the migration would be just this:

meta = MetaData(bind=migrate_engine)
flavors = Table('flavors', meta, autoload=True)
flavors.create_column(Column('foobars', Integer))

Once the new column is there, our runtime code can start moving things to the new column. An important point to note here is that if the schema is purely additive and does not manipulate data, you can apply this change to the running database before deploying any new code. In Nova, that means you can be running Kilo, pre-apply the Liberty schema changes and then start upgrading your services in the proper order. Detaching the act of migrating the schema from actually upgrading services lets us do yet another piece at runtime before we start knocking things over. Of course, care needs to be taken to avoid schema-only migrations that require locking tables and effectively paralyzing everything while it’s running. Keep in mind that not all database engines can do the same set of operations without locking things down!

Migrating the Data Live

Consider the above example of effectively renaming memory_mb to foobars on the Flavor object. For this I need to ensure that existing flavors with only memory values are turned into flavors with only foobars values, except I need to maintain the old interface for older clients that don’t yet know about foobars. The first thing I need to do is make sure I’m converting memory to foobars when I load a Flavor, if the conversion hasn’t yet happened:

@classmethod
def get_by_id(cls, context, id):
    flavor = cls(context=context, id=id)
    db_flavor = db.get_flavor(context, id)
    for field in flavor.fields:
        if field not in ['memory_mb', 'foobars']:
            setattr(flavor, field, db_flavor[field])

    if db_flavor['foobars']:
        # NOTE(danms): This flavor has
        # been converted
        flavor.foobars = db_flavor['foobars']
    else:
        # NOTE(danms): Execute hostile takeover
        flavor.foobars = db_flavor['memory_mb']
    return flavor

When we load the object from the database, we have a chance to perform our switcheroo, setting foobars from memory_mb, if foobars is not yet set. The caller of this method doesn’t need to know which records are converted and which aren’t. If necessary, I could also arrange to have memory_mb set as well, either from the old or new value, in order to support older code that hasn’t converted to using Flavor.foobars.
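A minimal sketch of that arrangement (my illustration, not necessarily what Nova ships) would simply mirror the value at the end of the load path:

if db_flavor['foobars']:
    flavor.foobars = db_flavor['foobars']
else:
    flavor.foobars = db_flavor['memory_mb']
# Mirror the value into the legacy attribute so older code that
# still reads flavor.memory_mb keeps working; save() will still
# persist only the new column.
flavor.memory_mb = flavor.foobars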

The next important step of executing this change is to make sure that when we save an object that we’ve converted on load, we save it in the new format. That being, memory_mb set to NULL and foobars holding the new value. Since we’ve already expanded the database schema by adding the new column, my save() method might look like this:

def save(self, context):
    updates = self.obj_get_updates()
    updates['memory_mb'] = None
    db.set_flavor(context, self.id, updates)

Now, since we moved things from memory_mb to foobars in the query method, I just need to make sure we NULL out the old column when we save. I could be more defensive here in case some older code accidentally changed memory_mb, or try to be more efficient and only NULL out memory_mb if it isn't already. With this change, I've moved data from one place in the database to another, at runtime, and without any of my callers knowing that it's going on.
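As a minimal sketch of the defensive variant (not Nova's actual code; obj_attr_is_set() is part of oslo.versionedobjects):

def save(self, context):
    updates = self.obj_get_updates()
    # Only force the legacy column to NULL if it still holds a
    # value, avoiding pointless writes for already-migrated rows.
    if self.obj_attr_is_set('memory_mb') and \
            self.memory_mb is not None:
        updates['memory_mb'] = None
    db.set_flavor(context, self.id, updates)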

However, note that there is still the case of older compute nodes. Based on the earlier code, if I merely remove the foobars field from the object during backport, they will be confused to find memory_mb missing. Thus, I really need my backport method to revert to the older behavior for older nodes:

def obj_make_compatible(self, primitive, target_version):
    super(Flavor, self).obj_make_compatible(primitive,
                                            target_version)
    target_version = utils.convert_version_to_tuple(
        target_version)
    if target_version < (1, 1):
        primitive['memory_mb'] = self.foobars
        del primitive['foobars']

With this, nodes that only know about Flavor version 1.0 will continue to see the memory information in the proper field. Note that we need to take extra care in my save() method now, since a Flavor may have been converted on load, then backported, and then save()d.

Cleaning Up the Mess

After some amount of time, all the Flavor objects that are touched during normal operation will have had their foobars columns filled out, and their memory_mb columns emptied. At some point, we want to drop the empty column that we’re no longer using.

In Nova, we want people to be able to upgrade from one release to the other, having to only apply database schema updates once per cycle. That means we can’t actually drop the old column until the release following the expansion. So if the above expansion migration was landed in Kilo, we wouldn’t be able to land the contraction migration until Liberty (or later). When we do, we need to make sure that all the data was moved out of the old column before we drop it and that any nodes accessing the database will no longer assume the presence of that column. So the contraction migration might look like this:

count = select([func.count()]).select_from(flavors).\
    where(flavors.c.memory_mb != None).\
    execute().scalar()
if count:
    raise Exception('Some Flavors not migrated!')
flavors.drop_column(Column('memory_mb', Integer))

Of course, if you do this, you need to make sure that all the flavors will be migrated before the deployer applies this migration. In Nova, we provide nova-manage commands to background-migrate small batches of objects and document the need in the release notes. Active objects will be migrated automatically at runtime, and any that aren’t touched as part of normal operation will be migrated by the operator in the background. The important part to remember is that all of this happens while the system is running. See step 7 here for an example of how this worked in Kilo.
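To make that concrete, here is a minimal sketch of what such a background-migration helper could boil down to (db.get_unmigrated_flavors() is a hypothetical query used for illustration, not a real Nova API):

def migrate_flavors(context, max_count):
    # Fetch up to max_count rows that still have data in the old
    # column (hypothetical helper), then load and re-save each one:
    # get_by_id() converts on load, save() persists the new form.
    db_flavors = db.get_unmigrated_flavors(context, limit=max_count)
    done = 0
    for db_flavor in db_flavors:
        flavor = Flavor.get_by_id(context, db_flavor['id'])
        flavor.save(context)
        done += 1
    # Returning found/done counts lets the operator loop until
    # both reach zero.
    return len(db_flavors), done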

Doing online migrations, whether during activity or in the background, is not free and can generate non-trivial load. Ideally those migrations would be as efficient as possible, not re-converting data multiple times and not incurring significant overhead checking to see if each record has been migrated every time. However, some extra runtime overhead is almost always better than an extended period of downtime, especially when it can be throttled and done efficiently to avoid significant performance changes.

Online Migrations Are Worth It

Applying all the techniques thus far, we have now exposed a trap that is worth explaining. If you have many nodes accessing the database directly, you need to be careful to avoid breaking services running older code while deploying the new ones. In the example above, if you apply the schema updates and then upgrade one service that starts moving things from memory_mb to foobars, what happens to the older services that don’t know about foobars? As far as they know, flavors start getting NULL memory_mb values, which will undoubtedly lead them to failure.

In Nova, we alleviate this problem by requiring most of the nodes (i.e. all the compute services) to use conductor to access the database. Since conductor is always upgraded first, it knows about the new schema before anything else. Since all the computes access the database through conductor with versioned object RPC, conductor knows when an older node needs special attention (i.e. backporting).

Dan Smith originally posted this tutorial from his blog, Right Angles. Superuser is always interested in how-tos and other contributions, please get in touch:

Cover Photo// CC BY NC

by Dan Smith at November 23, 2015 05:35 PM

Hugh Blemings



Welcome to Last week on OpenStack Dev (“Lwood”) for the week ending 22nd November 2015. For more background on Lwood, please refer here.

Basic Stats for week 16th to 22nd November 2015 :

  • ~628 Messages (down about 8% relative to last week)
  • ~189 Threads (same as last week)

Traffic and threads steady…

Notable Discussions

Nova / Trusted Computing Pools related security notice (OSSN 0059)

Summary from the original security notice: “A trusted VM that has been launched earlier on a trusted host can still be powered on from the same host even after the trusted host is compromised.” More in the original post or the OSSN itself.

A reminder to projects about new “assert” tags

Over the last few months the TC defined a number of “assert” tags – a standardised way for projects to make certain assertions about their deliverables.  In his email Thierry Carrez reminds all concerned (basically PTLs and project cores) that the time is upon them to see if these tags should apply and, if so, to start using them.

In time this information will be added to that already displayed in the OpenStack Foundation’s Project Navigator, hence the desire to get projects using these tags as soon as possible.  For operators and other non-developers, the use of these tags and endeavours like the Project Navigator promise to make the process of evaluating an OpenStack project’s maturity a little simpler.

Vitrage – a Root Cause Analysis engine for OpenStack

Announced at the Mitaka Summit, a post to the list provided some more information on the Vitrage project – part of the broader Telemetry umbrella project for OpenStack. The Vitrage developers would “like it to be the OpenStack RCA (Root Cause Analysis) Engine for organizing, analyzing and expanding OpenStack alarms & events, yielding insights regarding the root cause of problems and deducing the existence of problems before they are directly detected.”

Noble goals and early days, but a worthwhile and much-needed set of functionality.

What should the openstack-announce mailing list be ?

Tom Fifield kicked off a thread discussing what the best use of the mailing list is from here on.  Originally conceived as a low-traffic, read-only list, he makes the point that with the addition of more (arguably) developer-oriented content it has become rather high traffic.  The concern is that this appears to have put some folk off, with them either filtering the list or unsubscribing – and so possibly missing urgent content such as security notifications.

While there has been a little discussion since his first post on Friday, input from a broad range of readers would be welcome.

Is booting a Linux VM important for certified OpenStack interoperability ?

On behalf of the DefCore Committee, Egle Sigler asks for feedback on whether the ability to boot a Linux VM should be required for certified OpenStack interoperability.  A quick glance at the comments in the review cited suggests this is anything but a simple topic, particularly once you consider containers and bare metal clouds in an environment…

Autoscaling both clusters and containers

Ryan Rossiter kicked off an interesting thread about autoscaling both containers and clusters: essentially, giving a cluster the ability to expand when the concentration of containers gets too high. Evidently there was some discussion about this in Tokyo, with at least one demo being given using Senlin interfacing with Magnum to autoscale.

Developer Mailing List Digest

Originally a section within the OpenStack Community Newsletter, Mike Perez’ excellent openstack-dev digest is now sent to the openstack-dev mailing list as well as being posted on the OpenStack Foundation’s blog.  I commend it to you if you’re after a deeper and/or more technical analysis than Lwood or other sources provide.  Here are the links to the Digest for November 7-13 and 14-20.

Discounted documentation changes

While the sprint in question is alas over at the time of writing, this post from Joshua Harlow about the Oslo projects’ virtual documentation sprint is too well written not to note :)

Midcycle dates and locations

A few more midcycle discussions this past week

  • [ironic] The Midcycle discussions from last week kicked off by Lucas Alvares seem to have settled on the idea of a virtual midcycle as proposed by Jim Rollenhagen
  • [manila] A survey for midcycle attendees/interested persons – Ben Swartzlander
  • [cinder] Mitaka Midcycle Sprint is on 26-29 January in North Carolina, USA – Sean McGinnis

Post Mitaka Summit Summaries and Priorities

A few more Summaries and Priority lists rolled in from the Mitaka Summit

People and Projects

Further Reading & Miscellanea

Don’t forget these excellent sources of OpenStack news :)

Closing on a lighter note – this edition of Lwood brought to you by Booker T Jones (Potato Hole & The Road From Memphis), Vinnie Moore (Aerial Visions), The String Contingent (Talk, TSC II), Stevie Ray Vaughan (In Step) and Tommy Emmanuel (Determination) amongst other excellent tunes.


by hugh at November 23, 2015 12:48 PM

Containers-as-a-service, the Keystone design summit, and more OpenStack news

Interested in keeping track of what's happening in the open source cloud? Opensource.com is your source for news in OpenStack, the open source cloud infrastructure project.

by Jason Baker at November 23, 2015 09:00 AM

November 22, 2015

OpenStack Superuser

OpenStack Mitaka release: what’s next for Swift and Ironic

Each release cycle, OpenStack project team leads (PTLs) introduce themselves, talk about upcoming features for the OpenStack projects they manage, plus how you can get involved and influence the roadmap.

Superuser will feature these videos weekly; once they’re all published you can also catch them on the OpenStack Foundation YouTube channel. Our first round of interviews covers Swift and Ironic.



What: Swift, the OpenStack Object Store project

Who: John Dickinson, PTL. Day job: director of technology, SwiftStack.

Burning issues

Of these he says, "Encryption is something we’re spending a lot of time on now. It’s not a new topic but something that's been in progress for six months….[we’re] trying to implement a way operators can encrypt all of the data stored in the clusters so they meet requirements for certain kinds of information —personally identifiable information, financial records etc. So to lower the barriers for adoption, that’s one of the things we’re working on.”

What’s next


“We spend a lot of time on improving — in various ways — overall experience for end-users and operators. Specifically around lowering latency, smoothing out latency when clusters are under load…”

What matters in Mitaka


Get involved!
  • Use Ask OpenStack for general questions
  • For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the tag [swift]
  • Participate in the weekly meetings: Wednesdays at 2100 UTC in #openstack-meeting on freenode IRC


What: Ironic, OpenStack's Bare Metal Provisioning Program.

Who: Jim Rollenhagen, PTL. Day job: software developer, Rackspace.

Ironic moves on an independent release schedule. About that, Rollenhagen said: “It took us awhile with Liberty to get our first release out, and within a month-and-a-half we had two more releases. The second of which had something like 30 or 40 bug fixes.”

Burning issues


What’s next


What matters in Mitaka: overall scalability, resiliency and third-party testing for all drivers.

About the drivers, he added: “That’s going to be huge for us, ensuring that those all work as expected for our users.”

Get involved!
  • Use Ask OpenStack for general questions
  • For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the tag [ironic]
  • Participate in the weekly meetings: the one-hour meetings start at 1700 UTC on Mondays, held in the #openstack-meeting-3 room on freenode IRC

Cover Photo // CC BY NC

by Nicole Martinelli at November 22, 2015 05:35 PM

November 21, 2015

OpenStack Blog

OpenStack Developer Mailing List Digest November 14-20

Time to Make Some Assertions About Your Projects

  • The technical committee defined a number of “assert” tags which allow a project team to make assertions about their own deliverables:
    • assert:follows-standard-deprecation
    • assert:supports-upgrade
    • assert:supports-rolling-upgrade
  • Read more on their definitions [1]
  • Update the project.yaml [2] with the tags that already apply to your project.
  • The OpenStack Foundation will use “assert” tags very soon in the project navigator [3].

Making Stable Maintenance Its Own OpenStack Project Team (Cont)

  • Continuing discussion from last week [4]…
  • Negatives:
    • Not enough work to warrant a designated “team”.
    • The change is unlikely to bring a meaningful improvement to the situation or sudden new resources.
  • Positives:
    • An empowered team could tackle new coordination tasks, like engaging more directly in converging stable branch rules across teams, or producing tools.
    • Release management no longer overlaps with stable branch maintenance, so having the latter under that PTL is limiting and inefficient.
    • Reinforcing the branding (by giving it its own team) may encourage more organizations to dedicate new resources to it.
  • Matt Riedemann offers to lead the team.

Release Countdown For Week R-19, November 23-27

  • Mitaka-1 milestone scheduled for December 1-3.
  • Teams should be…
    • Wrapping up incomplete work left over from the end of the Liberty cycle.
    • Finalizing and announcing plans from the summit.
    • Completing specs and blueprints.
  • The openstack/release repository will be used to manage Mitaka 1 milestone tags.
  • Reno [5] will be used instead of Launchpad for tracking completed work. Make sure any release notes done for this cycle are committed to your master branch before proposing the milestone tag.

New API Guidelines Ready for Cross Project Review

  • The following will be merged soon:
    • Adding introduction to API micro version guideline [6].
    • Add description of pagination parameters [7].
    • A guideline for errors [8].
  • These will be brought up in the next cross project meeting [9].

by Mike Perez at November 21, 2015 02:34 AM


Fuel levels up: Enhancements in Mirantis OpenStack 7.0 (Q&A)

The post Fuel levels up: Enhancements in Mirantis OpenStack 7.0 (Q&A) appeared first on Mirantis | The #1 Pure Play OpenStack Company.

Yesterday, we held a webinar about some of the enhancements to Fuel in Mirantis OpenStack 7.0. Fuel has come a long way in this iteration, and we detailed both the operational and interface improvements — including some of the possibilities of the plugin framework — which you can hear about here.  We also entertained questions from the audience, which we thought you might find helpful.

Q: How does Fuel discover new nodes? Does this happen automatically or do I have to register them as new nodes first?

A: Fuel discovers nodes that boot to its PXE network environment. (In other words, nodes that PXE boot on the same network occupied by Fuel.) When a node powers on and is set to PXE boot, Fuel bootstraps that node and discovers many of the physical attributes such as the CPUs, RAM, HDD, NICs, MAC address, and so on. Fuel then adds the node to the pool of available systems.

Q: I assume there is a way for a plugin to define what other roles it can share a node with?

A: Yes. The plugin framework allows for full control of how the plugin is deployed. For example, Fuel limits which default roles can coexist (for example, Controller and Compute can NOT), and Plugins provide the same capability to developers. In the case of our LMA plugins, for example they can currently only coexist with each other, as far as the specific roles are concerned. We can stack the Kibana and Grafana roles on a single dedicated node, but we can’t stack them with the Storage-Cinder role, for example.

Q: Does Fuel support different deployment modes?

A: Earlier versions of Fuel provided the option to choose between HA and non-HA environments; if you chose non-HA, there was no going back later. Current versions of Fuel will instead default to an HA-capable deployment mode, though the cluster will only truly have high availability if it has at least three controllers.  That said, if you add new controllers to an existing cluster, you will be able to achieve HA. This way, you get the benefits of an HA-capable deployment without having to install a minimum of three controllers if you don’t need them.

Q: Each node role has its own network template file. What if a node has multiple roles?

A: For the current release, network templates are defined by a single role. Nodes with multiple roles will take the first role template that matches alphabetically. If the node needs configuration from multiple templates, it will require deployment customization. We are looking at adding this type of flexibility to future releases.

Q: What is the difference between the name that is displayed on the node and the hostname in the extended settings screen?

A: While the current implementation can be somewhat confusing, the name shown on the node in the UI is the “node name”. This value can change at any point, before or after deployment, and is used only to visually identify nodes. The name in the extended settings screen is only capable of being changed prior to deployment and is propagated down to the node as the node’s actual hostname, which is visible both in the UI and on the node itself.

Q: Are these labels locally significant, if they don’t correspond to the hostname or any other server identifier?

A: Labels are also visible in the CLI, but are not used in conjunction with the hostname. However, we do have the ability to define the hostname for a node in the UI as a new feature, as well.

Q: Are there any plans to support the glusterfs plugin with MOS 7.0 ?

A: GlusterFS was initially developed as a proof of concept to showcase the plugin capabilities in 6.0, but we do not have plans at this time to create a GlusterFS plugin compatible with 7.0. (That said, the whole point of the plugin architecture is that you can, if you need it.)

Q: When using ESXi compute (and multiple clusters) do you recommend putting nova-compute processes on the controllers or dedicated nodes?

A: This one is probably best as “it depends based on your workload” but we are exploring the ability to deploy individual processes – including RMQ and Keystone – onto dedicated nodes based on the workloads that will be running on your environment.

Q: Are the HealthChecks automatically constructed based on the contents of the deployment script(s) ?

A: HealthChecks are a predefined subset of relevant, selectable Tempest, Rally, and OSTF tests exposed in the Fuel UI.

Q: Can multiple environments share the same network?

A: Network Templates are environment-specific, but once defined, they can be copied to multiple environments.

Q: Why are the CEPH and Telemetry greyed out in this example?

A: Fuel grays out selections which are incompatible with currently selected roles. In addition, whether a selection is available is dependent on the selections of the specific environment components during the creation of a new environment. In our demo, for example, we had NOT selected CEPH as the storage backend, and also did not enable the optional Ceilometer service.

Q: Is there a plan to support installing plugins (or updating current ones) in already-deployed environments?

A: Yes, this is one of our high value targets for improving our plugin framework in upcoming releases. (We do not have a release date planned for these features yet.)

Q: I thought there are some more enhancements in the upcoming Mirantis OpenStack 8.0 with respect to plugin management?

A: Yes, there are more enhancements coming with respect to plugin management, including the ability to provide plugin options in the wizard and a further improved Settings tab, where plugins can define where they are presented. This is all being worked on right now, and we hope to see these changes in MOS 8.0.

Q: Will the webinars on December 3 and December 10 be the same as this one? Or do they have different content?

A: The webinars in December will be different. One will be about Kubernetes/Murano integration, while the other will be more VMware-centric. Feel free to check out our landing page for more info:

Thanks for joining us. If you’d like to check out Fuel in Mirantis OpenStack 7.0 for yourself, please go ahead and download it and take it for a spin.


The post Fuel levels up: Enhancements in Mirantis OpenStack 7.0 (Q&A) appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Guest Post at November 21, 2015 02:29 AM

Benjamin Kerensa

Openly Thankful

So next week has a certain meaning for millions of Americans that we relate to a story of Indians and pilgrims gathering to have a meal together. While that story may be distorted from the historical truth, I do think the symbolic holiday we celebrate is important.

That said, I want to name some individuals I am thankful for….



Lukas Blakk

I’m thankful for Lukas for being an excellent mentor to me at Mozilla during the last two years she was there. Lukas helped me learn skills and gave me opportunities that many Mozillians would not otherwise have had. I’m very grateful for her mentoring, teaching, and her passion to help others, especially those who have less opportunity.

Jeff Beatty

I’m especially thankful for Jeff. This year, out of the blue, he came to me and offered to have his university students support an open source project I launched, and this has helped us grow our l10n community. I’m also grateful for Jeff’s overall thoughtfulness and my ability to go to him over the last couple of years for advice and feedback.

Majken Connor

I’m thankful for Majken. She is always a very friendly person who is there to welcome people to the Mozilla community, but I also appreciate how outspoken she is. She is willing to share opinions and beliefs that add value to conversations and help us think outside the box. No matter how busy she is, she has been a constant in the Mozilla Project, always there to lend advice or listen.

Emma Irwin

I’m thankful for Emma. She does something much different than teaching us how to lead or build community: she teaches us how to participate better and build better participation into open source projects. I appreciate her efforts in teaching future generations the open web and being such a great advocate for participation.

Stormy Peters

I’m thankful for Stormy. She has always been a great leader and it’s been great to work with her on evangelism and event stuff at Mozilla. But even more important than all the work she did at Mozilla, I appreciate all the work she does with various open source nonprofits and the committees and boards she serves on or advises, which you do not hear about because she does it for the impact.


Jonathan Riddell

I’m thankful for Jonathan. He has done a lot for Ubuntu, Kubuntu, KDE and the great open source ecosystem over the years. Jonathan has been a devout open source advocate always standing for what is right and unafraid to share his opinion even if it meant disappointment from others.

Elizabeth Krumbach Joseph

I’m thankful for Elizabeth. She has been a good friend, mentor and listener for years now and does so much more than she gets credit for. Elizabeth is welcoming in the multiple open source projects she is involved in, and if you contribute to any of those projects you know who she is because of the work she does.


Paolo Rotolo

I’m thankful for Paolo, our lead Android developer, who helps lead our Android development efforts and is a driving force in moving forward the vision behind Glucosio and helping people around the world. I enjoy near-daily, if not multiple-times-a-day, conversations with him about the technical bits and the big picture.

The Core Team + Contributors

I’m very thankful for everyone on the core team and all of our contributors at Glucosio. Without all of you, we would not be what we are today, which is a growing open source project doing amazing work to bring positive change to Diabetes.


Leslie Hawthorne

I’m thankful for Leslie. She is always very helpful for advice on all things open source and especially open source non-profits. I think she helps us all be better human beings. She really is a force of good and perhaps the best friend you can have in open source.

Jono Bacon

I’m thankful for Jono. While we often disagree on things, he always has very useful feedback and has an ocean of community management and leadership experience. I also appreciate Jono’s no-bullshit approach to discussions. While it can be rough for some, the cut-to-the-chase approach is sometimes a good thing.

Christie Koehler

I’m thankful for Christie. She has been a great listener over the years I have known her and has been very supportive of community at Mozilla and also inclusion & diversity efforts. Christie is a teacher but also an organizer and in addition to all the things I am thankful for that she did at Mozilla, I also appreciate her efforts locally with Stumptown Syndicate.

by Benjamin Kerensa at November 21, 2015 01:58 AM

November 20, 2015

OpenStack Blog

OpenStack Weekly Community Newsletter (Nov. 14 – 20)

A primer on Magnum, OpenStack containers-as-a-service

Adrian Otto, project team lead, on how Magnum works and what problems it can solve for you.

OpenStack Mitaka release: what’s next for Ansible, Oslo and Designate

Meet the project team leads (PTLs) for these OpenStack projects and find out how to get involved.

Community feedback

OpenStack is always interested in feedback and community contributions, if you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content please get in touch:

Reports from Previous Events 

Deadlines and Contributors Notifications

Security Advisories and Notices 

  • None this week

Tips ‘n Tricks 

Upcoming Events 

by Jay Fankhauser at November 20, 2015 10:50 PM

OpenStack Superuser

Upgrades in OpenStack Nova: Objects

Dan Smith is a principal software engineer at Red Hat. He works primarily on Nova, is a member of the core team and generally focuses on topics relating to live upgrade. You can follow him on Twitter @get_offmylawn.

Originally incubated in Nova, the versioned object code is now spun out into an Oslo library for general consumption, called oslo.versionedobjects. As discussed in the post on RPC versioning, sending complex structures over RPC is hard to get right, as the structures are created and maintained elsewhere and simply sent over the wire between services. When running different levels of code on services in a deployment, changes to these structures must be handled and communicated carefully — something that the general oslo.messaging versioning doesn’t handle well.

The versioned objects that Nova uses to represent internal data help us when communicating over RPC, but they also help us tolerate a shifting persistence layer. They’re a critical facade within which we hide things like online data migrations and general tolerance of multiple versions of data in our database.

What follows is not an exhaustive explanation of versioned objects, but provides just enough for you to see how it applies to Nova’s live upgrade capabilities.

Versioned Objects as Schema

The easiest place to start digging into the object layer in Nova is to look at how we pass a relatively simple structure over RPC as an object, instead of just an unstructured dict. To get an appreciation of why this is important, refer back to the rescue_instance() method in the previous post. After our change, it looked like this:

def rescue_instance(self, context, instance, rescue_password,
                    rescue_image_ref):

Again, the first two parameters (self and context) are implied, and not of concern here. The rescue_password is just a string, as is the rescue_image_ref. However, the instance parameter is far more than a simple string — at version 3.0 of our RPC API, it was a giant dictionary that represented most of what nova knows about its primary data structure. For reference, this is mostly what it looked like in Juno, which is a fixture we use for testing when we need an instance. In reality, that doesn’t even include some of the complex nested structures contained within. You can imagine that we could easily add, remove, or change attributes of that structure elsewhere in the code or database without accounting for the change in the RPC interface in any way. If you end up with a newer node making the above call to an older node, the instance structure could be changed in subtle ways that the receiving end doesn’t understand. Since there is no version provided, the receiver can’t even know that it should fail fast, and in reality, it will likely fail deep in the middle of an operation. Proof of this comes from the test structure itself which is actually not even in sync with the current state of our database schema, using strings in places where integers are actually specified!

In Nova we addressed this by growing a versioned structure that defines the schema we want, independent of what is actually stored in the database at any given point. Just like for the RPC API, we attach a version number to the structure, and we increment that version every time we make a change. When we send the object over RPC to another node, the version can be used to determine if the receiver can understand what is inside, and take action if not. Since our versioned objects are self-serializing, they show up on the other side as rich objects and not just dicts.

An important element of making this work is getting a handle on the types and arrangement of data inside the structure. As I mentioned above, our “test instance” structure had strings where integers were actually expected, and vice versa. To see how this works, lets examine a simple structure in Nova:

class Flavor(base.NovaObject):
    # Version 1.0: Initial version
    VERSION = '1.0'

    fields = {
        'id': fields.IntegerField(),
        'name': fields.StringField(nullable=True),
        'memory_mb': fields.IntegerField(),
        'vcpus': fields.IntegerField(),
        'root_gb': fields.IntegerField(),
        'ephemeral_gb': fields.IntegerField(),
        'flavorid': fields.StringField(),
        'swap': fields.IntegerField(),
        'rxtx_factor': fields.FloatField(nullable=True,
                                         default=1.0),
        'vcpu_weight': fields.IntegerField(nullable=True),
        'disabled': fields.BooleanField(),
        'is_public': fields.BooleanField(),
        'extra_specs': fields.DictOfStringsField(),
        'projects': fields.ListOfStringsField(),
        }

Here, we define what the object looks like. It consists of several fields of data, integers, floats, booleans, strings, and even some more complicated structures like a dict of strings. The object can have other types of attributes, but they are not part of the schema if they’re not in the fields list, and thus they don’t go over RPC. In case it’s not clear, if I try to set one of the integer properties, such as “swap” with a string, I’ll get a ValueError since a string is not a valid value for that field.
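As a quick illustration (my example, using the Flavor object defined above):

flavor = Flavor()
flavor.vcpus = 4         # stored as the integer 4
flavor.vcpus = '8'       # coerced to the integer 8
flavor.swap = 'lots'     # raises ValueError: not a valid integer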

As long as I’ve told oslo.messaging to use the VersionedObjectSerializer from oslo.versionedobjects, I can provide a Flavor object as an argument to an RPC method and it is magically serialized and deserialized for me, showing up on the other end exactly as I sent it, including the version and including the type checking.
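For context, the wiring looks roughly like this minimal sketch using oslo.messaging and oslo.versionedobjects directly (Nova wraps this in its own RPC helpers, so the details there differ):

import oslo_messaging as messaging
from oslo_config import cfg
from oslo_versionedobjects import base as ovo_base

# Hand the versioned-object serializer to the RPC client so any
# versioned object argument is serialized (with its version) on
# the way out and rebuilt as a rich object on the receiving side.
transport = messaging.get_transport(cfg.CONF)
target = messaging.Target(topic='compute', version='4.0')
client = messaging.RPCClient(
    transport, target,
    serializer=ovo_base.VersionedObjectSerializer())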

If I want to make a change to the Flavor object, I can do so, but I need to make two important changes. First, I need to bump the version, and second I need to account for the change in the class’ obj_make_compatible() method. This method is the routine that I can use to take a Flavor 1.1 object and turn it into a Flavor 1.0, if I need to for an older node.

Let’s say I wanted to add a new property of “foobars” to the Flavor object, which is merely a count of the number of foobars an instance is allowed. I would denote the change in the comment above the version, bump the version, and make a change to the compatibility method to allow backports:

class Flavor(base.NovaObject):
    # Version 1.0: Initial version
    # Version 1.1: Add foobars
    VERSION = '1.1'

    fields = {
        . . .
        'foobars': fields.IntegerField(),
        }

    def obj_make_compatible(self, primitive, target_version):
        super(Flavor, self).obj_make_compatible(
            primitive, target_version)
        target_version = utils.convert_version_to_tuple(
            target_version)
        if target_version < (1, 1):
            del primitive['foobars']

The code in obj_make_compatible() boils down to removing the foobars field if we’re being asked to downgrade the object to version 1.0. There have been many times in nova where we have moved data from one attribute to another, or disaggregated some composite attribute into separate ones. In those cases, the task of obj_make_compatible() is to reform the data into something that looks like the version being asked for. Within a single major version of an object, that should always be possible. If it’s not then the change requires a major version bump.

Knowing when a version bump is required can be a bit of a challenge. Bumping too often can create unnecessary backport work, but not bumping when it’s necessary can lead to failure. The object schema forms a contract between any two nodes that use them to communicate, so if something you’re doing changes that contract, you need a version bump. The oslo.versionedobjects library provides some test fixtures to help automate detection, but sharing some of the Nova team’s experiences in this area is good subject matter for a follow-on post.
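One of those fixtures fingerprints each registered object, so a schema change without a version bump shows up as a failing test. A minimal sketch of its use (the recorded hash below is an illustrative placeholder):

from oslo_versionedobjects import fixture

# Compare current object fingerprints against recorded ones; any
# drift means the schema changed and the version may need a bump.
checker = fixture.ObjectVersionChecker()
expected = {'Flavor': '1.1-aeb9f0...'}  # placeholder fingerprint
mismatch_expected, mismatch_actual = checker.test_hashes(expected)
assert not mismatch_actual, 'Did you forget a version bump?'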

Once you have your data encapsulated like this, one approach to providing compatibility is to have version pins as described for RPC. Thus, during an upgrade, you can allow the operator to pin a given object (or all objects) to the version(s) that are supported by the oldest code in the deployment. Once everything is upgraded, the pins can be lifted.

The next thing to consider is how we get data in and out of this object form when we’re using a database for persistence. In Nova, we do this using a series of methods on the object class for querying and saving data. Consider these Flavor methods for loading from and saving to the database:

class Flavor(base.NovaObject):
    . . .
    @classmethod
    def get_by_id(cls, context, id):
        flavor = cls(context=context)
        db_flavor = db.get_flavor(context, id)
        # NOTE(danms): This only works if the flavor
        # object looks like the database object!
        for field in flavor.fields:
            setattr(flavor, field, db_flavor[field])
        return flavor

    def save(self):
        # Here, updates is a dict of field=value items,
        # and only what has changed
        updates = self.obj_get_updates()
        db.set_flavor(self._context, self.id, updates)

With this, we can pull Flavor objects out of the database, modify them, and save them back like this:

flavor = Flavor.get_by_id(context, 123)
flavor.memory_mb = 512
flavor.save()

Now, if you’re familiar with any sort of ORM, this doesn’t look new to you at all. Where it comes into play for Nova’s upgrades is how these objects provide RPC and database-independent facades.

Nova Conductor

Before we jump into objects as facades for the RPC and database layers, I need to explain a bit about the conductor service in Nova.

Skipping over lots of details, the nova-conductor service is a stateless component of Nova that you can scale horizontally according to load. It provides an RPC-based interface to do various things on behalf of other nodes. Unlike the nova-compute service, it is allowed to talk to the database directly. Also unlike nova-compute, it is required that the nova-conductor service is always the newest service in your system during an upgrade. So, when you set out to upgrade from Kilo to Liberty, you start with your conductor service.

In addition to some generic object routines that conductor handles, it also serves as a backport service for the compute nodes. Using the Flavor example above, if an older compute node receives a Flavor object at version 1.1 that it does not understand, it can bounce that object over RPC to the conductor service, requesting that it be backported to version 1.0, which that node understands. Since nova-conductor is required to be the newest service in the deployment, it can do that. In fact, it’s quite easy, it just calls obj_make_compatible() on the object at the target version requested by the compute node and returns it back. Thus if one of the API nodes (which are also new) looks up a Flavor object from the database at version 1.1 and passes it to an older compute node, that compute node automatically asks conductor to backport the object on its behalf so that it can satisfy the request.
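Stripped of the RPC plumbing, the conductor-side backport boils down to very little; a minimal sketch:

# Conductor knows the newest schema, so it can serialize any
# object at the (older) version a compute node asked for.
def backport(obj, target_version):
    return obj.obj_to_primitive(target_version=target_version)

primitive = backport(flavor, '1.0')  # contains no 'foobars' key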

Versioned Objects as RPC Facade

So, nova-conductor serves an important role for older compute nodes, providing object backports for compatibility. However, except for the most boring of calls, the older compute node is almost definitely going to have to take some action, which will involve reading and writing data, thus interacting with the database.

As I hinted above, nova-compute is not actually allowed to talk directly to the database, and hasn’t for some time, even predating Versioned Objects. Thus, when nova-compute wants to read or write data, it must ask the conductor to do so on its behalf. This turns out to help us a lot for upgrades, because it insulates the compute nodes from the database — more on that in the next section.

However, in order to support everything nova-compute might want to do in the database means a lot of RPC calls, all of which need to be versioned and tolerant of shifting schemas, such as Instance or Flavor objects. Luckily, the versioned object infrastructure helps us here by providing some decorators that turn object methods into RPC calls back to conductor. They look like this:

class Flavor(base.NovaObject):
    . . .
    @base.remotable_classmethod
    def get_by_id(cls, context, id):
        . . .

    @base.remotable
    def save(self):
        . . .

With these decorators in place, a call to something like Flavor.get_by_id() on nova-compute turns into an RPC call to conductor, where the actual method is run. The call reports the version of the object that nova-compute knows about, which lets conductor ensure that it returns a compatible version from the method. In the case of save(), the object instance is wrapped up, sent over the wire, the method is run, and any changes to the object are reflected back on the calling side. This means that code doesn’t need to know whether it’s running on compute (and thus needs to make an RPC call) or on another service (and thus needs to make a database call). The object effectively handles the versioned RPC bit for you, based on the version of the object.

Versioned Objects as Database Facade

Based on everything above, you can see that in Nova, we delegate most of the database manipulation responsibility to conductor over RPC. We do that with versioned objects, which ensure that on either side of a conversation between two nodes, we always know what version we’re talking about, and we tightly control the structure and format of the data we’re working on. It pays off immediately purely from the RPC perspective, where writing new RPC calls is much simpler and the versioning is handled for you.

Where this really becomes a multiplier for improving upgrades is where the facade meets the database. Before Nova was insulating the compute nodes from the database, all the nodes in a deployment had to be upgraded at the same time as a schema change was applied to the database. There was no isolation and thus everything was tied together. Even when we required compute nodes to make their database calls over RPC to conductor, they still had too much direct knowledge of the schema in the database and thus couldn’t really operate with a newer schema once it was applied.

The object layer in Nova sadly doesn’t automatically make this better for you without extra effort. However, it does provide a clean place to hide transitions between the current state of the database schema and the desired schema (i.e. the objects). I’ll discuss strategies for that next.

The final major tenet in Nova’s upgrade strategy is decoupling the actual database schema changes from the process of upgrading the nodes that access that schema directly (i.e. conductor, API, etc.). That is a critical part of achieving the goal.

Dan Smith originally posted this tutorial from his blog, Right Angles. Superuser is always interested in how-tos and other contributions, please get in touch:

Cover Photo// CC BY NC

by Dan Smith at November 20, 2015 05:14 PM

Ben Nemec

OVB Network Diagram and Update


It's been a while since my last OVB update, and I've been meaning to post a network diagram for quite a while now so I figured I would combine the two.

In general, the news is good. Not only have I continued to use OVB for my primary TripleO development environment, but I also know of at least a couple other people who have done successful deployments with OVB. There is also some serious planning going on around how to switch the TripleO CI cloud over to OVB to make it more flexible and maintainable.

OVB is also officially not a one man show anymore. Steve Baker has actually been contributing for a while, and has made some significant changes as of late, including the ability to use a single BMC instance to manage a large number of baremetal instances. This helps OVB make even better use of the hardware available. He also gets credit for a lot of the improvements to the documentation, which now exists and can be found in the Github repo.

At some point I want to move this work into the big tent so we can start using the upstream Gerrit and general infrastructure, but the project likely requires some extra work before that can happen (unit tests would seem like a prereq, for example). I (or someone else...subtle hint ;-) also need to sit down and do the work to get all of this working in regular public clouds. Because I've had a few people ask about this recently, I started an etherpad about the work required to upstream OVB. Feel free to jump on any of the tasks listed there. :-)

Network Diagram

Attached to this post is a network diagram generated with the slick new Network Topology page in Horizon. In it, you can see a few things:

  • At the bottom left, we see the not terribly interesting external and private Neutron networks, connected by a Neutron router. This is pretty standard stuff that you would see in almost any OpenStack installation.
  • At the bottom right, we see the first OVB-specific item: the bmc. As noted above, this is a single instance that has a number of openstackbmc services running on it. These services respond to IPMI commands, and control the baremetal instances in the upper left of the diagram. Note that it does not have to be on the same network as the baremetal instances because it never talks to them. All it needs is a way to talk to the host cloud, which it gets through the external network.
  • Around the middle of the diagram we see the undercloud instance. This is just a regular OpenStack instance that can be used to do baremetal-style deployments to the baremetal vms. In my case it's a TripleO undercloud, but you could use other baremetal deployment tools instead. Note that this instance is on both the private network (so you can access it via a floating ip, and so it has access to the bmc instance) and the provision network (which is an isolated Neutron network over which the baremetal instances can be provisioned). In this diagram it's also attached to a public network, which isn't being used in this case, but could also be attached to the baremetal instances to better simulate what a real baremetal environment might look like.

In any case, I hope this visualization of the network architecture of an OVB deployment helps anyone new to the project who might be having trouble wrapping their head around what's going on. As always, feel free to contact me if you want to discuss anything further.

by bnemec at November 20, 2015 04:27 PM

Tesora Corp

Short Stack: OpenStack Action Items, Fuel Joins the Big Tent, and the New Op on the Cloud Block

Welcome to the Short Stack, our regular feature where we search for the most intriguing OpenStack news. These links may come from traditional publications or company blogs, but if it’s about OpenStack, we’ll find the best ones.

Here are our latest links:

Two OpenStack operators share their latest action items | OpenStack Superuser

Matt Fischer and Clayton O’Neill shared the important lessons they learned at OpenStack Summit Tokyo. They divulged ‘action items’ that they would like to implement to optimize their cloud’s performance. Some of these suggestions included always running the latest version and using autoheal.

Database as a Service: New Op on the Cloud Block | Channel Partners Online

Tesora’s VP of Business Development, Brian Otis, explored the concept of Database as a Service (DBaaS) and the reasons behind its growing popularity. He described the basic premise of DBaaS as well as the positives of working with it in the cloud. Notably, Otis remarked that with DBaaS it becomes easier and faster to provision and operate a database in a secure public or private cloud.

Mirantis’ Fuel Project Joins the OpenStack Big Tent | Ostatic

This week, Mirantis announced that Fuel became an official OpenStack component of the Big Tent. Fuel, a library of configuration and management tools, made quite a difference as a cloud deployment and management toolset.

Podcast Episode 13: Sriram Subramanian, The Cloud Don | OpenStack: Now

OpenStack:Now’s Nick Chase and John Jainschigg interviewed Sriram Subramanian, The Cloud Don. The three discussed Sriram’s experiences at the recent Tokyo Summit, which he attended as an analyst for the first time this year. Subramanian participated in more in-depth conversations about use cases. He shared insights into how he posed candid questions about the direction of OpenStack to the Foundation and others, and summarized the type of data he would like to see released on OpenStack usage in the future.

Sticky situation: the serious business of stickers in open source

Rikki Endsley explained the dynamic of stickers within the open source community and how they serve as both badges of pride as well as tokens of support. She recounted her experience at a recent conference where she spoke with an event organizer about the multiple functions of stickers, including promoting open source projects, showing support for open source communities, acting as souvenirs, and more.

The post Short Stack: OpenStack Action Items, Fuel Joins the Big Tent, and the New Op on the Cloud Block appeared first on Tesora.

by Alex Campanelli at November 20, 2015 04:19 PM

IBM OpenTech Team

Keystone Design Summit Outcome and Goals for Mitaka

Better late than never! I took some time off after the summit, but here’s my blog about the Keystone Design Summit outcome and goals for Mitaka. Both Boris Bobrov and Dolph Mathews have already written fantastic recaps; I strongly recommend you take a look at those too. This blog is meant to summarize the action items from various design sessions, and should hopefully act as release notes in 6 months! This is by no means complete, but it’s certainly close!

To summarize the outcome in one sentence

We’ll be continuing the trend from our previous releases, focusing on performance, scalability, stability and adding just a few new features that are essential to both public and private clouds.

Keystone Server

Roles & Policy

Return names when listing role assignments NEW API
– Modify GET /v3/role_assignments to support an include_names flag
– This should improve usability for both Horizon and OpenStackClient
– Specification:
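As a rough sketch of how the new flag might be consumed once it lands (the endpoint and token below are placeholders, and the exact parameter syntax is whatever the spec finalizes):

    import requests

    KEYSTONE_URL = 'http://keystone.example.com:5000'  # placeholder endpoint
    TOKEN = 'gAAAA...'                                 # placeholder scoped token

    # Each returned assignment would carry names alongside IDs,
    # sparing Horizon and OpenStackClient one extra lookup per ID.
    resp = requests.get(KEYSTONE_URL + '/v3/role_assignments?include_names=true',
                        headers={'X-Auth-Token': TOKEN})
    print(resp.json())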

Implied roles NEW APIs
– Have one role imply many roles
– Specification:
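A hypothetical sketch of the shape this could take (role IDs below are placeholders; the final route is up to the spec): one call making the first role imply the second, so a single assignment confers both:

    import requests

    KEYSTONE_URL = 'http://keystone.example.com:5000'  # placeholder endpoint
    TOKEN = 'gAAAA...'                                 # placeholder scoped token

    # After this call, assigning the prior role also confers the implied role.
    requests.put(KEYSTONE_URL + '/v3/roles/%s/implies/%s'
                 % ('PRIOR_ROLE_ID', 'IMPLIED_ROLE_ID'),
                 headers={'X-Auth-Token': TOKEN})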

Domain Scoped Roles NEW APIs
– Allow roles to be created within a domain, effectively allowing per-domain-policy
– Specification:

Define Admin Project in config file
– Add the ability to define “this is my admin project”
– This should resolve a lot of the incorrect usage of the admin role being unnecessarily given to many projects (See bug 968696)
– Specification:


Create shadow accounts for any user that has been created or authenticated (via local SQL, LDAP or federation)
– Greatly improve the story for federated users, we will be able to assign roles directly and trace their actions more easily for billing and auditing
– Specification:

New specs that have APIs!
– Service provider and endpoint filtering, see specification:
– Mark identity providers as public or private, see specification:

Operator request: Clean up logging, far too much in DEBUG and not enough in INFO


Continue to make Fernet tokens the go-to token format for Keystone
– DevStack and Tempest support is still being worked on and needs to be completed before Keystone can make it the default format
– Improve documentation, lots of unofficial docs via blog posts are causing misconceptions


Retrieve the service catalog with an unauthenticated call
– Part of a larger cross-project effort, we will be looking to return a well defined service catalog on a new API
– This will allow for a better service-discovery story for OpenStack as a whole
– Implementation and API have yet to be finalized

Fixing broken things

– Pagination for projects, roles and domains is now possible since they are only available in SQL
– Pagination for users and groups in LDAP? We can add support for it, but YMMV

Custom TTL on tokens for long-running operations
– Make keystone_authtoken configurable with a custom TTL; token validation then uses this value instead of the token's expires_at

REST interface for domain config NEW API
– List default values of a domain config, see specification:


No more extensions – ever.
– The old routers and SQL backends will be deprecated in M and be removed in O
– Paste files will need to be updated to point to the new resources before O is released


Each of these items will follow the standard deprecation policy that the TC has now publicized.

v2.0 of the Identity API Deprecate in M, remove in O or greater
– We will maintain some v2.0 authentication calls, such as: POST /v2.0/tokens and GET /v2.0/tenants

PKI token format Deprecate in M, remove in O
– Contains a major security bug
– If PKI format is specified, the resultant token will be a UUID token

LDAP write support Deprecate in M, remove in O
– Rarely do OpenStack deployers want to write to LDAP, and more rarely do LDAP administrators want to allow this sort of operation


Eventlet Deprecated in K, to be removed in M
– May live to see another release, need confirmation from mailing list

LDAP as a resource backend, to store projects, domains and roles Deprecated in K, to be removed in M

Keystone Libraries

keystoneauth

We need to support federation protocols like SAML and Kerberos
– Since support for these pulls in additional libraries, they will be ‘optional’
– Install these optional plugins with: pip install keystoneauth[kerberos] or pip install keystoneauth[saml]
– The python-keystoneauth-saml repo will be removed (there were no releases of it)
– The python-keystoneclient-kerberos repo will become inactive and eventually removed (there were 3 minor releases)
Improve the documentation: show how to create plugins for the federation protocols, and also explain K2K (keystone-to-keystone) flows
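For reference, password authentication through keystoneauth already looks like the minimal sketch below (endpoint and credentials are placeholders); the optional kerberos and saml plugins slot into the same auth= parameter:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)
    # The session fetches and refreshes tokens transparently.
    print(sess.get_token())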


keystonemiddleware
– Adapt keystonemiddleware to use keystoneauth
– Tokenless auth support
– Deprecate certain auth_token configuration values

python-keystoneclient
– Only changes to the CRUD calls should be added or modified
– Authentication plugins should go into keystoneauth, and CLI should go into openstackclient
– Modify other python-*clients to use keystoneauth
– Deprecate auth plugins and session code (remove these in O)
– Potentially remove CLI and mark keystoneclient as 2.0 (need to check deprecation policy for clients)
– Potentially remove middleware and mark keystoneclient as 3.0 (need to check deprecation policy for clients)

The post Keystone Design Summit Outcome and Goals for Mitaka appeared first on IBM OpenTech.

by Steve Martinelli at November 20, 2015 05:24 AM

Week in OpenStack II


November 20, 2015 12:00 AM

November 19, 2015


OpenStack:Now Podcast Ep 13: Sriram Subramanian, The Cloud Don



OpenStack:Now’s Nick Chase and John Jainschigg talk with Sriram Subramanian, The Cloud Don, about the goings on at the OpenStack summit in Tokyo.

The post OpenStack:Now Podcast Ep 13: Sriram Subramanian, The Cloud Don appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at November 19, 2015 08:48 PM

Red Hat Stack

Does cloud-native have to mean all-in?

This is the first in a series of posts that delves deeper into the questions that IDC’s Mary Johnston Turner and Gary Chen considered in a recent IDC Analyst Connection. The first question asked:

Cloud-native application architectures promise improved business agility and the ability to innovate more rapidly than ever before. However, many existing conventional applications will provide important business value for many years. Does an organization have to commit 100% to one architecture versus another to realize true business benefits?

As Mary and Gary write, there are indeed “cost and performance benefits of greenfield, extreme scale cloud-native applications running on highly standardized, automated infrastructure.” However, as they also note, bringing in the bulldozers to replace all existing infrastructure and applications isn’t an option for very many businesses. There’s too much investment and, even if it were an option financially, the disruption involved in wholesale replacement would likely offset any efficiency gains.

It’s also worth observing that the goals and relevant metrics for traditional IT systems, especially in regulated industries that deal with highly-sensitive data, are typically different from those of cloud-native “systems of engagement.” Traditional systems prioritize characteristics like reliability and stability over the rapid introduction of new capabilities and features. To be clear, cloud-native architectures can be and are being designed and implemented to reliably perform traditional “systems of record” functions. But it’s more typically the benefits that cloud-native can bring to the development of new applications and types of applications (such as mobile) that are driving the interest in most enterprises. (And we see this dynamic reflected, for example, in OpenStack’s initial focus on cloud-native workloads rather than replicating all the features of enterprise virtualization.)

So what should enterprises do given that “many if not most of their core mission-critical applications are supported by conventional architectures” according to Mary and Gary?

It comes down to three things from my perspective.

The first is modernization. Just because a traditional IT infrastructure and application portfolio isn’t being radically reinvented using OpenStack, containers, and microservices design principles, doesn’t mean that it should be encased in amber and ignored. There are many incremental paths to improve efficiency and agility of existing environments.

One is continuing to move from old proprietary environments to modern open source ones like Linux. This is a well-mapped migration path with full support for every step of the transition. It lets you enhance IT performance, and increase flexibility—all at a reduced cost.

Applying agile application development and deployment practices, DevOps, is also important. Open source plays a key role in DevOps both as a source of innovative tooling and as a model for the iterative development, open collaboration, and transparent communities forming the culture that DevOps requires to succeed. DevOps is probably more associated with cloud-native application development but it can also benefit existing IT even though the tools and the pace of pushing code deployments will likely differ between the two environments.

Ansible, recently acquired by Red Hat, is an example of a DevOps tool being used by organizations even in their more traditional environments. Ansible provides a simple automation language for application infrastructure automation from servers to virtualized cloud infrastructures to containers. Ansible provides a path to DevOps for a broad class of enterprise users including both DevOps teams demanding agile practices and fast provisioning as well as business units which require simplicity above all else.

Finally, even while recognizing the differences between traditional and cloud-native environments, it’s necessary to build a bridge between the two. Red Hat provides a variety of tools to do so. Red Hat CloudForms automates and unifies infrastructure management, service brokering, and monitoring across a hybrid environment with policy-based controls. Red Hat also offers software-defined storage that can span multiple environments so that a consistent view of data can be provided to any application requiring it. JBoss middleware has messaging and business rules management to integrate new applications with existing data sources and business workflows.

Certainly, there are an incredible number of exciting new technologies and approaches that apply primarily to cloud-native infrastructures and application development. And forward-looking enterprises should be experimenting with them–at a minimum–as part of transforming themselves into digital businesses. But this transformation must also consider the world as it is and the huge investments that they’ve already made in IT, and shepherd that investment into the future wisely.

by ghaff at November 19, 2015 04:09 PM

OpenStack Superuser

A primer on Magnum, OpenStack containers-as-a-service

It’s difficult to right-size the excitement about containers, the old/new technology of the moment.

We can offer the skinny on OpenStack’s Magnum project from Adrian Otto, Rackspace distinguished architect, at the recent Summit Tokyo. Otto runs through what problems OpenStack’s containers-as-a-service project can really solve for you and when to give know-it-alls a kick in the pants.

Supersizing the project
Otto, who is also the PTL, has been talking about Magnum for several Summits. His favorite slide? The one he has to update the most.


“It shows that we are reviewing code, and we're making that code better before it gets upstream, but we're not making tons of revisions to this code,” he said. “We're doing a few revisions to make it better and then that code is getting in… there's some velocity here.”

Magnum’s changing vision

Magnum used to be all about containers as a first-class resource in OpenStack, he said. That’s about as passé as dial-up these days, because “we've already achieved this.”

Now, “Magnum is all about combining the best of infrastructure software with the best of container software… I want everyone to recognize that container software does not solve our problems,” he added.

Kick ‘em in the pants
There’s a lot of confusion about what containers are, Otto said. If someone asserts that containers are just like virtual machines but smaller and faster, there’s only one thing to do.

“Please kick someone in the nuts if they say that because they're not,” he said. “Containers are about things that are related to processes that run on hosts. Killing them, starting them, setting their environment variables, binding file system volumes, attaching terminals to them and running processes within them.” Cramming all of those capabilities into Nova, OpenStack’s compute project, would be a terrible fit, he said, so it was decided to create a new project with its own API.


What Magnum can solve

Most of the problems that crop up when you try to run applications on containers are still infrastructure problems, he said. These are problems like: “How do I connect my networks?” “What do I do with my storage?” “Where does my addressing come from?” “How are these things related?” “How do I orchestrate these? How do I scale them?”

Container software helps at the app layer and it helps with packaging and distribution — but it doesn't solve everything in the infrastructure. “Magnum is trying to take and vertically integrate solutions that solve an entire range of problems,” he added.

Magnum 101

Otto ran through the basics. First you have OpenStack — compute, networking and storage.


Magnum is an additional service that allows you to create a new entity, a new cloud resource called a Bay. A Bay is where your container orchestration engine (COE) lives. You can run Docker Swarm, Kubernetes and, with the Liberty release, Apache Mesos.

Bays were designed to provide users with the native tool experience — using the Docker CLI or the kubectl command against the cluster as you see fit.

“You should be able to enjoy the new features that surface in these various COEs as they're made,” Otto said, adding, “not have to wait for the Magnum team to build leaky abstractions on top of all that stuff in order to surface that to you.” Instead, you rely on Magnum to create the bay and scale that infrastructure from a capacity perspective; then you interact with, create, manage, and stop containers all using your native tools, he said.

DIY Containers
Magnum also provides a feature to create a container directly — more about this in his talk from Vancouver. It likewise allows you to create a pod in Kubernetes, but you always have the option of the native experience instead. Depending on which bay you choose, he said, you’ll get a different native API experience.

Nodes are essentially Nova instances. A Bay is simply a grouping of Nova instances, so all of the bays in Magnum today have at least these three abstractions.

A pod is a grouping of containers that run together on the same host. A service is a way of connecting a network port to a container and a bay is a grouping of Nova instances. Nodes are one-to-one related to Nova instances.

What’s new
Otto talked about his favorite new features in Magnum: Mesos Bay Type, Secure Bays (TLS), External Load Balancer Support and Multi-master support for Kubernetes.


The Future? Uncontainable excitement

“What I'm most proud of is that collaboration is now between 101 engineers who come from 28 different affiliations,” Otto said. “I think this is a testament to the excitement that we all feel about where this new technology might take us.”

How to get involved
Use Ask OpenStack for general questions
For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the tag [magnum]
Weekly meetings held on Tuesdays at 16:00 UTC.

You can watch his 30-minute talk on the OpenStack Foundation’s YouTube channel.


Cover Photo // CC BY NC

by Nicole Martinelli at November 19, 2015 04:01 PM

Kenneth Hui

Platform9 Just Announced Latest Release of Our Managed OpenStack


One of the benefits that Platform9 customers most value with our managed cloud offering is the ability to take advantage of new innovations and capabilities without having to shoulder the burden that often comes with new software releases. This includes all the testing that needs to be performed prior to putting new technology in production. It includes integration of new software with existing or new infrastructure. And it includes the burden of having to upgrade live production systems to a new software release. These all go away for Platform9 customers who rely on us to deliver new innovations and capabilities, such as OpenStack Neutron networking, as part and parcel of our cloud management SaaS offering.

This was the experience for our customers when we seamlessly upgraded them to release 1.3 of Platform9 and our customers “magically” received new block storage capabilities via our rollout of the OpenStack Cinder service. Now with the release of version 1.4 of Platform9, old and new customers alike will receive additional new capabilities such as dynamic network provisioning, application orchestration using OpenStack Heat, and other useful innovations. As with previous Platform9 releases, we will be able to upgrade our customers to the new release with minimal disruption.

In a post over on the Platform9 blog site, I briefly reviewed some of the new capabilities provided in our new 1.4 release of Platform9. I encourage you to take a look there to find out more.

Filed under: Cloud, Cloud Computing, OpenStack, Private Cloud, Virtualization, VMware Tagged: Cloud, Cloud computing, OpenStack, Platform9, Private Cloud, VMware, VMware vSphere, vSphere

by kenhui at November 19, 2015 03:51 PM

JJ Asghar

Characteristics of a Successful Chatroom Meeting

Author: JJ Asghar

Edited: Adam Leff

I’ve spent some time over the last few years in Chatroom meetings. Based on my experience, here are some characteristics of a successful Chatroom meeting for both the organizer and attendees.


For the organizer:

  • Create an agenda, and allow meeting participants to collaborate on it before the meeting.
  • Be sure to have at least one Chair for the meeting. The Chair opens, closes, and moderates the meeting.
  • Announce the meeting ahead of time using the main way to communicate with your user group. Use this opportunity to remind them of the meeting and offer them a link to the agenda for review and collaboration.
  • Leverage a tool such as meetbot as your scribe and timekeeper. If you can’t run something that can do it automatically for you, designate a note taker and timekeeper. Be sure to announce the note taker and timekeeper at the beginning of the meeting.
  • Before allowing discussion on topics not listed on the agenda, ensure all items on the agenda have been discussed first.
  • As all participants type at different speeds, allow for extra time for comments from slower typists. State the pause so the participants understand the reason for the silence:

Chair: OK, any other comments before we move on to item 2? Waiting 20 seconds for additional comment...

  • If it appears a topic has come to a conclusion and a decision has been reached, announce the decision and allow for dissenting opinions. Use this opportunity to clarify any aspects of the decision that are necessary. Provide a time-boxed pause to allow users to voice any dissent. For example:

Chair: Seems the group has decided that vi is better than emacs. If you disagree, please '-1'. Waiting 20 seconds for any feedback...
Attendee1: -1
Chair: Attendee1: can you explain?
Attendee1: Chair: Even though the group has decided this, I think this is still a matter of personal choice.
Attendee2: Chair: +1

  • Keep the conversation going by asking relevant questions and seeking out quieter attendees for their opinions. This helps keep the conversation active while making all participants feel their voice is being heard.
  • Take note of any topics that come up in discussion that were not listed as an agenda item. Add them to the next agenda to make sure that the issue raised is resolved, or provide time at the end of the meeting for discussion if time allows.
  • Most importantly, be friendly and helpful! The Chair is here to ensure a successful meeting with a successful outcome, and that can only happen if people feel welcomed, valued, and comfortable.


For the attendee:

  • Remember that you are there to share your opinions and gather input for a given topic or set of topics. Be constructive and helpful, and everyone will benefit.
  • The Chair is responsible for keeping to the agenda and moderating the flow of the meeting. The Chair may have to cut a conversation short in order to ensure all items on the agenda are discussed. If you wish to continue discussion on a particular topic over the allotted time, ask for an extension. If an extension is not possible or feasible, ask for the topic to be added to the next agenda, or seek out another form of communication, such as the group’s mailing list.
  • When the meeting starts, announce yourself as an attendee. Announcements such as o/, hello or I'm here are considered acceptable. In addition to making it clear you are here to participate, the note taker or meeting bot will have an accurate record of those in attendance.
  • Do your best to stay on topic. This is a forum for synchronous feedback and it’s easy for tangential conversations to muddy the waters. If you feel your topic needs to be discussed, ask the Chair to add it to the end of the agenda or propose it for a future meeting.
  • When replying, address the user directly. In addition to getting his/her attention, it helps other attendees follow the conversation flow as if it were happening face-to-face. For example:

Attendee1: I think vi is better than emacs.
Chair: No way, emacs is better than vi!
Attendee2: I think sublime text is the clear winner.
Attendee3: Attendee1: What about atom?
Attendee1: Attendee3: I do everything through a terminal, so atom isn't an option.
Attendee2: Chair: Not everyone likes to use parentheses in their configuration files.

  • An amazing aspect of online meetings is the ability to engage with users from all around the world on a single topic. These users come from different personal, cultural, and technical backgrounds. Some may not type as fast or as accurately as you, and some may be communicating in a language that is not their primary language. Be patient with your fellow attendees and be extra understanding of written inaccuracies.
  • Be friendly and respectful! You are here to share your opinion, hear other opinions, and learn from each other. Everyone’s input is valuable. Because tone-of-voice cannot properly be portrayed in a text-only format, keep jokes and sarcasm to a minimum to ensure no attendee misinterprets your message. If you feel offended by another attendee’s message, assume no malicious intent and ask him/her to re-explain their message.

November 19, 2015 12:05 AM

November 18, 2015


End of the Hot Snapshot, Make Way for the Cold Snapshot for Better Backup Consistency!

On November 2, 2015, Cloudwatt adopted the JUNO release of OpenStack.

With this new version, the OpenStack community decided to change the snapshot procedure to improve the consistency of snapshots, which was not guaranteed by the “hot snapshot” (no flush of RAM data and no file-system freeze to prevent writes during the snapshot).

Neither the Kilo nor the Liberty release notes plan a return of the “hot snapshot” functionality.

Principle of the “cold snapshot”: when launching a snapshot via the console, the CLI or the API, the “image-create” command is executed. The “cold snapshot” is then performed with a suspend/resume at the VM level, long enough to record its state, thus ensuring the consistency of the snapshot.

Depending on the type of the VM, we observed interruptions of up to several seconds.

If you are facing issues, you can contact our Support service.

by Jean-Brice at November 18, 2015 11:00 PM

Carl Baldwin

A Neutron Layer 3 Model

Since the beginning of OpenStack Neutron (then Quantum), its logical model has centered around a networking layer 2 construct named the Network. The semantics of this construct have not always been clear. In fact, there have been some attempts to model layer 3 networks with it. They have resulted in some awkward implementations and don’t clearly communicate to the end user the capabilities of the network. Recent discussions on the mailing list and examination of the code base have made it clear to me that the Network is meant to represent an L2 broadcast domain. Read more...

November 18, 2015 08:22 PM

SUSE Conversations

Carpe Diem; Reach to Billions of Users

Half of the world’s population has a mobile device connecting us (billions) to each other. It’s no surprise that mobile apps are at the heart of human interaction. Businesses want to acquire customers who are reachable through business apps. Decades ago, some of us (myself included) used to develop software applications on a …

+read more

The post Carpe Diem; Reach to Billions of Users appeared first on SUSE Blog.

by Naji Almahmoud at November 18, 2015 02:35 PM


Fuel Becomes an OpenStack Project under Big Tent


The OpenStack Big Tent just got a whole lot bigger. The Technical Committee has voted in the largest OpenStack project yet: Fuel. Fuel is an open source deployment and management tool for OpenStack. It’s a lesser-known project, but it’s huge, with 50 percent more code than Nova and 75 percent more commits per month than Neutron. Developed as an OpenStack community effort, Fuel provides an intuitive, GUI-driven experience for deploying and operating OpenStack, related community projects and plug-ins. In short, it provides a quick onramp to OpenStack.

[Chart: OpenStack Fuel big tent (source: OpenStack User Survey, p. 26)]

The Technical Committee, which is composed of elected representatives from most key players in the OpenStack community, accepted Fuel as an OpenStack project under the Big Tent governance model because the engineering process in Fuel is done in the OpenStack way. Mirantis uses Fuel as a part of Mirantis OpenStack and is a major contributor. The company’s pure-play approach has enabled the Fuel team to use the same process as upstream, from code reviews and bug tracking tools, to sharing email and IRC communication channels with other OpenStack teams, to establishing active collaboration with other OpenStack projects such as Puppet and Infrastructure. Today Fuel is flexible, decoupled from vendor-specific solutions, and enables multiple choices for key supporting technologies (from storage to networking to operating systems).

Fuel’s acceptance into the Big Tent is terrific news for Fuel contributors and the OpenStack community. OpenStack operators get a comprehensive and mature deployment service. OpenStack vendors get assurance that Fuel can and will support their favored OpenStack version and distribution. OpenStack developers can use Fuel to explore more complex multi-node deployment configurations.

It also goes a long way in assuring community members that the Fuel project will remain open and neutral. My hope is that this encourages other vendors to increase their contributions to enable additional platforms, OpenStack projects, and deployment configurations. More diverse contributions would not just expand Fuel’s functional scope, but also drive the flexibility of Fuel’s core components. It will also ensure the long term viability of the project by making it less dependent on investments from a single commercial entity.

To the current and future Fuel contributors: our work has just begun. The whole community will be paying close attention to Fuel, and will expect our open source team to uphold high standards of open collaboration, cross-project communication, adherence to OpenStack engineering practices and alignment with OpenStack development tools.

We have solved the problem of initial deployment, so the next big challenge for the Fuel team is to simplify the “day 2” operation of OpenStack environments: applying updates, upgrades, configuration changes, adding and removing services and plugins, scaling up and down, logging, monitoring, and alerting. This whole class of problems is often called “lifecycle management,” and remains largely unaddressed by existing tools.

Fuel developers kicked off the discussion of lifecycle management use cases with the OpenStack community at large before the OpenStack summit in Tokyo in October, focusing on what it means not only for deployment services, but for all OpenStack projects. The Technical Committee is also looking into encouraging all OpenStack projects to consider these use cases by labeling projects with tags such as “supports-upgrade” that indicate which aspects of this problem are supported out of the box.

With increased interest in Fuel from other OpenStack developers, turning Fuel itself and its massive body of deployment tests into a development tool for other OpenStack projects will become another major theme for the Fuel team. While some projects such as Puppet OpenStack are interested in validating their code with the Fuel CI, the OpenStack Infrastructure team is already calling for the next step: porting the same tests to nodepool and other tools used to operate the OpenStack CI on top of public OpenStack clouds.

We have a great opportunity and responsibility to make OpenStack easy to install and operate at scale. Let’s work together to make Fuel the best it can be!

Want to learn more? Download the latest version of Fuel or contact Mirantis.

The post Fuel Becomes an OpenStack Project under Big Tent appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Boris Renski at November 18, 2015 02:00 PM

OpenStack Superuser

OpenStack Mitaka release: what’s next for Ansible, Oslo and Designate

Each release cycle, OpenStack project team leads (PTLs) introduce themselves, talk about upcoming features for the OpenStack projects they manage, plus how you can get involved and influence the roadmap.

Superuser will feature these summaries of the videos weekly; you can also catch them on the OpenStack Foundation YouTube channel. This round of interviews covers Ansible, Oslo and Designate.



What: OpenStack-Ansible, a collection of Ansible playbooks and roles to deploy OpenStack.
Who: Jesse Pretorius, PTL. Day job: DevOps engineer, Rackspace.
Burning issues


“We’re trying to do things that are simple to use and simple to understand. OpenStack is complicated enough; operators shouldn’t have to learn another complex deployment tool to do what they need to do.”

What’s next


What matters in Mitaka


“One thing that struck me in the conversations I had at the Summit was that the deployment projects have a unique position in the community. We are the first to test a lot of things that are coming out of the development community,” Pretorius says. “We get to either lead the operators in new feature deployments, or we get to figure out some of the hard things that the operators don't really have the time to figure out. As a deployer community, we could spend a lot more time getting together and being that interface between the two parties, that would be a valuable thing…"

Get involved!
Use Ask OpenStack for general questions
For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the tag [ansible]
Participate in the weekly meetings: community meeting, on IRC in #openstack-meeting-4, on Thursdays at 16:00 UTC, or bug triage, on IRC in #openstack-ansible, on Tuesdays at 16:00 UTC.



What: Oslo is the OpenStack Common Libraries project.
Who: Davanum Srinivas aka Dims, PTL. Day job: Community architect/principal software engineer, Mirantis.

Burning issues


What’s next


What matters in Mitaka


"Based on feedback, we'll be working on messaging a lot this cycle," Srinivas added. "All [of these efforts] are attempts to stabilize the infrastructure so we can scale out to a huge number of nodes, compute nodes for example," he added.

Get involved!

“Typically, Oslo developers work primarily on other projects," says Srinivas. "We tend to be folks who work part-time on things they really care about in Oslo, so we welcome folks who start contributing and usually quickly promote them as core [contributors.]”

Use Ask OpenStack for general questions
For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the tag [oslo]
Participate in the weekly meetings: held Mondays, 16:00 UTC.


What: Designate, OpenStack’s DNS-as-a-service project.

"Traditionally, in companies, DNS is difficult to do. Users have to file a ticket with IT or they have to go edit text files on some server and do it all manually. Designate provides an easy-to-use multi-tenant API to create and update DNS records," says Graham Hayes, PTL, and senior software engineer, Hewlett-Packard Enterprise.

Burning issues


"We've been going through a large amount of re-architecture," Hayes says, adding that the issues discussed at the Summit Tokyo were largely around those changes in addition to the points mentioned above.

What’s next

What matters in Mitaka

In the drive to make it more manageable, he says: “It's always good to put things into the hands of users and get feedback, so we're taking that feedback now and updating the Horizon panels with it.”

Get involved!
Use Ask OpenStack for general questions
For roadmap or development issues, subscribe to the OpenStack development mailing list, and use the tag [designate]
Participate in the weekly meetings: held at #openstack-meeting-alt on freenode at 17:00 UTC every Wednesday.

Cover Photo: OpenStack Foundation.

by Superuser at November 18, 2015 12:51 PM

Rossella Sblendido

How VMs get access to their metadata in Neutron

The metadata for a VM are delivered by the Nova metadata service. Nova needs to know the instance ID of a VM to be able to deliver its metadata. Neutron has a special agent (the metadata agent) whose only job is to add this information to the HTTP headers of the VM’s metadata request. Let’s see in more detail how it works…

There are two possible configurations:

  • Routed networks
  • Isolated networks

Routed network

In this case the VM is on a network that is connected to a router. A router is implemented in Neutron using a network namespace. There’s a specific agent that handles routers in Neutron: the L3 agent. In routed network mode the L3 agent is also in charge of spawning the metadata proxy. As the name says, the metadata proxy is just a proxy that forwards requests to the metadata agent using a Unix domain socket. When a VM sends a metadata request, the request reaches the router, since it’s the default gateway. In the router namespace there’s an iptables rule that redirects traffic whose destination is the metadata server to local port 9697.

vagrant@vagrant:~$ sudo ip netns exec qrouter-5c41b22f-a874-4689-8b93-e82640541929 iptables -t nat -L | grep redir
REDIRECT tcp -- anywhere tcp dpt:http redir ports 9697

The metadata proxy is listening to this port.

vagrant@vagrant:~$ sudo ip netns exec qrouter-5c41b22f-a874-4689-8b93-e82640541929 netstat -atp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 *:9697 *:* LISTEN 7753/python

When the proxy receives a packet it knows: 1) the IP of the VM that is sending the request; and 2) the ID of the router connected to the network the VM is on, since there’s one proxy per router. It adds this information (IP of the VM and router ID) to the HTTP headers and forwards the request to the metadata agent. The metadata agent uses the router ID to list all the networks connected to that router and identifies the one the VM belongs to. Then it queries the Neutron server for the instance ID of the VM, using the IP and the network ID as filters. It adds the instance ID to the HTTP request and forwards the request to Nova. Yuppie!
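Concretely, this information travels as HTTP headers. The sketch below illustrates the headers involved (the function and its arguments are placeholders, not Neutron's actual code); the signature is an HMAC of the instance ID computed with a secret shared between the metadata agent and Nova, so Nova can trust the instance ID it receives:

    import hashlib
    import hmac

    def metadata_headers(vm_ip, router_id, instance_id, shared_secret):
        # Added by the metadata proxy before forwarding to the metadata agent:
        headers = {'X-Forwarded-For': vm_ip,
                   'X-Neutron-Router-ID': router_id}
        # Added by the metadata agent before forwarding the request to Nova:
        headers['X-Instance-ID'] = instance_id
        headers['X-Instance-ID-Signature'] = hmac.new(
            shared_secret, instance_id, hashlib.sha256).hexdigest()
        return headers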

Isolated networks

When a network is not connected to a router, how can a VM get its metadata? Well, there’s a flag that you can set in the dhcp agent config file: enable_isolated_metadata. If it’s set to True, the dhcp agent will do some magic. Let’s see the details.

The dhcp agent is the one in charge of DHCP. DHCP is not only about assigning IP addresses; there are other options. For example, Option 121 sets a static route. That’s exactly what the dhcp agent uses to tell the VM that the next hop to reach the metadata server is the IP of the dhcp port. If you set enable_isolated_metadata to True and you ssh into the VM you’ll see:

$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default <gateway IP> 0.0.0.0 UG 0 0 0 eth0
<subnet> * <netmask> U 0 0 0 eth0
169.254.169.254 <dhcp port IP> 255.255.255.255 UGH 0 0 0 eth0

where, in my case, the gateway shown for 169.254.169.254 is the IP of the dhcp port:

vagrant@vagrant:~$ neutron port-show eb0cd637-0a3a-40a2-90ac-064ef2bca05d
| Field                 | Value                                                                             |
| admin_state_up        | True                                                                              |
| allowed_address_pairs |                                                                                   |
| binding:host_id       | vagrant-ubuntu-trusty-64.localdomain                                              |
| binding:profile       | {}                                                                                |
| binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}                                    |
| binding:vif_type      | ovs                                                                               |
| binding:vnic_type     | normal                                                                            |
| device_id             | dhcpd439385c-2745-50dd-91dd-8a252bf35915-7fb0c0f4-7ed1-4e4f-8683-ec187a396c51     |
| device_owner          | network:dhcp                                                                      |
| extra_dhcp_opts       |                                                                                   |
| fixed_ips             | {"subnet_id": "6bad6b4a-23fc-4864-a6d0-668aab7d9486", "ip_address": ""} |
| id                    | eb0cd637-0a3a-40a2-90ac-064ef2bca05d                                              |
| mac_address           | fa:16:3e:c1:62:ab                                                                 |
| name                  |                                                                                   |
| network_id            | 7fb0c0f4-7ed1-4e4f-8683-ec187a396c51                                              |
| security_groups       |                                                                                   |
| status                | ACTIVE                                                                            |
| tenant_id             | df3187034bcd49a18659c30584d8767a                                                  |


So the VM sends the metadata request to the dhcp namespace (that’s where the dhcp port is). In this namespace the dhcp agent has spawned a metadata proxy that is listening on port 80.

vagrant@vagrant:~$ sudo ip netns exec qdhcp-7fb0c0f4-7ed1-4e4f-8683-ec187a396c51 netstat -atp
 Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
 tcp 0 0 *:http *:* LISTEN 18968/python

The proxy knows the network ID and the IP of the VM (it’s in the request). It adds this info to the HTTP request and forwards it to the metadata agent. As before, the metadata agent gets the instance ID of the VM from the Neutron server using the network ID and the IP as filters. It adds the instance ID to the request and forwards it to Nova. Yuppie!

Debugging !

If you use devstack, you can just join the screen session

screen -x

and you will see the metadata server in the window named q-meta. You can set breakpoints directly in the code using pdb.
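For example, dropping this standard-library one-liner into the handler you are interested in pauses execution there, letting you inspect variables from the q-meta window:

    import pdb; pdb.set_trace()  # step with 'n', print variables with 'p <name>'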

It’s a bit trickier to debug the metadata proxy, since it’s not in the screen session. Here is what I do: in _get_metadata_proxy_callback, add the ‘--nodaemonize’ flag to the command line. You can also specify the log directory if you want to access the logs: ‘--log-dir=/opt/stack/logs’. To make the dhcp agent restart the proxy do:

neutron dhcp-agent-network-add <dhcp_agent_id> <net_id>
neutron dhcp-agent-network-remove <dhcp_agent_id> <net_id>

Now you can do ‘ps | grep metadata’ and copy the command line to spawn the proxy:

sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ip netns exec qdhcp-284eaa7e-082b-4e60-9e4f-3150647d4fdd neutron-ns-metadata-proxy --pid_file=/opt/stack/data/neutron/external/pids/284 --metadata_proxy_socket=/opt/stack/data/neutron/metadata_proxy --network_id=284eaa7e-082b-4e60-9e4f-3150647d4fdd --state_path=/opt/stack/data/neutron --metadata_port=80 --log-dir=/opt/stack/logs --nodaemonize --debug --verbose

Kill the current proxy process. Create a new window in the screen session, which you can name q-meta-proxy. Paste the command line you copied there to start the proxy. Now you can modify the source code and debug the metadata proxy directly. Don’t forget that you need to restart the proxy every time you modify the code. To make the VM send a metadata request you can just ssh into the VM and query the well-known metadata address, for example:

curl http://169.254.169.254/openstack/latest/meta_data.json


by Rossella Sblendido at November 18, 2015 10:31 AM

Adam Young

Translating Between RDO/RHOS and Upstream OpenStack releases

There is a straightforward mapping between the version numbers used for RDO and Red Hat Enterprise Linux OpenStack Platform releases and the upstream releases of OpenStack. I can never keep them straight. So, I write code.

UPDATE: missed Juno before…this is why we code review


# Upstream releases in order; RHOS/RDO version N corresponds to upstream
# release N+3 (RHOS 0 was based on Diablo).
upstream = ['Austin', 'Bexar', 'Cactus', 'Diablo', 'Essex', 'Folsom',
            'Grizzly', 'Havana', 'Icehouse', 'Juno', 'Kilo', 'Liberty',
            'Mitaka', 'N', 'O', 'P', 'Q', 'R', 'S']

# The first three upstream releases predate RHOS, hence the offset of 3.
for v in range(0, len(upstream) - 3):
    print "RHOS Version %s = upstream %s" % (v, upstream[v + 3])

RHOS Version 0 = upstream Diablo
RHOS Version 1 = upstream Essex
RHOS Version 2 = upstream Folsom
RHOS Version 3 = upstream Grizzly
RHOS Version 4 = upstream Havana
RHOS Version 5 = upstream Icehouse
RHOS Version 6 = upstream Juno
RHOS Version 7 = upstream Kilo
RHOS Version 8 = upstream Liberty
RHOS Version 9 = upstream Mitaka
RHOS Version 10 = upstream N
RHOS Version 11 = upstream O
RHOS Version 12 = upstream P
RHOS Version 13 = upstream Q
RHOS Version 14 = upstream R
RHOS Version 15 = upstream S

I’ll update once we have names for N and O.
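In the meantime, the list above doubles as a lookup table; a small sketch reusing the upstream list from the snippet:

    rhos_of = dict((name, v) for v, name in enumerate(upstream[3:]))
    print "Liberty is RHOS %s" % rhos_of['Liberty']  # -> Liberty is RHOS 8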

by Adam Young at November 18, 2015 02:41 AM

November 17, 2015

OpenStack Superuser

Two OpenStack operators share their latest action items

The OpenStack Summit Tokyo was, as always, educational, exciting, entertaining and energizing.

After we flew back and shook off the jet lag, the question was what will we do with what we learned and discussed? What actions will we take as an operator to make our cloud more stable, performant and future-proof?

After some team discussions, here are the things that we are taking immediate action on -- either doing or investigating -- based on what we learned in Tokyo.

On the first day of the summit, we attended a talk called “RabbitMQ Operations for OpenStack.” RabbitMQ is always a sore spot and a dangerous area to poke in an OpenStack deployment. In this talk, Michael Klishin from Pivotal Labs gave dozens of tips on how to operate RabbitMQ.

A few of these tips really hit home:

  • “Always run the latest version.” I was a bit horrified, but not surprised, to find out that we were two full versions out of date.
  • “Tune your FDs.” We’d already been hit by this: the default file descriptor limit is way too low, and as we add capacity and nodes the number in use keeps climbing.
  • “Use autoheal.” We’re not and we should be.
  • “Optimize the TCP config.” This paraphrases about 5 minutes of the talk. The good news is that the newer Puppet modules do most of this for us, so we upgraded the Puppet module, too.

The thing with Rabbit is that if you’re going to take it down for a reconfig or an upgrade, you only want to do it once. So we spent some time digging into all the rest of his optimizations and also wrote up an Ansible-based RabbitMQ upgrade playbook. We can happily say that we finished rolling out this upgrade of RabbitMQ (from 3.3 to 3.5) to production last week. This included our optimized configuration.

Another talk that we enjoyed (and led along with GoDaddy) was “Duct-Tape, Bubblegum, and Bailing Wire: The 12 Steps in Operating OpenStack.” In this talk we learned about a bunch of issues that operators had: little problems that crop up and cause headaches. There were plenty of other talks in this category, including hallway discussions, and as an outcome we made a few small tweaks in our environment:

  • Raising kernel.pid_max (sysctl): We had the default value and according to other operators this can cause issues with Ceph. This rolled to production last week.
  • Enabling the neutron root-wrap daemon: Neutron wastes a lot of time exec’ing rootwrap and sudoing, and the daemon should make Neutron’s calls to things like ip netns much faster. This change is baking in our dev environment while we test how stable it is.

Finally, we both attended lots of Ops sessions. Much of the discussion in these sessions was on prepping for the Liberty upgrade. We have a philosophy that we always try to upgrade fairly quickly after the summit so we’re getting some tasks done now that will make it easier when we do it. One of the big things we’re looking forward to in Liberty is improved Fernet token performance. Fernet tokens are marching towards being the default token provider and although they work well for us, we’d like them to be faster. There was a great design session on these changes on Wednesday of the summit. Another major thing we’re going to enjoy in Liberty is the Neutron OVS agent fix. Right now in Kilo (and before) when the OVS agent restarts it can interrupt networking to the customer VMs we host. This makes host maintenance and upgrades painful. Unfortunately, this fix will not be backported to Kilo, but our upgrades will be much easier after Liberty.

Due to some infrastructure work and holiday closures, we’re not starting the Liberty upgrade yet, so what pre-Liberty work are we doing now?

One topic that saw a lot of discussion is the requirement to be on Kilo.1 for Nova before upgrading to Liberty. In general, upgrading to a new minor release of OpenStack is fairly straightforward, but as mentioned before, restarting the OVS agent to do that upgrade can be really disruptive. Additionally, package dependencies can force you to upgrade everything on a box even if you just want a newer copy of Nova. For these reasons, we’re currently working on moving Nova services into Docker containers so that we can upgrade just that single service.

Keystone deprecated eventlet during the Kilo cycle, so we’ve already switched over to serving keystone with Apache. However, because other services will probably also deprecate eventlet, and for performance reasons, we’re looking to switch all of our API services to a higher-performing WSGI server. We’re moving more and more services into containers, and running Apache inside a container would be workable but seemed like overkill. Reading up, we learned that most people now recommend uWSGI for this; we’ve been experimenting with it and are getting close to rolling out our first service (heat) using it soon.

Tokyo gave us a ton of ideas and also a ton of work, but we’re confident that these changes will make our cloud better, our lives as operators easier and our customers happier.

You can catch Matt Fischer, principal engineer at Time Warner Cable, on Twitter at @openmfisch. Clayton O'Neill, also a principal engineer at TWC, is on Twitter at @clayton_oneill.

Superuser is always interested in how-tos and other contributions, please get in touch:

Cover Photo // CC BY NC

by Matt Fischer and Clayton O'Neill at November 17, 2015 06:25 PM


RDO Community Day @ FOSDEM

We're pleased to announce that we'll be holding an RDO Community Day in conjunction with the CentOS Dojo on the day before FOSDEM. This event will be held at the IBM Client Center in Brussels, Belgium, on Friday, January 29th, 2016.

You are encouraged to send your proposed topics and sessions via the Google form. If you have questions about the event, or proposed topics, bring them either to the rdo-list mailing list, or to the #rdo channel on the Freenode IRC network.

If you're thinking of attending either the RDO Day, or the CentOS Dojo, please register so that we know how many people to expect.

by Rich Bowen at November 17, 2015 04:43 PM

Automated API testing workflow

Services exposed to a network of any sort are at risk of security exploits. The API is the primary target of such attacks, and it is often abused by input that developers did not anticipate.

This article introduces security testing of OpenStack services using fuzzing techniques and demonstrates how these services can be automatically checked for security defects. First, I will introduce a set of tools called RestFuzz to demonstrate how it works in practice. Second, I will discuss the limitations of RestFuzz and how more advanced techniques could be used.

The end goal is to design an automatic test to prevent a wide range of security bugs to enter OpenStack's source code.

How a fuzzer works

A fuzzer's purpose is to discover issues like OSSA 2015-012 by injecting random inputs into the target program. This advisory shows how a single invalid input, accepted by the API, broke the entire L2 network service. While code review and functional tests are not good at spoting such mistake, a fuzz test is a proven strategies to detect security issues.

A fuzzer requires at least two steps: first, inputs need to be generated and fed to the service; and second, errors need to be detected.

OpenStack API description

To reach the actual service code, a fuzzer will need to go through the REST interface routing, which is based on the HTTP method and url. One way to effectively hit the interesting parts of the service code is to replay known valid HTTP queries. For example, a fuzzer could re-use HTTP requests created by tempest functional tests. However, this method is limited by the quality of its traces. Instead, RestFuzz works the other way around and generates valid queries based on API definitions. That way, it covers actions that functional tests do not use, such as unusual combinations of parameters.

To keep the process simple, RestFuzz will need a description of the API that defines the methods' endpoint and inputs' types. While in theory the API description could be generated dynamically, it's easier to write it down by hand.

As with a C function declaration, a REST API method description can be written in YAML format like this:

  - name: 'network_create'
    url: ['POST', 'v2.0/networks.json']
    inputs:
      network:
        name: {'required': 'True', 'type': 'string'}
        admin_state_up: {'type': 'bool'}
    outputs:
      network_id: {'type': 'resource', 'json_extract': 'lambda x: x["network"]["id"]'}

This example from the network api defines a method called "network_create" using:

  • The HTTP method and url to access the API.
  • The description of the method's parameters' names and types.
  • The description of the expected outputs, along with a lambda gadget to ease value extraction from the JSON output.

The method object shows how to call network_create based on the above description. Note that the call procedure takes a dictionary as JSON inputs. The next chapter will cover how to generate such inputs dynamically.
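To make that concrete, here is a minimal sketch (plain python-requests, not RestFuzz's actual method object, which wraps the same idea) of turning such a description into an HTTP call:

    import json
    import requests

    def call_method(base_url, token, desc, inputs):
        # desc['url'] holds the HTTP verb and path, e.g. ['POST', 'v2.0/networks.json']
        verb, path = desc['url']
        return requests.request(verb, "%s/%s" % (base_url, path),
                                headers={'X-Auth-Token': token,
                                         'Content-Type': 'application/json'},
                                data=json.dumps(inputs))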

Input generation

An input constructor needs to be implemented for each type. The goal is to generate data known to cause application errors.

The input_generator object shows how the api call parameters can be dynamically generated using the method input description shown above. Here are a couple of input generators:

    import random

    def gen_string():
        chunk = open("/dev/urandom").read(random.randint(0, 512))
        return unicode(chunk, errors='ignore')

    def gen_address_pair():
        return {
            "ip_address": generate_input("cidr"),
            "mac_address": generate_input("mac_address"),
        }

Note that "resource" types are UUID data, which can't be randomly generated. Thus, the fuzzer needs to keep track of method outputs to reuse valid UUID whenever possible.

The fuzzer can now call API methods with random inputs, but it still needs to monitor the service API's behavior. This leads us to the second step: error detection.

OpenStack error detection

This chapter covers a few error detection mechanisms that can be used with OpenStack components.

HTTP Error code

OpenStack services' API will return theses important error codes:

  • 401 indicates the keystone token needs to be refreshed
  • 404 indicates a UUID is no longer valid
  • 500 indicates an unexpected error
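A rough illustration (not RestFuzz's actual health code) of acting on these codes:

    def classify(status_code):
        if status_code == 401:    # refresh the keystone token and retry
            return 'refresh-token'
        if status_code == 404:    # a stored UUID went stale, drop it from the pool
            return 'stale-resource'
        if status_code >= 500:    # unexpected failure, record the query for triage
            return 'defect'
        return 'ok'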

Tracebacks in logs

Finding tracebacks in logs is usually a good indicator that something went wrong. Moreover, using file names and line numbers, a traceback identifier can be computed to detect new unique errors. The health object features a collect_traceback function.
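A sketch of such an identifier (not the actual collect_traceback implementation): hash only the stable parts of a traceback, its file/line pairs, so each unique failure path yields one ID:

    import hashlib
    import re

    def traceback_id(log_text):
        frames = re.findall(r'File "([^"]+)", line (\d+)', log_text)
        return hashlib.sha1(repr(frames).encode()).hexdigest()[:12]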

API service process usage

Finally, cgroups can be used to monitor the memory and CPU usage of API services. High CPU load or constant growth of reserved memory are also good indicators that something went wrong. However, this requires some mathematical calculations which are yet to be implemented.

Fuzzing Workflow

Altogether, a fuzzing process boils down to the loop sketched after this list:

  • choose a random method
  • call method using random inputs
  • probe for defect
  • repeat
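In pseudo-Python, the whole loop is roughly the following (a simplified sketch; methods, random_inputs and probe stand in for RestFuzz's real objects):

    import random

    def fuzz(methods, random_inputs, probe):
        while True:
            method = random.choice(methods)             # choose a random method
            resp = method.call(random_inputs(method))   # call it with random inputs
            event = probe(resp)                         # probe for defects
            if event:
                print(event)                            # report and keep fuzzing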

The ApiRandomCaller object uses random.shuffle to pick a random method and returns an Event object to the main process, which starts here.

RestFuzz features

Console output:

  $ restfuzz --api ./api/network.yaml --health ./tools/
  [2015-10-30 06:19:09.685] port_create: 201| curl -X POST -d '{"port": {"network_id": "652b1dfa-9bcb-442c-9088-ad1a821020c8", "name": "jav&#x0D;ascript:alert('XSS');"}}' -> '{"port": {"id": 2fc01817-9ec4-43f2-a730-d76b70aa4ea5"}}'
  [2015-10-30 06:19:09.844] security_group_rule_create: 400| curl -X POST -d '{"security_group_rule": {"direction": "ingress", "port_range_max": 9741, "security_group_id": "da06e93b-87a3-4d90-877b-047ab694addb"}}' -> '{"NeutronError": {"message": "Must also specifiy protocol if port range is given.", "type": "SecurityGroupProtocolRequiredWithPorts", "detail": ""}}'
  [2015-10-30 06:19:10.123] security_group_create: 500| curl -X POST -d '{"security_group": {"name": ["Dnwpdv7neAdvhMUcqVyQzlUXpyWyLz6cW2NPPyA6E8Z9FbvO9mVn1hs30rlabVjtVHy6yCQqpEp0xcF1AsWZYAPThstCZYebxKcJaiS7o7fS0GsvG3i8kQwNzOl5F1SiXBxcmywqI9Y6t0ZuarZI787gzDUPpPY0wKZL69Neb87mZxObhzx4sgWHIRGfsrtYTawd6dYXYSeajEDowcr1orVwJ6vY"]}}' -> '{"NeutronError": {"message": "Request Failed: internal server error while processing your request.", "type": "HTTPInternalServerError", "detail": ""}}', /var/log/neutron/q-svc.log
  File "/opt/stack/neutron/neutron/api/v2/", line 83, in resource
    result = method(request=request, **args)
  File "/opt/stack/neutron/neutron/api/v2/", line 391, in create
  File "/opt/stack/neutron/neutron/api/v2/", line 652, in prepare_request_body
  File "/opt/stack/neutron/neutron/extensions/", line 195, in _validate_name_not_default

Network topology after one hour of fuzz test:

Random Network Topology

Interesting features:

  • The API description works for block storage (Cinder), image (Glance), network (Neutron) and DNS (Designate).
  • The input generator randomly interchanges types using a "chaos monkey" behavior to abuse the API even more.
  • The health plugin implements custom probes per service.
  • The workflow is simple: methods are called in random order. An early version tried to build a graph to walk through the API more efficiently, but it turns out that calling methods randomly is efficient enough and keeps the code much simpler. See this image for a dependency graph of the network API.
  • More importantly, it has already found several security defects, like bug #1486565 or bug #1471957. The full list is available in the README.

However, RestFuzz is a proof of concept and it's not ready to be used as an efficient gate mechanism. The next part of this article discusses the limitations encountered and what needs to be done to get something awesome!

Limitations and improvements

It's built from the ground up

RestFuzz currently only requires python-requests and PyYAML. A good chunk of the code could be replaced by frameworks such as syntribos and/or gabbi.

API description

Managing API descriptions is a major limitation, as the fuzzer will only be as good as the API description. OpenStack really needs to provide API descriptions in a unified and consistent format:

  • API Documentation does not describe the input object precisely enough, making it impossible to use automatically.
  • Service documentation is sporadic and REST API routing is done differently in most OpenStack projects.
  • Command line tools may provide a good base, but they usually omit unusual parameters and service extensions.

Having a unified API description would give a fuzzer greater code coverage.

Complex types generation

A formal description of API inputs is a real challenge. Some services like Heat or Sahara can take flat files as input, which would require dedicated input generators. Other services, like Designate, need coherent inputs to pass early checks; for example, the domain name used at zone creation needs to be reused for record creation.

In theory, this limitation could be mitigated using better instrumentation like guided fuzzing.

Guided fuzzing

A modern fuzzer like American Fuzzy Lop (afl) is able to monitor which instructions are executed by the target program and then use "rock-solid instrumentation-guided genetic algorithm" to produce inputs. In short, afl can generate valid input out of thin air without any description.

However, afl is difficult to put in place for network services because it requires modifying the services. python-afl bridges the Python interpreter with the fuzzer, and may be interesting for OpenStack services testing.
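
A minimal python-afl harness could be as small as this sketch, where parse_request() stands in for a hypothetical target function and afl-fuzz mutates the data arriving on stdin:

    import sys

    import afl

    afl.init()
    parse_request(sys.stdin.read())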

Anomalies detection probes

Better watchdog processes could be used in parallel to detect silent logic flaws. Stacktraces and 500 errors most often reveal basic programming flaws. If one of the following actions fails, it's a good sign something is defective:

  • Boot an instance, assign an IP and initiate a connection
  • Test methods with expected output, as with gabbi
  • Run functional tests

Fuzzer As A Service

A fuzzer could also be implemented as an OpenStack service that integrates nicely with the following components:

  • Compute resources with Nova
  • Database store with Trove
  • Message bus for parallel testing with Zaqar
  • Data processing with Sahara/Spark


Fuzzing is a powerful strategy to proactively detect security issues. Similar to Bandit, the end goal is to have an automatic test to detect and prevent many kinds of code defects.

While RestFuzz is really just a naïve proof of concept, it nonetheless revealed important limitations such as the lack of API descriptions. I would appreciate feedback, and I would like to create an upstream community to implement OpenStack fuzz testing.

This effort could start as an offline process with automatic reporting of tracebacks and 500 errors. But ultimately it really needs to be used as a gating system to prevent erroneous commits from being released.

I'd like to thank fellow Red Hat engineers Gonéri Le Bouder, Michael McCune, and George Peristerakis for their support and early reviews.

by TristanCacqueray at November 17, 2015 04:43 PM

The Official Rackspace Blog

OpenStack Doc PTL on Mitaka: Priorities a Blend of Tech and Community

In the wake of OpenStack Tokyo last month, Rackblog asked Rackers serving as Project Team Leads to share the status of their particular projects.  Lana Brindley’s post on priorities for Mitaka will kick off this occasional series. If you’re a PTL on an OpenStack project and you’d like to write an update, great! Just email

Tokyo is a city of contrast.

From the neon lights of Shibuya and Akihabara (“Electronics Town”) to the shrines and temples dotted in green spaces around the city, old and new Japan come together in Tokyo in a surprisingly harmonious way.

While the OpenStack Summit, like the people of Tokyo, is focused on new technology and moving into the future, an atmosphere of community, fellowship, and camaraderie that underpins the conference also flourishes.

I am privileged to be the documentation PTL for Mitaka, having taken over the reins from Anne Gentle after the Kilo cycle. 

[Anne and Lana sat down with Superuser TV in Tokyo to discuss why documentation is important for the user community, the latest metrics from the Liberty release and how you can get involved.]


This time around, we have three main priorities to achieve, and I think it’s a  lovely blend between the technical and the community.

First, we will focus on improving the usability of our documentation, making it easier to navigate the docs site and find relevant information quickly. The main way we want to address this is through a different model of data typing, and moving away from a role-based architecture towards a task-based one. This is easier to do for some guides than others, of course, and we intend to start with the user guides. The other component of this is reorganizing the index page, and adding some more guidance on what each book is about. You should start to see those changes rolling in shortly.

We also want to continue converting our books to restructured text (RST). A lot of our books have already been converted, but we still have quite a few to go, and determining which books are to be converted in each release and who is responsible for doing so can take quite a bit of planning. This time around, we’re looking at the Architecture Design Guide, the Operations Guide, and the Configuration Reference Guide.

Finally, a fairly small thing that should have a big influence on our work is tweaking the way the docimpact tool works. At the moment, it creates a lot of noise in our bug queue and makes it harder for us to find the real work that needs to be done. By changing the way this tool operates, we hope to make it much more responsive and useful, both for the docs team and for the OpenStack developer community.

This release is about having docs work more efficiently and effectively as part of the OpenStack development community. With OpenStack moving to the big tent, we need to reevaluate which projects we document, how we go about communicating with other development teams, and how we ensure we're being good community citizens. I also want to ensure we're working closely with enterprise writing teams, and valuing the input that our corporate contributors provide to the documentation.

Liberty was my first release as Docs PTL, so I’m still learning what makes a good release. It was great to hear feedback from the docs team on what went well and what didn’t go so well, so I can learn from this to improve in Mitaka. I’m very grateful to have a team that has supported me in my new role, and I am honored to be leading them again into the next release.

Finally, we always need docs contributors.

If you are a technical writer, we have plenty of projects that need your writing expertise. If you’re a developer, even if you don’t think you’re a good writer, we could definitely use your technical prowess to test and improve our documentation. Remember, the documentation is developed exactly like code within the OpenStack system, so if you’re an active technical contributor (ATC), you already have the skills required to contribute, and to improve the docs for all our users.

by Lana Brindley at November 17, 2015 04:00 PM

Jesse Pretorius

OpenStack-Ansible Mitaka Summit Summary

The OpenStack Design Summit provides an opportunity for OpenStack collaborators to meet face-to-face, discuss lessons learned in the last cycle and figure out the plans for the next cycle.

As a new PTL for a deployment project in the OpenStack Big Tent it provided me an opportunity to attend design sessions/discussions in the summit which I felt would benefit the project as a whole and to look for opportunities where the project can add value to Operators, to Developers and to the whole OpenStack ecosystem.

Upgrading OpenStack

The Operator sessions I managed to attend were the Major Liberty Issues and Upgrades sessions. Both these sessions were of keen interest to me as upgrading OpenStack was a subject matter which we, as the OpenStack-Ansible project, had determined was a difficulty area for our downstream consumers and for the OpenStack community as a whole.

The Problem

The problem has a number of facets which make it difficult:

  1. Upgrades require configuration review to spot any deprecations.

    Current configurations need to be reviewed to determine whether anything being used has been deprecated upstream. If so then it needs to be identified whether there is a replacement configuration or whether it has been entirely removed. The release notes do help, and they have gotten better, but they are not always complete.

    While this action often ends up being considered a luxury (as deprecation cycles are often quite long), not doing this proactively can result in you being caught by surprise several upgrades later when the reference material needed is lost in the haystack of more up to date information. Also, the best opportunity to improve the reference information is usually at the point of initial deprecation - as that is when the developers have the information you need fresh in their minds.

  2. Upgrades require API down time.

    For some this is more of a problem than for others. Reports are that Nova online upgrades work well, but require careful orchestration of configuration changes if you care about staying online and not losing any API transactions that enter the environment during the course of the upgrades. Other projects in the ecosystem are not nearly as mature in terms of doing online upgrades and therefore require API down time.

  3. Database offline migration times are unpredictable.

    This essentially means that you have no idea how long your API down time will be, making your minimal change window time hard to determine. The only solution recommended by Operators to improve the success of this step, and to improve time estimates, is to ensure that database migrations are tested using a backup of the live database before the change window as part of the change preparation.

  4. Component version compatibility is important to understand up front.

    It appears to be fairly common for Operators using Horizon to be using a later version of Horizon than is used for the rest of the stack. Many of the larger Operators even run the latest version of Horizon from the head of the master branch. It is well known that this is an option, but whether you can run mixed versions of other components is not very well known. The general consensus is to test what you are hoping to do in a staging environment before executing it in production.

  5. Upgrading is more than just about upgrading OpenStack.

    As was expressed recently on the Operators mailing list, upgrading OpenStack is only one part of the upgrade process. It also involves certification of all integrations, hardware and other bits that interact with OpenStack. It will also often involve updating training materials, Operations run books, automation scripts, knowledgebase materials, marketing information and many other bits that relate to the environment.

    If there is any data plane down time then there is also a possibility that co-ordinating that down time with the stakeholders will be necessary, which introduces the additional complexity of having to schedule in-between change black-out periods or having to jump through additional hoops which the stakeholders require as part of their policies and procedures.

How can we help?

As OpenStack-Ansible is a deployment project which deploys from source, we are well positioned to play a role in the broader community to help improve the Operator experience with regards to upgrades:

  1. We can play a part in validating and improving release documentation.

    Our community is able to test the next major version of OpenStack as it develops. This means that the community can play a part in improving OpenStack installation, security, architecture and developer documentation for current and future releases.

  2. We can play a part in improving OpenStack code quality before it merges.

    As reviews are submitted in the OpenStack projects, the OpenStack-Ansible community is in a position to test those reviews using the All-in-One (AIO) or in multinode lab test environments and to provide feedback to the developers well before the code is merged. This allows the Operator community to play a larger part in proactive bug prevention!

    Miguel Grinberg did a presentation at the Tokyo summit about his experience using OpenStack-Ansible instead of DevStack for OpenStack development. There is also a blog post on the same subject.

  3. We can work out how best to orchestrate upgrades and codify the methods.

    While I am a firm believer that it is not possible to implement a one-size-fits-all upgrade process, I do believe that there is value in taking the time to work out how to execute an upgrade from one major version to the next with as little down time as possible, then to express that process in Ansible. Ansible is relatively easy to read, so it not only adds value to OpenStack-Ansible, but also adds value to the broader OpenStack community which can learn from what we put together.

OpenStack-Ansible Upgrade Framework

We held an open workshop to further discuss the difficulties with regards to upgrades and to try to determine whether we could provide tooling in the project to improve the upgrade experience for Operators.

The discussion got a little stuck on some of the complexities and experiences and we did not have enough time to come to any specific conclusions. However this input did inform a further discussion in the Ansible Collaboration Day where Blue Box OpenStack Release Lead Jesse Keating agreed to work with OpenStack-Ansible to develop an Ansible module to help manage database migrations. This work will learn from the Neutron Database Migration module which currently only handles Neutron database migrations (both online and offline). The end goal will be to have a module that handles database migrations for all projects which is available outside of OpenStack-Ansible, possibly as an Ansible extra module.

Beyond that work we hope to implement a generalised framework in the Mitaka development cycle in order to cater for major upgrades of OpenStack, supporting services (MariaDB, RabbitMQ, etc) and OpenStack-Ansible.

Image-based Deployment

We held an open workshop to discuss the value and plausibility of developing an image-based deployment mechanism within OpenStack-Ansible.

The discussion had some passionate points of view, not all of which agreed. There were those who were passionate about the implementation of microservices containers and using image-based deployment. There were others who were passionate about image-based deployment being a problem at scale due to network saturation. There were still others who raised the important point that implementing the containers through images does not cover the whole picture as it leaves out the hosts.

No real conclusions were reached in the workshop. It does seem that we should discuss this again another time, but perhaps it should be done after the downstream consumers of OpenStack-Ansible have had an opportunity to make use of the shippable venvs which were introduced in the Liberty release. The combination of using shippable venvs and a managed apt repository do solve the problem of repeatable deployments quite well already.

Community Day

The OpenStack-Ansible community day was held on Friday and provided an opportunity for us to coalesce our summit experience into an etherpad and just generally hang out, discussing anything that came to mind.

I think that while we were already quite frazzled by then, it was great to share experiences and have open discussions.

Other Mitaka Development Cycle Work

As is evident in the list of OpenStack-Ansible Mitaka Specifications there is also a lot of other work which we hope to achieve in the Mitaka development cycle.

Highlights include:

  1. Gate Split

    The current integration gate check relies on an All-In-One (AIO) build which is running low on resources and does not adequately test all code paths that matter for the primary use-cases of the project. This work aims to resolve this.

  2. Independent Role Repositories

    In order to improve the ability to independently consume the roles produced by OpenStack-Ansible in different reference use-case deployments and allow independent development of each role by different projects, the existing roles are being split into their own repositories. The roles will also be registered in Ansible Galaxy for the broader community to consume.

Call For Contributors: Multi-OS Enablement

There has also been interest in implementing OpenStack-Ansible on platforms other than Ubuntu, e.g. CentOS, Fedora and Gentoo.

While there has been interest, no one has stepped forward to own the work, so if you are interested in seeing this become a reality, please engage with us on the openstack-dev mailing list with the subject line [openstack-ansible].

November 17, 2015 11:35 AM


Red Hat makes a deal with … Microsoft?

The post Red Hat makes a deal with … Microsoft? appeared first on Mirantis | The #1 Pure Play OpenStack Company.

Apparently the cloud is like politics; it makes very strange bedfellows. As Red Hat tries to unseat Ubuntu as the operating system of the cloud, and Microsoft tries to unseat Amazon Web Services as, well, the cloud itself, the two companies have joined forces, agreeing to help each other.  Yes, you read that right.

Microsoft will offer Red Hat Enterprise Linux as the default choice for Linux on its Azure cloud service, and Red Hat will provide support for it.  Red Hat will also make a “pay as you go” license available on the Azure cloud, just as it is on AWS.

Don’t think that means that Red Hat is giving up on OpenStack, of course; the companies said that part of the deal is that Azure will run on Red Hat’s OpenStack distribution, Red Hat Enterprise Linux OpenStack Platform.

RHEL will also become the primary platform for developing and testing .NET Core on Linux. Microsoft’s cooperation in making the .NET environment cross-platform fits in with the hybrid nature of the deal, which will also see Red Hat’s CloudForms management project interact with Azure.

“The partnership,” said Redmond Magazine, “will involve a Red Hat engineering team moving to Redmond to provide joint technical support for Red Hat Enterprise Linux workloads running in the Microsoft Azure public cloud and on its hybrid cloud offerings. The pact also calls for applications developed in the Microsoft .NET Framework language to run on RHEL, OpenShift and the new Red Hat Atomic Host container platform.”

A Microsoft spokesman said there were no plans to run OpenStack natively on Azure, but Paul Cormier, Red Hat President of Products and Technologies, told the audience of the announcement webcast that the possibility existed for Windows VMs and containers to be run in the Red Hat OpenStack environment.

This deal is surprising because it wasn’t that long ago that Microsoft was calling Linux a “cancer” and threatening its users with patent infringement lawsuits.  Part of this deal involves both companies setting aside that kind of talk — for now.  Cormier told eWeek, “Red Hat and Microsoft did not acknowledge the validity or value of each other’s patents. This is a commercial deal spurred by strong customer demand for our solutions to work together.”

“As we said for the Ansible acquisition, our enterprise customers have complex heterogeneous IT environments and don’t want IT organizations to create redundant management silos, or embrace single vendor stacks if it’s not the best for their business,” Alessandro Perilli, General Manager, Cloud Management Strategy, wrote.  It’s not clear how this view meshes with the company’s “one OS to rule them all” attitude.


The post Red Hat makes a deal with … Microsoft? appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at November 17, 2015 09:58 AM

OpenStack Glance, Murano, Community app catalog: links in a chain

The post OpenStack Glance, Murano, Community app catalog: links in a chain appeared first on Mirantis | The #1 Pure Play OpenStack Company.

Last month, at the OpenStack Summit in Tokyo, Mirantis engineer Alexander Tivelkov led a couple of Murano sessions: the cross-region session, and another about actions. Here’s Alexander’s view of what happened behind the scenes of the design summit in Tokyo.


On Wednesday I did a Glance API V3 talk with Brian Rosmaita, hoping to send a proper message to those who are going to use our APIs. We also had some fruitful conversations on Artifacts before and after the sessions; it seems like the topic still attracts attention.

After the session, we tried to do some internal (Mirantis-only) planning, but we were suddenly “attacked” by a very friendly (and very talkative) person who wanted to talk about Murano. It’s funny, we worked so hard to get some traction, and when we finally did, it caused problems for my schedule. Not that I’m complaining, of course.  That’s what summits are for, and we appreciate the outside perspectives!


Cross-project property protection

The idea here is that Glance has the ability to “protect” (i.e. to make read-only or to hide) some of the image metadata properties based on policies. However, there are projects which create, update or read images on behalf of the user and should have different privileges to access those properties. For example, if Nova creates a snapshot of a VM, it should be able to set some special properties, while a regular user – that is, someone accessing the images API directly – should not be able to do that. Right now, Glance has no way to differentiate between these two types of requests; both of them are authenticated by the user’s token.

It’s been suggested that we utilize so-called “Service Tokens” – additional authentication tokens being passed with the request, and identifying that these requests are made by some service (e.g. Nova) on behalf of the user. In this case, the request being made by the service will have both tokens, and the property protection will take into account the roles of both the user and the server. This streamlines the process.

Another related topic is property protection for read operations. Projects such as Searchlight index image properties based on notifications emitted by Glance, and they (probably) should not be allowed to search by the protected properties if the searching user does not have privileges to access them. However, this requires Searchlight to duplicate the property-protection settings of Glance, and that’s a bad idea, as duplication is always error-prone. We don’t yet know what to do about it. One idea may be to not include the protected properties in the notification at all, but this would prevent Searchlight from indexing such data entirely, even for admins.

Glance: Image Signing

I attended a small session about needed improvements to the image signing feature, which was recently added and landed in Liberty. The list of improvements includes adding new signing algorithms (MD5 is definitely outdated, so more should be added, but we need to maintain backwards compatibility) and increasing the key size. The latter requires us to increase the size of the signature, and this may lead to some issues; the size of the signature property will have to be more than 255 chars, and since it is passed in the headers, that may cause some deployment issues.

The team has agreed on the first topic (more algorithms) and identified some open issues with the second (signature size). It was agreed that we should look more deeply into it, but that it should be solvable.

Glance: DefCore updates

The DefCore guys joined us to explain DefCore’s rules and guidance about APIs, their test coverage, the differences between public and private (or admin) APIs and their view of them.  It was a useful and informative session, mostly aimed at syncing up and clarifying these things. No particular decisions were made, but the picture became a lot clearer.

One important thing that did come out of this session is that DefCore will provide a set of compatibility requirements within 3 months of the summit, at which point the community can provide feedback.

For artifacts, my understanding is that the plugin-specific APIs are fine, as we just do not include them in DefCore. The “public” part should be DefCore-compatible, however. This actually matches what I thought before, so we don’t have to make any changes there.

Glance: Artifacts

Ok, this one was my session, and here is an important decision we’ve made: we are separating the API endpoints of Glance Images (v1/v2) and Glance Artifacts (Glare) into two different processes (and thus two different endpoints in Keystone). This actually means that the Glance v3 API is removed in favor of Artifacts v1. The project itself remains in the Glance codebase, and under Glance project guidance.

It was also decided to merge the v3 feature-branch of python-glanceclient into the master, so we have a single client library. In the meantime, the Glare CLI should be done in the OpenStack CLI itself, not in python-glanceclient.

So, this is quite a radical change for us. It is ok, and it makes sense; however, it actually puts things back to where they were before Vancouver. It’s frustrating because these changes confuse people, so I feel we had better pick something and stay with it, instead of changing the decisions every cycle with every new PTL.

Glance Priority List

We decided to have a special tag in the commit message that indicates the work done by that commit is considered a priority for the Glance team during the Mitaka cycle. This tag then enables reviewers to filter the reviews in the dashboard based on the approved team priorities.

As for the actual priorities, security fixes have top priority, of course. Also, the conversion of Nova from the Glance v1 API to v2 is considered a primary priority. Most of the patches in this process will target Nova, but some things may need to land in Glance. These patches should indicate appropriate needs in the commit messages.

We agreed that for the Mitaka cycle, most other features are of lower priority. (They are listed in the spec to be landed this week.) We also decided that Glare API stabilization is a priority on its own, but its actual priority level was undecided. We’ll file a separate spec on that ASAP.


For AppCatalog, there was a priority and roadmap presentation, followed by a design session.

The presentation clearly identified the potential conflict between AppCat, Murano and Glance Artifacts, and a very serious security concern was raised about the fact that AppCat does not provide any way to verify the resources. AppCat should have HTTPS endpoint support, as well as the ability to sign resources. We decided to document the current issues, start fixing the HTTPS issue (as it’s easier) and then think about how to provide the consistency guarantees.

During the design session we discussed the workflow needed to update the assets. We agreed that the existing objects should never be modified. Instead, a new version of the object should be published.

The Murano Contributors’ Meetup actually turned into a cross-project work session for Murano, Glance and AppCatalog to define the borders between projects, and agree on shared responsibilities and reuse of common functionality.

We started with what had been called Glance V3 (and is now called Artifacts V1, aka Glare) and its applications for the Community App Catalog, i.e. with what was defined in this article. It turned out that we finally reached an agreement with the AppCat guys that Glare should be used for the backend. We identified the immediate action items for this to happen (pluggable DB models, CORS API support and more) and agreed to collaborate in this direction. Also, I made a demo of the proof-of-concept we built for them.

Then we covered the overlap in functionality between the Murano Dashboard and the AppCatalog UI. And this one looks important; we’ve decided to join the UI efforts here.

Instead of having a separate “Murano” dashboard with its “Application Catalog” -> “Applications” panel, which looks really similar in name and concept to the “Applications” panel of the App Catalog UI, we’ve decided to have a single “Applications” dashboard (note that there will be no “Murano” or other code names in the UI: users usually don’t care about project names). It will provide capabilities to browse both remote applications (i.e. search the Community App Catalog) and local ones (Glare if it is installed, or just Glance V2 if not) for all types of supported assets. It will allow users to install assets such as apps locally (if supported by local config) and launch the apps from the local copies (i.e. spawn a VM out of an image, run a Heat stack out of a template or deploy a Murano application out of a package).

The same dashboard will contain a “Running Apps” page to list what has already been deployed, with available actions, such as Stop or calling Murano Actions. Different versions of that dashboard will be available depending on what is currently installed in the cloud.

So this is a really big thing, which may turn into a new project with its own git repo and reviewers team. We need more discussion on this internally (we need to be sure that we don’t shoot ourselves in the foot with this: we don’t want a simple “launch a VM” or “run a Heat stack” to prevent people from using Murano), but overall that was a big success for the Murano team.

The post OpenStack Glance, Murano, Community app catalog: links in a chain appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at November 17, 2015 09:49 AM

OpenStack Security Issues: Self-defense without weapons

The post OpenStack Security Issues: Self-defense without weapons appeared first on Mirantis | The #1 Pure Play OpenStack Company.

You may remember Alexander Adamov from his May 2015 article, “Detecting targeted cyber attacks in the cloud”. Now he’s brought us a look at security at the Tokyo summit.

Vulnerability Management and Security Testing

The vulnerability mailing list is to be tightened to provide more control over information exposure, but the Vulnerability Management process flow will be more formalized so that the mailing list isn’t the sole means of delivering information to downstream stakeholders. You can get more information on the etherpad.

A new Python API security testing tool called Syntribos was discussed in a dedicated design session. It can be used as a penetration testing tool or implemented as a gate job in the future. Syntribos is designed to automatically detect common potential security breaches such as SQL injection, LDAP injection, a buffer overflow, and so on. In addition, Syntribos can be used to help identify new security defects by fuzzing HTTP requests. You can get more information on the etherpad, or check out the Syntribos tool itself.

Identity Federation in focus

Using the Barbican service for Identity Federation was discussed; however, there’s a problem with trusting a cloud provider when storing an encryption key for images and running decrypted images. The key stored by a private Barbican service and a decrypted image should be erased after use, but what if a cloud provider leaves decrypted data behind, or unintentionally exposes a key?

Steve Martinelli from IBM, Joe Savak and Douglas Mendizábal from Rackspace, Marek Dennis from CERN, and Klaas Wierenga from Cisco gave a presentation about different aspects of federated applications. During the panel discussion it was stated that Federation continues growing in popularity, and is now even used in an academic area. Dennis explained how Federation helps CERN researchers to focus on discovering the fundamental structure of the Universe, not managing accounts that provide scientists with access to petabytes of data gathered by the Large Hadron Collider’s sensors.


Development of the Firewall as a Service plugin is ongoing and we looked at the FWaaS roadmap. Important takeaways included:

  • We should consider setting FWaaS up for zones, service chains, containers, and VM ports, as opposed to the current situation, where it only works for Routers. For example, a zone-based firewall will be able to filter any given network traffic without the need to set multiple ACLs, bringing the service to a higher abstraction level. In this way, you need only to care about setting an appropriate security zone for a node.
  • The Liberty FWaaS API is deprecated and will be redesigned in Mitaka with the goal of enabling:
    • port based functionality
    • augmentation of SecurityGroups
    • an IPTables based reference implementation
    • Service Groups
  • In the N-cycle, we’ll work on scalability, HA, and zone-based firewalls.
  • In the O-cycle, we’ll work on a common back end for Security Groups and Firewall.
  • Mirantis can contribute to FWaaS documentation.

Securing traffic in OpenStack private clouds (Intel + Midokura)

Intel, with the help of Midokura, presented its own security solution for scanning network traffic, built on the Intel Security Controller Platform. (They also promised to open source it at some point in the future.) The implementation is new and looks fairly immature for now; however, I consider this approach rather promising. Sooner or later cloud providers will start thinking about deploying an all-in-one security solution that can orchestrate a deployed cloud in an automatic way via APIs. This will make it possible to configure, adjust, and scale it in close to real time, providing protection against emerging threats. Moreover, being integrated with Intel TXT on a hardware level, it may have a solid background to be a reliable security solution.

Some takeaways from this session include:

  • 80% of cloud traffic is East/West — that is, inside the cloud
  • The Intel Security Controller (ISC) Platform may include McAfee Virtual Network Security Platform, Firewall, and  third-party security applications communicating via open APIs for Orchestration
  • ISC uses the VLAN tag to bind Security Policies and the in/out packet is redirected to the Service VM to be scanned

Basically, the Intel Security Controller Platform will scan ingoing/outgoing network traffic and orchestrate deployed security solutions via API; they also introduced a concept of Virtual Domains from PLUMgrid that includes a policy enforcement zone through which network traffic passes from external networks to VMs. Such domains bring flexibility; for example, a compromised domain can be cut off from the Internet and the private network, letting the forensics team do their work.

Secure Your OpenStack Infrastructure (Awnix + PLUMgrid)

Rick Kundiger (CEO of Awnix and a former US DoD engineer) stated that firewalls, VLANs, and ACLs are not enough to make a cloud secure. The suggested solution: create a Security Tenant with a Firewall, Security Groups, IDS, IPS, UTM, and so on on board, leveraging SDN. This way, once any tenant is compromised, the Gateway IP can be changed to connect to the internal Forensics network with a security tenant there full of forensics tools. (This is a nice idea PLUMgrid came up with.) They also discussed Detection (for example, by IDS) and Remediation automation.

They suggested that Security Groups are preferable to Firewall rules because of the high granularity they provide. Even if an attacker gets into one server, they won’t be able to propagate within the same project.

Also discussed was IOVisor, a Linux upstream project by PLUMgrid that provides network isolation across compute domains and works the same way for VMs and containers. In this way it unifies the management of network security, and not only networking. Because it is part of the Linux kernel, IOVisor can also trace a process and a user that sends suspicious packets.

Protecting Hybrid Cloud (FlawCheck)

I also attended a talk by FlawCheck, an interesting startup that has its own malware/vulnerability static signature engine to check Docker containers. They’re on the rise now, of course, and according to the report, the majority of predefined containers are vulnerable. In fact, the top concern users had about running apps in Docker containers was “vulnerability and malware”, with 42% of respondents citing it. And no wonder: with no security inspection by Docker, it turns out that 30% of containers have vulnerabilities.

An overall look at security at the OpenStack Tokyo Summit

In Tokyo we saw more topics than ever before aiming to answer the question, “How do you protect your cloud against cyber attacks?” In addition to these views coming from Intel and Midokura, Awnix and PLUMgrid, and FlawCheck, the Vulnerability Management Team, part of the OpenStack Security Project, is doing a great job of revealing and fixing vulnerabilities in the upstream code and delivering those fixes to downstream stakeholders. Tools such as Bandit and the new security testing tool Syntribos appeared this release, and can be of use here.

However, fixing security bugs is not enough to protect clouds against hackers. Intel and PLUMgrid understand this and propose architectural solutions aimed to increase cloud security.

On the container side, we saw just how serious the situation is, but we also saw the introduction of the open source Linux kernel module IOVisor to try and solve it.

To sum up, we see a trend of interface unification and centralized security orchestration, which may bring more flexibility and simplicity for security policy enforcement and digital forensics.

The post OpenStack Security Issues: Self-defense without weapons appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Alexander Adamov at November 17, 2015 09:21 AM

OpenStack Keystone in Tokyo: deprecations, deprecations, deprecations!

The post OpenStack Keystone in Tokyo: deprecations, deprecations, deprecations! appeared first on Mirantis | The #1 Pure Play OpenStack Company.

PKI tokens, which were the first attempt to implement non-persistent tokens, are really close to being deprecated in favor of Fernet tokens. The LDAP assignment driver will be either removed or set to read-only mode after being deprecated for two cycles. The v2.0 API will be partially deprecated, leaving only the authentication-related parts non-deprecated. Running Keystone under eventlet will be removed in this cycle, which means that deployers will have to use WSGI servers, which are more secure and mature. This is a very good thing for Keystone: a lot of old code is getting cleaned up and maintaining Keystone becomes much easier for developers.

Fernet tokens are now the recommended type of token. They are as small as UUID tokens and, like PKI tokens, don’t require persistent storage. Their size does not depend on the size of the Keystone catalog, yet they carry enough information to be verified. Before them it was challenging to achieve scalability and high availability of Keystone; Fernet tokens are designed with this issue in mind.

Although functional testing was discussed during last summit, it turned out that not everyone had the same vision of how it should be done: some saw it as “black-box” testing, when we test Keystone only as a REST service, some wanted the tests to check the state of underlying modules, such as database. This was re-evaluated and re-discussed and we decided to stick with the “black-box” approach.

Identity Federation is still a hot topic and is the future of Keystone. We want to make Federation a first-class citizen of OpenStack. The default way of deploying Keystone should be via Federation. A lot of work needs to be done and we have a lot of options for what to do. We need to enhance our mapping engine, improve the ability to debug and troubleshoot issues with federation, and get client-side support. That’s a lot of work, but it’s worth it — it will make Keystone more scalable and will let deployers do cross-DC deployments more easily.

Work on tokenless auth with X.509 certificates is in progress: we want to stop storing service users whose only responsibility is to validate tokens. This will be perfect for clouds where users are stored in LDAP, because currently operators have to configure an additional SQL backend for service users only.

The post OpenStack Keystone in Tokyo: deprecations, deprecations, deprecations! appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Boris Bobrov at November 17, 2015 08:50 AM

November 16, 2015

OpenStack Superuser

OpenStack and Network Function Virtualization, the backstory

Network Function Virtualization (NFV) is occupying more space in cloud conversations than ever. It might seem like breaking new ground for OpenStack, but it's practically a cornerstone dating back to the early days of the Foundation, Bryce says.

Here's some of the backstory on OpenStack and NFV from Jonathan Bryce, OpenStack executive director, from the OPNFV Summit.

About the journey of OpenStack: “It’s incredible to see how people take technology…When you create an environment where developers and users can come together, they build new things that you would’ve never thought of.

That started with public cloud use cases, then enterprise use cases. What we started seeing about two years ago was some of our large users were cable and telco providers taking their fundamental capabilities to manage compute, storage and networking and using that to transform the way they provision services on their networks. So we have companies like Time Warner Cable and Comcast who are using that to deliver TV content and high-speed networking to their user base…And companies like China Mobile, AT&T, Deutsche Telekom and others around the world who are starting to work on this massive shift of their networks over to NFV.

How OpenStack got started with NFV: “One of our board members Tobias Ford works for AT&T …He presented us with some of the challenges that telcos were facing to become more agile, move more quickly and re-provision capacity as demand shifts from SMS to mobile data or whatever the next thing will be…He put this challenge out there, he thought OpenStack could help them solve this, but it needed a variety of updates and new capabilities."

What's next: "Over the last year and a half, it’s been exciting to see a community form around it, to bring those changes into OpenStack. We’re here at the OPNFV Summit, and it’s been exciting for us to see it form as a group that takes those requirements from these use cases and brings them back upstream, which is so important for any open project to grow and change and become more valuable.

For OpenStack, it's really cool to see how these different open communities work together and collaborate to make something, a new use case that we didn’t intend originally but will be extremely valuable across a lot of industries. It’s really all about bringing users developers together in an open and collaborative community. “


Bryce was part of the strategic technology keynote panel, along with moderator Peter Jarich of Current Analysis, Neela Jacques of the OpenDaylight Project, Pere Monclus of PlumGrid, Brandon Philips of CoreOS and Reynold Xin of Databricks.

We'll update the post with a link to the full 45-minute session when the video of the panel is available.

You can catch the rest of the OPNFV Summit on live stream, too.

Cover Photo // CC BY NC

by Nicole Martinelli at November 16, 2015 11:02 PM

Maish Saidel-Keesing

Is the #OpenStack Leopard Changing its Spots?

A short while back I tweeted the following:


This was a result of me reading the minutes of the OpenStack Technical committee from November 3rd, 2015 (full log is here)

What pleasantly surprises me is that this might finally be becoming a viable option.

Leopard spots

Let me first go into the background of my trail of thoughts.
I really enjoy working with OpenStack, but I also really hate working with OpenStack. A good old love/hate relationship.

There are amazing things about this community, the way we work, and the software we produce (just to name two).

On the other hand there are really bad things about this community, namely – the way we work, and the software we produce.

One might say that is an oxymoron (I am not calling anyone stupid though). But yes, it is true. The OpenStack community produces amazing things – or at least that is until someone (let’s call them an Operator) tries to actually use this in a production environment, and then they find out that it is completely and totally not ready, in any way, for production use.

So this Operator tries to raise their concerns about why this is not usable; these are valid concerns, and maybe they should be fixed. But unfortunately they have no way of actually getting this done, because the feedback loop is completely broken. Operators have no way of getting that information back in. The community has no interest in opening up a new conduit into the developer community – because they are fixated on working in a specific way.

I must stress and honestly say that this was the case up until approximately a year ago. Since then the OpenStack community has embarked on a number of initiatives that are making this a much easier process, and we are actually seeing some change (see the title of this post).

For me the question actually is – what sparked this change? Up until now I was under the impression that the OpenStack community (and I mean the developers) was of the opinion that they were driving the requirements and development goals, but it seems, as of late, this could be shifting.

To me this is going to have a significant effect on what OpenStack is, and how it moves forward.

For better and for worse. Time will tell.

What do you all think? Please feel free to leave your thoughts and comments below.

by Maish Saidel-Keesing ( at November 16, 2015 10:00 PM

OpenStack Superuser

Upgrades in OpenStack Nova: Remote Procedure Call APIs

Dan Smith is a principal software engineer at Red Hat. He works primarily on Nova, is a member of the core team and is generally focused on topics relating to live upgrade. You can follow him on Twitter @get_offmylawn.

If you’re not already familiar with Remote Procedure Call (RPC) as it exists in many OpenStack projects, you might want to watch this video first.

The details below focus on Nova but should be applicable to other projects using oslo.messaging for their RPC layer.

Why We Need Versioning

It’s important to understand why we need to go to all the trouble that is described below. With a distributed system like Nova, you’ve got services running on many different machines communicating with each other over RPC. That means they’re sending messages with data which end up calling a function on a remote machine that does something and (usually) returns a result. The problem comes when one of those interfaces needs to change, which it inevitably will. Unless you take the entire deployment down, install the new code on everything at the same time, and then bring them back up together, you’re going to have some nodes running different versions of the code than others.

If newer code sends messages that the older services don’t understand, you break. If older code sends messages missing information needed by the newer services, you break.


Versioning your RPC interfaces provides a mechanism to teach your newer nodes how to speak to older nodes when necessary, and defines some rules about how newer nodes can continue to honor requests that were valid in previous versions of the code. Both of these apply to some time period or restriction, allowing operators to upgrade from one release to the next, ensuring that everything has been upgraded before dropping compatibility with the old stuff. It’s this robustness that we as a project seek to provide with our RPC versioning strategy to make Nova operations easier.

Versioning the Interfaces

At the beginning of time, your RPC API is at version 1.0. As you evolve or expand it, you need to bump the version number in order to communicate the changes that were made. Whether you bump the major or minor depends on what you’re doing. Minor changes are for small additive revisions, or where the server can accept anything at a certain version back to a base level. The base level is a major version, and you bump to the next one when you need to drop compatibility with a bunch of minor revisions you’ve made. When you do that, of course, you need to support both major versions for a period of time so that deployers can upgrade all their clients to send the new major version before the servers drop support for the old one. I’ll focus on minor revisions below and save major bumps for a later post.

In Nova (and other projects), APIs are scoped to a given topic and each of those has a separate version number and implementation. The client side (typically rpcapi.py) and the server side (typically manager.py) both need to be concerned with the current and previous versions of the API, and thus any change to the API will end up modifying both. Examples of named APIs from Nova are "compute", "conductor", and "scheduler". Each has a client and a server piece, connected over the message bus by a topic.

First, an example of a minor version change from Nova’s compute API during the Juno cycle. We have an RPC call named rescue_instance() that needed to take a new parameter called rescue_image_ref. This change is described in detail below.

Server Side

The server side of the RPC API (usually manager.py) needs to accept the new parameter, but also tolerate the fact that older clients won't be passing it. Before the change, our server code looked like this:

target = messaging.Target(version='3.23')

  . . .

def rescue_instance(self, context, instance, rescue_password):

What you see here is that we’re currently at version 3.23 and rescue_instance() takes two parameters: instance and rescue_password (self and context are implied). In order to make the change, we bump the minor version of the API and add the parameter as an optional keyword argument:

target = messaging.Target(version='3.24')

 . . .

def rescue_instance(self, context, instance, rescue_password,
                    rescue_image_ref=None):

Now, we have the new parameter, but if it’s not passed by an older client, it will have a default value (just like Python’s own method call semantics). If you change nothing else, this code will continue to work as it did before.

It’s important to note here that the target version that we changed doesn’t do anything other than tell the oslo.messaging code to allow calls that claim to be at version 3.24. It isn’t tied to the rescue_instance() method directly, nor do we get to know what version a client uses when they make a call. Our only indication is that rescue_image_ref could be non-None, but that’s all we should care about anyway. If we need to be able to pass None as a valid value for the parameter, we should use a different sentinel to indicate that the client didn’t pass any value.
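
As a sketch of that sentinel approach (illustrative only, not what the Nova code here actually does), the server could distinguish "not passed" from "passed as None" like this:

_UNSPECIFIED = object()

def rescue_instance(self, context, instance, rescue_password,
                    rescue_image_ref=_UNSPECIFIED):
    if rescue_image_ref is _UNSPECIFIED:
        # The caller predates 3.24 and sent no value at all
        rescue_image_ref = get_default_rescue_image()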

Now comes the (potentially) tricky part. The server code needs to tolerate calls made with and without the rescue_image_ref parameter. In this case, it’s not very complicated: we just check to see if the parameter is None, and if so, we look up a default image and carry on. The actual code in nova has a little more indirection, but it’s basically this:

def rescue_instance(self, context, instance, rescue_password,
                    rescue_image_ref=None):

    if rescue_image_ref is None:
        # NOTE(danms): Client is old, so mimic the old behavior
        # and use the default image
        # FIXME(danms): Remove this in v4.0 of the RPC API
        rescue_image_ref = get_default_rescue_image()

Now, the rest of the code below can assume the presence of rescue_image_ref and we’ll be tolerant of older clients that expected the default image, as well as newer clients that provided a different one. We made a NOTE indicating why we’re doing this, and left a FIXME to remove the check in v4.0. Since we can’t remove or change parameters in a minor version, we have to wait to actually make rescue_image_ref mandatory until v4.0. More about that later.

You can see how the code actually ended up here.

Client Side

There is more work to do before this change is useful: we need to make the client actually pass the parameter. The client part is typically in rpcapi.py and is where we also (conventionally) document each change that we make. Before this change, the client code for this call looked like this (with some irrelevant details removed for clarity):

def rescue_instance(self, ctxt, instance, rescue_password):
    msg_args = {'rescue_password': rescue_password,
                'instance': instance}
    cctxt = self.client.prepare(
        server=_compute_host(None, instance),
        version='3.0')
    cctxt.cast(ctxt, 'rescue_instance', **msg_args)

While the actual method is a little more complicated because it has changed multiple times in the 3.x API, this is basically what it looks like ignoring that other change. We take just the instance and rescue_password parameters, declare that we’re using version 3.0 and make the cast which sends a message over the bus to the server side.

In order to make the change, we add the parameter to the method, but we only include it in the actual RPC call if we’re “allowed” to send the newer version. If we’re not, then we drop that parameter and make the call at the 3.0 level, compatible with what it was at that time. Again, with distractions removed, the new implementation looks like this:

def rescue_instance(self, ctxt, instance, rescue_password,
                    rescue_image_ref=None):
    msg_args = {'rescue_password': rescue_password,
                'instance': instance}
    if self.client.can_send_version('3.24'):
        version = '3.24'
        msg_args['rescue_image_ref'] = rescue_image_ref
    else:
        version = '3.0'
    cctxt = self.client.prepare(
        server=_compute_host(None, instance),
        version=version)
    cctxt.cast(ctxt, 'rescue_instance', **msg_args)

As you can see, we now check to see if version 3.24 is allowed. If so, we include the new parameter in the dict of parameters we're going to use for the call. If not, we don't. In either case, we send the version number that lines up with the call as we're making it. Of course, if we were to make multiple changes to this call in a single major version, we would have to support more than two possible outbound versions, as in the sketch below. The details of how can_send_version() knows what versions are okay will be explained later.
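
For example, a hypothetical later 3.30 revision of the same call could cascade like this (the 3.30 version and new_thing argument are invented for illustration):

if self.client.can_send_version('3.30'):
    version = '3.30'
    msg_args['rescue_image_ref'] = rescue_image_ref
    msg_args['new_thing'] = new_thing
elif self.client.can_send_version('3.24'):
    version = '3.24'
    msg_args['rescue_image_ref'] = rescue_image_ref
else:
    version = '3.0'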

Another important part of this change is documenting what we did for later. The convention is that we do so in a big docstring at the top of the client class. Including as much detail as possible will definitely be appreciated later, so don’t be too terse. This change added a new line like this:

  • 3.24 - Update rescue_instance() to take optional rescue_image_ref

In this case, this is enough information to determine later what was changed. If multiple things were changed (multiple new arguments, changes to multiple calls, etc) they should all be listed here for posterity.
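In nova, this history lives in the docstring of the rpcapi client class. A trimmed sketch of its shape (the earlier entries below are illustrative placeholders, not the real history):

class ComputeAPI(object):
    '''Client side of the compute rpc API.

    API version history:

        ...
        * 3.23 - (an illustrative earlier entry)
        * 3.24 - Update rescue_instance() to take optional rescue_image_ref
    '''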

So, with this change, we have a server that can tolerate calls from older clients that don’t provide the new parameter, and a client that can make the older version of the call, if necessary. This was a pretty simple case, of course, and so there may be other changes required on either side to properly handle the fact that a parameter can’t be passed, or that some piece of data isn’t received. Here it was easy for the server to look up a suitable value for the missing parameter, but it may not always be that easy.

Gotchas and special cases

There are many categories of changes that may need to be made to an RPC API, and of course I cheated by choosing the easiest to illustrate above. In reality, the corner cases are most likely to break upgrades, so they deserve careful handling.

The first and most important is a change that alters the format of a parameter. Since the server side doesn’t receive the client’s version, it may have a very hard time determining which format something is in. Even worse, such a change may occur deep in the DB layer and not be reflected in the RPC API at all, which could result in a client sending a complex structure in a too-old or too-new format for the server to understand, and no version bump was made at all to indicate to either side that something has changed. This case is the reason we started working on what is now oslo.versionedobjects — more on that later.

Another change that must be handled carefully is the renaming or removal of a parameter. When a call is dispatched on the server side as a result of a received message, it is done so by keyword, even if the method’s arguments are positional. This means that if you change the name of a positional parameter, the server will fail to dispatch the call, just as if you had passed an unexpected keyword argument to a Python method. The same goes for a removed parameter, of course.

In Nova, we typically handle these by not renaming things unless it’s absolutely necessary, and never removing any parameters until major version bumps. If we do rename a parameter, we continue to accept both and honor them in order in the actual implementation, the newer taking precedence if both are provided.
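As a quick sketch of that pattern (hypothetical names, not actual nova code), a server method whose image_ref parameter had been renamed to image could keep accepting both:

def rebuild_instance(self, context, instance, image=None, image_ref=None):
    # Hypothetical example of tolerating a renamed parameter: older
    # clients still cast with image_ref, newer ones send image.
    # Honor both, the newer name taking precedence if both arrive.
    if image is None:
        image = image_ref
    return self._do_rebuild(context, instance, image)  # hypothetical helper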

Version Pins

Above, I waved my hands over the can_send_version() call, which magically knew whether we could send the newer version or not. In Nova, we (currently) handle this by allowing versions for each service to be pinned in the config file. We honor that pin on the client side in the initialization of the RPC API class like this:

VERSION_ALIASES = {
    'icehouse': '3.23',
    'juno': '3.35',
}

def __init__(self):
    super(ComputeAPI, self).__init__()
    target = messaging.Target(topic=CONF.compute_topic, version='3.0')
    version_cap = self.VERSION_ALIASES.get(CONF.upgrade_levels.compute,
                                           CONF.upgrade_levels.compute)
    serializer = objects_base.NovaObjectSerializer()
    self.client = self.get_client(target, version_cap, serializer)

What this does is initialize our base version to 3.0, and then calculate the version_cap, if any, that our client should obey. To make it easier on the operators, we define some aliases, allowing them to use release names in the config file instead of actual version numbers. So the version_cap is either the alias from the config, the actual value from the config if there is no alias, or None (i.e. no limit) if they didn’t set anything. This is what makes the can_send_version() method able to tell us whether a given version is okay to use (i.e. whether it’s below the version_cap, if one is set).

What services/APIs should be pinned, when, and to what value will depend on the architecture of the project. In Nova, during an upgrade, we require the operators to upgrade the control services before the compute nodes. This means that when they’ve upgraded from, say Juno to Kilo, the control nodes running Kilo will have their compute versions pinned to the Juno level until all the computes are upgraded. Once that happens, we know that it’s okay to send the newer version of all the calls, so the version pin is removed.
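In Nova’s case the operator-facing side of this is just the [upgrade_levels] section of nova.conf; during a Juno-to-Kilo upgrade the control nodes would run with something like:

[upgrade_levels]
# Cap compute RPC messages at the Juno level until every compute
# node has been upgraded, then remove this line.
compute = juno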

Aside from the process of bumping the major version of the RPC API to drop compatibility with older nodes, this is pretty much all you have to do in order to make your RPC API tolerate mixed versions in a single deployment. However, as described above, there is a lot more work required to make these interfaces really clean, and not leak version-specific structures over the network to nodes that potentially can’t handle them.

Dan Smith originally posted this tutorial on his blog, Right Angles. Superuser is always interested in how-tos and other contributions; please get in touch:

Cover Photo // CC BY-NC

by Dan Smith at November 16, 2015 05:25 PM

Sébastien Han

OpenStack Summit Tokyo: Ceph and OpenStack Current Integration and Roadmap

Date: 29/10/2015



Download the slides here.

November 16, 2015 03:02 PM

Doug Hellmann

Release Team Changes and Goals for OpenStack’s Mitaka Release Cycle

For the Mitaka cycle, we will be implementing changes designed to make it easier for project teams to manage their own projects, with less need for coordination and tight coupling of the schedule. Enabling a new model for milestones: during previous release cycles we have strictly synchronized teams around milestones, tagging pre-release versions of each project…

by doug at November 16, 2015 02:35 PM

Hugh Blemings

Last week on OpenStack Dev (“Lwood”) – 15th November 2015

Welcome to Last week on OpenStack Dev (“Lwood”) for the week ending 15th November 2015. For more background on Lwood, please refer here.

Basic Stats for the week 9th to 15th November 2015:

  • ~683 Messages (basically flat relative to last week)
  • ~189 Threads (down 7% from last week)

Traffic and threads settling after the Mitaka lull then rush of the last few weeks!

Notable Discussions

Making Stable Maintenance its own OpenStack project team

This quite lengthy thread, which starts on Monday here and is neatly summarised in a post on Friday, makes the case for making stable maintenance its own discrete OpenStack project team.

Reasons in favour include the ability for a suitably empowered team to tackle new coordination tasks (across projects) and reinforcing branding – making stable more visible to organisations that may in turn be more inclined to commit resources.

To me the upsides of this far outweigh the minor downsides – as OpenStack continues to mature so will the expectation around stable releases and longer term support.

New API guidelines for review

There are three new API guidelines ready for review that will be merged on November 20th in the absence of any further feedback. They are:

Telemetry and Ceilometer explained

Gord Chung wrote a nice summary explaining the newly introduced [telemetry] tag and accompanying project and how it relates to Ceilometer. In short, telemetry is a project that encapsulates various smaller projects, including Ceilometer, that provide monitoring, alarming, data collection and resource storage style services.

OSprofiler is dead, long live OSprofiler

A tongue-in-cheek subject for a significant thread from Boris Pavlovic in which he outlines his work on a new OSprofiler. For the uninitiated, this tool allows quite fine-grained analysis of where time is spent during various OpenStack requests. Boris provides a link to a demo trace of a CLI command (nova boot) as an example.

While he acknowledges there is some work still to be done, there already appears to be quite widespread support for seeing this work integrated during the Mitaka release.

Nominations open for N and O names of OpenStack

Monty Taylor wrote to advise that nominations for the N and O releases of OpenStack were open – alas, they had closed by the time of writing this edition.

Fear not, you can still vote when the names are put forward for community voting on 30th November. If you’re curious, the geographic regions in question are “Texas Hill Country” for the “N” release and “Catalonia” for the “O” – both lovely places :)

Shout out for Nova API documentation contributions

Understand a bit about Nova? Able to explain it to others? Please consider contributing to the very important work to refresh the Nova API documentation, as Alex Xu writes here. There is also a virtual sprint planned.

Cool graphs!

What’s better than OpenStack? Graphs of OpenStack, of course! :)

Jokes aside, Paul Belanger noted the existence of a new site which will, as it develops, allow various dashboards to be created. Suggestions for what the community would like to see are sought; for now there is an example dashboard of Zuul data.

High Availability topic for openstack-dev

Adam Spiers noted that there is now a formal topic tag [HA] for High Availability related posts to the mailing list. If you’re posting on HA related topics, please use this tag to assist server-side and client-side mail filters in doing their thing.

Last sync from oslo-incubator

A good news thread with a practical bent, this one – Dims noting that most of the code in oslo-incubator has now moved into oslo.* libraries (as intended), followed up by a post from Thierry pointing out it was three years to the day since oslo-incubator was first created. Props to all involved in getting Oslo to where it is today.

Not keeping Juno around longer after all

Following up on the thread mentioned in last week’s Lwood, Tony Breeds summed up a fairly full couple of weeks’ discussion in this message, noting that “… Juno is no more long live Kilo!”

Introducing Lwood

A colleague suggested I should flag a post about Lwood from within Lwood to see if the resulting recursion would collapse the Internet. Silliness aside, as Thierry suggested, I’ve added the blog version of Lwood to Planet OpenStack.

Other Reading

Don’t forget these excellent sources of OpenStack news :)

Post Mitaka Summit Summaries and Priorities

A few more Summaries and Priority lists rolled in from the Mitaka Summit

Midcycle dates and locations

A couple of midcycles (or lack thereof) were announced:

People and Projects

  • [tacker] Sripriya Seetharam nominated for Tacker core by Sridhar Ramaswamy
  • [oslo][taskflow] Greg Hill proposed for the taskflow-core team by Joshua Harlow
  • [freezer] Proposal to add Pierre-Arthur Mathieu and Eldar Nugaev to freezer core by Marzi Fausto
  • [kolla] Proposing Michal Rostecki for Core Reviewer by Steve Dake
  • [vpnaas][neutron] Paul Michali advised he is stepping down from VPNaaS work to focus on other areas of Neutron


by hugh at November 16, 2015 11:44 AM

Ramon Acedo

OpenStack lab on your laptop with TripleO and director


This setup allows you to experiment with OSP director, play with the existing Heat templates, create new ones, and understand how TripleO is used to install OpenStack, all from the comfort of your laptop.

The VMware Fusion Professional version is used here, but this will also work in VMware Workstation with virtually no changes, and in vSphere or VirtualBox with an equivalent setup.

This guide uses the official Red Hat documentation, in particular the Director Installation and Usage.


Architecture diagram

Standard RHEL OSP 7 architecture with multiple networks, VLANs, bonding and provisioning from the Undercloud / director node via PXE.


Networks and VLANs

No special setup is needed to enable VLAN support in VMware Fusion; we just set the VLANs and their networks in RHEL as usual.


DHCP and PXE are provided by the Undercloud VM.


VMware Fusion NAT will be used to provide external access to the Controller and Compute VMs via the provisioning and external networks. VMware Fusion NAT configures a gateway on your Mac OS X for the VMs, and that gateway IP is what the TripleO templates will use as the default gateway.

VMware Fusion Networks

The networks are configured in the VMware Fusion menu in Preferences, then Network.



The provisioning (PXE) network is set up in vmnet9, the rest of the networks in vmnet10.

The above describes the architecture of our laptop lab in VMware Fusion. Now, let’s implement it.

Step 1. Create 3 VMs in VMware Fusion

VM specifications

VM          vCPUs  Memory   Disk   NICs  Boot device
Undercloud  1      3000 MB  20 GB  2     Disk
Controller  2      3000 MB  20 GB  3     1st NIC
Compute     2      3000 MB  20 GB  3     1st NIC

Disk size

You may want to increase the disk size for the controller to be able to test more or larger images, and for the compute node to be able to run more or larger instances. 3 GB of memory is enough if you include a swap partition for the compute and controller.

VMware network driver in .vmx file

Make sure the network driver in the three VMs is vmxnet3 and not e1000 so that RHEL shows all of them:

$ grep ethernet[0-9].virtualDev Undercloud.vmwarevm/Undercloud.vmx
ethernet0.virtualDev = "vmxnet3"
ethernet1.virtualDev = "vmxnet3"

ethX vs enoX NIC names

By default, the OSP director images have the kernel boot option net.ifnames=0, which names the network interfaces ethX as opposed to enoX. This is why the Undercloud (default net.ifnames=1) has interface names eno16777984 and eno33557248, while the Controller and Compute VMs have eth0, eth1 and eth2. This may change in RHEL OSP 7.2.

Undercloud VM Networks

This is the mapping of VMware networks to OS NICs. An OVS bridge, br-ctlplane, will be created automatically by the installation of the Undercloud.

Networks      VMware Network  RHEL NIC
External      vmnet10         eno33557248
Provisioning  vmnet9          eno16777984 / br-ctlplane

Copy the MAC addresses of the controller and compute VMs

Make a note of the MAC addresses of the first vNIC in the Controller and Compute VMs.


Step 2. Install the Undercloud

Install RHEL 7.1 in your preferred way in the Undercloud VM and then configure it as follows.

Network interfaces

First, set up the network: eno33557248 will carry the external IP and eno16777984 the provisioning IP.

In /etc/sysconfig/network-scripts/ifcfg-eno33557248
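A minimal static configuration along these lines should work; the addresses below are placeholders to be replaced with values from your external vmnet10 subnet and the Fusion NAT gateway:

DEVICE=eno33557248
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=<external-ip>
PREFIX=24
GATEWAY=<fusion-nat-gateway-ip>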


And in /etc/sysconfig/network-scripts/ifcfg-eno16777984
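Again a minimal sketch, with a placeholder address on the provisioning vmnet9 subnet (no gateway here):

DEVICE=eno16777984
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=<provisioning-ip>
PREFIX=24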


Once the network is set up, ssh from your Mac OS X to the external IP and not to the provisioning one, because the latter will be automatically reconfigured during the Undercloud installation to become the IP of the bridge called br-ctlplane, and you would lose access during the reconfiguration.

Undercloud hostname

The Undercloud needs a fully qualified domain name and it also needs to be present in the /etc/hosts file. For example:

# sudo hostnamectl set-hostname undercloud.osp.poc

And in /etc/hosts: undercloud.osp.poc undercloud

Subscribe RHEL and Install the Undercloud Package

Now, subscribe the RHEL OS to Red Hat’s CDN and enable the required repos.
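If you need a refresher, the sequence is roughly the following (the pool ID is a placeholder; the repo names are the usual RHEL 7 / OSP 7 ones):

$ sudo subscription-manager register
$ sudo subscription-manager attach --pool=<pool-id>
$ sudo subscription-manager repos --disable=*
$ sudo subscription-manager repos --enable=rhel-7-server-rpms \
  --enable=rhel-7-server-extras-rpms \
  --enable=rhel-7-server-openstack-7.0-rpms \
  --enable=rhel-7-server-openstack-7.0-director-rpms \
  --enable=rhel-7-server-rh-common-rpms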

Then, install the OpenStack client plug-in that will allow us to install the Undercloud

# yum install -y python-rdomanager-oscplugin

Create the user stack

After that, create the stack user, which we will use to do the installation of the Undercloud and later the deployment and management of the Overcloud.
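A sketch of the usual steps: create the user, give it passwordless sudo, and switch to it.

$ sudo useradd stack
$ sudo passwd stack
$ echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
$ sudo chmod 0440 /etc/sudoers.d/stack
$ su - stack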

Configure the director

The following undercloud.conf file is a working configuration for this guide, which is mostly self-explanatory.

For a reference of the configuration flags, there’s a documented sample in /usr/share/instack-undercloud/undercloud.conf.sample

Become the stack user and create the file in its home directory.

# su - stack
$ vi ~/undercloud.conf
image_path = /home/stack/images
local_ip =
undercloud_public_vip =
undercloud_admin_vip =
local_interface = eno16777984
masquerade_network =
dhcp_start =
dhcp_end =
network_cidr =
network_gateway =
discovery_iprange =,
undercloud_debug = true

The masquerade_network config flag is optional, as in VMware Fusion we already have NAT as explained above, but it might be needed if you use VirtualBox.

Finally, get the Undercloud installed

We will run the installation as the stack user we created:

$ openstack undercloud install

Step 3. Set up the Overcloud deployment

Verify the undercloud is working

Load the environment first, then run the service list command:

$ . stackrc
$ openstack service list
| ID                               | Name       | Type          |
| 0208564b05b148ed9115f8ab0b04f960 | glance     | image         |
| 0df260095fde40c5ab838affcdbce524 | swift      | object-store  |
| 3b499d3319094de5a409d2c19a725ea8 | heat       | orchestration |
| 44d8d0095adf4f27ac814e1d4a1ef9cd | nova       | compute       |
| 84a1fe11ed464894b7efee7543ecd6d6 | neutron    | network       |
| c092025afc8d43388f67cb9773b1fb27 | keystone   | identity      |
| d1a85475321e4c3fa8796a235fd51773 | nova       | computev3     |
| d5e1ad8cca1549759ad1e936755f703b | ironic     | baremetal     |
| d90cb61c7583494fb1a2cffd590af8e8 | ceilometer | metering      |
| e71d47d820c8476291e60847af89f52f | tuskar     | management    |

Configure the fake_pxe Ironic driver

Ironic doesn’t have a driver for powering VMware Fusion VMs on and off, so we will do that manually. We need to configure the fake_pxe driver for this.

Edit /etc/ironic/ironic.conf and add it:

enabled_drivers = pxe_ipmitool,pxe_ssh,pxe_drac,fake_pxe

Then restart ironic-conductor and verify the driver is loaded:

$ sudo systemctl restart openstack-ironic-conductor
$ ironic driver-list
| Supported driver(s) | Active host(s)     |
| fake_pxe            | undercloud.osp.poc |
| pxe_drac            | undercloud.osp.poc |
| pxe_ipmitool        | undercloud.osp.poc |
| pxe_ssh             | undercloud.osp.poc |

Upload the images into the Undercloud’s Glance

Download the images that will be used to deploy the OpenStack nodes to the directory specified in the image_path in the undercloud.conf file, in our example /home/stack/images. Get the images and untar them as described here. Then upload them into Glance in the Undercloud:

$ openstack overcloud image upload --image-path /home/stack/images/

Define the VMs into the Undercloud’s Ironic

TripleO needs to know about the nodes, in our case the VMware Fusion VMs. We describe them in the file instackenv.json which we’ll create in the home directory of the stack user.

Notice that here is where we use the MAC addresses we took from the two VMs.

 "nodes": [
   "arch": "x86_64",
   "cpu": "2",
   "disk": "20",
   "mac": [
   "memory": "3000",
   "pm_type": "fake_pxe"
   "arch": "x86_64",
   "cpu": "2",
   "disk": "20",
   "mac": [
   "memory": "3000",
   "pm_type": "fake_pxe"

Import them to the undercloud:

$ openstack baremetal import --json instackenv.json

The command above adds the nodes to Ironic:

$ ironic node-list
| UUID                                 | Name | Instance UUID                        | Power State | Provision State | Maintenance |
| 111cf49a-eb9e-421d-af05-35ab0d74c5d6 | None | 941bbdf9-43c0-442e-8b65-0bd531322509 | power off   | available       | False       |
| e579df9f-528f-4d14-94bc-07b2af4b252f | None | f1bd425b-a4d9-4eca-8bc4-ee31b300e381 | power off   | available       | False       |

To finish the registration of the nodes we run this command:

$ openstack baremetal configure boot

Discover the nodes

At this point we are ready to start discovering the nodes, i.e. having Ironic power them on, boot them with the discovery image that was uploaded before, and then shut them down after the relevant hardware information has been saved in the node metadata in Ironic. This process is called introspection.

Note that as we use the fake_pxe driver, Ironic won’t power on the VMs, so we do it manually in VMware Fusion. We wait until the output of ironic node-list tells us that the power state is on and then we run this command:

$ openstack baremetal introspection bulk start
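While waiting for the VMs to power up, and later for introspection to finish, a simple way to keep an eye on the node state is to poll Ironic from the stack user’s environment:

$ watch -n 10 ironic node-list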

Assign the roles to the nodes in Ironic

There are two roles in this example, compute and control. We will assign them manually with Ironic.

$ ironic node-update 111cf49a-eb9e-421d-af05-35ab0d74c5d6 add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update e579df9f-528f-4d14-94bc-07b2af4b252f add properties/capabilities='profile:control,boot_option:local'

Create the flavors in Nova and associate them with the roles in Ironic

This consists of creating flavors matching the specs of the VMs and then adding the properties control and compute to the corresponding flavors, matching the Ironic capabilities set in the previous step. A flavor called baremetal is also required.

$ openstack flavor create --id auto --ram 3000 --disk 17 --vcpus 2 --swap 2000 compute
$ openstack flavor create --id auto --ram 3000 --disk 19 --vcpus 2 --swap 1500 control

TripleO also needs a flavor called baremetal (which we won’t use):

$ openstack flavor create --id auto --ram 3000 --disk 19 --vcpus 2 baremetal

Notice the disk size is 1 GB smaller than the VM’s disk. This is a precaution to avoid “No valid host found” errors when deploying with Ironic, which sometimes is a bit too sensitive.

Also, notice that I added swap because 3 GB of memory is not enough and the out-of-memory killer could be triggered otherwise.

Now we make the flavors match the capabilities we set on the Ironic nodes in the previous step:

$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="control" control
$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="compute" compute


Step 4. Create the TripleO templates

Get the TripleO templates

Copy the TripleO heat templates to the home directory of the stack user.

$ mkdir ~/templates
$ cp -r /usr/share/openstack-tripleo-heat-templates/ ~/templates/

Create the network definitions

These are our network definitions:

Network             Subnet  VLAN
Provisioning                VMware native
Internal API                201
Tenant                      204
Storage                     202
Storage Management          203
External                    VMware native

To allow creating dedicated networks for specific services, we describe them in a Heat environment file that we can call network-environment.yaml.

$ vi ~/templates/network-environment.yaml
resource_registry:
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml

parameter_defaults:
  # The IP address of the EC2 metadata server. Generally the IP of the Undercloud
  EC2MetadataIp:
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute:
  DnsServers: [""]

  # Leave room for floating IPs in the External allocation pool
  ExternalAllocationPools: [{'start': '', 'end': ''}]
  InternalApiAllocationPools: [{'start': '', 'end': ''}]
  TenantAllocationPools: [{'start': '', 'end': ''}]
  StorageAllocationPools: [{'start': '', 'end': ''}]
  StorageMgmtAllocationPools: [{'start': '', 'end': ''}]

  InternalApiNetworkVlanID: 201
  StorageNetworkVlanID: 202
  StorageMgmtNetworkVlanID: 203
  TenantNetworkVlanID: 204
  # ExternalNetworkVlanID: 100

  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute:
  # Set to "br-ex" if using floating IPs on native VLAN on bridge br-ex
  NeutronExternalNetworkBridge: "br-ex"

  # Customize bonding options if required
  BondInterfaceOvsOptions:

More information about this template can be found here.

Configure the NICs of the VMs

We have examples of NIC configurations for multiple networks and bonding in /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/

We will use them as a template to define the Controller and Compute NIC setup.

$ mkdir ~/templates/nic-configs/
$ cp /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/* ~/templates/nic-configs/

Notice that they are called from the previous template network-environment.yaml.

Controller NICs

We want this setup in the controller:

Bonded Interface  Bond Slaves  Bond Mode
bond1             eth1, eth2   active-backup

Networks            VMware Network  RHEL NIC
Provisioning        vmnet9          eth0
External            vmnet10         bond1 / br-ex
Internal            vmnet10         bond1 / vlan201
Tenant              vmnet10         bond1 / vlan204
Storage             vmnet10         bond1 / vlan202
Storage Management  vmnet10         bond1 / vlan203

We only need to modify the resources section of the ~/templates/nic-configs/controller.yaml to match the configuration in the table above:

$ vi ~/templates/nic-configs/controller.yaml
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: interface
              name: nic1
              use_dhcp: false
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask:
                  next_hop: {get_param: EC2MetadataIp}
            -
              type: ovs_bridge
              name: {get_input: bridge_name}
              addresses:
                - ip_netmask: {get_param: ExternalIpSubnet}
              routes:
                - ip_netmask:
                  next_hop: {get_param: ExternalInterfaceDefaultRoute}
              dns_servers: {get_param: DnsServers}
              members:
                -
                  type: ovs_bond
                  name: bond1
                  ovs_options: {get_param: BondInterfaceOvsOptions}
                  members:
                    -
                      type: interface
                      name: nic2
                      primary: true
                    -
                      type: interface
                      name: nic3
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: InternalApiNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: InternalApiIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: StorageIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageMgmtNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: StorageMgmtIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: TenantNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: TenantIpSubnet}

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}

Compute NICs

In the compute node we want this setup:

Bonded Interface  Bond Slaves  Bond Mode
bond1             eth1, eth2   active-backup

Networks      VMware Network  RHEL NIC
Provisioning  vmnet9          eth0
Internal      vmnet10         bond1 / vlan201
Tenant        vmnet10         bond1 / vlan204
Storage       vmnet10         bond1 / vlan202
$ vi ~/templates/nic-configs/compute.yaml
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: interface
              name: nic1
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask:
                  next_hop: {get_param: EC2MetadataIp}
                -
                  default: true
                  next_hop: {get_param: ControlPlaneDefaultRoute}
            -
              type: ovs_bridge
              name: {get_input: bridge_name}
              members:
                -
                  type: ovs_bond
                  name: bond1
                  ovs_options: {get_param: BondInterfaceOvsOptions}
                  members:
                    -
                      type: interface
                      name: nic2
                      primary: true
                    -
                      type: interface
                      name: nic3
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: InternalApiNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: InternalApiIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: StorageIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: TenantNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: TenantIpSubnet}

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}

Enable Swap

Enabling the swap partition is done from within the OS; Ironic only creates the partition as instructed by the flavor. This can be done with the templates that allow running first-boot scripts via cloud-init.

First, the template that registers the first-boot script with cloud-init, /home/stack/templates/firstboot/firstboot.yaml:

resource_registry:
  OS::TripleO::NodeUserData: /home/stack/templates/firstboot/userdata.yaml

Then, the actual script that enables swap, /home/stack/templates/firstboot/userdata.yaml:

heat_template_version: 2014-10-16

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: swapon_config}

  swapon_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        rc_local=/etc/rc.d/rc.local
        swap_device=$(sudo fdisk -l | grep swap | awk '{print $1}')
        if [[ $swap_device && ${swap_device} ]]; then
          echo "swapon $swap_device " >> $rc_local
          chmod 755 $rc_local
          swapon $swap_device
        fi

outputs:
  OS::stack_id:
    value: {get_resource: userdata}


Step 5. Deploy the Overcloud


We have everything we need to deploy now:

  • The Undercloud configured.
  • Flavors for the compute and controller nodes.
  • Images for the discovery and deployment of the nodes.
  • Templates defining the networks in OpenStack.
  • Templates defining the nodes’ NICs configuration.
  • A first boot script used to enable swap.

We will use all this information when running the deploy command:

$ openstack overcloud deploy \
--templates templates/openstack-tripleo-heat-templates/ \
-e templates/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e templates/network-environment.yaml \
-e templates/firstboot/firstboot.yaml \
--control-flavor control \
--compute-flavor compute \
--neutron-tunnel-types vxlan --neutron-network-type vxlan

After a successful deployment you’ll see this:

Deploying templates in the directory /home/stack/templates/openstack-tripleo-heat-templates
Overcloud Endpoint:
Overcloud Deployed

An overcloudrc file with the environment is created for you to start using the new OpenStack environment deployed on your laptop.

Step 6. Start using the Overcloud

Now we are ready to start testing our newly deployed platform.

$ . overcloudrc
$ openstack service list
| ID | Name | Type |
| 043524ae126b4f23bd3fb7826a557566 | glance     | image         |
| 3d5c8d48d30b41e9853659ce840ae4fe | neutron    | network       |
| 418d4f34abe449aa8f07dac77c078e9c | nova       | computev3     |
| 43480fab74fd4fd480fdefc56eecfe83 | cinderv2   | volumev2      |
| 4e01d978a648474db6d5b160cd0a71e1 | nova       | compute       |
| 6357f4122d6d41b986dab40d6fb471e3 | cinder     | volume        |
| a49119e0fd9f43c0895142e3b3f3394a | keystone   | identity      |
| b808ae83589646e6b7033f2b150e7623 | horizon    | dashboard     |
| d4c9383fa9e94daf8c74419b0b18fd6e | heat       | orchestration |
| db556409857d4d24872cdc1b718eee8f | swift      | object-store  |
| ddc3c82097d24f478edfc89b46310522 | ceilometer | metering      |
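From here, a quick smoke test might look like the following; the CirrOS image file and the flavor values are just examples:

$ openstack image create --disk-format qcow2 --container-format bare \
  --file cirros-0.3.4-x86_64-disk.img cirros
$ openstack flavor create --id auto --ram 512 --disk 1 --vcpus 1 m1.tiny
$ nova boot --image cirros --flavor m1.tiny test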

by ramonacedo at November 16, 2015 09:37 AM

Looking forward to the Mitaka release, container integration, and more OpenStack news

Interested in keeping track of what's happening in the open source cloud? Opensource.com is your source for news in OpenStack, the open source cloud infrastructure project.

by Jason Baker at November 16, 2015 09:00 AM

November 13, 2015

OpenStack Blog

OpenStack Weekly Community Newsletter (Nov., 7-13)

Kilowatts for Humanity harnesses the wind and sun to electrify rural communities

OpenStack-powered DreamCompute cloud computing helps keep rural microgrids up and running

OpenStack and Network Function Virtualization, the backstory

Jonathan Bryce, OpenStack executive director, gave a keynote at the OPNFV Summit where he talked about the two communities.

Fighting cloud vendor lock-in, manga style

To celebrate the OpenStack community and Liberty release, the Cloudbase Solutions team created a one-of-a-kind printed comic packed with OpenStack puns and references to manga and kaiju literature.

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch:

Reports from Previous Events 

Deadlines and Contributors Notifications

Security Advisories and Notices 

  • None this week

Tips ‘n Tricks 

OpenStack Developer Mailing List Digest November 7-13

Upcoming Events 

by Jay Fankhauser at November 13, 2015 10:35 PM

OpenStack Developer Mailing List Digest November 7-13

New Management Tools and Processes for stable/liberty and Mitaka

  • For release management, we used a combination of launchpad milestone pages and our wiki to track changes in releases.
  • We used to pull release notes for stable point releases at the time of release.
  • Release managers would work with PTLs and release liaisons at each milestone to update launchpad to reflect the work completed.
  • All this requires a lot of work of the stable maintenance and release teams.
  • To address this work with the ever-growing set of projects, the release team is introducing Reno for continuously building release notes as files in-tree.
    • The idea is small YAML files, usually one per note or patch to avoid merge conflicts on back ports, which are then compiled to a readable document for readers.
    • ReStructuredText and Sphinx are supported for converting note files to HTML for publication.
    • Documentation for using Reno is available [1].
  • Release liaisons should create and merge a few patches for each project between now and Mitaka-1 milestone:
    • To the master branch, instructions for publishing the notes. An example of Glance [2].
    • Instructions for publishing in stable/liberty of the project. An example with Glance [3].
    • Relevant jobs in project-config. An example with Glance [4].
    • Reno was not ready before the summit, so the wiki was used for release notes for the initial Liberty releases. Liaisons should convert those notes to Reno YAML files in the stable/liberty branch.
  • Use the topic ‘add-reno’ for all patches to track adoption.
  • Once liaisons have done this work, launchpad can stop being used for tracking completed work.
  • Launchpad will still be used for tracking bug reports, for now.

Keeping Juno “alive” For Longer

  • Tony Breeds is seeking feedback on the idea of keeping Juno around a little longer.
  • According to the current user survey [5], Icehouse still has the biggest install base in production clouds. Juno is second, which means if we end of life (EOL) Juno this month, ~75% of production clouds will be running an EOL’d release.
  • The problems with doing this, however:
    • CI capacity for running the jobs needed to make sure stable branches still work.
    • A lack of people with the resources and interest to make sure the stable branch continues to work.
    • Juno is still tied to Python 2.6.
    • Security support is still needed.
    • Tempest is branchless, so it’s running stable-compatible jobs.
  • This is acknowledged as a common request, the common answer being “push more resources into fixing existing stable branches and we might consider it”.
  • Matt Riedemann, who works in the trenches of stable branches, confirms stable/juno is already a goner due to requirements-capping issues: you fix one issue to unwedge a project, and with global-requirements syncs you end up breaking two other projects. The cycle never ends.
  • This same problem does not exist in stable/kilo, because we’ve done a better job of isolating versions in global-requirements with upper-constraints.
  • Sean Dague wonders what reasons keep people from doing upgrades to begin with. Tony is unable to give specifics, since some are internal to his company’s offering.

Oslo Libraries Dropping Python 2.6 compatibility

  • Davanum notes a patch to drop py26 oslo jobs [6].
  • Jeremy Stanley notes that the infrastructure team plans to remove CentOS 6.X job workers which includes all python 2.6 jobs when stable/juno reaches EOL.

Making Stable Maintenance its own OpenStack Project Team

  • Thierry writes that when the Release Cycle Management team was created, it just happened to contain release management, stable branch management, and vulnerability management.
    • Security Team was created and spun out of the release team today.
  • Proposal: spin out the stable branch maintenance as well.
    • Most of the stable team work used to be stable point release management, but as of stable/liberty this is now done by the release management team and triggered by the project-specific stable maintenance teams, so there is no more overlap in tooling used there.
    • Stable team is now focused on stable branch policies [7], not patches.
    • Doug Hellmann leads the release team and does not have the history Thierry had with stable branch policy.
    • Empowering the team to make its own decisions, and giving it visibility and recognition, in hopes of encouraging more resources to be dedicated to it:
      • Defining and enforcing stable branch policy.
      • If the team expands, it could own stable branch health/gate fixing. The team could then make decisions on support timeframes.
    • Matthew Treinish disagrees that the team separation solves the problem of getting more people to work on gate issues. Thierry has evidence, though, that making it its own project will result in additional resources. The other option is to kill stable branches, but as we’ve seen from the Keeping Juno Alive thread, there’s still interest in having them.

by Mike Perez at November 13, 2015 10:25 PM


Can you deploy OpenStack in an IPV6-only cloud?




In 2011, the last IPv4 address blocks were assigned by the Internet Assigned Numbers Authority (IANA), which manages IP addresses globally, to the regional internet registries (RIRs). For the next year or so, Europe, Asia, and Latin America continued to parcel out addresses that were already in their control, and in September, ARIN, the RIR responsible for North America, ran out of IPv4 addresses.

Thankfully, there’s no need to despair. IPv6, the next version of the IP protocol, is becoming more widely adopted, and yes, you can deploy an IPv6-only OpenStack cloud.

At the OpenStack Summit in Tokyo, Brian Haley and Sean Collins, software engineers at HP and Mirantis respectively, discussed their work to deploy OpenStack with IPv6, some of the issues they found in development and testing, and current efforts expected through the Mitaka cycle. Watch the video to learn more, and stay tuned to the blog as Sean kicks off our new series, “Guerilla OpenStack,” with a group of blogs explaining how to create your own home OpenStack lab using IPv6.





The post Can you deploy OpenStack in an IPV6-only cloud? appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Jodi Smith at November 13, 2015 07:52 PM

OpenStack @ NetApp

NetApp in Tokyo - OpenStack Summit Recap


About two weeks ago, NetApp sent a large contingent of employees to the biannual OpenStack Summit conference -- this time in Tokyo, Japan! 5,000 developers, operators, and users from 56 different countries converged to talk about their experiences with OpenStack, learn more about what’s new in the ecosystem, and help design the next release of one of the fastest growing pieces of open source software in history: OpenStack!

Tokyo Summit


The biggest announcement was that the Manila file-share-as-a-service project is now production ready for the mainstream enterprise. NetApp’s open source contributions to Manila provide an automated, on-demand, scalable service for delivering shared and distributed file systems using an open, standardized API developed within the OpenStack community.

As the founder of the Manila project and a charter member of the OpenStack Foundation, NetApp has contributed deep expertise and technology that enable application developers to choose the best cloud storage model for their applications. Because file shares can be offered as a service, they can be self-served and migrated, no matter whether the cloud is on or off the premises or is private or public.

If you want to learn more about Manila (and how to deploy it in OpenStack Liberty), check out our Deployment and Operations Guide @

You can also jump on IRC - we’re always hanging out in #openstack-manila and #openstack-netapp on Freenode! We’ve got weekly meetings at 1500 UTC in #openstack-meeting-alt on Freenode as well.

October 2015 User Survey

The OpenStack Foundation conducts a User Survey every 6 months (roughly concurrent with the advent of a new release) that profiles how and where OpenStack is being deployed. NetApp has consistently been rated as the leading commercial storage option for Proof of Concept, Development and Q/A, and Production environments. The latest survey was no exception.

See the results here, specifically on page 31: October 2015 OpenStack User Survey.

NetApp Sessions in Tokyo

We had some good representation from NetApp at the Tokyo Summit's speaking sessions. If you weren't able to join us in person, please feel free to watch recordings via the table below:

Session Title                                                                                          Who from NetApp
Yahoo! Japan Builds an OpenStack Enterprise Storage Architecture for Japan's Largest Internet Portal   Akihiro Katano
Manila - An Update from Liberty                                                                        Akshai Parthasarathy
Manila and Sahara: Crossing the Desert to the Big Data Oasis                                           Jeff Applewhite
Test Drive for OpenStack with NetApp Cloud ONTAP                                                       Akshai Parthasarathy
Accelerate POC to Production: OpenStack with FlexPod                                                   Dave Cain
Scalable and Reliable OpenStack Deployments on FlexPod                                                 Dave Cain

Tech ONTAP Podcast

The NetApp Tech ONTAP Podcast covered the OpenStack Summit in Episode #13. For more information and where to listen, head over to this blog post on the NetApp Community site.

NetApp Insight Berlin


NetApp Insight is NetApp’s annual technical conference for storage and data management professionals. Occurring next week from November 16-19, 2015 in the CityCube in Berlin -- Insight gives customers, engineers, consultants and partners a forum for learning from industry experts and each other.

Choose from more than 300 technical sessions, participate in self-paced Hands-On Labs, earn NetApp Certifications at no cost, and see exciting new technology at the conference. You don't want to miss this event.

If you are going to be there, be sure to stop by our booth in the NetApp Pavilion to discuss all things OpenStack! If you're not able to make it to Berlin, follow along in real time on Twitter: watch the hashtag #OpenStackNetApp to see what we're up to!

November 13, 2015 06:37 PM

Tesora Corp

Short Stack: Building a Developer Story, Embracing the Open Source Shift, and a Look back at Tokyo’s Summit

Welcome to the Short Stack, our regular feature where we search for the most intriguing OpenStack news. These links may come from traditional publications or company blogs, but if it’s about OpenStack, we’ll find the best ones.

Here are our latest links:

Sitting down with NTT Group, the Superuser Awards winner at the OpenStack Summit Tokyo | Superuser TV

Superuser TV interviewed NTT Software Innovation Center’s Shintaro Mizuno and Toshikazu Ichikawa after accepting the Superuser Award at OpenStack Summit. Both shared the mission of NTT R&D and how they supported OpenStack use in the organization.

Building a Developer Story for Mitaka | Datamation

Mirantis co-founder Boris Renski described the importance of creating a developer story as the OpenStack community anticipates the Mitaka release in 2016. Renski stated that because OpenStack is maturing, it can differentiate itself from companies such as VMware or Amazon by creating unique developer narratives. If OpenStack can take measures that are considered more disruptive, like building distributed applications using containers and orchestration, that would serve as a long-term benefit for the platform.

Tesora Advances OpenStack Trove DBaaS Features for MySQL and PostgreSQL | Ostatic

On Wednesday, Tesora released the latest update to their Database as a Service platform. Version 1.6 adds support for MySQL Enterprise along with enhanced log file management and Ceilometer integration for other databases like Oracle 12c. This means users can now provision and manage twelve different databases with Tesora’s OpenStack Trove-based Database as a Service platform.

Embracing the tech world’s open source shift |

Ruth Bavousett shared her personal story of how she discovered Linux and became a system administrator and software developer. She worked particularly with library technologies, and highlighted just how far our systems have come in the past decade. While Bavousett claimed to miss the intricacies of the old closed-source world, the technology available today has largely left those things behind.

More Signal & Less Noise: My OpenStack Tokyo Retrospective |

Rob Hirschfeld gave one of the final recaps of the OpenStack Summit Tokyo. He highlighted the recent shift to the “big tent” era, as well as many trending topics that emerged at the Summit, particularly the global users and providers of OpenStack and container workloads. While he gave an overall positive review of the Summit, Hirschfeld relayed that there is more work to be done in the community – especially around the lack of structure concerning how to mingle platform, community and ecosystem.

The post Short Stack: Building a Developer Story, Embracing the Open Source Shift, and a Look back at Tokyo’s Summit appeared first on Tesora.

by Alex Campanelli at November 13, 2015 05:25 PM


Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.


Last updated:
November 29, 2015 10:58 PM
All times are UTC.

Powered by: