November 21, 2014

Ben Nemec

Paris OpenStack Summit - The Big Tent Casts a Long Shadow

So it's been a couple of weeks since I got back from the Paris Summit, but it's been a busy time and I haven't had time to write up all of my thoughts about it. Here's my belated attempt to do that.

From a purely personal perspective, I felt like this summit was much more useful than Atlanta. That's not a commentary on the summit itself, but more on my involvement in it. I don't know if I was just more comfortable jumping into discussions or if an extra six months working on everything provided me with stronger opinions, but regardless I felt like I contributed a lot more this time around. Some highlights of the discussions I was in:

Logging

Lots of good discussion around improving logging this cycle, which, as a TripleO contributor who is constantly frustrated by our useless logging at anything but DEBUG level, makes me very happy. We have a spec that is very close to ready IMHO, and once that merges we can get to work really improving things in this area.

Upgrades

Also lots of talk about upgrades. Since this and logging were the top two operator issues with OpenStack (by a long shot, IIRC), it's good that the focus is appropriately strong on them. I got the impression from some of the discussions that there's still a lot that isn't fully understood about how Nova is moving forward with their live upgrade mechanisms (and thus how the other projects can follow suit), but once all the mechanisms are in place it becomes more of an education issue than an implementation one, which should be an easier problem.

There's a long way to go, but I'm seeing a glimmer of light at the end of the live upgrades tunnel.

TripleO Review Policy

This was a long discussion, but I think it was valuable and the end result was a streamlining of the review process in TripleO. If this works out I could see it being adopted across many more projects (some have already expressed interest). Since core reviewer bandwidth is one of the biggest bottlenecks in OpenStack, optimizing it is important.

See the new review policy spec for a detailed discussion of the motivations and conclusions from this discussion.

Puppet/Chef/Ansible/etc.

While I will admit to having been poorly prepared for these discussions, it felt like there was a good dialog between some experts in the various tools and the TripleO team who are trying to implement their usage. I learned a lot and I think a general path forward was determined. It's early days for that work though, so we'll see where it goes from here.

Lunch

Not a technical point, but I thought some kudos were deserved here. Lunches at Le Meridien were terrific. The lines were kind of excessive the last two days, but I suspect that was a reflection of the quality and the fact that fewer people felt the need to go elsewhere to find food they wanted to eat.

There were a few lowlights too though:

Big Tent

The Big Tent sessions didn't feel like they made any significant progress toward an answer, at least from where I was sitting. Based on that, my suspicion is that a massive, all-at-once change to a different governance model just isn't going to happen. Hopefully we can find some incremental improvements to make along the way, and maybe some day we'll end up in a place everyone is happier with. This topic's probably above my pay grade though.

Containers

Based solely on this session, containers as a service may never be a thing in OpenStack. :-(

"The sky is falling" pessimism aside though, my reaction after the session concluded was less negative than while I was in the session. Honestly, I think the disconnect there (which was never explicitly called out in the session, to my knowledge) is that the folks working on containers want to collaborate with the OpenStack Compute team to make their new project fit in as well as possible, but the OpenStack Compute team is overloaded as it is and can't give them the time and effort they're looking for. This has led to a lot of wheel spinning in the effort to get the project off the ground.

I suspect the outcome of this topic is going to be that the containers team goes off and does their thing, and eventually they live as a largely separate entity under the Compute umbrella. But my crystal ball has been wrong before. :-)

In the end, I think it was a productive summit and we seem to be working on the right things to make OpenStack better for the people using it. I could probably write for another $TOO_MANY hours about everything that went on, but I have an RDO release for TripleO to work on so I'll stop here.

by bnemec at November 21, 2014 06:44 PM

Rafael Knuth

Online Meetup: How does OpenStack Swift play with Open vStorage?

Join us for a conversation between Swift PTL John Dickinson and Wim Provoost of Open vStorage as...

November 21, 2014 05:21 PM

OpenStack Blog

OpenStack Community Weekly Newsletter (Nov 14 – 21)

OpenStack User Survey Insights: November 2014

This is the fourth consecutive survey conducted by the User Committee; one has been run prior to each Summit since April 2013, with the previous survey conducted in Atlanta in May 2014. The goals of the survey are to generate insights based on a representative sample of OpenStack users, in order to better understand their organizational profiles, use cases and technology choices across different deployment stages and sizes. These insights are intended to provide feedback to the broader community, and to arm technical leaders and contributors with better data to make decisions.

Wrapping up the Travel Support Program – Kilo

The OpenStack Foundation brought 20 people to Paris for the Summit earlier this month, thanks to the grants offered by the Travel Support Program. We had 22 people accepted in the program from 11 different countries, spanning five continents. Four people traveled from Brazil, four from India, three from Europe and the rest were from South America, North America and South-east Asia. Of the selected recipients, two were unable to attend due to visa timing issues, but we were excited to welcome the 20 attendees who were able to make the trip. Stay tuned for when we open applications to the Travel Support Program for the May Summit in Vancouver.

A mascot for Ironic

Picking the mascot was easy, because the RAX guys put “bear metal” in their presentation and that totally rocks! So Lucas Alvares Gomes drew a bear.

Relevant Conversations

Deadlines

Tips ‘n Tricks

Security Advisories and Notices

Reports from Previous Events

Upcoming Events

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

Welcome New Reviewers, Developers and Core Reviewers

Welcome Michael McCune and Sergey Reshetniak to sahara-core, and Steve Heyman and Juan Antonio Osorio Robles to barbican-core.

Xiao Xi LIU Ai Jie Niu
Naohiro Tamura jiangfei
Danny Wilson Dave Chen
yatin rajiv
Richard H juigil kishore
Pawel Palucki Richard Hedlind
Park Lucas Dutra Nunes
jmurax Nicolas T
Swati Shukla Michael Hagedorn
Guillaume Giamarchi keshava
Seb Hughes
sailajay

OpenStack Reactions

Helping new contributor

Finding a great bug to solve and giving it to new contributors (and helping them)

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at November 21, 2014 04:47 PM

Flavio Percoco

What's coming in Kilo for Glance, Zaqar and Oslo?

As usual, here's a write up of what happened last week during the OpenStack Summit. More than a summary, this post contains the plans we discussed for the next 6 months.

Glance

Lots of things happened in Juno for Glance. Work related to artifacts was done, async workers were implemented and glance_store was created. If none of these things excite you, I'm sorry to tell you that you're missing the big picture.

The three features mentioned above are the basis of many things that will happen in Kilo. For a long time we've been waiting for async workers to land, and now that we have them we can finally put them to use. One of the first things that will consume this feature is image introspection, which will allow Glance to read an image's metadata and extract useful information from it. In addition to this, we'll be messing with images a bit more by implementing basic support for image conversion, allowing images to be converted automatically during uploads and also as a manual operation. There are many things to take care of here and tons of subtle corner cases, so please keep an eye on these things and help us out.

The work on artifacts is not complete; there are still many things to do there, and lots of patches and code are being written. This still seems to be the path the project is going down for Kilo, allowing more generic catalogs and support for storing data assets.

One more thing on Glance: all the work that happened in glance_store during Juno will finally pay off in Kilo. We'll start refactoring the library, and it'll likely be adopted by Nova in K-2. Notice I said likely? That's because before we get there, we need to clean up the messy Glance wrapper Nova has. In that same session we discussed what to do with that code and agreed on getting rid of it and letting Nova consume glanceclient directly, which will happen in kilo-1 before the glance_store adoption. Here's the spec.

Zaqar

When thinking about Zaqar and Kilo, you need to keep 3 things in mind:

  1. Notifications
  2. Persistent Transport
  3. Integration with other services

Notifications are something we've wanted to work on since Icehouse. We talked about them back in Hong Kong, then in Atlanta, and we finally have a good plan for them now. The team will put a lot of effort into this feature and we'd love to get as much feedback as possible on the implementation, use cases and targets. In order to implement notifications and mark a fresh start, the team has also decided to bump the API version number to 2 and use this chance to clean up the technical debt from previous versions. Some of the things that will go away from the API are:

  • Get messages by id
  • FIFO will become optional
  • Queues will be removed from the API; instead we'll start talking about topics. Some notes on this here.

One of the project's goals is to be easily consumed regardless of the device you're using. Moreover, the project wants to allow users to integrate with it. Therefore, the team is planning to start working on a persistent transport in order to define a message-based protocol that is both stateless and persistent as far as the communication between the peers goes. The first target is websocket, which will allow users to consume Zaqar's API from a browser, or using a library, without having to go down to raw TCP connections, which was highly discouraged at the summit. This fits perfectly with the project's goals to be easily consumable and to reuse existing technologies and solutions as much as possible.

Although the above two features sound exciting, the ultimate goal is to integrate with other projects in the community. The team has long waited for this opportunity, and now that it has a stable API it is the perfect time for this integration to happen. At our integration session folks from Barbican, Trove, Heat and Horizon showed up - THANKS - and they all shared use cases, ideas and interesting opinions about what they need and about what they'd like to see happening for Kilo with regards to this integration. Based on the results of this session, Heat and Horizon are likely to be the first targets. The team is thrilled about this and we're all looking forward to this collaboration.

Oslo

No matter what I work on, I'll always have time for Oslo. Just like for the other projects I mentioned, there will be exciting things happening in Oslo as well.

Let me start by saying that new libraries will be released, but not many of them. This will give the team the time needed to focus on the existing ones and also to work on the other, perhaps equally important, items in the list. For example, we'll be moving away from using namespaces - YAY! - which means we'll be updating all the already-released libraries. Something worth mentioning is that the already-released libraries won't be renamed, and the ones to be released will follow the same naming standard. The difference is that they won't be using namespaces internally at all.

Also related to library maintenance, the team has decided to stop using alpha versions for the libraries. One of the points against this is that we currently don't put caps on stable branches; however, this will change in Kilo. We will pin to MAJOR.MINOR+1 in stable branches, still allowing bug-fix releases (MAJOR.MINOR.PATCH+1) to be picked up.

I unfortunately couldn't attend all the Oslo sessions and I missed one that I really wanted to attend about oslo.messaging. By reading the etherpad, it looks like great things will happen in the library during kilo that will help with growing its community. Drivers will be kept in tree, zmq won't be deprecated, yet. Some code de-duplication will happen and both the rabbit and qpid driver will be merged into a single one now that kombu has support for qpid. Just like other projects throughout OpenStack, we'll be targeting full Py3K support like CRAZY!

Hopefully I didn't forget anything or, even worse, say something stupid. Now, if you'll excuse me, I gotta go offline for the next six months. Someone has to work on these things.

by FlaPer87 at November 21, 2014 03:33 PM

Rob Hirschfeld

To thrive, OpenStack must better balance dev, ops and business needs.

OpenStack has grown dramatically in many ways but we have failed to integrate development, operations and business communities in a balanced way.

My most urgent observation from Paris is that these three critical parts of the community are having vastly different dialogs about OpenStack.

At the Conference, business people were talking about core, stability and utility, while the developers were talking about features, reorganizing and expanding projects. The operators, unfortunately segregated in a different location, were trying to figure out how to share best practices and tools.

Much of this structural divergence was intentional and should be (re)evaluated as we grow.

OpenStack events are split into distinct focus areas: the conference for business people, the summit for developers and specialized days for operators. While this design serves a purpose, the community needs to be taking extra steps to ensure communication. Without that communication, corporate sponsors and users may find it easier to solve problems inside their walls than outside in the community.

The risk is clear: vendors may find it easier to work on a fork where they have business and operational control than work within the community.

Inside the community, we are working to help resolve this challenge with several parallel efforts. As a community member, I challenge you to get involved in these efforts to ensure the project balances dev, biz and ops priorities.  As a board member, I feel it’s a leadership challenge to make sure these efforts converge and that’s one of the reasons I’ve been working on several of these efforts:

  • OpenStack Project Managers (was Hidden Influencers) across companies in the ecosystem are getting organized into their own team. Since these managers effectively direct the majority of OpenStack developers, this group will allow development priorities to be coordinated across the companies funding that work.
  • DefCore Committee works to define a smaller subset of the overall OpenStack Project that will be required for vendors using the OpenStack trademark and logo. This helps the business community focus on interoperability and stability.
  • The technical leadership (TC) led “Big Tent” concept aligns with DefCore work and attempts to create a stable base platform while making it easier for new projects to enter the ecosystem. I’ve got a lot to say about this, but frankly, without safeguards, this scares people in the ops and business communities.
  • The lack of an operations “ready state” baseline keeps the community from being able to share best practices – this has become a pressing need.  I’d like to suggest OpenCrowbar, which lives outside of OpenStack, as a way to help provide an ops-neutral common starting point. Having the OpenStack developer community attempt to create an installer using OpenStack has proven a significant distraction and only further distances operators from the community.

We need to get past seeing the project primarily as a technology platform.  Infrastructure software has to deliver value as an operational tool for enterprises.  For OpenStack to thrive, we must make sure the needs of all constituents (Dev, Biz, Ops) are being addressed.


by Rob H at November 21, 2014 03:30 PM

Self-Exposure: Hidden Influencers become OpenStack Product Working Group

Warning to OpenStack PMs: If you are not actively involved in this effort then you (and your teams) will be left behind!

The Hidden Influencers (now called “OpenStack Product Working Group”) had a GREAT and PRODUCTIVE session at the OpenStack Summit (full notes):

  1. Named the group!  OpenStack Product Working Group (now, that’s clarity in marketing) [note: I was incorrect saying "Product Managers" earlier].
  2. Agreed to use the mailing list for communication.
  3. Committed to a face-to-face mid-cycle meetup (likely in South Bay)
  4. Output from the meetup will be STRATEGIC DIRECTION doc to board (similar but broader than “Win the Enterprise”)
  5. Regular meeting schedule – like developers but likely voice interactive instead of IRC.  Stefano Maffulli is leading.

PMs starting this group already direct the work for a super majority (>66%) of active contributors.

The primary mission for the group is to collaborate and communicate around development priorities so that we can ensure that project commitments get met.

It was recognized that the project technical leads are already strapped coordinating release and technical objectives.  Further, the product managers are already, but independently, engaged in setting strategic direction; we cannot rely on existing OpenStack technical leadership to have the bandwidth.

This effort will succeed to the extent that we can keep the broader community tied in and connect development effort back to the dollars of the people paying for those developers.  In my book, that’s what product managers are supposed to do.  Hopefully, getting this group organized will help surface that discussion.

This is a big challenge considering that these product managers have to balance corporate, shared project and individual developers’ requirements.  Overall, I think Allison Randal summarized our objectives best: “we’re herding cats in the same direction.”


by Rob H at November 21, 2014 03:11 PM

Leveling OpenStack’s Big Tent: is OpenStack a product, platform or suite?

Question of the day: What should OpenStack do with all those eager contributors?  Does that mean expanding features or focusing on a core?

In the last few months, the OpenStack technical leadership (Sean Dague, Monty Taylor) has been introducing two interconnected concepts: big tent and levels.

  • Big tent means expanding the number of projects to accommodate more diversity (both in breadth and depth) in the official OpenStack universe.  This change accommodates the growth of the community.
  • Levels is a structured approach to limiting integration dependencies between projects.  Some OpenStack components are highly interdependent and foundational (Keystone, Nova, Glance, Cinder) while others are primarily consumers (Horizon, Sahara) of lower-level projects.

These two concepts are connected because we must address integration challenges that make it increasingly difficult to make changes within the code base.  If we substantially expand the code base with big tent then we need to make matching changes to streamline integration efforts.  The levels proposal reflects a narrower scope at the base than we currently use.

By combining big tent and levels, we are simultaneously growing and shrinking: we grow the community and scope while we shrink the integration points.  This balance may be essential to accommodate OpenStack’s growth.

Universally, the business side of the OpenStack community wants OpenStack to be a product.  Yet what’s included in that product is unclear.

Expanding OpenStack projects tends to turn us into a suite of loosely connected functions rather than a single integrated platform with an ecosystem.  Either approach is viable, but it’s not possible to be both simultaneously.

On a cautionary note, there’s an anti-Big Tent position I heard expressed at the Paris Conference.  It goes like this: until vendors start generating revenue from the foundation components to pay for developer salaries, expanding the scope of OpenStack is uninteresting.

Recent DefCore changes also reflect the Big Tent thinking by adding component and platform levels.  This change was an important and critical compromise to match real-world use patterns by companies like SwiftStack (Object), DreamHost (Compute+Ceph), Piston (Compute+Ceph) and others; however, it creates the need to explain “which OpenStack” these companies are using.

I believe we have addressed interoperability in this change.  It remains to be seen if OpenStack vendors will choose to offer the broader platform or limit themselves to individual components.  If vendors chase the components over the platform then OpenStack becomes a suite of loosely connected products.  It’s ultimately a customer and market decision.

It’s not too late to influence these discussions!  I’m very interested in hearing from people in the community which direction they think the project should go.


by Rob H at November 21, 2014 03:09 PM

IBM OpenStack Team

Code and tapas: Best when enjoyed with others

I’m off to Spain to hang out with an innovative group of developers at the Codemotion conference at the Universidad San Pablo CEU in Madrid. On Saturday, I’ll be delivering the keynote, but throughout the conference we’ll explore our craft: software development.

Here’s how Codemotion describes its conference:

“Computer Science is a motor of the economy: we control airports and hospitals, keep transactions and invoices flowing, and store passwords, documents and kitten pictures. Hell, we even did what we could for the banks.

“We live in a sector that reinvents itself every six months, where it would be easy to just relax and stop chasing after the next new and shiny thing. Codemotion is this day in which we sit down to talk about all that we have in common, stepping aside everything that makes us different. … [so]…

“Codemotion gets all the IT communities in Spain together for two days. We take an entire university building to present the best of each technology and get you far away from your comfort zone.”

This is going to be fun—and not just because my pre-Thanksgiving dinner is going to be authentic tapas (there are some patatas bravas and jamón iberico with my name on it, I’m sure). It’ll be fun because we get to talk about how developers are making a difference in the lives of everyday citizens. And I get to hang out with developers committed to getting better at doing what they do best.

But one line in the Codemotion call-to-action stands out above the rest to me: “We live in a sector that reinvents itself every six months, where it would be easy to just relax and stop chasing after the next new and shiny thing.”

It couldn’t be more true. We are seeing an unprecedented rate of change in our industry. New languages. New technologies and devices. New skills. New use cases. New data management problems that require even newer solutions. Plus, everything needs to be secured, tracked, stored, accessed and, of course, analyzed at a moment’s notice.

So how do we keep up with this constant reinvention? If you know me, my answer won’t surprise you: Open technology.

The power of the community

At the heart of a robust open technology effort is a vibrant community of developers looking to constantly improve the code we are working on. Coming from different countries and industries, and bringing our unique skills and use cases together, we innovate as a team. The skills here go beyond languages and coding methods. Project management and collaboration are key. Basic listening skills are as critical as software engineering training. Together, we do more than any one team or company could do on its own.

The power of the code

The outcome of a well-managed, highly engaged and talented open technology development effort is quality code. Code talks! When it is written for the leading use cases, tested, debugged and iterated, implementers know quality when they can use it, and when they see many other vendors and partners doing the same at increasing rates. This is how we begin to achieve more vendor implementation choices and begin to ensure cross-implementation interoperability.

Tapas is always better when you share it with friends. Coding and open technology are the same. Like tapas meticulously crafted and stacked from appetizing ingredients, the group quickly identifies innovative new ideas. We share our standby recipes. We make improvements. Innovation demands we try new things, but it doesn’t demand we ignore the favorites—those standard recipes that work every time.

This is open technology. There is no one perfect recipe for success. There are guidelines. Open governance. Technical innovation. Iteration. The OpenStack community stands out among the crowd. This open source software community is building the most innovative technology for creating private and public clouds today. The success of the growing community is helping individual community members succeed as well. Just ask an OpenStack developer about their job prospects. Ask an implementer if they are better prepared to handle new cloud innovations and integrations because they employ OpenStack as the foundation of their cloud infrastructure.

For today’s developers, open source is a way of life. Like tapas, there is no easy answer. You have to get in there, roll up your sleeves and create something fresh. At the end of the day, a developer knows he or she was successful because something works better. Like when you have a great tapa, success tastes great.

Os veré en Madrid (see you in Madrid).

by Angel Luis Diaz at November 21, 2014 01:00 PM

Julien Danjou

Distributed group management and locking in Python with tooz

With OpenStack embracing the Tooz library more and more over the past year, I think it's a good time to write a bit about it.

A bit of history

A little more than a year ago, with my colleague Yassine Lamgarchal and others at eNovance, we investigated how to solve a problem often encountered inside OpenStack: synchronization of multiple distributed workers. And while many people in our ecosystem continue to drive development by adding new bells and whistles, we made a point of solving new problems with a generic solution able to address the technical debt at the same time.

Yassine wrote the first ideas of what should be the group membership service that was needed for OpenStack, identifying several projects that could make use of this. I've presented this concept during the OpenStack Summit in Hong-Kong during an Oslo session. It turned out that the idea was well-received, and the week following the summit we started the tooz project on StackForge.

Goals

Tooz is a Python library that provides a coordination API. Its primary goal is to handle groups and membership of these groups in distributed systems.

Tooz also provides another useful feature which is distributed locking. This allows distributed nodes to acquire and release locks in order to synchronize themselves (for example to access a shared resource).

The architecture

If you are familiar with distributed systems, you might be thinking that there are a lot of solutions already available to solve these issues: ZooKeeper, the Raft consensus algorithm or even Redis for example.

You'll be thrilled to learn that Tooz is not the result of the NIH syndrome, but is an abstraction layer on top of all these solutions. It uses drivers to provide the real functionality behind the scenes, and does not try to do anything fancy.

Not all drivers have the same amount of functionality or robustness, but depending on your environment, any available driver might suffice. Like most of OpenStack, we let the deployers/operators/developers choose whichever backend they want to use, informing them of the potential trade-offs they will make.

So far, Tooz provides drivers based on:

  • Kazoo (ZooKeeper)
  • Zake
  • memcached
  • redis
  • SysV IPC (only for distributed locks for now)
  • PostgreSQL (only for distributed locks for now)
  • MySQL (only for distributed locks for now)

All drivers are distributed across processes. Some can be distributed across the network (ZooKeeper, memcached, redis…) and some are only available on the same host (IPC).

Also note that the Tooz API is completely asynchronous, allowing it to be more efficient, and potentially included in an event loop.

Features

Group membership

Tooz provides an API to manage group membership. The basic operations provided are: the creation of a group, the ability to join it, leave it and list its members. It's also possible to be notified as soon as a member joins or leaves a group.
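
As a rough sketch of how this part of the API looks in practice (a minimal example, assuming the zake driver, an in-memory fake ZooKeeper that is handy for experimenting; the group and member names are made up):

from tooz import coordination

# Connect with a backend URL and a unique member id. 'zake://' is an
# in-memory fake-ZooKeeper driver, useful for trying the API out.
coordinator = coordination.get_coordinator('zake://', b'worker-1')
coordinator.start()

# Calls return asynchronous results; .get() blocks until completion.
coordinator.create_group(b'my_group').get()
coordinator.join_group(b'my_group').get()

print(coordinator.get_members(b'my_group').get())

coordinator.leave_group(b'my_group').get()
coordinator.stop()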

Leader election

Each group can have a leader elected. Each member can decide if it wants to run for the election. If the leader disappears, another one is elected from the list of current candidates. It's possible to be notified of the election result and to retrieve the leader of a group at any moment.
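
A rough sketch of how a worker might put itself forward and react to winning (same assumptions as above; the exact candidacy semantics depend on the chosen backend, so treat this as illustrative only):

import time

from tooz import coordination

coordinator = coordination.get_coordinator('zake://', b'worker-1')
coordinator.start()
coordinator.join_group(b'my_group').get()  # assumes the group already exists

def on_elected(event):
    # Invoked via run_watchers() when this member becomes the leader.
    print('%s now leads %s' % (event.member_id, event.group_id))

# Registering the watcher is what puts this member up for election.
coordinator.watch_elected_as_leader(b'my_group', on_elected)

while True:
    coordinator.run_watchers()
    time.sleep(1)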

Distributed locking

When trying to synchronize several workers in a distributed environment, you may need a way to lock access to some resources. That's what a distributed lock can help you with.
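
A minimal sketch of the locking API under the same assumptions (do_exclusive_work is a placeholder for whatever needs to be serialized):

from tooz import coordination

coordinator = coordination.get_coordinator('zake://', b'worker-2')
coordinator.start()

# Locks are named: every member asking for the same name contends for
# the same lock, whatever backend sits behind it.
lock = coordinator.get_lock(b'shared-resource')
with lock:
    # Only one worker at a time gets past this point.
    do_exclusive_work()

coordinator.stop()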

Adoption in OpenStack

Ceilometer is the first project in OpenStack to use Tooz. It has replaced part of the old alarm distribution system, where RPC was used to detect active alarm evaluator workers. The group membership feature of Tooz was leveraged by Ceilometer to coordinate between alarm evaluator workers.

Another new feature, part of the Juno release of Ceilometer, is the distribution of the central agent's polling tasks among multiple workers. There's again a group membership problem in knowing which nodes are online and available to receive polling tasks, so Tooz is also being used here.

The Oslo team has accepted the adoption of Tooz during this release cycle. That means that it will be maintained by more developers, and will be part of the OpenStack release process.

This opens the door to pushing Tooz further into OpenStack. Our next candidate would be to write a service group driver for Nova.

The complete documentation for Tooz is available online and has examples for the various features described here, go read it if you're curious and adventurous!

by Julien Danjou at November 21, 2014 12:10 PM

Opensource.com

Lessons from the OpenStack user survey

Every six months, the OpenStack Foundation reports on the results from its user survey. The results for the most recent iteration were released earlier this month on the OpenStack Superuser blog. Let's take a look and see what's new.

by Jason Baker at November 21, 2014 10:00 AM

Craige McWhirter

Deleting Root Volumes Attached to Non-Existent Instances

Let's say you've got an OpenStack build you're getting ready to go live with. Assume also that you're performing some, ahem, robustness testing to see what breaks and prevent as many surprises as possible prior to going into production. OpenStack controller servers are being rebooted all over the shop and during this background chaos, punters are still trying to launch instances with varying degrees of success.

Once everything has settled down, you may find that some lucky punters have deleted the unsuccessful instances but the volumes have been left behind. This isn't initially obvious from the cinder CLI without cross checking with nova:

$ cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 3e56985c-541c-4bdd-b437-16b3d96e9932 | in-use |              |  3   |    block    |   true   | 6e06aa0f-efa7-4730-86df-b32b47e53316 |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
$ nova show 6e06aa0f-efa7-4730-86df-b32b47e53316
ERROR (CommandError): No server with a name or ID of '6e06aa0f-efa7-4730-86df-b32b47e53316' exists.

It will manifest itself in Horizon like this:

Attached to None

Now trying to delete this volume is going to fail:

$ cinder delete 52aa706d-f17d-4599-948c-87ae46d945b2
Delete for volume 52aa706d-f17d-4599-948c-87ae46d945b2 failed: Invalid volume:
Volume status must be available or error, but current status is: creating (HTTP 400)
(Request-ID: req-f45671de-ed43-401c-b818-68e2a9e7d6cb)
ERROR: Unable to delete any of the specified volumes.

As will an attempt to detach it from the non-existent instance:

$ nova volume-detach 6e06aa0f-efa7-4730-86df-b32b47e53316 093f32f6-66ea-451b-bba6-7ea8604e02c6
ERROR (CommandError): No server with a name or ID of '6e06aa0f-efa7-4730-86df-b32b47e53316' exists.

and no, force-delete does not work either.

Here's my approach for resolving this problem:

SSH onto your MariaDB server for OpenStack and open MariaDB to the cinder database:

$ mysql cinder

Unset the attachment in the volumes table by repeating the below command for each volume that requires detaching from a non-existent instance:

MariaDB [cinder]> UPDATE volumes SET attach_status='detached', instance_uuid=NULL, \
attach_time=NULL, status="available" WHERE id='3e56985c-541c-4bdd-b437-16b3d96e9932';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Back on your OpenStack client workstations you should now be able to delete the offending volumes:

$ cinder delete 3e56985c-541c-4bdd-b437-16b3d96e9932

Happy housekeeping :-)

by Craige McWhirter at November 21, 2014 01:35 AM

November 20, 2014

Kenneth Hui

Some New OpenStack Content

One of the things I enjoy most about being part of the OpenStack community is the amount of content available (sometimes too much content) about the technology and the ecosystem.  I’ve had the privilege to contribute some of that content myself, particularly in the past month.  So, I thought it might be helpful to at least a few people if I aggregated the content in a single blog post.

OpenStack Summit

I had a busy first two days in Paris where I moderated two panels, participated as a panelist in five others, and had the opportunity to give a mini-talk.  The panels and mini-talk covered a wide range of topics from technology to community to career advice.  The OpenStack Foundation has helpfully made all the Summit panels and talks available on their YouTube channel.  Below are the panels and talk I participated in, organized in alphabetical order.

Ask The Experts: OpenStack As A Service Or A Distribution

Bridging The Gap: OpenStack For VMware Admins

 Building A Cloud Career In OpenStack

 Meet The OpenStack Ambassadors

OpenStack Design Guide Panel

The Role Of Software Defined Storage In OpenStack Cinder

OpenStack Podcast

I had a blast this week being interviewed by Niki Acosta and Jeff Dickey (well sort of, since Jeff decided to dodge me by going on vacation in Mexico).  I recorded the podcast/hangout from my home office which has bright red walls (not my choice) so it looks like I am in some kind of 80′s new wave dance hall.  That aside, it was a great opportunity to talk all things OpenStack.

OpenStack CrowdChat

Last but not least, I had the privilege of moderating an EMC-sponsored CrowdChat on the topic of “OpenStack Operations.”  The CrowdChat was headlined by a star-studded panel of OpenStack experts, including Tim Bell from CERN, Scott Carlson from PayPal, Craig Tracey from Blue Box Group, and Sean Winn from EMC/Cloudscaling.  There were great discussions on a plethora of topics and I encourage everyone to read the transcript here, whether you were able to join us live or not.  We will be sponsoring more OpenStack CrowdChats in the future.

That’s all for now.  Hope folks find the content interesting and useful.


Filed under: Cloud, Cloud Computing, IT Operations, OpenStack, Private Cloud, Public Cloud Tagged: Amazon Web Services, AWS, Cloud, Cloud computing, IaaS, OpenStack, Private Cloud, Public Cloud

by kenhui at November 20, 2014 04:00 PM

The Official Rackspace Blog » OpenStack

Build Rich Network Services With OpenStack’s Neutron API

Providing users with programmatic control of their infrastructure has always been one of the primary value propositions of the Rackspace Cloud. The ability to deploy and manage a wide array of cloud resources with a few lines of code has brought new levels of automated efficiency to the IT industry and revolutionized how we think about managing our applications and workloads.

Today, we make it easier for Rackspace Cloud users to build rich network services with the availability of the OpenStack Neutron Networking API. This milestone greatly increases your ability to create and manage networking services and capabilities and makes it possible to add a number of immediate and future improvements. The API will be in Limited Availability for our Cloud Networks users only. If you are already a Rackspace Cloud Networks user, you may immediately start taking advantage of this API.

Integrating OpenStack’s Neutron API into the Rackspace Cloud introduces three new top-level resources: /networks, /ports and /subnets – all of which are available today.
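
To give a feel for what these resources look like on the wire, here is a rough sketch using the generic Neutron v2.0 API with python-requests; the endpoint and token are placeholders, and the real values come from your service catalog:

import requests

# Placeholders: take the real endpoint and token from your identity service.
ENDPOINT = 'https://neutron.example.com'
HEADERS = {'X-Auth-Token': 'YOUR_TOKEN', 'Content-Type': 'application/json'}

# Create an isolated network.
net = requests.post(ENDPOINT + '/v2.0/networks', headers=HEADERS,
                    json={'network': {'name': 'my-isolated-net'}}).json()
net_id = net['network']['id']

# Attach a subnet, with an allocation pool controlling dynamic IP assignment.
subnet = {'subnet': {'network_id': net_id,
                     'ip_version': 4,
                     'cidr': '192.168.10.0/24',
                     'allocation_pools': [{'start': '192.168.10.10',
                                           'end': '192.168.10.100'}]}}
requests.post(ENDPOINT + '/v2.0/subnets', headers=HEADERS, json=subnet)

# List the ports on the new network.
print(requests.get(ENDPOINT + '/v2.0/ports', headers=HEADERS,
                   params={'network_id': net_id}).json())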

The Neutron API also introduces the following new features:

  • Create and manage Cloud Networks via Neutron API
  • Assign routes to Cloud Servers at boot-time (Host Routes)
  • Configure allocation pools for subnets (CIDRs) on Cloud Networks to control the dynamic IP address assignments on your Cloud Servers
  • Provision an IP address of your choice on isolated network ports
  • Dual stack your isolated networks so that you can have IPv4 and IPv6 addresses on the same port

The integration of the OpenStack Networking API provides an alternative to the /os-networksv2 Cloud Servers extension that was the only option when provisioning networks in the public cloud.

While the existing networking API based on the /os-networksv2 extension will continue to function, we encourage you to start using the new API in order to take advantage of the new features and services that are only available within Neutron.

If you are a heavy consumer of Rackspace’s networking API, here are a few points to take note of:

  • You will still need to use the Cloud Networks virtual interface extension (/os-virtual-interfacesv2) to attach and detach networks at this time
  • The Neutron client will not be available to use along with the new API at this time, although we plan to make this available very soon.
  • The API is not currently available to RackConnect v3 users
  • The Neutron API and all of the related improvements are only available via API at this time. We expect to start integrating some new functions into the Cloud Control Panel soon.

The documentation for the new API can be found here and the getting started guide can be found here.

As always, we greatly value your feedback and look forward to more announcements about new features for cloud users.

by Sameer Satyam at November 20, 2014 04:00 PM

Red Hat Stack

Co-Existence of Containers and Virtualization Technologies

By, Federico Simoncelli, Principal Software Engineer, Red Hat

As a software engineer working on the Red Hat Enterprise Virtualization (RHEV), my team and I are driven by innovation; we are always looking for cutting edge technologies to integrate into our product.

Lately there has been a growing interest in Linux container solutions such as Docker. Docker provides an open and standardized platform for developers and sysadmins to build, ship, and run distributed applications. The application images can be safely held in your organization's registry, or they can be shared publicly in the Docker Hub portal (http://registry.hub.docker.com) for everyone to use and to contribute to.

Linux containers are a well-known technology that runs isolated Linux systems on the same host, sharing the same kernel and resources such as CPU time and memory. Containers are more lightweight, perform better and allow a higher density of instances compared to full virtualization, where virtual machines run dedicated full kernels and operating systems on top of virtualized hardware. On the other hand, virtual machines are still the preferred solution when it comes to running highly isolated workloads or different operating systems than the host.

As the Docker vendor ecosystem has grown richer, Red Hat announced Red Hat Enterprise Linux Atomic: a lightweight operating system based on Red Hat Enterprise Linux and designed to run applications in Docker containers. Other vendors are focused on providing Docker orchestration tools across different hosts. One example is Kubernetes, an open source Docker manager, recently released by Google.

So how does Red Hat Enterprise Virtualization work with Docker today? oVirt (the upstream project for Red Hat Enterprise Virtualization) supports running Docker containers inside virtual machines and simplifies the process by enabling the Project Atomic image to be imported into your datacenter from the public Glance repository (glance.ovirt.org). Additionally, we are working on providing a platform for the orchestration solutions to integrate with RHEV. Kubernetes, in fact, already includes an oVirt Cloud Provider that can be connected to your data centers to discover the virtual machines dedicated to run containers (minions in the Kubernetes jargon).

Red Hat Enterprise Virtualization therefore is capable of providing you with an optimized stack to run containers starting from the operating system on the bare-metal up to the one inside the virtual machines and in the images, preserving at the same time the freedom and possibilities of the Docker hub. It is possible to imagine a future addition to the Kubernetes oVirt Cloud Provider to register regular RHEV hosts as minions as well, giving you the option to run containers on bare-metal with a minimum effort.

Today, deploying a private or hybrid cloud that runs virtual machines and containers, as just described, is cost prohibitive. We know that you are very much interested in maximizing the efficiency and optimizing your data centers by deploying the right tools for the right workloads. To help you in this quest, we are working on enabling Red Hat Enterprise Linux (RHEL) Atomic hosts to dynamically run different types of workloads. For example RHEV virtual machines, Docker containers and Hadoop jobs. In fact under the orchestration of Mesos (a powerful scheduling framework) it is possible to maximize and balance the hosts computational power for the most important and demanding tasks at any given time.

Integrating emerging technologies such as Kubernetes, Docker and Mesos enables us to help you to meet your requirements and run efficient and reliable datacenters. Stay tuned for more blog posts that will highlight the integration of these new technologies features into Red Hat Enterprise Virtualization.

by fsimonce at November 20, 2014 02:00 PM

Opensource.com

Do I need OpenStack if I use Docker?

Docker has broken records for the speed with which it moved from being a disruptive technology to a commodity. The speed of adoption and popularity of Docker brings with it lots of confusion.

In this post I wanted to focus on a trend of commentary that has been gaining popularity, which I’ve started to hear more often from users who have just started using Docker: whether it makes sense to use OpenStack if they’ve already chosen to use Docker.

by Nati Shalom at November 20, 2014 10:00 AM

November 19, 2014

Aptira

Product Management lies dreaming ...

This was originally posted to the product working group mailing list, which consists of people in influencing positions within their employer's OpenStack organisations. The group originally formed around the idea of Hidden Influencers, which made me think of us lying dormant somewhere, like Cthulhu lies dreaming at R'lyeh. Although with fewer tentacles. Anyway, on with the post, which I've modified slightly to add appropriate context:
 
There's been a bit of traffic on the operators list about a proposal for an Operations project that would aim to curate a collection of assets to support the operator community. To generate a set of "Frequently Required Solutions", if you will.
 
Whilst there is some disagreement on the thread about how to proceed, it is clear there's definitely a need amongst operators for quite a bit of supporting code. As a group of product managers it behooves us to consider some of the larger questions like this and drive/enable/influence the community toward answering the questions that the community considers a priority.
 
In just saying that, some more big questions are raised:
  • What are the goals of this group?
  • What strategy are we going to employ to achieve those goals?
  • How do we gain acceptance from the wider community for this effort?
  • How do we find an effective mechanism for product management to occur?
and no doubt there are more.
 
Answering these questions is not as "easy" as starting a project, but it's necessary if this group wants to contribute in any collective or concerted fashion.
 
If we're going to have a mid-cycle meetup to bootstrap a product management group or forum, let's set some targets for what we want to achieve there, and let's set them well in advance. For me, the priority is establishing the basic existential motivations for this group:
  • Why are we here? (Purpose, Goals)
  • What are we going to do about it? (Strategy)
Anything more than that is a bonus, and if we can agree these prior to the meetup, so much the better. However without these basics, any work in the detail doesn't validate the existence of this group as anything more than just a mailing list.
 
For those who aren't on, or aren't aware of, the product group mailing list: if you're in a position of influence or control over your employer's OpenStack direction, think about joining up. We're going to be looking at some pretty fundamental questions about how OpenStack will move forward as it progresses further along the hype cycle.

by Roland Chan (roland@aptira.com) at November 19, 2014 10:41 PM

Marton Kiss

OpenStack Budapest User group - Summit overview

Yesterday we had the first Hungary user group meeting since the Paris Summit, and of course we could not miss a summit review on the agenda. The start of the event was a bit exciting, because the projector’s cable failed to work and we only noticed this little technical failure at the last minute. The Bonnie Restro staff did a wonderful job, and they somehow managed to get a replacement device from the closest pub.

The first topic was an introduction to a very interesting open-source project: two guys from Balabit, Peter Voros and Daniel Csubak, made a Zorp GPL based load balancer service for Neutron as part of the last Google Summer of Code. Zorp GPL is an application-level firewall, and this LBaaS implementation is a nice entry-level introduction to the technology’s feature set. For this project the actual publication of the source code and packages was the “easy” part; the hard work starts now if they want to contribute those results back into upstream OpenStack code. Basically the source code consists of around 500 lines of code; most of the time was spent learning OpenStack, Neutron and DevStack. As they told us, during the project they ran into several issues caused mostly by the differences between simple virtualization and cloud terminology. My personal opinion is that we could highlight those differences for newcomers and somehow reflect them in the documentation (for example, you cannot SSH into an instance if you don’t have a proper security group configuration, and you must not forget to assign elastic IPs for public access). I think this project is a nice example of a good start in the OpenStack ecosystem, and I guess those guys have endless possibilities in the open-source world of cloud if they continue the hard work and continuous learning.

If you want to learn more about this project, I suggest reading this openSUSE blog post: https://news.opensuse.org/2014/09/01/things-i-learnt-with-the-zorp-and-opensuse-team/comment-page-1/#comment-105958

The second session was a Summit overview. Gergely Szalay from Fremont, Gabriel Borkowski from Avaya and Ildiko Tuzko-Keger from HP came and shared their stories with the user group. I can definitely say the Paris Summit was one of my favourites. I have a large amount of data to make this comparison, because it was my 7th Summit and I have not missed one since Boston. I’m very happy that Hungarians joined in relatively large numbers and could participate in this large-scale event, coming from Avaya, IBM Zürich Research lab, Ericsson, Percona, HP and Fremont. Most of them did not simply visit the conference, but were active participants in the Design Summit as ATCs. Paris was the first Summit in Europe, our home continent, which would usually mean a two-hour flight from Budapest, but Gergely, Gabriel and I made a policy that a trip to an OpenStack Summit cannot take less than 10 hours, so instead of flying we jumped in a car on Saturday morning, crossed half of Europe through Austria and Germany, and drove directly to Paris. The Budapest-Paris distance is 1500 km, and we covered it in 15 hours, including the mandatory stops for refueling and eating. (And I want to note here: don’t try to drive in Paris if you were not born there!)

Arriving on Saturday evening, we missed the first part of the upstream training, but joined in on Sunday. The goal of this training was to teach newcomers the OpenStack contribution process and give them a little feel for the day-to-day work we do on OpenStack projects. Our favorite part of the training was the Lego-town building collaboration. We formed teams and started to work on specified tasks over several sprints. It was amazing to see how the early chaos turned into real collaboration between people and teams, and I want to remark here that most of them had never met each other in person. As a real simulation of an OpenStack project, we needed to take care that the buildings (projects) we were working on fit together (API definitions): it is not enough for people to agree inside their teams, we also need team-level collaboration to build a whole town from little fragments of different projects.

The keynotes were fantastic as usual, but Hungary was a little over-represented due to a political initiative to introduce an additional traffic-based tax on internet access, on top of the VAT we already pay on our internet subscriptions. I hope next time our country will appear in a different spotlight, with some nicely finished OpenStack projects, maybe in the government sector. The big deal here was the BMW story. To tell the truth, selling OpenStack in the EU with names like Wells Fargo or Comcast is not easy, because those brands are not so well known here. Linking OpenStack to a premium-category car vendor is a real win-win situation. Most business leaders and decision makers know the brand, some even drive those cars, and BMW's decision broadcasts a strong signal to existing OpenStack users and paves the way for new users of the technology.

Ildiko was a first-timer in Paris and was working hard at the HP booth, which was so constantly filled with people that it was easier to walk around the crowd than to cross through it. She told us that it was a huge surprise for her to see the diversity of the entire community, from geeky faces to well-suited vendor VPs, and their openness to talk with each other. I think this is a very basic core value of the OpenStack ecosystem and represents a level of collaboration the IT industry has never seen before.

In addition to the daily presentations and sessions, the most remarkable part of the Summit was the social activity in the form of parties organized by different vendors; some of them were publicly announced, and a few were closely guarded secrets. We all agreed that the biggest boom was the Machine-Machine party by HP: they rented Les Pavillons de Bercy, and it is hard to describe the whole event if you were not there. There was an open-air grill, raw meat (tartare), good quality wines, French cheese, entertainment and music. And of course a lot of interesting people from the OpenStack community to talk with in person.

I'm sure we managed to share some of our passion about the Summit, and that it helps trigger the involvement of user group members. Of course I must not forget to mention that contribution and community work can help you reach the Summits through the Travel Support Program. For the Paris Summit, the Foundation's support helped 20 people from every part of the world attend the event.

November 19, 2014 03:22 PM

Red Hat Stack

Empowering OpenStack Cloud Storage: OpenStack Juno Release Storage Overview

OpenStack's tenth release, Juno, added ten new storage backends and improved testing of third-party storage systems. The Cinder block storage project continues to mature each cycle, exposing more and more enterprise cloud storage infrastructure functionality.

Here is a quick overview of some of these key features.

Simplifying OpenStack Disaster Recovery with Volume Replication

The Icehouse release introduced a new Cinder Backup API that allows exporting and importing backup service metadata, enabling "electronic tape shipping" style backup-export and backup-import capabilities for recovering OpenStack cloud deployments. The next step for Disaster Recovery enablement in OpenStack is the foundation of volume replication support at the block level.
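As a rough sketch of that "tape shipping" workflow (exact arguments depend on the cinderclient version in use):

# on the source cloud: export the backup's metadata record
cinder backup-export <backup-id>
# on the target cloud: import the record so the backup can be restored there
cinder backup-import <backup-service> <backup-url>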

Starting with the OpenStack Juno release, Cinder now has initial support for Volume Replication, which makes Cinder aware of replicas and allows the cloud admin to define storage policies that enable replication.
With this new feature, Cinder storage backend drivers can expose different replication capabilities via volume-type convention policies that enable various replication operations, such as failover and failback, as well as reversing the replication direction.
Using the new API, a volume created with a replication extra-spec will be allocated on a backend that supports replication.
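A minimal sketch of that volume-type convention (the extra-spec key below is illustrative; the exact key and value are driver-specific):

# create a volume type that requests replication-capable backends
cinder type-create replicated
cinder type-key replicated set capabilities:replication="<is> True"
# volumes of this type are scheduled onto backends reporting replication support
cinder create --volume-type replicated --display-name db-vol 10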

Data Protection Enablement

Consistency Groups support was added to group volumes together for the purpose of application data protection (with a focus on snapshots of consistency groups for disaster recovery). The grouping of volumes is based on the volume type; there is still future work to make this functionality work together with Cinder backups and volume replication.
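For illustration, a consistency group workflow looks roughly like this (assuming a cinderclient recent enough to include the consistency group commands):

# group volumes of a given volume type, then snapshot them as a single unit
cinder consisgroup-create --volume-type <volume-type> my-app-cg
cinder cgsnapshot-create <consistency-group-id>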

Another important aspect is maintaining consistency at the application and filesystem level, similar to the AWS ec2-consistent-snapshot feature that produces consistent data in the snapshot by flushing and freezing the filesystem, and flushing and locking the database if applicable. It is possible to achieve similar functionality in OpenStack with the QEMU guest agent during image snapshotting of KVM instances, where the nova-compute libvirt driver can request the QEMU guest agent to freeze the filesystems (and applications, if an fsfreeze-hook is installed) for the duration of the snapshot. The QEMU guest agent support is currently planned for the next release and will help automate daily or weekly backups of instances with consistency.
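To give the libvirt driver a guest-agent channel to use for the freeze, the image is typically tagged with the hw_qemu_guest_agent property (a sketch; the guest must also have the agent installed):

# mark the image so instances get a qemu-guest-agent channel
glance image-update --property hw_qemu_guest_agent=yes <image-id>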

Storage Management enhancements at the Dashboard level

The following Cinder API features were also added to the Horizon dashboard in the Juno cycle (a CLI sketch of a few of these operations follows the list):

  • Utilizing Swift to store volume backups, as well as restoring volumes from these backups.
  • Resetting the state of a snapshot.
  • Resetting the state of a volume.
  • Uploading a volume to an image.
  • Volume retype.
  • QoS (quality of service) support.
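A sketch of how several of these operations look from the cinderclient side (flags vary slightly between client versions):

cinder backup-create --display-name nightly <volume-id>   # back a volume up to Swift
cinder backup-restore <backup-id>                          # restore a volume from a backup
cinder reset-state --state available <volume-id>           # reset a volume's state
cinder upload-to-image <volume-id> my-image                # publish a volume as a Glance image
cinder retype <volume-id> <new-volume-type>                # change a volume's type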

Support for Volume Pools via a new Pool-aware Scheduler

Until the Juno release, Cinder saw each volume backend as a whole, even if the backend consisted of several smaller pools with totally different capabilities and capacities. This gap could cause issues where a backend appeared to have enough capacity to create a copy of a volume but in fact failed to do so. Extending Cinder to support storage pools within a volume backend has also improved the Cinder scheduler's decision making: it is now aware of storage pools within a backend and uses them as the finest granularity for resource placement.

Another Cinder scheduling aspect was addressed with the new Volume Number Weigher, which gives operators an alternative to weighing backends purely by free_capacity and allocated_capacity: the scheduler can instead choose a volume backend based on the number of volumes it already hosts, which helps balance volume I/O across backends.
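Enabling it is a single scheduler setting in cinder.conf (the weigher name below is my assumption of the Juno class name; check the release documentation for your deployment):

# /etc/cinder/cinder.conf
[DEFAULT]
scheduler_default_weighers = VolumeNumberWeigher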

Glance Generic Catalog

The Glance Image Service introduced artifacts during Juno as a broader definition of images. The scope expands the image repository into a generic catalog of various data assets. It is now possible to manage a catalog of metadata definitions, where users can register definitions to be used on various resource types including images, aggregates, and flavors; support for viewing and editing the assignment of these metadata tags is included in Horizon. Other key new features included asynchronous processing and image download improvements such as:

  • Restart of partial downloads (solves a problem where downloads of very large images could be interrupted prior to completion due to dropped connections)
  • A new download-image restriction policy, which lets operators restrict users from downloading an image based on policy.

Introducing Swift Storage Policies

This large feature was finally released in the Juno cycle of the OpenStack Object Storage project, giving users more control over cost and performance in terms of how they replicate and access data across different backends and geographical regions.

Storage Policies allow for some level of segmenting the cluster for various purposes through the creation of multiple object rings. Once configured, users can create a container with a particular policy (a configuration sketch follows the list below).

Storage Policies can be set for:

  • Different storage implementations: a different DiskFile (e.g. GlusterFS, Kinetic) for a group of nodes
  • Different levels of replication
  • Different performance profiles (e.g. SSD-only)
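As a rough configuration sketch (policy names and ring parameters are purely illustrative), a policy is declared in swift.conf, backed by its own object ring, and then selected per container:

# /etc/swift/swift.conf
[storage-policy:0]
name = standard
default = yes

[storage-policy:1]
name = ssd-only

# each additional policy needs its own object ring (object-1.ring.gz for policy 1)
swift-ring-builder object-1.builder create 10 3 1

# create a container whose objects are stored under the ssd-only policy
swift post -H "X-Storage-Policy: ssd-only" fast-container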

Other notable new Swift features include multi-ring awareness, with support for:

  • Object replication – added to be aware of the different locations on-disk that storage policies introduced
  • Large objects – refactoring work for storage policies
  • Object auditing – added to be aware of the different on-disk locations for objects in different storage policies
  • Improved partition placement – allows for better capacity adjustments, especially when adding a new region to existing clusters.

The swift-ring-builder has been updated as well to first prefer device weight, then use failure domains to break ties.

The progress on multiple rings and storage policies is the foundation for the Swift erasure coding development that is ongoing in the Kilo release cycle. Erasure coding is a storage policy with its own ring and a configurable set of parameters, designed to reduce the storage costs associated with massive amounts of data (both operating and capital costs) by providing an option that maintains the same, or better, level of durability while using much less disk space. It is especially suited to "warm" storage use cases, such as volume backups to a Swift object store, since backups are typically large compressed objects that are rarely read once they have been written.

To learn more about the new OpenStack storage features, see the OpenStack 2014.2 (Juno) release notes.

by Sean Cohen at November 19, 2014 02:00 PM

November 18, 2014

The Official Rackspace Blog » OpenStack

Forrester Names Rackspace A ‘Strong Performer’ In Hosted Private Cloud; And We’ve Made It Even Stronger

Forrester Research, Inc. has named Rackspace Private Cloud (RPC) a “strong performer” in The Forrester Wave™: Hosted Private Cloud Solutions, Q4 2014. We believe that this rating is great recognition of the momentum we’re seeing with our OpenStack-powered private cloud solution.

The report states that Rackspace has “successfully been able to deliver an intuitive portal, diverse control options, granular user permissions, numerous certifications, and large global DC footprint.” And because Fanatical Support is core to our business, we are especially pleased Forrester noted that RPC offers “strong support capabilities.”

What we find even more exciting, however, are some areas of improvement that RPC can capitalize on to offer a stronger private cloud solution: reporting and metering, automation, multi-data-center portability and strength of our SLAs. Why are we so excited about those? Because they are some of the exact enhancements included in the latest release of Rackspace Private Cloud.

Forrester’s study was conducted in July 2014 and the report is based on the previous version of Rackspace Private Cloud, which was powered by OpenStack Havana. In the time between when the study was conducted and when it was published, we released an enhanced version of Rackspace Private Cloud. This new version of RPC is built on OpenStack Icehouse and includes features that address many of the opportunities Forrester highlights as areas of improvement. Specifically, with the current version of RPC we address the SLA opportunity by delivering an industry-leading 99.99 percent OpenStack API uptime guarantee. We also address the automation opportunity by enabling DevOps Automation Services with RPC, supporting the OpenStack Orchestration (Heat) project and launching several RPC solution templates that empower customers to deploy production-ready application stacks in just minutes.

We believe that the opportunities laid out by Forrester help validate the recent enhancements we’ve made to Rackspace Private Cloud. We’ll continue to enhance our solution based on our customers’ feedback and provide you with all the power of the cloud without the pain of running it, so you can focus on your core business.

by Christian Foster at November 18, 2014 03:00 PM

OpenStack @ NetApp

NetApp in Paris - OpenStack Summit Recap

About two weeks ago, NetApp sent a large contingent of folks to the biannual OpenStack Summit in Paris – where developers, operators, and users converge – to talk about their experiences around OpenStack, learn more about what’s new in the ecosystem, and help design the next release of OpenStack!

Probably most notable was the amount of excitement around Manila. (In case you're not familiar, Manila allows users of OpenStack clouds to provision and securely manage shared file systems through a simple REST API.)

There was a standing-room-only general session on Manila, which gave an overview of Manila, its API structure and key concepts, an architectural overview of the service, and then updates on what was new in the Juno release and where our focus will be during the Kilo cycle. Here’s a link to the recording in case you weren’t able to join us - https://www.youtube.com/watch?v=PLYr5Xo3LJk

Manila Overview Diagram

Immediately following the session on Manila, NetApp hosted a "Use Case Showcase" session where we had brief presentations from customers & partners in various stages of deployment: a customer in production, one currently deploying OpenStack on NetApp, and a prototype integration scenario of Manila with SAP on SUSE Cloud. There were some great takeaways from this session - so check out the YouTube recording at https://www.youtube.com/watch?v=6XW5U2XFwEg.

As always, if you want to learn more about Manila, get started by checking our wiki page @ http://wiki.openstack.org/wiki/Manila or jump on IRC - we’re always hanging out in #openstack-manila and #openstack-netapp on Freenode! We’ve got weekly meetings at 1500 UTC in #openstack-meeting-alt on Freenode as well.

November 18, 2014 02:42 PM

Red Hat Stack

Simplifying and Accelerating the Deployment of OpenStack Network Infrastructure

The energy from the latest OpenStack Summit in Paris is still in the air. Its record attendance and vibrant interactions are a testament to the maturity and adoption of OpenStack across continents, verticals and use cases.

It’s especially exciting to see its applications growing outside of core datacenter use cases with Network Function Virtualization being top of mind for many customers present at the Summit.

If we look back at the last few years, a fundamental role in fueling OpenStack adoption has been played by the distributions, which have taken the OpenStack project and helped turn it into an easy-to-consume, supported, enterprise-grade product.

At PLUMgrid we have witnessed this transformation summit after summit, customer deployment after customer deployment. Working closely with our customers and our OpenStack partners, we can attest to how much easier, smoother and simpler an OpenStack deployment is today.

Similarly, PLUMgrid wants to simplify and accelerate the deployment of OpenStack network infrastructure, especially for those customers that are going into production today and building large-scale environments.

If you had the pleasure of being at the summit, you will have learnt about all the new features that were introduced in Juno for the OpenStack networking component (and if not, check out this blog, which provides a good summary of all of Juno's networking features).

Despite the tremendous progress, standing up the network infrastructure for an OpenStack environment can still be a daunting process of opening tickets, creating VLANs, opening firewall ports, tcpdump and lengthy troubleshooting that can slow down a deployment by many weeks.
Also, once the OpenStack cloud is in place, some network components might still fail to be as on-demand, rich, programmable and flexible as the other components of the stack.

Last, especially for NFV-type clouds, there is a strong need to leverage and combine Network Functions from a broad portfolio of partners, a task that can't always easily be achieved out of the box today.

PLUMgrid's comprehensive software suite, PLUMgrid ONS 2.0 for OpenStack, enables scalable and secure virtual network infrastructure for OpenStack clouds.
As a software-only solution fully integrated with Red Hat's Foreman installer, PLUMgrid enables the creation of all the necessary infrastructure in a matter of hours, completely independent of the underlying physical infrastructure.

Once the OpenStack deployment is in place, PLUMgrid’s comprehensive set of L2-4 distributed Virtual Network Functions as well as its broad ecosystem powered by its Service Insertion Architecture can be used to satisfy a wide array of use cases.

Its integration and certification with Red Hat OpenStack Platform 5.0 enable PLUMgrid to effectively speed up OpenStack deployments and enable their joint customers to create scalable, secure multi-tenant solutions.

To learn more about how PLUMgrid, deployed with Red Hat OpenStack Platform 5.0, can speed up NFV OpenStack deployments, join us for a live webinar on November 19th at 8AM PST.

by Valentina at November 18, 2014 02:00 PM

Rafael Knuth

Recording from Online Meetup: OpenStack Juno the Complete Lowdown & Tales from the Summit

Everything new in the Juno Release, Syed Armani - Hastexo! Tales from the Summit w/ Nati Shalom, ...

November 18, 2014 02:00 PM

Mirantis

It’s not a summit without a party: OpenStack Underground

One of the things we enjoy the most about the semi-annual OpenStack summit is the chance for people throughout the ecosystem to get together and really kick back together like the extended family that we are.  For those of you who couldn’t make it to OpenStack Underground (or were there but would like a refresher) here are some of the highlights.  You can see the entire photostream here.


The post It’s not a summit without a party: OpenStack Underground appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Admin at November 18, 2014 03:53 AM

November 17, 2014

The Official Rackspace Blog » OpenStack

OpenStack Summit Paris: Federated Clouds Becoming A Reality [Video]

At OpenStack Summit Paris, cloud federation – the ability to easily and seamlessly leverage a multi-cloud environment – was a buzzworthy topic. The story of cloud federation was told through a number of sessions: from the keynote presentation by Tim Bell of CERN to several smaller breakouts. Rackspace and CERN openlab are working together to federate OpenStack clouds. The project has already had a number of successes.

Video: http://www.youtube.com/embed/ijYeHxI4DIw

In the video above, Bell and Rackspace DevOps Practice CTO Chris Jackson talk about the work Rackspace and CERN openlab are doing and the importance of federated clouds.

For a deeper drilldown into cloud federation, tune in to this recorded Google Hangout featuring Bell and Jackson.

Check out more coverage from OpenStack Summit Paris.

 

by Andrew Hickey at November 17, 2014 08:00 PM

Rafael Knuth

Online Meetup: OpenStack Trove Update - Juno, Kilo and Beyond

Join us as Amrith Kumar from Tesora shares what is new in Trove in the Juno release, community plans for...

November 17, 2014 05:51 PM

OpenStack Blog

Wrapping up the Travel Support Program – Kilo

The OpenStack Foundation brought 20 people to Paris for the Summit earlier this month, thanks to the grants offered by the Travel Support Program. The Travel Support Program is based on the promise of Open Design, and its aim is to facilitate the participation of key contributors in the OpenStack Design Summit by covering their travel and accommodation costs.

Travel Support Program

We had 22 people accepted into the program from 11 different countries, spanning five continents. Four people traveled from Brazil, four from India, three from Europe, and the rest came from South America, North America and South-east Asia. Of the selected recipients, two were unable to attend due to visa timing issues, but we were excited to welcome the 20 attendees who were able to make the trip.

The Foundation spent $28,400 on flights and $24,000 on hotels, for a total cost to the Foundation of more than $54,000 USD including the four full-access passes granted to non-ATCs.

Stay tuned for the announcement of Travel Support Program applications for the May Summit in Vancouver. The Travel Support Program will also be a sponsorship opportunity for the upcoming Summit; details will be shared in the sponsorship prospectus that will be published soon.

by Allison Price at November 17, 2014 03:37 PM

Opensource.com

OpenStack's developers make plans for the next release

Interested in keeping track of what's happening in the open source cloud? Opensource.com is your source for what's happening right now in OpenStack, the open source cloud infrastructure project.

by Jason Baker at November 17, 2014 08:00 AM

Adam Young

Dynamic Policy in Keystone

Ever get that feeling that an epiphany is right around the corner? I spent a good portion of the OpenStack summit with that feeling. I knew that it would not be earth shattering, or lead me to want to rewrite Keystone, but rather a clarification of how a bunch of things should fall together. The “click” happened on the second to last day, and it can be summarized in a few key points.

When discussing the OAUTH1.0 extension to Keystone, several people commented on how it was similar to trusts, and that we should have a unified mechanism for delegation between them. During a discussion with David Chadwick, he mentioned that role assignments themselves are a form of delegation, and lamented that we were losing the chain of delegation by how we delegate roles. So the first point was this:

Keystone should have a single, unified mechanism for delegation.

One key feature that feeds into that is the ability to break a big role into smaller ones. I had posted a spec for hierarchical roles prior to the summit, but wasn't clear on how to implement it; I could see how it could be implemented on the token side, but all the people I talked to insisted it made more sense on the enforcement side. That is the second big point.

Role inheritance should be expanded by policy enforcement.

Policy is almost all static. Each OpenStack project has its own policy file in its own git repo. Extending it to cover user requests for things like project-specific policy or more granular roles has not been possible.
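For context, a typical static rule in one of those per-project policy.json files looks roughly like this (an illustrative sketch, not a verbatim copy of any project's file):

{
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "compute:get": "rule:admin_or_owner"
}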

UPDATE: I’ve been asked to make clearer what problems this addresses.

  1. Determine what roles a user can assign to another user
  2. Allow a user to determine what roles they need to perform some action
  3. Allow some user interface to determine what a user is capable of doing based on their roles
  4. Establish an iterative process to solve the long-standing bug that a user with admin on any scope has admin on all scopes.
  5. Allow a user to delegate a subset of their capabilities to a remote service.

What we have now is a simple set of specs that build on each other and will, in the end, provide a much more powerful, flexible, and consistent delegation mechanism for Keystone. Here are the general steps:

  1. Graduate oslo policy to a library
  2. Add to the policy library the essential code to enforce policy based on a keystone token.  I’ve looked at both the Keystone and Nova pieces that do this, and they are similar enough that we should not have too much trouble making this happen.
  3. Add in the ability to fetch the policy.json file from Keystone.
  4. Add a rule to the Keystone policy API to return the default policy file if no policy file is specified for an endpoint.
  5. Merge the current default policy files from all of the projects into a single policy file, with namespaces that keep the rules from conflicting across services.  Reduce the duplication of rules like “admin_or_owner”  so that we have a consistent catalog of capabilities across OpenStack.  Make this merged file the default that is served out of Keystone when an endpoint asks for a policy file and Keystone does not have an endpoint specific file to give it.
  6. Make a database schema to hold the rules from the policy file.  Use this to generate the policy files served by Keystone.  There should be no functional difference between the global file and the one produced in the above merge.
  7. Use the hierarchical role definitions to generate the rules for the file above.  For example, rules that essentially say "grant access to a user with any role on this project" will now say "grant access to any user with the member role, or with any role that inherits the member role."  The member role will be the lowest form of access; admin will inherit member, as will all other defined roles. (A sketch of such rules follows this list.)
  8. Break member up into smaller roles.  For example, we could distinguish between actions that can only read state and those that can change it: "observer" and "editor".  Member would inherit editor, and editor would inherit observer.
  9. Change the rules for specific API policy enforcement points to know about the new roles.  For example, the API to create a new image in glance might now require the editor role instead of the member role.  But, since member inherits editor, all current users will be able to perform the same set of operations.
  10. Change the role assignment mechanism so that a user can only assign a role that they themselves have on the designated scope.  In order to assign Member, the user must have the member role, or a role that inherits Member, such as admin.  Role assignment, trusts, oauth, and any other mechanism out there will follow this limitation.  We will have to work out additional limitations, such as determining what happens to a delegated role when the person who did the delegation has that role removed; perhaps one will need a specific role in order to perform "sticky" role assignments that last past your employment, or perhaps we will allow a user to pass some/all of their delegations on to another user.
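A hypothetical sketch of what the expanded, inheritance-aware rules from steps 7-9 might look like (the names and structure are my illustration, not the actual spec):

{
    "observer_access": "role:observer or rule:editor_access",
    "editor_access": "role:editor or rule:member_access",
    "member_access": "role:member or role:admin",
    "image:create": "rule:editor_access"
}

With rules chained this way, an API that requires the editor level keeps working for every existing member, since member inherits editor.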

 

This is still in the planning stage.  One nice thing about a plan like this is that each stage shows value on its own, so that if we only get as far as, say stage 3, we still have a better system than we do today.  Many of the details are still hiding in the weeds, and will require more design.  But I think the above approach makes sense, and will make Keystone do what a lot of people need it to do.

by Adam Young at November 17, 2014 03:45 AM

Minimal Token Size

OpenStack Keystone tokens can become too big to fit in the headers between mod_wsgi and the WSGI applications. Compression mitigates the problem somewhat, but if token sizes continue to grow, eventually they outpace the benefits of compression. How can we keep them to a minimal size?

There are two variables in the size of the tokens: the packaging, and the data inside. The packaging for a PKIZ token has a lower bound set by the signing algorithm. An empty CMS document of compressed data is going to be no less than 650 bytes, and an unscoped token with proper compression comes in at 930 bytes. That is manageable for headers, but it means that we have to keep the additional data inside the token body as small as possible.

Encoding

Let's shift gears back to the encoding. A recent proposal suggested using symmetric encryption instead of asymmetric. The idea is that a subset of the data would be encrypted by Keystone, and the data would have to be sent back to Keystone to validate. What would this save us?

Let's assume for a moment that we don't want to pay any of the overhead of the CMS message format. Instead, Keystone will encrypt just the JSON and base64-encode the data. How much does that save us? It depends on the encryption algorithm. An empty token will be tiny: 33 bytes when encrypted like this:

openssl bf -salt -a -in cms/empty.json -out cms/empty.bf

Which, according to the openssl man page, is blowfish encrypted and base64 encoded. What about a non-trivial token? Turns out, our unscoped token is quite a bit bigger: 780 bytes for the comparable call:

openssl bf -d -k key.data -in cms/auth_token_unscoped.json -out cms/auth_token_unscoped.bf

Compared with the PKIZ format at 929 bytes, the benefit does not seem all that great.

What about a scoped token with role data embedded in it, but no service catalog? It turns out the compression actually makes the PKIZ format more efficient: PKIZ is 917 bytes versus 1008 for the blowfish version.

Content

What data is in the token?

  • Identification: user id and name, domain id and possibly name (what you would see in an unsigned token).
  • Scope: domain and project info.
  • Roles: specific to the scope.
  • Service catalog: the sets of services and endpoints that implement those services.

It is the service catalog that is so problematic. While we have stated that you can make tokens without a service catalog, doing so really does not allow the endpoints to make any sort of decision about where to get resources.

There is a lot of redundant data in the catalog. We've discussed doing ID-only service catalogs. That implies that each entry is expanded on the endpoint side: the endpoints need to be able to fetch the full service catalog and then look up the entries by ID.

But let us think in terms of scale. If there is a service catalog with, say, 512 endpoints, we are still going to be sending tokens that are 512 * length(endpoint_id) bytes.

Can we do better? According to Jon Bentley in Programming Pearls, yes we can. We can use a bitmap. No, not the image format. Here a bitmap is an array of bits, each of which, when set, indicates the inclusion of that member in the set.

We need two things. First, a cached version of the service catalog on the endpoints. But now we need to put a slightly stricter constraint on it: the token must match up exactly to a version of the service catalog, and the service catalog must contain that version number. I'd take the git approach: do a sha256 hash of the service catalog document, and include that version in the token.
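A trivial way to derive that version identifier, just to illustrate the idea:

# hash the canonical service catalog document; the digest becomes the catalog "version"
sha256sum service_catalog.json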

Second, we need to enforce an ordering on the service catalog. Each endpoint must be in a repeatable location in the list, so that I can refer to endpoints not by ID but by sequence number.

Now, what would the token contain? Two things:

  • The hash of the service catalog.
  • A bitmap of the included services.

Here's a minimal service catalog:

Index | Service name | Endpoint ID
  0   | Nova         | N1
  1   | Glance       | G1
  2   | Neutron      | T1
  3   | Cinder       | C1

A service catalog that had all of the endpoints would be (b for binary) b1111 or, in hex, 0xF.

A service catalog with only Nova would be b0001 or 0x1.

Just Cinder would be b1000 or 0x8.

A service catalog with 512 endpoints would be 512 bits in length. That would be 64 bytes, about the length of a sha256 string. A comparable list of UUIDs would take 16384 characters, not including the JSON overhead of commas and quotes.

I've done a couple of tests with token data in both the minimized and the endpoint_id-only formats. With 30 endpoint ids, the compressed token size is 1969 bytes; adding one more ID increases the size to 1989. The minimized format is 1117 bytes when built with the following data:

"minimizedServiceCatalog": { 
    "catalog_sha256": "7c7b67a0b88c271384c94ed7d93423b79584da24a712c2ece0f57c9dd2060924",
    "entrymap": "Ox2a9d590bdb724e6d888db96f846c9fd8" },

The ID-only format would scale up at roughly 20 bytes per endpoint; the minimized one stays fairly fixed in length.

Are there other options? If a token without a catalog assumed that all endpoints were valid, and auth_token middleware set the environment for the request appropriately, then there is no reason to even send a catalog over.

Project filtering of endpoints could allow for definitions of service catalogs that are subsets of the overall catalog. These subordinate service catalogs could have their own ids and be sent over in the token. This would minimize the size of data in the token at the expense of the server; a huge number of projects, each with their own service catalog, would lead to a large synchronization effort between the endpoints and the Keystone server.

If a token is only allowed to work with a limited subset of the endpoints assigned to the project, then maintaining strictly small service catalogs in their current format would be acceptable. However, this would require a significant number of changes to how users and services request tokens from Keystone.

by Adam Young at November 17, 2014 02:42 AM

November 15, 2014

Opensource.com

Top 5 of the week: Free book on GitHub and open cloudy weather

Every week, I tally the numbers and listen to the buzz to bring you the best of last week's open source news and stories on Opensource.com, this week: November 10 - 14, 2014.

by Jen Wike Huger at November 15, 2014 02:00 PM

Lars Kellogg-Stedman

Creating a Windows image for OpenStack

If you want to build a Windows image for use in your OpenStack environment, you can follow the example in the official documentation, or you can grab a Windows 2012r2 evaluation pre-built image from the nice folks at CloudBase.

The CloudBase-provided image is built using a set of scripts and configuration files that CloudBase has made available on GitHub.

The CloudBase repository is an excellent source of information, but I wanted to understand the process myself. This post describes the process I went through to establish an automated process for generating a Windows image suitable for use with OpenStack.

Unattended windows installs

The Windows installer supports fully automated installations through the use of an answer file, or "unattend" file, that provides information to the installer that would otherwise be provided manually. The installer will look in a number of places to find this file. For our purposes, the important fact is that the installer will look for a file named autounattend.xml in the root of all available read/write or read-only media. We'll take advantage of this by creating a file config/autounattend.xml, and then generating an ISO image like this:

mkisofs -J -r -o config.iso config

And we'll attach this ISO to a vm later on in order to provide the answer file to the installer.

So, what goes into this answer file?

The answer file is an XML document enclosed in an <unattend>..</unattend> element. In order to provide all the expected XML namespaces that may be used in the document, you would typically start with something like this:

<?xml version="1.0" ?>
<unattend
  xmlns="urn:schemas-microsoft-com:unattend"
  xmlns:ms="urn:schemas-microsoft-com:asm.v3"
  xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State">

  <!-- your content goes here -->

</unattend>

Inside this <unattend> element you will put one or more <settings> elements, corresponding to the different configuration passes of the installer:

<settings pass="specialize">
</settings>

The available configuration passes are windowsPE, offlineServicing, generalize, specialize, auditSystem, auditUser, and oobeSystem.

Of these, the most interesting for our use will be:

  • windowsPE -- used to install device drivers for use within the installer environment. We will use this to install the VirtIO drivers necessary to make VirtIO devices visible to the Windows installer.

  • specialize -- In this pass, the installer applies machine-specific configuration. This is typically used to configure networking, locale settings, and most other things.

  • oobeSystem -- In this pass, the installer configures things that happen at first boot. We use this step to install some additional software and run sysprep in order to prepare the image for use in OpenStack.

Inside each <settings> element we will place one or more <component> elements that will apply specific pieces of configuration. For example, the following <component> configures language and keyboard settings in the installer:

<settings pass="windowsPE">
  <component name="Microsoft-Windows-International-Core-WinPE"
    processorArchitecture="amd64"
    publicKeyToken="31bf3856ad364e35"
    language="neutral"
    versionScope="nonSxS">

    <SetupUILanguage>
      <UILanguage>en-US</UILanguage>
    </SetupUILanguage>
    <InputLocale>en-US</InputLocale>
    <UILanguage>en-US</UILanguage>
    <SystemLocale>en-US</SystemLocale>
    <UserLocale>en-US</UserLocale>
  </component>
</settings>

Technet provides documentation on the available components.

Cloud-init for Windows

Cloud-init is a tool that will configure a virtual instance when it first boots, using metadata provided by the cloud service provider. For example, when booting a Linux instance under OpenStack, cloud-init will contact the OpenStack metadata service at http://169.254.169.254/ in order to retrieve things like the system hostname, SSH keys, and so forth.
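For instance, from inside a booted Linux guest you can query that metadata service directly (illustrative paths):

curl http://169.254.169.254/openstack/latest/meta_data.json
curl http://169.254.169.254/latest/meta-data/hostname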

While cloud-init has support for Linux and BSD, it does not support Windows. The folks at Cloudbase have produced cloudbase-init in order to fill this gap. Once installed, the cloudbase-init tool will, upon first booting a system:

  • Configure the network using information provided in the cloud metadata
  • Set the system hostname
  • Create an initial user account (by default "Admin") with a randomly generated password (see below for details)
  • Install your public key, if provided
  • Execute a script provided via cloud user-data

Passwords and ssh keys

While cloudbase-init will install your SSH public key (by default into /Users/admin/.ssh/authorized_keys), Windows does not ship with an SSH server and cloudbase-init does not install one. So what is it doing with the public key?

While you could arrange to install an ssh server that would make use of the key, cloudbase-init uses it for a completely unrelated purpose: encrypting the randomly generated password. This encrypted password is then passed back to OpenStack, where you can retrieve it using the nova get-password command, and decrypt it using the corresponding SSH private key.

Running nova get-password myinstance will return something like:

w+In/P6+FeE8nv45oCjc5/Bohq4adqzoycwb9hOy9dlmuYbz0hiV923WW0fL
7hvQcZnWqGY7xLNnbJAeRFiSwv/MWvF3Sq8T0/IWhi6wBhAiVOxM95yjwIit
/L1Fm0TBARjoBuo+xq44YHpep1qzh4frsOo7TxvMHCOtibKTaLyCsioHjRaQ
dHk+uVFM1E0VIXyiqCdj421JoJzg32DqqeQTJJMqT9JiOL3FT26Y4XkVyJvI
vtUCQteIbd4jFtv3wEErJZKHgxHTLEYK+h67nTA4rXpvYVyKw9F8Qwj7JBTj
UJqp1syEqTR5/DUHYS+NoSdONUa+K7hhtSSs0bS1ghQuAdx2ifIA7XQ5eMRS
sXC4JH3d+wwtq4OmYYSOQkjmpKD8s5d4TgtG2dK8/l9B/1HTXa6qqcOw9va7
oUGGws3XuFEVq9DYmQ5NF54N7FU7NVl9UuRW3WTf4Q3q8VwJ4tDrmFSct6oG
2liJ8s7ybbW5PQU/lJe0gGBGGFzo8c+Rur17nsZ01+309JPEUKqUQT/uEg55
ziOo8uAwPvInvPkbxjH5doH79t47Erb3cK44kuqZy7J0RdDPtPr2Jel4NaSt
oCs+P26QF2NVOugsY9O/ugYfZWoEMUZuiwNWCWBqrIohB8JHcItIBQKBdCeY
7ORjotJU+4qAhADgfbkTqwo=

Providing your secret key as an additional parameter will decrypt the password:

$ nova get-password myinstance ~/.ssh/id_rsa
fjgJmUB7fXF6wo

With an appropriately configured image, you could connect using an RDP client and log in as the "Admin" user using that password.

Passwords without ssh keys

If you do not provide your instance with an SSH key you will not be able to retrieve the randomly generated password. However, if you can get console access to your instance (e.g., via the Horizon dashboard), you can log in as the "Administrator" user, at which point you will be prompted to set an initial password for that account.

Logging

You can find logs for cloudbase-init in c:\program files (x86)\cloudbase solutions\cloudbase-init\log\cloudbase-init.log.

If appropriately configured, cloudbase-init will also log to the virtual serial port. This log is available in OpenStack by running nova console-log <instance>. For example:

$ nova console-log my-windows-server
2014-11-19 04:10:45.887 1272 INFO cloudbaseinit.init [-] Metadata service loaded: 'HttpService'
2014-11-19 04:10:46.339 1272 INFO cloudbaseinit.init [-] Executing plugin 'MTUPlugin'
2014-11-19 04:10:46.371 1272 INFO cloudbaseinit.init [-] Executing plugin 'NTPClientPlugin'
2014-11-19 04:10:46.387 1272 INFO cloudbaseinit.init [-] Executing plugin 'SetHostNamePlugin'
.
.
.

Putting it all together

I have an install script that drives the process, but it's ultimately just a wrapper for virt-install and results in the following invocation:

exec virt-install -n ws2012 -r 2048 \
  -w network=default,model=virtio \
  --disk path=$TARGET_IMAGE,bus=virtio \
  --cdrom $WINDOWS_IMAGE \
  --disk path=$VIRTIO_IMAGE,device=cdrom \
  --disk path=$CONFIG_IMAGE,device=cdrom \
  --os-type windows \
  --os-variant win2k8 \
  --vnc \
  --console pty

Where TARGET_IMAGE is the name of a pre-existing qcow2 image onto which we will install Windows, WINDOWS_IMAGE is the path to an ISO containing Windows Server 2012r2, VIRTIO_IMAGE is the path to an ISO containing VirtIO drivers for Windows (available from the Fedora project), and CONFIG_IMAGE is a path to the ISO containing our autounattend.xml file.

The fully commented autounattend.xml file, along with the script mentioned above, are available in my windows-openstack-image repository on GitHub.

The answer file in detail

windowsPE

In the windowsPE phase, we start by configuring the installer locale settings:

<component name="Microsoft-Windows-International-Core-WinPE"
  processorArchitecture="amd64"
  publicKeyToken="31bf3856ad364e35"
  language="neutral"
  versionScope="nonSxS">

  <SetupUILanguage>
    <UILanguage>en-US</UILanguage>
  </SetupUILanguage>
  <InputLocale>en-US</InputLocale>
  <UILanguage>en-US</UILanguage>
  <SystemLocale>en-US</SystemLocale>
  <UserLocale>en-US</UserLocale>

</component>

And installing the VirtIO drivers using the Microsoft-Windows-PnpCustomizationsWinPE component:

<component name="Microsoft-Windows-PnpCustomizationsWinPE"
  publicKeyToken="31bf3856ad364e35" language="neutral"
  versionScope="nonSxS" processorArchitecture="amd64">

  <DriverPaths>
    <PathAndCredentials wcm:action="add" wcm:keyValue="1">
      <Path>d:\win8\amd64</Path>
    </PathAndCredentials>
  </DriverPaths>

</component>

This assumes that the VirtIO image is mounted as drive d:.

With the drivers installed, we can then call the Microsoft-Windows-Setup component to configure the disks and install Windows. We start by configuring the product key:

<component name="Microsoft-Windows-Setup"
  publicKeyToken="31bf3856ad364e35"
  language="neutral"
  versionScope="nonSxS"
  processorArchitecture="amd64">

  <UserData>
    <AcceptEula>true</AcceptEula>
    <ProductKey>
      <WillShowUI>OnError</WillShowUI>
      <Key>INSERT-PRODUCT-KEY-HERE</Key>
    </ProductKey>
  </UserData>

And then configure the disk with a single partition (that will grow to fill all the available space) which we then format with NTFS:

  <DiskConfiguration>
    <WillShowUI>OnError</WillShowUI>
    <Disk wcm:action="add">
      <DiskID>0</DiskID>
      <WillWipeDisk>true</WillWipeDisk>

      <CreatePartitions>
        <CreatePartition wcm:action="add">
          <Order>1</Order>
          <Extend>true</Extend>
          <Type>Primary</Type>
        </CreatePartition>
      </CreatePartitions>

      <ModifyPartitions>
        <ModifyPartition wcm:action="add">
          <Format>NTFS</Format>
          <Order>1</Order>
          <PartitionID>1</PartitionID>
          <Label>System</Label>
        </ModifyPartition>
      </ModifyPartitions>
    </Disk>
  </DiskConfiguration>

We provide information about what to install:

  <ImageInstall>
    <OSImage>
      <WillShowUI>Never</WillShowUI>

      <InstallFrom>
        <MetaData>
          <Key>/IMAGE/Name</Key>
          <Value>Windows Server 2012 R2 SERVERSTANDARDCORE</Value>
        </MetaData>
      </InstallFrom>

And where we would like it installed:

      <InstallTo>
        <DiskID>0</DiskID>
        <PartitionID>1</PartitionID>
      </InstallTo>
    </OSImage>
  </ImageInstall>

specialize

In the specialize phase, we start by setting the system name to a randomly generated value using the Microsoft-Windows-Shell-Setup component:

<component name="Microsoft-Windows-Shell-Setup"
  publicKeyToken="31bf3856ad364e35" language="neutral"
  versionScope="nonSxS" processorArchitecture="amd64">
  <ComputerName>*</ComputerName>
</component>

We enable remote desktop because in an OpenStack environment this will probably be the preferred mechanism with which to connect to the host (but see this document for an alternative mechanism).

First, we need to permit terminal server connections:

<component name="Microsoft-Windows-TerminalServices-LocalSessionManager"
  processorArchitecture="amd64"
  publicKeyToken="31bf3856ad364e35"
  language="neutral"
  versionScope="nonSxS">
  <fDenyTSConnections>false</fDenyTSConnections>
</component>

And we do not want to require network-level authentication prior to connecting:

<component name="Microsoft-Windows-TerminalServices-RDP-WinStationExtensions"
  processorArchitecture="amd64"
  publicKeyToken="31bf3856ad364e35"
  language="neutral"
  versionScope="nonSxS">
  <UserAuthentication>0</UserAuthentication>
</component>

We will also need to open the necessary firewall group:

<component name="Networking-MPSSVC-Svc"
  processorArchitecture="amd64"
  publicKeyToken="31bf3856ad364e35"
  language="neutral"
  versionScope="nonSxS">
  <FirewallGroups>
    <FirewallGroup wcm:action="add" wcm:keyValue="RemoteDesktop">
      <Active>true</Active>
      <Profile>all</Profile>
      <Group>@FirewallAPI.dll,-28752</Group>
    </FirewallGroup>
  </FirewallGroups>
</component>

Finally, we use the Microsoft-Windows-Deployment component to configure the Windows firewall to permit ICMP traffic:

<component name="Microsoft-Windows-Deployment"
  processorArchitecture="amd64"
  publicKeyToken="31bf3856ad364e35"
  language="neutral" versionScope="nonSxS">

  <RunSynchronous>

    <RunSynchronousCommand wcm:action="add">
      <Order>3</Order>
      <Path>netsh advfirewall firewall add rule name=ICMP protocol=icmpv4 dir=in action=allow</Path>
    </RunSynchronousCommand>

And to download the cloudbase-init installer and make it available for later steps:

    <RunSynchronousCommand wcm:action="add">
      <Order>5</Order>
      <Path>powershell -NoLogo -Command "(new-object System.Net.WebClient).DownloadFile('https://www.cloudbase.it/downloads/CloudbaseInitSetup_Beta_x64.msi', 'c:\Windows\Temp\cloudbase.msi')"</Path>
    </RunSynchronousCommand>
  </RunSynchronous>
</component>

We're using Powershell here because it has convenient methods available for downloading URLs to local files. This is roughly equivalent to using curl on a Linux system.

oobeSystem

In the oobeSystem phase, we configure an automatic login for the Administrator user:

  <UserAccounts>
    <AdministratorPassword>
      <Value>Passw0rd</Value>
      <PlainText>true</PlainText>
    </AdministratorPassword>
  </UserAccounts>
  <AutoLogon>
    <Password>
      <Value>Passw0rd</Value>
      <PlainText>true</PlainText>
    </Password>
    <Enabled>true</Enabled>
    <LogonCount>50</LogonCount>
    <Username>Administrator</Username>
  </AutoLogon>

This automatic login only happens once, because we configure FirstLogonCommands that will first install cloudbase-init:

  <FirstLogonCommands>
    <SynchronousCommand wcm:action="add">
      <CommandLine>msiexec /i c:\windows\temp\cloudbase.msi /qb /l*v c:\windows\temp\cloudbase.log LOGGINGSERIALPORTNAME=COM1</CommandLine>
      <Order>1</Order>
    </SynchronousCommand>

And will then run sysprep to generalize the system (which will, among other things, lose the administrator password):

    <SynchronousCommand wcm:action="add">
      <CommandLine>c:\windows\system32\sysprep\sysprep /generalize /oobe /shutdown</CommandLine>
      <Order>2</Order>
    </SynchronousCommand>
  </FirstLogonCommands>

The system will shut down when sysprep is complete, leaving you with a Windows image suitable for uploading into OpenStack:

glance image-create --name ws2012 \
  --disk-format qcow2 \
  --container-format bare  \
  --file ws2012.qcow2

Troubleshooting

If you run into problems with an unattended Windows installation:

During the first stage of the installer, you can look in the x:\windows\panther directory for setupact.log and setuperr.log, which will have information about the early install process. The x: drive is temporary, and files here will be discarded when the system reboots.

Subsequent installer stages will log to c:\windows\panther\.

If you are unfamiliar with Windows, the type command can be used very much like the cat command on Linux, and the more command provides paging as you would expect. The notepad command will open a GUI text editor/viewer.

You can emulate the tail command using powershell; to see the last 10 lines of a file:

C:\> powershell -command "Get-Content setupact.log -Tail 10"

Technet has a Deployment Troubleshooting and Log Files document that discusses in more detail what is logged and where to find it.

by Lars Kellogg-Stedman at November 15, 2014 05:00 AM

November 14, 2014

Ed Leafe

The OpenStack Big Tent and Magnum

One of the most heavily-attended design summit events at last week’s OpenStack Summit in Paris was on Magnum, a proposed service for containers that would integrate into the Nova compute service. It seems that any session at any conference these days that involves Docker attracts a lot of interest, as Docker is an amazing new way of approaching how we think about virtualization and achieving efficiencies of scale.

Disclaimer: I know Adrian Otto, the leader of the Magnum project, from my days at Rackspace, and genuinely like him. I have no doubt that he would be able to put together a team that can accomplish all that he is setting out to do with this project. My thoughts and concerns about Magnum would be the same no matter who was leading the project.

The goal of the Magnum session was to present its concept and proposed architecture to the Nova team, with the hope of being designated as the official Docker project in OpenStack. However, there was a lot of pushback from many members of the Nova team. Some of it had to do with procedural issues; I learned later that Magnum had been introduced at the Nova mid-cycle meetup, and the expectations set then had not been met. I wasn't at that meetup, so I can't personally attest to that. But the overall sentiment was that it was just too premature to settle on one specific approach to something as important and fast-moving as Docker. While I support the idea of Magnum and hope that it is a wild success, I also think that the world of Docker/containers is moving so fast that what looks good today may look totally different 6 months from now. Moving such a project into OpenStack proper would only slow it down, and right now it needs to remain as nimble as possible.

I wrote a little while ago about my thoughts on the current discussions on the Big Tent vs. Layers vs. Small Core (Simplifying OpenStack), and I think that the Magnum effort is an excellent example of why we need to modify the approach to how we handle projects like this that add to OpenStack. The danger of the current Big Tent system of designating a single effort as the official OpenStack solution to a given problem is that by doing so we might be discouraging some group with a different and potentially better solution from pursuing development, and that would short-change the OpenStack ecosystem long-term. Besides, a little competition usually improves overall software quality, right?

by ed at November 14, 2014 06:06 PM

OpenStack Blog

OpenStack Community Weekly Newsletter (Post Paris)

How Operators Can Get Involved in Kilo #OpenStackSummit

Maish Saidel-Keesing participated in the Ops Summit: How to get involved in Kilo, and shared his notes from those sessions.

Development Reports from Summit

Relevant Conversations

Tips ‘n Tricks

Security Advisories and Notices

Upcoming Events

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

Welcome New Reviewers, Developers and Core Reviewers

jiangfei Cserey Szilard
ZongKai LI Ran Ziv
Swati Shukla Accela Zhao
Guillaume Giamarchi yatin
Seb Hughes François Bureau
Vadim Rutkovsky habuka036
Mark McDonagh Zengfa Gao
Craige McWhirter Jorge Munoz
juigil kishore Jobin Raju
Trung Trinh John Belamaric
Scott Lowe Seb Hughes
Roozbeh Shafiee David Caro
Mike Mason Craige McWhirter
Tan Lin Jan-Erik Mångs
David Caro Adolfo Duarte
Konstantinos Papadopoulos Tan Lin
Matteo Panella hossein zabolzadeh
Lena Vinod Pandarinathan
Michael Hagedorn Pieter
Major Hayden Lan Qi song
Magnus Lundin Vidyut
Arun S A G Inessa Vasilevskaya
Pratik Mallya Gil Meir
Brian Saville Dimitri Korsch
Chris Grivas Ian Adams
Marcin Karkocha Pratik Mallya
Yash Bathia
Wei Xiaoli
Mike Mason
Anton Arefiev
Yury Konovalov
Shang Yong

OpenStack Reactions

show-the-light

When a large company joins the OpenStack Foundation after ignoring it for a while

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

 

by Stefano Maffulli at November 14, 2014 06:06 PM

Daniel P. Berrangé

Faster rebuilds for python virtualenv trees

When developing on OpenStack, it is standard practice for the unit test suites to be run in a python virtualenv. With the Nova project at least, and I suspect most others too, setting up the virtualenv takes a significant amount of time as there are many packages to pull down from PyPI, quite a few of which need compilation too. Any time the local requirements.txt or test-requirements.txt files change it is necessary to rebuild the virtualenv. This rebuild is an all-or-nothing kind of task, so it can be a significant time sink, particularly if you frequently jump between different code branches.

At the OpenStack design summit in Paris, Joe Gordon showed Matt Booth how to set up devpi and wheel to provide a cache of the packages that make up the virtualenv. Not only does this avoid the need to repeatedly download the same packages from PyPI each time, it also avoids the compilation step, since the cache stores the final installed pieces for each python module. The end result is that it takes 20-30 seconds or less to rebuild a virtualenv instead of many minutes.

After a few painful waits for virtualenvs today, I decided to set it up too. I don't like installing non-packaged software as root on my machines, so what follows is all done as a regular user account. The first step is to pull down the devpi and wheel packages from PyPI, telling pip to install them under $HOME/.local

# pip install --user devpi
# pip install --user wheel

Since we're using a custom install location, it is necessary to update your $HOME/.bashrc file with new $PATH and $PYTHONPATH environment variables and then source the .bashrc file

# cat >> $HOME/.bashrc <<EOF
export PATH=\$PATH:$HOME/.local/bin
export PYTHONPATH=$HOME/.local/lib/python2.7/site-packages
EOF
# . $HOME/.bashrc

The devpi package provides a local server that will be used for downloads instead of directly accessing pypi.python.org, so this must be started

# devpi-server --start

Both devpi and wheel integrate with pip, so the next setup task is to modify the pip configuration file

# cat >> $HOME/.pip/pip.conf <<EOF
[global]
index-url = http://localhost:3141/root/pypi/+simple/
wheel-dir = /home/berrange/.pip/wheelhouse
find-links = /home/berrange/.pip/wheelhouse
EOF

We're pretty much done at this point – all that is left is to prime the cache with all the packages that Nova wants to use

# cd $HOME/src/cloud/nova
# pip wheel -r requirements.txt
# pip wheel -r test-requirements.txt

Now if you run any command that would build a Nova virtualenv, you should notice it is massively faster

# tox -e py27
# ./run_tests.sh -V

That is basically all there is to it. Blow away the virtualenv directories at any time and they’ll be repopulated from the cache. If the requirements.txt is updated with new deps, re-running the ‘pip wheel’ command above will update the cache to hold the new bits.
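
For example, with the same py27 environment used above, either of these will throw the virtualenv away and let it be repopulated from the cache (a small sketch, not part of the original workflow; the -r flag of tox is short for --recreate):

# tox -r -e py27

# rm -rf .tox/py27 && tox -e py27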

That was all incredibly easy and so I’d highly recommend devs on any non-trivially sized python project make use of it. Thanks to Joe Gordon for pointing this out at the summit!

by Daniel Berrange at November 14, 2014 05:28 PM

Red Hat Stack

Delivering Public Cloud Functionality in OpenStack

When it comes to delivering cloud services, enterprise architects have a common request to create a public cloud-type rate plan for showback, chargeback, or billing. Public cloud packaging is fairly standardized across the big vendors as innovations are quickly copied by others and basic virtual machines are assessed mainly on price. (I touched on the concept of the ongoing price changes and commoditization of public clouds in an earlier post.) Because of this standardization and relative pervasiveness, public cloud rate plans are well understood by cloud consumers. This makes them a good model for introducing enterprise users to new cloud services built on OpenStack. Enterprise architects are also highly interested in on-demand, self-service functionality from their OpenStack clouds in order to imitate the immediate response of public clouds. We will cover how to deliver on-demand cloud services in a future post.

Pricing and Packaging Cloud Services
Public cloud rate plans are very popular, seeing adoption within enterprises, private hosted clouds, and newer public cloud providers alike. Most public cloud providers use the typical public cloud rate plan as a foundation for layering on services, software, security, and intangibles like reputation to build up differentiated offerings. Enterprise cloud architects use similar rate plans to demonstrate to internal customers that they can provide on-demand, self-service cloud services at a competitive price. To manage internal expectations and encourage good behavior, enterprises usually introduce cloud pricing via a showback model which does not directly impact budgets or require exchange of money. Users learn cloud cost structures and the impact of their resource usage. Later, full chargeback can be applied where internal users are expected to pay for services provided.

As evidenced by easily accessible published rate plans, on-demand compute instances can represent a wide range of sizes, locations, operating systems, and optimizations (memory, storage, compute). Reserved instances provide the user the opportunity to make a one-time upfront payment in exchange for a discount on the hourly charge for the instance over the course of one or three years. Spot instances provide a discount in exchange for flexibility of running the instance at a time when the cloud has unused compute capacity.

Red Hat Enterprise Linux OpenStack Platform and Talligent OpenBook
Red Hat Enterprise Linux OpenStack Platform provides an integrated set of OpenStack modules and APIs that Talligent OpenBook connects to. OpenBook leverages the Keystone and Horizon modules for customer authentication and self-service. OpenBook also directly connects to other OpenStack components such as Nova, Swift, and Cinder to create an initial list of cloud tenants and assigned resources.

The key attributes of the rate plan are available from OpenStack via the Ceilometer metering component. Ceilometer data is non-repudiable and therefore auditable for billing of cloud services. OpenBook captures a full set of configuration details and usage metrics from OpenStack via the Ceilometer module. This includes all the meters associated with Nova (Compute), Neutron (Network), Glance (Image), Cinder (Volume), Swift (Object Storage), and Kwapi (Energy). A detailed list of the meters is available here.

As new metrics are added to Ceilometer, those metrics are picked up by the resource manager in OpenBook and made available as billable elements included in rate plans. For example, the Juno release of OpenStack is expected to add key SDN metrics such as Load Balancer as a Service, Firewall as a Service, and VPN as a Service. We plan to make those SDN metrics available in OpenBook within a few weeks of the GA release of the Juno version of OpenStack.

Creating a Public Cloud Rate Plan in OpenStack
Using a solution like OpenBook from Talligent, cloud architects can create a rate plan for Infrastructure-as-a-Service similar to any of the large public cloud providers. In order to approximate typical public cloud pricing in OpenStack, here are the key elements of the rate plan:

  • Server images in OpenStack correspond to on-demand instances. Generally, they are categorized according to size (small, medium, large, extra large, etc.) and have vCPU, memory, and instance storage configured accordingly.
  • While OpenBook can prorate charges for fractional billing periods, most public cloud providers round up all fractions to the next hourly interval when computing charges (see the small example after this list).
  • OpenStack includes the concept of geographical regions. The region attribute enables the cloud architect to charge different rates for the different regions. This might be done because of variations in cost structures, data center capabilities, currencies, taxes, or other location specific variables.
  • Once the image, billing period, and regions have been established, users can designate a unit charge for each server image. Based on a recent survey of public cloud prices, rates range from $0.013 per hour for a tiny instance to $0.28 per hour for an extra large instance.
  • Block storage can be billed by volume type (SSD, SATA, etc.) and by GB hour for volumes and GB month for images and snapshots. GB hour and GB month calculations can be based on either peak or average hourly size.
  • Object storage can be billed by storage object size, container size, outgoing bytes transferred, and number of API requests.
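
As a toy illustration of the hourly round-up mentioned above (the instance size, rate, and runtime here are invented for the example, not taken from any provider's price list): a "large" instance billed at $0.28 per hour that ran for 30 hours and 10 minutes is charged for 31 full hours.

$ echo "31 * 0.28" | bc
8.68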

That covers the basics. Advanced users can develop services that take into account dedicated connections between the customer and cloud, GPUs, DNS, VPN connections, high I/O instances, high storage instances, high performance computing, etc. Public cloud rates change regularly as the market grows and competition increases. Instance types might not correspond directly to instances configured within OpenStack, so please note that these rates will be an approximation only. Visit Talligent.com for more information.

About Talligent
A Business-Ready Cloud™ requires business processes and controls. Once your OpenStack cloud is deployed, you need the visibility and control to maintain high operational efficiency, as well as the billing flexibility to keep up with ever-changing customer requirements and market conditions. Self-service capabilities from Talligent support the high-response, on-demand cloud functionality that customers expect. Public and private cloud offerings and prices are evolving rapidly – Talligent supplies the tools to evolve your cloud and compete.

by johnmeadowsjr at November 14, 2014 02:00 PM

Percona

Q&A: Percona XtraDB Cluster as a MySQL HA solution for OpenStack

Thanks to all who attended my Nov. 12 webinar titled, “Percona XtraDB Cluster as a MySQL HA Solution for OpenStack.” I had several questions which I covered at the end and a few that I didn’t. You can view the entire webinar and download the slides here.

Q&A: Percona XtraDB Cluster as a MySQL HA solution for OpenstackQ: Is the read,write speed reduced in Galera compared to normal MySQL?

For reads, it’s the same (unless you use the sync_wait feature, which used to be called causal reads).

For writes, the cost of replication (~1 RTT to the worst node), plus the cost of certification, will be added to each transaction commit.  This will have a non-trivial impact on your write performance, for sure.  However, I believe most OpenStack meta store use cases should not suffer overly from this performance penalty.

Q: Do state transfers affect an ongoing transaction within the nodes?

Mostly, no.  The joining node will queue ongoing cluster replication while receiving its state transfer and use that to do its final ‘catchup’.  The node donating the state transfer may get locked briefly during a full SST, which could temporarily block writes on that node.  If you use the built-in clustercheck (or check the same things it checks), you can avoid this by diverting traffic away from a donor.
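
For instance, the node state that clustercheck bases its verdict on can also be queried directly (the hostname below is a placeholder); a node reporting anything other than Synced, for example Donor/Desynced during a blocking SST, is one you probably want to route traffic away from:

$ mysql -h node1 -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"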

Q: Perhaps not the correct webinar for this question, but I was also expecting to hear something about using PXC in combination with OpenStack Trove. If you’ve got time, could you tell something about that?

Trove does not yet support the concept of more than a single SQL target.  My understanding is that a recent improvement here for MongoDB setups may pave the way for more robust Trove instances backed by Percona XtraDB Cluster.

Q: For load balancing using the Java MySQL driver, would you suggest HAProxy or the load-balancing connection in the Java driver? Also, how do things work in the case of persistent connections and connection pools like DBCP?

Each node in a PXC cluster is just a MySQL server that can handle normal MySQL connections.  Obviously if a node fails, it’s easy to detect that you should probably not keep using that node, but in Percona XtraDB Cluster you need to watch out for things like cluster partitions where nodes lose quorum and stop taking queries (but still allow connections).  I believe it’s possible to configure advanced connection pooling setups to properly health check the nodes (unless the necessary features are not in the pool implementation), but I don’t have a reference architecture to point you to.

Q: Are there any manual solutions to avoid deadlocks within a high write context to force a query to execute on all nodes?

Yes, write to a single node, at least for the dataset that has high write volume.

Remember what I said in the webinar:  you can only update a given row once per RTT, so there’s an upper cap of throughput on the smallest locking granularity in InnoDB (i.e., a single row).

This manifests itself in two possible ways:

1) In a multi-node writing cluster, by triggering deadlocks on conflicts.  Only approximately 1 transaction modifying a given row per RTT would NOT receive a deadlock.

2) In a single-node writing cluster, by experiencing longer lock waits.  Transaction times (and subsequently lock times) are extended by the replication and certification time, so other competing transactions will lock wait until the blocking transaction commits. There are no replication conflict deadlocks in this case, but the net effect is exactly the same:  only 1 transaction per row per RTT.

Galera offers us high data redundancy at the cost of some throughput.  If that doesn’t work for your workload, then asynchronous replication (or perhaps semi-sync in 5.7) will work better for you.

Note that there is wsrep_retry_autocommit.  However, this only works for autocommitted transactions.  If your write volume is so high that you need to increase this a lot to get the conflict rate down, you are likely sacrificing a lot of CPU power (rollbacks are expensive in InnoDB!) if a single transaction needs multiple retries to commit.  This still doesn’t get around the law:  1 trx per row per RTT at best.
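
For reference, here is a minimal sketch of raising that knob in my.cnf (the value 2 is arbitrary and purely illustrative; the default is 1, and as noted above, large values mostly burn CPU on rollbacks rather than fix the underlying conflict rate):

[mysqld]
wsrep_retry_autocommit = 2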

That was all of the questions. Be sure to check out our next OpenStack webinar on December 10 by my colleague Peter Boros. It’s titled “MySQL and OpenStack Deep Dive” and you can register here (and like all of our webinars, it’s free). I also invite you to visit our OpenStack Live 2015 page and learn more about that conference in Santa Clara, California this coming April 13-14. The Call for Papers ends Nov. 16 and speakers get a full-access conference pass. I hope to see you there!

The post Q&A: Percona XtraDB Cluster as a MySQL HA solution for OpenStack appeared first on MySQL Performance Blog.

by Jay Janssen at November 14, 2014 02:00 PM

Sylvain Bauza

How to compare 2 patchsets in Gerrit?

Apples & Oranges - They Don't Compare (Flickr, CC2.0)

Apples & Oranges – They Don’t Compare (Flickr, CC2.0)

Reviewing is one of the duties I do daily. I try to dedicate around 2 hours each day to reading code, understanding the principles, checking if everything is OK from a Python perspective, verifying the test coverage and finally trying to understand whether it’s good for the project I’m supporting and doesn’t break anything, even if CI is happy.

All that stuff can take time. And as I’m lazy, I really dislike the idea of reviewing again a change that I previously approved if I’m sure the new patchset is only a rebase. So the question quickly came to mind: how can I check whether 2 patchsets are different?

Actually, there are many ways to do so with Gerrit and Git. The obvious one is to use the Gerrit UI and ask it to compare 2 patchsets.

The main problem is that it shows all the differences, including the changes coming from the rebase so that’s not really handy, unless the change is very small.

Another is to use the magical “-m” option of git review, which rebases each patchset on master and compares them.

git review -m <CHANGE_NUMBER>,<OLD_PS>[-<NEW_PS>]

That’s a pretty nice tool because it makes sure the diffs are made against the same thing (here, master), so it works fine provided your local master is up to date. That said, if you get a conflict when rebasing your old patchset, you may be stuck, and the output can sometimes be confusing.

In the end, I settled on something different: I fetch each patchset into a local branch and compare each branch with the previous commit it has in that branch. Something like this:

vimdiff <(git diff ${MY_PS1_BRANCH}^ ${MY_PS1_BRANCH}) <(git diff ${MY_PS2_BRANCH}^ ${MY_PS2_BRANCH})

Here, MY_PS1_BRANCH would just be the local branch for the previous patchset, and MY_PS2_BRANCH would be the local branch for the next patchset.

By comparing the previous commit and the current commit (i.e. the change) on one branch and the other, I’m quite sure that I won’t have to carry all the rebasing problems with my local master.
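
For reference, here is a sketch of how those local branches can be created in the first place; the project, change number and patchset numbers below are placeholders (Gerrit exposes every patchset under refs/changes/&lt;last two digits of the change&gt;/&lt;change number&gt;/&lt;patchset&gt;):

git fetch https://review.openstack.org/openstack/nova refs/changes/56/123456/3 && git checkout -b ps3 FETCH_HEAD
git fetch https://review.openstack.org/openstack/nova refs/changes/56/123456/4 && git checkout -b ps4 FETCH_HEAD

With MY_PS1_BRANCH=ps3 and MY_PS2_BRANCH=ps4, the vimdiff command above then compares only the two changes themselves.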


by Sylvain Bauza at November 14, 2014 10:26 AM

November 13, 2014

Mirantis

How small is too small? A minimal OpenStack testbed

When I started working on OpenStack I wanted to test all kinds of things and had no budget to build even a scaled-down test environment. And I know many engineers often test changes on mission critical systems in production because of their organization’s financial constraints, even at the real risk of downtime, lost revenue, and unhappy customers. Not to mention that testing in production is not as thorough as it could be.  It doesn’t have to be this way.

An inexpensive testing solution?

An OpenStack testbed does not have to be a full-size representation of the cluster. The architecture should be the same, but the number of compute and storage nodes, the specs of the hardware, and the networking infrastructure can be scaled down. A cloud with a hundred compute nodes and 25 Swift or Ceph storage nodes of 20TB each can be represented by a mini cloud with the same number of controllers and Swift proxies, but with only 5-10 compute nodes and five Swift or Ceph nodes at 10% of the total cost of the cloud. This is a great solution when you consider that a single large outage in a production environment can eat up the cost of a relatively small testbed.

Of course some things cannot be adequately tested with a scaled-down testbed, such as the behavior of a large number of nodes under load or bottleneck testing and similar issues that are only seen at scale. If testing such issues you may need a full-scale replica of the existing environment; however, the vast majority of configuration changes and failure scenarios can be successfully tested in a miniature environment.

The scaled-down environment is still too expensive?

If a scaled-down version of a real environment is still too expensive, you can test specific issues in a smaller environment. For example, you can use a single-controller configuration  instead of an HA configuration.

So what is the minimum size of a physical environment? For testing Nova, a single controller and one compute node can be sufficient. If a high load on the compute node is expected, you can use SSDs on the storage side, LACP on the network side, and more memory on the system side to alleviate the worst bottlenecks.

A Swift cluster can consist of a single storage node and a single proxy, which can reside on the same hardware. You can install Ceph alongside proxies or controllers, a technique Mirantis OpenStack employs to provide Ceph for internal storage without additional nodes.

Unlike an architecturally similar environment, a stripped down environment does not lend itself to running the same configuration you would run in production, so you will have to hand-prep changes tested in the small environment and plan for the impact of the scaled down version.

Still not cheap enough?

Looking at virtualization options I wondered how far I could get with a single node. I bought what I would normally consider a gaming machine – a tower case with a beefy power supply, a mainboard with a single AMD 8-core CPU, 32GB of memory, two 500GB SSDs, and an additional GbE network card. Total cost was roughly $1000. VMWare ESXi provided a number of VMs that I bound together into clouds.

Initially designed as a learning tool, this machine has seen a lot of test cases, and other developers have borrowed it on occasion. Mirantis OpenStack is preinstalled on one of the VMs, ready to deploy whatever cluster I may need for research. The machine is still sitting under my desk, waiting for the next test case.

And don’t forget one of the biggest advantages of virtualization. Before doing something potentially destructive you can make a snapshot of a VM and reinstate it if something does go awry.

I would never have believed that such a small and seemingly restricted machine would be able to do so much good in a world where scale is of paramount importance, but with a little imagination and ingenuity this mini testbed lets me try out a lot of things that would have cost time and money if I had wanted to set them up in a test environment.

Think outside the box

This old management adage still has merit, but occasionally it pays to think about a box — a box you can use to educate yourself, and develop software, troubleshoot, and test new features on.

The post How small is too small? A minimal OpenStack testbed appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Christian Huebner at November 13, 2014 04:03 PM

Percona

Percona Live London 2014 Wrap Up

The 2014 edition of Percona Live London brought together attendees from 30 countries to hear insightful talks from leaders in the MySQL community. The conference kicked off on Monday with a full day of tutorials followed by the very popular Community Dinner featuring a double decker bus shuttle from the conference to the event.

Tuesday started with keynote talks by representatives from MySQL, VMware, HGST, Codership, and Percona. I particularly enjoyed the talks by Tomas Ulin of MySQL (which highlighted the upcoming MySQL 5.7 release) and Robert Hodges of VMware (which focused on the evolution of MySQL). The remainder of the day was filled by six time slots of breakout sessions (30 sessions in all) broken into 10 tracks. The day wrapped up with the always popular Community Networking Reception. Attesting to the quality of the conference, 4 out of 5 respondents to our post conference survey indicate they are likely to attend the conference again in 2015.

The session slides are available by visiting the Percona Live London 2014 conference website (look for the “Session Slides” button in the right hand column). Slides are added as they come in from the speakers so please check back if the slides are not yet available for a specific talk that interests you.

Special thanks goes out to the Percona Live London 2014 Conference Committee which put together such a great program:

  • Cedric Peintre of Dailymotion
  • David Busby of Percona
  • Colin Charles of MariaDB
  • Luis Motta Campos of the eBay Classifieds Group
  • Nicolai Plum of Booking.com
  • Morgan Tocker of Oracle
  • Art van Scheppingen of Spil Games

Percona Live London 2014 Attendee Survey

This year we collaborated with ComputerworldUK to run a short survey at the conference, which should appear in that publication in the near future. We had 64 responses, all of them from MySQL professionals who attended the conference. The results were interesting:

Do you agree with the statement that “Oracle has been a good steward of MySQL over the past twelve months”?
YES = 81%
NO = 19%

Are you currently running a production OpenStack environment in your organization?
YES = 17%
NO = 83%

Have you evaluated OpenStack within the past twelve months?
YES = 25%
NO = 75%

Do you plan to evaluate OpenStack in the next twelve months?
YES = 48%
NO = 52%

Are you currently using an AWS product to run MySQL in the Cloud?
YES = 28%
NO = 72%

Are you more likely to switch to a non-MySQL open source database now than you were twelve months ago?
YES = 35%
NO = 65%

The sentiment about Oracle’s stewardship of MySQL compares favorably with the comments by our own Peter Zaitsev in a recent ZDNet article titled “MySQL: Why the open source database is better off under Oracle“.

Percona Live MySQL Conference and OpenStack Live Conference

The ratings related to OpenStack mirror our experience with the strong growth in interest in that technology. In response, we are launching the inaugural OpenStack Live 2015 conference in Silicon Valley which will focus on making attendees more successful with OpenStack with a particular emphasis on the role of MySQL and Trove. The event will be April 13-14, 2015 at the Santa Clara Convention Center. The call for proposals closes on November 16, 2014.

Our next Percona Live MySQL Conference and Expo is April 13-16, 2015 in Silicon Valley. Join us for the largest MySQL conference in the world – last year’s event had over 1,100 registered attendees from 40 countries who enjoyed a full four days of tutorials, keynotes, breakout sessions, networking events, BOFs, and more. The call for speaking proposals closes November 16, 2014 and Early Bird registration rates are still available so check out the conference website now for full details.

Thanks to everyone who helped make Percona Live London 2014 a great success. I look forward to the Percona Live MySQL Conference and the OpenStack Live Conference next April in Silicon Valley!

The post Percona Live London 2014 Wrap Up appeared first on MySQL Performance Blog.

by Terry Erisman at November 13, 2014 02:16 PM

Stefano Maffulli

Tips to improve organizations contributing to OpenStack

The slides of my presentation at OpenStack Summit in Paris 2014 (download ODP source, 9M).

Slides on SlideShare: http://www.slideshare.net/slideshow/embed_code/41502830

And the sessions’ recording:

Recording: http://www.youtube.com/embed/4sXuadpEhk8



by stefano at November 13, 2014 11:19 AM

Opensource.com

8 new tips for getting things done with OpenStack

Want to get more done with OpenStack? We've got you covered.

We've put together some of the best how-tos, guides, tutorials, and tips published over the past month into this handy collection. And if you need more help, the official documentation for OpenStack is always a great place to turn.

by Jason Baker at November 13, 2014 10:00 AM

November 12, 2014

Mirantis

See the Mirantis talks from Paris OpenStack Summit on video

The OpenStack Kilo 2014 Summit in Paris is now behind us. With thousands of attendees, a couple hundred presentations and design sessions, there was a lot to take in. And of course, the city of lights provided plenty of distractions.  

For those of you who didn’t have a chance to travel to Paris or attend the 20+ presentations that we took part in, we’ve curated a quick list of videos of those presentations (produced by the OpenStack Foundation) below. 

A full list of all conference videos is here

The post See the Mirantis talks from Paris OpenStack Summit on video appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Admin at November 12, 2014 05:26 PM

Tesora Corp

Short Stack: Execs discuss Kilo, OpenStack and Software-defined Economy and the Mr. Spock view of OpenStack

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Here we go with this week's links:

OpenStack welcomes you to the software-defined economy | ZDNet

As we transform to a software-defined economy (software really is eating the world, as Marc Andreessen once said), we need to be faster to market and quicker to react to competitive pressure, and proprietary software inhibits that.  OpenStack started as a small project and now it's a huge project because people are recognizing the value of an open source project in today's rapidly changing world.

As OpenStack Stabilises Big Questions Remain for the Foundation | Forrester/Computerworld

There is little dispute that OpenStack is a maturing project now, but Forrester believes that as we head into 2015, we will move from experimentation to full-blown implementations. Forrester would like to hear more from the enterprise user base and less from large vendors about the future of the project.

OpenStack: Do The Needs of the Many Outweigh the Needs of the Few? [VIDEO] | Datamation

How can you resist an article that uses a Star Trek reference? And when it comes to the OpenStack community, you may need to look at the path forward the Vulcan way: the needs of the many outweigh the needs of the few. Sounds good.

OpenStack Execs Discuss Kilo Release, Ironic Project and More | eWeek

Leaders of the OpenStack community had a discussion and one of the things that came out of it was the support for Bare Metal servers in the next release of OpenStack, which could provide performance improvements to drive additional use cases for OpenStack moving forward.

5 things we learnt from OpenStack Summit 2014 | Computer Business Review

Now that we've wrapped up the OpenStack Paris Summit, there are a few things that are clear. The community has grown enormously (although the exact size is unclear) and the big companies are very interested in getting involved. This article provides a summary of themes and takeaways.

by 693 at November 12, 2014 01:15 PM

Aptira

What is Ceilometer's Minimum Viable Product?

I believe the basic goals of Ceilometer are laudable. It attempts to gather data about the state of an OpenStack instance, which is a useful goal. It then attempts to solve a bunch of problems related to that data, which is also useful.

However, many of the problems it tries to solve work at cross purposes to each other, and so choices that are made to accommodate solutions to these problems prevent the project from solving any of them satisfactorily. Solutions to some require high resolution data, some do not. Solutions to other problems require large amounts of historical data, others do not.

Ceilometer attempts to address these problems in a single database (OK a few databases, but one for each class of data: meters, events, alarms). This can never meet the requirements of every solution. Instead Ceilometer should focus on gathering the information, getting it to consuming systems in a fast, reliable and easy to consume manner, and then it should stop. There's plenty of work to do to reach this smaller set of goals, and there's an enormous amount of work to be done creating consumer systems that deliver real customer value.

Let other projects start to solve the problems we need to solve.

- Let's build a usage correlation system that takes the data stream and emits simple rateable usage records. Let's build another system that rates the records according to rules defined in a product catalogue.

- Let's build a policy engine that can pull the levers on various APIs based on a stream of information coming from Ceilometer.

- Let's build a repository for long-term storage of data and accompanying analysis tools.

and so on. But let's make them separate projects, not attempt to solve all the problems from a single DB.

The projects can focus on their own specific needs, and because they are loosely coupled to metric collection and completely decoupled from one another, they can ignore the needs of others that might be destructive to their own value. They can adopt an attitude of excellence rather than one of compromise.

More generally, let's examine our approach to adding features to OpenStack and see whether continually adding features to existing projects is necessarily the right way to go.

by Roland Chan (roland@aptira.com) at November 12, 2014 05:41 AM

Cloudify Engineering

From VMWare Virtualization to Public Cloud. Now Hybrid Cloud.

vCloud, OpenStack & Cloudify VMWare has been in the front of the virtualization space for many years and is by...

November 12, 2014 12:00 AM

November 11, 2014

Sébastien Han

OpenStack Glance: import images and convert them directly in Ceph

To work in optimal circumstances, Ceph requires RAW images. However, uploading RAW images into Glance is painful because it takes a while. Let's see how we can make our lives easier.


First, let’s import our image; for the purpose of this example I used a tiny CirrOS image:

$ sudo rbd -p imajeez --image-format 2 import cirros-0.3.0-x86-64-disk.img.1 $(uuidgen)

$ sudo rbd -p imajeez info 33fc77e2-df0e-4f71-a966-b8df2b245f42
rbd image '33fc77e2-df0e-4f71-a966-b8df2b245f42':
  size 9532 kB in 3 objects
  order 22 (4096 kB objects)
  block_name_prefix: rbd_data.331574b0dc51
  format: 2
  features: layering

Now this is where it becomes interesting! The good thing here is that we can trigger the conversion directly within Ceph using the qemu-img tool. Simply run a conversion and generate a new name based on a new UUID.

$ sudo qemu-img convert -O raw rbd:imajeez/33fc77e2-df0e-4f71-a966-b8df2b245f42 rbd:imajeez/$(uuidgen)

We now have two images in our pool:

$ sudo rbd -p imajeez ls
33fc77e2-df0e-4f71-a966-b8df2b245f42
4f460d8c-2af3-4041-a28d-12c3631a305f

And the image has a RAW format:

$ sudo qemu-img info rbd:imajeez/4f460d8c-2af3-4041-a28d-12c3631a305f
image: rbd:imajeez/4f460d8c-2af3-4041-a28d-12c3631a305f
file format: raw
virtual size: 39M (41126400 bytes)
disk size: unavailable
cluster_size: 4194304

We can now delete our original QCOW2 image:

$ sudo rbd -p imajeez rm 33fc77e2-df0e-4f71-a966-b8df2b245f42

In order for this image to be compliant with Glance, we need to snapshot and protect it:

$ sudo rbd --pool imajeez snap create --snap snap 4f460d8c-2af3-4041-a28d-12c3631a305f
$ rbd --pool imajeez snap protect --image 4f460d8c-2af3-4041-a28d-12c3631a305f --snap snap

$ sudo rbd -p imajeez snap ls 4f460d8c-2af3-4041-a28d-12c3631a305f
SNAPID NAME     SIZE
     4 snap 40162 kB

Finally, add this image to Glance. Note that using the --location flag will not upload anything, since we directly register the image's location in Ceph.

$ glance image-create --id 4f460d8c-2af3-4041-a28d-12c3631a305f --name CirrosImport --store rbd --disk-format raw --container-format bare --location rbd://$(sudo ceph fsid)/imajeez/4f460d8c-2af3-4041-a28d-12c3631a305f/snap
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | None                                 |
| container_format | bare                                 |
| created_at       | 2014-11-10T17:00:02                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | raw                                  |
| id               | 4f460d8c-2af3-4041-a28d-12c3631a305f |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | CirrosImport                         |
| owner            | 2f314f86ca9048ac828baedb5e8e4e2a     |
| protected        | False                                |
| size             | 41126400                             |
| status           | active                               |
| updated_at       | 2014-11-10T17:00:02                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

This procedure will probably be reproduced by Glance itself as soon as the conversion blueprint gets implemented. As always, it's easier with Ceph since we don't need to store the image in a temporary location, convert it, and then upload it. This is unfortunately the problem with backends such as Swift.

November 11, 2014 10:45 PM

Stefano Maffulli

Post-summit summary of OpenStack Paris

Seven straight full-time working days, from 9am on Saturday and Sunday for the second edition of OpenStack Upstream Training through the feedback session at 6pm on Friday evening, are finally over and I’m starting to catch up.

I spoke on Monday, highlighting some of the findings of my research on how to change organizations so they can better contribute to OpenStack, and later led, with Rob, Sean and Allison, the creation of the Product working group (more on this later). The double-length session dedicated to addressing the growing pains in Neutron removed the “if” and left the question as “how and when do we empower drivers and plugin developers to take their fate into their own hands”.

I think this has been one of the most productive summits. I’m leaving Paris with the feeling that we keep on improving our governance models and processes to better serve our ecosystem. Despite the criticisms, this community keeps changing and adapting to new opportunities and threats like nothing I’ve seen before. I’m all charged up and ready for the six months of intense work leading up to Vancouver!



by stefano at November 11, 2014 09:23 PM

IBM OpenStack Team

Open source in the City of Lights: Recapping the OpenStack Summit

Last week more than 4,600 developers and users—an attendance record—gathered in Paris for the OpenStack Summit at the Palais des Congrès. Collectively we shared best practices and user stories, and discussed the 11th release of OpenStack cloud software, Kilo, planned for April 2015.

With keynotes by banks, car companies, media conglomerates, and with 59 countries participating, this was the biggest OpenStack Summit yet. And when a BMW i8 was driven on the stage during the keynote speeches, there was a  gasp from the audience, both for the beauty of the driving machine itself, and for the collective realization that the moment is here: OpenStack is on the right side of history.

As tweeted by Andrew Hately, Distinguished Engineer of Open Source at IBM, if there was any doubt left about OpenStack’s viability in enterprises, it was laid to rest at the Paris summit.

It definitely resonated when OpenStack Foundation director Jonathan Bryce (@jbryce) mentioned that in a software-defined economy everyone competes with a startup.

My intention at the summit was to look at it through a PaaS lens and see how much of the momentum is shifting up the stack. It was also surprising to see how much traction Docker, the open platform for distributed applications for developers and system administrators, has gained. Docker, Docker, Docker was the chant throughout the conference, and all the sessions with this one word in the title were a huge draw.

Open technologies

On the very first day of the summit, I opened the brown bag tech talk on “Docker, Cloud Foundry and OpenStack, the leading Open Source Triumvirate.” There is a huge demand from the community to understand where these three technologies meet. The talk described the technical advancements in OpenStack and Cloud Foundry with respect to Docker, and how the three are converging.

IBM and the open community

Todd Moore, Brad Topol, Dave Lindquist, Moe Abdulla and others from IBM gave talks about IBM’s participation in the OpenStack community and our offerings built on top of OpenStack. Brad Topol, IBM Distinguished Engineer for OpenStack, showed a demo of the OpenStack Swift service in Bluemix, our PaaS platform built on yet another open source technology, Cloud Foundry.

As outlined by Angel Diaz, IBM VP for Open Source in this insightful blog, we’ve contributed 10 percent of the total OpenStack code contributions to the April 2014 OpenStack release, and we are committed to open source. And as the second highest contributor to open source PaaS platform Cloud Foundry, we’re increasing our contributions by Pull Request to 33 percent of all contributions so far this year. We are also part of the newly formed Docker Advisory Board.

Beyond code contributions

My colleagues Daniel Krook and Manuel Sylveyra also gave a talk during the summit about how we can participate more in the open communities beyond just contributing code. The team came back together with Shaun Murakami and Kalonji Bankole to deliver a well-attended session on “Dockerizing OpenStack HA.”

PaaS: Cloud Foundry

Shifting the focus back toward the PaaS lens, Cloud Foundry was definitely the “in” PaaS technology. IBM, Pivotal, Intel, HP, Canonical and others led discussions on the topic.

IBM and Pivotal led a joint design session around Cloud Foundry BOSH. Ferran Rodenas (@ferdy) from Pivotal joined IBMer Kalonji Bankole and me for a session with folks from Cisco, Rackspace, Bloomberg and Anynines. Each organization shared its use cases in Cloud Foundry, OpenStack and BOSH.

We had productive discussions on the use of BOSH, Chef/Puppet, Spiff as technology for manifest manipulation, enhancements needed in BOSH CPI for OpenStack (or any other IaaS in general), generic capability in BOSH to execute scripts, External CPI, Universal Stemcells and more.

If you are interested in the details, please follow the slides and Etherpad links. Ferran also gave a talk on Cloud Foundry and OpenStack, as did I during another brown bag tech talk, “Lifecycle Management of Cloud Foundry on OpenStack.”

Other interesting talks on the topic included Intel’s “Extending OpenStack IaaS with Cloud Foundry PaaS.” Catherine Spence (@cw_spence) shared Intel IT’s experience, the technical solution, integration methods and benefits of OpenStack, and discussed future direction.

Video: https://www.youtube.com/embed/9vfkk_rNwyw

Canonical gave an insightful talk about how they are using Juju charms for scaling Cloud Foundry, OpenStack and Hadoop clusters in one go (“Hadoop and Cloud Foundry at Scale on OpenStack (Canonical)”), and it was very well received.

There is much debate in the community about Juju, BOSH, and Heat as the deployment mechanism for Cloud Foundry. At IBM, we are using BOSH for deployments of Bluemix pods, and so far it has been a very useful tool for deployment and lifecycle management of Cloud Foundry.

Other talks included one from HP about how they are leveraging services from the OpenStack portfolio, like Trove DB, to extend the Cloud Foundry platform.

Three Takeaways from Paris

Overall, it was a great week against the backdrop of a beautiful city! At the end of it, as I was walking by the serene Seine looking at reflections of the beautifully lit city in the night, three things were resonating in my mind.

1. OpenStack is the next Open Source blockbuster after Linux (as outlined by Jim Zemlin, Executive Director, Linux Foundation).

2. It’s mature enough for enterprises and for production deployments, and it’s moving up the stack for building production PaaS platforms like Cloud Foundry.

3. Embrace containers and Docker; they’re coming faster than we think.

The post Open source in the City of Lights: Recapping the OpenStack Summit appeared first on Thoughts on Cloud.

by Animesh Singh at November 11, 2014 06:03 PM

Ed Leafe

OpenStack Paris Summit – Growing Up

I’ve just returned from the 5-day-long OpenStack Summit, and after a very long day of travel, my brain is still slightly crispy, but I wanted to record some impressions of the summit. Since it was held in Paris, there are a lot of non-technical experiences I may write about, but for now I’ll limit my thoughts to those concerning OpenStack.

For those who don’t know my history, I was one of the original OpenStack developers who began the project in 2010, and participated in all of the early summits. After two years I changed roles in my job, which meant that I was no longer actively contributing to OpenStack, so I no longer was able to attend the summits. But now that I’m back as a full-time contributor in my role at IBM, I eagerly anticipated re-acquainting myself with the community, which had evolved since I had last been an active member.

First, let me say how impressive it is to see this small project we started grow into the truly international phenomenon it has become. The sheer number of people and exhibitors who came to Paris to be involved in the world of OpenStack was amazing: the latest count I saw was over 4,600 attendees, which contrasts with around 70 at the initial summit in Austin.

Second, during my hiatus away from active development on OpenStack many of the active core contributors to Nova have moved on, and a whole new group has taken their place. In the months leading up to the summit I got to know many of them via IRC and the dev email list, but had never met them in person. One thing about OpenStack development that has always been true is that it’s very personal: you get to know the people involved, and have a good sense of what they know and how they work. It is this personal familiarity that forms the basis of how the core developers are selected: trust. There is no test or anything like that; once you’ve demonstrated that you contribute good code, that you understand the way the various parts fit together, that you can take constructive criticism of your code, and that you can offer constructive criticism on others’ code, eventually one of the existing core members nominates you to become core. The other cores affirm that choice if they agree. Rarely have I seen anyone nominated for core who was rejected by the group; instead, the reaction usually is along the lines of “oh, I thought they were core already!”. As one of my goals in the coming year is to once again become a core Nova developer, getting to meet much of the current core team was a great step in that direction.

And lastly, while the discussions about priorities for the Kilo cycle were lively, there was almost none of the polarizing disagreements that were part of Nova’s early days. I believe that Nova has reached a maturity level where everyone involved can see where the weak points are, and agree on the need to fix them, even if opinions on just how to do that differed. A great example was the discussion on what to do about Cells: do we fix the current approach, or do we shift to a different, simpler approach that will get us most, but not all, of what the current code can do, but with a cleaner, more maintainable design. After a few minutes of discussion the latter path was chosen, and we moved on to discussing how to start making that change. While I miss the fireworks of previous summit sessions, I much prefer the more cooperative atmosphere. We really must be growing up!

by ed at November 11, 2014 04:03 PM

Percona

OpenStack Live Call for Proposals closes November 16

The OpenStack Live conference in Silicon Valley (April 13-14, 2015) will emphasize the essential elements of making OpenStack perform better with emphasis on the critical role of MySQL and Trove. If you use OpenStack and have a story to share or a skill to teach, we encourage you to submit a speaking proposal for a breakout or tutorial session. The OpenStack Live call for proposals is your chance to put your ideas, case studies, best practices and technical knowledge in front of an intelligent, engaged audience of OpenStack users. If you are selected as a speaker, you will receive one complimentary full conference pass. November 16th is the last day to submit.

We are seeking submissions for both breakout and tutorial sessions on the following topics:

  • Performance Optimization of OpenStack
  • OpenStack Operations
  • OpenStack Trove
  • Replication and Backup for OpenStack
  • High Availability for OpenStack
  • OpenStack User Stories
  • Monitoring and Tools for OpenStack

All submissions will be reviewed by our highly qualified Conference Committee:

  • Mark Atwood from HP
  • Rich Bowen from Red Hat
  • Andrew Mitty from Comcast
  • Jason Rouault from Time Warner
  • Peter Boros from Percona

If you don’t plan to submit a speaking proposal, now is a great time to purchase your ticket at the low Super Saver rates. Visit the OpenStack Live 2015 conference website for full details.

OpenStack Live 2015 April 13-14

The post OpenStack Live Call for Proposals closes November 16 appeared first on MySQL Performance Blog.

by Terry Erisman at November 11, 2014 03:20 PM

Opensource.com

Open source accelerating the pace of software

What's really accelerating today's pace of change in software is the combinations of many open source parts building on and amplifying each other. It's a dynamic that just isn't possible with proprietary software.

by ghaff at November 11, 2014 12:00 PM

Brian Curtin

OpenStack SDK Post-Summit Update


This is a long post about the OpenStack SDK. It even has a Table of Contents.

Current Project Status

The OpenStack SDK is quickly heading toward being usable for application developers. Leading up to the OpenStack Summit we had a reasonably complete Resource layer and had been working on building out a higher-level interface, as exposed through the Connection class. As of now, first cuts of a high-level interface have implementations in Gerrit for most of the official programs, and we're working to iterate on what we have in there right now before expanding further. We also had an impromptu design session on Thursday to cover a couple of things we'll need to work through.

Project Architecture

At the lowest level, the authentication, session, and transport pieces have been rounded out and we've been building on them for a while now. These were some of the first building blocks, and having a reasonably common approach that multiple service libraries could build on is one of the project goals.

Session objects are constructed atop Authenticators and Transports. They get tokens from the Authenticator to insert into your headers, get endpoints to build up complete URLs, and make HTTP requests on the Transport, which itself is built on top of requests and handles all things inbound and outbound from the REST APIs.

http://i.imgur.com/A2U6yc4.png

Poorly drawn version of what we're doing

On top of that lies the Resource layer, a base class implemented in openstack/resource.py, which aims to be a 1-1 representation of the requests or responses the REST APIs are dealing with. For example, the Server class in openstack/compute/v2/server.py inherits from Resource and maps to the inputs and outputs of the compute service's /servers endpoint. That Server object contains attributes of type openstack.resource.prop, which is a class that maps server-communicated values, such as mapping the accessIPv4 response body value to an attribute called access_ipv4. This serves two purposes: one is that it's a place we can bring consistency to the library when it comes to naming, and two is that props have a type argument that allows for minimal client-side validation on request values.

Resource objects are slightly raw to work with directly. They require you to maintain your own session (it's the first argument of Resource methods), and they typically only support our thin wrappers around HTTP verbs. Server.create will take your session and then make a POST request populated with the props you set on your object.

On top of the Resource layer is the Connection class, which forms our high-level layer. Connection objects, from openstack/connection.py, tie together our core pieces - authentication and transport within a session - and expose namespaces that allow you to work with OpenStack services from one place. This high-level layer is implemented via Proxy classes inside of each service's versioned namespace, in their _proxy.py module.

Right now many of these Proxy implementations are up for review in Gerrit, but openstack.compute.list_flavors is currently available in master. It builds on the openstack.compute.v2.flavor Resource, simply calling its list method inside list_flavors and passing on the Session that compute was initialized with.

What the high-level looks like

There are a bunch of example scripts in the works in the Gerrit reviews, but some of what we're working on looks like the following.

Create a container and object in object storage:

from openstack import connection
conn = connection.Connection(auth_url="https://myopenstack:5000/v3",
                             user_name="me", password="secret", ...)
cnt = conn.object_store.create_container("my_container")
ob = conn.object_store.create_object(container=cnt, name="my_obj",
                                     data="Hello, world!")

Create a server with a keypair:

from openstack import connection
conn = connection.Connection(auth_url="https://myopenstack:5000/v3",
                             user_name="me", password="secret", ...)
args = {
    "name": "my_server",
    "flavorRef": "big",
    "imageRef": "asdf-1234-qwer-5678",
    "key_name": "my_ssh_key",
}
server = conn.compute.create_server(**args)
servers = conn.compute.list_servers()

Where we're going

General momentum has carried us into this Connection/Proxy layer, where we have initial revisions of a number of services, and by default, we'll just keep pushing on this layer. I expect we'll iterate on how we want this layer to look, hopefully with input from people outside of the regular contributors. Outside of that, results from conversations at the Summit will drive a couple of topics.

  1. We need to figure out our story when it comes to versioning APIs at the high level. Resource classes are under versioned namespaces, and even the Proxy classes that implement the high level are within the same versioned namespace, but we currently expose high level objects through the Connection without a version, as seen in the above examples.

    On one hand, it's pretty nice to not have to think about versions for APIs that only have a v1, but that won't last. Along with that, we're working in a dynamic language on growing APIs. Not pinning to a version of the interface is going to result in a world of pain for users.

  2. We need to think about going even higher level than what we have now. Monty Taylor's shade library came up both at his "User Experience, SDKs" design session and during the impromptu OpenStack SDK session we had, and once we get more of the Connection level figured out, we're going to look at how we can tackle compound operations.

  3. Docs, docs, docs. Terry Howe has been putting in a lot of work on building up documentation, and now that we're moving along more smoothly up the stack, I think we'll soon hit the point where code changes will require doc changes.

    I'm also working up a "Getting Started" guide for the project, as we have some people interested in contributing to the project. Thursday's python-swiftclient session ended in that team being interested in shifting their efforts to this SDK, so we need to make sure they can easily get going and help improve the client and tool landscape.

    For the time being, doc builds will appear at http://python-openstacksdk.readthedocs.org/

  4. PyPI releases. Terry put together a version of the package that could reproduce the examples we showed in our talk on Monday, comprised of master plus a couple of his in-flight reviews for compute and network and mine for object store. As we progress and want to try things out, and to enable people to try along with us, we'll probably keep cutting more releases under 0.1:

    pip install python-openstacksdk
    

    Keep in mind this is absolutely a work-in-progress, and API stability isn't yet a thing, so check it out and let us know what you think, but don't build your business on it.

  5. Need to get back into some of the administrivia that we've been avoiding recently in the name of expanding the Resource layer. The wiki page could use a refresh to reflect where we're at and what's going on. We need to start using more blueprints and the issue tracker, especially as more people become interested in joining the project. We were able to work without most of that when it was just a couple of us wanting to get this off the ground, but we need to make better use of the tools around us.


Overall, the SDK is coming along nicely. We had some good talks at the Summit and got a lot of interest from people and projects, so the coming months should be another good period of growth for us.

Summit Presentation with Terry Howe

On Monday, Terry Howe and I presented "Getting Started with the OpenStack SDK", a 40 minute talk on why we're doing this, how we're doing it, and where the project is going. Both of us had presented at conferences before, but never jointly, so it was an interesting first time experience, and it seemed to work well. The general gist is that I covered the most of the "why" and "where", and Terry covered most of the "how".

The first half focuses on three key ideas that brought this SDK into being: fragmentation, duplication, and inconsistency in the library and tooling landscape around OpenStack. I dove into each of those areas with examples of why they're an issue, such as how many different clients there are and how different it can be to work with each of them. From there I covered some of the goals we have while trying to improve those issues, such as building solid foundations and providing consistent user interfaces.

The second half focuses on showing where we're at and what can be done. Terry took a working example that creates a network, sets up various security group rules, starts up a server, attaches a floating IP, and results in a running Jenkins server. After that, he dove into some of the internals, showing how session, transport, and authenticator work together, and explaining the resource and proxy levels.

After we were done, we had a good 10 minutes of questions, and about another 20 minutes of conversation in the hall afterward. A university professor came up to me to say he wants to use the SDK with his students, which was awesome to hear.

Check out the video here - 42 minutes total.

SDK Conversations at the Summit

In the Marketplace

While spending most of Monday through Wednesday in the Rackspace booth in the marketplace, I talked to a lot of people about the SDK project. It's fun to give away t-shirts and raffle off prizes at conferences, but I'm there to talk with people about the experiences they have with Rackspace, OpenStack, and other platforms, and to advocate for the first two.

I've gotten the SDK "elevator pitch" down fairly well by now for when people turn around and ask what I do. The good thing is that no one thought it was a bad idea! People were excited about various parts of it, mostly about reducing fragmentation by offering all of the libraries in one package, and many were excited about coming up with more consistent interfaces across services.

Overall it was a lot of small conversations that ended with a smile, agreeing that we're both doing fun stuff and that it's all getting better.

Impromptu Design Session

Although we didn't have a session on the schedule, we created one of our own Thursday morning in the Le Meridien lobby. Dean Troyer, Jamie Lennox, Terry Howe, Ken Perkins, and I gathered to talk for about 40 minutes on where we're going. We talked about two main points: an even higher level than we currently provide, and our multi-version story.

Even Higher Level

Currently we provide an abstraction that gets a user to the point where they can call, e.g., object_store.list_containers(), and they'll receive a list of containers. We've taken care of the lower-level plumbing bits like authentication, session, and transport within the Connection class, which exposes the object_store namespace, containing the higher-level view on top of the account and container resource level.
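
To make that concrete, here's a minimal sketch of that high-level usage. The Connection constructor arguments are assumptions on my part and may not match the in-progress API, so treat it as illustrative only.

import openstack

# A minimal sketch of the high-level usage described above. The
# constructor arguments are assumed and may differ from the real
# signature; only object_store.list_containers() comes from the text.
conn = openstack.Connection(auth_url='http://example.com:5000/v2.0',
                            user_name='demo',
                            password='secret',
                            project_name='demo')

# The object_store proxy sits on top of the account/container resources;
# authentication, session, and transport are handled by the Connection.
for container in conn.object_store.list_containers():
    print(container)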

It was mentioned during this session, and during Monty Taylor's user experience session, that Monty is working on a project called Shade. Shade flies at a higher level where you say "give me a working server" and it does whatever is necessary to make that happen. The tool aims to abstract away provider differences in order to complete the task, such as how Rackspace gives you a VM with a publicly accessible IP while HP VMs need to be added to a network and have a floating IP attached to them.

"Give me a server" is a pretty common first step for newcomers, so that's an obvious starting place. "Upload this directory to object storage" is another. If you have others, we'd love to know, and we'd love help to implement them. With where we're working right now, we're not yet on to provider specific plugins, so high-level multistep tasks on vanilla OpenStack are what we're looking for.

Multiversion APIs

At the high level within openstack.Connection, we're not currently making any attempt to expose multiple versions of a service's API. We support authenticating via either a v2 or v3 Keystone, and we support multiple versions of APIs at the resource layer, but you end up with high-level access to a set of unversioned service APIs. On one hand, that makes it fairly nice to work with methods on openstack.object_store, especially since there is currently only a v1 API, but should that actually have a v1 somewhere in there?

A point was brought up that we pin versions in other places, such as our requirements. We couldn't have an unversioned dependency in requirements.txt and expect our code to continue working against its APIs forever. When they go from v1 to v2, things will be different and potentially affect what we've coded against. If you've written against the v1 API, you probably want to stick with it until you've written and tested against the v2 API. As much as the unversioned namespace may feel more friendly, it's eventually going to cause pain.
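
For illustration, here's the difference between what a user writes today and what a pinned form might look like; the versioned spelling below is purely hypothetical and just one possible shape, assuming conn is the Connection from the earlier sketch.

# Today: the proxy is exposed without a version.
containers = conn.object_store.list_containers()

# A hypothetical pinned form, mirroring the versioned namespaces that
# the Resource classes already live under.
containers = conn.object_store.v1.list_containers()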

The "Improving python-swiftclient" Design Session

On Thursday, John Dickinson held a session on how to improve the python-swiftclient project. I'm not a contributor there, but was interested to see what they were planning to do and maybe chime in on getting a few more eyes on the SDK, especially since I threw together a high-level Swift view.

Within the first few minutes, the bulleted list that the group had come up with looked a lot like the bulleted lists we came up with to start the SDK project. They have a lot of work they want to be doing, and we're already on our way doing much of the same. Dean Troyer beat me to the punch of grabbing a mic and asking if it's possible to put some of these efforts behind both OpenStackClient and the SDK.

Dean and I then gave very quick talks on where OSC and SDK fit in to what they were aiming to accomplish. From there, the conversation shifted towards 'Can we accomplish this over there?' and 'Do we want to accomplish this over there?' The answer to both turned out to be 'yes'.

Coming out of this meeting, we're going to have to quickly bulk up our documentation of the lower-level parts so we can bring these folks up to speed, as one of the first topics was their HTTPConnection class, and the second was from Jamie Lennox on using Keystone's sessions.

We're also going to need to bulk up a "Getting Started" guide for new contributors, coming out of this session and a few other talks I've had. Welcome everyone!


If you got this far, wow. See me at a conference some time for a high five.


November 11, 2014 11:00 AM

Florian Haas

Get trained on OpenStack Juno!

Our first OpenStack Juno based open enrollment training is in full swing this week, with Syed Armani teaching in Israel on behalf of OpenStack Israel. And we've only just started!

Next week, I will be teaching a course (OpenStack für Profis, in German) over at Heinlein in Berlin, Germany. Details are at Heinlein's website; I hear there is one slot still open. So if you want to grab that, you'd better be quick!

Then the week of November 24, we're holding an open-enrollment class in Bangalore, India, consisting of Cloud Fundamentals, Networking, and Security classes for OpenStack. Sign-up links are on our class schedule.

And in the first week of December, we'll be offering another open-enrollment class in Vienna, Austria, again covering Cloud Fundamentals, Networking, and Security. And as with Bangalore, you'll find our sign-up links on our class schedule.

We still have limited seats left in the Bangalore and Vienna classes, so sign up today and catch all there is to know about OpenStack Juno!


by florian at November 11, 2014 08:21 AM

Opensource.com

The Kilo OpenStack Summit in review

Interested in keeping track of what's happening in the open source cloud? Opensource.com is your source for what's happening right now in OpenStack, the open source cloud infrastructure project.

by Jason Baker at November 11, 2014 08:00 AM

Terry Wilson

Replacing ovs-vsctl calls with native OVSDB in neutron

Summary

I currently have a patch up for review that adds a new drop-in replacement for ovs_lib, using openvswitch's Python bindings to make OVSDB calls instead of running ovs-vsctl. Here is the spec that was approved for Juno, which I will need to update for Kilo.

Both the current ovs_lib and ovs-vsctl seem to scale quadratically with the number of ports on a system, whereas ovs_lib2 scales linearly.

Please take a look at the review and make suggestions. There’s still some stuff to do, but it should be in a testable state.

Benchmarking

Test setup

Test setup is just a devstack VM with the dummy network kernel module loaded and set to create 1000 dummy devices. Create /etc/modprobe.d/dummy.conf with:

options dummy numdummies=1000

and then:

sudo modprobe dummy

Baseline - bash and ovs-vsctl

First, let’s get rid of the necessity for using sudo by just quickly doing:

sudo chmod a+rwx /var/run/openvswitch/db.sock

Now we can test adding 100 ports w/o sudo overhead:

[terry@localhost neutron]$ ovs-vsctl del-br testbr -- add-br testbr
[terry@localhost neutron]$ time (for ((i=0;i<100;i++));do ovs-vsctl add-port testbr dummy${i};done)

real    0m1.389s

So that isn’t too bad, actually. What happens if we use sudo?

[terry@localhost neutron]$ ovs-vsctl del-br testbr -- add-br testbr
[terry@localhost neutron]$ time (for ((i=0;i<100;i++));do sudo ovs-vsctl add-port testbr dummy${i};done)

real    0m6.513s

So we’re about 5x slower just having to use sudo from the CLI. What about rootwrap?

[terry@localhost neutron]$ ovs-vsctl del-br testbr -- add-br testbr
[terry@localhost neutron]$ time (for ((i=0;i<100;i++));do sudo neutron-rootwrap /etc/neutron/rootwrap.conf ovs-vsctl add-port testbr dummy${i};done)

real    0m26.869s

Using sudo rootwrap is around 20x slower than the baseline of using no privilege escalation tool at all.

Now, what about adding 1000 ports? Does it scale linearly? Do we get around 13 seconds for adding 1000 ports with no sudo?

[terry@localhost neutron]$ ovs-vsctl del-br testbr -- add-br testbr
[terry@localhost neutron]$ time (for ((i=0;i<1000;i++));do ovs-vsctl add-port testbr dummy${i};done)

real    1m11.138s

No, we do not. ovs-vsctl dumps most of the database each time it runs, so the more ports already in the DB, the slower each successive call becomes; the total time for n ports therefore grows roughly quadratically rather than linearly.

Testing ovs_lib1 against ovs_lib2

Here is a simple script to benchmark ovs_lib1 against ovs_lib2.

import logging
import time

from eventlet import greenpool
from neutron.agent.linux.ovs_lib import ovs_lib as ovs_lib1
from neutron.agent.linux.ovs_lib import ovs_lib2

logging.basicConfig()

def add_and_delete(br, iface):
    br.add_port(iface)
    br.delete_port(iface)

# Greenthread pool so the 100 add/delete pairs run concurrently.
pool = greenpool.GreenPool()

# Time the same workload against the old ovs-vsctl based ovs_lib and the
# new native-OVSDB ovs_lib2.
for ovs_lib in (ovs_lib1, ovs_lib2):
    with ovs_lib.OVSBridge('test1', 'sudo') as br:
        start = time.time()
        for i in range(100):
            iface = "dummy%d" % i
            pool.spawn_n(add_and_delete, br, iface)
        pool.waitall()
        print ovs_lib.__name__, time.time() - start

which results in:

[terry@localhost neutron]$ python test.py
neutron.agent.linux.ovs_lib.ovs_lib 11.2306790352
neutron.agent.linux.ovs_lib.ovs_lib2 1.48559379578

So calling ovs-vsctl directly with sudo seems to be about 2x as fast as using ovs_lib1 to do the same thing, and ovs_lib2 is roughly the same speed as calling ovs-vsctl directly without sudo.

What about with 1000 ports instead? Does ovs_lib2 scale better than ovs-vsctl? Let’s bump the range(100) to range(1000) and remove the greenpool stuff (1000 spawns is just going to cause ovs-vsctl timeouts and open file descriptor errors) and see:

[terry@localhost neutron]$ python test.py 
neutron.agent.linux.ovs_lib.ovs_lib 169.324312925
neutron.agent.linux.ovs_lib.ovs_lib2 16.3264598846

Yes! ovs_lib2 does scale linearly. Although OVS's Python IDL library also caches the database, it only does so when it connects, and ovs_lib2 maintains and reuses that single connection.
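
For reference, the serial 1000-port variant of the script ends up roughly like this (the same benchmark as above with the greenpool removed and the range bumped):

import logging
import time

from neutron.agent.linux.ovs_lib import ovs_lib as ovs_lib1
from neutron.agent.linux.ovs_lib import ovs_lib2

logging.basicConfig()

# Serial variant: no greenthreads, 1000 add/delete pairs per implementation.
for ovs_lib in (ovs_lib1, ovs_lib2):
    with ovs_lib.OVSBridge('test1', 'sudo') as br:
        start = time.time()
        for i in range(1000):
            iface = "dummy%d" % i
            br.add_port(iface)
            br.delete_port(iface)
        print ovs_lib.__name__, time.time() - start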

Caveats

If we go this route, we'll have to talk about the recommended way of handling privileges. This could be done by connecting to openvswitch over TCP/SSL and controlling access with firewall rules, and/or by having deployment tools and packaging modify the owner and permissions of the OVSDB unix socket.

Conclusion

Even without the overhead of sudo or rootwrap, there is room for dramatically improving performance of OVSDB operations by moving away from calling ovs-vsctl. Please take a look at the review even though I have it marked as Work In Progress. I crave your feedback!

November 11, 2014 12:07 AM