May 24, 2015

Elizabeth K. Joseph

Liberty OpenStack Summit days 3-5

Summiting continued! The final three days of the conference offered two days of OpenStack Design Summit discussions and working sessions on specific topics, and Friday was spent doing a contributors meetup so we could have face time with people we’re working with on projects.

Wednesday began with a team breakfast, where over 30 of us descended upon a breakfast restaurant and had a lively morning. Unfortunately it ran a bit long and made us a bit late for the beginning of summit stuff, but the next Infrastructure work session was fully attended! The session sought to take some next steps with our activity tracking mechanisms, none of which are currently part of the OpenStack Infrastructure. Currently several different types of stats are being collected: reviewstats, which is hosted by a community member and focuses specifically on reviews; the reports produced by Bitergia (here), which are somewhat generic but help compare OpenStack to other open source projects; and Stackalytics, which is crafted specifically for the OpenStack community. There seems to be value in hosting various metric types, mostly so comparisons can be made across platforms if they differ in any way. The consensus of the session was to first move forward with moving Stackalytics into our infrastructure, since so many projects find such value in it. Etherpad here: YVR-infra-activity-tracking


With this view from the work session room, it’s amazing we got anything done

Next up was QA: Testing Beyond the Gate. In OpenStack there is a test gate that all changes must pass in order for a change to be merged. In the past cycle periodic and post-merge tests have also been added, but it’s been found that if merging code isn’t dependent upon these passing, not many people pay attention to these additional tests. The result of the session is a proposed dashboard for tracking these tests so that there’s an easier view into what they’re doing and whether they’re failing, empowering developers to fix them up. Tracking of third party testing in this, or a similar, tracker was also discussed as a proposal once the infra-run tests are being accounted for. Etherpad here: YVR-QA-testing-beyond-the-gate

The QA: DevStack Roadmap session covered some of the general cleanup that typically needs to be done in DevStack, but then also went into some of the broader action items, including improving the reliability of the CentOS tests run against it that are currently non-voting, pulling some things out of DevStack to support them as plugins as we move into a Big Tent world, and working out how to move forward with Grenade. Etherpad here: YVR-QA-Devstack-Roadmap

I then attended QA: QA in the Big Tent. In the past cycle, OpenStack dropped the long process of being accepted into OpenStack as an official project and streamlined it so that competing technologies are now all in the mix; we’re calling it the Big Tent, as we’re now including everyone. This session focused on how to support QA needs now that OpenStack is not just a slim core of a few projects. The general idea from a QA perspective is that they can continue to support the things-everyone-uses (nova, neutron, glance… an organically evolving list) and improve pluggable support for projects beyond that so they can help themselves to the QA tools at their disposal. Etherpad here: YVR-QA-in-the-big-tent

With sessions behind me, I boarded a bus for the Core Reviewer Party, hosted at the Museum of Anthropology at UBC. As party venues go, this was a great one. The museum was open for us to explore, and they also offered tours. The main event took place outside, where they served design-your-own curry seafood dishes, bison, cheeses and salmon. Of course no OpenStack event would be complete without a few bars around serving various wines and beers. There was an adjacent small building where live music was playing and there was a lot of space to walk around, catch the sunset and enjoy some gardens. I spent much of my early evening with friends from Time Warner Cable, and rounded things off with several of my buddies from HP. This ended up being a get-back-after-midnight event for me, but it was totally worth it to spend such a great time with everyone.

Thursday morning kicked off with a series of fishbowl sessions where the Infrastructure team was discussing projects we have in the works. First up was Infrastructure: Zuul v3. Zuul is our pipeline-oriented project gating system, which currently works by facilitating the running of tests and automated tasks in response to Gerrit events. Right now it sends jobs off to Gearman for launching via Jenkins to our fleet of waiting nodes, but we’re really just using Jenkins as a shim here, not taking advantage of the built-in features that Jenkins offers. We’re also in need of a system that better supports multi-tenancy and multi-node jobs and which can scale as OpenStack continues to grow, particularly with the Big Tent. This session discussed the end game of phasing out Jenkins in favor of a more Zuul-driven workflow and more immediate changes that may be made to Nodepool and smaller projects like Zuul-merger to drive our vision. Etherpad here: YVR-infra-zuulv3

Everyone loves bug reporting and task tracking, right? In the next session, Infrastructure: Task tracking, that was our topic. We did an experiment with the creation of Storyboard as our homebrewed solution to bug and task tracking, but in spite of valiant efforts by the small team working on it, they were unable to gain more contributors and the job was simply too big for the size of the team doing the work. As a result, we’re now back to looking at solutions other than Canonical’s hosted Launchpad (which is currently used). The session went through some basic evaluation of a few tools, and at the end there was some consensus to work toward bringing up a more battle-hardened and Puppetized instance of Maniphest (from Phabricator) so that teams can see if it fits their needs. Etherpad here: YVR-infra-task-tracking

The morning continued with an Infrastructure: Infra-cloud session. The Infrastructure team has about 150 machines in a datacenter that have been donated to us by HP. The session focused on how we can put these to use as Nodepool instances by running OpenStack ourselves and adding that “infra-cloud” to the providers in Nodepool. I’m particularly interested in this, given some of my history with getting TripleO into testing (so I have deployed OpenStack many, many times!), and I’m in general eager to learn even more about production OpenStack deployments. So it looks like I’ll be providing Infra-brains to Clint Byrum, who is otherwise taking the lead here. To keep in sync with other things we host, we’ll be using Puppet to deploy OpenStack, so I’m thankful for the expertise of people like Colleen Murphy who just joined our team to help with that. Etherpad here: YVR-infra-cloud

Next up was the Infrastructure: Puppet testing session. It was great to have some of the OpenStack Puppet folks in the room so they could talk some about how they’re using beaker-rspec in our infra for testing the OpenStack modules themselves. Much of the discussion centered around whether we want to follow their lead, or do something else, leveraging our current system of node allocation to do our own module testing. We also have a much-commented-on spec up for proposal here. The result of the discussion was that it’s likely we’ll just follow the lead of the OpenStack Puppet team. Etherpad here: kilo-infra-puppet-testing

That afternoon we had another Infrastructure: Work session where we focused on the refactoring of portions of the system-config OpenStack module puppet scripts, and some folks worked on standing up the testing infrastructure that was talked about earlier. I took the opportunity to do some reviews of the related patches and help a new contributor do some review – she even submitted a patch that was merged the next morning! Etherpad for the work session here: YVR-infra-puppet-openstackci

The last session I attended that day was QA: Liberty Priorities. It wasn’t one I strictly needed to be in, but I hadn’t attended a session in room 306 yet, and it was the famous gosling room! The room had a glass wall that looked out onto a roof where a couple of geese had their babies, and they would routinely walk by and interrupt the session because everyone would stop, coo and take pictures of them. So I finally got to see the babies! The actual session collected the pile of to-do list items generated at the summit, which I got roped into helping with, and prioritized them. Oh, and they gave me a task to help with. I just wanted to see the geese! Etherpad with the priorities is here: YVR-QA-Liberty-Priorities


Photo by Thierry Carrez (source)

Thursday night I ended up having dinner with the moderator of our women of OpenStack panel, Beth Cohen. We went down to Gastown to enjoy a dinner of oysters and seafood and had a wonderful time. It was great to swap tech (and women in tech) stories and chat about our work.

Friday! The OpenStack conference itself ended on Thursday, so it was just ATCs (Active Technical Contributors) attending for the final day of the Design Summit. Things were much quieter and the agenda was full of contributor meetups. I spent the day in the Infrastructure, QA and Release management contributors meetup. We had a long list of things to work on, but I focused on the election tooling, which I followed up about on the mailing list and then later discussed in a chat with the author of the proposed tooling. My afternoon was spent working on the translations infrastructure with Steve Kowalik, who works with me on OpenStack infra, and Carlos Munoz of the Zanata team. We were able to work through the outstanding Zanata bugs and make some progress on how we’re going to tackle everything. It was a productive afternoon, and it’s always a pleasure to get together with the folks I work with online every day.

That evening, as we left the closing conference center, I met up with several colleagues for an amazing sushi dinner in downtown Vancouver. A perfect, low-key ending to an amazing event!

by pleia2 at May 24, 2015 02:15 AM

May 23, 2015

Emilien Macchi

Puppet OpenStack plans for Liberty

Our Vancouver week just ended and I think it was a very productive Summit for the Puppet OpenStack folks.
This blog post summarizes what we did this week, and what we plan for the next release.

Vancouver-Canada

Releases and master branch policy

https://etherpad.openstack.org/p/liberty-summit-design-puppet-master-branch

So we officially decided to support the latest version of OpenStack provided by upstream packages (Ubuntu UCA and CentOS 7 RDO) in our modules’ master branches. That means if you submit a change to a module, it will have to pass our integration testing (running Beaker).

Examples:

  • if your change updates configuration to support the latest version of OpenStack provided by recent packaging and breaks a previous OpenStack version, that’s fine, because we have stable branches that aim to support previous versions of OpenStack.
  • if your change deletes or adds a parameter, it has to remain backward compatible for at least two releases.
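
For anyone curious what that Beaker integration testing looks like in practice, here is a rough sketch of how a beaker-rspec acceptance run is commonly invoked from a module checkout. It is a generic illustration rather than our exact gate job; the BEAKER_set/BEAKER_debug environment variables, the spec/acceptance path and the “default” nodeset name follow common beaker-rspec conventions and are assumptions here:

$ bundle install                                                           # install beaker-rspec and friends from the module's Gemfile
$ BEAKER_set=default BEAKER_debug=yes bundle exec rspec spec/acceptance    # run the acceptance suite against the chosen nodeset

In the gate, a change that breaks deployment of the latest packaged OpenStack will fail this kind of run before it can merge.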

The whole spec will be documented here: https://review.openstack.org/#/c/180141/

We are working on having an easy way to automate backports to stable branches with https://review.openstack.org/#/c/175849/. We hope it will improve our stable branches so people can actually use them in production without having to stay close to master.

 

Puppet module compliance

https://etherpad.openstack.org/p/liberty-summit-design-puppet-compliant

Moving under the big tent is very exciting but is also challenging. One of our biggest concerns today is consistency across our modules and also with other OpenStack projects.

Most of the requirements for a compliant module are inspired by Puppet Labs criteria, though it will also require some adjustments for our conventions and the way we work with the OpenStack community.

A blueprint will come with the specs a module is expected to meet for us to say it is “compliant”. Also, we are in the process of moving some modules (that are already compliant) under the big tent, while some others will require more work.

To help in this process, we have some folks working on tools to generate Puppet modules easily and keep common files synchronized.

 

Handle default OpenStack parameters values

https://etherpad.mozilla.org/liberty-summit-design-puppet-parameter-defaults

We will try to make sure our default parameters are set to ‘undef’ and that our providers leave the corresponding settings absent when ‘undef’ is given. That way, if no configuration is provided, the service will run with default OpenStack parameters and rely on upstream configuration.

This is still work in progress and some feedback will be required before changing our modules.

 

CI plans

Puppet OpenStack modules now have integration testing working on Ubuntu Trusty & CentOS 7 (both running Kilo).

Our next steps are:

Also we are working with OpenStack Infra, who also wants to run testing against their modules: https://etherpad.openstack.org/p/kilo-infra-puppet-testing

They will also start using Beaker, so we will have consistency between what OpenStack Infra and the Puppet modules team do, which is very good news for our collaboration and community work. Technically speaking, we might have to adjust some features we would want to have in Beaker so we can get good test coverage.

Last but not least, we will enable voting for Puppet 4.x very soon, since our modules already support this Puppet version.

You can have a look at the CI status.

 

No puppet-oslo

https://etherpad.openstack.org/p/liberty-summit-design-puppet-oslo

We were wondering how to configure Oslo parameters (example: oslo.messaging) and it came up that it was not possible to have a single puppet-oslo module since all the Oslo projects have their own releases; having multiple puppet-oslo-* modules would have been too expensive to maintain.

So for now, we are going to update Puppet OpenStack modules to support Oslo messaging with new configuration sections & parameters.

At the end of Liberty, we will have enough experience to know whether to continue that way or whether external modules are needed.

 

Open topics

  • Some work will be done in Puppet to correctly support systemd, and also to fix puppet-swift init scripts management.
  • Database sync (ex: nova-manage db sync) will be optional for all modules.
  • Our blueprints will be exposed on http://specs.openstack.org

 

Puppet dinner

We had fun (and ate a great steak) at the end of the week. Talking about Puppet, our jobs, our life… well, it made me think I’m really proud to work in a team like that.

CFqxgIaVIAA9dQT.jpg:large

I also take the opportunity to thank our contributors and the OpenStack community for this awesome summit.

Hopefully see you all in Tokyo!

by Emilien at May 23, 2015 09:53 AM

OpenStack Superuser

Revamping Ceilometer, Federated Identity for research and why OpenStack is doomed

An OpenStack community some 6,000 people strong gathered in Vancouver this week for the Summit. It’s impossible to sound out everyone, but here’s a sample of voices…

How did the Summit go for you?

alt text here

Giuseppe Andronico, technical researcher, Italy’s National Institute for Nuclear Physics (INFN)

We’ve been coming to the Summits since Hong Kong, but this Summit was very interesting because there were several developments that are potentially very useful for us. We’re working on developing a cloud infrastructure for our research. Right now, we’re working on two main fronts. One, we need a multi-region cloud because we operate throughout Italy and these clouds need to be managed together.

And we also need to develop the concept of federation, because we cooperate with a lot of other institutions around the world. So we need a federation with other clouds to share resources and data.

We’re working hard on this and hope to have support from the Foundation and other users for best practices in setting up our clouds…We also hope to see what others are doing so we can gain new perspectives on our infrastructure.

What was your favorite talk?

Matt Joyce, Big Switch

I really liked Andy Smith’s session “Openstack Is Doomed And It Is Your Fault.” He’s one of the guys who helped start the project, he and Vish Ishaya wrote the code base for Nova…He’s basically saying that it’s been five years, it’s time to do a refresh. I think he’s probably correct. It’s an opportunity for the community to think about how to rewrite some of the stuff that’s been written. It’s a good thing to look in a mirror and say hey, “Here are our flaws, how can we re-approach solving them?”

There’s an emergence of containers, there’s an emergence of software-defined network stuff and a bunch of technologies that have been the result of this drive and that have given us the chance to reevaluate how we might reengineer this stuff going forward. The next few years will see a few people go a little quiet but some very cool stuff is going to come out of it.

What’s your biggest takeaway from the Summit?

Pranav Salunke, Suse Linux

I like the way Ceilometer is going to be revamped. The current design has inherent flaws - reporting takes more juice from the cluster than actual workloads. I’m pleased about that. The blueprint sessions in the Design Summit were also useful…

What's unique about this Summit for you?

alt text here

Michael Still, senior software development manager at Rackspace and former project team lead (PTL) of Nova.

Over time, the Nova team is better at reaching consensus than we used to be…Now we tend to have better thought through, more respectful conversations. That’s a sign of maturity.

The Nova Ops feedback session was really interesting — because I didn’t really have anything to complain about! There was a little fiddly bit on the edge that’s kind of bothering us, but it turns out that Nova’s not that bad to run anymore. It’s been cool to get out of this firefighting mode and start talking about how to make things better.

Why did the conflict die down?

We talk more. We have specs, for example, and we can have concrete discussions with feedback from users and operators. Previously, we’d walk into a session cold and it’d be like, ‘let’s do this from first principles.’

We set required reading for sessions now. We’ve gotten better at communicating, despite the fact that the team has grown.

What was your favorite session?

alt text here

Sayali Lunkad, consultant, and core reviewer OpenStack training guides

I attended a couple of Neutron sessions that were very good. One in particular was “Neutron L2 and L3 agents: How They Work and How Kilo Improves Them,” it covered all the differences and gave a great idea about the new features, how they work and what work still needs to be done, in case you want to get involved. It was a really nice session.

How many Summits have you attended?

Edgar Magana, cloud operations architect, Workday

alt text here

I’ve been to the last nine! My first one was the Santa Clara Summit, 2011.

What’s the biggest change you’ve seen?

Well, they used to be much more personal. It was so unique the way we came together in the Design Summits, discussing what we wanted to see. We all knew each other very well.

On the conference side, wow! How many different ways there are to do things is just amazing. You have somebody who uses a certain operating system with certain networking technology tools and a certain use case — and the number of use cases that you can provide with OpenStack is just amazing.

I believe the most important part of OpenStack is the ecosystem that’s been created around it. Five years on, you really see the growth — and not just because there are vendors who want to sell stuff.

How do you get the most out of a Summit?

The last two or three Summits, I’ve noticed that it’s been too much. I wanted to attend everything but I can’t! For every session I wanted to attend, there were another three that I missed.

This time, I sat down with my five-person team to decide beforehand what we would cover, it was all about ‘divide and conquer!’ You can always watch the videos later, but the experience of being in the room is unique, that’s why you come.

What would you change?

It’s hard to extend the conference over more days - people get tired - but maybe focus more on the conference and the expo. My colleagues on the dev environment side will kill me for saying this: but maybe we should evolve and like the Ubuntu community did virtualize more of that? Although, right now I’m looking at this table of Neutron developers sitting together — and that’s probably priceless.

by Nicole Martinelli at May 23, 2015 12:36 AM

May 22, 2015

OpenStack Superuser

OpenStack users share how their deployments stack up

Some of OpenStack’s founding projects -- including Nova, Keystone, Glance, Horizon and Cinder -- continue to be the most popular. That may be changing, however. For starters, the inclusion of bare-metal provisioning project Ironic in the integrated release led to an increase across all deployment stages, including production, test and running a proof-of-concept (POC). Other projects, including Heat, Ceilometer, Swift and Trove, are also gaining in adoption, according to the recent User Survey.

Kernel-based Virtual Machine (KVM) remains the most popular hypervisor, though a rush of bare metal deployments, up 5 percent, was seen over the past six months.

Puppet continues to retain a strong lead as deployment tool of choice, with more than half of production deployments using Puppet as one of their tools.

Those are some of the key takeaways from the fifth consecutive survey conducted by the OpenStack User Committee. The User Committee sounds out people working with OpenStack ahead of each Summit. These results are from voluntary surveys answered online between March 9 and April 16, 2015. The opt-in survey is not an exhaustive catalog of OpenStack deployments, but provides valuable intelligence on usage patterns and technology decisions in real-world deployments.

This is the second in a three-part series analyzing the most recent OpenStack user survey. Part one focused on demographics and business drivers. This post is an analysis of deployment details, including project usage, size trends and tools. Part three will share app developer insights.

Project Usage

Ironic’s inclusion in the integrated release has spurred its popularity, with an even spread of users running it in production (up 4 percent), testing it (up 12 percent), or running a proof-of-concept (up 4 percent). The large number of production deployments of Designate at this early stage indicates a real need being satisfied by this component. Nova, Keystone, Glance, Horizon and Cinder remain the most popular projects. However, all others show significant gains in production deployments compared to the last survey: Heat (up 15 percent), Ceilometer (up 10 percent), Swift (up 9 percent), Neutron (up 9 percent), Trove (up 3 percent).

alt text here

Size Trends

On the size front: everything keeps getting bigger. The average response to questions about number of compute nodes, cores, instances, IP addresses and storage sizes increased across the board for the large-scale end of things.

alt text here

alt text here

alt text here

alt text here

alt text here

alt text here

alt text here

Workloads

There were significant changes to the workloads question in this survey, breaking out workloads into three categories: “service,” “enterprise” and “horizontal.”

alt text here

alt text here

alt text here

Deployment Profiles

Hypervisors

KVM remains the most popular hypervisor by a long way, though a rush of bare metal deployments, up 5 percent, was seen over the past six months, with the use of container technologies also increasing. Over the same period, there were small decreases in the use of KVM and VMware, with small increases in Hyper-V and Xen-based hypervisors.

alt text here

Block Storage Drivers

Ceph continues to be the most popular storage driver, gaining 7 percent since the last survey. A difference in this survey compared to the last is that this question was only asked of respondents who had previously noted that they were using the Block Storage (Cinder) project. This could explain the large reductions in the use of LVM, which is a commonly used storage technology regardless of Cinder usage.

alt text here

Network Drivers

In terms of network drivers, Open vSwitch returns similar numbers to six months ago - 46 percent of production deployments - with nova-network still holding strong in second place at 24 percent of all production deployments. Linux bridge has gained 5 percent, remaining in third. Increases were also seen in the use of most vendor-sponsored drivers, especially in regard to proofs-of-concept, although the Cisco driver showed a decline across all deployment categories. About 4 percent of respondents are on a driver not shown on this graph.

alt text here

alt text here

alt text here

alt text here

There appears to be no real relationship between the use of nova-network vs. neutron in terms of scale of the compute deployment - both are seen across small and large deployments in similar percentages.

alt text here

Identity Drivers

SQL continues to be the most common, with Lightweight Directory Access Protocol (LDAP) and Active Directory retaining their spots in second and third compared to six months ago. A significant change in this survey is the increase in use of the Templated backend, now used in 5 percent of production deployments.

alt text here

Deployment Tools

More than 50 percent of production deployments are using Puppet as one of their tools, and this trend carries through to the community, with the puppet-openstack modules being recognised as an important project. Ansible gained another 5 percent of production deployments, with more than a few reporting that they use a combination of Puppet and Ansible to manage their cloud. Fuel has leapt up from the “other” category last survey to third place, narrowly beating Chef, which suffered a decline in respondents this survey (down 12 percent). Interest in CFEngine and Crowbar was also lower than last round. On the devstack-in-production front: only three respondents reported this odd usage pattern on this occasion, and two of them are also using other tools (Puppet and "other tool", and Puppet and Saltstack, respectively), so it’s probably safe to assume they were talking about the development area of their production cloud.

alt text here

Operating Systems

alt text here

Over the past six months, the share of production CentOS-based deployments has risen in popularity - gaining about 10 percent - while Red Hat Enterprise Linux and Ubuntu production deployments have fallen by a similar amount. This is accompanied by a corresponding drop in Dev/QA and Proof of Concept deployments on CentOS, down 10-15 percent compared with November last year - potentially indicating these came to fruition. Ubuntu gained 5 percent in proof-of-concept deployments, which have often converted into production deployments in later surveys. Debian holds its own in fourth place, returning similar numbers to six months ago. About 2 percent of production deployments use an operating system not mentioned on this graph.

Compatibility APIs

Production usage of the EC2 (up 2 percent) and S3 (up 5 percent) APIs has increased over the past half year, with OCCI (down 2 percent) and GCE API (down 1 percent) usage remaining low. The increase in “other compatibility API” in this round is due to this question being accidentally marked as mandatory for a short period of time - most entries here are simply “none.”

alt text here

Spotlight Questions

Each survey we ask several questions on specific topic areas of interest to the community.

Interest in Containers

This was the first time users were sounded out about this hot topic. Of the users who expressed interest in containers, below is the breakdown of the particular projects they are interested in.

alt text here

Packaging

At recent ops meetups, there have been a number of discussions on distribution packaging for OpenStack. When asked, many operators noted that they are creating their own packages. To further assess this across larger numbers, we added two additional questions to the survey this time: “What packages does this deployment use?” and, for those who answered that they were modifying or building packages, we also asked why.

alt text here

The result of the first question shows the different audiences we reach with the ops meetups and the user survey. The majority of users (68 percent) are able to run unmodified packages. However, the fact that the remaining 32 percent of users need to modify or build their own packages is worth investigating further.

The good news is that the level of packaging bugs is relatively low - fewer people reported issues here. However, there was a strong result that the packaging process is too slow, or missing items that users need - whether a critical bug fix or a new feature that was added after the package was cut.

As one commenter noted, “Standard packages aren't updated on a cadence that meets ours. At times that's 'not quickly enough' and at times that's 'too quickly.'”

alt text here

Ceilometer

Although 43 percent of production deployments use Ceilometer, its adoption has not been rising as quickly as expected. So we asked deployers who weren’t using Ceilometer why they weren’t. There were dozens of comments related to stability and reliability, particularly at scale. A couple of comments noted that if a billing integration feature were included it might entice them to use it.

“often >10 minutes for a query”
“ reliable backend store (MongoDB is a lot of effort to maintain)”
“Fixing the data schema so it doesn;t explode in size.”
“Seems very unstable for very large deployments.”
“For it to actually work at scale- API queries take ages even behind a large multi-node Mongo cluster.”
“If it would work reliably and with decent performance- and we could trust in its development path.”
“It requires too much horsepower and maintenance for most private cloud situations.”
“Scalability issue: on our QA workload were generating so many data so default backend can’t even work with.”
“We don’t see Ceilometer as an operator tool. We see it as a tenant monitoring service”
“Were using it - but would like to see more rrd like capabilities so it is more stable.”

The OpenStack User Committee is led by Subbu Allamaraju, Tim Bell and Jon Proulx. This core group provides oversight and guidance to a number of working groups that target specific areas for improvement.

by OpenStack User Committee at May 22, 2015 11:05 PM

OpenStack application developers share insights

Application developers working with OpenStack know what they want. Most are looking for clear, accurate documentation with emphasis on detailed working examples so they can get their jobs done.

That’s one of the key takeaways from the fifth consecutive survey conducted by the User Committee. The User Committee sounds out people working with OpenStack ahead of each Summit. These results are from voluntary surveys answered online that were created or updated between March 9 and April 16, 2015. The opt-in survey is not an exhaustive catalog of OpenStack deployments, but provides valuable intelligence on usage patterns and technology decisions in real-world deployments.

This is the third in a three-part series analyzing the most recent OpenStack user survey, focusing on app developer insights. Part one focused on demographics and business drivers; part two provided an analysis of deployment details, including project usage, size trends and tools.

App Dev Insights

Some 230 app developers were asked for their feedback about OpenStack, starting with the software development kit (SDK) popularity contest. This is important, since the documentation team can use the information to prioritize code examples. It’s worth noting that libcloud was unintentionally omitted from the last survey. This has been fixed, and it has since shot to the top SDK usage position after the OpenStack clients, closely followed by jClouds (no change), fog (no change) and a myriad of other toolkits not listed.

The SDK developers appear to have done well in attracting users - the number of respondents who wrote their own code to talk to the REST API directly dropped by about 7 percent compared to last survey.

alt text here

When it comes to documentation, OpenStack app developers want it all. They don’t really mind if it’s concise or not, but they want accurate, complete, clear, and searchable documentation with detailed working examples. Recently, the introductory tutorial, which also ranked highly, was completed for the libcloud API, and the documentation team has already identified the need for major changes to the API specification, so this should improve over the next cycle.

alt text here


Interacting with other clouds

Last survey, for the first time, we asked application developers whether they were using other clouds with their OpenStack deployment. Few answered, but of those who did, 80 percent were using another public cloud.

This time, this optional question was moved to a more prominent section and received many more results, but likely tilted away from that application developer-centric audience. With this larger set, it’s still clear that hybrid cloud is gaining traction: 35 percent of respondents noted the use of another cloud.

alt text here

Improvement underway

When asked what they struggled with when working with applications on OpenStack, users first noted general issues with cloud applications, for example:

"Educating enterprise developers to design cloud-aware architecture. Migrating traditional workloads into cloud-native OpenStack environment."

This was closely followed by annoyances with the various OpenStack APIs and the project split, for example:

"Code is not well factored. Too many overlapped functions between OpenStack projects. Communication does not use well-defined structured object but via dictionary. Functions and classes do not have documented schema at all (as most of python projects). Some API (e.g. Sahara) are not well defined."

To improve the developer experience of API users by converging the OpenStack API to a consistent and pragmatic RESTful design, the API working group creates guidelines that all OpenStack projects should follow for new development, and promotes convergence of new APIs and future versions of existing APIs.

We see the use of other OpenStack clouds rising - something previously hoped for as more public clouds enter the marketplace.

The OpenStack User Committee is led by Subbu Allamaraju, Tim Bell and Jon Proulx. This core group provides oversight and guidance to a number of working groups that target specific areas for improvement.

by OpenStack User Committee at May 22, 2015 11:04 PM

Top 10 quotes: State of the Stack

alt text here

Randy Bias likes to call them as he sees them.

The vice president of technology at EMC delivered another hard-hitting "State of the Stack" talk at the OpenStack Summit Vancouver. With a rat-a-tat-tat delivery -- by his own admission he was on a "rant-and-roll" -- Bias finished ahead of the time allotted, hitting the packed room with a ton of timely ideas.

These are some select nuggets -- you can also mosey through his slide deck or check out the 30-minute talk here.

The good

Bias ran through current numbers - companies, active contributors, commits - to give participants an idea of what winning looks like for OpenStack now, calling it the fastest-growing open source community.

"OpenStack is closing in on vSphere as a Google search trend I mean, wow!"

alt text here

"That there are as many people in OpenStack groups as in AWS on Linkedin for OpenStack is impressive."

The bad

"There are so many projects...If you're new, you look at this and you’re gonna say 'where do I start?'"

"There’s a spaghetti ball of interdependence of nearly 30 projects...OpenStack risks collapsing under its own weight."

alt text here

Bias recognized that product working groups and the "big tent" release cycle have helped -- but asserts that it's not enough.

"Docker is dead simple, and that’s why people have adopted it much faster. Three million downloads in first quarter 2014 and 100 million downloads by the end of 2014. That’s a crazy ramp."

"If we want OpenStack to be successful, we need to make it simpler. Right now there's a big gap between DevStack and what you need to do to run on 10 machines."


How to fix it

alt text here

On the roller-coaster of new technology adoption, Bias sees OpenStack "heading towards the trough of disillusionment. This is where people start to give up hope and start to walk away from it. If we don't take care and we don't think about what people need to be successful, we might derail."

Bias is running his own user survey and while he admits that it is not as scientific as it could be - respondents are mostly self-selected from his Twitter followers - there were some useful insights.

"Draw a line around OpenStack and tell me what it is. You can’t. We should chop it up. We should have flavors of OpenStack."

His five-point plan? Streamlining the governance model, allowing competition (both projects and multiple programming languages), conforming to well-known APIs, a testable reference architecture and ruthless simplification.

"OpenStack projects should be interrelated instead of interdependent."


alt text here

"It would be good for us to make new mistakes...The door is already cracked open. I’d like it fully open, but I’ll take what I can get."

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="" src="https://www.youtube.com/embed/rOSmvwEhFmg" width=""></iframe>

Cover Photo by Steve Gill // CC BY NC

by Nicole Martinelli at May 22, 2015 11:02 PM

Superuser weekend reading: Summit Vancouver edition

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community.

Got something you think we should highlight? Tweet, blog, or email me!

In Case You Missed It

We know you're busy. We get it. Here's a TL;DR version of the Summit highlights in video form, including where we're headed to next.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="" src="https://www.youtube.com/embed/iZdEwQ-76P4" width=""></iframe>

Not enough high-level news stories for you? We've got more...

The majestic conifers of Vancouver may have inspired this post about "decomposing" the myths surrounding OpenStack.

"OpenStack is an IT phenomenon like none we’ve yet seen. It is big scale, diverse, and even diffuse. And it has captured the focus of a growing list of enterprise IT organizations because its messages are powerful and its implications are far reaching," writes Forbes contributor John Webster, a senior partner at Evaluator Group.

Business Insider's Matt Weinberger is a bit more skeptical in his piece titled "Why cloud computing leaders must hang together or hang separately," taking up the analogy of jazz improvisation, a familiar riff when writing about open source.

TechTarget takes on the question of whether OpenStack is ready for prime time (haven't we heard this before?) with Forrester research analyst Lauren Nelson opining thusly:

"The challenge is when people talk about whether it's ready for production, they look at it as a 'yes' or 'no' rather than looking at what are folks really doing when they're running in production," Nelson said. "It is being run in production in many use cases today; the question is for what environments."

For more on the prime time front, here's an interesting read on how Time Warner Cable is using OpenStack by Jason Baker over at Opensource.com.

"We have a large development organization that embraces using a number of open source tools that are out there," Matt Haines said. "I think people can see the advantage of the contribution side for projects like OpenStack—that for us to use it effectively, we need to get our changes upstream, otherwise we’d end up carrying a lot of patches. So, I think people are seeing that it’s a realistic and viable software model."

Last but not least, Elizabeth Krumbach Joseph has some great posts about her doings at the Summit Vancouver -- a Women of OpenStack working breakfast, taking a panel selfie, the Design Summit and, finally, what happens when people try S'mores for the first time...


[Cover Photo](https://www.flickr.com/photos/neilsingapore/14506293014/) by Neil Howard // CC BY NC

by Nicole Martinelli at May 22, 2015 10:59 PM

Opensource.com

OpenStack enables open source shift at Time Warner Cable

Time Warner Cable is going big with OpenStack. Just a year into their production use of OpenStack for powering their internal cloud, they are leveraging it for everything from video to networking to deploying web applications, all on an in-house OpenStack cloud spread across two data centers.

by Jason Baker at May 22, 2015 09:00 AM

Craige McWhirter

How To Resolve a Volume is Busy Error on Cinder With Ceph Block Storage

When deleting a volume in OpenStack you may sometimes get an error message stating that Cinder was unable to delete the volume because the volume was busy:

2015-05-21 23:31:41.160 16911 ERROR cinder.volume.manager [req-6f77ef4d-bbff-4ff4-8a3e-4c6b264ac5ca \
04b7cb61dd3f4f2f8f80bbd9833addbd 5903e3bda1e840d492fe79fb840acacc - - -] Cannot delete volume \
f8867d43-bc82-404e-bcf5-6d345c32269e: volume is busy

There are a number of reasons why a volume may be reported by Ceph as busy, however the most common reason in my experience has been that a Cinder client connection has not yet been closed, possibly because a client crashed.
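
One quick way to check for lingering client connections is to ask Ceph which clients still have the image open. On reasonably recent Ceph releases, rbd status lists an image's watchers; the pool and volume names below are simply the ones from the error above:

# rbd status my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e

A watcher belonging to a client that is long gone is a good hint that a connection was never closed cleanly.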

If you look at the volume in Cinder, the status is usually "available" and the record looks in order. When you check Ceph, you'll see that the volume still exists there too.

% cinder show f8867d43-bc82-404e-bcf5-6d345c32269e | grep status
|    status    |    available    |

 # rbd -p my.ceph.cinder.pool ls | grep f8867d43-bc82-404e-bcf5-6d345c32269e
 volume-f8867d43-bc82-404e-bcf5-6d345c32269e

Perhaps there's a lock on this volume. Let's check for locks and then remove any we find:

# rbd lock list my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e

If there are any locks on the volume, you can use lock remove with the id and locker from the previous command to delete the lock:

# rbd lock remove <image-name> <id> <locker>
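
To make that concrete, a filled-in invocation might look like the following. Both the lock id ("auto 139935014258880") and the locker ("client.4485") here are made-up placeholder values; substitute whatever rbd lock list actually reported for your volume:

# rbd lock remove my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e \
      "auto 139935014258880" client.4485    # id and locker are hypothetical examples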

What if there are no locks on the volume but you're still unable to delete it from either Cinder or Ceph? Let's check for snapshots:

# rbd -p my.ceph.cinder.pool snap ls volume-f8867d43-bc82-404e-bcf5-6d345c32269e
SNAPID NAME                                              SIZE
  2072 snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3 25600 MB

When you attempt to delete that snapshot you will get the following:

# rbd snap rm my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e@snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3
rbd: snapshot 'snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3' is protected from removal.
2015-05-22 01:21:52.504966 7f864f71c880 -1 librbd: removing snapshot from header failed: (16) Device or resource busy

This reveals that it was the snapshot that was busy and locked all along.

Now we need to unprotect the snapshot:

# rbd snap unprotect my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e@snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3

You should now be able to delete the volume and its snapshot via Cinder.
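
As a rough sketch of that final step, assuming the Cinder snapshot UUID is the portion of the RBD snapshot name after "snapshot-", the cleanup could look like this:

% cinder snapshot-delete 33c4309a-d5f7-4ae1-946d-66ba4f5cdce3    # snapshot UUID taken from the RBD snap name
% cinder delete f8867d43-bc82-404e-bcf5-6d345c32269e             # then the volume itself

If Cinder still refuses, the rbd snap rm command from earlier should now succeed, since the snapshot is no longer protected.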

Enjoy :-)

by Craige McWhirter at May 22, 2015 05:24 AM

Cloudscaling Corporate Blog

State of the Stack v4 – OpenStack In All It’s Glory

Yesterday I gave the seminal State of the Stack presentation at the OpenStack Summit.  This is the 4th major iteration of the deck.  This particular version took a very different direction for several reasons:

  1. Most of the audience is well steeped in OpenStack and providing the normal “speeds and feeds” seemed pedantic
  2. There were critical unaddressed issues in the community that I felt needed to be called out
  3. It seemed to me that the situation was becoming more urgent and I needed to be more direct than usual (yes, that *is* possible…)

There are two forms you can consume this in: the Slideshare and the YouTube video from the summit.  I recommend the video first and then the Slideshare.  The reason being that with the video I provide a great deal of additional color, assuming you can keep up with my rapid-fire delivery.  Color in this case can be construed several different ways.

I hope you enjoy. If you do, please distribute widely via twitter, email, etc. :)

The video:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="338" src="https://www.youtube.com/embed/rOSmvwEhFmg?feature=oembed" width="600"></iframe>

The Slideshare:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="356" marginheight="0" marginwidth="0" scrolling="no" src="https://www.slideshare.net/slideshow/embed_code/key/ktP6fjvZybEDOs" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" width="427"> </iframe>

by Randy Bias at May 22, 2015 04:26 AM

Cloudify Engineering

An Update on the OpenStack Heat-Translator Project - TOSCA, Networking, Containers, and more

In the past I've written about the progress of TOSCA through my experience as a core committer in the Heat...

May 22, 2015 12:00 AM

May 21, 2015

Elizabeth K. Joseph

Liberty OpenStack Summit day 2

My second day of the OpenStack summit came early with the Women of OpenStack working breakfast at 7AM. It kicked off with a series of lightning talks that covered impostor syndrome, growing as a technical leader (get yourself out there, ask questions) and suggestions from a tech start-up founder about being an entrepreneur. From there we broke up into groups to discuss what we’d like to see from the Women of OpenStack group in the next year. The big take-aways were around mentoring women who are new to our community as they start to get involved with all the OpenStack tooling, and more generally giving voice to the women in our community.

Keynotes kicked off at 9AM with Mark Collier announcing the next OpenStack Summit venues: Austin for the spring 2016 summit and Barcelona for the fall 2016 summit. He then went into a series of chats and demos related to using containers, which may be the Next Big Thing in cloud computing. During the session we heard from a few companies who are already using OpenStack with containers (mostly Docker and Kubernetes) in production (video). The keynotes continued with one by Intel, where the speaker took time to talk about how valuable feedback from operators has been in the past year, and appreciation for the new diversity working group (video). The keynote from eBay/PayPal showed off the really amazing progress they’ve made with deploying OpenStack, which is now running on over 300k cores and pretty much powers PayPal at this point (video). Red Hat’s keynote focused on customer engagement as OpenStack matures (video). The keynotes wrapped up with one from NASA JPL, which mostly talked about the awesome Mars projects they’re working on and the massive data requirements therein (video).


OpenStack at EBay/Paypal

Following keynotes, Tuesday really kicked off the core OpenStack Design Summit sessions, where I focused on a series of Cross Project Workshops. First up was Moving our applications to Python 3. This session focused on the migration to Python 3 for functional and integration testing in OpenStack projects, now that the Oslo libraries are working in Python 3. The session mostly centered around strategy: how to incrementally move projects over and the requirements for the move (2.x dependencies, changes to Ubuntu required to effectively use Python 3.4 for gating, etc). Etherpad here: liberty-cross-project-python3. I then attended Functional Testing Show & Tell, which was a great session where projects shared their stories about how they do functional (and some unit) testing. The Etherpad for this one is super valuable for seeing what everyone reported; it’s available here: liberty-functional-testing-show-tell.

My Design Summit sessions were broken up nicely with a lunch with my fellow panelists, and then the Standing Tall in the Room – Sponsored by the Women of OpenStack panel itself at 2PM (video). It was wonderful to finally meet my fellow panelists in person; the session itself was well-attended and we got a lot of positive feedback from it. I tackled a question about shyness with regard to giving presentations here at the OpenStack Summit, where I pointed at a webinar about submitting a proposal, published via the Women of OpenStack in January. I also talked about difficulties related to the first time you write to the development mailing list, participate on IRC and submit code for review. I used the example of having to submit 28 revisions of one of my early patches, and audience member Steve Martinelli helpfully tweeted about a change with 63 revisions. Diving in to all these things helps, as does supporting the ideas of and doing code review for others in your community. Of course my fellow panelists had great things to say too, watch the video!


Thanks to Lisa-Marie Namphy for the photo!

Panel selfie by Rainya Mosher

Following the panel, it was back to the Design Summit. The In-team scaling session was an interesting one with regard to metrics. We’ve learned that regardless of project size, socially within OpenStack it seems difficult for any project to rise above 14 core reviewers while keeping enough common culture, focus and quality. The solutions presented during the session tended to be heavy on technology (changes to ACLs, splitting up the repo among trusted sub-groups). It’ll be interesting to see how the scaling actually pans out, as there seem to be many more social and leadership solutions to the problem of patches piling up and not having enough core folks to review them. There was also some discussion about the specs process, but the problems and solutions seem to vary heavily between teams, so it seemed unlikely that a unified solution to unprocessed specs would be universal, though the process does often seem valuable for certain things. Etherpad here: liberty-cross-project-in-team-scaling.

My last session of the day was OpenStack release model(s). A discussion of the time-based release model itself would have required broader participation, so much of the discussion centered around the ability for projects to independently do intermediary releases outside of the release cycle and how that could be supported, but I think the jury is still out on a solution there. There was also talk about how to generally handle release tracking, as it’s difficult to predict what will land, so much so that people have stopped relying on the predictions, and that bled into a discussion about release content reporting (release changelogs). In all, an interesting session with some good ideas about how to move forward. Etherpad here: liberty-cross-project-release-models.

I spent the evening with friends and colleagues at the HP+Scality hosted party at Rocky Mountaineer Station. BBQ, food trucks and getting to see non-Americans/non-Canadians try s’mores for the first time, all kinds of fun! Fortunately I managed to make it back to my hotel at a reasonable hour.

by pleia2 at May 21, 2015 10:03 PM

OpenStack Superuser

OpenStack Design Summit Highlights: Vancouver Edition

At every OpenStack Summit, the Design Summit provides the opportunity for collaborative working sessions where OpenStack developers come together to discuss requirements for the next software release and connect with other community members. In Vancouver, the conversation is dedicated to the upcoming Liberty release.

A key theme for the Summit was interoperability for OpenStack clouds, made possible with features that were decided on in the previous Design Summit.

"In the Kilo release the most interesting feature we delivered was in Keystone federated identity features," said Thierry Carrez, OpenStack's director of engineering. "They complimented what we already had with the Juno cycle, providing extra capabilities for clouds to interact with one another."

Watch the highlights below:

https://www.youtube.com/embed/Q_vdvxnyVu4

To Tom Fifield, community manager at the OpenStack Foundation, user feedback is a critical part of the design and development process:

“It’s not just about coming up with feature requests at the beginning, running away, and complaining when they don’t get implemented. Watching the development progress — being an active part of the design and collaborating at the same level as developers — Summits are the place where all of the developers and operators come together to make that a reality.”

This year, OpenStack introduced a new format for collaboration at the Design Summit that utilizes two types of primary sessions:

  • Fishbowl: open sessions to discuss a specific feature or issue that needs to be solved. They happen in large rooms organized in fishbowl style (meaning, concentric rings of chairs).
  • Work: dedicated to smaller groups, and tailored toward people already involved in an OpenStack project looking to focus on specific issues. More information on working group sessions will be available in the Design Summit etherpads.

Feedback on the Design Summit has been positive. Take a look at what some of the attendees have had to say:

[Embedded attendee tweets]

by Superuser at May 21, 2015 07:46 PM

OpenStack Reactions

Life of an OpenStack contributor in Animated GIF: the summit session (Vancouver 2015)

Video: https://www.youtube.com/embed/jGFJ7bFDdjI

by chmouel at May 21, 2015 04:56 PM

OpenStack in Production

Juno, EL6 and RDO Community.

In 2011, CERN selected OpenStack as its cloud platform. It was natural to choose RDO as our RPM provider; RDO is a community of people using and deploying OpenStack on Red Hat Enterprise Linux, Fedora and distributions derived from these (such as Scientific Linux CERN 6, which powers our hypervisors).

The community decided not to provide an official upgrade path from Icehouse to Juno on el6 systems.

While our internal infrastructure is now moving to CentOS 7, we have to maintain around 2,500 compute nodes running SLC6 during the transition.

As mentioned in the previous blog post, we recently finished the migration from Icehouse to Juno. Part of this effort was to rebuild the Juno RDO packages for RHEL6 derivatives and provide a tested upgrade path from Icehouse.

We are happy to announce that we recompiled the openstack-nova and openstack-ceilometer packages publicly with the help of the CentOS infrastructure and made them available to the community.
The effort is led by the CentOS Cloud SIG, and I'd like to particularly thank Alan Pevec, Haïkel Guemar and Karanbir Singh for their support and time.

For all the information on how to use the Juno EL6 packages, please follow this link: https://wiki.centos.org/Cloud/OpenStack/JunoEL6QuickStart.

by Thomas Oulevey (noreply@blogger.com) at May 21, 2015 04:51 PM

Opensource.com

Open source is about more than cost savings

The move to open source technology is a much more fundamental shift than just cost savings, and represents a trend that is starting to cross industries, even the most traditional ones, from financial services through telcos.

by shar1z at May 21, 2015 11:00 AM

Cloudify Engineering

OpenStack & Beyond Podcast - Episode 2 | OpenStack Summit Roundtable

After a very fascinating first podcast about SDN & NFV, the OpenStack & Beyond podcast is back with a fantastic...

May 21, 2015 12:00 AM

May 20, 2015

Elizabeth K. Joseph

Liberty OpenStack Summit day 1

This week I’m at the OpenStack Summit. It’s the most wonderful, exhausting and valuable-to-my-job event I go to, and it happens twice a year. This time it’s being held in the beautiful city of Vancouver, BC, and the conference venue is right on the water, so we get to enjoy astonishing views throughout the day.


OpenStack Summit: Clouds inside and outside!

Jonathan Bryce, Executive Director of the OpenStack Foundation, kicked off the event with an introduction to the summit, the success that OpenStack has built in the Process, Store and Move digital economy, and some announcements, among which was the success found with federated identity support in Keystone, where Morgan Fainberg, PTL of Keystone, helped show off a demonstration. The first company keynote was presented by DigitalFilm Tree, who did a really fun live demo of shooting video at the summit here in Vancouver, using their OpenStack-powered cloud so it was accessible in Los Angeles for editorial review and then retrieving and playing the resulting video. They shared that a recent show shot in Vancouver used this very process for the daily editing, and that they had previously used courier services and staff-hopping-on-planes to do the physical moving of digital content because it was too much for their previous systems. Finally, Comcast employees rolled onto the stage on a couch to chat about how they’ve expanded their use of OpenStack since presenting at the summit in Portland, Oregon. Video of all of this is available here.

Next up for keynotes was Walmart, who talked about how they moved to OpenStack and used it for all the load their sites experienced over the 2014 holiday season, and how OpenStack has met their needs, video here. Then came HP’s keynote, which really focused on the community and the choices available in OpenStack, where speaker Mark Interrante said “OpenStack should be simpler, you shouldn’t need a PhD to run it.” Bravo! He also pointed out that HP’s booth had a demonstration of OpenStack running on various hardware, an impressively inclusive step for a company that also sells hardware. Video for HP’s keynote here (I dig the Star Wars reference). Keynotes continued with one from TD Bank, which I became familiar with when they bought up the Commerce branches in the Philadelphia region, but have since learned is a major Canadian bank (oooh, TD stands for Toronto Dominion!). The most fascinating thing about their move to the cloud for me is how they’ve imposed a cloud-first policy across their infrastructure, where teams must have a really good reason and approval in order to do more traditional bare-metal, one-off deployments for their applications, so it’s rare, video. Cybera was the next keynote and perhaps the most inspiring from a humanitarian standpoint. As one of the earliest OpenStack adopters, Cybera is a non-profit that seeks to improve access to the internet and the valuable resources therein, which presenter Robin Winsor stressed in his keynote is now as important as the physical infrastructure that was built in North America in the 19th and 20th centuries (railroads, highways, etc), video here. The final keynote was from SolidFire, who discussed the importance of solid storage as a basis for a successful deployment, video here.

Following the keynotes, I headed over to Virtual Networking in OpenStack: Neutron 101 (video), where Kyle Mestery and Mark McClain gave a great overview of how Neutron works, with various diagrams showing off the agents and the improvements made in Kilo with various new drivers and plugins. The video is well worth the watch.

A chunk of my day was then reserved for translations. My role here is as the Infrastructure team contact for the translations tooling, so it’s also been a crash course in learning about translations workflows since I only speak English. Each session, even those unrelated to the actual infrastructure-focused tooling, has been valuable for learning. In the first translation team working session the focus was translations glossaries, which are used to help give context/meaning to certain English words where the meaning can be unclear or otherwise needs to be defined in terms of the project. There was representation from the Documentation team, which was valuable as they maintain a docs-focused glossary (here) that is better maintained and has a bigger team behind it than the proposed separate translations glossary would have. Interesting discussion, particularly as my knowledge of translations glossaries was limited. Etherpad here: Vancouver-I18n-WG-session.

I hosted the afternoon session on Building Translation Platform. We’re migrating the team to Zanata and have been fortunate to have Carlos Munoz, one of the developers on Zanata, join us at every summit since Atlanta. They’ve been one of the most supportive upstreams I’ve ever worked with, prioritizing our bug reports and really working with us to make sure our adoption is a success. The session itself reviewed the progress of our migration and set some deadlines for having translators begin the testing/feedback cycle. We also talked about hosting a Horizon instance in infra, refreshed daily, so that translators can actually see where translations are most needed via the UI and can prioritize appropriately. Finally, it was a great opportunity to get feedback from translators about what they need from the new workflow and have Carlos there to answer questions and help prioritize bugs. Etherpad here: Vancouver-I18n-Translation-platform-session.

My last translations-related thing of the day was Here be dragons – Translating OpenStack (slides). This was a great talk by Łukasz Jernaś that began with some benefits of translations work and then went into best practices and tips for working with open source translations and OpenStack specifically. It was another valuable session for me as the tooling contact because it gave me insight into some of the pain points and how appropriate it would be to address these with tooling vs. social changes to translations workflows.

From there I went back to general talks, attending Building Clouds with OpenStack Puppet Modules by Emilien Macchi, Mike Dorman and Matt Fischer (video). The OpenStack Infrastructure team is looking at building our own infra-cloud (we have a session on it later this week) and the workflows and tips that this presentation gave would also be helpful to me in other work I’ve been focusing on.

The final session I wandered into was a series of Lightning Talks, put together by HP. They had a great lineup of speakers from various companies and organizations. My evening was then spent at an HP employee gathering, but given my energy level and planned attendance at the Women of OpenStack breakfast at 7AM the following morning I headed back to my hotel around 9PM.

by pleia2 at May 20, 2015 11:26 PM

OpenStack Superuser

Expanding the reach of OpenStack to new industries

The future is cloudy -- and filled with fewer headaches for operators.

If just a few years back, software was eating the world, it now looks like cloud computing will cast its influence over an ever-expanding set of industries. That's the opinion of Lew Tucker, vice president and CTO of cloud computing at Cisco, speaking at the OpenStack Summit Vancouver.

Over the past several years, the continued adoption of OpenStack and its expansion into new areas has moved from cloud service providers and enterprise private clouds to large media companies and big science, Tucker said in a talk titled "OpenStack in an ever-expanding world of possibilities." He mapped out how OpenStack got to where it is today and how contributors can guide it to a bright future.

"It's not just an ordinary cloud platform anymore," he added. "I view this as becoming verticalized into a couple of different industries."

You can catch the full 40-minute session, along with the rest of the Summit sessions here.

Video: https://www.youtube.com/embed/Nkcf_lA0s88

The internet paired with cloud computing is driving disruption. "Internet gives us the reach, and the cloud allows us to do it with small number of people. When those two things come together, you have the possibility for very small companies to go after entire industries."

Tucker credits Amazon with launching the self-service model that took the time-consuming hassle out of provisioning. Tucker likens this revolution to what happened when FedEx started giving customers a link and web interface to track their packages, where it used to have to maintain a call center, saving the company a lot of money and providing a great service to customers.

"There's a strong economic foundation to everything we're doing," Tucker said.


But something else happened as the cloud spread - it became a platform for cloud-native application development. By being able to spin up or kill instances quickly, you can make applications dynamically elastic, so it's easy to recover from failures.

"We know at scale those failures will happen every day, and you have to have a service that's running non-stop."

The dev-ops people making sure that everything is running will continue to have an important role even when infrastructure is virtualized. Operations mean costs; costs can be controlled through automation, and automation through software, he said.

Adoption cuts across sectors: not just cloud providers but companies from diverse industries including Bloomberg, Comcast, BestBuy and Walmart.

"It's all very possible, though it requires a lot of work. This is still a very new technology." OpenStack becomes a new layer in the data center software stack, and services are the new platform.


Now picture this: standards bodies such as the European Telecommunications Standards Institute (ETSI) work with OpenNFV to define reference architecture standards. "This is something that four years ago, we weren't even thinking of," Tucker said. "It's an entirely different space."


To meet that challenge, the OpenStack Foundation has formed a number of groups with the goal of becoming carrier-grade and fully enterprise ready.


"The platform itself becomes really important, I urge you work on these blueprints so we can make OpenStack really resilient," he said.

Another sweeping change? The "nightmare" tangle of cables in traditional data centers is also on its way out, another way that trouble tickets are becoming a thing of the past.


"You want software driven infrastructure to be managed by Intent. You enforce that via group-based policy."


Tucker imagines that automation will also take the place of the dreaded phone calls to troubleshoot problems. Cisco has launched the open-source analytics tool Analytics & Visualization on OpenStack (AVOS), as well as Cloud Pulse and CEPH Early Warning System.

The idea behind them is the same as a health monitor in a hospital. "The steady beep beep beep means the patient (your cloud) is still alive," he said. "If that turns into a high-pitched whine, you know your cloud’s in trouble." You'll no longer have to spend time on the phone figuring out "is it my problem or in the cloud?"

All this will bring some significant challenges to OpenStack as it keeps expanding, providing more and more services yet requiring them to meet a certain set of standards.


"This might seem confusing, but I view it as an opportunity," Tucker said. "It becomes a much more powerful platform that we're working on."

"The challenge is: with this ever-growing number of possibilities, can we keep it together?" he asked. "Can we find right balance between carrier-grade, enterprise-ready solid, resilient cloud platform as we expand/grow number of services."

To do that, he said, the OpenStack community needs to take an architectural view on how these services are put together and focus on what the true value is to the end user.

"As a community, I'd like us to find balance work on both of those layers - resiliency and scale, but keep the innovation," he said. "For those of you who are contributors, it'd be great if you work on both of those things, that's the ideal world."

Cover Photo by xii li // CC BY NC

by Nicole Martinelli at May 20, 2015 11:14 PM

And the Superuser Award goes to...

VANCOUVER, British Columbia -- The OpenStack Vancouver Summit kicked off day two by awarding the Superuser Award to Comcast.

Tim Bell of CERN, whose team won the honor last year, passed the baton to the crew from Comcast.


Comcast Cable is the nation’s largest video, high-speed Internet and phone provider to residential customers under the XFINITY brand and also provides these services to businesses.

Comcast’s commitment to OpenStack is greater than ever: the team has increased its infrastructure by more than 500 percent in the last year. OpenStack has helped team members roll out new services and features at a much quicker rate, and has enabled faster time-to-market, allowing its engineers a self-service portal to access resources on demand. This allows them to reduce the time to provision resources with the flexibility that OpenStack provides. Portions of Comcast’s latest X1 set-top box back end, which currently services millions of consumer set-top boxes, are among its largest consumers of OpenStack infrastructure. To date, the Comcast team has contributed more than 36,000 lines of code to OpenStack.

The team not only won the esteem and admiration that come with being a Superuser Award winner, but also received two hotel stays and two flights to the Tokyo Summit in the fall, along with up to ten full access passes. Congratulations!

The OpenStack Foundation launched the Superuser Awards to recognize, support and celebrate teams of end-users and operators that use OpenStack to meaningfully improve their businesses while contributing back to the community.

Interested in nominating a team to be recognized at the next Awards ceremony in Tokyo? Stay up-to-date on the latest information at superuser.openstack.org/awards.

by Superuser at May 20, 2015 03:43 PM

Xen Project Blog

Xen Project now in OpenStack Nova Hypervisor Driver Quality Group B

A few weeks ago, we introduced the Xen Project – OpenStack CI Loop, which is testing Nova commits against the Xen Project Hypervisor and Libvirt. The Xen Project community is pleased to announce that we have moved from Quality Group C to Group B, as we’ve made significant progress in the last few weeks and the Xen Project CI loop is now voting on Nova commits.

This diagram shows the number of OpenStack Nova drivers for Hypervisors, which allow you to choose which Hypervisor(s) to use for your Nova Deployment. Note that these are classified into groups A, B and C. Xen Project is now in Quality Group B.

Quality groups are defined as follows:

  • Group C: These drivers have minimal testing and may or may not work at any given time. Test coverage may include unit tests that gate commits. There is no public functional testing.
  • Group B: Test coverage includes unit tests that gate commits. Functional testing is provided by an external system (such as our CI loop) that does not gate commits, but advises patch authors and reviewers of results in OpenStack Gerrit.
  • Group A: Test coverage includes unit tests and functional testing that both gate commits.

What does this mean in practice?

The easiest way to understand what this means in practice is to look at a real code review, as shown in the figure below.

This diagram shows how the functional tests for the Xen Project initially failed, and passed after a new patch was uploaded.

This code review shows that the OpenStack Jenkins instance (running on KVM) and the Xen Project CI Loop failed their respective tests initially, so a new patchset was uploaded and the tests succeeded afterwards.

This diagram shows the status after a new patchset was uploaded. Note that the review of the patchset is still pending manual review.

Also see “Merging: Repository Gating” in the OpenStack documentation.
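
If you want to check results like these yourself, the short sketch below (my own illustration, not part of the Xen Project CI) pulls the review messages for a change from Gerrit's REST API and prints the ones left by CI accounts. The Gerrit host is the OpenStack review server of the time, but the change number and the account-name filter are placeholders.

    # Sketch: list reviewer/CI messages on an OpenStack Gerrit change.
    # The change number below is a placeholder, not a real review.
    import json
    import requests

    GERRIT = 'https://review.openstack.org'
    CHANGE = '123456'  # hypothetical change number

    resp = requests.get('%s/changes/%s/detail' % (GERRIT, CHANGE))
    # Gerrit prefixes its JSON responses with ")]}'" to prevent XSSI; strip it.
    data = json.loads(resp.text.split('\n', 1)[1])

    for msg in data.get('messages', []):
        author = msg.get('author', {}).get('name', 'unknown')
        text = msg.get('message', '')
        first_line = text.splitlines()[0] if text else ''
        # Crude filter: third-party CI accounts usually carry "CI" in their name.
        if 'CI' in author or 'Jenkins' in author:
            print('%s: %s' % (author, first_line))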

Relevant OpenStack Summit Sessions

There are a number of sessions at this week’s OpenStack Summit that are worth attending including:

Hands-on sessions to improve 3rd party CI infrastructure include:

Other third-party CI-related sessions include:

Relevant Regular Meetings

Note that there are also weekly Third Party CI Working Group meetings for all operators of 3rd party CI loops, held in #openstack-meeting-4 on Wednesdays at alternating times of 1500 and 0400 UTC, organized by Kurt Taylor (krtaylor). Third party CI operators interested in enhancing documentation, reviewing patches for relevant work, and improving the consumability of infra CI components are encouraged to attend. See here for more information on the working group.

by Lars Kurth at May 20, 2015 12:00 PM

OpenStack Superuser

How to grow the OpenStack application community

The situation: we have a diaspora of IT tribes that have specialized in the old way. OpenStack represents a new way that breaks the time-tested practices of the old way. In OpenStack there are many ways to accomplish a goal, and lots of opinions about what's good and bad about each. How to resolve this?

OpenStack is revolutionary in that it allows us to create programs that, when executed, represent (codify) the combined knowledge of many different people. In traditional IT organizations much of this knowledge was centralized. As those orgs grew and matured, processes developed that represented good practice with the tools of the time. Each tool set had associated skill sets, so it was hard, or impossible, for one person to deeply understand it all. Think about the gulf between archival storage and network security! Each tribe codified knowledge in very different ways, mostly in the form of process documents rich in implementation detail, making it tough to understand how it all fit together.

Then outsourcing came along and the barriers that had been developing between those disciplines got so much larger. It quickly became very difficult, and consequently costly, to institute systematic IT changes. It's no wonder some IT organizations feel like they're in the dark ages when they consider adopting technologies like containers, or software defined networks.

OpenStack, to me, represents a natural reaction to the dispersed knowledge and skills that brought organizational innovation to its knees. OpenStack brings the diaspora back together by representing all these different skills and practices in a common set of APIs. It then shouldn't be surprising that this is a bit messy and confusing. Now we have the beginnings of a set of artifacts that codify the way a system operates, making it easier to see the whole.

We now face complexity at many levels:

  • Scale of distributed systems is hard
  • Diversity of subsystems and tooling
  • Uneven knowledge within our organizations
  • Disagreement about patterns, and when they apply (don't get me started on anti-patterns)
  • Distributed understanding about what works well

This complexity is a classic architectural problem. How can this be solved? Because this is a systematic problem, it must be solved at the system level, which in this case is at the community level. It's not enough that some of you already know the answers - those answers need to be accessible to both the grizzled OpenStack veteran and the n00b.

  • Gather artifacts that represent ways to express the variety of ways to solve the problem
  • Learn about the various ways to solve problems
  • Codify the solutions in a way all, from grizzled to n00b, can consume and use

The OpenStack Community App Catalog is the seed of step one -- gathering the bits together in one place.

What's next?

  • Plant more seeds.
  • Organize them and grow them.
  • Nurture. Weed! Harvest...

This post was written for Superuser by Craig Peters, a product manager at Mirantis. Superuser is always looking for good content, email us if you'd like to appear here.

Cover Photo by CIFOR // CC BY NC

by Craig Peters at May 20, 2015 12:18 AM

Why OpenStack is no longer risky business

Where there is risk, there's reward: that was one of the key messages from the stage at the OpenStack Summit Vancouver day two keynote.

Mark Collier, OpenStack Foundation COO, was master of ceremonies in a two-hour session titled "Taking Risks: How Experimentation Leads to Breakthroughs" that highlighted some of the daring feats of users including the Jet Propulsion Lab, Google and eBay Inc. Collier invited participants to consider experimentation and taking risks in the same way a scientist might - at the heart of innovation.

"Think about OpenStack as an agnostic integration engine," Collier said. "One that puts users in the best position for success."

Meet the OpenStack Community App Catalog


Collier unveiled the OpenStack Application Catalog, designed to help users make applications available on their clouds. Recently added apps include Kubernetes, Cloud Foundry, Oracle, Debian and OpenShift.

"This is the beginning, knowing this community, expect there will be hundreds of new additions in the next few days," Collier said.

Craig Peters from Mirantis led the demo ("the opposite of a black-screen demo") by logging into Horizon and using Murano to get the apps to self-deploy.

The app catalog will make launching cloud apps a lot easier -- you can read more about it in this write-up from Tech Crunch.


Containers, containers, containers

The keynote dedicated a good chunk of time to the "new" hot technology: containers. (The Summit has also dedicated a whole day to the topic - see more here and catch videos from these sessions on the OpenStack Foundation YouTube Channel.)

Collier asked how many people in the audience were running Docker containers already and about five people put their hands up. When he asked how many were interested in running them, there was a sea of raised arms.


Adrian Otto, project team lead for OpenStack project Magnum, took the stage to talk about how the project burst on to the scene. Magnum container management involved 42 engineers from 19 different affiliations and more than 106,000 lines of code in just six months.


Otto zipped through a demo with "Kube" (the cool-kids nickname for it, apparently) showing how Magnum-managed Kubernetes and Docker can work side-by-side.

"If you're already an OpenStack user, it leverages tools you’re already used to," Collier said after the demo.


Otto's later Container Day session was standing-room only.


Collier welcomed Google's Sandeep Parikh to the Summit keynote stage with a question: “What ARE you doing here?"

Parikh responded that Google came to Vancouver to talk about Kubernetes -- and Google’s vision of it running everywhere. Google launches over 2 billion containers a week, and this hybrid approach makes it easy for Kubernetes to recover from disasters.


Bare metal rocks

James Penick, cloud architect at Yahoo, hit the stage to the tune of a raging guitar solo. It was the fitting "bare metal" soundtrack as he hit the crowd with some hair-raising numbers. Penick says Yahoo has been "secretly building the largest bare metal cloud in the world." Yahoo has servers in the hundreds of thousands, and by the end of 2015, Penick says, they will all be on OpenStack.


Another success story was shared by Zack Rosen of Pantheon. “We run over 400,000 environments for customers,” Rosen said, to audible sighs from the crowd when he shared the cost savings and efficiencies.

For Rosen, "the future is containers and bare metal" and virtual machines are becoming obsolete.


Superuser Intel also shared some eye-opening numbers. Its OpenStack journey started in 2012 with fewer than 300 servers, but by spring of 2015 that cloud coverage had extended to 12,000 hypervisors, said Imad Sousou, director of the company's open source technology center.

"There is a perception that OpenStack is not ready for prime time, and there’s some truth to that," he added. It took 10 years to get Linux to where it is now, OpenStack is on that same path, it takes work to get success. Now it’s a question of focus."


"We at Intel are very committed to OpenStack, we think this is THE platform for cloud," he asserted.


Superuser award finalist eBay also had some great numbers and good insight into future work. Subbu Allamaraju, chief engineer of cloud at eBay, told the crowd that when he started in 2012 there was no automation to speak of, but by 2013 "we automated the heck out of it." eBay/PayPal are running 100 percent of PayPal production web/app workloads and 100 percent of dev/test workloads.

Calling it a "hand-crafted organic cloud," he says that this isn't a roadmap that he expects other companies to follow. Saying that he was "excited about the demos" during the keynote, especially Kubernetes and Mesos which can bring efficiency reliability to their apps, he put out a call for help.


His prescription: OpenStack needs to do more on scalability and upgradability, raising the bar on the core. It also needs to "productize" operations and continue to expand the ops meetups. He noted that 89 percent of operators are running a code base at least six months old and 21 percent of operators are running OpenStack code bases over 18 months old; operators are not necessarily concerned with the latest Liberty release.

Dare mighty things

Jonathan Chiang, IT chief engineer at the Jet Propulsion Laboratory, rocked the stage with the coolest video of the day. His "family" portrait featured three of JPL's projects that harness the power of OpenStack.


NASA JPL uses OpenStack to plan human missions to Mars by 2030 and create asteroid redirect missions. How does OpenStack fit in? It helps them process and move over 80 terabytes of data per day.

Chiang finished with a request: "Support OpenStack because it’s doing a wealth of good for JPL."

Cover Photo by The Fayj // CC BY NC

by Nicole Martinelli at May 20, 2015 12:11 AM

Cloudify Engineering

Auto-Scaling your Apps with OpenStack Heat - An Orchestration Tool Comparison Pt I of II

Scaling - it’s all the rave. When having a conversation about cloud orchestration (all the cool kids are doing it...

May 20, 2015 12:00 AM

May 19, 2015

Mirantis

Get your OpenStack Summit News!


It’s midday at the OpenStack Summit, and we want you to see what’s going on. If you’re in Vancouver, check out our live blogs to see info on the current sessions, or you can send us news you want to promote. If you’re not at the Summit, follow us on Twitter using the #OpenStackSummitNow hashtag or visit us here:

  • The pre-show page, with announcements and some bent Mirantis humor.

  • The first day overview, with rolling content updates. Look for video interviews we’ll be publishing today.

  • The second day page, which highlights the app catalog, Murano, containers, security, and more.

Check back for updates on our live-blogging during the Summit.

The post Get your OpenStack Summit News! appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Ilya Stechkin at May 19, 2015 11:02 PM

Sébastien Han

OpenStack Summit Vancouver: Ceph and OpenStack current integration and roadmap

Date: 19/05/2015

Video:

https://www.youtube.com/embed/PhxVPEZeHp4

Slides:

Download the slides here.

May 19, 2015 10:08 PM

Mirantis

Democratizing service creation in OpenStack – Anything-as-a-Service with Murano


The workload layer is user land, owned and controlled by users. At the workload layer, customers and end users expect to add new services and evolve them at a fast pace, while still maintaining full control of their destiny. To make these users successful, we need to arm them with a powerful framework that enables them to create and manage these services; a framework that gives users full control, allowing them to innovate at their own pace, minimizing the dependency on the OpenStack upstream community to create and maintain those services. With the recent release of Murano in Kilo, we are providing such a framework.

One recent example that highlights Murano’s capabilities in an enterprise environment is the ability to provision an Oracle Database on demand. The Oracle team created a Murano application that enables users to provision a Pluggable Database and use it from OpenStack.  This application can be used in any OpenStack deployment that uses Murano, so users can now go to their Murano catalog, press a button and have a database available to them.

Using the same method used by the Oracle team, users can now easily create additional database packages and offer them as Murano apps, creating a complete end to end Database-as-a-Service. The advantage of this approach for creating Database-as-a-Service is that it is completely under the control of the end user, with much less dependency on the OpenStack release cycle. This way, users can create a rich set of options for provisioning and lifecycle management, all controlled through Murano using an API or GUI. The Murano packages are simple and extendable, so users can maintain and extend their catalog. Murano can launch VMs, but as in the Oracle database example, it can also simply communicate with an external resource and perform the operations needed to configure, consume or manage that resource.

With the Database-as-a-Service example in mind, we can extend this approach to more services and on top of offering applications as catalog items, users can create any action and offer it as a catalog item. These actions can be virtually anything, from configuring/provisioning compute, storage or network on demand, to creating and configuring a virtualized network function, to running a virus scan in a particular VM — or a set of them. Any action that the user needs to take can be packaged into a Murano catalog item and performed on demand.

We call this approach “Anything-as-a-Service.” Any action, any application and any operation can be programmed into one or more catalog items available through a GUI or an API and be made available to users. And all of those actions can be customized to specific needs, and tailored to fit compliance, best practices and special requirements, all under full control of the OpenStack user and on their terms and timeline.
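
As a rough sketch of what offering "anything" through the catalog looks like from the API side, the snippet below walks through creating a Murano environment, opening a configuration session and triggering a deployment over plain HTTP. The endpoint, port, token and exact v1 paths are assumptions based on the Murano API documentation rather than anything taken from this post, so treat it as an outline, not a reference implementation.

    # Rough sketch of driving Murano's v1 REST API with requests.
    # Endpoint, token and payload values are placeholders/assumptions.
    import requests

    MURANO = 'http://controller:8082/v1'          # assumed Murano API endpoint
    HEADERS = {'X-Auth-Token': 'KEYSTONE_TOKEN',  # token obtained from Keystone
               'Content-Type': 'application/json'}

    # 1. Create an environment to hold the application(s).
    env = requests.post('%s/environments' % MURANO, headers=HEADERS,
                        json={'name': 'oracle-db-demo'}).json()

    # 2. Open a configuration session against that environment.
    session = requests.post('%s/environments/%s/configure' % (MURANO, env['id']),
                            headers=HEADERS).json()

    # 3. (Add the catalog application to the session here, e.g. the Oracle
    #    Pluggable Database package mentioned above.)

    # 4. Deploy the session; Murano orchestrates the rest.
    requests.post('%s/environments/%s/sessions/%s/deploy'
                  % (MURANO, env['id'], session['id']), headers=HEADERS)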

With the new release of Murano, we now have the right framework in place, and we hope to see many users taking control and creating their own services.

You can find more information about Murano and how to create Murano applications here:

http://murano.readthedocs.org/en/latest/

Murano kicks AaaS!

The post Democratizing service creation in OpenStack – Anything-as-a-Service with Murano appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Ronen Kofman at May 19, 2015 04:01 PM

Three Reasons the OpenStack Community App Catalog Will Be a Game Changer


Today the OpenStack Foundation launched the community app catalog, an initiative driven by the foundation, but made possible through the support of many community members who have helped build the catalog as well as contributed their apps. Industry giants like Oracle and Google have combined forces with infrastructure innovators like Apcera, CoreOS, ActiveState and others in support of this effort.

The launch of the community catalog is a turning point in OpenStack evolution. It signifies a shift in focus from infrastructure vendors to application providers. With dozens of enterprises like Wells Fargo, Paypal and Walmart now having publicly testified to running OpenStack in production, the OpenStack focus is evolving from “how do we make it work” to “how do we use it.” OpenStack’s prepubescent days were accompanied by several years of infrastructure vendors announcing their embrace of the platform. OpenStack puberty must be accompanied by application and developer tools vendors doing the same. Community App Catalog is a vehicle to make that happen. Here is why everyone must pay close attention:

You Can Run It In-House

The key problem with today’s application marketplaces and container repositories is that enterprises don’t go outside the firewall to pull their application assets. They need something they can control and something that is hosted on-prem. Hosted catalogs are cool for developers to tinker with, but are mostly useless for enterprise needs.

Unlike AWS Marketplace, Azure Marketplace or Docker Hub, the OpenStack Community App Catalog is the first one to be developed by a diverse open community under an Apache 2.0 license. The on-demand version is hosted by a non-profit foundation with no vendor affiliation, and anyone can download the catalog and deploy their own version in-house right here.

Deploy Cool Stuff Like Kubernetes, Hadoop, Cloud Foundry with a click

Today’s application catalogs are designed for application assets comprised of a single machine image or a single application container. For instance, an application could be a MySQL database running inside a VM. But what if you want to deploy an HA MySQL instance across three VMs in various zones with Galera, Corosync etc.? You are in for a lot of fun with Linux CLI and BASH scripts.

Linux was pulled into the mainstream by its applicability for web workloads and, specifically, the LAMP stack. The new equivalents of the LAMP stack in the web-scale world are cool things like Kubernetes, Mesos, Hadoop, Cloud Foundry, etc. But all of them are comprised of many highly distributed microservices and require deployment and orchestration engines designed for a highly distributed environment.

As OpenStack strives to be relevant in the microservices-driven world, much like Linux was relevant in the LAMP stack world, the OpenStack application catalog needs to play nice with distributed applications…. and it does…  Powered by Heat and Murano, OpenStack projects originally designed for orchestrating microservices in distributed environments, it accomplishes just that. Your experience spinning up a distributed Kubernetes cluster is the same as it would be with pulling an Apache Server docker image from Docker Hub.

Safe and Even Playing Field for All Application Vendors

Every cloud application catalog out there today is hosted by a vendor with an agenda: AWS Marketplace, Azure Marketplace, Docker Hub etc. If I am Oracle, there is a lot of mindshare behind my database and my applications. For the most part, I’d like to limit the extent to which I openly avail them to a competitor and risk advancing their agenda at my expense.

OpenStack Community Application Catalog is the first catalog of cloud assets to be hosted by a completely vendor-neutral, independent, non-profit organization. Making your applications available in the OpenStack catalog is symbolic of embracing the open source momentum; it will advance the momentum behind the open cloud movement, not the agenda of your competitor.

The post Three Reasons the OpenStack Community App Catalog Will Be a Game Changer appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Boris Renski at May 19, 2015 04:00 PM

Anne Gentle

Treat Docs like Code: Doc Bugs and Issues

This is a follow on to a post on opensource.com about using git and GitHub for technical documentation. In the OpenSource.com article, I discuss reviews and keeping up with contributions. This post talks about fixes and patches.


What about doc issues in GitHub, how do you get through all those?

In OpenStack, we document how to triage doc bugs, and that’s what you need to do in GitHub: establish your process for incoming issues. Use Labels in GitHub to indicate the status and priority. Basically, you have to accept that it’s a doc bug; if it’s not clearly a doc bug, ask for more information from the reporter. If you want to create labels for particular deliverables, like the API doc or end-user doc, you can further organize your doc issues. You will need to define priorities as well — what’s critical and must be fixed? What’s more “wishlist” and you’ll get to it when you can? If you use similar systems for both issues and pull requests you’ll have your work prioritized for you when you look at the GitHub backlog.
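
As a small illustration of that triage process, here is a sketch that pulls the open issues carrying a doc-bug label out of a repository via the GitHub API, so they can be walked through by priority. The repository name and label are hypothetical, and a personal access token would be needed for private repositories or higher rate limits.

    # Sketch: pull open doc bugs from a GitHub repository for triage.
    # The repository and label names are hypothetical examples.
    import requests

    REPO = 'example-org/example-docs'   # placeholder repository
    LABEL = 'doc-bug'                   # placeholder triage label

    url = 'https://api.github.com/repos/%s/issues' % REPO
    resp = requests.get(url, params={'labels': LABEL, 'state': 'open'})
    resp.raise_for_status()

    for issue in resp.json():
        # Show the issue number, any other labels (e.g. priority) and the title
        # so the backlog can be worked through in order.
        other_labels = [l['name'] for l in issue['labels'] if l['name'] != LABEL]
        print('#%d [%s] %s' % (issue['number'], ', '.join(other_labels),
                               issue['title']))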

How can you encourage contributors to create a good pull request for docs?

The best answer for this is “documentation” but also great onboarding. Make sure someone’s first pull request is handled well with a personal touch. There’s a lot of coaching going on when you do reviews. Ensure that you’ve written up “What goes where” as this is often the hardest part of doc maintenance for a large body of work that already exists. This expansion problem is only getting harder in OpenStack as more projects are added. We’re having a lot of documentation sessions at the OpenStack Summit this week and we’d love to talk more about creating good doc patches.

One person I work with uses GitHub emojis every chance he gets when he reviews pull requests. I think that’s fun and sets a nice tone for reviews.

Nitpicking can be averted if you point to your style guide and conventions with a good orientation to a newcomer so that new contributors don’t get turned off by feeling nitpicked.

Have you heard of anyone who has combined GitHub with a different UI “top layer” to simplify the UI?

O’Reilly has done this with their Atlas platform. For reviews, the Gerrit UI has been extremely useful to a large collection of projects like OpenStack. There’s Penflip, which is a better frontend for writers than GitHub. The background story is great in that it offers anecdotes about GitHub being super successful for collaborative writing projects.

I think that GitHub itself is fine if your docs are treated like code. I think GitHub is great for technical writing, API documentation, and the like. Academic writers haven’t found GitHub that much of a match for their collaborative writing, see “The Limitations of GitHub for Writers” for example. It’s the actual terms that have to be adapted and adopted for GitHub to be a match for writers. For example, do you track doc bugs (issues) and write collaboratively with added content treated like software features? I say, yes!

If you just want simple markup like markdown for collaborative writing, check out Beegit. With git in the name I have to wonder if it’s git-backed, but couldn’t figure it out from a few minutes on their site. Looks promising but again, for treating docs like code, living and working with developers.

by annegentle at May 19, 2015 01:33 PM

Opensource.com

The benefits of building an open infrastructure

Having an infrastructure that's open source and maintained by the community has many benefits. In this guide, Elizabeth Joseph details how and why it's a good idea to open it up.

by pleia2 at May 19, 2015 09:00 AM

OpenStack community speeds ahead

In this quick but insightful interview, Kavit Munshi, an OpenStack Ambassador and member of the OpenStack Board of Directors, speaks to the health of the OpenStack community, including its growth in India, and the single most important public service announcement (PSA) he'd give any user on the spot.

by Jen Wike Huger at May 19, 2015 07:00 AM

Cloud Platform @ Symantec

Cloud war is not

The cloud war is not a war at all. It's a struggle, a headlong pursuit of balance

Read More

by David T Lin at May 19, 2015 04:14 AM

OpenStack Superuser

Unlocking the cloud: day one of the OpenStack Summit Vancouver

VANCOUVER, British Columbia -- Jonathan Bryce, executive director of the OpenStack Foundation, took the stage to welcome 6,000 of the community's "closest friends" in a keynote with some of the thunder of a rock show.

Attendees, up 25 percent since the Summit held in Paris, streamed into the huge hall on an overcast but temperate day in Vancouver. The two-hour keynote featured updates from users including Walmart, TD Bank Group, Cybera, Comcast and headline sponsors Hewlett-Packard and SolidFire.


About half of the participants at the Summit were newcomers, with developers and project strategists/architects making up a little over half the total attendees. This week, they'll share ideas and best practices in almost 500 sessions ranging from containers to core.


DigitalFilm Tree showed the power of cloud for television and film production with a camera-to-couch demo that blitzed audience footage to Los Angeles for post-production and presto change-o sent it back to Vancouver. You can read more about how they are using Swift, Nova, Keystone and Neutron to maximize efficiency.


Two milestones in OpenStack interoperability

Meet the OpenStack Powered pioneers: there are 14 companies, spanning public clouds, hosted private clouds, distributions and appliances, that carry this branding. You can find more about them at the OpenStack Marketplace. The OpenStack Powered moniker means that users can now count on these products and clouds around the world to deliver a consistent set of core services.

In the second announcement, a group of OpenStack cloud providers have signed on to support the new federated identity feature in the OpenStack Kilo release, with offerings expected by the end of this year. Identity federation enables hybrid and multi-cloud scenarios with a seamless user experience.

The growing list of companies committed to delivering federated identity currently includes these 32.


Frederic Lardinois of Tech Crunch had this to say about it: "This project is a clear indication of how much the OpenStack ecosystem has grown over the last few years. With only a few players on the market, there wasn’t much need for this kind of certification program, but now that the number of vendors who provide some kind of service for the project continues to increase, it’s becoming more important to ensure that users don’t get locked into a single vendor. That, after all, would be very much against the philosophy of the project."

The announcement created a buzz that kept things humming in later Summit sessions.


Robin Winsor, CEO of Cybera, showed off his deployment with a timely reminder: "Stone age didn't end because we ran out of stone."


Hit us with your ideas, people!


In the final talk of the keynote, SolidFire CEO Dave Wright asked the crowd what they would do if storage wasn't an issue. To answer, he was ready to take the heat, in the form of soft rockets that people launched at him on stage.


by Superuser at May 19, 2015 01:43 AM

Cloudify Engineering

OpenStack and Cloud Trends - Hypervisors, NFV, Orchestration and more thoughts from the Summit

The OpenStack Summits, which I have been attending twice a year for the past five years, continue to serve as...

May 19, 2015 12:00 AM

May 18, 2015

IBM OpenTech Team

IBM is Proud to have Three OpenStack Powered Products

Today at the OpenStack Summit in Vancouver, the OpenStack Foundation announced the products that earned OpenStack Powered branding and as such can be listed on the OpenStack Marketplace. The OpenStack Powered branding means that products built on OpenStack pass a stringent set of tests proving that they are interoperable with other OpenStack-based cloud products.

IBM has always been a champion of interoperability, standards, and openness in software development. And because our products are built on Open Technologies such as OpenStack, we are proud to say that we have three products that have met the requirements to be branded as “OpenStack Powered”. IBM Cloud Manager with OpenStack, IBM Cloud OpenStack Services, and IBM Spectrum Scale are OpenStack Powered and IBM customers can now count on the fact that IBM Products that bear the OpenStack Powered branding are interoperable with other clouds bearing the same branding. This gives customers the flexibility they demand to create cloud based applications once and run them on any platform that has OpenStack Powered branding.

IBM is very proud to be OpenStack Powered!

Video: https://www.youtube.com/embed/b8NdzQV7s7k

The post IBM is Proud to have Three OpenStack Powered Products appeared first on IBM OpenTech.

by Garth Tschetter at May 18, 2015 08:41 PM

Rob Hirschfeld

Ready State Foundation for OpenStack now includes Ceph Storage

For the Paris summit, the OpenCrowbar team delivered a PackStack demo that leveraged Crowbar’s ability to create an OpenStack ready state environment. For the Vancouver summit, we did something even bigger: we updated the OpenCrowbar Ceph workload.

Ceph is the leading open source block storage back-end for OpenStack; however, it’s tricky to install and few vendors invest the effort to hardware-optimize their configuration. Like any foundation layer, configuration or performance errors in the storage layer will impact the entire system. Further, the Ceph infrastructure needs to be built before OpenStack is installed.

OpenCrowbar was designed to deploy platforms like Ceph.  It has detailed knowledge of the physical infrastructure and sufficient orchestration to synchronize Ceph Mon cluster bring-up.

We are only at the start of the Ceph install journey.  Today, you can use the open source components to bring up a Ceph cluster in a reliable way that works across hardware vendors.  Much remains to optimize and tune this configuration to take advantage of SSDs, non-Centos environments and more.

We’d love to work with you to tune and extend this workload!  Please join us in the OpenCrowbar community.


by Rob H at May 18, 2015 06:13 PM

Alessandro Pilotti

The Open Enterprise Cloud – OpenStack’s Holy Grail?

The way people think about enterprise IT is changing fast, putting into question many common assumptions about how hardware and software should be designed and deployed. The upending of these long-held tenets of enterprise IT is happening simply due to the innovation brought on by OpenStack and a handful of other successful open source projects that have gained traction in recent years.

 

What is still unclear is how to deliver all this innovation in a form that can be consumed by customers’ IT departments without the need to hire an army of experienced DevOps engineers, a commodity as notoriously hard to find as unicorns and one that has a non-trivial impact on the TCO.

 

The complexity of an OpenStack deployment is not just perception or FUD spread by the unhappy competition. It’s a real problem that is sometimes ignored by those deeply involved in OpenStack and its core community. The industry is clearly waiting for the solution that can “package” OpenStack in a way that hides the inherent complexity of this problem domain and “just works”. They want something that provides user-friendly interfaces and management tools instead of requiring countless hours of troubleshooting.

 

This blog post is the result of our attempt to find and successfully productize this ‘Holy Grail’, featuring a mixture of open source projects that we actively develop and contribute to (OpenStack, Open vSwitch, Juju, MAAS, Open Compute) alongside Microsoft technologies such as Hyper-V that we integrate into Openstack and that are widely used in the enterprise world.

 

We are excited to be able to demonstrate this convergence of all the above technologies at our Cloudbase Solutions booth at the Vancouver Summit, where we shall be hosting an Open Compute OCS chassis demo.

The Open Enterprise Cloud

Objectives

Here are the prerequisites we identified for this product:

  • Full automation, from the bare metal to the applications
  • Open source technologies and standards
  • Windows/Hyper-V support, a requirement in the enterprise
  • Software Defined Networking (SDN) and Network Function Virtualization (NFV)
  • Scalable and inexpensive storage
  • Modern cloud optimized hardware compatible with existing infrastructures
  • Easy monitoring and troubleshooting tools
  • User friendly experience

 

Hardware

Let’s start from the bottom of the stack. The way in which server hardware has been designed and produced didn’t really change much in the last decade. But when the Open Compute Project kicked off it introduced a set of radical innovations from large corporations running massive clouds like Facebook.

Private and public clouds have requirements that differ significantly from what traditional server OEMs keep on offering over and over again. In particular, cloud infrastructures don’t require many of the features that you can find on commodity servers. Cloud servers don’t need complex BMCs beyond basic power actions and diagnostics (who needs a graphical console on a server anymore?) or too many redundant components (the server blade itself is the new unit of failure) or even fancy bezels.

 
 

Microsoft’s Open CloudServer (OCS) design, contributed to the Open Compute Project, is a great example. It offers a half rack unit blade design with a separate chassis manager in a 19” chassis with redundant PSUs, perfectly compatible with any traditional server room, unlike, for example, earlier 21” Open Compute Project server designs. The total cost of ownership (TCO) for this hardware is significantly lower compared to traditional alternatives, which makes this a very big incentive even for companies less prone to changes in how they handle their IT infrastructure.

Being open source, OCS designs can be produced by anyone, but this is an effort that only the larger hardware manufacturers can effectively handle. Quanta in particular is investing actively in this space, with a product range that includes the OCS chassis on display at our Vancouver Summit booth.

 

Storage

“The Storage Area Network (SAN) is dead.” This is something that we keep hearing, and if the SAN is experiencing a long twilight, it’s because vendors are still enjoying the profit margins it offers. SANs used to provide specialized hardware and software that has now moved to commodity hardware and operating systems. This move offers scalable and fault-tolerant options such as Ceph or the SMB3-based Windows Scale-Out File Server, both employed in our solution.

 

The OCS chassis offers a convenient way of hosting SAS, SATA or SSD storage in the form of “Just a Bunch of Disks” (JBOD) units that can be deployed alongside regular compute blades with the same form factor. Depending on the requirements, typically inexpensive mechanical disks can be mixed with fast SSD units.

 

Bare metal deployment

There are still organizations and individuals out there who believe that the only way to install an operating system is to connect a monitor, keyboard and mouse to a server, insert a DVD, configure it interactively and wait until it’s installed. In a cloud, whether private or public, there are dozens, hundreds or thousands of servers to deploy at once, so manual deployments do not work. Besides this, we need all those servers to be consistently configured, without the unavoidable human errors that manual deployments incur at scale.

 

That’s where the need for automated bare metal deployment comes in.

We chose two distinct projects for bare metal: MAAS and Ironic. We use MAAS (to which we contributed Windows support and imaging tools) to bootstrap the chassis and then deploy OpenStack with Juju, including storage and KVM or Hyper-V compute nodes. The user can freely decide at any time to redistribute the nodes among the individual roles, depending on how many compute or storage resources are needed.

We recently contributed support for the OCS chassis manager to Ironic, so users also have the choice of using Ironic, either in standalone mode or as part of an OpenStack deployment, to deploy physical nodes.

The initial fully automated chassis deployment can be performed from any laptop, server or “jump box” connected to the chassis’ network without the need to install anything. Even a USB stick with a copy of our v-magine tool is enough.

 

OpenStack

There are quite a few contenders in the IaaS cloud software arena, but none managed to generate as much interest as OpenStack, with almost all relevant names in the industry investing in its foundation and development.

There’s not much to say here that hasn’t been said elsewhere. OpenStack is becoming the de facto standard in private clouds, with companies like Canonical, RackSpace and HP basing their public cloud offerings on OpenStack as well.

OpenStack’s compute project, Nova, supports a wide range of hypervisors that can be employed in parallel on a single cloud deployment. Given the enterprise-oriented nature of this project, we opted for two hypervisors: KVM, which is the current standard in OpenStack, and Hyper-V, the Microsoft hypervisor (available free of charge). This is not a surprise as we have contributed and are actively developing all the relevant Windows and Hyper-V support in OpenStack in direct coordination with Microsoft Corporation.

The most common use case for this dual hypervisor deployment consists in hosting Linux instances on KVM, and Windows ones on Hyper-V. KVM support for Windows is notoriously shaky, while Windows Hyper-V components are already integrated in the OS and the platform is fully supported by Microsoft, making it a perfect choice for Windows. On the Linux side, while any modern Linux works perfectly fine on Hyper-V thanks to the Linux Integration Services (LIS) included in the upstream Linux kernel, KVM is still preferred by most users.
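As an aside, one common way to make this per-OS routing work (a sketch of a generally used technique, not necessarily the exact mechanism in this product) is to tag Glance images with a hypervisor_type property, so that Nova’s ImagePropertiesFilter schedules each instance onto the matching host type. The image names below are placeholders, and the property value must match what your compute driver reports:

# Illustrative only: route Windows images to Hyper-V hosts and Linux images to KVM hosts
glance image-update --property hypervisor_type=hyperv windows-server-2012-r2
glance image-update --property hypervisor_type=qemu ubuntu-14.04-server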

 

Software defined networking

Networking has enjoyed a large amount of innovation in recent years, especially in the areas of configuration and multi-tenancy. Open vSwitch (OVS) is by far the leader in this domain, commonly identified as software defined networking (SDN). We recently ported OVS to Hyper-V, allowing the integration of Hyper-V in multi-hypervisor clouds with VXLAN as a common overlay standard.

Neutron also includes support for Windows-specific SDN, for both VLAN and NVGRE overlays, in the ML2 plugin, which allows seamless integration with other solutions, including OVS.
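For illustration, a Neutron ML2 configuration that loads the OVS and Hyper-V mechanism drivers side by side could look roughly like the fragment below; the file path, driver aliases and VNI range are assumptions that depend on your Neutron release and packaging:

# Hypothetical /etc/neutron/plugins/ml2/ml2_conf.ini fragment (illustrative values)
[ml2]
type_drivers = vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,hyperv

[ml2_type_vxlan]
vni_ranges = 1000:2000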

 

Physical switches and open networking

Modern managed network switches provide computing resources that were simply unthinkable just a few years ago and today they’re able to natively run operating systems traditionally limited to server hardware.

Cumulus Linux, a network operating system for bare metal switches developed by Cumulus Networks, is a Linux distribution with hardware acceleration of switching and routing functions. The NOS seamlessly integrates with the host-based Open vSwitch and Hyper-V networking features outlined above.

Neutron takes care of orchestrating hosts and networking switches, allowing a high degree of flexibility, security and performance which become particularly critical when the size of the deployment increases.

 

Deploying OpenStack with Juju

Video: https://www.youtube.com/embed/lyA5bWklS0w
 
One of the reasons for OpenStack’s success lies in its flexibility: the ability to support a very large number of hypervisors, backend technologies, SDN solutions and so on. Most medium and large enterprise IT departments have already adopted some of those technologies and want OpenStack to employ them, with the result that there is no single “recommended” way to deploy your stack.

 

Automation, probably the leading glue in all modern datacenter technologies, doesn’t play that well with flexibility: the greater the flexibility, the greater the amount of automation code that needs to be written and tested, often requiring very complex deployments that soon become unfeasible for any continuous integration framework.

 

Puppet, Chef, SaltStack and similar configuration management tools are very useful when it comes to automating a specific scenario, but they are not particularly suitable for generic use cases, unless you add tools like RDO’s PackStack on top to orchestrate them. Finally, while command line tools are the bread and butter of every DevOps engineer, they don’t do much to provide a user-friendly experience that a more general user base can successfully employ without resorting to cloud specialists.

When looking for a suitable deployment and configuration solution, we recognized that Juju was fulfilling most of our requirements, with the exception of Windows and CentOS support which we contributed shortly afterwards. What we liked in particular is the strict decoupling between independent configurations (called Charms), and a killer GUI that makes this brave new automation world more accessible to less experienced users.

 

This model has the potential for a large impact on the usage spectrum, on productivity and on overall TCO reduction. Furthermore, Juju also offers a wide and fast-growing catalog of applications.
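As a rough illustration of what this looks like in practice, here is a hedged sketch of a Juju workflow using generic charms from the public charm store (service names and relations are examples only, not the exact bundle described in this post):

# Illustrative only: bootstrap the Juju state server (via MAAS in this setup),
# deploy a few OpenStack charms, relate them, then watch them converge.
juju bootstrap
juju deploy mysql
juju deploy rabbitmq-server
juju deploy keystone
juju deploy nova-cloud-controller
juju deploy nova-compute
juju add-relation keystone mysql
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller keystone
juju add-relation nova-compute nova-cloud-controller
juju status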

 

Applications and orchestration

People want applications, not virtual machines or containers. IaaS is nice to have, but what you do on top of it is what matters for most users. Juju comes to the rescue in this case as well, with a rich charms catalog. Additionally, we developed Windows specific charms to support all the main Microsoft related workloads: Active Directory, IIS, VDI, Windows Server Failover Clustering, SQL Server (including AlwaysOn), Exchange and SharePoint.

Besides Juju, we support Heat (providing many Heat templates for Windows, for example) and PaaS solutions like Cloud Foundry that can be easily deployed via Juju on top of OpenStack.

 

Cattle and Pets

Using the famous cattle vs pets analogy (a simplistic metaphor for what belongs to a cloud and what doesn’t), OpenStack is all about cattle. At the same time, a lot of enterprise workloads are definitely pets, so how can we create a product that serves both cases?

 

An easy way to distinguish pets and cattle is that pets are not disposable and require fault tolerant features at the host level, while cattle instances are individually disposable. Nova, OpenStack’s compute project, does not support pets, which means that failover cluster features are not available natively.

We solved this issue by adding one extra component that integrates Nova with Microsoft Windows Failover Clustering when using Hyper-V. Other components, including storage and networking, are already fully redundant and fault tolerant, so this additional feature allows us to provide proper transparent support for pets without changes in the way the user manages instances in OpenStack. Cattle keep grazing unaffected.

 

Conclusions

Finding a reliable way to deploy OpenStack, and managing it in all its inherent complexity with a straightforward and simple user experience, is the ‘Holy Grail’ of today’s private cloud business. At Cloudbase Solutions, we believe we have succeeded in this ever elusive quest by consolidating the leading open source technologies, from bare-metal provisioning right up to the applications at the top of the stack, including support for enterprise Windows and open source workloads deployed with Canonical’s Juju, all working in perfect harmony.

The advantage for the user is straightforward: an easy, reliable and affordable way to deploy a private cloud, avoiding vendor lock-in, supporting Microsoft and Linux Enterprise workloads and bypassing the need for an expensive DevOps team on payroll.

Want to see a live demo? Come to our booth at the OpenStack Summit in Vancouver on May 18th-22nd!

 

The post The Open Enterprise Cloud – OpenStack’s Holy Grail? appeared first on Cloudbase Solutions.

by Alessandro Pilotti at May 18, 2015 04:00 PM

Cloudbase Solutions and Canonical partner to deliver Windows Hyper-V support to BootStack managed OpenStack clouds

Vancouver – OpenStack Summit – 18 May 2015: Cloudbase Solutions, the developer of Windows components in OpenStack, announced today that it is partnering with Canonical to enable their customers to run KVM and Hyper-V environments side-by-side in the same managed cloud. Cloudbase Solutions believes Windows and open source interoperability is essential for the adoption of managed clouds by enterprises.

Delivered by BootStack, the managed private cloud service from Canonical, this new capability allows customers to run Windows virtual workloads on Hyper-V and Linux workloads on Ubuntu hosts, with seamless networking between Linux and Windows application components.

Enterprise users frequently employ Active Directory for identity management. BootStack now allows the integration of Keystone (OpenStack’s identity component) with Active Directory, either by leveraging an existing onsite domain or by provisioning a new fault tolerant Active Directory forest and domain.

Networking between Ubuntu and Hyper-V hosts is based on modern overlay standards provided by Open vSwitch (OVS) with VXLAN, VLANs and soon NVGRE on Microsoft’s native networking stack, fully integrated in Neutron. Open vSwitch comes natively in Ubuntu and has recently been ported to Hyper-V thanks to Cloudbase Solutions, VMware and the other members of the community.

Since its launch in 2014, BootStack has been adopted rapidly by organisations looking to benefit from the agility of OpenStack without the need to worry about security updates, managing complex upgrades or alerts monitoring. BootStack is the only fully managed OpenStack cloud that’s SLA-backed and supported end-to-end.

 

Cloudbase Solutions has also contributed Windows support to Juju and MAAS, Canonical’s award-winning cloud automation tools, allowing the same level of automation, fault tolerance and user experience that Juju provides on Ubuntu. Cloudbase Solutions’ Juju charms are available for Hyper-V, Active Directory, Windows Scale-Out File Server Storage, Nagios, Windows Server Update Services (WSUS) and many other Microsoft based application workloads from the Charm Store.

Alessandro Pilotti, CEO Cloudbase Solutions said, “As OpenStack is maturing, a large market opportunity is opening up for bringing together the open source experience provided by Canonical and OpenStack with the Windows-based IT found in most enterprises. BootStack, along with our Hyper-V and Windows automation plus support, is the perfect managed product to achieve this goal”.

Arturo Suarez, Product Manager Canonical said, “We are committed to bringing the widest range of options to all the different levels of the stack, including the hypervisor. Our focus is ease of use and reliability and so by partnering with Cloudbase, Canonical delivers the many benefits of OpenStack to Microsoft workloads in the form of BootStack, our fully managed service offered worldwide.”

Canonical and Cloudbase Solutions are exhibiting at OpenStack Summit, Vancouver, 18 – 22 May 2015. Come visit us at booth P3 (Canonical) and booth T64 (Cloudbase Solutions) for more detail.

 

About Cloudbase Solutions

Cloudbase Solutions is a privately held company dedicated to cloud computing and interoperability, with two offices in Romania and a soon-to-be-opened one in the USA.

 

Cloudbase Solutions’ mission is to bridge the modern enterprise and cloud computing worlds by bringing OpenStack to Windows based infrastructures. This effort starts with developing and maintaining all the crucial Windows and Hyper-V OpenStack components and culminates with a product range which includes orchestration for Hyper-V, SQL Server, Active Directory, Exchange and SharePoint Server via Juju charms and Heat templates. Furthermore, to solve the perceived complexity of OpenStack deployments, Cloudbase Solutions developed v-magine, bringing a reliable, fast and easy bare-metal deployment model to hybrid and multi hypervisor OpenStack clouds with mixed compute, SDN and storage requirements, ranging from proof of concepts to large scale infrastructures.

 

For more information on Cloudbase Solutions please contact Diana Macau – dmacau@cloudbasesolutions.com

 

About Canonical

Canonical is the commercial sponsor of the Ubuntu project and the leading provider of enterprise services for Ubuntu cloud deployments. Ubuntu delivers reliability, performance and interoperability to cloud and scale out environments. Telcos and cloud service providers trust Ubuntu for OpenStack and public cloud and it is used by global enterprises such as AT&T, Comcast, Cisco WebEx, Deutsche Telekom, Ericsson, China Telecom, Korea Telecom, NEC, NTT, Numergy and Time Warner Cable.

 

Canonical’s tools Juju and MAAS raise the bar for scale-out modeling and deployment in cloud environments. With developers, support staff and engineering centres all over the world, Canonical is uniquely positioned to help its partners and enterprise customers make the most of Ubuntu. Canonical is a privately held company.

 

For more information on Canonical and Ubuntu please contact Sarah Whebble, March PR – ubuntu@marchpr.com

The post Cloudbase Solutions and Canonical partner to deliver Windows Hyper-V support to BootStack managed OpenStack clouds appeared first on Cloudbase Solutions.

by Alessandro Pilotti at May 18, 2015 04:00 PM

Daniel P. Berrangé

Deprecating libvirt / KVM hypervisor versions in OpenStack Nova

If you read nothing else, just take note that in the Liberty release cycle Nova has deprecated usage of libvirt versions < 0.10.2, and in the Mxxxxx release cycle support for running with libvirt < 0.10.2 will be explicitly dropped.

OpenStack has a fairly aggressive policy of updating the minimum required versions of python modules it depends on. Many python modules are updated pretty frequently and (bitter) experience has shown that updates will often not be API compatible, even across seemingly minor version number changes. Maintaining working OpenStack code across different incompatible versions of the same module is tricky to get right and will inevitably be fragile without good testing coverage. While OpenStack has a huge level of testing, it cannot reasonably be expected to track the matrix of different incompatible python module versions. So (reluctantly) I accept that OpenStack has chosen the only practical approach, which is to increase the minimum required version of a library any time it is found to be incompatible with an older version, or whenever there is a new feature required that is only present in a newer version. Now this does create pain in that the versions of python modules shipped in most distros are going to be too old to satisfy OpenStack’s needs. Thus when deploying OpenStack the distro provided versions must be updated to something newer. Fortunately, most OpenStack deployment tools mitigate the pain for users by taking ownership of installation and management of the full python stack, whether a 3rd party module or an OpenStack provided module, and this works pretty well in general.

It is important to contrast this with the situation found for dependencies on non-python modules, and in particular for Nova, the hypervisor platform that is targeted. While OpenStack does get some testing coverage of the hypervisor control plane, it is inconsequential when placed in the context of testing done by the hypervisor vendors themselves. The vendors will of course have tested the control plane themselves, both directly and often in the context of higher level apps such as oVirt and OpenStack. Beyond that though, the vendors will test a whole suite of guest operating systems to ensure they deploy and operate in a functionally correct manner. For Windows guests, there will be certifications of accelerated guest drivers via WHQL and the OS as a whole with Microsoft’s SVVP. The vendor will benchmark and validate the scalability and performance of the hypervisor on a multitude of compute workloads, and against various different storage and network technologies. For government related deployments, the platform will go through Common Criteria Certifications and security audits. Finally, of course, the vendor will have a team of people maintaining the version they ship, most critically to deal with security errata. I should note that I’m thinking about Open Source hypervisors primarily here and the difference between upstream releases and productized downstream releases. For closed source hypervisors you only ever get access to the productized release.

This is all a long-winded way of saying that it is a very hard sell for OpenStack to require users to update their hypervisor versions to something OpenStack has tested, in preference to the version that the vendor ordinarily ships & supports. The benefit of OpenStack’s testing of the hypervisor control plane does not come anywhere close to offsetting the costs of losing the testing, certification & support work that the vendor has put into the hypervisor platform as a whole. There are also costs suffered directly by the user with regard to platform upgrades, as distinct from application upgrades. It is fairly common for organizations to go through their own internal build and certification process when deploying a new operating system and/or hypervisor platform. This will include jobs such as integrating with their network services, particularly authentication & authorization engines, service monitoring frameworks, auditing systems and backup services. In addition the OS/hypervisor is also likely to undergo testing against any hardware platforms/models that the organization may have standardized on. It may take as long as 6 months, or even 12, before some organizations are ready to deploy a new hypervisor platform released by a vendor. Once an organization has deployed a platform, they will naturally wish to maximise its useful lifetime before upgrading to newer versions. This is in stark contrast to applications that an organization runs on the platforms, which may be upgraded very frequently in a matter of weeks or even days. It is sad that there can be such time lags for platforms but not applications, but unfortunately this is just the way many organizations’ IT support works.

For these reasons, OpenStack needs to take a different approach to hypervisor platforms, and be pretty conservative about updating the minimum required version. The costs on users will be quite large and not something that can be mitigated by deployment tools that OpenStack can provide, unless the organization is one of the minority that is nimble enough to cope with a continuous deployment model and has enough in house expertise to take on a degree of hypervisor maintenance. In cases where Nova does wish to update the minimum required version there needs to be a fairly compelling set of benefits to Nova that outweigh the costs that will be imposed on the downstream users. Mere prettiness / cleanliness of the code is exceedingly unlikely to count as a compelling benefit.

Looking specifically at the Libvirt + KVM platform dependency in Nova, back in November 2013 we increased the minimum required libvirt from 0.9.6 to 0.9.11. This had the cost of dropping the ability to run Nova on the (then current) Ubuntu LTS platform. This cost was largely mitigated by the fact that Canonical provide the Cloud Archive add-on repository which ships newer libvirt and KVM versions specifically for use with OpenStack, so users had an easy way out in that case. The compelling benefit to Nova, though, was that it enabled OpenStack to depend on the new libvirt-python module that had been split off from the main libvirt package and made available on PyPI. This made it possible for OpenStack testing to set up virtualenvs with specific libvirt python versions, in common with its approach for any other python modules. More importantly, this new libvirt-python has support for the Python 3 platform, thus unblocking that porting item for Nova. As a result, the upgrade from 0.9.6 to 0.9.11 was a clear net win on balance.
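For illustration, this is roughly what the split-out module enables for test tooling (the version below is only an example, and the libvirt C library plus development headers still need to be present on the host for the binding to build):

virtualenv .venv
. .venv/bin/activate
pip install "libvirt-python==1.2.9"
python -c "import libvirt; print(libvirt.getVersion())"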

The benefit of increasing the min required libvirt to values beyond 0.9.11 is harder to articulate. It would enable removal of a few workarounds in OpenStack but nothing that is imposing an undue burden on Nova Libvirt driver maintenance at this time. Mostly the problem with older versions is that they simply lack a lot of functionality compared to current versions, so there will be an increasingly large set of OpenStack features which will not work at all on such old versions. They also get comparatively less testing by developers, vendors and users alike, so as time goes by we’re less likely to spot incompatibilities with old versions which will ultimately affect the experience users have when deploying OpenStack. It is less clear cut where to draw the line in these cases, though. To help guide our decision making, a list of currently shipping libvirt, kvm and libguestfs versions across distros is maintained. For the community-focused distros with short lifetimes (short == less than 2 years from release to end-of-life), it is quite simple to just drop them as supported targets when they go end of life. So from the POV of Fedora, at the time of writing, we’ll only care about Nova supporting Libvirt >= 1.1.3. For the enterprise-focused distros with long lifetimes (long == more than 2 years, often 5-10 years), it is hard to decide when to drop them as a supported target. As mentioned earlier, enterprise organizations will typically have quite a time lag between a new release coming out and it being something that is widely deployed. Despite RHEL-7 having been available since June 2014, it is not uncommon for organizations to still be using RHEL-6 for new platform deployments. Officially, RHEL-6 is a platform supported by Red Hat until at least 2020, but clearly Nova will not wish to continue targeting it for that length of time. So there is a question of when it is reasonable for Nova to end support for the RHEL-6 platform. Nova already dropped support for Python 2.6, so RHEL-6 users will need to use the Software Collections Layer to get Python 2.7 access, and Red Hat’s OpenStack product is now RHEL-7 based only, so clearly Nova on RHEL-6 is entering its twilight years.

Looking at the current distro support matrix for libvirt versions, it was decided that support for Debian Wheezy and OpenSuse 12.2 was reasonable to drop, but at this time Nova will continue to support RHEL-6 vintage libvirt. To provide users with greater advance notice, it was agreed that dropping libvirt/kvm versions should require issuance of a deprecation warning for one release cycle. So in the Liberty release, Nova will now print out a warning if run on libvirt < 0.10.2, and in the Mxxxx release cycle this will turn into a fatal error. So anyone currently deployed on libvirt 0.9.11 -> 0.10.1 has advance warning to plan for an upgrade of their hypervisor platform. I suspect that RHEL-6 may well get the chop one cycle later, e.g. we’d issue a warning in the Mxxx release and drop it in the Nxxxx release, as RHEL-7 would have been available for 2 years by that point and should be taking the overwhelming majority of KVM hypervisor deployments.
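If you want to check where an existing compute node stands relative to the new minimum, the standard libvirt commands are enough (shown here purely as a convenience):

virsh --version      # libvirt client library version
libvirtd --version   # libvirt daemon version
virsh version        # also reports the QEMU/KVM hypervisor version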

One of the things to come out of the discussion around incrementing the libvirt minimum version was that we haven’t really articulated what our policy is in this area. As one of the lead maintainers of the Nova libvirt driver, I have written this blog post as an attempt to set out my views on the matter. As you can see there is no simple answer, but the intent is to be as conservative as practical to minimize the number of users who are likely to be impacted by decisions to increase the minimum version. It also became clear that we need to do a better job of articulating our approach to required platform versions to users in documentation. Previously there had been an attempt to categorize Nova hypervisor platforms/drivers into three groups, primarily according to the level of testing they have in the OpenStack or 3rd party CI systems. The intention behind this is fine, but the usefulness to users is somewhat limited because OpenStack CI obviously only tests a handful of very specific hypervisor platforms. So this classification gives you confidence that a Nova driver has been tested, but not confidence that it has been tested with your particular versions. So functionality that OpenStack claims is tested & operational may not be available on your platform due to version differences. To address this, OpenStack needs to provide more detailed information to users; in particular, it must distinguish between what versions of a hypervisor Nova is technically capable of running against, vs. the versions of a hypervisor that have been validated by CI. Armed with this knowledge, where those versions differ, it is reasonable for the user to look to their hypervisor vendor for confirmation that their own testing can provide an equivalent level of assurance to the OpenStack CI testing. The user also has the option of running the OpenStack CI tests themselves against their own specific deployment platform. On the theme of providing users with more information about hypervisor capabilities, the Nova feature support matrix which was previously held in a wiki has been turned into a piece of formal documentation maintained in Nova itself. The intent is to continue to expand this to provide more fine-grained information about features and eventually annotate them with any caveats about minimum required versions of the hypervisor in the associated notes for each feature item.

by Daniel Berrange at May 18, 2015 03:57 PM

Steve Hardy

Debugging TripleO Heat templates

Lately, I've been spending increasing amounts of time working with TripleO heat templates, and have noticed some recurring aspects of my workflow whilst debugging them which I thought may be worth sharing.

For the uninitiated, TripleO is an OpenStack deployment project, which aims to deploy and manage OpenStack using standard OpenStack APIs.  In practice, this means using Nova and Ironic for baremetal node provisioning, and Heat to orchestrate the deployment and configuration of the nodes.

The TripleO heat templates, unlike most of the heat examples, are pretty complex.  They make extensive use of many "advanced" features, such as nested stacks, using provider resources via the environment and also many software config resources.

This makes TripleO a fairly daunting target for those wishing to modify and/or debug the TripleO templates.

Fortunately TripleO templates, although large, have many repeated patterns, and good levels of abstraction and modularity.  Combined with some recently added heat interfaces, it becomes rapidly less daunting, as I will demonstrate in the worked example below:



 

Step 1: Create the Stack


So, step 1 when deploying OpenStack via TripleO is to do a "heat stack-create".  Whether you create the heat stack directly via python-heatclient (which is what the TripleO "devtest" script calls), or indirectly via some other interface such as tuskar-ui, the end result is the same - a heat stack is created (it's called "overcloud" by default):

$ heat stack-create -e /home/shardy/tripleo/overcloud-env.json -e /home/shardy/tripleo/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml -t 360 -f /home/shardy/tripleo/tripleo-heat-templates/overcloud-without-mergepy.yaml -P ExtraConfig= overcloud



+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| e4cfc4a8-d9e9-4033-8556-5ebca84c1455 | overcloud  | CREATE_IN_PROGRESS | 2015-04-20T11:05:53Z |
+--------------------------------------+------------+--------------------+----------------------+

 

Step 2: Oh No - CREATE_FAILED!

Ok, it happens - sometimes you have a fault in your environment, a bug in your templates, or just get bitten by a regression in one of the projects used to deploy your overcloud.

Unfortunately that modularity I just mentioned in the templates leads to a level of additional complexity when debugging - the tree of resources created by heat is actually grouped into nearly 40 nested stacks! (In my environment, this number is dependent on the number of nodes you're deploying).

You can see them all, including which one failed, with heat stack-list, using the --show-nested option, and your choice of either grep "FAILED" or the -f filter option to python heatclient:

$ heat stack-list --show-nested -f "status=FAILED"
+--------------------------------------+----------------------------------------------------------------------------------------------------------+---------------+----------------------+--------------------------------------+
| id                                   | stack_name                                                                                               | stack_status  | creation_time        | parent                               |
+--------------------------------------+----------------------------------------------------------------------------------------------------------+---------------+----------------------+--------------------------------------+
| e4cfc4a8-d9e9-4033-8556-5ebca84c1455 | overcloud                                                                                                | CREATE_FAILED | 2015-04-20T11:05:53Z | None                                 |
| 36f3ef93-872f-460b-bd6a-14a89569d5a7 | overcloud-ControllerNodesPostDeployment-rl67kiqu7pbp                                                     | CREATE_FAILED | 2015-04-20T11:09:18Z | e4cfc4a8-d9e9-4033-8556-5ebca84c1455 |
| 28d1fd38-85ba-442b-9e57-859731349e94 | overcloud-ControllerNodesPostDeployment-rl67kiqu7pbp-ControllerDeploymentLoadBalancer_Step1-tnsuslbx5hu7 | CREATE_FAILED | 2015-04-20T11:09:20Z | 36f3ef93-872f-460b-bd6a-14a89569d5a7 |
+--------------------------------------+----------------------------------------------------------------------------------------------------------+---------------+----------------------+--------------------------------------+


Here, we can derive some useful information by looking at the stack names; note that in all cases we can disregard the randomly generated suffix on the stack names (heat adds it internally for nested stack resources).

  • overcloud is the top-level stack, the parent at the top of the tree.  This is defined by the overcloud-without-mergepy.yaml template which we passed to heat stack-create.
  • ControllerNodesPostDeployment-rl67kiqu7pbp is the nested stack which handles post-deployment configuration of all Controller nodes.  This is the ControllerNodesPostDeployment resource, defined by the overcloud resource registry as the implementation of the OS::TripleO::ControllerPostDeployment type, which is a provider resource alias for this template when using the puppet implementation.
  • The final (verbosely named!) stack maps to the ControllerDeploymentLoadBalancer_Step1 resource in controller-post-puppet.yaml.

All of this is a long-winded way of saying that something went wrong applying a puppet manifest, via an OS::Heat::StructuredDeployments resource (ControllerDeploymentLoadBalancer_Step1) - anything with "Deployment" in the name failing is highly likely to mean the same thing.


Armed with this information, we can proceed to figure out why :)

 

Step 3: Resource Introspection

So we now know which nested stack failed, but not which resource, or why.

There are a couple of ways to find this out: you can either use the steps outlined in my previous post about nested resource introspection, or (if you're lazy like me) you can use the heat resource-list --nested-depth option to save some time:

$ heat resource-list --nested-depth 5 overcloud | grep FAILED
| ControllerNodesPostDeployment               | 36f3ef93-872f-460b-bd6a-14a89569d5a7          | OS::TripleO::ControllerPostDeployment             | CREATE_FAILED   | 2015-04-20T11:05:53Z |                                        |
| ControllerDeploymentLoadBalancer_Step1      | 28d1fd38-85ba-442b-9e57-859731349e94          | OS::Heat::StructuredDeployments                   | CREATE_FAILED   | 2015-04-20T11:09:19Z | ControllerNodesPostDeployment          |
| 0                                           | 980137bc-21b1-460c-9d4a-488cb5611a6c          | OS::Heat::StructuredDeployment                    | CREATE_FAILED   | 2015-04-20T11:09:20Z | ControllerDeploymentLoadBalancer_Step1 |

Here, we can see several things:
  • ControllerDeploymentLoadBalancer_Step1 has failed; it's an OS::Heat::StructuredDeployments resource.  StructuredDeployments (plural) resources apply a heat StructuredConfig/SoftwareConfig to a group of servers.
  • There's a "0" resource, which is an OS::Heat::StructuredDeployment (singular) type.  The parent resource (last column) of this is ControllerDeploymentLoadBalancer_Step1. This is because a SoftwareDeployments resource creates a nested stack with a (sequentially named) SoftwareDeployment per server (in this case, one per Controller node in the OS::Heat::ResourceGroup defined as "Controller" in the overcloud-without-mergepy template).

Now, we can do a resource-show to find out the reason for the failure.  Here, we use the ID of ControllerDeploymentLoadBalancer_Step1 as the stack ID, because all nested stack resources set their ID to that of the stack they create:


$ heat resource-show 28d1fd38-85ba-442b-9e57-859731349e94 0 | grep resource_status_reason
| resource_status_reason | Error: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6       



So, to summarize what we've discovered so far

  • A SoftwareDeployment (in this case a puppet run) failed on Controller node 0
  • The thing it was running exited with status code 6.  
The next step is to look at the logs to work out why..

 

Step 4: Debugging the failure

When a Heat SoftwareDeployment resource is triggered, it runs something on the node (e.g. applying a puppet manifest), then signals either success or failure back to Heat.  Fortunately, in recent versions of Heat, there is an API which exposes this information (in a more verbose way than the resource-show output above, with the reason for failure):

To access it, you need the ID of the deployment (e.g. 980137bc-21b1-460c-9d4a-488cb5611a6c from the heat resource-list above):



heat deployment-show 980137bc-21b1-460c-9d4a-488cb5611a6c
{
  "status": "FAILED",
  "server_id": "6a025200-b20e-47df-ae4c-97a54499b586",
  "config_id": "b924d133-42d7-48ab-b2c9-7311de3b3ca4",
  "output_values": {
    "deploy_stdout": "<stdout of command>,
    "deploy_stderr": "<stderr of command>",
    "deploy_status_code": 6
  },
  "creation_time": "2015-04-20T11:09:20Z",
  "updated_time": "2015-04-20T11:10:02Z",
  "input_values": {},
  "action": "CREATE",
  "status_reason": "deploy_status_code : Deployment exited with non-zero status code: 6",
  "id": "980137bc-21b1-460c-9d4a-488cb5611a6c"
}

I've not included the full stderr/stdout because it's pretty long, but it's basically the same information that you get from SSHing onto the node and looking at the logs.

If you still want to do that, you can use "nova show"  with the "server_id" above to get the IP of the node, SSH in and do further investigations.
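For example, something along these lines (the user name and IP address are illustrative; TripleO overcloud images typically use the heat-admin user):

# server_id taken from the deployment-show output above
nova show 6a025200-b20e-47df-ae4c-97a54499b586 | grep -iE "name|network"
ssh heat-admin@192.0.2.10                  # IP from the nova show output
sudo journalctl -u os-collect-config       # or whichever agent/service logs apply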

In Summary...

So those paying attention will have spotted that this all really boils down to two steps:


  1.  Use heat resource-list with the --nested-depth option to get the failing resource.  The one you want is the one which isn't the parent_resource to any other and is in a FAILED state.
  2. Investigate what caused the failure; for failing SoftwareDeployment resources, heat deployment-show is a useful time-saver which avoids always needing to log on to the node (both commands are condensed below).
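Putting those two steps together, the whole flow condenses to (IDs are from the worked example above):

heat resource-list --nested-depth 5 overcloud | grep FAILED
heat deployment-show 980137bc-21b1-460c-9d4a-488cb5611a6c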

Hopefully this somewhat demystifies the debugging of TripleO templates, and other large Heat deployments which use similar techniques such as nested stacks and SoftwareConfig resources!




by Steve Hardy (noreply@blogger.com) at May 18, 2015 03:19 PM

Opensource.com

The OpenStack Summit kicks off in Vancouver, and other OpenStack news

The Opensource.com weekly look at what is happening in the OpenStack community and the open source cloud at large.

by Jason Baker at May 18, 2015 07:00 AM

Cloudify Engineering

A Multi-Network Friendly Openstack VM Image with Netplugd

A multi-network friendly Openstack VM image with netplugd A while (long while. Sorry about that) ago, I posted about setting...

May 18, 2015 12:00 AM

A Multi-Network Friendly Openstack VM Image with Neutron & Netplugd

A multi-network friendly Openstack VM image with netplugd A while (long while. Sorry about that) ago, I posted about setting...

May 18, 2015 12:00 AM

May 17, 2015

Steve Hardy

TripleO Heat templates Part 3 - Cluster configuration, introduction/primer

In my previous two posts I covered an overview of TripleO template roles and groups, and specifics of how initial deployment of a node happens.  Today I'm planning to introduce the next step of the deployment process - taking the deployed groups of nodes, and configuring them to work together as clusters running the various OpenStack services encapsulated by each role.

This post will provide an introduction to the patterns and Heat features used to configure the groups of nodes, then in the next instalment I'll dig into the specifics of exactly what configuration takes place in the TripleO heat templates.




Recap - the deployed group of servers


So, we're continuing from where we got to at the end of the last post - we've deployed a ResourceGroup containing several OS::TripleO::Controller resources, which in turn have deployed a nova server, and done some initial configuration of it.

What comes next is configuring the whole group, or cluster, to work together, e.g configuring the OpenStack services running on the controller.

Group/Cluster configuration with Heat


Similar to the SoftwareDeployment (singular) resources described in my previous post, Heat supports applying a SoftwareConfig to a group of servers via the SoftwareDeployments and StructuredDeployments (plural) resources.  The function of both is basically the same: one works with a SoftwareConfig resource and the other with a StructuredConfig resource.




Typically (in TripleO at least) StructuredDeployments resources are used in combination with a ResourceGroup containing some servers.  You pass a list of servers to configure (provided via an attribute from the OS::Heat::ResourceGroup resource), and a reference to a StructuredConfig resource.

The StructuredConfig resource defines the configuration to apply to each server, and the StructuredDeployments resource then internally creates a series of StructuredDeployment (singular) resources, one per server.

When all of the deployment (singular) resources complete, the deployments (plural) resource goes CREATE_COMPLETE - if any of the nested deployment resources fail, the deployments resource will go into a FAILED state.

Debugging groups of deployments


You may notice that the StructuredDeployments resource above looks a lot like the ResourceGroup containing the OS::TripleO::Controller resources - this is no coincidence: internally, heat actually creates a ResourceGroup containing the StructuredDeployment resources.

This is a useful fact to remember when debugging, because it means you can use the techniques I've previously described to inspect the individual Deployment resources created by the StructuredDeployments resource, e.g. you can use heat deployment-show <id> to help diagnose a problem with a failing deployment inside the StructuredDeployments group (which is often quicker and more convenient than SSHing onto the failing node and trawling the logs).


For example, here's a simple bash script which dumps out details about all of the Deployment resources in an overcloud, obviously you can add in a "grep FAILED" here if you just want to see details about failing deployments:

#!/bin/bash
# Walk all nested stacks of the overcloud and show the details of every
# Software/StructuredDeployment resource via heat deployment-show.
while read -r line
do
  # The resource-list rows are pipe-delimited; the leading "|" makes cut
  # field 1 empty, so: 2=resource name, 3=physical id, 7=parent resource.
  deployment_name=$(echo $line | cut -d"|" -f2)
  deployment_id=$(echo $line | cut -d"|" -f3)
  parent_name=$(echo $line | cut -d"|" -f7)
  echo "deployment=$deployment_name ($deployment_id) parent $parent_name"
  heat deployment-show $deployment_id
  echo "---"
done < <(heat resource-list --nested-depth 5 overcloud | grep "OS::Heat::\(Software\|Structured\)Deployment ")


We should probably add a python-heatclient feature which automates this lookup (particularly for failing deployments), but right now that is one way to do it.

 

Until next time..!

So here we've covered the basics of how Heat can be used to configure groups of servers, and we've illustrated how that pattern is applied in the TripleO templates.

The TripleO templates use this technique for all roles, to do multiple configuration passes during the deployment - in the next post I'll cover specifics of how this works in detail, but for now you can check out the TripleO heat templates and hopefully see this pattern for yourself.  Note that it's combined with provider resource abstractions as previously discussed, which as we will see makes for a nicely abstracted approach to cluster configuration which is pretty easy to modify, extend, or plug in alternative implementations.

by Steve Hardy (noreply@blogger.com) at May 17, 2015 08:52 PM

Maish Saidel-Keesing

Integrating OpenStack into your Jenkins workflow

This is a re-post of my interview with Jason Baker of opensource.com

Continuous integration and continuous delivery are changing the way software developers create and deploy software. For many developers, Jenkins is the go-to tool for making CI/CD happen. But how easy is it to integrate Jenkins with your OpenStack cloud platform?

Meet Maish Saidel-Keesing. Maish is a platform architect for Cisco in Israel focused on making OpenStack serve as a platform upon which video services can be deployed. He works to integrate a number of complementary solutions with the default out-of-the-box OpenStack project and to adapt Cisco's projects to have a viable cloud deployment model.

At OpenStack Summit in Vancouver next week, Maish is giving a talk called: The Jenkins Plugin for OpenStack: Simple and Painless CI/CD. I caught up with Maish to learn a little more about his talk, continuous integration, and where OpenStack is headed.

Interview


Without giving too much away, what can attendees expect to learn from your talk?

The attendees will learn about the journey that we went through 6-12 months ago, when we looked at using OpenStack as our compute resource for the CI/CD pipeline for several of our products. I'll cover the challenges we faced, why other solutions were not suitable, and how we overcame these challenges with a Jenkins plugin that we developed for our purposes, which we are open sourcing to the community at the summit.


What effects has CI/CD had on the development of software in recent years?

I think that CI/CD has allowed software developers to provide a better product for their customers. In allowing them to continuously deploy and test their software, they can provide better code. In addition, it has brought the developers closer to the actual deployments in the field. In the past, there was a clear disconnect between the people writing the software and those who deployed and supported it at the customer.

How can a developer integrate OpenStack into their Jenkins workflow?

Using the plugin we developed, it is very simple to integrate an OpenStack cloud as part of the resources that can be consumed in your Jenkins workflow. All users will need to provide is a few parameters, such as endpoints, credentials, etc., and they will be able to start deploying to their OpenStack cloud.

How is the open source nature of this workflow an advantage for the organizations using it?

An open source project always has the benefit of having multiple people contributing and improving the code. It is always a good thing to have another view on a project with a fresh outlook. It improves the functionality, the quality and the overall experience for everyone.

Looking more broadly to the OpenStack Summit, what are you most excited about for Vancouver?

First and foremost, I look forward to networking with my peers. It is a vibrant and active community.

I would also like to see some tighter collaboration between the operators, the User Committee, the Technical Committee, and the projects themselves to understand what the needs are of those deploying and maintaining OpenStack in the field and to help them to achieve their goals.

One of the major themes I think we will see from this summit will be the spotlight on companies, organizations and others using the products. We'll see why they moved, and how OpenStack solves their problems. Scalability is no longer in question: scaling is a fact.

Where do you see OpenStack headed, in the Liberty release and beyond?

The community has undergone a big change in the last year, trying to define itself in a clearer way: what is OpenStack, and what it is not.

I hope that all involved continue to contribute, and that the projects focus more on features and problems that are fed to them from the field. It is a fine line to define, and usually not a clear one, but something that OpenStack (and all those who consider themselves part of the OpenStack community) have to address and solve, together.

by Maish Saidel-Keesing (noreply@blogger.com) at May 17, 2015 09:00 AM

Assaf Muller

Testing Lightning Talk

I’m giving a lightning talk in the OpenStack Vancouver Neutron design summit. It’s a 5 minute talk about testing, common pitfalls and new developments with respect to testing frameworks.

Download slides

Slides: https://docs.google.com/presentation/d/1SLFoLWXCKE3qRfzrKH3_dTK62--V5QfltFCfYnY_b-E/embed?start=false&loop=false&delayms=60000

by assafmuller at May 17, 2015 04:42 AM

Adam Spiers

Cloud rearrangement for fun and profit

In a populated compute cloud, there are several scenarios in which it’s beneficial to be able to rearrange VM guest instances into a different placement across the hypervisor hosts via migration (live or otherwise). These use cases typically fall into three categories:

  1. Rebalancing – spread the VMs evenly across as many physical VM host machines as possible (conceptually similar to vSphere DRS). Example use cases:
  2. Consolidation – condense VMs onto fewer physical VM host machines (conceptually similar to vSphere DPM). Typically involves some degree of defragmentation. Example use cases:
  3. Evacuation – free up physical servers:

Whilst one-shot manual or semi-automatic rearrangement can bring immediate benefits, the biggest wins often come when continual rearrangement is automated. The approaches can also be combined, e.g. first evacuate and/or consolidate, then rebalance on the remaining physical servers.

Other custom rearrangements may be required according to other IT- or business-driven policies, e.g. only rearrange VM instances relating to a specific workload, in order to increase locality of reference, reduce latency, respect availability zones, or facilitate other out-of-band workflows or policies (such as data privacy or other legalities).

In the rest of this post I will expand this topic in the context of OpenStack, talk about the computer science behind it, propose a possible way forward, and offer a working prototype in Python.

If you’re in Vancouver for the OpenStack summit which starts this Monday and you find this post interesting, ping me for a face-to-face chat!

VM placement in OpenStack: present and future

It is clear from the diversity of the use cases listed above that VM placement policies are likely to vary greatly across clouds, and sometimes even within a single cloud. OpenStack Compute (nova) has fairly sophisticated scheduling capabilities which can be configured to implement some of the above policies on an incremental basis, i.e. every time a VM instance is started or migrated, the destination VM host can be automatically chosen according to filters and weighted cost functions. However, this approach is somewhat limited for migration, because the placement policies are only considered one migration at a time. Like in Tetris, more efficient arrangements can be achieved by thinking further ahead!
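For context, that incremental per-instance policy is expressed in nova.conf as filter and weigher settings; the fragment below is a hedged example using Kilo-era option names, which may differ in other releases:

# Hypothetical nova.conf fragment (illustrative values)
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
# A positive multiplier spreads VMs across hosts; a negative value packs
# them onto fewer hosts instead.
ram_weight_multiplier = 1.0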

OpenStack clouds can already be segregated in various ways via cells, regions, availability zones, host aggregates, and server groups, and so there are various techniques available for implementing different placement policies to different areas of your cloud. For example you could have groups of hosts dedicated to CPU-intensive or I/O-bound workloads using an anti-affinity placement policy, and other groups dedicated to light workloads using an affinity placement policy. But these policies are static in nature (e.g. nova filters and schedulers are configured by the cloud administrator in nova.conf), which somewhat limits their flexibility.

OpenStack is still relatively new; however, with Cinder and Neutron rapidly evolving, it is at the point where a VM’s network and storage dependencies can be live migrated along with the workload in a near-seamless fashion. So now we should be able to start developing mechanisms for implementing more sophisticated placement policies, where not only is VM rearrangement performed automatically, but the policies themselves can be varied dynamically over time as workload requirements change.

VMware is not the (complete) solution

VMware advocates might instinctively recommend running vSphere clusters underneath nova, to harness vSphere’s pre-existing DRS / DPM functionality. However, there are still unresolved problems with this approach, as this article on nova-scheduler and DRS highlights. (Funnily enough, I encountered exactly these same problems a few years ago when I was part of a team adding vSphere integration into the orchestration component of what has since become NetIQ Cloud Manager …) Besides, a vSphere cluster and an OpenStack cloud are very different beasts, and of course not everyone wants to build their cloud exclusively with VMware technology anyway.

The computer science behind cloud rearrangement

Unfortunately developing algorithms to determine optimal placement is distinctly non-trivial. For example, the consolidation scenario above is a complex variant of the bin packing problem, which is NP-hard. The following constraints add significant complexity to the problem:

  • A useful real world solution should take into account not only the RAM footprint of the VMs, but also CPU, disk, and network.
  • The algorithm needs to ensure that SLAs are maintained whilst any rearrangement takes place.
  • If the cloud is too close to full capacity, it may not be possible to rearrange the VMs from their current placement to a more optimal placement without first shutting down some VMs, which could be prohibited by the SLAs.
  • Even if the cloud is at low utilization, live migration is not always possible, e.g.
    • the hypervisor may not support it (especially across non-identical CPU architectures)
    • shared storage may be required but unavailable
    • the network may not support it
  • Even if the arrangement is achievable purely via a sequence of live migrations, the algorithm must also be sensitive to the performance impact to running workloads when performing multiple live migrations, since live migrations require intensive bursts of network I/O in order to synchronize the VM’s memory contents between the source and target hosts, followed by a momentary freezing of the VM as it flips from the source to the target. This trade-off between optimal resource utilization and service availability means that a sub-optimal final placement may be preferable to an optimal one.
  • In the case where the hypervisor is capable of sharing memory pages between VMs (e.g. KSM on KVM), the algorithm should try to place together VMs which are likely to share memory pages (e.g. VMs running the same OS platform, OS version, software libraries, or applications). A research paper published in 2011 demonstrated that VM packing which optimises placement in this fashion can be approximated in polytime, achieving a 32% to 50% reduction in servers and a 25% to 57% reduction in memory footprint compared to sharing-oblivious algorithms.

(There is prior art of course. For example, several years ago, Pacemaker implemented a best effort algorithm for VM host selection.)

Divide and conquer?

As noted by the 2011 research paper referenced above, this area of computer science is still evolving. There is one constant however: any rearrangement solution must not only provide a final VM placement optimised according to the chosen constraints, but also a sequence of migrations to it from the current placement. There will often be multiple migration sequences reaching the optimised placement from the current one, and their efficiency can vary widely. In other words, there are two questions which need answering:

  1. Given a starting placement A, which is the best (or near-optimal) final placement B to head for?
  2. What’s the best way to get from A to B?

The above considerations strongly suggest that the first question is much harder to answer than the second, although the two are not necessarily orthogonal; for example, there could be two different final placements B and C which are equally near-optimal, but it may be much harder to reach C from A than to reach B.

I propose that it is worth examining the effectiveness of a divide and conquer approach: solving the second question may simplify the first, and also provide a mechanism for comparatively evaluating the effectiveness of potential answers to the first. Another bonus of this decoupling is that it should be possible for the path-finding algorithm to also discover opportunities for parallelizing live migrations when walking the path, so that the target placement B can be reached more quickly.
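
As a strawman illustration of what answering question 2 involves (this is just a toy sketch, quite separate from the proof-of-concept algorithm described below), the following Python fragment turns a current placement A and a target placement B into an ordered list of live migrations. It only ever moves a VM whose destination host currently has enough free RAM, and simply gives up on cycles that a real solver would have to break; the function and data shapes are invented for the example.

def plan_migrations(current, target, capacity, ram):
    """current/target: {vm: host}, capacity: {host: MB}, ram: {vm: MB}."""
    placement = dict(current)
    used = dict((h, 0) for h in capacity)
    for vm, host in placement.items():
        used[host] += ram[vm]
    moves = []
    pending = set(vm for vm in placement if placement[vm] != target[vm])
    while pending:
        progress = False
        for vm in sorted(pending):
            dest = target[vm]
            # Only migrate when the destination currently has enough free RAM.
            if capacity[dest] - used[dest] >= ram[vm]:
                moves.append((vm, placement[vm], dest))
                used[placement[vm]] -= ram[vm]
                used[dest] += ram[vm]
                placement[vm] = dest
                pending.discard(vm)
                progress = True
                break
        if not progress:
            # A real solver would break the cycle, e.g. via a temporary host.
            raise RuntimeError("stuck: remaining migrations form a cycle")
    return moves

# Example: consolidate two half-empty hosts onto one.
caps = {"host1": 16384, "host2": 16384}
mem = {"vm1": 4096, "vm2": 4096}
print(plan_migrations({"vm1": "host1", "vm2": "host2"},
                      {"vm1": "host1", "vm2": "host1"}, caps, mem))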

A proof of concept algorithm for VM migration path-finding

I’ve designed an algorithm which solves the second problem, and hacked together a proof of concept implementation in Python. This video shows it solving three randomly generated scenarios:

Video: https://www.youtube.com/watch?v=aHK3UF3ffNg

It’s far from perfect, but as a proof of concept it seems to hold up fairly well under soak testing. I’m pleased to announce that the code is now on github: https://github.com/aspiers/migrationpaths and I would love to hear feedback on it.

Feedback

If you found this blog interesting, feel free to get in touch! I’m aspiers on FreeNode IRC, and adamspiers on Twitter.

I’m also in Vancouver for the OpenStack summit which starts this Monday, so if you are, come and say hello!

Acknowledgements

Many thanks to Florian Haas, Steven Hardy, and Christoph Thiel for their input on an earlier draft version of this post, and apologies that it took me SO long to publish it!

The post Cloud rearrangement for fun and profit appeared first on Structured Procrastination.

by Adam at May 17, 2015 03:57 AM

Cloudify Engineering

More Reference NFV Architecture Based on TOSCA + NetConf YANG

TOSCA is a very good descriptive language for VNF definition, node monitoring, and active policies like healing and scaling, as...

May 17, 2015 12:00 AM

May 16, 2015

Mirantis

Docker on OpenStack with Kubernetes

If you’re building applications for OpenStack and deploying them with Docker containers, you need a powerful, usable management tool to integrate with OpenStack. Google, with its extensive experience operating a cloud using container technology, developed the open source Kubernetes orchestration system to manage containerized applications in a clustered environment.

Mirantis and Google have joined forces to bring Kubernetes container management to OpenStack using Murano, by creating a Kubernetes package. Murano enables developers to self-service provision Kubernetes clusters from the Murano browsable catalog.

Kubernetes, like OpenStack, is a fast-moving open source project that can be a huge challenge to digest without help. At the OpenStack Summit in Vancouver, Kit Merker from Google and I are giving a presentation on using Docker on OpenStack with Kubernetes, where we’ll demonstrate not only how to manage containers with Kubernetes, but also the agility, control, and scale you can achieve with it.

Following is an overview of what we’ll be covering.

Docker, meet Kubernetes. Kubernetes, meet Murano

Lightweight, easy-to-use, and extensible, Kubernetes manages containerized applications across multiple hosts, providing a simpler way to deploy, maintain, and scale applications. It also provides a cloud-independent level of abstraction for Dockerized applications, making it easier to move workloads between public, private, and hybrid clouds, which enables application operation across cloud technologies.

More specifically, Kubernetes is a system for managing Dockerized applications that are made up of smaller component services, which must be scheduled in resource groups with the same networking conditions for the application to work as intended.

To consolidate service management, Kubernetes uses “pods” and “labels” to group an application’s containers into logical units for easy management and discovery. A pod is the basic unit of work in Kubernetes. While individual Docker containers are not assigned to hosts directly, Kubernetes groups closely related containers together into a pod, which represents one or more containers that should be controlled as a single group.

Kubernetes provides a layer over the pod infrastructure that handles scheduling and service management of Docker containers. Labels are user-defined tags attached to pods to mark them as part of a group. Labels enable services to select containers based on the assigned tag, allowing the services to retrieve a list of backend servers to pass traffic to. Once pods are labeled, you can manage and target them for action.
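
To make the pod/label/service relationship above concrete, here is a rough sketch of the shapes involved, written as Python dictionaries purely for illustration (real Kubernetes manifests are YAML or JSON, and the names used here are made up):

# Illustrative shapes only -- real manifests are YAML/JSON; names are invented.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "frontend-1",
        # User-defined labels tag this pod as part of a logical group.
        "labels": {"app": "guestbook", "tier": "frontend"},
    },
    "spec": {
        "containers": [
            {"name": "web", "image": "example/guestbook-frontend",
             "ports": [{"containerPort": 80}]},
        ]
    },
}

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "frontend"},
    "spec": {
        # The service selects every pod carrying these labels and passes
        # traffic to them, however many replicas exist at any moment.
        "selector": {"app": "guestbook", "tier": "frontend"},
        "ports": [{"port": 80}],
    },
}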

What’s in it for you?

With Kubernetes and Murano you get:

  • Orchestration for sophisticated multi-tier, scale-out applications managed within containers

  • Google’s best practices for Docker container management

  • Easy Kubernetes integration with OpenStack Murano

  • OpenStack’s flexible infrastructure management and cost savings

  • Application integration to simplify moving workloads across different types of clouds

Docker-based application development is faster and easier with Kubernetes and Murano. Download Mirantis OpenStack and try the Kubernetes integration today. And please join us for our presentation at the OpenStack Summit!

The post Docker on OpenStack with Kubernetes appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Craig Peters at May 16, 2015 09:29 PM

Cloud certification with Rally

Testing an OpenStack cloud should go beyond just making sure “everything works,” which is really just the tip of the iceberg for meeting customer expectations. The cloud must also scale within the expected timeframe and meet other demanding performance requirements.

At 3:40 on Tuesday at the OpenStack Summit in Vancouver, Jesse Keating (BlueBox) and I will be presenting Using Rally for OpenStack certification at scale! We’ve analyzed testing levels for cloud certification, and we’ll demonstrate in depth why Rally is the right tool for certifying your cloud at scale.

Rally is a benchmarking tool that tells you how OpenStack performs, especially under load at scale. You can use it to validate, performance test, and benchmark your OpenStack deployment using various pluggable Rally benchmark scenarios.
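
For a flavour of what driving Rally looks like, here is a small, hypothetical task definition; the scenario name, flavor, image, and numbers are illustrative, and real task files are normally written as JSON or YAML and launched with the Rally CLI:

import json

# Hypothetical Rally task: boot and delete Nova servers under a constant load.
# "times" and "concurrency" correspond to the "under expected load" level
# discussed below (e.g. 100 servers, 10 at a time); all values are examples.
task = {
    "NovaServers.boot_and_delete_server": [
        {
            "args": {"flavor": {"name": "m1.tiny"}, "image": {"name": "cirros"}},
            "runner": {"type": "constant", "times": 100, "concurrency": 10},
        }
    ]
}

# Written out as JSON, this could be launched with `rally task start <file>`.
with open("boot_and_delete.json", "w") as f:
    json.dump(task, f, indent=2)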

Now let’s establish what cloud certification means.

Testing levels for cloud certification

We start here with the five testing levels for cloud certification as seen in Figure 1.

Fig. 1

  1. As noted, you first need to check that “everything works” at a basic level.

  2. “Expected performance” ascertains not only performance, but that everything in the environment works in a timely way.

  3. Testing “under expected load” confirms the environment works on time with a specific load, e.g., 100 VMs in parallel, while meeting expected performance.

  4. “Under expected scale” means testing how many VMs you can run in total and how the cloud behaves as you add more. For example, if you’re running 10,000 VMs, what happens to performance when you add 100 more?

  5. Testing “high availability” means ensuring the cloud functions if something goes wrong, like a controller going down. Does the cloud continue its work at expected scale with the expected load and performance?

Rally tests each of these levels and easily produces graphical results, including pie charts, performance charts, and histograms, so you can create reports that are easy to understand and quick for viewers to digest. See more details.

Now that you understand what Rally can do, join us at the Summit to see how it works. We look forward to seeing you!

The post Cloud certification with Rally appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Boris Pavlovic at May 16, 2015 06:44 AM

Cloudify Engineering

The Skinny on OpenStack Kilo - NFV & IPv6, Bare Metal Cloud, and More

The 11th release of OpenStack has officially landed in advance of the OpenStack summit, which makes this the “K” release...

May 16, 2015 12:00 AM

May 15, 2015

IBM OpenTech Team

Senlin – a flexible, clustering/collection service for OpenStack based clouds

While not an expert in Mandarin, I understand that Senlin means “forest.” Across many enterprise OpenStack cloud scenarios, ranging from wireless network clouds embracing NFV to commercial workloads moving from traditional data centers to a cloud-based deployment model orchestrated by Heat, the problems around auto-scaling, load balancing, and availability were all too common. The greater need, we felt, was for managing a collection of homogeneous objects; hence the creation of the Senlin project, which initially started with the problem of introducing auto-scaling support in Heat. While OpenStack Heat certainly made workload orchestration more efficient, having another component like Senlin address the auto-scaling aspects makes managing the infrastructure better. The effort was then expanded to include load balancing, health checking, and more, which is now the more comprehensive clustering/collection service encapsulated in Senlin. Qi Ming Teng from IBM’s China Research Lab is a core contributor to Heat and our key representative and driving force behind Senlin. The Senlin project is positioned as a generic clustering service for OpenStack, with its first milestone being offloading the auto-scaling support from Heat.

Given below are some highlights of the Senlin project as provided by Qi Ming Teng:

Code name: Senlin # Chinese Pinyin for ‘forest’
Git repos:
 https://git.openstack.org/stackforge/senlin
 https://git.openstack.org/stackforge/python-senlinclient
License: Apache License 2.0
Started: Dec 2014
Scope:
– Clustering service for OpenStack, managing homogeneous objects (nodes) exposed by other OpenStack services, such as Heat stacks, Nova servers, etc.
– Provides REST APIs for users/tools to create and manage clusters, including:
  * membership: add/remove nodes, move nodes between clusters
  * policies: create/update/delete policies using a simple schema, and attach/detach them to/from a cluster. Sample policies include placement policy, deletion policy, scaling policy, load-balancing policy, health policy, etc.
  * profiles: create/delete profiles that are used to create nodes. Each profile type can be treated as a driver backend that enables Senlin to interact with a specific service, e.g. os.heat.stack, os.nova.server, etc.
  * webhooks: an encoded URI for triggering actions on a cluster or a node
  * actions: operations that are executed asynchronously as the consequence of REST API invocations or webhook triggering
  * events: a log of interesting happenings inside the service engine when actions are performed or policies are checked/enforced

Thanks,
Radha Ratnaparkhi (with support from Dr. Qi Ming Teng – IBM’s China Research Lab)

The post Senlin – a flexible, clustering/collection service for OpenStack based clouds appeared first on IBM OpenTech.

by RadhaRatnaparkhi at May 15, 2015 06:51 PM

Rob Hirschfeld

OpenSource.com Interview on DefCore, project management, and the future of OpenStack

Reposted from my interview with Jason Baker of Red Hat’s OpenSource.com.

Rob Hirschfeld has been involved with OpenStack since before the project was even officially formed, and so he brings a rich perspective on the project’s history, its organization, and where it may be headed next. Recently, he has focused primarily on the physical infrastructure automation space, working with an enterprise version of OpenCrowbar, an “API-driven metal” project which started as an OpenStack installer and moved to a generic workload underlay.

Rob is speaking on two panels at the upcoming OpenStack Summit in Vancouver, including DefCore 2015 and the State of OpenStack Project Management. We caught up with Rob to get updates about these two topics and what else lies ahead for OpenStack.

We asked you to help walk us through DefCore as it was being developed last year; just as a reminder, what is DefCore and why should people care about it?

DefCore creates a minimal definition for OpenStack vendors to help ensure interoperability and stability for the user community. While DefCore definitions apply only to vendors asking to use the OpenStack trademark, there are technical impacts on the tests and APIs that we select as required. We’ve worked hard to make sure that the selection process for picking “core” is transparent and fair.

What did the changes approved by the OpenStack Foundation membership earlier this year mean for DefCore?

The by-laws changes approved by the community were important because they allow DefCore to use a more granular definition of Core. The previous by-laws were much more project focused. The changes allow us to select specific APIs and code components from a project as required, instead of picking everything blindly. That allows projects to have both stable and new, innovative components.

What can we expect from OpenStack’s structure and organization as we move forward towards the next release?

There are a lot of changes still to come. The technical leadership is making it easier to become part of the OpenStack code base. I’ve written about how this change could have both positive and negative impacts on OpenStack, making it appear more like a suite of projects than a tightly integrated product. In many ways, DefCore helps vendors define OpenStack as a product while the community is expanding to include more capabilities. In my discussions, this is a good balance.

Switching gears a bit, you’ve also been heavily involved in the OpenStack project management working group. How has that group been progressing since they convened at the Paris Summit?

This group has made a lot of progress. We’ve seen non-board leadership step in and lead the group. That leadership is more organic and based in the companies that are directly contributing. I think that’s resulted in a lot of good ideas and documentation from the group. We’ll see some excellent results in Vancouver from them. It’s going to come back to the community and technical leadership to leverage that work. I think that’s the real test: we have to share ownership of direction between multiple perspectives. The first step in doing that is writing it down (which is what they have been doing).

Aside from the organization, let’s talk about the software itself. What are you hoping to see from the Liberty release?

I’m hoping to see adoption of Neutron accelerate. Having two network approaches makes it impossible to really have an interoperability story. That means Neutron has to work not only technically, but also for operators and users. To be brutally honest, it also has to overcome its own reputation. If Neutron does not become the dominant choice, we are going to effectively have two major flavors of OpenStack. From the DefCore, vendor, or user perspective, that’s a very challenging position.

Anything else you’d like to add?

We’ve accomplished a lot together. In some ways, chasing too many targets is our biggest threat. I think that container workloads and orchestration are already being very disruptive for OpenStack. I’m hoping that we focus on delivering a stable core infrastructure. That’s why I’ve been working so hard on DefCore. Looking forward, there’s an increasing risk of trying to chase too many targets and losing the core of what users want.

This article is part of the Speaker Interview Series for OpenStack Summit Vancouver, a five-day conference for developers, users, and administrators of OpenStack Cloud Software.

by Rob H at May 15, 2015 05:10 PM

Richard Jones

Easy installation of a new stack with OpenStack Ansible Deployment (OSAD)

OSAD is a project that deploys OpenStack using Ansible ("OpenStack Ansible Deployment") and I decided to see whether I could use it to create the development stack I needed behind my Horizon work.

OSAD Simple Installation

The absolute simplest installation is the one the OSAD project uses for its testing with Tempest. It sets up a moderately complex stack environment (multiple Keystones, Horizons, RabbitMQ backends, etc., all load balanced through a configured HAProxy). You'll need a system with 8GB of RAM and 40GB of disk. You don't want it to run the actual Tempest suite though, so include RUN_TEMPEST=hellno (anything that's not "yes" will do):

apt-get update && apt-get install -y git

# Clone the source code
git clone https://github.com/stackforge/os-ansible-deployment /opt/os-ansible-deployment

# Change your directory
cd /opt/os-ansible-deployment

# Checkout your desired branch (master for bleeding edge)
git checkout kilo

# Run the script from the root directory of the cloned repository
RUN_TEMPEST=no ./scripts/gate-check-commit.sh

For my purposes, I need a different setup: my local Horizon talking to a remote stack. The default OSAD install has all of the Admin services accessed locally-only, so we need to reconfigure it just a little to have it expose the public interface for all of those services. We can also edit the configuration to reduce the number of services started: I don't need to set up Horizon at all, and don't need a large mysql and rabbitmq cluster either.

So, the initial steps are very similar:

# Clone the source code
git clone https://github.com/stackforge/os-ansible-deployment /opt/os-ansible-deployment
 
# Change your directory
cd /opt/os-ansible-deployment
 
# Checkout your desired branch.
git checkout master
 
# Bootstrap the env
./scripts/bootstrap-aio.sh
 
# Bootstrap Ansible
./scripts/bootstrap-ansible.sh

# Set the internal address to the external
sed -i "s/internal_lb_vip_address:.*/internal_lb_vip_address: \"{{ external_lb_vip_address }}\"/" \
  /etc/openstack_deploy/openstack_user_config.yml

Now we're going to edit the configuration so we don't set up all those unnecessary services. Open /etc/openstack_deploy/openstack_user_config.yml in an editor and turn off the Horizon container creation by editing:

os-infra_hosts:
  aio1:
    # Horizon is set to multiple to test clustering. This test only requires x2.
    affinity:
      horizon_container: 2
    ip: 172.29.236.100

to:

os-infra_hosts:
  aio1:
    # Horizon is set to multiple to test clustering. This test only requires x2.
    affinity:
      horizon_container: 0
    ip: 172.29.236.100

Similar edits can be made to reduce the size of the Galera and RabbitMQ clusters from 3 to 1 in the "galera_container" and "rabbit_mq_container" settings.

Now we can continue on and create the stack:

# Enter the playbooks directory
pushd playbooks
# Setup all the things
openstack-ansible haproxy-install.yml
openstack-ansible setup-everything.yml
popd

The stack will be empty (no networks, flavours or images) so to add some, I run:

pushd playbooks
openstack-ansible os-tempest-install.yml
popd

This sets up the tempest environment but does not run the tests.

A Keystone endpoint will be created on port 5000 on the public IP of the host you set OSAD up on. This is the HAProxy (load-balancer) endpoint, so it's there no matter how many actual Keystones are set up by the OSAD setup.
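
To point my local Horizon checkout at that endpoint, the relevant settings in openstack_dashboard/local/local_settings.py look roughly like this (a sketch only: substitute the actual public IP of your OSAD host, and adjust the Keystone API version to whatever your deployment exposes):

# openstack_dashboard/local/local_settings.py (excerpt) -- illustrative values only
OPENSTACK_HOST = "203.0.113.10"  # example: the public IP of the OSAD/HAProxy host
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"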

May 15, 2015 04:40 PM

Ravello Systems

Mirantis Openstack 6.0 Lab on AWS and Google Cloud

Author:
Stacy Véronneau
Lead OpenStack Architect with CloudOps, Stacy has extensive experience in cloud and classic infrastructures, and in operations management across multiple technologies and IT infrastructure disciplines.

Mirantis OpenStack (MOS) is a hardened OpenStack distribution with the Fuel deployment orchestrator. Fuel uses PXE boot to set up the other nodes in the OpenStack cloud, making it very easy to quickly stand up a multi-node OpenStack environment. Most traditional Mirantis OpenStack deployments are done on bare metal, where there is support for PXE, full access to layer 2 networking, hardware acceleration support, and so on. However, that requires capex investment in physical hardware and a longer lead time to get everything provisioned. I approached Ravello to leverage their technology to set up Mirantis OpenStack on a public cloud, so I could overcome these challenges, and it’s been great to partner with them.

With Ravello's nested hypervisor platform, I was able to set up an environment consisting of three compute nodes with Cinder, one controller, and one Zabbix node (the monitoring system shipped as part of Mirantis OpenStack), deployed by Fuel on AWS and Google Cloud. This setup also comprised multiple logical networks: admin, management, fixed/private, and public/floating. Check out my detailed how-to blog post here: http://www.cloudops.com/2015/05/faking-bare-metal-in-the-cloud-with-ravello-systems. I saved my multi-node Mirantis OpenStack application as a blueprint in my private library on Ravello, so now I can spin up multiple isolated instances of the entire Mirantis OpenStack environment from the blueprint on AWS and Google Cloud whenever required, on demand. There is no lead time spent starting from scratch, configuring physical hardware, and installing the various software components. These environments can run for as long as required and then be shut down, to be spun up again later.

The post Mirantis Openstack 6.0 Lab on AWS and Google Cloud appeared first on The Ravello Blog.

by Stacy Véronneau, Lead OpenStack Architect, CloudOps at May 15, 2015 04:39 PM

Tesora Corp

Good News! Mirantis and Oracle partner to bring Oracle 12c database to OpenStack Solaris

Earlier today, Oracle and Mirantis announced a partnership to make Oracle 12c Database work better on Oracle Solaris OpenStack. You can get the details at https://www.oracle.com/corporate/pressrelease/oracle-collaborates-with-mirantis-openstack-051515.html. This is definitely positive news for OpenStack. Support for popular database workloads like Oracle is important to the long-term success of OpenStack. As the leading contributor to OpenStack […]

The post Good News! Mirantis and Oracle partner to bring Oracle 12c database to OpenStack Solaris appeared first on Tesora.

by Ken Rugg at May 15, 2015 04:26 PM

OpenStack Superuser

Superuser weekend reading

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community.

Superuser will be in force at the Vancouver Summit, so email me if you're interested in being featured here.

We've also got a special print edition with great content about Vancouver -- courtesy of Red Hatter and local resident Diane Mueller -- plus insight on Summit themes that should be on your radar and more...

Got something you think we should highlight? Tweet, blog, or email me!

In Case You Missed It

This week the news is all about food. Well, sort of. OpenStack Individual Board Member Russell Bryant offers up the piquant idea of An EZ Bake OVN for OpenStack and shows you how to make your own with DevStack.

In a post about breaking down resistance to OpenStack, Walter Bentley provides the recipe for some tasty arguments. "I can remember very vividly holding index cards in my hands with bullet points, as I was attempting to lay out all the reasons why OpenStack should be the company's next major infrastructure shift," Bentley writes at the Rackspace Developer Blog. "Being prepared for this conversation is critical to the overall enterprise architecture, so you need to articulate clearly why OpenStack is the best choice. You can never be too prepared. There will always be questions that you, as a technology advocate, will not even think of. In my opinion, being prepared is key. So let’s start on our technology layer cake."

The always sharp Barb Darrow at Fortune has the goods on the latest Oracle and Mirantis partnership. "This new deal focuses on the latest Oracle 12c “multitenant” database, available since last summer, which enables companies to pack a bunch of database workloads onto a single machine, like Oracle’s Exadata server for workload consolidation."

Dan Radez of Trystack and "gear selfie" fame has written a book titled "OpenStack Essentials." Available for pre-order, it's aimed at those who have "a little knowledge and want to keep learning." We'll be following up with Radez to find out more about it, so stay tuned...

For this week's 30,000-foot view, the Wall Street Journal is predicting a drought of IP addresses that will dry out business for cloud companies...

"The shortage puts companies that maintain their own large and growing Internet presence at the biggest risk, especially providers of cloud-computing services. Such companies could find themselves saddled with unexpected costs, technical problems or simply an inability to serve new customers. Those that aren’t building out their own data centers won’t face the shortage directly, but their online providers likely will."

For the "um, yeah, obvs" post of the week: "OpenStack Doesn't Come for Free," says Caroline Chappell, Principal Analyst, Cloud & NFV, in a post sponsored by Alcatel Lucent. "OpenStack has tremendous momentum behind it as the fastest-growing open source project in history...Yet market misunderstanding of open source software in general, and OpenStack in particular, threatens to slow NFV adoption."


Cover Photo by Patent and the Pantry // CC BY NC

by Nicole Martinelli at May 15, 2015 04:02 PM
