March 05, 2015

Aptira

An alternative view of OpenStack, or how to encourage innovation by getting out of the way

A while ago I had a little rant about Ceilometer and the dangers of overreaching. A similar discussion has popped up on the OpenStack-Operators list, during which it became clear that some operators had taken the bits they wanted from Ceilometer and gotten rid of the rest. Others had simply stopped using it.

A little while ago, a thread about the deprecation of the EC2 API in Nova cropped up on OpenStack-Operators and covered a variety of topics, notably how the overheads of developing within an OpenStack project stifle the creativity and motivation of developers.

Yesterday, I read the new project requirements for OpenStack, where one of the criteria is:

Where it makes sense, the project cooperates with existing projects rather than gratuitously competing or reinventing the wheel

This sounds great, except when you consider a partially successful case like Ceilometer.

When it rolled around to today, and I thought “what the hell am I going to write now?” these thoughts glommed together.

What would happen if OpenStack minimised its scope? What if it delivered a framework to fulfil its mission, but stopped short of providing the entire solution?

  • the focus of core development would change to providing all the basic hygiene items users require: smooth upgrades, stable branches that mean stable software, and that sort of thing.
  • Almost all feature-based innovation moves outside of the core (into Stackforge? elsewhere? Does it actually matter?)
    • Software vendors are free to pursue their own agendas, and it becomes obvious whether they are contributing to OpenStack or building their own features. Vendors use this information to market themselves as they please.
    • Developers have frictionless mechanisms to contribute as they see fit.
    • Requirements for changes in the core are delivered no more slowly than they are today.
  • core components that get their architecture “right” create much less maintenance work, freeing resources that can be applied to creating new user value. Similarly, consistent architectures across components can deliver efficiency gains that can be spent on creating user value.
  • The number and size of official OpenStack project teams will probably shrink. Possibly by a great deal.

Now, I’m not an expert on the architecture of any of the components, so I can’t really comment on whether this is feasible for all projects, or perhaps has already happened in many of them. My point is more about whether this approach should be an explicit goal for OpenStack. My view is that it must.

OpenStack is already showing signs of stress from the difficulty of providing an integrated release across a large number of projects. The response from the TC has the same intent as what I have described above: improve focus on the core, lessen the friction for developers. However, the methods for achieving this are markedly different:

  • The barrier to entry for a project is lower
  • The concept of an integrated release is removed over time
  • Metadata is created to describe each project so that OpenStack users can determine the qualities of a given project.

It seems that the reason for this approach is to ensure community engagement: to allow a larger number of developers to work on projects that are sanctioned as part of OpenStack. This is laudable, but I’m not sure that the cost of maintaining an ever-growing list of projects, even without an integrated release, is manageable. The quality of projects must still be assessed regularly in order for users to make accurate decisions, and this assessment must be made over a larger and larger set of projects.

The other benefit is that it allows easy reporting to the board for trademark related concerns, which is entirely secondary to anything the users might experience, and so probably quite irrelevant to increasing the quality of OpenStack.

It also does nothing to address the issue of troubled projects. Where a project has tried and only partially succeeded in delivering on its mission, we are faced with a problem. It seems we must convince the TC delegates that a competing project is worthwhile.

Reducing the scope of an OpenStack-sanctioned project to a high-quality framework for innovation strongly controls the size of the QA task, thus increasing users’ confidence in the attributes of a particular version of the code. Whilst fewer developers would be working under an OpenStack-sanctioned umbrella, the overall developer population should be working with a better OpenStack, and those not in an OpenStack project can build whatever they want, however they want.

Yes, the number of ATCs will shrink, with a corresponding impact on the size of the electorate for the TC. I don’t think this is a problem. The core devs have a tightly focussed remit on a relatively small code base and can concentrate on that without the overhead of getting caught up in a vendor’s product delivery cycle.

This isn’t a proposal for the democratisation of the core of OpenStack. Quite the opposite: it’s a proposal to focus the control of the OpenStack Foundation to the very core of the ecosystem and to allow, or actually foster, innovation and the creation of value to occur unimpeded. Make of it what you will.

by Roland Chan (roland@aptira.com) at March 05, 2015 10:41 PM

Nir Yechiel

Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part I: Understanding the Basics

nyechiel:

Check out this blog post I wrote for Red Hat Stack on the SR-IOV networking support introduced in RHEL OpenStack Platform 6. This is based on the Nova and Neutron work done in the upstream community for the OpenStack Juno release.

Originally posted on Red Hat Stack:

Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part I: Understanding the Basics

Red Hat Enterprise Linux OpenStack Platform 6 introduces support for single root I/O virtualization (SR-IOV) networking. This is done through a new SR-IOV mechanism driver for the OpenStack Networking (Neutron) Modular Layer 2 (ML2) plugin, as well as necessary enhancements for PCI support in the Compute service (Nova).

In this blog post I would like to provide an overview of SR-IOV, and highlight why SR-IOV networking is an important addition to RHEL OpenStack Platform 6. We will also follow up with a second blog post going into the configuration details, describing the current implementation, and discussing some of the current known limitations and expected enhancements going forward.

PCI Passthrough: The Basics

PCI Passthrough allows direct assignment of a PCI device into a guest operating system (OS). One prerequisite for doing this is that the hypervisor…



by nyechiel at March 05, 2015 07:39 PM

OpenStack Superuser

Superuser weekend reading

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community.

Got something you think we should highlight? Tweet, blog, or email us!

In Case You Missed It

Ahead of the Mid-Cycle Ops meetup, Thierry Carrez, OpenStack's director of engineering, examines the various meanings that "integrated release" has had in the history of OpenStack and how to better convey that information through separate tags.

VMware makes NFV telco play by snuggling up to OpenStack, opines tech publication The Register. VMware has 30 NFV users already, according to the story. Shekar Ayyar, a corporate senior vice president and leader of VMware's Telco NFV Group, offered less detail on exactly what's inside the newly-launched suite, compared to VMware's other offerings, but it does include “purpose-built management packs to meet the unique requirements of communications service providers.”

If you’re a new OpenStack contributor or plan on becoming one soon, you should sign up for the next OpenStack Upstream Training in Vancouver, May 16-17. Participation is also strongly advised for first-time attendees of the OpenStack Design Summit.

And what do your job prospects look like if you gain experience in OpenStack? Very good, according to the Linux Jobs Report as reported on ZDNet. "42 percent of hiring managers say experience with or knowledge of OpenStack and CloudStack are having a big impact on their Linux hiring decisions," while "49 percent of Linux professionals believe open cloud will be the biggest growth area for Linux in 2015."

Individual board member Rob Hirschfeld has an incisive blog post on the trouble of making DefCore dead simple. "I’ve been working on the OpenStack DefCore process for nearly 3 years and our number #1 challenge remains how to explain it simply. We have managed to boil down our thinking into nine key points..."


We feature user conversations throughout the week, so tweet, blog, or email us your thoughts!

Cover Photo by Ref 54 // CC BY NC

by Nicole Martinelli at March 05, 2015 07:39 PM

Red Hat Stack

Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part I: Understanding the Basics


Red Hat Enterprise Linux OpenStack Platform 6 introduces support for single root I/O virtualization (SR-IOV) networking. This is done through a new SR-IOV mechanism driver for the OpenStack Networking (Neutron) Modular Layer 2 (ML2) plugin, as well as necessary enhancements for PCI support in the Compute service (Nova).

In this blog post I would like to provide an overview of SR-IOV, and highlight why SR-IOV networking is an important addition to RHEL OpenStack Platform 6. We will also follow up with a second blog post going into the configuration details, describing the current implementation, and discussing some of the current known limitations and expected enhancements going forward.

PCI Passthrough: The Basics

PCI Passthrough allows direct assignment of a PCI device into a guest operating system (OS). One prerequisite for doing this is that the hypervisor must support either the Intel VT-d or AMD IOMMU extensions. Standard passthrough allows virtual machines (VMs) exclusive access to PCI devices and allows the PCI devices to appear and behave as if they were physically attached to the guest OS. In the case of networking, it is possible to utilize PCI passthrough to dedicate an entire network device (i.e., physical port on a network adapter) to a guest OS running within a VM.

What is SR-IOV?

Single root I/O virtualization, officially abbreviated as SR-IOV, is a specification that allows a PCI device to separate access to its resources among various PCI hardware functions: a Physical Function (PF) and one or more Virtual Functions (VFs). SR-IOV provides a standard way for a single physical I/O device to present itself to the PCIe bus as multiple virtual devices. While PFs are the full-featured PCIe functions, VFs are lightweight functions that lack configuration resources of their own. Configuration and management of the VFs is done through the PF, so the VFs can concentrate on data movement only. It is important to note that the overall bandwidth available to the PF is shared between all VFs associated with it.

In the case of networking, SR-IOV allows a physical network adapter to appear as multiple PCIe network devices. Each physical port on the network interface card (NIC) is represented as a Physical Function (PF), and each PF can be associated with a configurable number of Virtual Functions (VFs). Allocating a VF to a virtual machine instance enables network traffic to bypass the software layer of the hypervisor and flow directly between the VF and the virtual machine. This way, the logic for I/O operations resides in the network adapter itself, and the virtual machines think they are interacting with multiple separate network devices. This allows near line-rate performance, without the need to dedicate a separate physical NIC to each individual virtual machine, making SR-IOV considerably more flexible than standard PCI Passthrough.

Since the network traffic completely bypasses the software layer of the hypervisor, including the software switch typically used in virtualization environments, the physical network adapter is responsible for managing the traffic flows, including proper separation and bridging. This means that the network adapter must provide support for SR-IOV and implement some form of hardware-based Virtual Ethernet Bridge (VEB).

In Red Hat Enterprise Linux 7, which provides the base operating system for RHEL OpenStack Platform 6, driver support for SR-IOV network adapters has been expanded to cover more device models from known vendors such as Intel, Broadcom, Mellanox and Emulex. In addition, the number of available SR-IOV Virtual Functions has been increased, making it possible to configure up to 128 VFs per PF on supported network devices.
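
As a small illustration of the PF/VF relationship, VFs can be instantiated through the standard sriov_* sysfs attributes. A minimal sketch in Python, assuming a hypothetical PF interface named p1p1:

import os

PF = "p1p1"  # hypothetical PF interface name
base = "/sys/class/net/%s/device" % PF

# sriov_totalvfs reports the hardware maximum; sriov_numvfs controls
# how many VFs are currently instantiated.
with open(os.path.join(base, "sriov_totalvfs")) as f:
    print("PF %s supports up to %d VFs" % (PF, int(f.read())))

# Writing a count to sriov_numvfs (as root) creates the VFs. If VFs are
# already allocated, write 0 first; the kernel rejects changing a
# non-zero value directly.
with open(os.path.join(base, "sriov_numvfs"), "w") as f:
    f.write("7")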


SR-IOV in OpenStack

Starting with Red Hat Enterprise Linux OpenStack Platform 4, it is possible to boot a virtual machine instance with standard, general-purpose PCI device passthrough. However, SR-IOV and PCI Passthrough for networking devices are available only starting with Red Hat Enterprise Linux OpenStack Platform 6, where proper networking awareness was added.

Traditionally, a Neutron port is a virtual port that is typically attached to a virtual bridge (e.g., Open vSwitch) on a Compute node. With the introduction of SR-IOV networking support, it is now possible to associate a Neutron port with a Virtual Function that resides on the network adapter. For those Neutron ports, a virtual bridge on the Compute node is no longer required.
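
As a rough sketch of what this looks like from the API side (the endpoint, credentials, and NET/IMAGE/FLAVOR IDs below are hypothetical placeholders, and the module paths are the Juno-era ones), a port is requested with vnic_type "direct" and then handed to Nova at boot:

# Rough sketch: request a VF-backed Neutron port and boot an instance
# with it. Endpoint, credentials and IDs are hypothetical placeholders.
from neutronclient.v2_0 import client as neutron_client
from novaclient.v1_1 import client as nova_client  # Juno-era module path

neutron = neutron_client.Client(username='admin', password='secret',
                                tenant_name='admin',
                                auth_url='http://keystone:5000/v2.0')
nova = nova_client.Client('admin', 'secret', 'admin',
                          'http://keystone:5000/v2.0')

# vnic_type 'direct' asks ML2 (with the SR-IOV mechanism driver loaded)
# to bind this port to a Virtual Function instead of a virtual bridge.
port = neutron.create_port({'port': {
    'network_id': 'NET_ID',
    'binding:vnic_type': 'direct',
}})['port']

nova.servers.create('sriov-vm', 'IMAGE_ID', 'FLAVOR_ID',
                    nics=[{'port-id': port['id']}])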

When a packet comes in to the physical port on the network adapter, it is placed into a specific VF pool based on the MAC address or VLAN tag. This enables a direct memory access (DMA) transfer of packets to and from the virtual machine. The hypervisor is not involved in moving the packet, thus removing bottlenecks in the path. Virtual machine instances using SR-IOV ports and virtual machine instances using regular ports (e.g., linked to an Open vSwitch bridge) can communicate with each other across the network as long as the appropriate configuration (i.e., flat, VLAN) is in place.

While Ethernet is the most common networking technology deployed in today’s data centers, it is also possible to use SR-IOV pass-through for ports using other networking technologies, such as InfiniBand (IB). However, the current SR-IOV Neutron ML2 driver supports Ethernet ports only.

Why SR-IOV and OpenStack?

The main motivation for using SR-IOV networking is to provide enhanced performance characteristics (e.g., throughput, delay) for specific networks or virtual machines. The feature is extremely popular among our telecommunications customers and those seeking to implement virtual network functions (VNFs) on the top of RHEL OpenStack Platform, a common use case for Network Functions Virtualization (NFV).

Each network function has a unique set of performance requirements. These requirements may vary based on the function role as we consider control plane virtualization (e.g., signalling, session control, and subscriber databases), management plane virtualization (e.g, OSS, off-line charging, and network element managers), and data plane virtualization (e.g., media gateways, routers, and firewalls). SR-IOV is one of the popular techniques available today that can be used in order to reach the high performance characteristics required mostly by data plane functions.

by Nir Yechiel at March 05, 2015 04:43 PM

Opensource.com

8 guides for cloud building with OpenStack

Every month, we compile the very best of recently published how-tos, guides, tutorials, and tips for working with OpenStack into this handy collection. Learn more and expand your knowledge of the open source cloud.

by Jason Baker at March 05, 2015 08:00 AM

March 04, 2015

OpenStack Superuser

OpenStack User Groups get a new home


It’s easy to stay in touch with OpenStack birds-of-a-feather wherever you are, from India to Italy, with the new OpenStack User Groups portal.

Even with events all over the globe, it's super simple to stay current — join a group, and you’ll be notified of new events when they're created.

The site features a list of known OpenStack user groups, a nifty dynamic map of those groups and an aggregation of all upcoming events. If you’re interested in launching a group or event of your own, it’s also where you can read tips on organizing an event and find info on the community code of conduct. Anyone with an account on http://www.openstack.org can join, using the brand new OpenStack Identity provider with OpenID and OAuth 2.0.

The portal pulls together information from different platforms. User groups can still use other sites wherever they feel most comfortable (Meetup.com, Facebook or other places) and the information they create is visible on https://groups.openstack.org. They can also decide to host all their content on the Groups portal itself.

Built on Drupal Commons, with a lot of heavy lifting by the indefatigable Marton Kiss, the portal is fully open sourced, managed by the OpenStack Infra team. Contributions are welcome. The group portal is available in multiple languages already, although its entire contents haven't been translated yet.

Future plans for the portal are kept on Storyboard.

Let us know what you think!

Cover Photo by Mirando // CC BY NC

by Superuser at March 04, 2015 04:38 PM

Adam Young

Convince Nova to Use the V3 version of the API

In a recent post I showed how to set up LDAP in a domain other than the default. It turns out that Nova does not accept the resulting tokens out of the box: by default, Nova uses the V2 version of the Keystone API only. This is easy to fix.

The first indication that something was wrong was that Horizon threw up a warning

Cannot Fetch Usage Information.

It turns out that all operations against Nova were failing.
The default for the auth_token middleware should be to perform discovery to see which version of the Keystone API is supported. However, Nova seems to have a configuration override that pins the value to the V2.0 API. Looking in /etc/nova/nova.conf

I saw:

#auth_version=V2.0

Setting this to

auth_version=

And restarting all of the services fixed the problem.
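
For reference, here is roughly where that option lives in nova.conf. This is a sketch with placeholder endpoints, assuming the Juno-era keystone_authtoken section:

[keystone_authtoken]
auth_uri = http://keystone.example.com:5000/
identity_uri = http://keystone.example.com:35357/
# Leave auth_version unset or empty so the middleware performs
# version discovery and can use V3:
auth_version=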

by Adam Young at March 04, 2015 04:30 PM

OpenStack Superuser

Why OpenStack’s drive for inclusivity matters

This post is part of the Women of OpenStack Open Mic Series to spotlight women in various roles within our community, who have helped make OpenStack successful. With each post, we will learn more about each woman’s involvement in the community and how they see the future of OpenStack taking shape. If you’re interested in being featured, please email editor@openstack.org.

Our first featured community member is Gretchen Curtis. Curtis has been working with tech companies and start-ups in Silicon Valley and San Francisco for 15 years. Currently co-founder and chief marketing officer of Piston Cloud, Curtis also promotes Marketers Against Waste, runs, recycles, obsesses over tiny homes, drinks a lot of coffee, and dabbles in several creative pursuits. Tweet her at @gretcurtis.

What’s your role in the OpenStack community?

I am co-founder and CMO of Piston, an OpenStack foundation gold member. We make software that orchestrates the bare metal underneath OpenStack, making OpenStack deployment, management and scale-out extremely fast and easy. Before we started Piston, I was part of the team that built NASA Nebula, the infrastructure project that was the precursor to OpenStack. When NASA partnered with Rackspace, I had the great fortune of meeting Lauren Sell, who now leads marketing for the OpenStack Foundation. Lauren and I co-wrote the press release that launched the OpenStack project in 2010, and I have been helping to tell the OpenStack story since those very beginnings. Today I continue to help promote the OpenStack project, and advocate on behalf of Piston’s customers building OpenStack private clouds.

Why do you think it's important for women to get involved with OpenStack?

Women make up half of the population on earth, but are somewhat missing from the upper ranks of our businesses and technology communities. This is a disadvantage for our entire industry. I think it’s important for women to get involved in OpenStack (or any other project for that matter) because organizations and systems that optimize for diversity perform better. They are stronger, faster, smarter, and infinitely more adaptable. Companies with diverse boards consistently report higher earnings. Organizations that design for different perspectives and world-views have a better chance at surviving and adapting over time. We all want OpenStack to survive and thrive, so it’s important that women participate and have a voice in its future.

What obstacles do you think women face when getting involved in the OpenStack community?

Enterprise technology is historically a male-dominated industry. While discrimination is not always overt and direct, it does sometimes happen in discreet ways, which can be frustrating and demoralizing. I think that being a minority of any kind in a large group can be uncomfortable and stigmatizing. One thing I appreciate about the OpenStack community is that the foundation makes a concerted effort to include everyone - not just women, but all people. It first takes empathy, but then deliberate action from leadership at the very top of an organization to say “we believe in inclusion. Hostility and discrimination of any kind - whether on the basis of race, ethnicity, religion, gender identity, age, marital status, sexual orientation, physical ability, military status, familial status, or political affiliation - will not be tolerated.” OpenStack does that, and it’s inspiring. It makes the community more welcoming and encourages more people to get involved.

What do you think can help get women more involved with OpenStack?

This is a tough question. To solve the problem of women becoming more involved with OpenStack, you have to solve the women in tech problem, and even further back to the subtle ways in which we treat girls vs. boys differently, and society's core beliefs about female-ness and male-ness. That’s probably a lot for the OpenStack community to solve, and certainly too much for this interview :) However, I think that OpenStack’s stated policy of inclusion, the continued existence of the ‘Women of OpenStack’ group, and creating more opportunities for women to connect with other women in the community will help. Having a support network makes the occasional frustrations of being “a woman in tech” easier to weather.

What do you want to see out of the Women of OpenStack group in the near and distant future?

I like meeting and connecting with the other women in the community. I think that should continue and we should all make more of an effort to build bridges. For the women participating on the technical side, a mentorship program would encourage junior developers to get involved and stay engaged. There seem to be many women in the community, but not as many technical contributors. It would be awesome to see more women participating in all aspects of the project.

What do you think is the single most important factor for the success of OpenStack users in 2015?

Don’t be afraid to ask for help or reach out with your questions. The community is vast and varied. If you have a question, I guarantee that the answer is out there somewhere.

What is the best piece of advice you have received from a parent, teacher, colleague, or mentor?

Focus, and above all - use your time wisely; it’s the only thing you can’t get more of.

by Hong Nga Nguyen at March 04, 2015 04:08 PM

Thierry Carrez

The facets of the OpenStack integrated release

In a recent Technical Committee update on the OpenStack blog, I explained how the OpenStack "integrated release" concept, due to its binary nature, ended up meaning all sorts of different things to different people. That is the main reason why we want to deconstruct its various meanings into a set of tags that can be independently applied to projects, in order to more accurately describe our project ecosystem.

In this blogpost, I'll look into the various meanings the "integrated release" ended up having in our history, and see how we can better convey that information through separate tags.

Released together

The original meaning of "integrated release" is that all the projects in it are released together on the same date, at the end of our development cycles.

I recently proposed a tag ("release:at-6mo-cycle-end") to describe projects that commit to producing a release at the end of our development cycles, which I think will cover that facet.

Managed release

Very quickly, the "integrated release" also described projects which had their release managed by the OpenStack Release Management team. This team sets up processes (mostly a Feature Freeze deadline and a release candidate dance) to maximize the chances that managed projects would release on a pre-announced date, and that none of the managed projects would end up delaying the end release date.

Projects would be added to incubation, and when we thought they were ready to follow those processes without jeopardizing the "integrated release" for existing projects, they would be added to the next release.

That is a separate facet from the previous one, so I proposed a separate tag ("release:managed") to describe projects that happen to still be handled by the Release Management team.

Co-gating

As we introduced complex code gating in OpenStack Infrastructure, the "integrated release" concept grew another facet: it would also mean that projects' changes are tested against one another's master branches. That way we ensure that changes in project B don't break project A. This is rather neat for a tightly-coupled set of projects. But this may be a bit overkill for projects higher in the stack that just consume public APIs, especially when non-deterministic test errors in project A prevent changes from landing in B.

We need to revise how to split co-gating into meaningful groups. This work is led by Sean Dague. Once it is completed, I expect us to convey the information on what is actually tested together using tags as well.

Supported by OpenStack horizontal efforts

From there, the "integrated release" naturally evolved to also being the set of projects that horizontal efforts (such as Documentation, QA, stable branch maintenance, Vulnerability management, Translations...) would focus their work on. Being part of the "integrated release" ensured that you would get fully supported by those horizontal teams.

That didn't scale very well, though. The documentation team was the first to exhaust their limited resources, and started to move to a model where they would not directly write all the docs for all the integrated projects. Since then, all horizontal teams have decided to gradually move to the same model, where they directly handle a number of projects (or none), but provide tooling, mentoring and support for all the others.

It's still valuable information to know which project happens to be directly handled by which horizontal effort, and which project ends up having security support or basic docs. So we'll introduce tags in the future to accurately describe this facet.

The base features

Going downhill, the "integrated release" started to also mean the base features you can rely on being present when people say they deployed "OpenStack". That was a bit at odds with the previous facets though: why would all co-gating projects with a coordinated release necessarily be essential? And indeed, the integrated release grew beyond "base" features (like Keystone) to include obviously optional projects (like Sahara or Ceilometer).

I personally think our users would still benefit from a description of what layer each project belongs to: is it providing base IaaS compute functionality, or is it relying on that being present? This is not a consensus view though, as some people object to all proposals leading to any type of ranking within OpenStack projects.

OpenStack

At that point, the "integrated release" was "the OpenStack release", and things outside of it were "not OpenStack".

This obviously led to more pressure to add more projects in it. But when the OpenStack governance (previously under a single Project Policy Board banner) was split between the Technical Committee and the Foundation Board, the former retained control over the "integrated release" contents, and the latter took control of trademark usage. This created tension over that specific facet.

Defcore was created to solve this problem, by defining the criteria to apply to various trademark programs. When asked to provide a set of projects (or rather, a set of sections of code) to apply the trademark to, the Technical Committee answered with the only concept it had (you guessed it, "the integrated release" again).

In the tags world, when asked for a set of projects to apply a particular trademark program to, the Technical Committee shall be able to provide a finer-grained answer, by defining a specific tag for each question.

Stability

Further downhill, "integrated release" also started to mean "stable". Once published in such a release, a project would not remove a feature without proper discussion, notice, and deprecation period. That was yet another facet of the now-bloated "integrated release" concept.

The issue is, not all projects were able to commit to the same stability rules. One would never deprecate an existing feature, while another would rip its API out over the course of two development cycles. One size didn't fit all.

In the tags world, my view is that Ops and devs, working together, should define various stability levels and the rules that apply for each. Then each project can pick the tag corresponding to the stability level they can commit to.

Maturity

Last but not least, at one point people started to assume that projects in the "integrated release" were obviously mature. They were all in widespread usage, enterprise-ready, carrier-grade, service-provider-class, web-scale. The reality is, this facet is also complex: some projects are, some are less so, and some aren't. So we need to describe the various maturity styles and levels, and inform our users of each project's real status.

It's difficult to describe maturity objectively, though. I intend to discuss that topic with those best placed to accurately describe it: the OpenStack operators gathered at the Ops Summit next week.

by Thierry Carrez at March 04, 2015 03:00 PM

IBM OpenTech Team

IBM helping drive Open Data in Europe

A few weeks ago I presented at the Open Data for Europe conference in Latvia. Sponsored by IBM and part of the initiatives being driven by Latvia’s current presidency of the European Union, this was a staging post along the way to opening up public data across Europe. Recent European directives mandate that, by July, all European public entities must provide access (via APIs where possible) to the data they hold (unless it can be shown to be private personal information). Why is this important? A couple of reasons:

1) There are tremendous opportunities for new businesses to flourish that use analytics to clean and extract insights from this mountain of data.
2) There are large efficiencies to be gained by public entities actually understanding how they are operating. Ironically, one of the key sets of initial customers of the insights from this data will be the data providers themselves. This will be a great way for them to “purchase the analytics they need” on a usage basis from the market. Ok, I’d guess they may baulk initially at the idea of “buying back their own data”, but once they realise the potential for how it can be augmented by analytics (that have access to data across all public entities) I think they will come round.

IBM is driving this change in multiple ways, not least by its advanced analytics offerings, but also by its support for OpenStack, CloudFoundry and other open projects, since these will be the platforms that host all this publicly accessible data. This is just another way in which open software, backed by committed companies, is changing the face of how we see, consume and interact with our data.

The post IBM helping drive Open Data in Europe appeared first on IBM OpenTech.

by HenryNash at March 04, 2015 11:39 AM

March 03, 2015

Percona

Introducing ‘MySQL 101,’ a 2-day intensive educational track at Percona Live this April 15-16

Talking with Percona Live attendees last year I heard a couple of common themes. First, people told me that there is a lot of great advanced content at Percona Live but there is not much for people just starting to learn the ropes with MySQL. Second, they would like us to find a way to make such basic content less expensive.

I’m pleased to say we’re able to accommodate both of these wishes this year at Percona Live! We have created a two-day intensive track called “MySQL 101” that runs April 15-16. MySQL 101 is designed for developers, system administrators and DBAs familiar with other databases but not with MySQL. And of course it’s ideal for anyone else who would like to expand their professional experience to include MySQL. The sessions are designed to lay a solid foundation on many aspects of MySQL development, design and operations.

As for the price: Just $101 for both full days, but only if you are among the first 101 people to register using the promo code “101” at checkout.  After that the price returns to $400 (still a great price!). :)

The MySQL 101 registration pass includes full access to the Percona Live expo hall (and all the fun stuff happening out there) as well as keynotes, which will inform you about the most significant achievements in the MySQL ecosystem.

As there is so much information to cover in the MySQL 101 track, we’re running two sessions in parallel: one geared more toward developers using MySQL and the other toward sysadmins and MySQL DBAs, focusing more on database operations. Though I want to point out that you do not have to choose one track to attend exclusively, but rather can mix and match sessions depending on what is most relevant to your specific circumstances.

I will be leading a couple of tracks myself, alongside many other Percona experts who are joining me for those two days!

Here’s a peek at just some of the many classes on the MySQL 101 agenda:

You can see the full MySQL 101 agenda here. Don’t forget the promo code “101” and please feel free to ask any questions below. I hope to see you in Santa Clara at Percona Live! The conference runs April 13-16 in sunny Santa Clara, California.

The post Introducing ‘MySQL 101,’ a 2-day intensive educational track at Percona Live this April 15-16 appeared first on MySQL Performance Blog.

by Peter Zaitsev at March 03, 2015 05:18 PM

OpenStack Superuser

Superuser weekend reading

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community.

Got something you think we should highlight? Tweet, blog, or email us!

In Case You Missed It

David Fishman of Mirantis calls OpenStack the "cool kid on the block" in his provocative piece "Eight reasons not to touch OpenStack with a barge pole," or maybe you don't need that barge pole after all…

If you do decide to deploy OpenStack, check out this report from GigaOm research on how to make it a success.

"Just one of the things, to me, open source is not about the money side of it, right? A lot of people think, 'Oh, it’s free software!' It’s not free software," says OpenStack individual board member Rob Hirschfeld in an interview with NetworkWorld. "There’s investment and learning and operational things and a lot of times people buy software support from a vendor. It’s really about control and transparency."

Interested in getting a job thanks to your mad OpenStack skills? Check out this Reddit thread on what hiring managers are looking for.

And, if you're interested in learning more about Database-as-a-Service and the OpenStack Trove Project, Tesora has put up a 40-minute webinar with slides to get you started. Or check out their free March 3 webinar on Getting Started with Hosted OpenStack.


We feature user conversations throughout the week, so tweet, blog, or email us your thoughts!

Cover Photo by Charlie Steele // CC BY NC

by Superuser at March 03, 2015 05:12 PM

March 02, 2015

Rackspace Developer Blog

Evolution of OpenStack - From Infancy to Enterprise

Recently I had the pleasure of hosting a webinar covering the Evolution of OpenStack. No matter how many times I review the history of OpenStack, I manage to learn something new. Just the idea that multiple companies, each with distinct ideas, can come together to make what I consider to be a super platform is amazing. Whether you think OpenStack is ready for prime time or not, it is hard to deny the power and disruptive nature it has in the current cloud market.

OpenStack 101 – What is OpenStack

The simplest definition I can provide of what OpenStack is: OpenStack is an open source cloud operating platform that can control large pools of compute, storage and networking resources throughout a datacenter, all managed through a single interface controlled by a CLI, API and/or dashboard. The orchestration provided by OpenStack gives administrators control over all those resources while still empowering the cloud consumers to provision resources through a self-service model. The platform is built in a distributed and modular way. This means the platform is built from multiple components, and you can choose which ones you need for your particular use case. A common analogy is that it is similar to Legos. One of the unique capabilities of OpenStack that stands out to me is the ability to leverage commodity hardware rather than relying on a particular make/model. With OpenStack, you don't have to keep all hardware the same.


The Three W’s - When, Who and Why

Let's jump right in at the beginning, the birth of OpenStack. The life of OpenStack started back in March 2010 when Rackspace decided to create an open source cloud platform. At the time Rackspace was primarily focused on a fully distributed object storage product. Coincidentally, a few months earlier, NASA had been approached by the US Government to create a platform to assist with the newly passed Open Government Initiative. NASA soon called their project Nebula.

After an email exchange, Rackspace and NASA decided to combine their efforts. In October 2010 the OpenStack project officially started. Here is the link that provides a bit more context and a high-level timeline in an interactive format - OpenStack Timeline.


The OpenStack Foundation

For the first two years, the OpenStack project was closely managed by Rackspace and its 25 initial partners. In September 2012, Rackspace decided to transfer the intellectual property and governance of the OpenStack project to a non-profit, member-run foundation known as the OpenStack Foundation. The OpenStack Foundation consists of a community that collaborates around a six-month, time-based release cycle. Within the planning phase of each release, the community gathers for an OpenStack Design Summit where project developers hold live working sessions and agree on release items.

The stats below prove that OpenStack is very much an active community platform with improvements happening daily, by the people who actually use and believe in the system.

OpenStack Community Stats


What Problem Does OpenStack Solve?

Before the cloud and virtualization capability came to life, data centers were growing at an uncontrollable rate. Data centers were filled with extremely under-utilized servers running one or two applications. That problem consequently gave birth to virtualization: the concept of using a hypervisor on top of hardware to create a multi-tenant computing platform. Being able to run multiple virtual machines on a single piece of hardware allows you to reduce the overall server footprint and better optimize your hardware use. Problem solved, right? Well, not totally.

As we all know, virtualization over time became the preferred infrastructure choice. Then data centers were filled with servers running hypervisors, with no easy way to manage the ever-growing virtualization platforms. This is where OpenStack provides value, because OpenStack allows you to add an orchestration layer on top of many types of hypervisors within your data center. This allows for more efficient management of your hardware and provides the ability to distribute your application workloads based on demand.


The Guts of OpenStack

The diagram below outlines all the projects/services currently part of the OpenStack platform.

OpenStack Projects/Services


OpenStack Project Timeline

This timeline visually walks you through the OpenStack project progression, really driving home the point that, over time, the project added a rich feature set. As mentioned during the webinar, it was not until around the Grizzly/Havana release that OpenStack was ready for primetime. Fast forwarding to now, with OpenStack turning 5 years old this year, you can see the feature set has only gotten better. Mainly with the introduction of Heat, Ceilometer and Trove, one could say OpenStack is now ready for Enterprise production workloads.

OpenStack Project Timeline


OpenStack Is Ready for the Enterprise

During OpenStack’s growth, its features have matured, creating a stable, reliable platform. There are too many features to mention in this blog. Below are just a few of my favorites that, in my mind, make it “Ready for the Enterprise”.

OpenStack Features and Benefits

High Availability Options


Industry Focus on OpenStack

Over the last year, OpenStack has gained the attention of many traditional IT vendors such as EMC, HP, Cisco and Red Hat. This has led to some of the most important cloud acquisitions we have seen in years. All of those acquisitions have one thing in common: OpenStack.

This article says it all - 2014’s Most Significant Cloud Deals Have OpenStack At Heart.


Enterprise Use Cases

On the openstack.org site there are many user stories you can download and read that support OpenStack’s widespread use. The Notre Dame story was very interesting to me. One of the newest and most groundbreaking user stories came from Walmart Labs, the eCommerce innovation and development group of Walmart. I have provided some cliff notes below, but you can read the whole article here. I strongly encourage taking the time to read it.

Walmart Labs have:

  • roughly 100K cores of compute running on OpenStack
  • used OpenStack to run the parent company’s Cyber Monday and holiday season sales operations
  • started with OpenStack a year and a half ago
  • 3.6K employees worldwide using their OpenStack platform
  • created a private cloud at public cloud scale

In Conclusion

Personally, I enjoyed recapping the great progress OpenStack has made in almost 5 short years. What a community of users can come together to accomplish can leave you speechless.

In my opinion, OpenStack is crossing the line from early adopters to early majority in the adoption cycle model. This idea is supported by a recent article describing how Walmart, the world's largest company by revenue, used OpenStack to run revenue-critical applications. That speaks volumes as to its position in the market, as well as the ever-increasing demand to move toward open source cloud technologies. Organizations now seek speed-to-market, agility, and flexibility, and they need a single control plane to manage their infrastructure. OpenStack has proven it provides all of the above, and technically we are only just getting Enterprise ready. Imagine what the next year or two will bring!

March 02, 2015 11:59 PM

Ben Nemec

IPMI Controller for OpenStack Instances

QuintupleO Network Topology

As discussed in my last update on QuintupleO, the biggest blocker for getting that working was a way to allow Ironic to control OpenStack instances. Since then I have been made aware of the pyghmi project, which provides a way to implement IPMI interfaces that do arbitrary things on the back end. It currently includes a couple of examples in its bin directory for noop and virsh implementations. I've written an OpenStack version.

So far it only implements the pieces I needed to do an Ironic deployment, and assumes the instance it's controlling was created with a PXE-bootable flavor as discussed in my previous QuintupleO update. With that and the small Nova and Neutron changes (also in the previous update) I was able to do a full baremetal-style deployment to OpenStack instances using Ironic. It actually imitates a real baremetal deploy better because you're using the regular ipmitool driver in Ironic instead of pxe_ssh.
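
For the curious, the core of such a script is small: pyghmi's bmc.Bmc class speaks the IPMI protocol, and you override the power methods to call out to Nova. Below is a rough sketch of the approach (not the actual openstackbmc code); the method names follow pyghmi's fake BMC example, and the client calls assume a Juno-era python-novaclient:

# Rough sketch of an OpenStack-backed IPMI BMC using pyghmi. Method
# names follow pyghmi's fake BMC example; not the real openstackbmc.
from novaclient.v1_1 import client as nova_client
from pyghmi.ipmi import bmc


class OpenStackBmc(bmc.Bmc):
    def __init__(self, authdata, port, instance, user, password, tenant,
                 auth_url):
        super(OpenStackBmc, self).__init__(authdata, port)
        self.nova = nova_client.Client(user, password, tenant, auth_url)
        # Resolve the instance name to an ID once, up front.
        self.instance = self.nova.servers.find(name=instance).id

    def get_power_state(self):
        status = self.nova.servers.get(self.instance).status
        return 'on' if status == 'ACTIVE' else 'off'

    def power_on(self):
        self.nova.servers.get(self.instance).start()

    def power_off(self):
        self.nova.servers.get(self.instance).stop()

    def power_reset(self):
        self.nova.servers.get(self.instance).reboot()

    # Boot device plumbing, power_shutdown, etc. elided for brevity.


if __name__ == '__main__':
    # Values mirror the example call in the notes below.
    mybmc = OpenStackBmc({'admin': 'password'}, 623, 'baremetal_0',
                         'admin', 'password', 'admin',
                         'http://11.1.1.12:5000/v2.0')
    mybmc.listen()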

Some Notes on my environment:

  • For each baremetal VM I wanted to deploy to, I created a matching "bmc" VM that was running the openstackbmc script (and nothing else, so these can be very small instances) pointed at the "baremetal" instance. There are other ways this could be done, but separate VMs seemed to map more closely to how BMCs would work in a real environment.
  • In addition to the usual public and private networks, I added a provisioning network called "undercloud" with DHCP disabled so the undercloud's Neutron could handle that. You can see the network topology in the attached image.
  • Note that the baremetal VMs had to have the "undercloud" network as their first network device. I ran into intermittent issues where they would only attempt to PXE boot off the first interface. I'm not sure what was going on there, but since it didn't particularly matter to me which interface came first that worked fine for me.
  • Example openstackbmc call (as root): openstackbmc --os-user admin --os-password password --os-tenant admin --os-auth-url http://11.1.1.12:5000/v2.0 --instance baremetal_0

While this should make QuintupleO a viable option for private clouds where you can hack up Nova and Neutron, we still need to get this functionality into the projects themselves in a way that integrates better. I would also like to look into automating the deployment of these environments - using Heat to deploy the baremetal/bmc pairs would be cool, and could make this much easier to scale.

So that's the state of the QuintupleO art today. I've had a few people contact me about it since my last update and hopefully this will be useful. If you want to chat more about it, my contact information is on the About page.

Edit: Here's my nodes json file for registering the Ironic nodes:

{
   "nodes":
   [
      {
         "pm_type": "pxe_ipmitool",
         "mac":
         [
            "fa:16:3e:2a:0e:36"
         ],
         "cpu": "2",
         "memory": "4096",
         "disk": "40",
         "arch": "x86_64",
         "pm_user": "admin",
         "pm_password": "password",
         "pm_addr": "10.0.0.8"
      },
      {
         "pm_type": "pxe_ipmitool",
         "mac":
         [
            "fa:16:3e:da:39:c9"
         ],
         "cpu": "2",
         "memory": "4096",
         "disk": "40",
         "arch": "x86_64",
         "pm_user": "admin",
         "pm_password": "password",
         "pm_addr": "10.0.0.15"
      },
      {
         "pm_type": "pxe_ipmitool",
         "mac":
         [
            "fa:16:3e:51:9b:68"
         ],
         "cpu": "2",
         "memory": "4096",
         "disk": "40",
         "arch": "x86_64",
         "pm_user": "admin",
         "pm_password": "password",
         "pm_addr": "10.0.0.16"
      }
   ]
}

by bnemec at March 02, 2015 05:35 PM

Rob Hirschfeld

DefCore Process 9 Point Graphic balances Community, Vendor, Governance

I’ve been working on the OpenStack DefCore process for nearly 3 years and our number #1 challenge remains how to explain it simply.

10 days ago, the DefCore committee met face-to-face in Austin to work on documenting the process that we want to follow (see Guidelines).  As we codify DefCore, our top priority is getting community feedback and explaining the process without expecting everyone to read the actual nuts-and-bolts of the process.

I think of it as writing the DefCore preamble: “We, the community, in order to form a more perfect cloud….”

I don’t think we’ve reached that level of simplicity; however, we have managed to boil down our thinking into nine key points.  I’m a big Tufte fan and believe that visualizations are essential to understanding complex topics.  In my experience, it takes many, many iterations with feedback to create excellent graphics.  This triangle is my first workable pass.

An earlier version of these points was presented to the OpenStack board in December 2014, and we’ve been able to refine them during the latest DefCore community discussions.

We’re interested in hearing your opinions.  Here are the current (2015-Feb-22) points:

  1. COMMUNITY INVOLVEMENT
    1. MAPPING FEATURE AVAILABILITY: We are investing in gathering data-driven and community-involved feedback tools to engage the largest possible base for core decisions.  This mapping information will be available to the community.
    2. COMMUNITY-CHOSEN CAPABILITIES: Going forward, we want a community process to create, cluster and describe capabilities.  DefCore bootstrapped this process for Havana.  Further, capabilities are defined by tests in Tempest, so test coverage gaps (like Keystone v2) translate into Core gaps that the community will fill by writing tests.
    3. TESTS AS TRUTH: DefCore could expand in the future, but uses Tempest as the source of tests for now.  Gaps in tests will result in DefCore gaps.  We are hosting final documents in Gerrit, using the OpenStack review process to ensure that we work within the community processes.
  2. VENDOR
    1. CLEAR RESULTS (PASS-FAIL): Vendors must pass all required core tests as defined by this process.  There are no partial results.  Passing additional tests is encouraged but not required.
    2. VENDORS SELF-TEST: Companies are responsible for running the tests and submitting results to the Foundation for validation against the DefCore criteria.  Approved vendor reports will be available to the community.
    3. APPEAL PROCESS / FLAGGED TESTS: There is a “safety valve” for vendors to deal with test scenarios that are currently difficult to recreate in the field.  We expect flags to be temporary.
  3. GOVERNANCE
    1. SCORING BASED ON TRANSPARENT PROCESS (DEFCORE): The 2015 by-laws change requires the Board and TC to agree to a process by which the Foundation can hold OpenStack Vendors accountable for their use of the trademarks.
    2. BOARD IS FINAL AUTHORITY: The Board is responsible for approving the final artifacts based on the recommendations.  By having a transparent process, community input is expected in advance of that approval.
    3. TIMELY GUIDANCE: The process is time sensitive.  There’s a need for the Board to produce DefCore guidance in a timely way after each release and then feed that result into the next cycle.  The guidance is expected to be drafted for review at each Summit and then approved at the Board meeting three months after the draft is posted.

by Rob H at March 02, 2015 04:58 PM

John Eckersberg

Improving HA Failures with TCP Timeouts

Most people probably don't give too much thought to our old friend, TCP. For the most part, it's just a transparent piece of infrastructure that sits between clients and servers, and we spend most of our time paying attention to the endpoints, and rightly so. Almost always it does the right thing, and we don't care. Life is good.

However, sometimes you do need to care. This is about one of those times.

Background

This is what a typical HA deployment of Red Hat Enterprise Linux OpenStack Platform looks like, focusing specifically on the RabbitMQ service:

HA deployment

There are three controller nodes that are part of a pacemaker cluster. There is a virtual IP (VIP) for AMQP traffic that floats between the three controllers and is controlled by pacemaker. HAProxy is bound to listen on the VIP on all three controller nodes. At any point in time, only one controller will have the VIP, and only this node will be responsible for directing AMQP to the backend RabbitMQ servers via HAProxy. This HAProxy instance balances the traffic across all three instances of rabbitmq-server, one on each controller.

VIP Failover

At this point, everything is happily chugging along. There's a bunch of clients (or in AMQP parlance, consumers and producers) connected through the VIP, through HAProxy, ultimately to RabbitMQ on the backend. What happens if the VIP fails over to another node?

For the clients, it's pretty straightforward. The client will try to write to the server on the existing connection. (This is either a "normal" write, or via a TCP keepalive probe, which is a related topic that I won't go over here. Just assume it's there). Since the packet gets routed to a new host (whichever one newly picked up the VIP), and the existing TCP session is not recognized by the new host, the client will receive a TCP RST. The client cleans up the connection and reconnects to the VIP to establish a valid session on the new server.

However, the server behavior is more interesting. Consider:

  1. There's a connection, through the VIP, through HAProxy, to RabbitMQ

  2. The VIP fails over

  3. RabbitMQ writes data to the connection. HAProxy receives the data from RabbitMQ and writes the data back to the client.

What happens?

The source address for the connection from HAProxy back to the client is the VIP address. However the VIP address is no longer present on the host. This means that the network (IP) layer deems the packet unroutable, and informs the transport (TCP) layer. TCP, however, is a reliable transport. It knows how to handle transient errors and will retry. And so it does.

TCP Retries

TCP generally holds on to hope for a long time. A ballpark estimate is somewhere on the order of tens of minutes (30 minutes is commonly referenced). During this time it will keep probing and trying to deliver the data.

See RFC1122, section 4.2.3.5 for details on the specification and the linux documentation for the related tunables in the kernel.
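
To put rough numbers on that ballpark: retransmissions back off exponentially until they hit the kernel's RTO ceiling, and the default net.ipv4.tcp_retries2 = 15 bounds how long that goes on. A back-of-the-envelope sketch, assuming a typical 200 ms initial RTO and the 120 s cap:

# Back-of-the-envelope estimate of how long TCP keeps retrying,
# assuming a 200 ms initial RTO, exponential backoff, the 120 s RTO
# cap (TCP_RTO_MAX), and the default net.ipv4.tcp_retries2 = 15. The
# real kernel logic is time-based rather than a simple loop, so treat
# this as an approximation.
RTO_INITIAL = 0.2  # seconds; depends on the measured RTT
RTO_MAX = 120.0    # seconds
RETRIES2 = 15      # net.ipv4.tcp_retries2 default

total, rto = 0.0, RTO_INITIAL
for _ in range(RETRIES2 + 1):  # 15 retransmissions plus the final wait
    total += rto
    rto = min(rto * 2, RTO_MAX)

print("connection declared dead after ~%.1f seconds" % total)
# -> ~924.6 seconds, i.e. a bit over 15 minutes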

It's important to note that HAProxy has no idea that any of this is happening. As far as its process is concerned, it called write() with the data and the kernel returned success.

Work Queues / RPC

Let's take a step back now and consider what impact this has on the users of messaging. RPC calls over AMQP are a common use case. What happens (simplified) is as follows; a minimal sketch follows the list:

  1. The RPC client sends the request message into a work queue.

  2. RabbitMQ delivers the message to one of the workers consuming messages from the work queue. How this gets scheduled is configuration-dependent.

  3. The worker does the work to service the request and sends the reply back to the client.
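
Here is a minimal sketch of that pattern from the client side, using the pika client library (method signatures per recent pika releases; the host and queue names are illustrative):

# Minimal sketch of the RPC-over-AMQP pattern described above, using
# the pika client library. Host and queue names are illustrative.
import uuid
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters(host='amqp-vip'))
ch = conn.channel()
ch.queue_declare(queue='rpc_queue')  # the shared work queue (step 1)

# Client side: publish a request into the work queue and listen for
# the answer on a private, exclusive reply queue.
reply_queue = ch.queue_declare(queue='', exclusive=True).method.queue
corr_id = str(uuid.uuid4())
ch.basic_publish(exchange='',
                 routing_key='rpc_queue',
                 properties=pika.BasicProperties(reply_to=reply_queue,
                                                 correlation_id=corr_id),
                 body='do_some_work')

# A worker consuming 'rpc_queue' (step 2) services the request, then
# publishes its answer to properties.reply_to with the same
# correlation_id (step 3); the client matches replies by correlation_id.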

If the VIP has failed over, #2 can trigger the problematic behavior described above. RabbitMQ tries to write the RPC request to one of the workers that was connected through the VIP on the old host. This request gets stuck in TCP retries. The end result is that the client never gets its answer for the request and throws an error somewhere.

The problem is that the TCP retries behavior allows "dead" consumers to hang around in RabbitMQ's view. It has no idea that they're dead, so it will continue to send messages until the consumer disconnects, which only happens when TCP gives up retrying some tens of minutes later.

TCP_USER_TIMEOUT

Since Linux 2.6.37, TCP sockets have an option called TCP_USER_TIMEOUT. Here's the original commit message, since it describes it better than I could:

commit dca43c75e7e545694a9dd6288553f55c53e2a3a3
Author: Jerry Chu <hkchu@google.com>
Date:   Fri Aug 27 19:13:28 2010 +0000

    tcp: Add TCP_USER_TIMEOUT socket option.

    This patch provides a "user timeout" support as described in RFC793. The
    socket option is also needed for the the local half of RFC5482 "TCP User
    Timeout Option".

    TCP_USER_TIMEOUT is a TCP level socket option that takes an unsigned int,
    when > 0, to specify the maximum amount of time in ms that transmitted
    data may remain unacknowledged before TCP will forcefully close the
    corresponding connection and return ETIMEDOUT to the application. If
    0 is given, TCP will continue to use the system default.

    Increasing the user timeouts allows a TCP connection to survive extended
    periods without end-to-end connectivity. Decreasing the user timeouts
    allows applications to "fail fast" if so desired. Otherwise it may take
    upto 20 minutes with the current system defaults in a normal WAN
    environment.

    The socket option can be made during any state of a TCP connection, but
    is only effective during the synchronized states of a connection
    (ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, CLOSE-WAIT, CLOSING, or LAST-ACK).
    Moreover, when used with the TCP keepalive (SO_KEEPALIVE) option,
    TCP_USER_TIMEOUT will overtake keepalive to determine when to close a
    connection due to keepalive failure.

    The option does not change in anyway when TCP retransmits a packet, nor
    when a keepalive probe will be sent.

    This option, like many others, will be inherited by an acceptor from its
    listener.

    Signed-off-by: H.K. Jerry Chu <hkchu@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

This is exactly what we want. If HAProxy writes something back to the client and it gets stuck retrying, we want the kernel to timeout the connection and notify HAProxy as soon as possible.
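For illustration, setting the option from an application looks roughly like this; a sketch only, and note the option number is copied from linux/tcp.h because older Python socket modules don't export it as a constant:

import socket

# TCP_USER_TIMEOUT from <linux/tcp.h>; Linux >= 2.6.37 only
TCP_USER_TIMEOUT = 18

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Abort with ETIMEDOUT if transmitted data remains unacknowledged
# for more than 30 seconds (the value is in milliseconds)
sock.setsockopt(socket.IPPROTO_TCP, TCP_USER_TIMEOUT, 30000)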

Fixing HAProxy

I talked to Willy Tarreau over at HAProxy about this behavior, and he came up with this patch which adds a new tcp-ut option, allowing the user to configure the TCP_USER_TIMEOUT value per listening socket. This is available in the 1.6 development branch of HAProxy, and is currently being backported to Fedora and RHEL OpenStack Platform.
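In configuration terms the new keyword hangs off the bind line. A hedged example, reusing the hypothetical listener from earlier (check the HAProxy 1.6 documentation for the exact syntax of your build):

listen rabbitmq
    bind 192.0.2.100:5672 tcp-ut 30s
    mode tcp
    balance roundrobin
    server controller1 192.0.2.101:5672 check
    server controller2 192.0.2.102:5672 check
    server controller3 192.0.2.103:5672 check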

Fixing Linux

I told you a little white lie before. Although Linux gained TCP_USER_TIMEOUT in 2.6.37, it was broken for some cases until 3.18. Importantly, one of those cases is the one I've discussed here: the timeout was not honored when the TCP layer had queued packets that were never transmitted, which is exactly what happens when IP can't route them. So you'll need either a kernel >= 3.18, or one which has this patch backported and applied:

commit b248230c34970a6c1c17c591d63b464e8d2cfc33
Author: Yuchung Cheng <ycheng@google.com>
Date:   Mon Sep 29 13:20:38 2014 -0700

    tcp: abort orphan sockets stalling on zero window probes

    Currently we have two different policies for orphan sockets
    that repeatedly stall on zero window ACKs. If a socket gets
    a zero window ACK when it is transmitting data, the RTO is
    used to probe the window. The socket is aborted after roughly
    tcp_orphan_retries() retries (as in tcp_write_timeout()).

    But if the socket was idle when it received the zero window ACK,
    and later wants to send more data, we use the probe timer to
    probe the window. If the receiver always returns zero window ACKs,
    icsk_probes keeps getting reset in tcp_ack() and the orphan socket
    can stall forever until the system reaches the orphan limit (as
    commented in tcp_probe_timer()). This opens up a simple attack
    to create lots of hanging orphan sockets to burn the memory
    and the CPU, as demonstrated in the recent netdev post "TCP
    connection will hang in FIN_WAIT1 after closing if zero window is
    advertised." http://www.spinics.net/lists/netdev/msg296539.html

    This patch follows the design in RTO-based probe: we abort an orphan
    socket stalling on zero window when the probe timer reaches both
    the maximum backoff and the maximum RTO. For example, an 100ms RTT
    connection will timeout after roughly 153 seconds (0.3 + 0.6 +
    .... + 76.8) if the receiver keeps the window shut. If the orphan
    socket passes this check, but the system already has too many orphans
    (as in tcp_out_of_resources()), we still abort it but we'll also
    send an RST packet as the connection may still be active.

    In addition, we change TCP_USER_TIMEOUT to cover (life or dead)
    sockets stalled on zero-window probes. This changes the semantics
    of TCP_USER_TIMEOUT slightly because it previously only applies
    when the socket has pending transmission.

    Signed-off-by: Yuchung Cheng <ycheng@google.com>
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Neal Cardwell <ncardwell@google.com>
    Reported-by: Andrey Dmitrov <andrey.dmitrov@oktetlabs.ru>
    Signed-off-by: David S. Miller <davem@davemloft.net>

I've proposed this for RHEL 7.

The (Hopefully Very Near) Future

Most of this behavior can be detected and avoided by using application-layer AMQP heartbeats. However, at this time the feature is not available in oslo.messaging, although this review has been close to merging for a while now. Even with heartbeats, having TCP timeouts as another knob to tweak is a good thing, and it has more general applicability. I've had success using the timeouts to better detect inter-cluster failures of RabbitMQ nodes as well, but that will be a future post.

by John Eckersberg at March 02, 2015 03:12 PM

Opensource.com

Exploring the world of OpenStack, ensuring successful deployment, and more

The Opensource.com weekly look at what is happening in the OpenStack community and the open source cloud at large.

by Jason Baker at March 02, 2015 08:00 AM

Swapnil Kulkarni

[OpenStack] Rally in a Container : A reference implementation similar to TCup

Create a Docker container with Rally

Prerequisites:

  • This setup requires one VM or bare-metal machine with internet connectivity.
  • Set up a fresh supported Linux installation (Ubuntu/Fedora/CentOS).
  • Install git and docker

Steps

Pull the docker image coolsvap/docker-rally 

$ docker pull coolsvap/docker-rally:latest

Create two directories for rally_home (for configuration files) and rally_db to backup rally database

$ mkdir rally_home rally_db

Create a file deployment.json in the rally_home directory with the following details

{
    "type": "ExistingCloud",
    "auth_url": "keystone-auth-url",
    "admin": {
        "username": "username",
        "password": "password",
        "tenant_name": "tenant-name"
    }
}

Replace the values for keystone-auth-url, username, password, and tenant-name to match your deployment.
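For example, a filled-in file might look like this (all values hypothetical):

{
    "type": "ExistingCloud",
    "auth_url": "http://192.0.2.10:5000/v2.0",
    "admin": {
        "username": "admin",
        "password": "secret",
        "tenant_name": "admin"
    }
}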

Run the container with the following command, providing the paths to the rally_home and rally_db directories created in the previous step

$ docker run -v [PATH to rally_home]:/home/rally -v [PATH to rally_db]:/var/lib/rally/database coolsvap/docker-rally:latest

The container will run and place the Tempest test results in the rally_home directory.

Note: In some cases you might need to set SELinux to permissive mode to enable volume support; do that with the following command.

$ setenforce 0


Filed under: Docker, HowTos, OpenStack, rally Tagged: Docker, OpenStack

by Swapnil Kulkarni at March 02, 2015 04:31 AM

February 28, 2015

OpenStack Blog

OpenStack Community Weekly Newsletter (Feb 20 – 27)

OpenStack DefCore Accelerates & Simplifies with Clear and Timely Guidelines [Feedback?]

Last week, the OpenStack DefCore committee rolled up our collective sleeves and got to work in a serious way. We had an in-person meeting with great turnout: 5 board members, Foundation executives/staff and good community engagement. TL;DR > We think DefCore deliverables should be dated milestone guidelines instead of being tightly coupled to release events (see graphic on Rob Hirschfeld’s blog).

Scaling OpenStack Neutron Development

During the Kilo cycle, the Neutron team made an effort to expand and scale the Neutron development community. Plugin Decomposition and the Advanced Services Split were designed to enable a more scalable development environment which will allow for fast code iteration in all the areas affected. How have we done with these?

OpenStack + Kubernetes = More choice and flexibility for developers

Sometimes a demo can go almost too well. By the time Craig Peters and Georgy Okrokvertskhov took the mic to show how OpenStack support for Kubernetes makes managing Docker containers pretty much a point-and-click operation, the crowd at the Kubernetes San Francisco meetup was almost hoping for a glitch. Instead, in just under 10 minutes, the pair did a seamless walk-through of how to set up a Kubernetes cluster using OpenStack in the final demo of the night.

The Road to Vancouver

Relevant Conversations

Deadlines and Development Priorities

Reports From Previous Events

Security Advisories and Notices

Tips ‘n Tricks

Upcoming Events

The 2015 events plan is now available on the Global Events Calendar wiki.

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

OpenStack Reactions

callingthebatman

Pinging a core reviewer for a quick review to fix the gate

 

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at February 28, 2015 12:13 AM

February 27, 2015

OpenStack Blog

Sign up for OpenStack Upstream Training in Vancouver

It’s becoming a habit: the OpenStack Foundation will repeat in Vancouver the Upstream Training program to accelerate the speed at which new OpenStack contributors become successful at integrating their contributions into OpenStack. If you’re a new OpenStack contributor or plan on becoming one soon, you should sign up for the next OpenStack Upstream Training in Vancouver, May 16-17. Participation is also strongly advised for first-time attendees of the OpenStack Design Summit. We’ve done it before the Summits in Atlanta and Paris and participants loved it.

With over 2000 developers from 80 different companies worldwide, OpenStack is one of the largest collaborative software-development projects. Because of its size, it is characterized by a huge diversity in social norms and technical conventions. These can significantly slow down the speed at which changes by newcomers are integrated in the OpenStack project.

The OpenStack Foundation is training new OpenStack developers and documentation writers to ensure bug fixes or features are accepted into the OpenStack project in a minimum amount of time and with as little struggle as possible. Students work on real-life bug fixes or new features during two days of classes and online mentoring, until the work is accepted by OpenStack.

The live two-day class teaches developers to navigate the intricacies of the project’s technical tools and social interactions. In followup sessions, the students benefit from individual online sessions to help them resolve any remaining problems they might have. Get all the details on the wiki.

Enrolment for the training session in Vancouver is open: register and reserve your seat for OpenStack Upstream Training in Vancouver, May 16-17.


by Stefano Maffulli at February 27, 2015 09:43 PM

OpenStack Superuser

OpenStack heads to Melbourne to CONNECT with local enterprise

Continuing our push to bring OpenStack information to communities across the globe, we’ve teamed up with CONNECT 2015, a large event in Melbourne, Victoria, to host a daylong pow-wow on OpenStack on April 21.

We’re excited to be a part of CONNECT, which is designed to help business people understand how the convergence of a number of technology mega-trends is creating a perfect storm of disruption that will impact the way we live and do business in the future.

The day dedicated to OpenStack is specifically designed for cloud leaders, technology decision-makers and heads of infrastructure and innovation who want to learn more about OpenStack; discuss the benefits of open source and its community; and outline concrete steps that businesses can take to decide if an OpenStack Cloud is right for them.

With that in mind, the agenda is packed full of OpenStack leaders – from the Foundation to enterprise users. Randy Bias, currently VP of Technology at EMC and a well-known and outspoken supporter of OpenStack, will keynote. Board Director Tristan Goode, CEO of Aptira, will moderate a panel of super users and speak on what is on the horizon for the project.

The breadth of users illustrates the flexibility of the software to support differing needs and scale. Each is operating a production OpenStack cloud:

  • A local enterprise – Erez Yarkoni of Telstra
  • A large enterprise – David Medberry of Time Warner Cable
  • A hosting company – Mike Dorman of Go Daddy
  • A research organization – Glenn Moloney from the NeCTAR project at the University of Melbourne

Speakers from each of these companies will share their use cases and experiences with OpenStack.

They will be joined by OpenStack luminaries Michael Still, senior software development manager at Rackspace and the project team lead for Nova Compute – the primary OpenStack project; and Tom Fifield, community manager from the OpenStack Foundation. Presenters from Foundation ecosystem companies round out this impactful agenda. Red Hat and Brocade are immersed in OpenStack, simultaneously creating their OpenStack-based products and services and contributing the knowledge gained to the software through participation in the community process.

Click here to register for CONNECT 2015 and the OpenStack sessions. You'll also be able to visit and chat with Aptira, EMC, Red Hat and Rackspace at our booth, number 87, in the Expo on April 22.

Cover Photo by Michael Theis // CC BY NC

by Superuser at February 27, 2015 07:15 PM

Tesora Corp

Short Stack: HP deal, OpenStack Community, Defcore, Containers

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.
If you like what you see, please consider subscribing.
Here we go with this week's links:
HP announces a 10-year, multi-billion dollar deal with Deutsche Bank that will outsource much of the bank's IT infrastructure. Deutsche Bank plans on using HP's Helion private cloud to buy data centre services on demand, allowing them to cut costs and move to a modern, agile technology platform.
Russell Bryant, from Red Hat, discusses OpenStack as a global community and the importance of real time communication and role models within the Community.  He also shares his thoughts on the success of OpenStack and why DefCore matters.
OpenStack and Docker are two dominant technologies that provide users with scalability and automation.  Recently, Google integrated the Kubernetes container management application with the OpenStack Application Catalog, Murano.  With this powerful combination, users will be able to easily create a scalable Kubernetes application within minutes.
Open source is all about people coming together from around the world to collaborate on a common goal and solve shared problems. The growth of the OpenStack community has been impressive over the last year, and community involvement has been important to its success. Two great ways to improve community collaboration are social media and Meetups.
Big players like IBM, Google, and Mirantis are now supporting and enabling container technology capabilities in their open source cloud platform.  Mirantis is partnering with Google to provide OpenStack users container technology that will allow developers to seamlessly move entire environments between private and public clouds.  

by 1 at February 27, 2015 06:57 PM

OpenStack Superuser

CloudCamp Bangladesh sparks conversation on the future of cloud

At any tech conference, the lines always tell a story. While people generally queue for the food and coffee, at CloudCamp Bangladesh some of them waited to take pictures with cut-outs of OpenStack Foundation members.

Pictured above is Mohammad Zaman, founder of CloudCamp Bangladesh with a life-size stand-in for OpenStack executive director Jonathan Bryce. Zaman tells us that the fourth edition was a resounding success.

In a country with 120 million mobile phone users -- and that’s still only 75 percent of the population -- the cloud is becoming increasingly important.

“Almost all of that mobile content and data are coming from the cloud or residing in a cloud,” Zaman says. “So people are really interested to know, ‘What is it? How do I use it and how can this be good for Bangladesh?’”

Roughly 900 attendees filled the Bangabandhu International Conference Center (BICC) in Dhaka to attend CloudCamp Bangladesh sponsored by OpenStack at Digital World 2015.

The fourth CloudCamp Bangladesh focused on open source technologies for cloud, such as OpenStack, Cloud Foundry and more. Billed as an unconference where early adopters of cloud computing technologies can exchange ideas, challenges and best practices, the camp drew a largely student crowd: about 80 percent of attendees were students, eager to join open discussions on advancing cloud technologies in Bangladesh.

“Bangladesh is a very much open-source oriented country, culturally. Students like to participate and contribute,” Zaman added.

The three-hour camp featured discussion topics such as Kevin Jackson’s “Building a Cloud for the Government” and a panel discussion on “A Cloud for the Government of Bangladesh.”

Zaman says the amount of participation and awareness raised make the event a success. The event was of interest to the locals because, according to Zaman, “Cloud is where information technology is moving.” Zaman adds that despite interest, there’s a lot of confusion on the definition and utilization of it, but that the future looks promising. “We would like to see OpenStack very active in Bangladesh, as we see great interaction and enthusiasm.”

Cover Photo of Dhaka by Imtiaz Tonmoy // CC BY NC

by Hong Nga Nguyen at February 27, 2015 04:55 PM

Tesora Corp

Slides and Recording of Webinar: An Introduction to Database as a Service on OpenStack using Trove

Here are the slides and recording for yesterday's joint webinar by Mirantis and Tesora on February 26, 2015. It discussed OpenStack, Database as a Service and the OpenStack Trove Project. The speakers were Kamesh Pemmaraju of Mirantis and Ken Rugg of Tesora.

[Slides embedded via SlideShare]

[Recording embedded via Vimeo]

Getting Started with Database as a Service on OpenStack Trove from Tesora on Vimeo

 

 

by 1 at February 27, 2015 04:51 PM

Sébastien Han

Analyse OpenStack guest writes and reads running on Ceph

Analyse the IO pattern of all your guest machines.

Append the following to your ceph.conf:

[client]
log file = /var/log/qemu/qemu-guest.$pid.log
debug rbd = 20

The path of the log file must be writable by QEMU. The log shows the offset and the length of each IO that was submitted.

Some examples:

  • DD one time 4K: dd if=/dev/zero of=/dev/vdb bs=4k count=1 conv=fsync

librbd: aio_write 0x7f2b01690ab0 off = 0 len = 4096 buf = 0x7f2a981b2000
  • DD 10 times 1M: dd if=/dev/zero of=/dev/vdb bs=1M count=10 conv=fsync

librbd: aio_write 0x7f2b01690ab0 off = 0 len = 1048576 buf = 0x7f2a98338200
librbd: aio_write 0x7f2b01690ab0 off = 1048576 len = 1048576 buf = 0x7f2a98338200
librbd: aio_write 0x7f2b01690ab0 off = 2097152 len = 1048576 buf = 0x7f2a98338200
librbd: aio_write 0x7f2b01690ab0 off = 3145728 len = 1048576 buf = 0x7f2a98338200
librbd: aio_write 0x7f2b01690ab0 off = 4194304 len = 1048576 buf = 0x7f2a98338200
librbd: aio_write 0x7f2b01690ab0 off = 5242880 len = 1048576 buf = 0x7f2a98338200
librbd: aio_write 0x7f2b01690ab0 off = 6291456 len = 1048576 buf = 0x7f2a98338200
librbd: aio_write 0x7f2b01690ab0 off = 7340032 len = 1048576 buf = 0x7f2a98338200
librbd: aio_write 0x7f2b01690ab0 off = 8388608 len = 1048576 buf = 0x7f2a98338200
librbd: aio_write 0x7f2b01690ab0 off = 9437184 len = 1048576 buf = 0x7f2a98338200

Note that most of this data is also aggregated through the admin socket, which can be set up for virtual machines running on Ceph as well.
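If you want a summary of the IO pattern rather than raw log lines, a small script along these lines can tally the write sizes (the log path and pid are hypothetical):

import re
from collections import Counter

sizes = Counter()
with open('/var/log/qemu/qemu-guest.12345.log') as log:
    for line in log:
        match = re.search(r'aio_write .* len = (\d+)', line)
        if match:
            sizes[int(match.group(1))] += 1

# e.g. [(1048576, 10), (4096, 1)] for the two dd runs above
print(sizes.most_common())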

February 27, 2015 02:24 PM

Swapnil Kulkarni

[OpenStack] How to set up DevStack with Rally

Prerequisites:
– DevStack requires one VM or bare-metal machine with internet connectivity.
– Set up a fresh supported Linux installation (Ubuntu/Fedora/CentOS).
– Install Git

Steps
1. Clone the devstack-rally repository

$ git clone https://github.com/svashu/devstack-rally

2. Set up default configuration parameters for Rally with rally_setup.sh

$ ./rally_setup.sh

3. Modify devstack/localrc to set the host IP and passwords

4. Deploy your DevStack

$ cd devstack && ./stack.sh

After the script completes you will see the following message

Horizon is now available at http://X.X.X.X/
Keystone is serving at http://X.X.X.X:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: xxxxxxx
This is your host ip: X.X.X.X

Source the credentials required for executing commands

For demo user

$ source accr/demo/demo

For admin user

$ source accr/admin/admin

Filed under: development, devstack, OpenStack, rally Tagged: Contributions, development, devstack, OpenStack

by Swapnil Kulkarni at February 27, 2015 01:39 PM

February 26, 2015

OpenStack Superuser

OpenStack + Kubernetes = More choice and flexibility for developers

Sometimes a demo can go almost too well.

By the time Craig Peters and Georgy Okrokvertskhov took the mic to show how OpenStack support for Kubernetes makes managing Docker containers pretty much a point-and-click operation, the crowd at the Kubernetes San Francisco meetup was almost hoping for a glitch.

Instead, in just under 10 minutes, the pair did a seamless walk-through of how to set up a Kubernetes cluster using OpenStack in the final demo of the night. The key component of the demo was Murano, a new application catalog available in the OpenStack StackForge repository. (If you’re running OpenStack and want to try out the Kubernetes integration, you should install Murano now. Murano is an exciting new project that you can contribute to and help drive the direction.)


During the demo, Kubernetes appeared just like any other application inside Murano: a cluster was created with a simple drag-and-drop in Horizon’s point-and-click interface. Murano then works its magic with OpenStack Orchestration (Heat) to create all the resources, networking and compute, needed to deploy the cluster. Once the Kubernetes cluster was set up, the presenters added pods and applications from the Murano catalog. That’s it. The pair from Mirantis asked for questions, but there were none, so participants shifted to the bar where drinks and food were offered courtesy of Intel.

Over 100 people — with standing-room only at the back — filled co-working space Galvanize in the city’s SOMA district for the meetup. Other demos included Clayton Coleman on OpenShift 3 and a true nail-biter by Kelsey Hightower from CoreOS as he live demoed Kubernetes' capabilities for self-healing, automatic scaling and rolling upgrades.

The partnership comes at a crucial time. Many private and public cloud users want to integrate tools like containers into their orchestration platform. Containers won't suit every workload, so companies need a tool to tie it all together. To work, that tool must be integrated into their systems, flexible and broadly supported, while bringing in the right technologies for the right use cases.

Craig McLuckie, a product manager for Google’s cloud services, sat down with Superuser to tell us more about what this all means.

“We look at it as a way to address our major customer’s needs around the hybrid cloud,” McLuckie says. “Our customers want to bring Google’s style of management to their IT environment, but it has to extend to public and private cloud,” he adds. “[The partnership] greatly extends the reach of Kubernetes to a broad array of users deploying OpenStack to their private clouds. Those users experience Google-style cluster management in this environment.” McLuckie says that he hopes OpenStack users will like it so much they’ll migrate to the public cloud, potentially becoming customers for Google’s public cloud services.


The Mountain View company developed Kubernetes — the name means “ship master” or “helmsman” in ancient Greek — to help schedule/manage Docker containers. Kubernetes, originally built on top of Google Compute Engine, went open source in June 2014.

At the time, there were so many container management tools coming out that some predicted an “arms race.” But McLuckie says he hopes there will be no container wars, adding that “the technology is nascent. There’s a lot of work for everyone to do. I certainly feel that everyone’s going to benefit from a concerted push forward. There’s so much value to be had,” he adds. “I would not want to participate in any form of container wars, and Google is determined to work in a community-friendly fashion.”

McLuckie says he’s “psyched” about open source, which brings a different kind of competitive edge thanks to its virtuous cycle. “I look at what we’ve done with Kubernetes and I’m awed by what companies including Red Hat and CoreOS have brought to the table. They’ve made it so much better than what we could have done by ourselves.”

OpenStack COO Mark Collier agrees, calling this a “very exciting time for OpenStack users and developers looking for ways to integrate and take full advantage of whatever strengths the technology has.”

Google choosing to open source the code with an Apache 2.0 license makes it even more accessible to people. As a result, he adds, “We’re starting to see some early experimentation, searching for the right places to integrate Kubernetes into an OpenStack cloud.”

“We’re just scratching the surface of how our communities can work together,” Collier says.

by Nicole Martinelli at February 26, 2015 11:47 PM

Rob Hirschfeld

OpenStack DefCore Accelerates & Simplifies with Clear and Timely Guidelines [Feedback?]

Last week, the OpenStack DefCore committee rolled up our collective sleeves and got to work in a serious way. We had an in-person meeting with great turnout: 5 board members, Foundation executives/staff and good community engagement.

TL;DR > We think DefCore deliverables should be dated milestone guidelines instead of being tightly coupled to release events (see graphic).

DefCore has a single goal expressed from two sides: 1) defining the “what is OpenStack” brand for Vendors and 2) driving interoperability between OpenStack installations.  From that perspective, it is not about releases, but about testable stable capabilities.  Over time, these changes should be incremental and, most importantly, trail behind new features that are added.

For those reasons, it was becoming confusing for DefCore to focus on an “Icehouse” definition when most of the capabilities listed are “Havana” ones.  We also created significant time pressure to get the “Kilo DefCore” out quickly after the release even though there were no “Kilo” specific additions covered.

In the face-to-face, we settled on a more incremental approach.  DefCore would regularly post a set of guidelines for approval by the Board.  These Guidelines would include the required, deprecated (leaving) and advisory (coming) capabilities required for Vendors to use the mark (see footnote*).  As part of defining capabilities, we would update which capabilities were included in each component and  which components were required for the OpenStack Platform.  They would also include the relevant designated sections.  These Guidelines would use the open draft and discussion process that we are in the process of outlining for approval in Vancouver.

Since DefCore Guidelines are simple time-based lists of capabilities, the vendors and community can simply reference an approved Guideline using the date of approval (for example DefCore 2015.03) and know exactly what was included.  While each Guideline stands alone, it is easy to compare them for incremental changes.

We’ve been getting positive feedback about this change; however, we are still discussing it and appreciate your input and questions.  It is very important for us to make DefCore simple and easy.  For that, your confused looks and WTF? comments are very helpful.

* footnote: the Foundation manages the OpenStack brand and the process includes multiple facets.  The DefCore Guidelines are just one part of the brand process.


by Rob H at February 26, 2015 08:18 PM

Kyle Mestery

Scaling OpenStack Neutron Development

We’re nearing the end of the Kilo development cycle. This is typically where the rubber meets the road, as we’re trying our hardest to merge a lot of code near the end. It’s a fairly busy part of the cycle. I wanted to take a moment to write about two efforts which will help to scale Neutron development in Kilo and beyond.

The Kilo cycle has involved a couple of efforts which are meant to expand and scale the Neutron development community. These efforts are Plugin Decomposition and the Advanced Services Split. Both of these were designed to enable a more scalable development environment which will allow for fast code iteration in all the areas affected. How have we done with these? Let’s take a look at each individually.

Plugin Decomposition

As of the Kilo-2 milestone, Neutron has a total of 48 plugins and drivers. These span everything from basic L2 plugins, L2 ML2 MechanismDrivers and L3 service plugins to advanced service plugins for FWaaS, LBaaS and VPNaaS. Not only is that a lot of plugins, it’s also by far the largest number of plugins and drivers for any OpenStack project. Given that Neutron has only 14 core reviewers, the amount of code to review across those drivers and plugins was growing by leaps and bounds. For example, we had around 10 additional plugins/drivers proposed for Kilo. Core reviewers were left reviewing backend logic they had no knowledge of, and often could not test much of it due to a lack of hardware or software from the various vendors. The model we had when we started with a small number of plugins and drivers was no longer scaling. Something had to change.

Enter “Plugin Decomposition.” The idea here is that all upstream plugins and drivers will leave a small shim in-tree in the Neutron git repository, and move all their backend logic out. Most chose to move it to StackForge, with some moving straight to GitHub. The benefit is clear to all parties involved: Neutron core reviewers can now focus on reviewing core Neutron code, and plugin maintainers can now iterate at their own pace in their plugins and drivers, releasing when they want. Everyone gets faster merges, greater velocity, and ultimately faster innovation speed.

Advanced Services Split

All of the advanced services in Neutron (FWaaS, LBaaS, and VPNaaS) used to have their code in the Neutron git repository. The teams working on these projects were suffering from a similar fate as the plugins: Lack of reviews by Neutron core reviewers. They also wanted to iterate faster and drive new features at a fast pace. Thus, the idea of spinning these into their own git repositories was proposed and ultimately implemented. These git repositories still reside as a part of Neutron, but they have separate (and sometimes overlapping) core reviewer teams.

Similar to decomposition, this has been a success. The LBaaS team in particular has moved fast and created a new LBaaS API, the LBaaS V2 API. The VPNaaS team has moved to solidify their testing, including new Tempest scenario tests. And the FWaaS team is fixing a long-standing issue around deploying multiple FWs per tenant. The early results are very positive here as well.

Innovation in Neutron Moving Forward

Neutron is well positioned now moving forward to enable innovation at all the different layers. Plugin and driver maintainers can now iterate quickly. Advanced services can move at their own pace within the existing release window. And core Neutron can now continue evolving into a solid API endpoint and DB layer. All of these changes are wins for the end users of Neutron, and OpenStack in general.

February 26, 2015 08:01 PM

Adam Young

Three Types of Tokens

One of the most annoying administrative issues in Keystone is the MySQL backend to the token database filling up. While we have a flush script, it needs to be scheduled via cron. Here is a short overview of the types of tokens, why the backend is necessary, and what is being done to mitigate the problem.

DRAMATIS PERSONAE:

Amanda: The company’s OpenStack system admin

Manny: The IT manager.

ACT 1, SCENE 1: Small conference room. Manny has called a meeting with Amanda.

Manny: Hey Amanda, What are these keystone tokens and why are they causing so many problems?

Amanda: Keystone tokens are opaque blobs used to allow caching of an authentication event and some subset of the authorization data associated with the user.

Manny: OK…back up. What does that mean?

Amanda: Authentication means that you prove that you are who you claim to be. For most of OpenStack’s history, this has meant handing over a symmetric secret.

Manny: And a symmetric secret is …?

Amanda: A password.

Manny: OK, got it. I hand in my password to prove that I am me. What is the authorization data?

Amanda: In OpenStack, it is the username and the user’s roles.

Manny: All their roles?

Amanda: No, only for the scope of the token. A token can be scoped to a project. Also to a domain, but in our setup, only I ever need a domain-scoped token.

Manny: The domain is how I select between the customer list and our employees out of our LDAP server, right?

Amanda: Yep. There is another domain just for admin tasks, too. It has the service users for Nova and so on.

Manny: OK, so I get a token, and I can see all this stuff?

Amanda: Sort of. For most of the operation we do, you use the “openstack” command. That is the common command line, and it hides the fact that it is getting a token for most operations. But you can actually use a web tool called curl to go direct to the keystone server and request a token. I do that for debugging sometimes. If you do that, you see the body of the token data in the response. But that is different from being able to read the token itself. The token is actually only 32 characters long. It is what is known as a UUID.
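(Aside: the same request can be made from Python with the requests library; this is a sketch only, with a hypothetical endpoint and credentials, against the v2.0 API of the era.)

import requests

body = {"auth": {"passwordCredentials": {"username": "amanda",
                                         "password": "secret"},
                 "tenantName": "admin"}}
response = requests.post("http://keystone.example.com:5000/v2.0/tokens",
                         json=body)
# The 32-character token ID plus the expanded authorization data
token_id = response.json()["access"]["token"]["id"]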

Manny (slowly): UUID? Universally Unique Identifier. Right?

Amanda: Right. It’s based on a long random number generated by the operating system. UUIDs are how most of OpenStack generates remote identifiers for VMs, images, volumes and so on.

Manny: Then the token doesn’t really hold all that data?

Amanda: It doesn’t. The token is just a…well, a token.

Manny: Like we used to have for the toll machines on Route 93. Till we all got E-ZPass!

Amanda: Yeah. Those tokens showed that you had paid for the trip. For OpenStack, a token is a remote reference to a subset of your user data. If you pass a token to Nova, it still has to go back to Keystone to validate the token. When it validates the token, it gets the data. However, our OpenStack deployment is so small, Nova and Keystone are on the same machine. Going back to Keystone does not require a “real” network round trip.

Manny: So now that we are planning on going to the multi-host setup, validating a token will require a network round trip?

Amanda: Actually, when we move to the multi-site, we are going to switch over to a different form of token that does not require a network round trip. And that is where the pain starts.

Manny: These are the PKI tokens you were talking about in the meeting?

Amanda: Yeah.

Manny: OK, I remember the term PKI was Public Key…something.

Amanda: The I is for infrastructure, but you remembered the important part.

Manny: Two keys, Public versus private: you encode with one and decode with the other.

Amanda: Yes. In this case, it is the token data that is encoded with the private key and decoded with the public key.

Manny: I thought that made it huge. Do you really encode all the data?

Amanda: No, just a signature of the data. A Hash. This is called message signing, and it is used in a lot of places, basically to validate that the message is both unchanged and that it comes from the person you think it comes from.

Manny: OK, so…what is the pain?

Amanda: Two things. One, the tokens are bigger, much bigger, than a UUID. They have all of the validation data in them, including the service catalog. And our service catalog is growing on the multi-site deployment, so we’ve been warned that the tokens might get so big that they cause problems.

Manny: Let’s come back to that. What is the other problem?

Amanda: OK…since a token is remotely validated, there is the possibility that something has changed on Keystone, and the token is no longer valid. With our current system, Keystone knows this immediately, and just dumps the token. So when Nova comes to validate it, it’s no longer valid and the user has to get another token. With remote validation, Nova has to periodically request a list of revoked tokens.

Manny: So either way Keystone needs to store data. What is the problem?

Amanda: Well, today we store our tokens in Memcached. It’s a simple key-value store, it’s local to the Keystone instance, and it just dumps old data that hasn’t been used in a while. With revocations, if you dump old data, you might lose the fact that a token was revoked.

Manny: Effectively un-revoking that token?

Amanda: Yep.

Manny: OK…so how do we deal with this?

Amanda: We have to move from storing tokens in Memcached to MySQL. According to the docs and upstream discussions, this can work, but you have to be careful to schedule a job to clean up the old tokens, or you can fill up the token database. Some of the larger sites have to run this job very frequently.

Manny: It’s a major source of pain?

Amanda: It can be. We don’t think we’ll be at that scale at the multisite launch, but it might happen as we grow.

Manny: OK, back to the token size thing, then. How do we deal with that?

Amanda: OK, when we go multi-site, we are going to have one of everything at each site: Keystone, Nova, Neutron, Glance. We have some jobs to synchronize the most essential things like the Glance images and the customer database, but the rest is going to be kept fairly separate. Each will be tagged as a region.

Manny: So the service catalog is going to be galactic, but will be sharded out by Keystone server?

Amanda: Sort of. We are going to actually make it possible to have the complete service catalog in each keystone server, but there is an option in Keystone to specify a subset of the catalog for a given project. So when you get a token, the service catalog will be scoped down to the project in question. We’ve done some estimates of size and we’ll be able to squeak by.

Manny: So, what about the multi-site contracts? Where a company can send their VMs to either a local or remote Nova?

Amanda: For now they will be separate projects. But for the future plans, where we are going to need to be able to put them in the same project, we are stuck.

Manny: Ugh. We can’t be the only people with this problem.

Amanda: Some people are moving back to UUID tokens, but there are issues both with replication of the token database and also with cross site network traffic. But there is some upstream work that sounds promising to mitigate that.

Manny: The lightweight thing?

Amanda: Yeah, lightweight tokens. It’s backing off the remotely validated aspect of Keystone tokens, but doesn’t need to store the tokens themselves. They use a scheme called authenticated encryption which puts a minimal amount of info into the token, enough to recreate the whole authorization data. But only the Keystone server can expand that data. Then, all that needs to be persisted is the revocations.

Manny: Still?

Amanda: Yeah, and there are all the same issues with flushing of data, but the scale of the data is much smaller. Password changes and removing roles from users are the ones we expect to see the most. We still need a cron job to flush those.

Manny: No silver bullet, eh? Still how will that work for multisite?

Amanda: Since the token is validated by cryptography, the different sites will need to synchronize the keys. There was a project called Kite that was part of Keystone, and then it wasn’t, and then it was again, but it is actually designed to solve this problem. So all of the Keystone servers will share their keys to validate tokens locally.

Manny: We’ll still need to synchronize the revocation data?

Amanda: No silver bullet.

Manny: Do we really need the revocation data? What if we just…didn’t revoke, and made the tokens short-lived?

Amanda: It’s been proposed. The problem is that a lot of the workflows were built around the idea of long-lived tokens. Tokens went from being valid for 24 hours to one hour by default, and that broke some things. Some people have had to crank the time back up again. We think we might be able to get away with shorter tokens, but we need to test and see what it breaks.

Manny: Yeah, I could see HA having a problem with that…wait, 24 hours…how does Heat do what it needs to? It can restart a machine a month afterwards. Do we just hand over the passwords to Heat?

Amanda: Heh, it used to. But Heat uses a delegation mechanism called trusts. A user creates a trust, and that effectively says that Heat can do something on the user’s behalf, but Heat has to get its own token first. It first proves that it is Heat, and then it uses the trust to get a token on the user’s behalf.

Manny: So…trusts should be used everywhere?

Amanda: Something like trusts, but more lightweight. Trusts are deliberate delegation mechanisms, and are set up on a per-user basis. To really scale, it would have to be something where the admin sets up the delegation agreement as a template. If that were the case, then these long-lived workflows would not need to use the same token.

Manny: And we could get rid of the revocation events. OK, that is time, and I have a customer meeting. Thanks.

Amanda: No problem.

EXIT

by Adam Young at February 26, 2015 05:24 PM

Florian Haas

Spreading knowledge: OpenStack and Ceph in New Zealand and Germany

Lately, we've had the pleasure of working with two very interesting and capable customers, both of which were just on the verge of a product launch: Catalyst IT in New Zealand and SUSE in Germany.

In January, just after linux.conf.au 2015, we partnered with Catalyst IT in New Zealand to deliver back-to-back OpenStack training in the beautiful cities of Auckland and Wellington. A mixed crowd of Catalyst customers and employees went through our Cloud Fundamentals, Networking, Ceph and Security courses on the OpenStack platform. For us, it was an excellent experience to work with an extremely sharp Kiwi crowd that set aside some time in the New Zealand summer to learn about the world's fastest-innovating cloud platform.

We taught our two-day Cloud Fundamentals course both weeks, adding two extra days with alternate modules each week. In week 1, attendees went straight into our OpenStack Neutron deep-dive, Networking for OpenStack (HX102). HX102 is a dense, fast-paced, highly technical 2-day deep dive into software-defined networking, OpenStack Neutron, OpenFlow and Open vSwitch, and also includes advanced Neutron topics like LBaaS, FWaaS and VPNaaS. The class was so capable and eager that we even had some extra time to cover some OpenStack Heat content from our Orchestration and Scaling (HX107) class.

In week 2, the add-on modules were our courses about Ceph Distributed Storage for OpenStack (HX104) and Advanced Security for OpenStack (HX108). This is where we provide a deep introduction to Ceph integration with OpenStack, and cover a bunch of critical OpenStack security topics.

A couple of weeks later, Catalyst IT officially launched the Catalyst Cloud, New Zealand's first public cloud service, based on OpenStack. Congrats, Catalyst! We're happy to have made our small contribution.

At the same time that Catalyst Cloud launched, we were already on-site with our next major customer: at SUSE in Nuremberg, we helped a team of support engineers and consultants hone their expertise on Ceph. Again, we were working with a sharp and demanding crowd, multinational this time: we had folks from Germany, France, Senegal, Spain, the U.S., the U.K., and the Czech Republic in attendance.

We ran an extended version of our Ceph Distributed Storage Fundamentals (HX112) class — normally a 2-day course, customized and extended to 3 days. (I should perhaps mention, by the way, that we teach this course on Ubuntu 14.04 LTS, SLES 12 and RHEL/CentOS 7.) Attendees got a deep technical introduction into the fundamentals of distributed storage, into RADOS, CRUSH, RBD, and RadosGW. We also covered Ceph integration in OpenStack (from our HX104 course) and even had time for a hands-on CephFS preview. Feedback from attendees was extremely positive with several reporting it was the best professional training class they had ever taken.

Shortly after, SUSE released SUSE Enterprise Storage, and again, we're thrilled about having made our own small contribution toward their product release. 

Congratulations again to both of these wonderful hastexo customers, and all the best with the new products — we'll see you again soon!

And just in case your company needs top-notch OpenStack or Ceph training: we're always happy to help. Just get in touch!


by florian at February 26, 2015 10:46 AM

February 25, 2015

Rob Hirschfeld

Art Fewell and I discuss DevOps, SDN, Containers & OpenStack [video + transcript]

A little while back, Art Fewell and I had two excellent discussions about general trends and challenges in the cloud and scale data center space.  Due to technical difficulties, the first (funnier one) was lost forever to NSA archives, but the second survived!

The video and transcript were just posted to Network World as part of Art’s ongoing interview series. It was an action-packed hour, so I don’t want to re-post the transcript here. I thought selected quotes (under the video) were worth calling out to whet your appetite for the whole tamale.

[Video: embedded YouTube interview]

My highlights:

  1. .. partnering with a start-up was really hard, but partnering with an open source project actually gave us a lot more influence and control.
  2. Then we got into OpenStack, … we [Dell] wanted to invest our time and that we could be part of and would be sustained and transparent to the community.
  3. Incumbents are starting to be threatened by these new opened technologies … that I think levels of playing field is having an open platform.
  4. …I was pointing at you and laughing… [you’ll have to see the video]
  5. docker and containerization … potentially is disruptive to OpenStack and how OpenStack is operating
  6. You have to turn the crank faster and faster and faster to keep up.
  7. Small things I love about OpenStack … vendors are learning how to work in these open communities. When they don’t do it right they’re told very strongly that they don’t.
  8. It was literally a Power Point of everything that was wrong … [I said,] “Yes, that’s true. You want to help?”
  9. …people aiming missiles at your house right now…
  10. With containers you can sell that same piece of hardware 10 times or more and really pack in the workloads and so you get better performance and over subscription and so the utilization of the infrastructure goes way up.
  11. I’m not as much of a believer in that OpenStack eats the data center phenomena.
  12. First thing is automate. I’ve talked to people a lot about getting ready for OpenStack and what they should do. The bottom line is before you even invest in these technologies, automating your workloads and deployments is a huge component for being successful with that.
  13. Now, all of sudden the SDN layer is connecting these network function virtualization ..  It’s a big mess. It’s really hard, it’s really complex.
  14. The thing that I’m really excited about is the service architecture. We’re in the middle of doing on the RackN and Crowbar side, we’re in the middle of doing an architecture that’s basically turning data center operations into services.
  15. What platform as a service really is about, it’s about how you store the information. What services do you offer around the elastic part? Elastic is time based, it’s where you’re manipulating in the data.
  16. RE RackN: You can’t manufacture infrastructure but you can use it in a much “cloudier way”. It really redefines what you can do in a datacenter.
  17. That abstraction layer means that people can work together and actually share scripts
  18. I definitely think that OpenStack’s legacy will more likely be the community and the governance and what we’ve learned from that than probably the code.

by Rob H at February 25, 2015 09:09 PM

OpenStack Superuser

Welcome to the wild west of networking and cloud infrastructure

Panel, left to right: Chris Price, Chris Wright, Nolan Leake, Stefano Maffulli and Neela Jacques. Photo: Linux Foundation.

It may look like the wild west now, but the future of open networking and cloud infrastructure is bright.

That’s the main takeaway from a panel featuring speakers from Red Hat, Ericsson, Cumulus Networks and the OpenStack Foundation moderated by Neela Jacques at the Linux Collaboration Summit.

Networking is an industry of hardware and proprietary software implementing widely adopted, slowly evolving standards managed by international standards bodies. Fast forward to 2015: the industry must transform, adopting open source software on commodity hardware and collaborating with competitors to respond to market pressure in a timely fashion.

“It’s the wild, wild west, and we’re at the very beginning,” said Chris Wright, technical director for software-defined networking at Red Hat. There are “cool components all over the place,” he added, but there are more than a few “pain points.” “We’re running as fast as we can to fix those. Everything is changing. There are blurred lines between what we have now what will fuel the next generation.”

Christopher Price of Ericsson who says the innovation is so breakneck he “wakes up dizzy every day,” would like to see some order in these boom towns.

“We need to keep rolling, we need to keep innovating and we shouldn’t stop someone doing something.” The community should be accepting of new ideas and new concepts, and forge foundational pieces where it makes sense, said Price, who sported an OPNFV hoodie onstage. “The townships out on the edges of that wild west may survive or not, but we need to create room for the next level of growth,” he added.

The OpenStack community sits in the middle of this new territory, reflecting at least three different use cases: the public cloud, the private cloud (enterprise) and the telecom carriers with the new needs dictated by Network Function Virtualization. These groups may seem to have different priorities, and speak a different language, but they’re slowly coming together.

“Open source may have lost the desktop war, but we’re winning the data center and cloud wars,” said Stefano Maffulli, developer advocate at OpenStack. “The collaboration around it is brilliant, it’s what we hoped for when the open source movement started.”

When Jacques asked why the networking industry needs open source, Nolan Leake, CTO of Cumulus Networks, replied that without openness at the lowest levels, it’s difficult to innovate at the levels above. “It's very difficult to build a platform on top of a closed base.”

“Where the NFV is going, it will drag the data center with it,” Leake added. The use cases for NFV are not that different from enterprise data centers. “I’m pretty optimistic about that future.”

Cover Photo by Oilibac/ // CC BY NC; Event photo: Nicole Martinelli.

by Nicole Martinelli at February 25, 2015 07:30 PM

IBM OpenTech Team

Hybrid OpenStack Borderless Cloud

Yesterday, in front of more than 15,000 attendees at IBM’s Interconnect conference in Las Vegas, IBM announced the OpenStack borderless cloud beta surfaced through IBM Bluemix. With the new beta service, developers can deploy applications to OpenStack cloud infrastructure from within Bluemix. This borderless OpenStack cloud can provide compute, storage, and network resources through public, dedicated and local cloud infrastructure, all available to developers directly within Bluemix.
Developers want to focus on what they do best: creating and coding new applications. That shouldn’t have to change just because they need to deploy an application to a different cloud environment. With the new OpenStack borderless cloud beta in Bluemix, developers can easily deploy applications in a consistent manner across public, dedicated and local OpenStack-based clouds, realizing the promise of hybrid cloud. With Bluemix providing the ability to deploy applications across cloud deployment models, developers can create their infrastructure configuration once and deploy it consistently regardless of the stage of the development life cycle.
A key component for delivering reliable infrastructure for application development and delivery is trust, earned by providing application availability, secure access and safeguarding application data. The OpenStack public and dedicated cloud services available in Bluemix are hosted on SoftLayer infrastructure, which provides reliable, secure, and high-performance infrastructure for OpenStack trusted clouds. Building on that trust, SoftLayer plans to deploy Trusted Platform Modules with Intel’s Trusted eXecution Technology, and when combined with IBM’s Cloud Security Services, you can be assured of software integrity. Additionally, in the future, developers will be able to ensure geographic boundaries for the deployment of their applications via geographic tagging of systems with TPM.
The Bluemix cloud services beta is open now for you to test out deploying and running applications on IBM’s OpenStack cloud. Check it out and sign up for a Bluemix 30-day free trial.

The post Hybrid OpenStack Borderless Cloud appeared first on IBM OpenTech.

by Garth Tschetter at February 25, 2015 12:35 AM

February 24, 2015

IBM OpenTech Team

Building a Cloud Storage Application

Among the many cloud services, cloud object storage has perhaps experienced
the most growth. It is commonly used for storing photographs, images,
documents, and audio and video files. Every day, people find new uses for
their favorite object store. Regardless of the purpose of the application
you are developing, understanding cloud object storage services is an
essential part of building a real-world, scalable solution.

In this tutorial we show you how to develop a simple application using the
Object Storage service for Bluemix, an OpenStack Swift-based object store
service, and explain how the restful storage APIs can be used. The
application you develop lets users upload, download, and manage their
pictures and documents leveraging the Object Storage service for Bluemix.
You can also download the sample application and view the code. Armed with
the knowledge learned here, you can quickly develop great applications
using the Object Storage service.

What you need to build the sample application in this tutorial

 
  1. A Bluemix account
  2. Basic understanding of the Cloud Foundry
    command line interface
  3. Basic understanding of Node.js
  4. Basic understanding of Object Storage service application terminology
    (optional)
  5. Basic understanding of the OpenStack Swift API (optional)

Step 1. Create a Node.js application

 
  1. Login to your Bluemix account.
  2. Click CREATE AN APP.
  3. Choose WEB application type.
  4. Select SDK for Node.js and click
    CONTINUE.
  5. Enter an application name that is unique within the Bluemix domain you
    have selected. In this example, we use awesome-store as the
    name of our application.

Step 2. Add a service instance of Object Storage for Bluemix

 
  1. Once the application is created, click the ADD A
    SERVICE
    button.
  2. Scroll down to the Data Management section and click the
    Object Storage icon.
  3. The Object Storage service comes with a fairly detailed Getting
    started with Object Storage
    document, which you can review by
    clicking the VIEW DOCS button. You can also click the
    TERMS button to review the service license
    agreement as well. As you probably noticed, this beta service is
    provided free.
  4. Click the CREATE button and RESTAGE
    your application. You have now created an application by using the
    Bluemix-provided starter Node.js template project, created an Object
    Storage service instance, and bound the instance to the newly-created
    application.
  5. Your application name, awesome-store, is displayed at the top
    of the browser and below the name is a link. To see what the
    application does, click on that link. A window similar to the figure
    below opens. Notice that your browser address area shows
    awesome-store.mybluemix.net. This URL is a combination of
    the application name and the domain that you selected. You can access
    your application from any browser using the URL.
  6. [image001]

  7. Click the Environment Variables link to examine
    VCAP_SERVICES. This is an important piece of information
    that your application needs to make calls to the Object Storage
    service, such as creating an account for the application user, and
    allowing users to upload and download documents. See the figure below
    for an example. The username and password
    were scrambled to increase security.
    [image002]

    Notice that in the credentials section you find
    auth_uri, username, and
    password. These three fields are used to authenticate
    your application with the service, and are referenced as
    auth_uri, username, and password from now
    on. It is very important to know that this username
    and password pair is the credential for your app to
    authenticate with the service. They are not the same as your
    application’s username and password if
    your application supports multiple users.

Step 3. Get the startup code from Bluemix

 
  1. While viewing your awesome-store application on the Bluemix
    screen, click the Start Coding link on the left
    sidebar.
  2. [image003]

  3. A few steps regarding the Bluemix application development process are
    on the right. If you have not yet set up the Cloud Foundry command
    line interface in your favorite environment, do it now by following
    the Setup step.
  4. Click Download Starter Code. The downloaded code is
    in a zipped file. When you unzip the file, you see the following
    folders and files structure: [image004]

    Because the purpose of this tutorial is to demonstrate how
    to use the Object Storage service, we do not dive into the details
    of Node.js application programming. Instead we focus on three key
    points necessary for programming the Object Storage service.

    1. We examine how the Object Storage service leverages the
      OpenStack Swift object storage APIs for applications to
      manipulate various entities in its object store.
    2. We show that in order for an application to access Swift
      object storage, the application has to first get an access
      token.
    3. We show how applications then pass this access token in
      requests to manipulate folders and objects.
  5. Though there are many ways to accomplish the tasks set above, to make
    this simple, we developed a few URLs to complete the tasks in this
    Node.js application. These URLs are entered into your browser as GET
    requests, so you do not need any special tools to test the code. Here
    is the list of the URLs with a brief description of each. Replace the
    variables preceded by a colon with a string of your choice. For
    example, /gettoken/:userid becomes
    /gettoken/tongli.
    • /gettoken/:userid – Retrieve an access token for
      the userid.
    • /listfolders/:userid – List all the folders for
      the userid.
    • /createfolder/:userid/:foldername – Create a
      folder named foldername for the
      userid.
    • /delfolder/:userid/:foldername – Delete a folder
      for the userid and the folder. This one is not implemented in
      the application; you can add the implementation as an
      exercise. The implementation is very similar to the
      deldoc operation (a hedged sketch appears at the end of Step 5).
    • /listdocs/:userid/:foldername – List all the
      documents for the userid and the folder.
    • /createdoc/:userid/:foldername/:docname – Create
      a document named docname for the userid under the
      foldername folder. You may notice that the GET
      request does not really send any content for the document. In
      this application, you just send a hardcoded string.
    • /getdoc/:userid/:foldername/:docname – Retrieve
      the document named docname under the
      userid in the foldername
      folder.
    • /deldoc/:userid/:foldername/:docname – Delete a
      document.
  6. In your views folder, create a Jade template file named layout.jade,
    copy and paste the following code as its content, then save the file.

    		doctype html
    		html
    		  include head
    		  body
    		    table(style="width:100%")
    		      tr
    		        td(style= "width:307px" )
    		          img(src="/images/newapp-icon.png")
    		        td
    		          block content
  7. In your views folder, create another Jade template file named
    results.jade, copy and paste the following code as its content, then
    save the file.

    		extends layout
    		block content
    		  div(id="results")
    		    h2
    		      pre
    		        !=JSON.stringify(body, null, 2)
  8. Open the app.js file and add the following lines after the
    var services line:

    		The existing code:
    		
    		var services = JSON.parse(process.env.VCAP_SERVICES || "{}");
    		
    		The new code to be added after the above line:
    		
    		app.get('/gettoken/:userid', function(req, res){
    			res.render('results', {body: {}});
    		});

    The
    new code uses results.jade as a template, passes a
    variable named body to the template, then renders the
    HTML.

  9. These new files and the changes in app.js have prepared
    us to do more. Even though we have not accomplished a whole lot to
    this point, you still can deploy this application and see your new
    code working. Use the cf push command to deploy the
    application onto Bluemix, then point your browser to http://awesome-store.mybluemix.net/gettoken/tong to see
    results similar to this:
    [image005]

    If you see errors, double-check the two files and the
    changes made in the app.js file. Normally the browser
    will display helpful debugging information.

Step 4. Obtain an access token for a user of your
application

 
  1. The Object Storage service supports multiple user accounts per service
    instance. It allows you to easily develop an application that supports
    multiple users because it does the heavy lifting for you. To get an
    access token for a user of your application, send a request by using
    basic authentication to this endpoint:
    auth_uri/<app_userid>. In this example, with the userid
    tong, the URL is auth_uri/tong.

    Notice
    that the URL is not exactly the auth_uri you get from
    the VCAP_SERVICES variable. Rather, it is the
    concatenation of the auth_uri and a userid chosen by
    your application. Your application can support multiple users, and
    you can choose anything as a userid as long as each userid uniquely
    identifies a user in your application. The basic authentication
    header should follow the standard basic authentication
    protocol.

  2. Since you are going to send HTTP requests in the application, you need
    to include a Node.js HTTP request library. Add the following line in
    the app.js file.

    		The existing code:
    		var express = require('express');
    		
    		The new code to be added after the above line:
    		var request = require('request');
  3. Add the following code in the app.js file to replace the lines added
    earlier in Step 3.7.

    		var cache = {};
    		var auth = null;
    		
    		var set_app_vars = function() {
    			var credentials = services['objectstorage'][0]['credentials'];
    			auth = {"auth_uri": credentials['auth_uri'],
    				 "userid" : credentials['username'],
    				 "password" : credentials['password'],
    			};
    			auth["secret"] = "Basic " +
    				Buffer(auth.userid + ":" + auth.password).toString("base64");
    		};
    		
    		app.get('/gettoken/:userid', function(req, res){
    			if (!auth) { set_app_vars(); }
    			var res_handler = function(error, response, res_body) {
    				var body = {};
    				if (!error && response.statusCode == 200) {
    					body = {"userid": req.params.userid,
    						 "token": response.headers['x-auth-token'],
    						 "url": response.headers['x-storage-url']};
    					cache[req.params.userid] = body;
    				}
    				else {
    					body = {"token": error, "url": ""};
    				};
    				res.render('results', {"body": body});
    			};
    			var req_options = {
    			    	url: auth.auth_uri + '/' + req.params.userid,
    				headers: {'accept': 'application/json',
    			   	          'Authorization': auth.secret},
    				timeout: 100000,
    				method: 'GET'
    			};
    			request(req_options, res_handler);
    		});

    In
    the above code, the function set_app_vars takes in
    the values from the variable VCAP_SERVICES and finds
    objectstorage settings for the service instance. It
    also gets the auth_uri, userid, and
    password, then creates a base64 string
    according to the basic authentication protocol and saves it for
    later use.

    The app.get('/gettoken/:userid')
    call sets up a request handler so that all GET requests targeting
    /gettoken/:userid are handled by this code block. In
    this code block, define a response handler and request option to
    get an access token for an endpoint provided by IBM Bluemix Object
    Storage. Once the request is sent, the response handler checks the
    response status code, creates a body variable, and
    passes it on to be rendered as an HTML page according to the
    definition found in the results.jade file.

    Also
    notice that the token received is temporarily cached by this
    application for later use. If you are developing a real-world
    application using this service, the token should be cached so you
    do not have to send a request to the service to get an access
    token every time.

  4. Redeploy the application using the cf push command.
  5. Access the same URL you used before. You should see something similar
    to the following. The token was again scrambled for security reasons.

    http://awesome-store.mybluemix.net/gettoken/tong

    [image006]

  6. The response displays a token and a URL. The token and the URL are the
    two pieces of information your application should keep for creating
    folders (containers) and documents (objects) in a cloud object storage
    service.
  7. When the GET request is sent the first time for a specific user, this
    request may return a 202 status code. This is because the first access
    to object storage will cause the object storage to provision a new
    account for that user, which may take five or ten minutes. If your
    request is not getting the token, you can wait a few minutes and try
    the same request again. Eventually you get a 200 status code for the
    request and the access token, and a URL in the response header. With
    these two pieces of information, you are ready to explore Object
    Storage using the OpenStack Swift API. If you are not familiar with
    the API, you can learn more at http://docs.openstack.org/api/openstack-object-storage/1.0/content/.
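
If you would rather have the application handle the 202 case itself instead of asking the user to retry, one option is a small polling helper. The following is a hedged sketch only: the helper name, attempt count, and delay are illustrative and are not part of the sample application.

		// Hypothetical helper: poll until the user's account is provisioned.
		// Assumes the request, auth, and cache variables defined earlier in app.js.
		var get_token_with_retry = function(userid, attempts, callback) {
			var req_options = {
				url: auth.auth_uri + '/' + userid,
				headers: {'accept': 'application/json',
				          'Authorization': auth.secret},
				timeout: 100000,
				method: 'GET'
			};
			request(req_options, function(error, response) {
				if (!error && response.statusCode == 200) {
					var body = {"userid": userid,
						"token": response.headers['x-auth-token'],
						"url": response.headers['x-storage-url']};
					cache[userid] = body;
					callback(null, body);
				} else if (!error && response.statusCode == 202 && attempts > 1) {
					// Account still being provisioned; wait a minute, then retry.
					setTimeout(function() {
						get_token_with_retry(userid, attempts - 1, callback);
					}, 60000);
				} else {
					callback(error || new Error('unexpected status'));
				}
			});
		};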

Step 5. Create a folder for a user with the OpenStack Swift
API

 
  1. In the previous steps, you successfully retrieved an access token for
    the user tong. Now it is time to create a folder for
    tong. The API defined by OpenStack Swift to create a
    folder is a PUT request with the access token in the request header.
    The endpoint is simply the storage URL, which you received when the
    access token was obtained. So to create a folder simply use the access
    token and the storage URL received above to send a PUT request.

    	app.get('/createfolder/:userid/:foldername', function(req, res){
    		var user_info = cache[req.params.userid];
    		var res_handler = function(error, response, body) {
    			if (!error && (response.statusCode == 201 ||
    					 response.statusCode == 204)) {
    				res.render('results', {'body': {result: 'Succeeded!'}});
    			}
    			else {
    				res.render('results', {'body': {result: 'Failed!'}});
    			}
    		};
    		var req_options = {
    			url: user_info['url'] + "/" + req.params.foldername,
    			headers: {'accept': 'application/json',
    			  	   'X-Auth-Token': user_info['token']},
    			timeout: 100000,
    			method: 'PUT'
    		};
    		request(req_options, res_handler);
    	});
  2. The above code defines a handler for requests coming against the
    endpoint /createfolder/:userid/:foldername. In your own
    application, you can define the URL any way you prefer, but you should
    have a way to get the folder name and the userid. For example, you can
    use session tracking for the userid. Or you can use a form post to get
    the folder name. There are many options available. In this
    application, simply use a parameter in the request path. Once a
    request is received it finds the user information in the cache,
    defines the response handler and request options, then sends the
    request.
  3. Add the above code to the app.js file and redeploy the application.
    When you redeploy the application, the cached user token is lost, so
    you need to get the token again with the
    /gettoken/:userid request. Here are the two URLs to try
    things out (the folder name newfolder matches the one used in the
    next step):
    http://awesome-store.mybluemix.net/gettoken/tong
    http://awesome-store.mybluemix.net/createfolder/tong/newfolder
  4. If everything goes well, you see something like this: [image007]
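
As promised in Step 3, here is a hedged sketch of the /delfolder handler that was left as an exercise (the actual solution may differ in detail). It mirrors the createfolder handler above but sends a DELETE request; note that Swift only deletes empty containers, so a folder that still holds documents fails with a 409 Conflict.

		app.get('/delfolder/:userid/:foldername', function(req, res){
			var user_info = cache[req.params.userid];
			var res_handler = function(error, response, body) {
				// Swift answers 204 No Content when the container is deleted.
				if (!error && response.statusCode == 204) {
					res.render('results', {'body': {result: 'Succeeded!'}});
				}
				else {
					res.render('results', {'body': {result: 'Failed!'}});
				}
			};
			var req_options = {
				url: user_info['url'] + "/" + req.params.foldername,
				headers: {'accept': 'application/json',
				          'X-Auth-Token': user_info['token']},
				timeout: 100000,
				method: 'DELETE'
			};
			request(req_options, res_handler);
		});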

Step 6. Upload a document for a user with the OpenStack Swift
API

 

In the previous steps, you successfully added functions to get an access
token and create a folder. In this section, add a few more lines of code
to upload a document to the already created folder named newfolder. The
OpenStack Swift upload object API is also a PUT request at the endpoint,
which is made up of the storage URL, the folder name, and the document
name. The access token must be present in the request header.

Here is the code to upload a document:

		app.get('/createdoc/:userid/:foldername/:docname', function(req, res){
		    var user_info = cache[req.params.userid];
		    var res_handler = function(error, response, body) {
			if (!error && response.statusCode == 201) {
			    res.render('results', {'body': {result: 'Succeeded!'}});
			}
			else {
			    res.render('results', {'body': {result: 'Failed!'}});
			}
		    };
		    var req_options = {
			url: user_info['url'] + "/" + req.params.foldername + "/" +
				req.params.docname,
			headers: {'accept': 'application/json',
				   'X-Auth-Token': user_info['token']},
			timeout: 100000,
			body: "Some random data",
			method: 'PUT'
		    };
		    request(req_options, res_handler);
		});

The above code follows the same pattern as previous code blocks. It first
gets the cached access token for the userid in the request URL, then
defines a response handler and request options, then sends the request.
The only difference is that we added a string
"Some random data" as the content of the object that is
uploaded.

In your application you can provide a variety of ways for users to upload
documents. Since that is not what we are discussing in this tutorial, we
coded sample data for the document content. This makes the code very
simple and direct for demonstration purposes, but of course if the app is
used as is, every document would have the same content.
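
The sample application also implements the list operations from Step 3. As a hedged sketch (the downloadable sample may differ in detail), listing a user's folders is simply a GET against the bare storage URL with the token in the header; because we ask for application/json, Swift returns a JSON array describing each folder:

		app.get('/listfolders/:userid', function(req, res){
			var user_info = cache[req.params.userid];
			var res_handler = function(error, response, res_body) {
				var body = {};
				if (!error && response.statusCode == 200) {
					// Each entry carries the name, object count, and size of a folder.
					body = JSON.parse(res_body);
				}
				res.render('results', {'body': body});
			};
			var req_options = {
				url: user_info['url'],
				headers: {'accept': 'application/json',
				          'X-Auth-Token': user_info['token']},
				timeout: 100000,
				method: 'GET'
			};
			request(req_options, res_handler);
		});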

Conclusion

 

In this tutorial we demonstrated how to create a simple Node.js application
to use the Object Storage service for Bluemix. The application shows how
to obtain access tokens for a user, create a folder, and upload a
document. In the application code, you can also find code blocks to handle
requests such as list folders of a user, list documents in a folder,
delete a document, and other requests. Using what you have learned in this
tutorial, you can quickly and easily develop great applications using
Object Storage.


BLUEMIX SERVICE USED IN THIS TUTORIAL: Object Storage service has built-in support for provisioning independent
object stores and it creates an individual subaccount per object
store.

RELATED TOPIC: Node.js


The post Building a Cloud Storage Application appeared first on IBM OpenTech.

by Brad Topol at February 24, 2015 10:19 PM

OpenStack Superuser

Romancing the devs: another reason Walmart Labs went open source

In a job market where developers are the equivalent of the hot young thing at a crowded bar, using open source can make a big difference in the talent your company attracts.

So says Dion Almaer of Walmart Labs in a Q&A with the todogroup. Almaer says the team stands on the shoulders of open source giants, thanks to the great people who worked on Linux and Apache and Node and JavaScript and PostgreSQL, making him “bullish” on open sourcing whatever he can.

One of those giants is OpenStack. Walmart Labs currently runs over 100,000 cores of OpenStack on its compute layer, leading one journalist to call it a “full employment act” for Walmart Labs engineers.


“One quick anecdote: a great engineer that I know well was recruited to work for a top-class company. They basically lost him when he was told that he would be working with an old Java stack and his workflow would not be git based. The tools matter,” he said.

Romancing the devs can be just as tricky as romancing a mate, but open source can make finding common ground easier.

“An interview process is dating. It is hard to know if you want to marry after a date or two,” Almaer added. “When you interview with a team that has open source at its core, you can hack together on issues in the queue and really get a feel for things. It is a fantastic advantage.”

Check out the full interview on the todogroup blog.

Cover Photo by El Neato; Flyer photo Jason Tester // CC BY NC

by Nicole Martinelli at February 24, 2015 08:11 PM

IBM OpenTech Team

Open technologies are driven by open user communities

This week I am out among developers, IT and business leaders in Las Vegas for InterConnect 2015, IBM’s premier cloud and mobile conference, and a common theme I have heard in many of the presentations is the importance meetups play in the open source community. After all, open source is all about people coming together from around the world to collaborate on a common goal and solve shared problems.

Instant messaging, social media, and GitHub rightly get credit as collaborative forces for shared problem solving, but the simple act of getting together in person to learn, share, and collaborate has driven open technology forward all along, and it remains one of the most powerful ways to participate in the open source community.


Flash back to the first user groups, such as SHARE, which was organized by IBM and is the oldest computer user group still in existence. Later user groups such as the Homebrew Computer Club were considered the birthplace of new technologies. The meetings were where Steve Jobs and Steve Wozniak gained critical inspiration for the first Apple computer, and where the first code was shared on printed paper and typed in by hand (or over-enthusiastically copied, to the chagrin of Bill Gates). That spirit lives on in what are now more generally known as meetups (and most easily facilitated using – what else – a cloud SaaS: Meetup.com).

IBMers have long participated in user groups in their area, and IBM has hosted several groups, such as New York PHP, for over ten years (and still going strong; join us at our latest meetup tonight!). But over the last year in particular, the growth around Cloud Foundry, OpenStack, and Docker has been phenomenal. So first, a big shout out to all of you who volunteer your time to make these meetups happen in your own communities, and in particular to the IBMers who participate in a truly open and collaborative way in these events.

IBM Led/Co-led Meetups around the world (group: members)
Boston Area Cloud Foundry Meetup: 165
NYC Cloud Foundry Meetup: 435
Triangle Cloud Foundry Meetup: 101
Silicon Valley Cloud Foundry Meetup: 853
Beijing Cloud Foundry Meetup: 140
Shenzhen Cloud Foundry Meetup: 54
Beijing OpenStack Meetup: 310
Ningbo OpenStack Meetup: 10
Xi’an OpenStack Meetup: 148
Austin Cloud Foundry Meetup: 104
NYC OpenStack Meetup: 1,340
Connecticut OpenStack Meetup: 104
Seattle Cloud Foundry Meetup: 119
Bay Area PaaS & Cloud Foundry Meetup: 262
Shanghai Cloud Foundry Meetup: 85
Hang Zhou Cloud Foundry Meetup: 8
Shanghai OpenStack Meetup: 89
Shenzhen OpenStack Meetup: 4
Austin OpenStack Meetup: 1009
Fort Lauderdale Cloud App Meetup: 25

[photo: OpenStack meetup in Xi’an, China]

If you look closely at the table above, you will see that some of the biggest growth areas are in China. IBMers all over China are bringing the message of open source to their communities. The photo here is from a recent meetup in Xi’an, China, where a group of like-minded individuals met to discuss OpenStack and how the OpenStack community can bring value to their companies and businesses.

In US cities such as New York City, Silicon Valley, Austin, Seattle, and Boston, Cloud Foundry meetups run by IBMers continue to grow in size. Last year, IBMers also co-organized and presented at the OpenStack New York, Philadelphia, Austin, and Connecticut groups, and we hosted events with Docker groups as well. We are continuing that impact this year, with our latest event in Silicon Valley again getting a huge response from the community.

You can view the slides from that event at https://www.slideshare.net/slideshow/embed_code/45085921.

So join us at one of our upcoming IBM hosted or co-organized meetups, such as the Cloud Foundry event in Austin tonight.


As we like to say, the powerful thing about cloud computing is that resources are virtualized and distributed globally. But a great way to learn (or share your knowledge) about cloud computing itself is to attend a local meetup.

Who knows, we just might have cake for you!


 

The post Open technologies are driven by open user communities appeared first on IBM OpenTech.

by Todd Moore at February 24, 2015 06:30 PM

Red Hat Stack

A Closer Look at RHEL OpenStack Platform 6

Last week we announced the release of Red Hat Enterprise Linux OpenStack Platform 6, the latest version of our production-ready cloud solution. Built on Red Hat Enterprise Linux 7, this release provides a foundation for advanced users to build OpenStack-powered clouds. Let’s take a deeper dive into some of the new features on offer!

IPv6 Networking Support

IPv6 is a critical part of the promise of the cloud. If you want to connect everything to the network, you better plan for massive scale and have enough addresses to use. IPv6 is also increasingly important in the network functions virtualization (NFV) and telecommunication service provider space.

This release introduces support for IPv6 address assignment for tenant instances, including those connected to provider networks. While IPv4 is more straightforward when it comes to IP address assignment, IPv6 offers more flexibility and options to choose from: both stateful and stateless DHCPv6 are supported, as well as the ability to use Stateless Address Autoconfiguration (SLAAC), as sketched below.
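
As a hedged example (the network name, subnet name, and CIDR are illustrative, not from the release notes), creating a SLAAC-based tenant subnet with the Juno-era neutron client looks something like this:

$ neutron subnet-create --name ipv6-subnet \
    --ip-version 6 \
    --ipv6-ra-mode slaac \
    --ipv6-address-mode slaac \
    tenant-net 2001:db8:1234::/64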

Neutron Routers High Availability

The neutron-l3-agent is the Neutron component responsible for layer 3 (L3) forwarding and network address translation (NAT) for tenant networks. This is a key piece of the project that hosts the virtual routers created by tenants and allows instances to have connectivity to and from other networks, including networks that are placed outside of the OpenStack cloud, such as the Internet.

Historically the neutron-l3-agent has been placed on a dedicated node or nodes, usually bare-metal machines referred to as “Network Nodes”. Until now, you could utilize multiple Network Nodes to achieve load sharing by scheduling different virtual routers on different nodes, but not to get high availability or redundancy between the nodes. The challenge with this model was that all the routing for the OpenStack cloud happened at a centralized point. This introduced two main concerns:

  1. This makes each Network Node a single point of failure (SPOF)
  2. Whenever routing is needed, packets from the source instance have to go through a router in the Network Node and are then sent to the destination. This centralized routing creates a resource bottleneck and an unoptimized traffic flow

This release endeavours to address these issues by adding high availability to the virtual routers scheduled on the Network Nodes, so that when one router fails, another can take over automatically. This is implemented internally using the well-known VRRP protocol. Highly-available Network Nodes are able to handle routing and centralized source NAT (SNAT) to allow instances to have basic outgoing connectivity, as well as advanced services such as virtual private networks or firewalls, which by design require seeing both directions of the traffic flow in order to operate properly.
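
As a minimal sketch, assuming the Juno-era option names, enabling this on the Neutron server amounts to a few lines in neutron.conf (the agent counts shown are illustrative, not recommendations):

[DEFAULT]
# Create new routers as HA (VRRP-backed) routers by default
l3_ha = True
# Bounds on how many L3 agents host each HA router
max_l3_agents_per_router = 3
min_l3_agents_per_router = 2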

Single root I/O virtualization (SR-IOV) networking

The ability to pass physical devices through to virtual machine instances, allowing for premium cloud flavors that provide physical hardware such as dedicated network interfaces or GPUs, was originally introduced in Red Hat Enterprise Linux OpenStack Platform 4. This release adds an SR-IOV mechanism driver (sriovnicswitch) to OpenStack networking to provide enhanced support for passing through networking devices that support SR-IOV.

This driver is available starting with Red Hat Enterprise Linux OpenStack Platform 6 and requires an SR-IOV enabled NIC on the Compute node. It allows for the assignment of SR-IOV VFs (Virtual Functions) directly to VM instances, so that the VM communicates directly with the NIC controller, effectively bypassing the vSwitch. The Nova scheduler has also been enhanced to consider not only device availability but also the related external network connectivity when placing instances with specific networking requirements in their boot request.

Support for Multiple Identity Backends

OpenStack Identity (Keystone) is usually integrated with an existing identity management system, such as an LDAP server, when used in production environments. Using the default SQL identity backend is not an ideal choice for identity management, as it only provides basic password authentication, lacks password policy support, and offers fairly limited user management capabilities. Configuring Keystone to use an existing identity store has its challenges, but some of the changes in RHEL OpenStack Platform 6 make this easier.

RHEL OpenStack Platform 5 and earlier supported configuring Keystone with only a single identity backend, which meant that all service accounts and all OpenStack users had to exist on the same identity management system. In real-world production scenarios, it is commonly required to use the identity store in a read-only configuration, without schema or account changes, so that accounts are managed using native tools. Previously, one of the challenges was that the OpenStack service accounts had to be stored on the same LDAP server as the rest of the user accounts.

In RHEL OpenStack Platform 6, it is possible to configure Keystone to use multiple identity backends. This allows Keystone to use an LDAP server to store normal user accounts and the SQL backend to store OpenStack service accounts. In addition, multiple LDAP servers can be used by a single Keystone instance when using Keystone Domains, which previously worked only with the SQL identity backend.
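
As a hedged sketch of what this looks like in practice (the domain name, file name, and LDAP URL are illustrative), the Juno-era keystone.conf enables per-domain identity drivers, and each domain then gets its own small config file:

# keystone.conf: enable one identity driver per Keystone domain
[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains

# /etc/keystone/domains/keystone.users.conf (one file per domain)
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ldap.example.com

With this layout, the hypothetical "users" domain is served from LDAP while the default domain keeps the SQL backend for service accounts.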

Tighter Ceph Integration

The availability of Red Hat Enterprise Linux OpenStack Platform 6, based on OpenStack Juno, marks a particularly important milestone for Red Hat through the delivery of Ceph Enterprise 1.2 as a complete storage solution for Nova, Cinder, and Glance for virtual machine requirements.

This release introduces advanced support for ephemeral and persistent storage, featuring thin provisioning, snapshots, cloning, and copy-on-write.

  • With RHEL OpenStack Platform 6, VM storage functions can now be delivered transparently to the user on Ceph as customers can now run diskless compute nodes.
  • The new Ceph-backed ephemeral volumes enable the data to remain situated within the Ceph cluster, allowing the VM to boot more quickly without data moving across the network. This also means that snapshots of the ephemeral volume can be performed on the Ceph cluster instantaneously and then put into the Glance library, without data migration across the network.

The Ceph RBD drivers are now shipped by default with RHEL OpenStack Platform 6 and configured through a single, integrated installer that simplifies and speeds deployment of Ceph as part of the OpenStack deployment.

Interested in trying the latest OpenStack-based cloud platform from the world’s leading provider of open source solutions? Download a free evaluation at: http://www.redhat.com/en/technologies/linux-platforms/openstack-platform.

by stephenagordon at February 24, 2015 05:53 PM

Ben Nemec

Quick and Dirty Snapshots of Instances Booted from Volume

I've discussed my local OpenStack installation before, such as here and here. One of the results of the changes I've made to it over time is that I have some instances that are booted from volume. Unfortunately, instances booted from volume don't snapshot properly with the nova image-create command. Below is the quick and dirty method I've been using to take snapshot backups of those VMs for a while now.

I won't go too deeply into the implementation details since you can read the script below, but the basic process is to stop the instance, find the volume it was booted from, use Cinder to create an image from the volume, then restart the instance. I also download the image and then delete it from Glance because I'm primarily using this as a backup method.

There are some obvious limitations to this method (having to stop the instance could be a problem for many people), but I've been running it a few times a week for several months now and it's been working well for me so I figured I'd share it. If there's a simpler way to do this I'm all ears. :-)

vms_to_backup="space separated list of vms to run the script on"
wait_for_image()
{
   image=$1
   count=0
   while ! glance image-show $image | grep active
   do
      count=$(($count+1))
      if [ $count -ge 200 ]
      then
         echo "Image never became active"
         exit 1
      fi
      sleep 10
   done
}

for i in $vms_to_backup
do
   img_name=backup-$i
   restart=0
   
   if nova show $i | grep ACTIVE
   then
      nova stop $i
      restart=1
   fi
   # This is likely to break with multiple attached volumes
   volid=$(nova show $i | grep volumes_attached | sed 's/.*\[{"id": "\([a-zA-Z0-9\-]\+\)"}\].*/\1/')
   while nova show $i | grep ACTIVE
   do
      sleep 1
   done
   cinder upload-to-image $volid $img_name --disk-format qcow2 --force True || continue
   
   wait_for_image $img_name
   
   if [ $restart -eq 1 ]
   then
      nova start $i
   fi
      
   glance image-download $img_name --file /backup/$i.qcow2

   glance image-delete $img_name
   
done

by bnemec at February 24, 2015 04:33 PM

Mirantis

Integrating OpenStack and Kubernetes with Murano

There’s a perceived competition between OpenStack and containers such as Docker, but in reality, the two technologies are a powerful combination. They both solve similar problems, but on different layers of the stack, so combining the two can give users more scalability and automation than ever before.

That container app you wrote needs to run somewhere. This is particularly true for orchestrated container applications, such as those managed by Kubernetes. What’s more, if your application is complicated enough that it needs to scale up and down, you need to be running it in an environment that can, itself, scale up and down. This is where OpenStack comes in.

The idea of making OpenStack and Kubernetes work together might seem a little daunting, but as of today, it just got easier.  A lot easier.

Today we are announcing a joint project with Google to integrate the Kubernetes container management application with the OpenStack Application Catalog, Murano. This project will enable you to click a few buttons and end up with a working, scalable Kubernetes application within minutes.

While that in itself is a pretty heady thought, let’s stop for a moment and think about what that means.  Remember, Kubernetes lets you move workloads between different clouds, as long as they’re both running Kubernetes.  That means that you will be able to move workloads between OpenStack and other clouds, such as Google Cloud Platform.  Suddenly hybrid cloud — an OpenStack private cloud integrated with public cloud for scale — doesn’t sound so crazy anymore.

For example, you might construct an application environment in which your internal database lives in your private OpenStack cloud, but the web application component that presents it to the world lives in public cloud.  Perhaps you have an external-facing application in the public cloud that sends analytics back to a big data application in the private cloud. Or you might simply have an application that runs on the private cloud but uses the public cloud as a bank of additional resources when needed.

But how does it actually work?

How it works

Craig Peters and Georgy Okrokvertskhov will be doing a live demo at the Kubernetes Gathering on Wednesday, February 25, but the idea here is to build on the fact that users can easily self-provision applications using Murano. Murano will provide a Kubernetes package, which provides an abstraction layer for Kubernetes and Pods. Developers can then package their applications for use as they normally would, easily adding them to the Kubernetes cluster.

Kubernetes does the same things you expect it to, such as providing Pods that implement the Docker service, monitoring availability and load of the Pods, and scaling Pods up and down based on the Kubernetes configuration.  It also coordinates connectivity between the Pods and the underlying infrastructure.

Meanwhile, Murano manages and orchestrates that underlying infrastructure, which consists of OpenStack resources. It configures the virtual network for Kubernetes and the Pods, and uses OpenStack Orchestration (Heat) to provision the resources Kubernetes needs, such as virtual machines and interface connections, network and subnet configs, security groups, router configurations, and storage.

If you’re already using Kubernetes, you’re probably already familiar with how scaling works.  Containers are grouped into Pods, and Kubernetes scales the Pods within a Kubernetes cluster, spawning containers on Kubernetes hosts. If you have multiple hosts, Kubernetes distributes the containers among them.

When your application grows to the point where the Kubernetes cluster itself needs to scale, however, the system needs some outside help; an external system needs to add resources. In this case, that “external system” is Murano, which uses OpenStack Telemetry (Ceilometer) to detect when additional resources are needed. Murano adds a new host to the Kubernetes cluster using the Kubernetes “add node” function, and Kubernetes redistributes the load.  (Murano can also initiate scaling applications within a cluster, if necessary.)

Kubernetes and OpenStack in Action

Well, that all sounds great in theory, but how do you actually do it?  Fortunately, the process is pretty straightforward.  

NOTE:  The following steps are in development, and will be available for technical preview use on Mirantis OpenStack Express in April 2015.

  1. Deploy an OpenStack cluster, and install Murano. (In Mirantis OpenStack, this is simply a matter of checking the “Murano” box in the “Create Environment” wizard.)
  2. Open the OpenStack Dashboard (Horizon).
  3. Click Murano and create a new environment.
  4. Add the Kubernetes application to the environment.
  5. Add the Kubernetes Pod to the environment.
  6. Add an application to the Pod. In this case, we’ll add a web server just so we can see it work.
  7. Deploy the environment.
  8. Test.

That’s it.

You can see a demo of the entire process here:

https://player.vimeo.com/video/120445504
Remember, you can also see a live demo at the Kubernetes Gathering on Wednesday, February 25. Starting in April, you’ll be able to do this yourself on Mirantis OpenStack Express. Sign up now to be notified when the functionality is available.

The post Integrating OpenStack and Kubernetes with Murano appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at February 24, 2015 04:00 PM

Sébastien Han

Quick and efficient Ceph DevStacking

Recently I built a little repository on github/ceph where I put two files to help you build your DevStack with Ceph.

$ git clone https://git.openstack.org/openstack-dev/devstack
$ git clone https://github.com/ceph/ceph-devstack.git
$ cp ceph-devstack/local* devstack
$ cd devstack
$ ./stack.sh

Happy DevStacking!

February 24, 2015 10:27 AM

February 23, 2015

OpenStack Superuser

DefCore isn't a punk band, but if you use OpenStack, listen up

In this series of interviews, we’re talking to the new Individual Directors on the 2015 OpenStack Board of Directors. These Directors provide oversight of the Foundation and its budget, strategy and goals. The 24-person Board is composed of eight Individual Directors elected by the Individual Members, eight directors elected by the Gold Members and eight directors appointed by the Platinum Members.

We talk to Russell Bryant, open source software engineer. He shares thoughts on why DefCore matters, expanding the global community and what success will look like in the future for OpenStack.

Superuser: As part of your candidate profile, your aim was to ensure "regional diversity are reflected in the governance and the vision of the project." What are your first steps towards that?

Russell Bryant: Driving global participation starts by recognizing that OpenStack is very much a global community. The OpenStack Summit is already held in different places around the globe.  Anything we can do to support regional OpenStack events is great, as well. Internationalization efforts for the software and its documentation are also important.

In the development community, it's important to adopt community processes that are as inclusive as possible.  OpenStack projects place a lot of emphasis on the design summit, in person mid-cycle meetups, and IRC meetings.  All of those things end up excluding some parts of the world.  The efficiency of real time communication has to be balanced with the importance of being globally inclusive.

Finally, I think role models are important. Having more diversity in various positions providing leadership in the OpenStack community would encourage more people around the world to get more actively involved.

What will success look like for OpenStack five years from now?

The OpenStack project mission statement is:

   To produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable.

We're doing a great job on some parts of the mission.  The part that I feel we are lagging the most on is "being simple to implement." I hope that in five years we've made enormous progress in that area.  Having very strong open source deployment and management tools is a big part of that.  Another part is ongoing focus on improving the end user interfaces, which include the APIs, SDKs, and graphical interfaces like Horizon.

I also expect to see OpenStack become more and more dominant as the standard way people manage their infrastructure.  We should see a lot more adoption, including those that are not traditionally the early adopters of new technology.

Who are your real-life heroes?

In my life, it would be my wife, Julie.

I have several heroes in OpenStack.  One would be Thierry Carrez, OpenStack's director of engineering. I think Thierry has done more than anyone else to hold the development community together since the founding of OpenStack. I am continually impressed and very thankful for all that he does.

What are you looking forward to most at the next summit?

There are always more companies out there doing really interesting work with OpenStack.  Hard problems are being solved.  New deployments are being done that break new ground.  The OpenStack Summit tends to be the time when we learn about a lot of these things that we didn't expect. I'm excited to see what we'll learn this time.

What's your favorite/most important OpenStack debate?

The DefCore effort is referenced a lot in the context of the OpenStack board. There are some very important changes happening to the technical governance of OpenStack that are related and important for people to be aware of.

The technical community is moving away from its integrated release model. Previously, the integrated release served as the base that DefCore worked from and was the primary signal used by downstream consumers about what might be most appropriate to include in an OpenStack product.

In this new model, the technical community becomes a much more inclusive place filled with a lot more choice.  With a much bigger picture of what the OpenStack community includes, it's even more important that we communicate useful information to downstream consumers so that they know which parts of OpenStack they should use to solve their problems.  The DefCore effort plays an important role in defining some set of base criteria to help ensure some amount of consistency and interoperability among OpenStack clouds.

Cover Photo by Sergio Rojas // CC BY NC

by Nicole Martinelli at February 23, 2015 04:34 PM

Rafael Knuth

Live Event: Scaling OpenStack (100 nodes) labs on AWS

When: February 26, 6-8pm PST
Where: 2479 E Bayshore Road, Suite 188 (left entrance to the...

February 23, 2015 01:45 PM

Opensource.com

OpenStack at Walmart, project reform status, and more

Interested in keeping track of what's happening in the open source cloud? Opensource.com is your source for news in OpenStack, the open source cloud infrastructure project.

by Jason Baker at February 23, 2015 08:00 AM

February 22, 2015

Jamie Lennox

V3 Authentication With Auth_token Middleware

Auth_token is the middleware piece in OpenStack responsible for validating tokens and passing authentication and authorization information down to the services. It has been a long-time complaint of those wishing to move to the V3 identity API that auth_token only supported the v2 API for authentication.

Then auth_token middleware adopted authentication plugins and the people rejoiced!

Or it went by almost completely unnoticed. Auth is not an area people like to mess with once it’s working and people are still coming to terms with configuring via plugins.

The benefit of authentication plugins is that they allow you to use any plugin you like for authentication - including the v3 plugins. A downside is that being able to load any plugin means that there isn’t the same set of default options present in the sample config files to indicate the new options available for setting, particularly as we have to keep the old options around for compatibility.

The most common configuration I expect for v3 authentication with auth_token middleware is:

[keystone_authtoken]
auth_uri = https://public.keystone.uri:5000/
cafile = /path/to/cas

auth_plugin = password
auth_url = http://internal.keystone.uri:35357/
username = service
password = service_pass
user_domain_name = service_domain
project_name = project
project_domain_name = service_domain

The password plugin will query the auth_url for supported API versions and then use either v2 or v3 auth depending on what parameters you’ve specified. If you want to save a round trip (once on startup) you can use the v3password plugin which takes the same parameters but requires a V3 URL to be specified in auth_url.
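
For example, the v3password variant of the configuration above would look something like this (a sketch derived from the options already shown, with only auth_plugin and auth_url changed):

[keystone_authtoken]
auth_uri = https://public.keystone.uri:5000/
cafile = /path/to/cas

auth_plugin = v3password
auth_url = http://internal.keystone.uri:35357/v3
username = service
password = service_pass
user_domain_name = service_domain
project_name = project
project_domain_name = service_domain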

An unfortunate thing we’ve noticed from this is that there is going to be some confusion, as most plugins present an auth_url parameter (used by the plugin to know where to authenticate the service user) along with the existing auth_uri parameter (reported in the headers of 403 responses to tell users where to authenticate). This is a known issue we need to address, and it will likely result in changing the name of the auth_uri parameter, as the concept of an auth_url is used by all existing clients and plugins.

For further proof that this works as expected, check out devstack, which has been operating this way for a couple of weeks.

NOTE: Support for authentication plugins was added in keystonemiddleware 1.3.0, released 2014-12-18.

February 22, 2015 11:57 PM

Christian Berendt

Vote for Vancouver

The session voting for the OpenStack Vancouver Summit is closing soon. There are more than 1,000 proposed presentations. Try to find some time to participate in the vote now; participation is the only way to make sure you see the presentations in Vancouver you really care about. We have proposed five presentations of our own and hope that you are interested in them.

by Christian Berendt at February 22, 2015 07:18 PM

February 21, 2015

OpenStack Blog

OpenStack Community Weekly Newsletter (Feb 13 – 20)

OpenStack Technical Committee Update: Project Reform Progress

Over the last few months, the Technical Committee has been discussing plans to dissolve the binary concept of the integrated release and adapt our projects structure to the future needs of collaborative development in the OpenStack community. A specification was written to describe the rationale for the reform and its goals. In the past weeks, the OpenStack Technical Committee approved a first set of changes, affecting how OpenStack upstream teams and projects developed by our community are organized.

Contributing to open cloud projects without losing your mind

If you’re interested in contributing to OPNFV, OpenDaylight or OpenStack, there are a few things to keep in mind to make the process easier.

Welcome to the wild west of networking and cloud infrastructure

It may look like the wild west now, but the future of open networking and cloud infrastructure is bright. That’s the main takeaway from a panel featuring speakers from Red Hat, Ericsson, Cumulus Networks and the OpenStack Foundation moderated by Neela Jacques at the Linux Collaboration Summit.

Nice APIs: Limits in OpenStack SDK

Brian Curtin has been working with a team on the OpenStack SDK, a project aimed at providing a great experience for Python developers who use OpenStack. He and his colleagues want to enable people to build great things on top of OpenStack and are looking for feedback.

The Road to Vancouver

Relevant Conversations

Deadlines and Development Priorities

Reports From Previous Events

Security Advisories and Notices

  • None

Tips ‘n Tricks

Upcoming Events

The 2015 events plan is now available on the Global Events Calendar wiki.

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

OpenStack Reactions

The effect on the gate when introducing a bad commit in a client release

http://i.imgur.com/0ZXNXYw.gifv

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at February 21, 2015 12:31 AM

February 20, 2015

OpenStack Superuser

Superuser Weekend Reading

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community.

Got something you think we should highlight? Tweet, blog, or email us!

In Case You Missed It

“Like a lemming at the proverbial sea cliff, however, the enterprise would do well to avoid blindly following the group. With data infrastructure becoming so fungible these days, the ability to craft optimized solutions is cheaper and easier than ever, even on traditional data center infrastructure." Arthur Cole at Tesora picks apart the Cloud by the Numbers.

You probably spotted the story on Walmart and Openstack, but did you catch the part about what it means for the career trajectory of their engineering team?

"So the ability of the shiny new OpenStack systems to interface with infrastructure that’s been in place for decades or so — some for as much as 50 years — is critical. It also spells the full employment act for all those @WalmartLabs engineers." Gigaom.

Nextcast has a great interview with Joe Arnold, CEO of SwiftStack, about the first public OpenStack launch of Swift independent of Rackspace. Arnold, aka "the nicest guy in open source," tells how his early tech exploits almost got him put on probation and threatened with lawsuits by college officials.

"What comes next is more cloud, more advanced cloud and more advanced management technology, just be forewarned and forearmed," writes Adrian Bridgewater on Red Hat and advanced clouds at Forbes.


We feature user conversations throughout the week, so tweet, blog, or email us your thoughts!

Cover Photo by Enrique López // CC BY NC

by Nicole Martinelli at February 20, 2015 06:52 PM

Tesora Corp

Short Stack: OpenStack at Walmart, myths debunked, and Oracle teams up with Tesora

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Here we go with this week's links:

OpenStack Comes Up Huge for Walmart | Gigaom

Walmart started working with OpenStack about a year ago and is now running over 100,000 cores of OpenStack on its compute layer. It successfully used the technology for Cyber Monday and holiday season sales operations. Walmart plans to expand its production use of open source technology by adopting various OpenStack projects such as Trove, Database as a Service.

Tesora Provides Agent to Provision Oracle Database in OpenStack | ZDNet

Oracle and Tesora are teaming up to bring the Oracle database management system (DBMS) and OpenStack together. With the use of a Trove guest agent for Oracle Database 12c, Oracle DBAs can quickly and easily provision relational or non-relational databases while automating various other tasks that otherwise would have cost them valuable time.

Three Cloud Computing Myths, Debunked | Superuser Blog

In this interview with Egle Sigler, OpenStack Individual Director, she shares her thoughts on the future success of OpenStack and the upcoming Vancouver Summit. She also explains how to make OpenStack more beginner friendly and how to avoid stubborn myths about moving to the cloud.

5 Ways OpenStack Trove Will Change How you Manage Databases | InfoWorld

Many enterprise IT departments face challenges keeping up with a demanding customer market, which can significantly cut into their IT budget. The challenges of providing more and better services at lower cost are typically related to the increasing amount of data and number of databases. OpenStack Trove, the Database as a Service project, can be your solution. Read these 5 specific ways that DBaaS can change the way enterprise IT manages databases today.

Cloud Foundry and OpenStack: World's Top Two Open Source Cloud Projects | IBM OpenTech

Recent research shows that OpenStack and Cloud Foundry are the leading open source cloud technologies. In a joint Silicon Valley Meetup between the OpenStack and Cloud Foundry groups, the talk focused on how the two technologies complement each other through automation and scaling. Check out the presentation decks shared at this Meetup.

by 1 at February 20, 2015 02:47 PM


Short Stack: OpenStack at Walmart, myths debunked, and Oracle deploys OpenStack Trove

short stack_b small.jpgWelcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Here we go with this week's links:

OpenStack Comes Up Huge for Walmart | Gigaom

Walmart started working with OpenStack about a year ago and is now running over 100,000 cores of OpenStack on its compute layer. They successfully used the technology for Cyber Monday sales and holiday season sales operations. Walmart plans on expanding their production of open source technology by adopting various OpenStack projects such as Trove, Database as a Service.

Tesora Provides Agent to Provision Oracle Database in OpenStack | ZDNet

Oracle and Tesora are teaming up to bring the Oracle database management service (DBMS) and OpenStack together. With the use of a Trove guest agent for Oracle Database 12c, Oracle DBAs can quickly and easily provision relational or non-relational databases while automating various other tasks that otherwise would have had cost them valuable time.

Three Cloud Computing Myths, Debunked | Superuser Blog

In this interview with Egle Sigler, OpenStack Individual Director, she shares her thoughts on the future success of OpenStack and the upcoming Vancouver Summit. She also explains how to make OpenStack more beginner friendly and how to avoid stubborn myths about moving to the cloud.

5 Ways OpenStack Trove Will Change How you Manage Databases | InfoWorld

Many enterprise IT departments face challenges of keeping up with the demanding customer market, which can significantly cut into their IT budget. The challenges of providing more and better services at lower costs are typically related to the increasing amount of data and number of databases. OpenStack Trove, Database as a Service, technology can be your solution. Read these 5 specific ways that DBaaS can change the way enterprise IT is managing databases today.

Cloud Foundry and OpenStack- World's Top Two Open Source Cloud Projects | IBMOpenTech

Recent research shows that OpenStack and Cloud Foundry are the leading open source cloud technologies. In a joint Silicon Valley Meetup between the OpenStack and Cloud Foundry groups, the talk focused on how the two technologies complement each other through automation and scaling. Check out the presentation decks shared at this Meetup.

by 1 at February 20, 2015 02:47 PM

James Page

OpenStack Summit Vancouver: Ubuntu OpenStack team presentations

Amongst the numerous submissions for speaking slots at the OpenStack Summit in Vancouver in May, you’ll find a select few from my team:

Multi-node OpenStack development on single system (Speakers: James Page, Corey Bryant)

Corey has been having some fun hacking on enabling deployment from source in the OpenStack Juju Charms for Ubuntu – come and hear about what we’ve done so far and how we’re trying to enable a multi-node OpenStack deployment from source on a single node using KVM and LXC containers, with devstack-style reloads!

Scaling automated testing of Ubuntu OpenStack (Speakers: James Page, Ryan Beisner, Liam Young)

The Ubuntu OpenStack team have an ever-increasing challenge in supporting testing of numerous OpenStack versions on many different Ubuntu releases; we’ll be covering how we’ve used OpenStack itself to help us scale out our testing infrastructure to support these activities, as well as some of the technologies and tools we use to deploy and test OpenStack itself.

OpenStack HA Nirvana on Ubuntu (Speaker: James Page)

We’ve been able to deploy OpenStack in highly available configurations using Juju and Ubuntu since the Portland Summit in 2013 – since then we have evolved and battle-tested our HA reference architecture into a rock-solid solution that ensures availability of cloud services to end users. This session will cover the Ubuntu OpenStack HA reference architecture in detail – we might even manage a demo as well!

Testing OpenStack with OpenStack (Speaker: Ryan Beisner)

Ryan Beisner has been leading Ubuntu OpenStack QA for Canonical since 2014; he’ll be deep-diving into the challenges faced in ensuring the quality of Ubuntu OpenStack, and how we’ve leveraged the awesome tool set we have in Ubuntu for deploying and testing OpenStack to support testing both virtually and on bare metal, hundreds of times a day.

Also of interest, building on and around the base technology that the Ubuntu OpenStack team delivers:

OpenStack IPv6 Support (Speaker: Edward Hope-Morley)

Ed’s team have made great inroads into enabling Ubuntu OpenStack deployments in IPv6-only environments; he’ll be discussing the challenges encountered and how the team overcame them, as well as setting out some suggested improvements that would make IPv6 support a first-class citizen in OpenStack.

Autopiloting OpenStack (Speaker: Dean Henrichsmeyer)

Dean will be talking about how the Ubuntu OpenStack Autopilot pulls together all of the various technologies in Ubuntu (MAAS, Juju and OpenStack) to fully automate deployment and scale-out of complex OpenStack deployments on Ubuntu.

Containers for Dummies (Speaker: Tycho Andersen)

Tycho promises an enlightening and fun talk introducing all the basic technologies in Linux that support containers – all done through the medium of pictures of cats!

You can find the full list of Canonical submissions here – see you all in Vancouver!


by JavaCruft at February 20, 2015 09:58 AM

Daniel P. Berrangé

Nova metadata recorded in libvirt guest instance XML

One of the issues encountered when debugging libvirt guest problems with Nova is that it isn’t always entirely obvious why the guest XML is configured the way it is. For a while now, libvirt has had the ability to record arbitrary application specific metadata in the guest XML. Each application simply declares the XML namespace it wishes to use and can then record whatever it wants. Libvirt will treat this metadata as a black box, never attempting to interpret or modify it. In the Juno release I worked on a blueprint to make use of this feature to record some interesting information about Nova.
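
As an aside, any application can use the same mechanism for its own purposes. A minimal sketch with the libvirt-python binding might look like the following – note the guest name, namespace URI and snippet here are purely illustrative, not anything Nova uses:

import libvirt

# Connect to the local QEMU/KVM hypervisor and look up a guest by name
# (the guest name is purely illustrative)
conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')

# Record an application specific XML snippet in the guest's <metadata>
# element. The prefix and namespace URI declare which application owns
# the snippet; libvirt stores it as an opaque black box.
dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT,
                '<note>recorded by an example monitoring tool</note>',
                'myapp', 'http://example.com/xmlns/myapp/1.0', 0)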

The initial set of information recorded is as follows:

  • Version – the Nova version number, and any vendor specific package suffix (e.g. RPM release number). This is useful as the user reporting a bug is often not entirely clear which particular RPM version was installed when the guest was first booted.
  • Name – the Nova instance display name. While you can correlate Nova instances to libvirt guests using the UUID, users reporting bugs often only tell you the display name, so recording it in the XML is handy for working out which XML config corresponds to which Nova guest they’re talking about.
  • Creation time – the time at which Nova booted the guest. Sometimes useful when trying to understand the sequence in which things happened.
  • Flavour – the Nova flavour name, memory, disk, swap, ephemeral and vcpus settings. Flavours can be changed by the admin after a guest is booted, so having the original values recorded against the guest XML is again handy.
  • Owner – the tenant user ID and name, as well as their project
  • Root image – the glance image ID, if the guest was booted from an image

The Nova version number information in particular has already proved very useful in a couple of support tickets, showing that the VM instance was not booted under the software version that was initially claimed. There is still scope for augmenting this information further though. When working on another support issue it would have been handy to know the image properties and flavour extra specs that were set, as the user’s bug report also gave misleading / incorrect information in this area. Information about cinder block devices would also be useful to have access to, for cases where the guest isn’t booting from an image.

While all this info is technically available from the Nova database, it is far easier (and less dangerous) to ask the user to provide the libvirt XML configuration than to have them run random SQL queries. Standard OS troubleshooting tools such as sosreport from RHEL/Fedora already collect the libvirt guest XML when run. As a result, the bug report is more likely to contain this useful data in the initial filing, avoiding the need to ask the user to collect further data after the fact.

To give an example of what the data looks like, a Nova guest booted with

$ nova boot --image cirros-0.3.0-x86_64-uec --flavor m1.tiny vm1

gets the following data recorded:

$ virsh -c qemu:///system dumpxml instance-00000001
<domain type='kvm' id='2'>
  <name>instance-00000001</name>
  <uuid>d0e51bbd-cbbd-4abc-8f8c-dee2f23ded12</uuid>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="2015.1"/>
      <nova:name>vm1</nova:name>
      <nova:creationTime>2015-02-19 18:23:44</nova:creationTime>
      <nova:flavor name="m1.tiny">
        <nova:memory>512</nova:memory>
        <nova:disk>1</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>1</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="ef53a6031fc643f2af7add439ece7e9d">admin</nova:user>
        <nova:project uuid="60a60883d7de429aa45f8f9d689c1fd6">demo</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="2344a0fc-a34b-4e2d-888e-01db795fc89a"/>
    </nova:instance>
  </metadata>
 ...snip...
</domain>

The intention is that as long as the XML namespace URI (http://openstack.org/xmlns/libvirt/nova/1.0) isn’t changed, the data reported here will not change in a backwards incompatible manner. IOW, we will add further elements or attributes to the Nova metadata, but not change or remove existing elements or attributes. So if OpenStack related troubleshooting / debugging tools want to extract this data from the libvirt XML they can be reasonably well assured of compatibility across future Nova releases.
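
To illustrate that compatibility promise, here is a minimal sketch (an illustration of mine, not part of the blueprint) of how a troubleshooting tool might extract a few of these fields from a saved copy of the guest XML, using only the Python standard library; the guest.xml filename is an assumption:

import xml.etree.ElementTree as ET

# The namespace URI is the stable contract described above
NS = {'nova': 'http://openstack.org/xmlns/libvirt/nova/1.0'}

# e.g. captured earlier with:
#   virsh -c qemu:///system dumpxml instance-00000001 > guest.xml
tree = ET.parse('guest.xml')
instance = tree.find('.//metadata/nova:instance', NS)

if instance is not None:
    print('Nova version: %s' % instance.find('nova:package', NS).get('version'))
    print('Display name: %s' % instance.find('nova:name', NS).text)
    print('Flavour: %s' % instance.find('nova:flavor', NS).get('name'))
    print('Created: %s' % instance.find('nova:creationTime', NS).text)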

In the Kilo development cycle there have been patches submitted to record a similar kind of data for VMware guests, though obviously using a config data format relevant to VMware rather than XML. Sadly this useful debugging improvement for VMware had its feature freeze exception request rejected, pushing it out to Liberty, which is rather a shame :-(

by Daniel Berrange at February 20, 2015 09:47 AM

Sean Roberts

Let the Liberty Voting Begin

The EMC Federation of companies has submitted many talks for this summit. Take a look and vote for the ones you believe will be useful. I am especially interested in getting feedback on DefCore and the Product Team. The post continues with a table of submissions (organization, speakers, session type, track, title, description and voting URL). … Continue reading Let the Liberty Voting Begin

by sarob at February 20, 2015 06:32 AM

OpenStack Superuser

Contributing to open cloud projects without losing your mind

If you’re interested in contributing to OPNFV, OpenDaylight or OpenStack, there are a few things to keep in mind to make the process easier.

Understanding the release cycle, making your first contributions count and considering the culture will go a long way. These were some of the takeaways from a discussion featuring Red Hat, OPNFV, OpenDaylight and OpenStack at the Linux Collaboration Summit.

Know the release cycle

If you don’t have any idea about the release cycle, missteps are easy to make.

For example, OpenStack has a six-month release cadence, with releases in the spring and fall. New features start with a specification (blueprint) that goes through a review and approval process; once approved, a blueprint is targeted to a release and a milestone. New specifications won't be discussed until at least feature freeze, which lands in the third week of March, and each OpenStack project has its own schedule. OpenDaylight is on a similar six-month release timeline, said Luis Gomez, leader of the integration group in OpenDaylight. OPNFV is planning its first release in April, including the platform, basic components and functional tests, said Chris Price.

Plan strategically

“Think ahead and get your work in early,” said Chris Wright, technical director for software-defined networking at Red Hat. “Individual projects are resource constrained even if they are reasonable additions.” Community members should advocate for what they think is important, but prioritize changes, keeping in mind future involvement and maintenance. “If we’re not showing up as developers, it’s not going to work.”

Take baby steps

You’ll generate more goodwill if you “don’t show up with code but start by reviewing code,” said Stefano Maffulli, developer advocate at OpenStack. Reviewers may already have a backlog of code to review; if you offer to take some of that work off their hands, you’re already helping rather than creating more work, he added.

If you’re joining the community because you need something specific, approach it with a community mindset.

“Focus on the feature and why it’s generally useful — and not just useful to telcos,” Wright said. Calling it an ‘NFV solution’ has some level of negative connotation, he added. “Be problem-statement focused, not problem-solution focused. Open it up to the community to find the best way to solve it.”

Upstream first

Price put an emphasis on “upstream first,” adding that with OPNFV, the intention is not to fork project code. Your best bet is to clearly communicate your requirements and collaborate on the development side instead. The goal at OPNFV is to take customer requirements and turn them into development work in the relevant upstream software projects.

Keeping these basic strategies in mind will help get everyone where they need to go.

Cross-collaboration between open source communities is a “top priority” for accelerating open source NFV implementation, Price added. Heather Kirksey of OPNFV agreed: “We view ourselves as big tents for all open source projects around NFV.”

Cover Photo by Radio Saigon; Photo of OPNFV stickers from the conference by Nicole Martinelli. // CC BY NC

by Nicole Martinelli at February 20, 2015 01:12 AM

February 19, 2015

OpenStack Superuser

Chris Anderson on how the cloud fosters innovation

Chris Anderson sees only blue skies. The former Wired editor-in-chief and CEO of drone maker 3D Robotics is riding high on the recent rules proposed by the Federal Aviation Administration that will free up the airspace for drones.

“To my amazement and delight, the new proposed regulations are actually progressive,” Anderson told a packed house at the Linux Collaboration Summit. “By not requiring aircraft certification, they’ve liberated these kinds of drones based on open innovation and open source.”

Two years after humble beginnings at his dining room table in Berkeley, 3D Robotics became the biggest drone manufacturer in North America. To face the competition, Anderson is putting his faith in open source.

Anderson brought two drones on stage to make his point. One is Iris (pictured with Anderson above) from 3D Robotics, which retails for $750. The other? A drone from a Chinese manufacturer using open source software from 3D Robotics. There was no permission to ask and no meetings to hold, and “they’re innovating in ways we don’t have to,” he adds.

Along with companies including Qualcomm and Intel, 3D Robotics belongs to Dronecode, the Linux Foundation’s open source platform for Unmanned Aerial Vehicles (UAVs).

Video: https://www.youtube.com/embed/_yOCTgVqmeQ

“We’re not going to beat the competition by raising more money or being smarter than they are,” Anderson said. “We’ll do it by being more open.”

Iris has sensors that talk directly to the cloud. The processing power is basically unlimited, and computational resources run in parallel, he says. There’s a “cloud of sensors, not one pilot, one drone,” he adds. Drones aren’t just drones anymore – they’re sensors in the sky producing big data, a way to extend the internet into the skies, he added.

“This is 21st-century open innovation: Everyone has access to software and hardware; people become the competitive edge.”

Cover Photo by Phil; Photo of Chris Anderson by Nicole Martinelli. CC BY NC

by Nicole Martinelli at February 19, 2015 06:19 PM

About

Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology, you should add your OpenStack blog.
