March 28, 2015

OpenStack Blog

OpenStack Community Weekly Newsletter (Mar 20 – 27)

OpenStack DefCore Process Draft Posted for Review [major milestone]

The OpenStack DefCore Committee is looking for community feedback about the proposed DefCore Process. March has been a month of OpenStack DefCore milestones: at the March Board meeting, the first official DefCore Guideline (called DefCore 2015.03) was approved, and the first DefCore Process draft is ready to be committed.

2015 OpenStack T-Shirt Design Contest

It’s that time of year again and we’re looking for a new design to grace OpenStack’s T-Shirts. Here’s your chance to show us your creative talent and submit an original design for our 2015 OpenStack T-shirt Design Contest!

The Road to Vancouver

Relevant Conversations

Deadlines and Development Priorities

Security Advisories and Notices

Tips ‘n Tricks

Reports from Previous Events

Upcoming Events

OpenStack @ PyCon 2015: Booth info, looking for volunteers, posting of jobs, OpenStack presentations

Other News

OpenStack Reactions

Lion hugs man

Yes, I’ll be your mentor for Outreachy

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at March 28, 2015 01:26 AM

Matt Fischer

Scale & Maturity: Thoughts on the OpenStack Mid-Cycle Operators Meetup

A re-post of an article I wrote last week for SuperUser.

A couple weeks back I attended the OpenStack Operators Mid-Cycle Meetup in Philadelphia.

One of the best parts of these meetups is that the agenda is all formed in public and everyone gets a say in it. Some of the things I was most concerned about made the list (RabbitMQ, packaging); some did not get on the main agenda (database maintenance, Puppet), but many side groups formed during the event and these topics were covered in lunch conversations and in the lobby.

The interesting part of this summit was hearing about other operators’ problems and solutions. This was more focused, yet with a larger audience, than the previous sessions in Paris. I think a real sense of camaraderie and support for shared solutions was achieved.

Puppet OpenStack Discussion at Lunch

As I was listening to people discuss their issues with OpenStack and how others had solved them, I realized that OpenStack operators have different issues at different scales and maturity levels. When I think about scale and maturity, it’s not necessarily about the number of nodes or the number of customers; it’s more about the number of resources you have, the number of services you provide, the maturity of your processes (such as deployment), and to some extent how many of your problems you’ve solved with automation.

Our team started at a small scale. We had four people and the goal was to stand up OpenStack. With four people, you are limited in scope and have to judiciously focus your resources. As our team grew and we worked through forming our deployment and automation processes, we were able to spend more time on improving our service offerings and adding more services. We can also go back and clean up the technical debt that accumulates as you build an OpenStack deployment. Before these tools and processes are fully in place (and they are never perfect), making changes can take away valuable time.

For example, when an operator finds a bug, it takes a lot of resources and time to get a fix for that bug into production. This includes debugging, filing a bug, fixing the code, pushing a fix, begging for reviews, doing a cherry-pick, begging for more reviews, waiting for or building a package, deploying the new code. Many operators stop around the “filing a bug” step of this process.

Medium-sized operators or ones with more mature processes will sometimes work on a fix (depending on the issue), and may or may not have systems in place to allow them to hold the patch locally and build local packages. On the other hand, larger operators who have good systems in place can give the issue to the <insert OpenStack service here> Team. They may have 10 people working on it, some of them core members. They have a full continuous integration/automation team that has solved package builds and deployments for them.

Our goal has always been to increase our scale of services, not only via more resources but through automation and creating tools/processes that allow us to offer services for the same amount of resource investment. The main reason for this is that our customers don’t care about Keystone or Neutron; these are just building blocks for them. They really want services like Domain Name System (DNS), load-balancing-as-a-service (LBaaS), firewall-as-a-service (FWaaS) and database-as-a-service (DBaaS). But until the processes and tools are solid for the core components, it’s hard to find time to work on those, because while customers may not know what Keystone is, they sure care when it doesn’t work.

So how does any of this relate to the conference, besides the fact that I was daydreaming about it in the lobby? What is clear to me after the sessions is that we have some specific areas where we’re going to work on process improvements and tooling.

My top three are:

  1. RabbitMQ monitoring and tooling
  2. Speeding up and clarifying our development & deployment process
  3. Investigating alternatives to heavy-weight OS packages for deploying OpenStack code

When we revisit this again in six months at the next Mid-Cycle, I suspect that number two will remain on the list, probably forever, since you can always make this process better. I’m certain we’ll have a new number one, and I’m pretty hopeful about the options for number three.

What will these investments get us? Hopefully more time for second-level service offerings and happier customers.

by Matt Fischer at March 28, 2015 01:02 AM

March 27, 2015

OpenStack Superuser

Superuser weekend reading

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community.

Got something you think we should highlight? Tweet, blog, or email us!

In Case You Missed It

PayPal’s Sri Shivananda shared the company’s latest milestone on the PayPal blog. PayPal has converted nearly 100 percent of its traffic-serving web/application program interface apps and mid-tier services to run on its OpenStack-based private cloud. “OpenStack has been a key choice in our journey to drive agility and operational efficiencies,” says Shivananda. “In leveraging open source technology, it’s important that we engage with the community. We’re doing so as an active contributor to several OpenStack projects.”

HP Helion's Stephen Walli gives a great account of why OpenStack is different from other open source projects. "Openstack is going through forced growth in a time frame that is 20%-25% of the time of other large-scale successful open source-licensed infrastructure projects. Openstack continues to demonstrate enormous growth and participation," notes Walli.

The 2015 OpenStack T-Shirt Design Contest is live! You have until April 17, 2015 to submit a sketch of your design. The winning design will be featured on t-shirts to be given out at upcoming OpenStack events and the new online store.

What do women in tech need and how can male allies help? Rackspace's Anne Gentle offers six ideas from her own experiences, including being that friendly colleague at meetups.

Not all cores are equal, according to Tim Bell. The CERN OpenStack cloud team provides hints and tips for benchmarking in this blog post.

We feature user conversations throughout the week, so tweet, blog, or email us your thoughts!

Cover Photo by Kevan // CC BY 2.0

by Hong Nga Nguyen at March 27, 2015 07:41 PM

IBM OpenTech Team

Checklist for performing OpenStack code reviews

Reviewing code is essential to open source development. Code reviews are at the heart of the OpenStack community; code reviews must be performed by core reviewers to merge patches within OpenStack projects. A solid code review can be as beneficial as the idea being proposed. The following post may not be for everyone; it’s just a guide that I use when reviewing code.

Where to start

Pick a project that has familiar concepts. Know Django and web development? Check out Horizon. Want to learn about security? Check out Keystone. There are so many OpenStack and Stackforge projects now available; pick one that interests you. It’s a good idea to know the projects that are related to your primary focus. For instance, Keystone also has: keystone-specs, pycadf, python-keystoneclient and keystonemiddleware. To make searching through Gerrit easier, create bookmarks in the form of:

https://review.openstack.org/#/q/status:open+project:{repo}/{projectName},n,z
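
For example, a bookmark for open Keystone reviews (using the openstack/ namespace) would be:

https://review.openstack.org/#/q/status:open+project:openstack/keystone,n,z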

Sean Dague has a great write up about how to leverage Gerrit queries to avoid OpenStack review overload.

Picking the right patch

  • Patches related to blueprints will have a lot of code. Pro: Better chance to provide helpful reviews. Con: Lots of code, could be a bit complicated for a newcomer.
  • Patches related to bugs are usually smaller changes. Pro: Easier to understand what’s going on. Con: Might limit your feedback.
  • If a patch has a +A (+1 for workflow), then it’s already approved, choose another patch.
  • If a patch is marked as WIP (-1 for workflow), then it’s a Work In Progress, choose another patch (this depends on the author and project, but usually it’s not ready for review).

Reviewing

This is a checklist that I try to follow; it is by no means complete, nor suitable for everyone.

The golden rules

  • Good quality code reviews produce good quality code.
  • Don’t be biased, treat colleagues (from the same company) the same as you would anyone else; or better yet, review their patches even harder.
  • Avoid rubber stamping. Rubber stamping is +1’ing with no comments. Obviously this depends on the patch; if it’s a simple fix then it’s probably fine. But try to let the author know you’ve actually read and understood the changes: leave an inline comment or general comment.

The commit message

  • It goes title (on one line), then description (many lines).
  • The title should be a summary of what’s happening: “Do x y z”.
  • The description should explain WHY. Is the change a refactoring? Is it a bug? A new feature? What’s the motivation behind the change? It might be complicated, but it should be clear. (An example commit message follows this list.)
  • Know the keywords and their syntax:
    • Closes-Bug: 123456, Partial-Bug: 123456, Related-Bug: 123456
    • Implements bp blueprint-name
    • Co-Authored-By: Name <e-mail>
    • Is there a DocImpact, APIImpact or SecurityImpact?
  • OpenStack Wiki link for GitCommitMessages.
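
Pulling these rules together, a hypothetical commit message (the change and bug number are made up) might look like:

    Fix token expiry check in auth middleware

    The expiry comparison used local time while tokens carry UTC
    timestamps, so valid tokens appeared expired several hours early.
    Compare both values in UTC instead.

    Closes-Bug: 123456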

Tests

  • Is there an accompanying test that covers the new behaviour?
  • Is there sufficient test coverage? A single test may not be enough.
  • Are there negative tests that cover failure scenarios, and assert exceptions are raised? (A sketch follows this list.)
  • This topic had a memorable thread on the mailing list.
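
As a minimal sketch of positive and negative tests (the validate_password helper is hypothetical):

    import unittest

    def validate_password(password):
        # Hypothetical code under test.
        if len(password) < 8:
            raise ValueError('password too short')
        return True

    class PasswordValidationTests(unittest.TestCase):
        def test_valid_password(self):
            # Positive test: the success path returns normally.
            self.assertTrue(validate_password('correct horse battery'))

        def test_short_password_rejected(self):
            # Negative test: assert the expected exception is raised.
            self.assertRaises(ValueError, validate_password, 'abc')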

Libraries

  • Is there a library that an author could leverage?
  • If the author is adding a new library, is it appropriately licensed?
  • Is the import order correct? It should go: standard lib, third party libs, then local to the project (sketched after this list).
  • Is the author importing a library or making a new utility function when they can be leveraging Oslo?
    • Importing json instead of using oslo.serialization?
    • Importing datetime instead of oslo.utils?
  • A list of all the Oslo projects
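
A sketch of the expected import grouping, using oslo.serialization in place of the standard library’s json module (the oslo_serialization module path assumes Kilo-era Oslo; older releases used the oslo.serialization namespace package):

    # Standard library
    import datetime

    # Third-party libraries
    import six

    # Oslo / local project imports
    from oslo_serialization import jsonutils

    record = {'name': six.text_type('demo'),
              'created_at': datetime.datetime.utcnow().isoformat()}
    print(jsonutils.dumps(record))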

Messages

  • Is the author properly enclosing messages with _()? These indicate that they are marked for translation. (See the sketch after this list.)
  • Don’t mark messages within tests for translation.
  • Don’t mark debug messages for translation.
  • In the event of an exception, is the author leaving the user with enough information? Exception messages should be helpful.
  • Is the author using the right logging level, DEBUG, INFO, WARNING?
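
A sketch of these rules together (the _() helper here is a stand-in; real code imports it from the project’s own i18n module, e.g. keystone.i18n):

    import logging

    def _(msg):  # stand-in for the project's real translation marker
        return msg

    LOG = logging.getLogger(__name__)

    def delete_user(user_id, known_users):
        # Debug messages are developer-facing: leave them untranslated.
        LOG.debug('looking up user %s', user_id)
        if user_id not in known_users:
            # Messages that reach the user are marked for translation.
            raise ValueError(_('User %s could not be found.') % user_id)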

Docstrings and comments

  • Docstring – How to use the code
    • If the author creates a function, and it’s doing something non-obvious then ask for a docstring. It’ll help future contributors understand what’s going on.
    • A docstring should summarize what the function does in a few lines, and document the input parameters, return type and return value. (A sketch follows this section.)
  • Comment – Why (rationale) and how the code works
    • Comments should be properly formatted and helpful.
    • Use the right format: # TODO(stevemar) or # NOTE(stevemar)
    • Comments should be helpful, not blatantly obvious. Don’t do the following:
            # We assert that result is true
            assertTrue(result)
            # Initiate a counter
            counter = 0 
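
And a docstring along the lines described above (the function is hypothetical):

    def filter_endpoints(endpoints, region):
        """Return only the endpoints that belong to the given region.

        :param endpoints: list of endpoint dicts, each with a 'region' key
        :param region: name of the region to filter on
        :returns: list of matching endpoint dicts
        """
        return [e for e in endpoints if e.get('region') == region]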

Being Pythonic
Read this slide deck. No really, go read it. (A few of these rules are sketched in code after the list below.)

  • foo.get('bar', None) This is pointless since .get() will default to None anyway.
  • foo['bar'] Can be dangerous; it will raise KeyError if 'bar' doesn’t exist. Use if 'bar' in foo:.
  • Avoid \’s when line wrapping; use ()’s.
  • Casing:
    • joined_lower for functions, methods, attributes
    • ALL_CAPS for constants
    • CamelCaps for classes
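
A few of these rules sketched in code (all names are made up):

    MAX_RETRIES = 3                 # constants: ALL_CAPS

    class TokenProvider(object):    # classes: CamelCaps
        def issue_token(self):      # methods: joined_lower
            return {'id': 'abc123'}

    token = TokenProvider().issue_token()

    # .get() already defaults to None; passing None is redundant.
    expires = token.get('expires')

    # Guard direct indexing with a membership test to avoid KeyError.
    if 'id' in token:
        token_id = token['id']

    # Wrap long lines with parentheses rather than backslashes.
    summary = ('token %s expires %s'
               % (token.get('id'), token.get('expires')))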

Code smells

  • Duplicated code: Similar functions can usually be consolidated into one (see the sketch after this list).
  • Long method: A method or function that is too large, can usually be broken up into smaller pieces.
  • Contrived complexity: Could a simpler design work?
  • Excessively long identifiers: headers_of_http_auth_token_response = resp['headers'] might be a bit too long.
  • Excessively short identifiers: a = self.create_auth_plugin() might be a bit too short.
  • Excessive use of literals: Try to use constants when possible.
  • Complex conditionals: No one wants to try and figure out why your branch needs to check six things.
  • Learn more about code smells.
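
As a tiny illustration of the first smell, two near-identical functions (hypothetical) usually collapse into one:

    # Before: duplicated logic in two near-identical functions.
    def get_active_users(users):
        return [u for u in users if u['state'] == 'active']

    def get_disabled_users(users):
        return [u for u in users if u['state'] == 'disabled']

    # After: one function, parameterized on the state.
    def get_users_by_state(users, state):
        return [u for u in users if u['state'] == state]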

Other tips

  • Does the patch align with the overall architecture?
  • Got a question? Ping the author on IRC, or make an inline comment.
  • Check out the change, add a debug breakpoint and run a test.
  • Don’t try to review everything; pick as many patches as you are comfortable with. Folks will reply to your comments and expect you to review the new change set again, and again, and again…

The post Checklist for performing OpenStack code reviews appeared first on IBM OpenTech.

by Steve Martinelli at March 27, 2015 03:15 PM

Red Hat Stack

An ecosystem of integrated cloud products

In my prior post, I described how OpenStack from Red Hat frees you to pursue your business with the peace of mind that your cloud is secure and stable. Red Hat has several products that enhance OpenStack to provide cloud management, virtualization, a developer platform, and scalable cloud storage.

Cloud Management with Red Hat CloudForms            

CloudForms contains three main components:

  • Insight – Inventory, Reporting, Metrics
  • Control – Eventing, Compliance, and State Management
  • Automate – Provisioning, Reconfiguration, Retirement, and Optimization

 

Business Benefit: One unified tool to manage virtualization and OpenStack cloud reduces the IT management overhead of multiple consoles and tools.
Use Case: Manage your Red Hat Virtualization, OpenStack, and VMware vSphere infrastructure with one tool, CloudForms.

Business Benefit: One unified tool to manage private OpenStack and public cloud with the three components above.
Use Case: For temporary capacity needs, you can burst to an Amazon or OpenStack public cloud.

Scale up with Red Hat Enterprise Virtualization

Virtualization improves efficiency, frees up resources, and cuts costs.

And as you plan for the cloud, it’s important to build common services that use your virtualization investment and the cloud, while avoiding vendor dependency.

Business Benefit: Consolidate your physical servers, lower costs, and improve efficiency.
Use Case: Run enterprise applications like Oracle, SAP, SAS, Microsoft Exchange and other traditional applications on virtual servers.

Red Hat Ceph Storage                           

Ceph™ is a massively scalable, software-defined storage system that runs on commodity hardware. It provides a unified solution for cloud computing environments and manages block, object, and image storage.

 

Business Benefit: The Red Hat Enterprise Linux OpenStack Platform installer automatically configures the included storage driver and Ceph clients.
Use Case: Store virtual machine images, volumes, and snapshots, or Swift object storage for tenant applications.

 

Platform as a Service with Red Hat OpenShift            

An on-premise, private Platform as a Service (PaaS) solution offering that allows you to deliver apps faster and meet your enterprise’s growing application demands.

Business Benefit: Accelerate IT service delivery and streamline application development.
Use Case: Choice of programming languages and frameworks, databases and development tools allows your developers to get the job done, using the languages and tools they already know and trust. Including:

  • Web Console, Command-line, or IDE
  • Java(EE6), Ruby, PHP, Python, and Perl

With an open hybrid cloud infrastructure from Red Hat, your IT organization can better serve your business by delivering more agile and flexible solutions while protecting business assets and preparing for the future.

by Jonathan Gershater at March 27, 2015 02:16 PM

Tesora Corp

Short Stack: Paypal approaches OpenStack, cloud scalability, interview with Doug Hellmann

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Here we go with this week's links:

The ecosystem supporting OpenStack is really taking off in 2015. First Walmart, and now PayPal, who recently announced that they're running almost 100% of their web/API applications and mid-tier services on OpenStack. We are seeing increasing user adoption as more and more companies become available to help build and operate private clouds.
 
Private clouds simply do not scale as well as public ones, but there are plenty of ways that scalability can be architected into local resources for private cloud environments.  How critical is scalability in a cloud?  OpenStack is the leading open cloud environment and it continues to work on improving scalability.
 

Interview with Doug Hellmann, OpenStack contributor and technical committee member | Tesora Blog

In this interview with Doug Hellmann, an OpenStack contributor at HP, he shares his involvement and contributions to the OpenStack Community.  He explains his experience being on the Technical Committee and what he enjoys most about working with the other members.  Lastly, he shares his thoughts on the state of OpenStack Trove in 2015 and concludes with a fun fact about himself.  

 

Juniper and Mirantis snuggle into a cloud OpenStack alliance | The Inquirer

Mirantis and Juniper are creating a cloud OpenStack alliance by signing an engineering partnership.  The two companies will now inter-operate an open source software-defined networking system that provides a reliable, scalable solution.  As part of the partnership, they also published a reference architecture that simplifies deployment and reduces the need for third-party involvement. 

 

Why OpenStack is different from other open source projects | Opensource.com

Does OpenStack differ that much from other open source projects?  Henrik Ingo suggests so.  His analysis and observations show that an open source licensed project must be functioning in and of itself.  Projects need to be stable and well-run before escalating to end user adoption.  The community also plays an important role in such projects.

by 1 at March 27, 2015 01:10 PM

March 26, 2015

Adam Young

OpenStack keeps resetting my hostname

No matter what I changed, something kept setting the hostname on my VM to federate.cloudlab.freeipa.org.novalocal. Even forcing the /etc/hostname file to be uneditable did not prevent this change. Hunting this down took far too long, and here is the result of my journey.

Old Approach

A few releases ago, I had a shell script for spinning up new virtual machines that dealt with dhclient resetting values by putting overrides into /etc/dhclient.conf.  Finding this file was a moving target.  First it moved into

/etc/dhcp/dhclient.conf.

Then to a file inside

/etc/dhcp/dhclient.d

And so on.  The change I wanted to make was to do two things:

  1. Set the hostname explicitly and keep it that way
  2. Use my own DNS server, not the DHCP-managed one
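
For reference, those overrides looked roughly like this (a sketch; the nameserver address is made up, and as noted above the file location varies by release):

    # /etc/dhcp/dhclient.conf (or wherever your release keeps it)
    supersede host-name "federate.cloudlab.freeipa.org";
    supersede domain-name-servers 192.168.1.1;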

Recently, I started working on a RHEL 7.1 system running on our local cloud.  No matter what I did, I could not fix the host name.  Here are some  of the things I tried:

  1. Setting the value in /etc/hostname
  2. running hostnamectl set-hostname federate.cloudlab.freeipa.org
  3. Using nmcli to set the properties for the connection’s IPv4 configuration
  4. Explicitly Setting it in /etc/sysconfig/network-scripts/ifcfg-eth0
  5. Setting the value in /etc/hostname and making hostname immutable with chattr +i /etc/hostname

Finally, Dan Williams (dcbw) suggested I look in the journal to see what was going on with the host name.  I ran journalctl -b and did a grep for hostname.  Everything looked right until…

Mar 26 14:01:10 federate.cloudlab.freeipa.org cloud-init[1914]: [CLOUDINIT] stages.py[DEBUG]: Running module set_hostname (<module 'cloudinit.config.cc_set_hostname' from '/usr/lib/python2.7/site-packages/cloudinit...

cloud-init?

But…I thought that was only supposed to be run when the VM was first created? So, regardless of the intention, it was no longer helping me.

yum erase cloud-init

And now the hostname that I set in /etc/hostname survives a reboot. I’ll post more when I figure out why cloud-init is still running after initialization.
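
If you would rather keep cloud-init installed, it can also be told to leave the hostname alone; a minimal sketch, assuming the stock configuration layout:

    # /etc/cloud/cloud.cfg (or a drop-in under /etc/cloud/cloud.cfg.d/)
    preserve_hostname: true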

by Adam Young at March 26, 2015 06:31 PM

Red Hat Stack

An OpenStack Cloud that frees you to pursue your business

As your IT evolves toward an open, cloud-enabled data center, you can take advantage of OpenStack’s benefits: broad industry support, vendor neutrality, and fast-paced innovation.

As you move into implementation, your requirements for an OpenStack solution share a familiar theme: enterprise-ready, fully supported, and seamlessly integrated products.

Can’t we just install and manage OpenStack ourselves?

OpenStack is an open source project and freely downloadable. To install and maintain OpenStack you need to recruit and retain engineers trained in Python and other technologies. If you decide to go it alone, consider:

  1. How do you know OpenStack works with your hardware?
  2. Does OpenStack work with your guest instances?
  3. How do you manage and upgrade OpenStack?
  4. When you encounter problems, how would you solve them? Some examples:
Problem scenario: Security breach
With OpenStack from Red Hat: Dispatch a team of experts to assess. Issue a hotfix (and contribute the fix upstream for the benefit of all).
Do it yourself: Rely on your own resources to assess. Wait for a fix from the upstream community.

Problem scenario: Change of hardware / driver update
With OpenStack from Red Hat: Certified hardware and software partners continuously and jointly develop and test. Issue a hotfix (and contribute the fix upstream for the benefit of all).
Do it yourself: Contact the hardware provider; sometimes a best-guess effort for unsupported and uncertified software.

Problem scenario: Application problem
With OpenStack from Red Hat: Red Hat consulting services assess and determine whether the problem is with the application, OpenStack configuration, guest instance, hypervisor, or host Red Hat Enterprise Linux. Red Hat works across the stack to resolve and fix.
Do it yourself: Troubleshooting across the stack involves different vendors who do not have jointly certified solutions. Fixes come from a variety of sources or your own limited resources.

 

Thus the benefits of using OpenStack from Red Hat are:

Certification

Certified hardware partners

Red Hat has a broad ecosystem of certified hardware for OpenStack. Red Hat is a premier member of TSANet, which provides support and interoperability across vendor solutions.

Business Benefit: Provides a stable and long-term OpenStack cloud deployment. Helps you provide a high SLA to your customers.
Use Case: When problems arise you need solutions, not fingerpointing. The value of certified partners means Red Hat and its partners work together to resolve problems.

 

Certified software vendors

Red Hat Enterprise Linux OpenStack Platform provides an open pluggable framework for compute (nova), networking (neutron), and storage (cinder/glance) partner solutions.

Business Benefit: Choice. You are not locked into any one provider. You are not locked into one pricing and licensing model.
Use Case: You can integrate with an existing or select a new hypervisor, networking, or storage solution that meets your needs and changes as your future business demands evolve.

 

Certified guest OS on instance/VM

An OpenStack cloud platform runs virtualized guest instances with an operating system. OpenStack from Red Hat is certified to run Microsoft Windows (see certifications), Red Hat Enterprise Linux, and SUSE guest operating systems. Other operating systems are supported per this model.

 

Business Benefit: The cloud provider can offer a stable platform with a higher SLA, since there is support across the stack.
Use Case: If there is a problem with a guest/instance, you are not alone in getting support; Red Hat works with the OS provider to resolve the problem.

 

Integrated with Linux and the hypervisor

As Arthur detailed in his blog post, OpenStack requires a hypervisor to run virtual machines and to manage CPU, memory, networking, storage resources, security, and drivers. Read how Red Hat helped a customer solve their problem across the stack.

Business Benefit: For support and maintenance, Red Hat Enterprise Linux OpenStack Platform co-engineers the hypervisor, operating system, and OpenStack services to create a production-ready platform.
Use Case: If you encounter a performance, scalability, or security problem that requires driver-level, kernel, Linux, or libvirt expertise, Red Hat is equipped to resolve it.

 

A secure solution

Red Hat is the lead developer of SELinux which helps secure Linux. Red Hat has a team of security researchers and developers constantly hardening Red Hat Enterprise Linux.

Business Benefit: You can provide an OpenStack cloud that has government and military grade Common Criteria EAL4 and other certifications. Financial, healthcare, retail, and other sectors benefit from the military-grade security.
Use Case: Should a breach occur, you have one number to call to reach a team of experts who can diagnose the problem across the entire stack: operating system, hypervisor, and OpenStack.

 

Services expertise                    

Red Hat has extensive OpenStack design, deployment, and expert training experience across vertical industries, backed by proven Reference Architectures.

Business Benefit

You are not alone; you have a trusted partner as you walk the private cloud journey to deploy and integrate OpenStack in your environment.

Use Case

  1. Design, deploy, upgrade
  2. High availability
  3. Create an Open Hybrid Cloud
  4. Add Platform-as-Service
  5. ……

Lifecycle support

Upgrades, patches, and 24×7 support. OpenStack from Red Hat offers three years of Production life-cycle support and the underlying Red Hat Enterprise Linux has ten years of life-cycle support.

Business Benefit: Provides a long-term and stable OpenStack cloud platform. With Red Hat’s 24×7 support, you can provide a high SLA to your customers.
Use Case: Obtaining the latest features and fixes in a new release of OpenStack allows you to meet user requirements, with Red Hat testing and validation.

Upstream compatibility

Red Hat OpenStack is fully compatible with the upstream OpenStack community code.

Business Benefit: Red Hat is a Platinum member of the 200+ member foundation that drives the direction of OpenStack. You can have confidence that the Red Hat distribution adheres to the community direction and is not a one-off or “forked” solution.
Use Case: Red Hat represents your needs in the community. Red Hat’s commitment and leadership in the OpenStack community helps ensure that customer needs are more easily introduced, developed, and delivered.

Contributions to the OpenStack project

Red Hat is a leader in the greater Linux and OpenStack communities. Red Hat is the top contributor to the last four OpenStack releases and provides direction to several related open source projects.

Business Benefit: Using Red Hat Enterprise Linux OpenStack Platform gives you a competitive advantage. Red Hat is intimately familiar with the code and can best provide support, insight, guidance, and influence over feature development.
Use Case: If you need support or new features as your OpenStack cloud evolves, you can be confident that Red Hat will assist you and continue to evolve with the upstream project.

Conclusion

Red Hat offers you proven open source technology, a large ecosystem of certified hardware and software partners and expert consulting services to free you up to pursue your business. In part two of this post, I will elaborate on the integrated products Red Hat offers to build and manage an Open Hybrid Cloud.

The OpenStack mark is either a registered trademark/servicemark or trademark/service mark of the OpenStack Foundation, in the United States and other countries, and is used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
by Jonathan Gershater at March 26, 2015 04:35 PM

Tesora Corp

Interview with Doug Hellmann, OpenStack Contributor and Technical Committee Member

In this interview with Doug Hellmann, an OpenStack contributor at HP, he shares his involvement and contributions to the OpenStack Community.  He also explains his experience being on the Technical Committee and his thoughts on the state of OpenStack in 2015.

How long have you been involved in OpenStack?

I started working on OpenStack just before the Folsom Summit in 2012.  Our team spent a few weeks coming up to speed on the code base and processes, and understanding the issues at the time, to prepare to contribute at the summit.

Why is the success of OpenStack important to you?

The collaborative effort between the creators and users of OpenStack is an example of how large modern software projects should work.  Rather than working with one vendor who has to manage all feature requests and fixes, users are part of the Community of contributors who guide, make, and share improvements to the whole project.
Successfully navigating that process requires some new ways of thinking on everyone's part, but when we make it work there's a palpable feeling of doing something in a better way than we've done in the past. 

What makes you excited about OpenStack in 2015?

Frankly, the boring stuff.  We're talking a lot more about stability than new features, which means the community and the software are maturing.  We're dealing with our growth issues, both at a code level and as they affect the community.  We've been running flat out for a while now, and I think 2015 is the year we will get our second wind.

How have you contributed to the OpenStack Community and what would you say is your biggest contribution to OpenStack's success to date?

I have served three releases as the PTL for Oslo.  Before that I contributed as a core reviewer for ceilometer, the unified client, and the global requirements team.  I've submitted patches to many of the projects over the years.  
I am also serving my second term as a member of the Technical Committee, and as part of that duty I try to advise community members about the best way to tackle cross-project changes. 
The biggest project I've undertaken is leading the Oslo team to graduate much of the code we've had in the incubator, to create libraries for use by the other projects.  I've learned a lot about the complexity of coordinating sweeping changes with the teams working on projects consuming Oslo libraries.  Thanks to the hard work of the Oslo core team, and our liaisons, we are very close to having all of the existing incubated code graduated. 

Do you have any experience working with the OpenStack Trove project?

I have used databases provisioned by Trove, but I haven't used Trove directly myself. 

In your opinion, what do you think are the biggest benefits to using Trove?

Providing on-demand provisioning for new databases frees application developers from having to worry about a key piece of the infrastructure. 

What do you see as some of the challenges and opportunities of OpenStack Trove in 2015?

I haven't been keeping an eye on the day-to-day work the team is doing, but my impression from the outside of the project is that it is coming along well and I anticipate seeing Trove mature and become more stable over the next year. 

What have you enjoyed most about being a member of the OpenStack Technical Committee?

There is so much fascinating work going on within OpenStack, and I enjoy having an opportunity to interact with the diverse contributors throughout the community.  It's interesting to watch how the project teams approach problems differently.  It's also rewarding to be able to connect different groups who are working on similar problems without being aware of each other, and to see them come together and collaborate.

What is one thing about you that would surprise people?

I didn't grow up watching sports, but my wife is a big University of Georgia fan and I've grown to enjoy tailgating and going to football games since we've been married.  I'm still learning the subtleties of the game, but I'm fairly good at spotting penalties.

by 1721 at March 26, 2015 02:26 PM

Maish Saidel-Keesing

Installing OpenStack CLI clients on Mac OSX

I usually have a Linux VM that I use to perform some of my remote management tasks, such as OpenStack CLI commands.

But since I now have a Mac (and yes, I am enjoying it!!) I thought why not do it natively on my Mac. The official documentation on installing clients is on the OpenStack site.

This is how I got it done.

First, install pip:

easy_install pip

Now to install the clients (keystone, glance, heat, nova, neutron, cinder, swift and the new OpenStack client)

pip install python-keystoneclient python-novaclient python-heatclient python-swiftclient python-neutronclient python-cinderclient python-glanceclient python-openstackclient

First problem – no permissions.

Yes you do need sudo for some things…

sudo -H pip install python-keystoneclient python-novaclient python-heatclient python-swiftclient python-neutronclient python-cinderclient python-glanceclient python-openstackclient

Success!

Or so I thought…

Google led me here - https://github.com/major/supernova/issues/55

sudo -H pip uninstall six

And then

sudo -H easy_install six

And all was good

nova list

Quick and Simple!! Hope this is useful!

by Maish Saidel-Keesing (noreply@blogger.com) at March 26, 2015 11:00 AM

Christian Berendt

OpenStack User Survey - Deadline Apr 8

The OpenStack User Survey is an important information source. Please participate. The last report can be found in the article OpenStack User Survey Insights: November 2014. More details can be found in the User Survey Q&A.

As you know, before previous summits we have been running a survey of OpenStack users. We’re doing it again, and we need your help!

If you are running an OpenStack cloud, please participate in the survey. If you have completed the survey before, you can simply log back in to update your deployment details and answer a few new questions. Please note that if your survey response has not been updated in 12 months, it will expire, so we encourage you to take this time to update your existing profile so your deployment can be included in the upcoming analysis.

As a member of our community, please help us spread the word. We're trying to gather as much real-world deployment data as possible to share back with you. We have made the survey easier to fill out and added some new questions (e.g. packaging, containers).

The information provided is confidential and will only be presented in aggregate unless the user consents to making it public.

The deadline to complete the survey and be part of the next report is April 8 at 23:00 UTC.

Source: http://lists.openstack.org/pipermail/openstack/2015-March/012074.html

by Christian Berendt at March 26, 2015 10:23 AM

March 25, 2015

OpenStack Superuser

As Paypal.com approaches 100 percent OpenStack, the private cloud is here to stay

First Walmart, now Paypal: with billions of dollars of transactions now flowing through infrastructure based on OpenStack, 2015 is shaping up to be the year of the OpenStack-Powered Planet.

PayPal recently announced it is running “nearly 100 percent of traffic serving web/API applications and mid-tier services” on its internal private cloud based on OpenStack. The news broke just weeks after Walmart, the world’s largest consumer goods retailer by revenue, said it’s running more than 100,000 cores of OpenStack on its compute layer.

There will be more to come.

The ecosystem supporting OpenStack has really helped jumpstart adoption, as users have a number of choices ranging from startups to the largest tech companies in the world to help them build and operate their private clouds.

Walmart, for example, now has plans to build in more block storage and venture into software-defined networking using OpenStack projects such as Neutron and Cinder, and is currently building a multi-petabyte object store using Swift.

To talk about this major shift in the company, Walmart’s two cloud leads, James Downs and Amandeep Singh, will take the stage at the upcoming OpenStack Summit. Other large organizations who will be sharing their work in the OpenStack-sphere at the Summit include Adobe, Time Warner, NASA and eBay.

Stay tuned.

Mark Collier is a co-founder of OpenStack and currently serves as COO at the OpenStack Foundation. You can find him on Twitter at @sparkycollier.

Cover Photo by Quinn Dombrowski // CC BY NC

by Mark Collier at March 25, 2015 07:22 PM

Adam Young

Troubleshooting Keystone in a New Install

Recently heard complaints:

I’ve done a deployment, and every time I try to log in to the dashboard, I get “An error occurred authenticating. Please try again later.” Somewhat surprisingly, the only log that I’m noticing showing anything of note is the Apache error log, which reports ‘Login failed for user “admin”‘. I’ve bumped keystone — where I’d assume the error is happening — to DEBUG, but it’s showing exactly zero activity. How do I go about debugging this?’

Trying to enable LDAP with OpenStack/keystone in Juno release. All the horizon users return error “You are not authorized for any projects.” Similarly, all the OpenStack services are reported not to be authorized.’
What is supposed to happen:

  1. You Login to Horizon using admin and the correct password
  2. Horizon passes that to Keystone in a token request
  3. Keystone uses that information to create a token. If the user has a default project set, the token is scoped to the default project
  4. The token is returned to Horizon

Let’s take a deeper look at step 3.
In order to perform an operation on a resource in a project, a user needs to be assigned a role in that project. So the failure could happen at a couple of steps.

  1. The user does not exist in the identity backend
  2. The user has the wrong password
  3. The user has no role assignments
  4. The user has a default project assigned, but does not have a role assignment for that project

The Keystone configuration file

Most deployments run with Keystone reading its configuration values from /etc/keystone/keystone.conf. It is an ini file, with section headers.
In Juno and Icehouse, the storage is split into two pieces: Identity and Assignment. Identity holds users and groups. Assignment holds roles, role assignments, projects and domains. Let’s start with the simplest scenario.
Identity in SQL, Assignments in SQL:
This is what you get from devstack if you make no customizations. To confirm that you are running this way, look in your keystone.conf file for the sections that start with
[identity]
and
[assignment]
and look for the value driver. In a Devstack deployment that I just ran, I have

[identity]
driver = keystone.identity.backends.sql.Identity

Which confirms I am running with the SQL driver for Identity, and

[assignment]
driver = keystone.assignment.backends.sql.Assignment

Which confirms I am running with the SQL driver for Assignment
First steps
For Devstack, I get my environment variables set using

. openrc

and this will set:

$OS_AUTH_URL, $OS_NO_CACHE, $OS_TENANT_NAME, $OS_CACERT, $OS_PASSWORD, $OS_USERNAME, $OS_IDENTITY_API_VERSION, $OS_REGION_NAME, $OS_VOLUME_API_VERSION

$ echo $OS_USERNAME
demo

To change to the admin user:

$ export OS_USERNAME=admin
$ export OS_PASSWORD=FreeIPA4All

While we are trying to get people to move to the common CLI, older deployments may only have the keystone CLI to work with. I’m going to start with that.

$ keystone --debug token-get
DEBUG:keystoneclient.auth.identity.v2:Making authentication request to http://192.168.1.58:5000/v2.0/tokens
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 192.168.1.58
DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 3783
+-----------+----------------------------------+
| Property | Value |
+-----------+----------------------------------+
| expires | 2015-03-25T16:03:25Z |
| id | ec7c2d1f07c5414499c3cbaf7c59d4be |
| tenant_id | 69ff732083a64a1a8e34fc4d2ea178dd |
| user_id | 042b50edf70f484dab1f14e893a73ea8 |
+-----------+----------------------------------+

OK, what happens when I do keystone token-get? The CLI uses the information I provide to try to get a token:

$ echo $OS_AUTH_URL

http://192.168.1.58:5000/v2.0

OK…It is going to go to a V2 specific URL. And, to confirm:

$ echo $OS_IDENTITY_API_VERSION

2.0

We are using Version 2.0
The username, password and tenant used are

$ echo $OS_USERNAME
admin
$ echo $OS_PASSWORD
FreeIPA4All
$ echo $OS_TENANT_NAME
demo

Let’s assume that running keystone token-get fails for you. Let’s try to isolate the issue to the role assignments by getting an unscoped token:

$ unset OS_TENANT_NAME
$ echo $OS_TENANT_NAME

That should return a blank line. Now:

$ keystone token-get
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| expires | 2015-03-25T16:14:28Z |
| id | 2a3ce489422342f2b6616016cb43ebc2 |
| user_id | 042b50edf70f484dab1f14e893a73ea8 |
+----------+----------------------------------+

If this fails, it could be one of a few things:

  1. User does not exist
  2. Password is wrong
  3. User has a default tenant that is invalid

How can we check:
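
One quick check takes Horizon out of the picture entirely: request a token directly from Python (a sketch using python-keystoneclient’s v2.0 client with the same credentials and endpoint as above):

    from keystoneclient import exceptions
    from keystoneclient.v2_0 import client

    try:
        keystone = client.Client(username='admin',
                                 password='FreeIPA4All',
                                 auth_url='http://192.168.1.58:5000/v2.0')
        # Getting an unscoped token means the user and password are good.
        print('auth OK, token: %s' % keystone.auth_token)
    except exceptions.Unauthorized:
        print('bad username/password, or an invalid default tenant')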

Using Admin Token

Bootstrapping the Keystone install requires putting users in the database before there are any users defined. Most installers take advantage of an alternate mechanism called the ADMIN_TOKEN or SERVICE_TOKEN. To see the value for this, look in keystone.conf section:
[DEFAULT]
for a value like this:
#admin_token = ADMIN
Note that devstack follows the best practice of disabling the admin token by commenting it out. This token is extremely powerful and should be disabled in normal usage, but it is invaluable for fixing broken systems. To enable it, uncomment the value and restart Keystone.

Using the Common CLI

The keystone command line has been deprecated with an eye toward using the openstack client. Since you might be deploying an old version of OpenStack that has different library dependencies, you might not be able to install the latest version on your server, but you can (and should) run an updated version on your workstation, which will then be capable of talking to older versions of Keystone.
To perform operations using the common CLI, you need to pass the endpoint and admin_token as command-line parameters.

The os-url needs to be the publicly routed URL to the admin interface. The firewall port for that URL needs to be open.

$ openstack --os-token ADMIN --os-url http://192.168.1.58:35357/v2.0/ user list
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 042b50edf70f484dab1f14e893a73ea8 | admin |
| eb0d4dc081f442dd85573740cfbecfae | demo |
+----------------------------------+----------+
$ openstack --os-token ADMIN --os-url http://127.0.0.1:35357/v2.0/ role list
+----------------------------------+-----------------+
| ID | Name |
+----------------------------------+-----------------+
| 1f069342be2348ed894ea686706446f2 | admin |
| 2bf27e756ff34024a5a9bae269410f44 | service |
| dc4e9608b6e64ee1a918030f23397ae1 | Member |
+----------------------------------+-----------------+
$ openstack --os-token ADMIN --os-url http://192.168.1.58:35357/v2.0/ project list
+----------------------------------+--------------------+
| ID | Name |
+----------------------------------+--------------------+
| 69ff732083a64a1a8e34fc4d2ea178dd | demo |
| 7030f12f6cb4443cbab8f0d040ff023b | admin |
+----------------------------------+--------------------+

Now, to check to see if the admin user has a role on the admin project:

$ openstack --os-token ADMIN --os-url http://192.168.1.58:35357/v2.0/ user role list --project admin admin

+----------------------------------+-------+---------+-------+
| ID | Name | Project | User |
+----------------------------------+-------+---------+-------+
| 1f069342be2348ed894ea686706446f2 | admin | admin | admin |
+----------------------------------+-------+---------+-------+

If this returns nothing, you probably have found the root of your problem. Add the assignment with:
$ openstack --os-token ADMIN --os-url http://192.168.1.58:35357/v2.0/ role add --project admin --user admin admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | 1f069342be2348ed894ea686706446f2 |
| name | admin |
+-------+----------------------------------+

by Adam Young at March 25, 2015 07:00 PM

March 24, 2015

OpenStack Superuser

Scale & Maturity: Thoughts on the OpenStack Mid-Cycle Operators Meetup

A couple weeks back I attended the OpenStack Operators Mid-Cycle Meetup in Philadelphia.

One of the best parts of these meetups is that the agenda is all formed in public and everyone gets a say in it. Some of the things I was most concerned about made the list (RabbitMQ, packaging); some did not get on the main agenda (database maintenance, Puppet), but many side groups formed during the event and these topics were covered in lunch conversations and in the lobby.

The interesting part of this summit was hearing about other operators’ problems and solutions. This was more focused, yet with a larger audience, than the previous sessions in Paris. I think a real sense of camaraderie and support for shared solutions was achieved.

A breakout session at the Mid-Cycle Meetup. Photo: Matt Fischer.

As I was listening to people discuss their issues with OpenStack and how others had solved them, I realized that OpenStack operators have different issues at different scales and maturity levels. When I think about scale and maturity, it’s not necessarily about the number of nodes or the number of customers; it’s more about the number of resources you have, the number of services you provide, the maturity of your processes (such as deployment), and to some extent how many of your problems you’ve solved with automation.

Our team started at a small scale. We had four people and the goal was to stand up OpenStack. With four people, you are limited in scope and have to judiciously focus your resources. As our team grew and we worked through forming our deployment and automation processes, we were able to spend more time on improving our service offerings and adding more services. We can also go back and clean up the technical debt that accumulates as you build an OpenStack deployment. Before these tools and processes are fully in place (and they are never perfect), making changes can take away valuable time.

For example, when an operator finds a bug, it takes a lot of resources and time to get a fix for that bug into production. This includes debugging, filing a bug, fixing the code, pushing a fix, begging for reviews, doing a cherry-pick, begging for more reviews, waiting for or building a package, deploying the new code. Many operators stop around the “filing a bug” step of this process.

Medium-sized operators or ones with more mature processes will sometimes work on a fix (depending on the issue), and may or may not have systems in place to allow them to hold the patch locally and build local packages. On the other hand, larger operators who have good systems in place can give the issue to the <insert OpenStack service here> Team. They may have 10 people working on it, some of them core members. They have a full continuous integration/automation team that has solved package builds and deployments for them.

Our goal has always been to increase our scale of services, not only via more resources but through automation and creating tools/processes that allow us to offer services for the same amount of resource investment. The main reason for this is that our customers don’t care about Keystone or Neutron; these are just building blocks for them. They really want services like Domain Name System (DNS), load-balancing-as-a-service (LBaaS), firewall-as-a-service (FWaaS) and database-as-a-service (DBaaS). But until the processes and tools are solid for the core components, it’s hard to find time to work on those, because while customers may not know what Keystone is, they sure care when it doesn’t work.

So how does any of this relate to the conference, besides the fact that I was daydreaming about it in the lobby? What is clear to me after the sessions is that we have some specific areas where we’re going to work on process improvements and tooling.

My top three are:

  • RabbitMQ monitoring and tooling
  • Speeding up and clarifying our development & deployment process
  • Investigating alternatives to heavy-weight OS packages for deploying OpenStack code

When we revisit this again in six months at the next Mid-Cycle, I suspect that number two will remain on the list, probably forever, since you can always make this process better. I’m certain we’ll have a new number one, and I’m pretty hopeful about the options for number three.

What will these investments get us? Hopefully more time for second-level service offerings and happier customers.

Fischer, also a brewer of beer and a hiker of mountains in his spare time, blogs here.

Cover Photo of the Fairmount neighborhood in Philadelphia by Bob Jagendorf // CC BY NC

by Matt Fischer at March 24, 2015 06:00 PM

Christian Berendt

B1 Systems presentations at the OpenStack Summit in Vancouver

We have three accepted talks for the OpenStack Summit in Vancouver. This is amazing and we are really happy to be there. Thank you, everybody, for voting. We are looking forward to meeting you at one of our presentations or anywhere else at the summit in Vancouver.

by Christian Berendt at March 24, 2015 03:40 PM

Cloudscaling Corporate Blog

DevOps Event @ EMC World 2015

I am super excited to announce that EMC is sponsoring a DevOps event at EMC World 2015.  As many of you guessed, with the acquisition of Cloudscaling, and the recent creation of the EMC{code} initiative, we are trying to become a company that engages more directly with developers and the DevOps community in particular.

We have some great guests who are going to come and speak, and some of the EMC{code} evangelists will be leading sessions as well.  Here’s a list of the currently planned sessions:

  • Engaging the New Developer Paradigm
  • The DevOps Toolkit
  • The Enterprise Journey to DevOps
  • Docker 101
  • Container Management at Scale
  • Deploying Data-Centric APIs
  • Predictive Analytics to Prevent Fraud
  • Deploying Modern Apps with CloudFoundry

This will not be your normal EMC event and does not require registration for EMC World to attend.  So if you are in Las Vegas May 3rd, come join us!

REGISTER HERE

by Randy Bias at March 24, 2015 03:03 PM

Rob Hirschfeld

My OpenStack Super User Interview [cross-post]

This post, my interview for the OpenStack Super User site, originally appeared there on 3/23 under the title “OpenStack at 10: different code, same collaboration?”

With over 15 years of cloud experience, Rob Hirschfeld also goes way back with OpenStack. His involvement dates to before it was officially founded and he was also one of the initial Board Members. In addition to his role as Individual Director, Hirschfeld currently chairs the DefCore committee. He’ll be speaking about DefCore at the upcoming Vancouver Summit with Alan Clark, Egle Sigler and Sean Roberts.

He talks to Superuser about the importance of patches, priorities for 2015 and why you should care about OpenStack vendors making money.

Superuser: You’ve been with the project since before it started. Where do you hope it will be in five years?

In five years, I expect that nearly every line of code will have been replaced. The thing that will endure is the community governance and interaction models that we’re working out to ensure commercial collaboration.

[3/24 Added Clarification Note: I am humbled watching traditionally open-unfriendly corporations using OpenStack to learn how to become open source collaborators.  Our governance choices will have long-lasting ramifications in the industry.]

What is something that a lot of people don’t know about OpenStack?

It was essentially a “rewrite fork” of Eucalyptus, created because they would not accept patches.  That’s a cautionary tale about why accepting patches is essential, one that should not get lost from the history books.

Any thoughts on your first steps to the priorities you laid out in your candidacy profile?

I’ve already started to get DefCore into an execution phase, with additional Board and Foundation leadership joining the effort.  We’ve set a very active schedule of meetings with two sub-committees running in parallel…It’s going to be a busy spring.

You say that the company you founded, RackN, is not creating an OpenStack product. How are you connected to the community?

RackN supports OpenCrowbar, which provides a physical ready state infrastructure for scale platforms like OpenStack. We are very engaged in the community from below by helping make other distributions, vendors and operators successful.

What are the next steps to creating the “commercially successful ecosystem” you mentioned in your candidacy profile? What are the biggest obstacles to this?

We have to make stability and scale a critical feature. This will mean slowing features and new projects; however, I hear a lot of frustration that OpenStack is not focused on delivering a solid base.

Without a base, the vendors cannot build profitable products.  Without profits, they cannot keep funding the project. This may be radical for an open project, but I think everyone needs to care more if vendors are making money.

What are some more persistent myths about the cloud?

That the word cloud really means anything.  Everyone has their own definition.  Mine is “infrastructure with an API” but I’d happily tell you it’s also about process and ops.

Who are your real-life heroes?

FIRST (For Inspiration and Recognition of Science and Technology) founders Dean Kamen and Woodie Flowers. They executed a real vision about how to train for both competition and collaboration in the next generation of engineers.  Their efforts in building the next generation of leaders really impact how we should do open source collaboration. That’s real innovation.

What do you hope to get out of the next summit?

First, I want to see vendors passing DefCore requirements.  After that, I’d like to see the operators get more equal treatment and I’m hoping to spend more time working with them so they can create places to share knowledge.

What’s your favorite/most important OpenStack debate?

There are two.  First, I think the API vs. implementation debate is a critical growth curve for OpenStack.  We need to mature past being so implementation-driven so we can have stand-alone APIs.

Second, I think the “benevolent dictator” discussion is useful. Since we are never going to have one, we need a real discussion about how to define and defend project wide priorities in a meaningful way.  Resolving both items is essential to our long-term viability.


by Rob H at March 24, 2015 02:51 PM

Sean Dague

OpenStack Emacs Tools

Over the past few weeks I've been tweaking my emacs configs and writing some new modes to help with OpenStack development. Here is a non-comprehensive list of some of the emacs integrations I'm finding useful in developing for OpenStack (especially things that have come up in conversation). URLs are provided, though I won't walk through all the configuration in depth.

tramp

Tramp is a built-in facility in emacs that lets you open files on remote machines via ssh (and other protocols). This means your emacs runs locally, with all the latency gains that brings, configured as you would like, while editing can be done across multiple systems. A remote file url looks like /servername:/file/path. All the normal file completion that you expect works after that point.

I tend to work on code that doesn't need a full stack locally on my laptop or desktop, while full stack code happens on my NUC devstack box, and tramp lets me do both from a single emacs instance.

More info about tramp at emacswiki.
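For example (hostname and path are hypothetical), opening a remote file is just a normal find-file:

C-x C-f /devstack-nuc:/opt/stack/nova/nova/compute/manager.py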

fly-hack

Emacs has an in-buffer syntax checker called flymake. There are various tutorials out there for integrating pyflakes or flake8 into it. However, in OpenStack we have hacking, which extends flake8 with new rules; every project also turns on custom ignores, and many projects extend flake8 further with rules specific to that repo.


fly-hack uses the flake8 from the .tox/pep8 venv for each project, along with that project's tox.ini config, so when in Nova, nova rules will be enforced, and when in Cinder, cinder rules will be enforced. Mousing over an error will pop up what it is (an import-ordering violation like H306, for example). It has a fallback mode when using it over tramp that's a reasonably sane flake8 least common denominator for OpenStack projects.

More info at fly-hack github page - https://github.com/sdague/fly-hack

stacktest

Our testr / subunit / testtools testing toolchain gets a bit heavy-handed when trying to iterate on test development. testr discovery takes 20-30s on the Nova source tree, even if you are only trying to run one test. I became inspired at the Ops Summit in Philly to see if I could do better, and stacktest.el was born.
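To see the difference (the test name here is hypothetical), compare paying the full discovery cost against invoking a single test module directly, which is roughly the shortcut stacktest.el automates:

testr run nova.tests.unit.compute.test_claims                 # full discovery first
python -m testtools.run nova.tests.unit.compute.test_claims   # runs just that module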


It's mostly a port of nosemacs, which allows you to run tests from an emacs buffer; however, it does so using tox, subunit, or testtools, depending on whether you want to run a top-level target, test a file, test an individual test function, and/or use pdb. It works over tramp, it works with pdb, and it uses the subunit-trace tooling if available.

I've now bound F12 to stacktest-one, which is a super fast way to iterate on test development.

More info at the stacktest github page - https://github.com/sdague/stacktest

pycscope

OpenStack is a lot of code, and uses a ton of libraries. git grep works ok in a single repo, but the moment some piece of code ends up calling into an oslo library, that breaks down.

Peter Portante, OpenStack Swift contributor, maintains a pythonized version of cscope. It parses the AST of all the files to build a quite rich symbol cscope database. This lets you search for definitions (searching down), calling points (searching up), and references (searching sideways), which very quickly lets you dive through a code path and figure out where you end up.


The only drawback is that the AST parse is time-consuming on something as large as the Nova tree, especially if you index all the .tox directories, which I do to let myself get all the way back through the library stacks that we include.
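Generating the database itself is straightforward (pip install shown for convenience; I believe -R is the recursive indexing flag, so check the README if it misbehaves):

pip install pycscope
pycscope -R    # walks the tree and writes a cscope.out database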

You can learn more about pycscope at its github page - https://github.com/portante/pycscope

flyspell-prog-mode

Emacs includes a spell checker called flyspell, which is very useful for text files. What I only learned last year is that there is also a flyspell-prog-mode, which is like flyspell but only acts on comments and strings as semantically parsed by Emacs. This helps avoid spelling mistakes when writing inline documentation.


More information at Emacs wiki.

lambda-mode

This is totally gratuitous, but fun. There is a small mode that does a display adjustment of the word 'lambda' to an actual 'λ'. It's a display adjustment only; it's still 6 characters in the buffer. But it makes the lambda expressions pop out a bit more.


More information at Emacs wiki.

The important thing about having an extensible editor is actually extending it to fit your workflow. If you are an emacs user, hopefully this will give you some fun things to dive into. If you use something else, hopefully this will inspire you to go looking into your toolchain for similar optimizations.

by Sean Dague at March 24, 2015 12:00 PM

Opensource.com

Why OpenStack is different from other open source projects

How does OpenStack differ from other large, popular open source projects and how do these differences affect the way the project is growing and maturing?

by stephenrwalli at March 24, 2015 09:00 AM

Mirantis

Mirantis OpenStack Summit Vancouver presentations


Please join Mirantis at the Vancouver OpenStack Summit. We’ll be giving the following presentations:

Monday, 5/18
Optimizing Contributions with Globally Distributed Teams

2:00pm 
Join this workshop panel to learn tips and tricks on how to maximize your effectiveness and efficiency when working on large open source projects such as OpenStack.
Kamesh Pemmaraju (Mirantis), Beth Cohen (Verizon), Diane Mueller (Red Hat), Karin Levenstein (Rackspace), Fernando Oliveira (Verizon)

Taming Neutron with Collaborative Documentation

3:40pm
Everyone knows that networking presents one of the biggest challenges in planning, deploying, and operating an OpenStack cloud, and the lack of sufficient practical documentation contributes significantly to the problem. Here we will discuss how a group of people with a wide range of skills and expertise joined forces in the last development cycle to create a new collaborative Networking Guide, and in the process established new rules for creating OpenStack documentation, including defining topics, engaging the community and SMEs, and combining skill sets to achieve a large and complicated goal.
Nick Chase and Sean Collins (Mirantis), Matthew Kassawara and Matt Dorn (Rackspace), Edgar Magana (Workday)

Swift vs Ceph from an Architectural Standpoint
5:30pm
Ceph may not be an “official” OpenStack project, but it’s common enough in OpenStack deployments that our Architectural Design Assessments often include a discussion of Ceph versus Swift, and Ceph versus other Cinder backends. This talk will go into detail on how to design a storage infrastructure that fulfills the requirements of the project without breaking the bank.
Christian Huebner (Mirantis)

Tuesday, 5/19
APIs Matter

11:15am
A working group of community members from across the OpenStack projects has come together to focus on the public face of OpenStack:  RESTful HTTP APIs, which can hurt the user’s first impression of OpenStack. In this session, we will discuss the goals of the API working group, the guidance delivered in the last release cycle, as well as the pain points and ongoing debates. We’ll also demonstrate a new testing framework developed by Chris Dent called “Gabbi” that aims to clarify both how the OpenStack contributors *think about* APIs as well as how we validate that our APIs are consistent, user friendly, and well structured.
Jay Pipes, (Mirantis), Chris Dent (Red Hat)

High performance Neutron using SR-IOV and Intel 82599
12:05pm
This presentation will describe Mirantis’ effort to add SR-IOV support to Neutron using a common Intel 82599 network chip and other enhancements that enable bonding SR-IOV interfaces in the VM. These changes can provide high performance, low latency, multi-tenant networks using a commodity chip built into a large number of server platforms.
Greg Elkinbard (Mirantis)

What’s coming for IPv6 and L3 in Neutron
2:50pm
IPv6 in OpenStack Neutron has made progress enough that the addressing and L2 issues have been resolved.  Neutron still faces a number of issues when it comes to IPv6 and L3.  This talk will discuss these issues and how the Neutron team is addressing them in Kilo and future releases. 
Sean Collins (Mirantis), Carl Baldwin (HP)

Using Rally for OpenStack Certification at Scale
3:40pm
It goes without saying that one of the most important things in all OpenStack clouds from big to small is to be 100% sure that everything works as expected BEFORE taking on production workloads. Join us to see how Rally can fully automate these steps for you and save dozens if not hundreds of hours. 
Boris Pavlovic (Mirantis), Jesse Keating (Blue Box)

Mad Stacks: Beyond Thunderdome
4:40pm
The Thunderdome is back. This full-house session from the Paris Summit is reloaded for Vancouver with a panel of experts who have deployed OpenStack in each of five methodologies — services, appliances, distros, DIY, and consulting. Come hear them make their cases for the best OpenStack deployment model, and see the scores guest judges give them as they lift their score cards, Olympic style, to rate performances. The Thunderdome will strip away vendor shill and cloudwashing, with host Randy Bias forcing panelists to give direct and unambiguous responses to tough questions about how to move OpenStack from PoC to production. 
Boris Renski (Mirantis), Jesse Proudman (Blue Box), Chris Kemp (Nebula), Randy Bias (EMC), Christopher MacGown (Piston), Caroline McCrory (GigaOm)

Wednesday, 5/20
Ask the Experts: Are Containers a Threat to OpenStack?

9:00am
This all-star panel discussion will discuss and debate if containers are a threat to OpenStack and will be recorded as a “Speaking in Tech” Podcast, which is distributed by Europe’s largest tech publication, The Register.
Boris Renski (Mirantis), Greg Knieriemen (HDS), Manju Ramanathpura (HDS), Kenneth Hui (EMC), Caroline McCrory (GigaOm), Jan Mark Holzer (Red Hat), Jesse Proudman (Blue Box)

Running Kubernetes on top of OpenStack with Application Catalog
11:50am
In this session you will learn about various options for using Kubernetes on OpenStack and how to make it easier. We’ll also perform a live demo of different use cases with an application catalog approach. 
Georgy Okrokvertskhov (Mirantis)

Thursday, 5/21
Building Your First Ceph Cluster for OpenStack— Fighting for Performance, Solving Tradeoffs

9:50am
In this talk, speakers from Mirantis will share practical advice and lessons learned (sometimes the hard way) on what it takes to power an OpenStack cloud with Ceph.
Dmitriy Novakovskiy and Greg Elkinbard (Mirantis)

Deliver Cloud-Ready Applications to End-Users Using the Glance Artifact Repository
11:50am
After this talk you will have a deep understanding of the capabilities of the new Glance Artifact Repository and how to use them to make your applications discoverable in multi-tenant OpenStack clouds.
Alexander Tivelkov (Mirantis)

What your customers don’t know about OpenStack might hurt you
11:50am 
In this presentation we’ll look at the biggest land mines of explaining the complexities of OpenStack to app developers and other consumers, including what to explain — and what to avoid — regarding governance and the integrated release cycle, including assessing their engineering chops and appetite for getting into the weeds of OpenStack. We’ll also examine how to talk with new users about the best way to become engaged in the community, without scaring them away from open source altogether. 
David Fishman (Mirantis), Jesse Proudman (Blue Box), Kenneth Hui (EMC), Lisa-Marie Namphy (HP)

Moving an AWS Workload to OpenStack
1:30pm
You’ve decided to make the switch to OpenStack. Great! Now what do you do with all of those apps already running on Amazon Web Services? In this talk, we’ll look at some of your options for moving your existing applications and workloads from AWS to OpenStack.
Nick Chase (Mirantis)

The Multi-hypervisor OpenStack Cloud: Are We There Yet?
4:10pm
One of the visions for OpenStack is for it to be the unifying common fabric for all cloud and infrastructure components in the enterprise, but the options for full-featured and reasonably performant programmable networking with multiple hypervisors have been limited. This presentation covers what has changed in the last six months, giving you a solid understanding of today’s options for using multiple hypervisors in the same OpenStack cloud, and why you may (or may not) need it in your environment.
Evgeniya Shumakher and Dmitriy Novakovskiy (Mirantis), Maxim Datskovskiy (HP)

Detecting targeted cyber attacks in the cloud

4:10pm
These days, we see an increasing number of cyber-attacks affecting both public and private clouds. By the end of this session, attendees will understand the specifics of such attacks and be able to detect Advanced Persistent Threats (APTs) or general purpose malware activity in their own cloud networks by using an intrusion detection system.
Alexander Adamov (Mirantis)


by Denise Walters at March 24, 2015 03:10 AM

March 23, 2015

OpenStack Blog

2015 OpenStack T-Shirt Design Contest

It’s that time of year again and we’re looking for a new design to grace OpenStack’s T-Shirts. Here’s your chance to show us your creative talent and submit an original design for our 2015 OpenStack T-shirt Design Contest!

If you’d like to participate simply send a sketch of your design to events@openstack.org.

Deadline: April 17, 2015

The winning design will be showcased on T-Shirts given out at an upcoming OpenStack event as well as on the new OpenStack online store!


For some inspiration, check out last year’s winning design by Jaewin.  Get your pencils sharpened or fire up your design software of choice and send us your sketches! We’re excited to see what you’ll come up with!

2014 Winning T-Shirt Design

Guidelines:

  • The design must be your own original, unpublished work and must not include any third-party logos or copyrighted material; by entering the competition, you agree that your submission is your own work
  • Design should be one that appeals to the majority of the OpenStack developer community
  • Design may include line art, text, and photographs
  • Your design is for the front of the shirt and may encompass an area up to 10″ x 10″ (inches)
  • Design may use a maximum of three colors

The Fine Print:

  • One entry per person, please. And it must be original art. Content found on the internet rarely has the resolution needed for print, and it’s considered unlawful to use without permission.
  • Submissions will be screened for merit and feasibility, and we reserve the right to make changes such as image size, ink, or t-shirt color before printing.
  • By submitting your design, you grant permission for your design to be used by the OpenStack Foundation including, but not limited to, the OpenStack website, the OpenStack online store, and future marketing materials.
  • The OpenStack Foundation reserves the right to final decision.
  • The creator of the winning design will receive attribution on the T-shirt and public recognition on the OpenStack website!

 

by Kendall Waters at March 23, 2015 07:52 PM

OpenStack Superuser

OpenStack at 10: different code, same collaboration?

In this series of interviews, we’re talking to the new Individual Directors on the 2015 OpenStack Board of Directors. These Directors provide oversight of the Foundation and its budget, strategy and goals. The 24-person Board is composed of eight Individual Directors elected by the Individual Members, eight directors elected by the Gold Members and eight directors appointed by the Platinum Members.

With over 15 years of cloud experience, Rob Hirschfeld also goes way back with OpenStack. His involvement dates to before it was officially founded and he was also one of the initial Board Members. In addition to his role as Individual Director, Hirschfeld currently chairs the DefCore committee. He'll be speaking about DefCore at the upcoming Vancouver Summit with Alan Clark, Egle Sigler and Sean Roberts.

He talks to Superuser about the importance of patches, priorities for 2015 and why you should care about OpenStack vendors making money.

Superuser: You've been with the project since before it started, where do you hope it will be in five years?

In five years, I expect that nearly every line of code will have been replaced. The thing that will endure is the community governance and interaction models that we're working out to ensure commercial collaboration.

What is something that a lot of people don't know about OpenStack?

It was essentially a "rewrite fork" of Eucalyptus created because they would not accept patches.  That's a cautionary tale about why accepting patches is essential, one that should not get lost from the history books.

Any thoughts on your first steps to the priorities you laid out in your candidacy profile?

I've already started to get DefCore into an execution phase with additional Board and Foundation leadership joining into the effort.  We've set a very active schedule of meetings with two sub-committees running in parallel...It's going to be a busy spring. 

You say that the company you founded, RackN, is not creating an OpenStack product. How are you connected to the community?

RackN supports OpenCrowbar which provides a physical ready state infrastructure for scale platforms like OpenStack. We are very engaged in the community from below by helping make other distributions, vendors and operators successful.

What are the next steps to creating the “commercially successful ecosystem” you mentioned in your candidacy profile? What are the biggest obstacles to this?

We have to make stability and scale a critical feature. This will mean slowing features and new projects; however, I hear a lot of frustration that OpenStack is not focused on delivering a solid base. 

Without a base, the vendors cannot build profitable products.  Without profits, they cannot keep funding the project. This may be radical for an open project, but I think everyone needs to care more if vendors are making money.

What are some more persistent myths about the cloud?

That the word cloud really means anything.  Everyone has their own definition.  Mine is "infrastructure with an API" but I'd happily tell you it's also about process and ops.

Who are your real-life heroes?

FIRST (For Inspiration and Recognition of Science and Technology) founders Dean Kamen and Woodie Flowers. They executed a real vision about how to train for both competition and collaboration in the next generation of engineers.  Their efforts in building the next generation of leaders really impact how we should do open source collaboration. That's real innovation.

What do you hope to get out of the next summit?

First, I want to see vendors passing DefCore requirements.  After that, I'd like to see the operators get more equal treatment and I'm hoping to spend more time working with them so they can create places to share knowledge.

What's your favorite/most important OpenStack debate?

There are two.  First, I think the API vs. implementation debate is a critical growth curve for OpenStack.  We need to mature past being so implementation-driven so we can have stand-alone APIs.

Second, I think the “benevolent dictator” discussion is useful. Since we are never going to have one, we need a real discussion about how to define and defend project wide priorities in a meaningful way.  Resolving both items is essential to our long-term viability.

Cover Photo by Michael Cory // CC BY NC

by Nicole Martinelli at March 23, 2015 07:05 PM

OpenStack in Production

Not all cores are created equal

Within CERN's compute cloud, the hypervisors vary significantly in performance. We generally run the servers for around 5 years before retirement, and there are around 3 different configurations selected each year through public procurement.

Benchmarking in High Energy Physics is done using a benchmark suite called HEPSpec 2006 (HS06). This is based on the C++ programs within the SPEC 2006 suite, run in parallel according to the number of cores in the server. The performance range is around a factor of 3 between the slowest and the fastest machines [1].


When machines are evaluated after delivery, the HS06 rating for each hardware configuration is saved into a hardware inventory database.

Defining a flavor for each hardware type was not attractive, as there are 15 different configurations to consider and users would not easily find out which flavors have free cores. Instead, users ask for the standard flavors; with an m1.small one-core virtual machine, you could land on a hypervisor giving 6 HS06 or on one giving 16. However, accounting and quotas are done using virtual cores, so the 6 and 16 HS06 virtual cores are considered equivalent.

In order to improve our accounting, we therefore wanted to provide the performance of the VM along with the metering records giving the CPU usage through ceilometer. Initially, we thought that this would require additional code to be added to ceilometer, but it is actually possible using the standard ceilometer functions with transformers and publishers.


The following approach was implemented.
  • On the hypervisor, we added an additional meter 'hs06' which provides the CPU rating of the VM normalised by the HS06 performance of the hypervisor. This value is determined using the HS06 value stored in the Hardware Database which can be provided to the hypervisor via a Puppet Fact.
  • This data is stored in ceilometer, in addition to the default 'cpu' record
The benefits of this approach are
  • There is no need for external lookup to the hardware database to process the accounting
  • No additional rights are required for the accounting process (such as the right to read the mapping between VM and hypervisor)
  • Scenarios such as live migration of VMs from one hypervisor to another of different HS06 are correctly handled
  • No modifications to the ceilometer upstream code are required which both improves deployment time and does not invalidate upstream testing
  • Multiple benchmarks can be run concurrently. This allows a smooth migration from HS06 to a following benchmark HS14 by providing both sets of data.
  • Standard ceilometer behaviour is not modified so existing programs such as Heat which use this data can continue to run
  • This assumes no overcommitment of CPU. Further enhancements to the configuration would be possible in this area but this would require further meters.
  • The information is calculated directly on the hypervisor, so it is scalable; it is also calculated inline, which avoids race conditions when the virtual machine is deleted and the VM-to-hypervisor mapping is no longer available
The assumptions are
  • The accounting is based on the clock ticks delivered to the hypervisor. This will vary in cases where the hypervisor is running a more recent version of the operating system with a later compiler (and thus probably has a higher HS06 rating). Running older OS versions is therefore correspondingly less efficient.
  • The cloud is running at least the Juno OpenStack release
To implement this feature, the pipeline capabilities of ceilometer are used. These are configured automatically by the puppet-ceilometer component into /etc/ceilometer/pipeline.yaml.
The changes required are in several blocks. The 'sources' section is indicated by
---
sources:

A further source needs to be defined to make the CPU metric available for transformation. This polls the 'cpu' meter every 10 minutes (600 seconds) and sends the data to the hs06 sink:

    - name: hs06_source
      interval: 600
      meters:
          - "cpu"
      sinks:
          - hs06_sink
The 'hs06_sink' processing is defined later in the file, in the 'sinks' section
sinks:
The entry below takes the number of virtual cores of the VM and scales it by 10 (the example HS06 CPU performance per core) and by 0.98 (the virtualisation overhead factor). It is reported in units of HS06s (i.e. HepSpec 2006). The value of 10 would be derived from the Puppet HS06 value for the machine divided by the number of cores in the server (from the Puppet fact processorcount); for example, a hypervisor rated at 160 HS06 with 16 cores gives 10 HS06 per core. Puppet can be used to configure a hard-coded value per hypervisor that is delivered to the machine as a fact and used to generate the pipeline.yaml configuration file.
    - name: hs06_sink
      transformers:
          - name: "arithmetic"
            parameters:
                target:
                    name: "hs06"
                    unit: "HS06"
                    type: "gauge"
                    expr: "$(cpu).resource_metadata.vcpus*10*0.98"
      publishers:
          - notifier://
Once these changes have been done, the ceilometer daemons can be restarted to get the new configuration.
 service openstack-ceilometer-compute restart
If there are errors, they will be reported to /var/log/ceilometer/compute.log. These can be checked with
egrep "(ERROR|WARNING)" /var/log/ceilometer/compute.log
The first messages, like "dropping sample with no predecessor", are to be expected, as they relate to handling differences between previous and current values (such as CPU utilisation).
After 10 minutes or so, ceilometer will poll the CPU and generate the new hs06 value, which can be queried using the ceilometer CLI.
ceilometer meter-list | grep hs06
will include the hs06 meter
| hs06 | cumulative | HS06 | c6af7651-5fc5-4d37-bf57-c85238ee098c | 1cdd42569f894c83863e1b76e165a70c | c4b673a3bb084b828ab344a07fa40f54 |
| hs06 | cumulative | HS06 | e607bece-d9df-4792-904a-3c4adca1b99c | 1cdd42569f894c83863e1b76e165a70c | c4b673a3bb084b828ab344a07fa40f54 |
and the last 5 entries in the database can be retrieved
ceilometer sample-list -m hs06 -l 5
produces the output
+--------------------------------------+------+-------+--------+------+---------------------+
| Resource ID | Name | Type | Volume | Unit | Timestamp |
+--------------------------------------+------+-------+--------+------+---------------------+
| 1fa28676-b41c-4673-9d31-1fa83711725a | hs06 | gauge | 12.0 | HS06 | 2015-03-22T09:19:49 |
| 1fa28676-b41c-4673-9d31-1fa83711725a | hs06 | gauge | 12.0 | HS06 | 2015-03-22T09:16:49 |
| 1fa28676-b41c-4673-9d31-1fa83711725a | hs06 | gauge | 12.0 | HS06 | 2015-03-22T09:13:49 |
| 1fa28676-b41c-4673-9d31-1fa83711725a | hs06 | gauge | 12.0 | HS06 | 2015-03-22T09:10:49 |
| b812c69c-3c9f-4146-952e-078a266b11c5 | hs06 | gauge | 11.0 | HS06 | 2015-03-22T08:54:25 |
+--------------------------------------+------+-------+--------+------+---------------------+

References

  1. Ulrich Schwickerath - "VM benchmarking: update on CERN approach" http://indico.cern.ch/event/319819/session/1/contribution/7/material/slides/0.pdf
  2. Ceilometer architecture http://docs.openstack.org/developer/ceilometer/architecture.html
  3. Basic introduction to ceilometer using RDO - https://www.rdoproject.org/CeilometerQuickStart
  4. Ceilometer configuration guide for transformers http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-pipeline-configuration.html
  5. Ceilometer arithmetic guide at https://github.com/openstack/ceilometer-specs/blob/master/specs/juno/arithmetic-transformer.rst

by Tim Bell (noreply@blogger.com) at March 23, 2015 06:40 PM

Red Hat Stack

Co-Engineered Together: OpenStack Platform and Red Hat Enterprise Linux

OpenStack is not a software application that just runs on top of any random Linux. OpenStack is tightly coupled to the operating system it runs on, and choosing the right Linux operating system, as well as an OpenStack platform, is critical to providing a trusted, stable, and fully supported OpenStack environment.

OpenStack is an Infrastructure-as-a-Service cloud management platform, a set of software tools, written mostly in Python, to manage hosts at large scale and deliver an agile, cloud-like infrastructure environment, where multiple virtual machine Instances, block volumes and other infrastructure resources can be created and destroyed rapidly on demand.


In order to implement a robust IaaS platform providing API access to low level infrastructure components, such as block volumes, layer 2 networks, and others, OpenStack leverages features exposed by the underlying operating system subsystems, provided by Kernel space components, virtualization, networking, storage subsystems, hardware drivers, and services that rely on the operating system’s capabilities.

Exploring how OpenStack Flows

OpenStack operates as the orchestration and operational management layer on top of many existing features and services. Let’s first examine how the Nova compute service is implemented to better understand the OpenStack design concept. Nova is implemented through 4 Linux services: nova-api, which is responsible for accepting Nova’s API calls; nova-scheduler, which implements a weighting and filtering mechanism to schedule the creation of new instances; nova-conductor, which performs database operations; and nova-compute, which is responsible for creating and destroying the actual instances. A message bus, implemented through Oslo Messaging and instantiated since Red Hat Enterprise Linux OpenStack Platform 5 using RabbitMQ, is used for inter-service communication.

To create and destroy instances, which are usually implemented as virtual machines, the nova-compute service uses a supporting backend driver to make libvirt API calls, while libvirt manages the qemu-kvm virtual machines on the host.

All OpenStack services are implemented in a similar manner using integral operating system components, and each OpenStack distribution may decide on different implementations. Here we will focus on the implementation choices made with Red Hat Enterprise Linux OpenStack Platform 6. For example, the DHCP service is implemented using the dnsmasq service, Security Groups are implemented using Linux IPTables, and Cinder commonly uses LVM logical volumes for block volumes and scsi-target-utils to share tgt volumes over the iSCSI protocol.
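A few illustrative commands show how these services surface as ordinary operating system components (unit, volume group, and rule names are typical of a RHEL-OSP 6 node; adjust to your deployment):

systemctl status openstack-nova-compute    # the nova-compute Linux service
ps aux | grep dnsmasq                      # Neutron DHCP agents spawn dnsmasq processes
iptables -S | grep neutron                 # Security Group rules rendered as IPTables rules
lvs cinder-volumes                         # Cinder block volumes as LVM logical volumes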

This is an oversimplified description of the complete picture and many additional sub-systems are also at play, such as SELinux with corresponding SELinux policies for each service and files in use, Kernel namespaces, hardware drivers and many others.

When deploying OpenStack in a highly available configuration, which is commonly found in real-world production environments, the story becomes even more complex, with HAProxy load-balancing traffic, Pacemaker active-active clusters that use multicast for heartbeat, network interfaces bonded using the kernel’s LACP bonding modules, Galera, which implements a multi-master database replication mechanism across the controller nodes, and the RabbitMQ message broker, which uses an internal queue mirroring mechanism across controller nodes.
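Each of these moving parts can be inspected with its own native tooling; a few illustrative checks on a controller node:

pcs status                                         # Pacemaker cluster state
rabbitmqctl cluster_status                         # RabbitMQ queue-mirroring cluster membership
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size'"   # Galera multi-master replication size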

Co-engineering the Operating system with OpenStack

Red Hat’s OpenStack technologies are purposefully co-engineered with the Red Hat Enterprise Linux operating system, and integrated with all its subsystems, drivers, and supporting components, to help deliver trusted, long-term stability and a fully supported, production-ready OpenStack environment.

Red Hat is uniquely positioned to support customers effectively across the entire stack: we maintain an engineering presence that proactively works together with each of the communities across the entire stack, starting with the Linux kernel, all the way up to the hypervisor and virtualized guest operating system. In addition, Red Hat Enterprise Linux OpenStack Platform maintains the largest OpenStack-certified partner ecosystem, working closely with OpenStack vendors to certify 3rd-party solutions and to work through support cases when external solutions are involved.

Red Hat Enterprise Linux OpenStack Platform also benefits from the rich hardware certification ecosystem of Red Hat Enterprise Linux, which works with major hardware vendors to provide driver compatibility. For example, the Neutron single root I/O virtualization (SR-IOV) feature is built on top of the SR-IOV certified kernel driver. Similarly, support for offloading tunneling protocols (VXLAN, NVGRE), which is key for performance, is derived from the Red Hat Enterprise Linux driver support.

We are doing this not only to deliver world-class, production-ready support for the whole platform stack, but also to drive new features requested by customers, since adding new functionality to OpenStack often requires invasive changes to a large portion of the stack, from the OpenStack APIs down to the kernel.

Introducing New NFV Features – NUMA, CPU Pinning, Hugepages

The Network Functions Virtualization (NFV) use case, which required adding support for NUMA, CPU pinning, and huge pages, is a good example of this approach. To support these features, work began at the kernel level, both for memory management and KVM. The Red Hat Enterprise Linux 7.0 kernel added support for 2M and 1G huge pages for KVM and virtual machines, and IOMMU support with huge pages. Work on the kernel continued, and the Red Hat Enterprise Linux 7.1 kernel added support for advanced NUMA placement and dynamic huge pages per NUMA node.

In parallel with the features in Red Hat Enterprise Linux Kernel, changes were made to qemu-kvm-rhev to utilize these features and were exposed via the libvirt API and XML.

Red Hat engineers worked on getting the needed changes into the OpenStack Nova project to determine availability of huge pages on the host as well as assign them to individual VMs when requested. Nova was then enhanced so that users could define hugepages as a requirement for all VMs booted from a given image, via image properties, or for all VMs booted with a given flavor, via flavor extra specs. The scheduler was enhanced to track the availability of huge pages as reported by the compute service and then confirm that the VM is scheduled to a host capable of fulfilling the hugepages requirement specified.
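The user-facing end of this work is a pair of simple properties; a sketch of the syntax (flavor and image names are hypothetical, and exact property support depends on your release):

nova flavor-key m1.large.hugepages set hw:mem_page_size=large
glance image-update --property hw_mem_page_size=large <image-id>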

Coordinating the support for these features across the entire stack (kernel -> qemu-kvm -> libvirt -> nova-compute -> nova-scheduler -> nova-api) required several different teams, working in several upstream communities, to work closely together. Thanks to Red Hat’s strong engineering presence in each of the respective communities, and the fact that most of these engineers were within the same company, we were able to drive each of these features into the upstream code bases and coordinate backporting them to Red Hat Enterprise Linux and Red Hat Enterprise Linux OpenStack Platform, so that they could be used together: RHEL 7.1 serves as the base operating system for Red Hat Enterprise Linux OpenStack Platform 6, which is based on the upstream Juno release.

Supporting Changes  

Red Hat Enterprise Linux 7.0 and 7.1 also included numerous enhancements to better support Red Hat Enterprise Linux OpenStack Platform 6. Some of these enhancements include kernel changes in the core networking stack to better support VXLAN with TCP Segmentation Offloading (TSO) and Generic Segmentation Offloading (GSO), which used to lead to guest crashes; fixed issues with the DHCP client sending requests over VXLAN interfaces; SELinux policy fixes and enhancements for Glance image files and other services; enhancements fixing issues in qemu-kvm for librbd (Ceph); changes in iscsi-initiator-utils preventing potential hangs while rebooting hosts; and much more.

Conclusion

In order to implement an IaaS solution and provide API access to low-level infrastructure components, OpenStack needs to be tightly integrated with the operating system it runs on, making the operating system a crucial factor for long-term OpenStack stability. Red Hat Enterprise Linux OpenStack Platform is co-engineered and integrated with various RHEL services and subsystems, leading to an IaaS cloud environment that enterprise customers can trust. To provide the world-class support Red Hat customers are used to, Red Hat actively participates in upstream communities across all OpenStack projects, positioning Red Hat to support OpenStack effectively across all components in use. This active participation also enables Red Hat to introduce and drive new OpenStack features and functionality requested by customers.

by Arthur Berezin at March 23, 2015 03:42 PM

Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part II: Implementation Best Practices

Prerequisites

The compute nodes should have support for SR-IOV, i.e. support in the BIOS, in the operating system, and in the hardware (in our case, the network adapter).

Setup

  1. Enable Intel VT-d or the AMD IOMMU specifications in the BIOS and kernel
  2. Load the Network driver with the right parameters to activate Virtual Functions
  3. Enable ‘sriovnicswitch’ and ‘openvswitch’ ML2 Mechanism drivers (openvswitch is needed for the network node)
  4. Configure VLAN and/or other network type settings in the Neutron ML2 configuration files
  5. Configure supported_pci_vendor_devs in ML2 SR-IOV configuration file if needed
  6. Configure pci_passthrough_whitelist in the compute node’s Nova configuration to specify which VFs are allocated and mapped to which physical network
  7. Create a network, create a port with vnic_type ‘direct’ or ‘macvtap’, and launch an instance with the port attached (a minimal configuration sketch follows this list)
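A minimal configuration sketch for steps 4-7 (all values are illustrative; 8086:10ca happens to be the PCI ID of the Intel 82576 Virtual Function used in the examples below, so substitute your own device IDs and physical network names):

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
mechanism_drivers = openvswitch,sriovnicswitch
[ml2_type_vlan]
network_vlan_ranges = physnet1:80:85

# /etc/neutron/plugins/ml2/ml2_conf_sriov.ini
[ml2_sriov]
supported_pci_vendor_devs = 8086:10ca

# /etc/nova/nova.conf on the compute node
pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ca", "physical_network": "physnet1"}

# Step 7: create a network and a direct port, then boot with the port attached
neutron net-create sriov-net --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 83
neutron port-create sriov-net --binding:vnic_type direct
nova boot --flavor m1.small --image rhel7 --nic port-id=<port-id> sriov-vm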

What does it look like from the hypervisor perspective?

Using the lspci utility we can see the physical functions (PFs) and the virtual functions (VFs).

In the following example we have one NIC with 2 ports – one PF and 7 VFs for each.

# lspci | grep -i 82576
05:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
05:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
05:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.7 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)

We can see all the virtual functions of a PF:

# ls -l /sys/class/net/enp5s0f1/device/virtfn*
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn0 -> ../0000:05:10.1
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn1 -> ../0000:05:10.3
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn2 -> ../0000:05:10.5
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn3 -> ../0000:05:10.7
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn4 -> ../0000:05:11.1
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn5 -> ../0000:05:11.3
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn6 -> ../0000:05:11.5

The first VF has index 0 and the last has index 6.

We can see the interface names of all the virtual functions that are not allocated as a PCI device to an instance:

# ls /sys/class/net/enp5s0f1/device/virtfn*/net
/sys/class/net/enp5s0f1/device/virtfn0/net:
enp5s16f1
/sys/class/net/enp5s0f1/device/virtfn1/net:
enp5s16f3
/sys/class/net/enp5s0f1/device/virtfn2/net:
enp5s16f5
/sys/class/net/enp5s0f1/device/virtfn3/net:
enp5s16f7
/sys/class/net/enp5s0f1/device/virtfn4/net:
enp5s17f1

Using the ‘ip link’ command we can see details of the VFs.

# ip link show enp5s0f1
12: enp5s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
    link/ether 44:1e:a1:73:3d:ab brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
    vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
    vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
    vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
    vf 4 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
    vf 5 MAC fa:16:3e:0e:3f:0d, vlan 83, spoof checking on, link-state auto
    vf 6 MAC fa:16:3e:b9:b6:5c, vlan 82, spoof checking on, link-state auto

In the above example, 2 VFs are allocated to instances. VF 5 is configured with VLAN ‘83’.

The instance perspective

Again using ‘lspci’, we can see that the instance sees the interface as a PCI device:

# lspci | grep -i 82576
00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)

Using ‘ethtool’, we can see that the driver is ‘igbvf’, which is Intel’s driver for 82576 Virtual Functions:

# ethtool -i eth0
driver: igbvf
version: 2.0.2-k
firmware-version:
bus-info: 0000:00:04.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

Current status

  • Missing support for live migration
    • An SR-IOV port may be directly connected to its VF, or it may be connected with a macvtap device that resides on the host, which is then connected to the corresponding VF. Using a macvtap device makes live migration with SR-IOV possible.
  • No hot plug/unplug of ‘SRIOV’ ports
  • Missing support for Security Groups
  • No support for launching an instance with a ‘SRIOV’ port without creating a port first

Links

Openstack Wiki – SR-IOV Passthrough For Networking

Kilo Openstack summit – SRIOV etherpad

Nova bugs with tag ‘pci-passthrough’

by emoralesrh at March 23, 2015 03:22 PM

Rob Hirschfeld

OpenStack DefCore Process Draft Posted for Review [major milestone]

OpenStack DefCore Committee is looking for community feedback about the proposed DefCore Process.

March has been a month for OpenStack DefCore milestones.  At the March Board meeting, we approved the first official DefCore Guideline (called DefCore 2015.03) and we are poised to commit the first DefCore Process draft.

Once this initial commit is approved by the DefCore Committee (expected at the DefCore Scale.8 Meeting, 3/25 @ 9 PT), we’ll be ready for broader input from the community using the standard OpenStack Gerrit review process.  If you are not comfortable with Gerrit, we’ll take your input any way that you want to give it, except via telepathy (we’ve already got a lot on our minds).

Note: We’re also looking for input on the 2015.next Guideline, targeted for 2015.04.

The DefCore Process documents the rules (who, what, when and where) that will govern how we create the DefCore Guidelines.  By design, it has to be detailed and specific without adding complexity and confusion.  The why of DefCore is all the work we did on the principles that shape the process.

This process reflects nearly a year of gestation, starting from the June 2014 DefCore face-to-face.  One of the notable recent refinements was to organize material into time phases and to be more specific about who is responsible for specific actions.

To make review easier, I’ve reposted the draft.  Comments are welcome here and on the patch (and here after it lands).

DRAFT: OpenStack DefCore Process 2015A (reposted from OpenStack/DefCore)

This document describes the DefCore process required by the OpenStack bylaws and approved by the OpenStack Technical Committee and Board.

Expected Time line:

Time Frame   Milestone   Activities                            Lead By
-3 months    S-3         “preliminary” draft (from current)    DefCore
-2 months    S-2         ID new Capabilities                   Community
-1 month     S-1         Score capabilities                    DefCore
Summit       S           “solid” draft                         Community
                         Advisory items selected               DefCore
+1 month     S+1         Self-testing                          Vendors
+2 months    S+2         Test Flagging                         DefCore
+3 months    S+3         Approve Guidance                      Board

Note: DefCore may accelerate the process to correct errors and omissions.

Process Definition


The DefCore Guideline process has two primary phases: Draft and Review. During the Draft phase (A), the DefCore Committee works with community leaders to update and score the components of the guideline. During the Review phase (B), the general community and vendors have an opportunity to provide input and check the guidelines (C) against actual implementations. The Review phase ends with Board approval of the draft guideline (D).

This section provides specific rules and structure for each phase.

NOTE: To ensure continuity of discussion, process components defined below must _not_ reuse numbers in future revisions. The numbering pattern follows draft, section and sub-item numbering, e.g.: 2015A.B2.2. This requirement may create numbering gaps in future iterations that will help indicate changes.

Guidelines Draft Phase (A)

Starting: S-3

A1. New Guidelines Start From Previous Guidelines

  1. Receive DefCore Dedicated Section Guidelines
  2. New Guidelines start from the previous Board approved document.
  3. New Guidelines are given the preliminary name of the target year and .next.

A2. Community Groups Tests into Capabilities

  1. DefCore Committee coordinates community activities with the TC and PTLs to revise the capabilities based on current technical needs and functionality.
  2. Groupings may change between iterations.
  3. Tests must have unique identifiers that are durable across releases and grouping changes to be considered.
  4. Tests must be under OpenStack governance.

A3. PTLs Recommend Changes to Designated Sections

A4. DefCore Committee identifies required capabilities

  1. DefCore uses Board approved DefCore scoring criteria to evaluate capabilities.
  2. DefCore needs Board approval to change scoring criteria.
  3. Scoring criteria factor or weights cannot change after Draft is published.
  4. DefCore identifies cut-off score for determining that a capability is required.
  5. Capabilities will not be removed without being deprecated in the previous Guideline.
  6. Capabilities will not be added without being advisory in the previous Guideline.

A5. Foundation recommends OpenStack Components and OpenStack Platform Scope

  1. Foundation recommends capabilities to include in each OpenStack Component.
  2. Foundation recommends which Components are required for the OpenStack Platform.

A6. Additional Capabilities and Tests

  1. DefCore will work with the community to define new capabilities.
  2. Test grouping for new capabilities will be included in the DefCore documents.
  3. DefCore will publish a list of missing and gapped capabilities.

A7. DefCore Committee creates recommendation for Draft.

  1. DefCore Committee coordinates activities to create draft.
  2. DefCore Committee may choose to ignore recommendations with documented justification.

Guidelines Review Phase (B)

Starting: Summit

B1. All Reference Artifacts are reviewed via Gerrit

  1. Draft Guideline
  2. Designated sections
  3. Test-Capability groupings
  4. Flagged Test List
  5. Capability Scoring criteria and weights
  6. Not in Gerrit: Working materials (spreadsheets, etc)

B2. Presentation of Draft Guidelines

  1. DefCore will present Draft Guidelines to the Board for Review
  2. DefCore will distribute Draft Guidelines to the community
  3. Foundation provides Draft to vendors for review

B3. Changes to Guideline made by Gerrit Review Process

  1. Community discussion including vendors must go through Gerrit
  2. All changes to draft must go through Gerrit process

B4. For Gerrit reviews, DefCore CoChairs act as joint PTLs

  1. Board committee members of DefCore serve as “core” reviewers (+2).
  2. Requests for changes must be submitted as patches by the requesting party.
  3. DefCore Committee members cannot proxy for community change requests.

Community Review & Vendor Self-Test (C)

Starting: S and continues past S+3

C1. Vendor Self-Tests

  1. Vendors are responsible for executing these tests identified by the DefCore committee.
  2. The Foundation may, but is not required to, provide tooling for running tests.
  3. The Foundation may, but is not required to, define a required reporting format.
  4. Self-test results may be published by Vendors in advance of Foundation review, but must be clearly labeled as “Unofficial Results – Not Yet Accepted By The OpenStack Foundation”. Vendors who publish self-tests MUST provide them in the same format that would be submitted to the OpenStack Foundation but MAY provide additional formats if they choose to do so.
  5. Self-test results cannot be used as proof of compliance.

C2. Vendor submits results to Foundation for review

  1. The Foundation determines the acceptable format for submissions.
  2. The Foundation has final authority to determine if Vendor meets criteria.
  3. The Foundation must provide a review of the results within 30 days.

C3. Vendor Grievance Process

  1. Vendors may raise concerns with specific tests to the DefCore committee.
  2. The DefCore committee may choose to remove tests from a Guideline (known as flagging).
  3. The DefCore committee must respond to vendor requests to flag tests within 30 days.
  4. Vendors may not request flagging all tests in a capability.

C4. Results of Vendor Self-Tests will be open

  1. The Foundation will make the final results of approved vendors available to the community.
  2. The Foundation will not publish incomplete or unapproved results.
  3. Only “pass” results will be reported. Skipped and failed results will be omitted from the reports.
  4. Reports will include individual test results, not just capability scoring.

C5. API Usage Data Report

  1. The Foundation will provide DefCore committee with an open report about API usage based on self-tests.
  2. To the extent the data is available, capabilities beyond the DefCore list will be included in the report.

Guideline Approval (D)

Starting: S+3

D1. Board will review and approve DefCore Guideline from draft

  1. Guidelines are set at the Platform, Component and Capability level only.
  2. The text guideline document is authoritative over the JSON representation.
  3. DefCore will provide a JSON representation for automated scoring.
  4. Guidelines only apply to the identified releases (a.k.a. release tags).

D2. DefCore Committee has authority on test categorization

  1. Can add flagged tests before and after Guideline approval.
  2. Cannot change Test to Capability mappings after approval.
  3. Maintains the test to capability mappings in the JSON representation.

D3. Designated sections only enforced for projects with required capabilities

  1. Designated sections may be defined for any project.
  2. Designated sections apply to the release (a.k.a. release tags) identified in the Guideline.

D4. Guidelines are named based on the date of Board approval

  1. Naming pattern will be the 4-digit year, a dot, and the 2-digit month (e.g., 2015.03).

by Rob H at March 23, 2015 02:14 PM

Sébastien Han

OpenStack: reserve memory on your hypervisors

One major use case for operators is to be able to reserve a certain amount of memory on the hypervisor. This is extremely useful when you have to recover from failures. Imagine that you run all your virtual machines on shared storage (Ceph RBD, Sheepdog, or NFS). The major benefit of running your instances on shared storage is that it eases live migration and evacuation. However, if a compute node dies, you want to make sure that you have enough capacity on the other compute nodes to relaunch your instances. Given that the nova host-evacuate call goes through the scheduler again, you should get an even distribution.

But how do you make sure that you have enough memory on the other hypervisors? Unfortunately, there is no real memory restriction mechanism. In this article I will explain how we can mimic such behavior.


First attempt

Since we don’t really want to play with memory oversubscription, make sure to always set ram_allocation_ratio to 1 in your nova.conf. Then restart your nova-compute process.

There are not many options for doing memory housekeeping; actually, I only see one: the reserved_host_memory_mb option. Nova has many default scheduler filters, but I will be focusing on the memory one, the RamFilter. It calculates the amount of memory available on the compute host minus reserved_host_memory_mb. This value is reported by the nova-compute resource statistics, and the scheduler uses this report to correctly choose compute nodes. That way we can preserve a certain amount of memory on the compute node.
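Both options live in nova.conf on the compute nodes; a minimal sketch, using the 32768 MB reservation discussed below:

[DEFAULT]
ram_allocation_ratio = 1.0
reserved_host_memory_mb = 32768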



The tricky part

Now say we have reserved 32768 MB of memory. From the Nova point of view, this value is dedicated to the host system, nothing less, nothing more. The problem is that during a failure scenario this option does not solve the re-scheduling issue. The only thing I can think of at the moment is a hook mechanism that changes this value to, for example, 512 MB, thus freeing up 32256 MB of memory on each of the other compute nodes. This hook can be set up via your preferred configuration management system. Once you have fixed the dead hypervisor, you should probably move some of the virtual machines back to it. Whilst doing this, you should change reserved_host_memory_mb back to its original value.
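One way to script such a hook, assuming the crudini utility is available on your compute nodes (any configuration management tool can achieve the same):

crudini --set /etc/nova/nova.conf DEFAULT reserved_host_memory_mb 512
service nova-compute restart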


I’m not sure if something more can be done using other scheduler filters.

March 23, 2015 02:09 PM

Anne Gentle

Male allies for women in tech: What’s needed?

I realized the other day that I have given my “Women in Tech: Be That Light” presentation half a dozen times in the last year. One question that I still want a great answer for is when a man in the audience asks, “What can I do to make it better? How can I be that light?” I have ideas from my own experiences, and I also point to the training courses and Ally Skills workshops offered by the Ada Initiative.


On a personal level, here’s my short list based on my own experiences. My experiences are colored by my own privileges being white, straight, married with an amazing partner, a parent, living in a great country in a safe neighborhood, working in a secure job. So realize that even while I write my own experiences at a specific place in my career, all those stations in life color my own views, and may not directly help people with backgrounds dissimilar to mine.

What do women in tech need? How can I help?

  • Be that friendly colleague at meetups, especially to the few women in the room, while balancing the fact that she probably doesn’t want to be called out as uniquely female or an object to be admired. If you already know her, try to introduce her to someone else with common interests and make connections. If you don’t already know her, find someone you think she would feel comfortable speaking with to say hello. It’s interesting, sometimes I’m completely uncertain about approaching a woman who’s the only “other” woman at a meetup. So, women should also try to find a commonality — maybe one of her coworkers could introduce you to her. For women, it’s important to make these connections in friendly and not competing ways, because oddly enough, when I’m the second woman in the room I don’t want to make the other woman feel uncomfortable either!
  • Realize that small annoyances over years add up to real frustration. I don’t point this out to say “don’t be annoying” but rather, be a great listener and be extremely respectful. Micro annoyances over time add up to women departing technical communities in droves. See what you can do in small ways, not just large, to keep women in your current tech communities.
  • For recruiting, when new women show up online on mailing lists or IRC or Github, please do answer questions with a “there are no questions too small or too large” attitude. I never would have survived my first 90 days working on OpenStack if it weren’t for Jay Pipes and Chuck Thier. Jay patiently helped me set up a real development environment by walking me through his setup on IRC. And since he was used to Github and going to Launchpad/Bazaar himself, he didn’t make me feel dumb for asking. Chuck didn’t laugh too hard when I tried to spell check the HTTP header “referer” to “referrer.” I felt like any other newbie, not a “new girl” with these two. (Woops, and I should never use the term “girl” for anyone over the age of 18.)
  • Recognize individuality when talking to team members, regardless of visible differences like gender or ethnicity. I struggle with this myself, having to pause before talking about my kids or my remodeling projects, since not everyone is interested! I struggle with assumptions about people all the time, and have to actively fight them myself. For men, you don’t want to assume an interest in cars or sports, so really this applies regardless of gender. All humans struggle with finding common interests without making assumptions.
  • See if you can do small, non-attention-drawing actions that ensure the safety of women in your communities. With the OpenStack Summit being held in different cities twice a year, I’ve been concerned for my personal safety as a woman traveling alone. Admitting that fear means I try to be more savvy about travel, but I still make mistakes like letting my phone battery die after calling a cab in another city after 11 at night. If you see a woman at a party alone, see if you can first make her feel welcome, but then also ensure there are safety measures for her traveling after the event.
  • If you see something, say something, and report correctly and safely for both the bad actor and target. This is really hard to do in the moment, believe me, I’ve been there. For me, being prepared is best, and knowing the scenarios and reporting methods ahead of time gives me the slightly better confidence I can do the right thing in the moment even if I’m shocked or scared. Find the “good and bad” ways to deal with incidents through this excellent Allies_training page.

If you’ve read this far, you really do want to make life better as a male ally. Realize that it’s okay to make mistakes — I’ve made them and learned from them over the years. This inclusion work by allies is not the easiest work to do, nor is it rewarding really. It’s the work of being a good human, and we’re all going to screw up. If someone points out a foible to you, such as saying “girls” instead of “women,” say “thank you” and move on, promising to do better next time.

If you think you already do all these things, make sure you look for ways to expand your reach to other minority groups and less privileged participants. I’m trying to do better with the physically different people I encounter at work. I would like to find ways to work well with people suffering from depression. I’ve got a son with Type I diabetes, what sort of advocacy can I do for people with unique medical needs? I’m asking myself how to make a difference. How about you? How can you do your part to equalize the tech industry?

by annegentle at March 23, 2015 11:58 AM

Opensource.com

Getting started guide, making your first OpenStack commit, and more

The Opensource.com weekly look at what is happening in the OpenStack community and the open source cloud at large.

by Jason Baker at March 23, 2015 07:00 AM

March 21, 2015

OpenStack in Production

Nova quota usage - synchronization

Nova quota usage frequently gets out of sync with the real usage consumption.
We have been hitting this problem for a couple of releases, and it’s increasing with the number of users/tenants in the CERN Cloud Infrastructure.

In nova there are two configuration options (“max_age” and “until_refresh”) that define when the quota usage should be refreshed. In our case we have configured them with “-1”, which means the quota usage must be refreshed every time the “_is_quota_refresh_needed” method is called.
For more information about these options you can see a great blog post by Mike Dorman at http://t.co/Q5X1hTgJG1
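For reference, the configuration described above looks like this in nova.conf (option names as they appeared in Nova at the time):

[DEFAULT]
max_age = -1
until_refresh = -1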

This worked well in the releases before Havana: the quota would get out of sync and be refreshed the next time a tenant user performed an operation (e.g. create/delete/…).
However, in Havana, with the introduction of “user quotas” (https://wiki.openstack.org/wiki/ReleaseNotes/Havana#Quota), this problem started to be more frequent even when forcing the quota to refresh every time.

At the CERN Cloud Infrastructure a tenant usually has several users. When a user creates/deletes/… an instance and the quota gets out of sync, it affects all users in the tenant. The quota refresh only updates the resources of the user performing the operation, not all tenant resources. This means that within a tenant, the quota usage will only be fixed if the user who owns the out-of-sync resource performs an operation.

The source of quota desync is very difficult to reproduce; in fact, all our attempts to reproduce it consistently have failed.
In order to fix the quota usage, the operator needs to manually calculate the quota that is in use and update the database. This process is very cumbersome and time consuming, and it can lead to the introduction of even more inconsistencies in the database.
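To give an idea of what that manual calculation involves, a read-only check might compare the recorded usage with the actual instances, along these lines (a sketch against the Nova schema of that era; ‘<project>’ is a placeholder, and you should back up the database before ever writing to it):

# mysql nova -e "SELECT project_id, user_id, resource, in_use \
    FROM quota_usages WHERE deleted = 0 AND project_id = '<project>';"
# mysql nova -e "SELECT project_id, user_id, COUNT(*) AS instances, \
    SUM(vcpus) AS cores, SUM(memory_mb) AS ram \
    FROM instances WHERE deleted = 0 AND project_id = '<project>' \
    GROUP BY project_id, user_id;"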

In order to improve our operations we developed a small tool to check which quotas are out of sync and fix them if necessary.
The tool is available in CERN Operations github at: https://github.com/cernops/nova-quota-sync

How to use it?

usage: nova-quota-sync [-h] [--all] [--no_sync] [--auto_sync]
                       [--project_id PROJECT_ID] [--config CONFIG]

optional arguments:
  -h, --help            show this help message and exit
  --all                 show the state of all quota resources
  --no_sync             don't perform any synchronization of the mismatch
                        resources
  --auto_sync           automatically sync all resources (no interactive)
  --project_id PROJECT_ID
                        searches only project ID
  --config CONFIG       configuration file

The tool calculates the resources in use and compares them with the quota usages.
For example, to see all resources in quota usages that are out of sync:

# nova-quota-sync --no_sync

+-------------+----------+--------------+----------------+----------------------+----------+
| Project ID  | User ID  |  Instances   |     Cores      |         Ram          |  Status  |
+-------------+----------+--------------+----------------+----------------------+----------+
| 58ed2d48... | user_a   |  657 -> 650  |  2628 -> 2600  |  5382144 -> 5324800  | Mismatch |
| 6f999252... | user_b   |    9 -> 8    |    13 -> 11    |    25088 -> 20992    | Mismatch |
| 79d8d0a2... | user_c   |  232 -> 231  |  5568 -> 5544  |  7424000 -> 7392000  | Mismatch |
| 827441b0... | user_d   |   42 -> 41   |    56 -> 55    |   114688 -> 112640   | Mismatch |
| 8a5858da... | user_e   |    2 -> 4    |     2 -> 4     |     1024 -> 2048     | Mismatch |
+-------------+----------+--------------+----------------+----------------------+----------+

The quota usage synchronization can be performed interactively per tenant/project (don’t specify the argument --no_sync) or automatically for all “mismatch” resources with the argument “--auto_sync”.
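For example, to review and fix a single project interactively, or to fix every mismatch without prompting (project ID shortened as in the table above):

# nova-quota-sync --project_id 58ed2d48...
# nova-quota-sync --auto_sync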

This tool needs access to the nova database. The database endpoint should be defined in the configuration file (it can be nova.conf). Since it reads and updates the database, be extremely careful when using it.

Note that quota reservations are not considered in the calculations or updated.

by Belmiro Moreira (noreply@blogger.com) at March 21, 2015 11:54 AM

March 20, 2015

OpenStack Superuser

Superuser weekend reading

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community.

Got something you think we should highlight? Tweet, blog, or email us!

In Case You Missed It

Percona ambassador George O. Lorch gives a great guide on getting started as an OpenStack contributor. Lorch suggests a mentor may come in handy when it comes to the mechanics. Read on to see his other tips!

Rackspace’s Anne Gentle describes her journey working on OpenStack documentation. Gentle compares it to her first taste of "cool, disruptive technology applied to an old problem," namely KangaROOS, those sneakers with Velcro-fastened pockets that were the hottest thing on the playground back in the 1980s.

Database-as-a-service technologies offer considerable promise in helping address many data-related challenges, explains Tesora’s Amrith Kumar in his InfoWorld blog post. Kumar describes five specific ways OpenStack Trove can help manage databases. In related news, Tesora also compiled a spreadsheet that includes the most popular DBaaS solutions on the market here.

Steve Martinelli over at IBM offers up advice on leveraging OpenStackClient as your unified command line interface. "It’s fitting that I publish this write-up at this current time. OpenStackClient v1.0.3 was released last week and this release includes a large number of feature requests and bug fixes. But the even more exciting news is that the OpenStack Technical Committee voted in favor of including OpenStackClient in its list of official projects!"


We feature user conversations throughout the week, so tweet, blog, or email us your thoughts!

Cover Photo by Sean Ganann // CC BY NC

by Hong Nga Nguyen at March 20, 2015 11:28 PM

OpenStack Blog

OpenStack Community Weekly Newsletter (Mar 13 – 20)

Feature freeze + Kilo-3 development milestone available

We just hit Feature Freeze, so please do not approve changes that add features or new configuration options unless those have been granted a feature freeze exception. This is also String Freeze, so you should avoid changing translatable strings. If you have to modify a translatable string, you should give a heads-up to the I18N team. Finally, this is also DepFreeze so you should avoid adding new dependencies (bumping oslo or openstack client libraries is OK until RC1). If you have a new dependency to add, raise a thread on openstack-dev about it.

The kilo-3 development milestone was tagged; it contains more than 200 features and 825 bugfixes added since the kilo-2 milestone 6 weeks ago. More details in the full announcement.

The joy of contributing to OpenStack

Contributing to OpenStack doesn’t have to be painful. To ease these pain points, OpenStack developed free upstream training. You can sign up for the next one by May 2.

Another nice read on the topic is Why your first OpenStack commit will always be the hardest, an interview with Susanne Balle, distinguished technologist with HP Cloud working on platform services.

Leveraging OpenStackClient as your unified command line interface

OpenStackClient v1.0.3 was released last week and this release includes a large number of feature requests and bug fixes. But the even more exciting news is that the OpenStack Technical Committee voted in favor of including OpenStackClient in its list of official projects! The newly integrated OpenStackClient project will include python-openstackclient, cliff and os-client-config.

Handling High Email Volume with sup

Over the last year, the openstack-dev mailing list has averaged 2500 messages every month. Staying on top of that much email can be challenging, especially with some of the consumer-grade email clients available today. Doug Hellmann recently upgraded his email setup to use sup, a terminal-based mail client. He says the switch helped him process the mailing list, and even keep up with gerrit at the same time. Sounds too good to be true to me. What do you think?

The Road to Vancouver

Relevant Conversations

Deadlines and Development Priorities

Security Advisories and Notices

  • None

Tips ‘n Tricks

Upcoming Events

OpenStack @ PyCon 2015: Booth info, looking for volunteers, posting of jobs, OpenStack presentations

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

OpenStack Reactions

Weird ping pong shot shocks opponent

And ready to commit… oh, when was Feature Freeze again?

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.


by Stefano Maffulli at March 20, 2015 11:19 PM

Sean Roberts

Join Mark McClain’s Talk on Akanda Open Source Networking

Akanda means “not broken.” Mark McClain is going to discuss how OpenStack networking is not broken and can be simplified in his upcoming webinar. Akanda is the only open source network virtualization solution built by OpenStack operators for real OpenStack clouds. Akanda eliminates the need for complex SDN controllers, overlays and multiple plugins for cloud … Continue reading Join Mark McClain’s Talk on Akanda Open Source Networking

by sarob at March 20, 2015 06:39 PM

Rafael Knuth

Online Meetup: How StackStorm builds and operates StackStorm software itself

StackStorm has been described as IFTTT for devops. And recently Rackspace and other StackStorm...

March 20, 2015 06:11 PM

OpenStack Superuser

The joy of contributing to OpenStack

Contributing to OpenStack doesn’t have to be painful.

“There can be real joy in contributing to open-source projects,” said Stefano Maffulli, developer advocate at the OpenStack Foundation. “Very few companies are well-equipped to do this, and that’s when it hurts.” The main culprit? If your company is built on a foundation of making widgets and selling them to clients, that structure and process often doesn’t adapt well to open collaboration, he noted.

For example, business functions like “technology development” and “procurement” are rigid and designed with a seller-buyer relationship in mind. Product development usually uses a stage/gate approach with strict management of resources, timelines and roadmaps; with open source, few of these fit.

In open source, the peer-to-peer relationship governs the entire process. Deadlines depend on social contracts instead of legal contracts; the usual levers for enforcing agreements (firing, promoting, bonuses, penalties) don’t apply, he added, speaking at the 2015 Linux Collaboration Summit. The slides from his 40-minute talk are online here.

To ease these pain points, OpenStack developed free upstream training. You can sign up for the next one by May 2.

For long-term success with open source, Maffulli said change must come from within. Corporations need to change their structure and processes — break the mold for widget-making and selling, essentially — to allow for this new way of working.

For example, you can create a dedicated open-source division that operates outside the usual corporate structure. His presentation also offered other ideas for short-term solutions, including: knowing the release cycles and aligning your work with them and budgeting time to help the open source community by fixing bugs, answering questions and cleaning up documentation (“housekeeping” he called it).


Teaching basic collaboration principles, including widening the conversation to the OpenStack community instead of your company’s internal water cooler, will also help. Make sure your team doesn’t have a built-in bottleneck - for example, if there’s only one person authorized to commit code on your company’s behalf. Performance objectives for individuals should take into account open source project collaboration upstream — not just for your projects. “You can’t just go into a project, contribute the feature you need and walk away from it. That’s a recipe for heartbreak.”

When asked how to best deal with compliance issues, Maffulli said the fastest way to solve them often involves circumventing the legal department. “Go straight to the CEO. If the investment in open source is strategic for your company, then you can go to legal, human resources, marketing and work things out. Not the other way around.”

If it’s not strategic for the company, prepare for a long, uphill battle that may end badly. So badly, you might want to consider looking for another job, he added. “Licensing and legal aspects are the least important part of joining an open source project,” Maffulli said. “Corporations need to change the way they develop, how they engage, and the way they think of collaboration and competition.”

Cover Photo by Thomas Hawk, article photo by Johan Carlström // CC BY NC

by Nicole Martinelli at March 20, 2015 05:31 PM

IBM OpenTech Team

Leveraging OpenStackClient as your unified command line interface

It’s fitting that I publish this write-up at this current time. OpenStackClient v1.0.3 was released last week and this release includes a large number of feature requests and bug fixes. But the even more exciting news is that the OpenStack Technical Committee voted in favor of including OpenStackClient in its list of official projects! The newly integrated OpenStackClient project will include python-openstackclient, cliff and os-client-config. This news was especially nice to hear, since I’ve been involved with the project for quite a while.

For readers wondering what OpenStackClient (OSC) is, its mission statement provides an excellent description.

To provide a single command line interface for OpenStack services with a uniform command set and format.

Overview of current command line tools

Each OpenStack program has its own client that provides python bindings; think of these as regular python libraries. The Compute (Nova) program has python-novaclient, Object Store (Swift) has python-swiftclient, and Identity (Keystone) has python-keystoneclient. Unfortunately, each client also delivers a built-in command line interface. This leads to a very splintered and inconsistent user experience. Let’s look at the following examples:

$ nova list
$ nova flavor-list
$ keystone list
$ keystone user-list
$ swift list
$ swift list Pictures

With the exception of keystone list, all the above commands work, but sometimes a list argument is all that is needed; other times it’s resource-list (flavor-list/user-list) or an actual resource name (Pictures). Cases where an add or remove action is performed are just as strange: sometimes arguments are treated as optional, other times as positional.

$ keystone user-role-remove --user user_x --role admin --tenant production
$ neutron dhcp-agent-network-remove dhcp_agent network

These types of inconsistencies (and many more) and their negative impact on user experience are what triggered the need for a common command line interface.

OpenStackClient, the CLI that will solve all your problems

Predictable command format
One of OpenStackClient’s main goals is to be predictable and easy to understand. Here are the basics of an openstack command:

 openstack (object-1) (action) [(object-2)] 

It can be expressed in English as “(Take) object-1 (and perform) action (using) object-2 (to it).” For example, adding a user to a group, “Take group (admins) and perform add using user (alice) to it.”:

 $ openstack group add user admins alice

For a full list of supported commands, check out our docs.

Authenticating
OpenStackClient leverages python-keystoneclient’s authentication plugins, so users can authenticate with a token, password, trust or even a SAML assertion. Authentication can be performed with environment variables or passed in as command arguments; the environment variable names and command argument names are very similar (--os-username vs OS_USERNAME). To authenticate with the OpenStack Identity Service’s version 3.0 API, try the following:

$ export OS_IDENTITY_API_VERSION=3
$ export OS_AUTH_URL=http://localhost:5000/v3
$ export OS_DEFAULT_DOMAIN=default
$ export OS_USERNAME=admin
$ export OS_PASSWORD=openstack
$ export OS_PROJECT_NAME=admin
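The same credentials can also be passed as command arguments, for instance (a sketch; any command works in place of user list):

$ openstack --os-identity-api-version 3 \
            --os-auth-url http://localhost:5000/v3 \
            --os-username admin --os-password openstack \
            --os-project-name admin \
            user list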

Example Identity v3 commands
One of the driving factors for OpenStackClient is support for version 3 commands of OpenStack’s Identity service. Below are some examples that showcase OpenStackClient working with Identity Service v3-only concepts.

Creating a new group

$ openstack group create my_test_group
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | default                          |
| id          | 4cec99eb65464875a968497784fec02f |
| name        | my_test_group                    |
+-------------+----------------------------------+

Listing domains

$ openstack domain list
+---------+---------+---------+----------------------------------------------------------------------+
| ID      | Name    | Enabled | Description                                                          |
+---------+---------+---------+----------------------------------------------------------------------+
| default | Default | True    | Owns users and tenants (i.e. projects) available on Identity API v2. |
+---------+---------+---------+----------------------------------------------------------------------+

Looking ahead
The OpenStackClient team has several cool ideas that have yet to be implemented, such as user-based configs (so we no longer depend on environment variables or command options), caching tokens to reduce trips to keystone, and maybe even a smarter engine that reduces the amount of interaction required. There is no reason that the following commands:

  $ nova boot --flavor='2G' --image='Gentoo' # Nova talks to Glance
  $ cinder give-me-a-10G-volume
  $ nova attach-that-volume-to-my-computer # nova talks to cinder
  $ neutron give-me-an-ip
  $ nova attach-that-floating-ip-to-my-computer # nova talks to neutron
  $ designate call-that-ip 'example.com' --reverse-dns # designate to neutron

can’t be consolidated to:

$ openstack boot gentoo on-a 2G-VM with-a publicIP with-a 10G-volume call-it example.com

The post Leveraging OpenStackClient as your unified command line interface appeared first on IBM OpenTech.

by Steve Martinelli at March 20, 2015 03:32 PM

Tesora Corp

Short Stack: OpenStack analytics dashboard, getting started with contributions, list of DBaaS solutions

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Here we go with this week's links:

Meet AVOS: the analytics dashboard for your OpenStack cloud | Superuser Blog

Do you have an OpenStack cloud up and running, but don't know how well it's actually running? AVOS (Analytics & Visualization on OpenStack) was developed by a team at Cisco to present operational insights of an OpenStack cloud. This analytics dashboard gives operators a better understanding of their cloud's configuration, state, performance, and faults.

Getting started guide for OpenStack contributions | Percona Blog

George Lorch III from Percona shares his experience of getting involved with OpenStack development, documentation, and testing. He explains the basics of contributing to OpenStack and what you need to know to be successful. If you want to learn more, attend his getting started with OpenStack contributions talk at Percona OpenStack Live next April.

The ultimate list of Database as a Service solutions | Tesora Blog

There are many different Database as a Service (DBaaS) solutions on the market today and finding the right one can be a challenge. We've made it easier by compiling a spreadsheet that includes the most popular solutions along with their features and compatibility.

Convenience trumps 'open' in clouds and data centers | The Register

What do people want from a cloud? The answer to this question should help determine what type of cloud to deploy. While OpenStack is built on open source software and doesn't lock in its customers, it's argued that 'closed' cloud options such as AWS and Microsoft Azure offer more convenience.

Tesora announces enhancements to enterprise-hardened OpenStack Database as a Service | Nasdaq

The Tesora database as a service platform is the first OpenStack Trove-based product to support high availability, offering expanded replication capabilities in MySQL. Tesora has also extended their database support and added dashboard support for configuration groups. The Trove-based platform is based on the most current OpenStack code base and is the best supported Trove distribution available.

by 1 at March 20, 2015 01:49 PM

March 19, 2015

IBM OpenTech Team

What’s Going On With Heat-Translator

Recently I had a discussion with the team about the great progress we have made with Heat-Translator development and the vast potential that we have going forward. I would like to share some of that conversation with you.

TOSCA Parser
Our first topic of conversation was the TOSCA parser development and the incredible strides we have made since we were recently elevated from a StackForge project to an OpenStack project under the Heat program. The team has put in a lot of work to capture the recent changes in the TOSCA Simple Profile in YAML specification. Some of the ongoing efforts to add new TOSCA support and update related functionality are enumerated below.

  • Networking: The TOSCA spec has completed its networking modeling, and parser work is under way to support the networking node, the related types used to describe logical network information, port definitions, and the network endpoint capability with a single port or a complex endpoint with multiple ports.
  • Support for new TOSCA capabilities: New capabilities have been introduced in the recent revision of the current specification. Besides the new networking endpoint capability, other major capabilities that need a new implementation include the specialized database endpoint capability and TOSCA compute capabilities for operating system and scalability.
  • Cloud Service Archive (CSAR): The TOSCA CSAR is a container file. It typically includes a TOSCA service template containing references to one or more definition files of needed TOSCA base types as well as custom types, and all artifacts required to deploy and manage the lifecycle of the corresponding cloud application. The original CSAR specification, developed for the TOSCA XML specification, needs to be updated to account for new features added in the ongoing TOSCA Simple Profile in YAML specification. The parser, which currently is tested with YAML templates, also needs to support CSAR as an input.
  • Enhanced validation: The parser does a good job validating the various sections and properties of a service template but misses validation for nested sections, specifically interfaces and capabilities. Work is in progress to correct these issues so that TOSCA templates are better validated.
  • Containers, Monitoring and Policies: The modeling of container semantics is in progress by the TOSCA Container Ad Hoc Working Group, and a similar effort is going on for monitoring types by the Monitoring Ad Hoc Working Group. Policies, generally used to convey a set of capabilities, requirements and general characteristics of an entity, are in progress too. As these things are addressed in the TOSCA specification, the TOSCA translator team will be working diligently to consume the new requirements and update the code accordingly.
  • Other major items: Support for node substitution, artifact definitions, the new topology_template section, the modified requirement definition, implementation of setting output variables to an attribute, and updated definitions with new or renamed properties and attributes.

HOT Generator
One of the other conversations the team and I had concerned the mapping of the TOSCA language to Heat Orchestration Templates (HOT). We have made good progress in generating HOT from a TOSCA template, and it’s only getting better with the following ongoing work.

  • Template library: The TOSCA parser produces an in-memory graph of the relationships and dependencies between the various nodes. The basic generator code is already there to read this graph and provide a mapping to HOT. Currently it has been tested for basic use cases like a WordPress web application with a MySQL database hosted on a single server, and node.js with MongoDB deployed on multiple servers. The extension of the node.js and MongoDB template with other components like Elasticsearch, Logstash, Kibana, Collectd and Rsyslog, with a sample application running on node.js, is in progress. We expect this work to be done during the OpenStack Kilo release cycle. As this will be a really good use case covering many aspects of TOSCA, I will follow up in more detail with a possible new blog post as we complete it. We are emphasizing creating many new use cases and their mapping to HOT.
  • Mapping to HOT: The current mapping to HOT uses the Heat resource SoftwareConfig, which supports storing a single configuration script. Kudos to Steve Baker, Thomas Spatzier and the other Heat developers for the new Heat resources SoftwareComponent and SoftwareDeployments, which now allow storing multiple configurations to handle lifecycle operations like create, update and delete for a software component in one place. We will use these new resources for TOSCA node templates with multiple operations; this will produce a more concise output HOT template compared to creating a SoftwareConfig resource for each individual configuration operation.
  • TOSCA CSAR support: Currently the Heat-Translator doesn’t fully support processing user input provided on the CLI, but work is in progress and expected to be completed soon. Also, it only supports translation of a TOSCA template provided as a YAML file. In addition, the plan is to support translation of a TOSCA Cloud Service Archive (CSAR) file. The TOSCA Parser section of this blog article gives an overview of the CSAR file.

Consumability
When I spoke with the team we also discussed consumability and how to make the Heat-Translator available to Heat users. The current plan is to make the tool available in Heat via the CLI. This will be achieved by creating new command(s) in python-heatclient which will invoke the Heat-Translator with user input in the form of a TOSCA service template and provide HOT as the output. As a next step, instead of just getting translated output, users will be able to seamlessly deploy the TOSCA service template; this will be achieved by silently invoking the Heat stack-create command with the translated output. Once we achieve these two development goals, per the current discussion within the team, we will look into providing a possible API-based translation. Another future consideration is to provide a graphical view of TOSCA node relationships in the Horizon dashboard, similar to how users can view Heat resource relationships in the dashboard.
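To make that concrete, the envisioned flow might look roughly like this (the translate command name is hypothetical, since the new commands were still being designed; only stack-create existed at the time):

$ heat translate --template-file tosca_helloworld.yaml --template-type tosca > hello_hot.yaml
$ heat stack-create hello_stack -f hello_hot.yaml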

Documentation
Finally, the team and I discussed one of the most important aspects of creating a tool for users: documentation. We all agreed that without clear, concise documentation that makes the Heat-Translator easy to use, it will never gain broad adoption.
High-level documentation is available in the Heat-Translator documentation, but as a team we are going to focus on creating high quality, accurate documentation, both technical and functional, as we reach important development milestones.

A detailed list of blueprints is available at the Heat-Translator Launchpad.
The latest draft of the TOSCA Simple Profile in YAML specification can be found at the TOSCA – OASIS site.

Stay Tuned.

The post What’s Going On With Heat-Translator appeared first on IBM OpenTech.

by spzala at March 19, 2015 08:27 PM

OpenStack Superuser

Meet AVOS: the analytics dashboard for your OpenStack cloud

Let’s say your OpenStack cloud is up and running, but your sense of how well it runs is a bit nebulous.

Enter AVOS (Analytics & Visualization on OpenStack.) Developed by a team at Cisco to give operators a clearer picture of their clouds, it was shown at the monitoring session at the Mid-Cycle Ops Meetup, along with VMTP. Cisco's Debo Dutta, who worked on it with lead developer and "co-conspirator" Alex Holden, talked to Superuser over email about this recently released tool.


What were the operational issues that led to the creation of the dashboard? 

We were running Hadoop workloads on OpenStack and optimizing the stack for better performance. We realized, while operating our mini-cloud, that we had no idea of (or visibility into) what was happening when workloads were run. This made us think about visibility and interactive dashboards with minimal information to maximize the insight to the operator.

What kind of information were you looking to show?

We wanted a single page with a bunch of useful features that we felt would have helped us. For example, we needed to search for things within our project, hence we added a search feature. We needed to see metrics only when we drilled down, hence we created the topology view: upon click, we drill down and show more metrics and info.

Then we wanted to figure out when something was going wrong. Instead of first writing complex data science algorithms (which we also do), we felt it would be better for the human expert to detect anomalies quickly if we presented the information to the person in a novel way. Hence we added heat maps. Finally, we felt the need to see virtual network VM-to-VM traffic, and this led to collecting data from Open vSwitch and showing the virtual network traffic matrix.

What problem does it solve? 

AVOS solves the problem of presenting operational insights of an OpenStack cloud in a crisp fashion, interactively conveying a lot of data with a very intuitive user experience. We believe that a combination of smart insights, data science and interactive visualization could lead to better tools for both the operator and the tenants. This is a gap in OpenStack today.

Where can people find it?

We released an early version at: https://github.com/CiscoSystems/avos

We also have interactive storage/Ceph insights which we demoed here: http://lnkd.in/bpp7kJ6 

Cover Photo by Master Phillip // CC BY NC

by Nicole Martinelli at March 19, 2015 07:09 PM

Deadline for Superuser Awards: March 22

Superuser awards recognize a team that uses OpenStack to meaningfully improve their business and differentiate in a competitive industry, while also contributing back to the community.

If you fit the bill, or know a team that does, you've got until midnight Central Time March 22 to submit a nomination here. (We've extended the deadline in response to your feedback!)

Last year, the super team at CERN won the first award at the Paris Summit. Tim Bell, of the IT operating systems and infrastructure group at CERN, will be passing the baton to this year's winner at the OpenStack Summit in Vancouver.

CERN was chosen as the winner from an impressive roster of finalists, including teams from Comcast, MercadoLibre, and Time Warner Cable. The OpenStack Infrastructure team took home an honorable mention.

When evaluating contestants for the Superuser Award, judges take into account the unique nature of use case(s), as well as integrations and applications of OpenStack performed by a particular team. Additional selection criteria include:

  • Demonstration of organizational transformation
  • Interesting workloads, applications and use cases running on OpenStack
  • Quantitative or qualitative results of performance, money or time-to-market saved with OpenStack cloud infrastructure
  • Growth and maturity of deployment, in terms of size and number of users
  • Community impact in terms of code contributions, feedback, knowledge sharing, etc.

For more information about the Superuser Awards, please visit http://superuser.openstack.org/awards.

by Superuser at March 19, 2015 07:08 PM

Captain KVM

Custom Cloud Images for OpenStack pt1

Hi folks,

We previously finished our multi-part series on deploying RHEL-OSP with the RHEL-OSP-Installer. In a few weeks, if all goes according to plan, I’ll fire up a new series on the next-gen installer. In the meantime, I’d like to show you some useful things to do once you’ve got everything up and running. So what’s up first? Well, as the title suggests, we’re going to create some custom images.

Specifically for RHEL, Fedora, and CentOS. Our new and current multi-part series…

As you go through the steps for each, you’ll find that some steps will work for all 3, but then again, some steps will only work for that distro. What I’m getting at is that there isn’t a unified toolset for creating custom images. That being said, nothing is stopping you from seeing what happens when you use the Fedora tools on RHEL or the CentOS tools on Fedora. It likely just comes down to ensuring that you have access to the proper package repos.

Regardless, we’ll step through each of the different major “RPM”-based distros and see how things compare. As always, I’d love to hear how you do it differently in the comments section.

Sandbox Environment

For creating your custom images, you should have a sandbox environment that is completely separate from your OpenStack environment. The sandbox should be a non-OpenStack KVM host. I won’t go so far as to say it has to be the same distro as the images you will create, but it may help avoid issues. For example, if you’re creating mostly RHEL-based cloud images, you probably want a RHEL-based KVM host. If you create the odd Fedora or CentOS image, it’s up to you whether you want to spend the 15 minutes to set up a second hypervisor. :)

Most of the tools that are used to create and/or edit images depend on virtualization packages and/or a running hypervisor. Therefore, your sandbox really needs to be a fully functioning hypervisor, plus extra packages as mentioned below. If you’re creating the host from scratch, you’ll want the following package groups:

@virtualization-hypervisor
@virtualization-client
@virtualization-platform
@virtualization-tools

If your host is already installed, you’ll want to add the following packages:

qemu-kvm qemu-img virt-manager libvirt libvirt-python python-virtinst libvirt-client libguestfs-tools libguestfs-tools-c qemu-kvm-tools
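On a RHEL/CentOS-style host that might translate to the following (group and package names exactly as listed above; availability depends on your repos):

# yum install -y @virtualization-hypervisor @virtualization-client \
    @virtualization-platform @virtualization-tools
# yum install -y qemu-kvm qemu-img virt-manager libvirt libvirt-python \
    python-virtinst libvirt-client libguestfs-tools libguestfs-tools-c \
    qemu-kvm-tools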

RHEL Custom Cloud Images

This method will actually work for any distro, but it’s not exactly the most elegant. It requires that you have the ISO image for the version of RHEL that you want, plus 2x the storage space for the image itself. Figure on creating 8GB disk images that will get “sparsified” when we’re all done, leaving us with images that are a fraction of that size. But for our sandbox, figure on at least 16GB of working space.

  1. Create a working disk image:
    # qemu-img create -f qcow2 /tmp/rhel7-web-working.qcow2 8G
  2. Fire up “virt-manager”
  3. Create a new VM as you would normally do from ISO, except..
  4. Don’t create a new disk, select the “working image” created above
  5. When you get to the disk partitioning, blow it all away and only create “/” or “/” and “/boot”. It’s a cloud image. You don’t need LVM. You likely don’t even need “/boot”.
  6. Choose the bare minimum packages you can get away with, for example web server.
  7. Choose bare networking, but turn on at least one interface with DHCP, and no fixed MAC address
  8. Let it install and reboot.
  9. Did you get your packages? Does httpd start? If not, enable it. Check other services to enable/disable. What other packages got installed? Maybe remove NetworkManager and other unwanted packages.
  10. Be sure to install the package “cloud-init”
  11. Be sure to add the line
    NOZEROCONF=yes

    to “/etc/sysconfig/network” in order to allow access to the OpenStack metadata service.

  12. Add the options
    console=tty0 console=ttyS0,115200n8

    to the end of the kernel line in /boot/grub2/grub.cfg.

  13. Shut it down.
  14. Upload it to OpenStack, and create a test instance with it. Can you connect to it? Did networking start up on it properly? Expect that networking will be different between your KVM sandbox and OpenStack. You may just need to switch your “ensX” device over to “eth0” for the cloud image, then restart networking and “boom”, you’re running. Then double check your access again. Does everything else work as expected? If yes, then shut it down, then delete the instance.
  15. Then delete the image from OpenStack. Yes, you heard (read) me right. We have one more step to get things right. We just ran our “maximum smoke test” against the custom build, now we need to make it properly generic and properly small.
  16. Go back to the image in the sandbox. Again, if everything is as expected we’re almost done. If not, go back and fix packages, services, access, or whatever else is wrong first. Otherwise, do the following:
    # virt-sysprep -a /tmp/rhel7-web-working.qcow2
    # virt-sparsify /tmp/rhel7-web-working.qcow2 /tmp/rhel7-web.qcow2 
    # ls -lsh /tmp/rhel7-web.qcow2

    See the difference between the advertised size and the actual size? The “virt-sysprep” command takes out all of the things like SSH keys, UUIDs, network UDEV rules, and everything else that would make the image not generic. The “virt-sparsify” command then ensures that we’re dealing with the smallest image possible.

  17. Upload the final image to OpenStack, this is your final custom image for RHEL 7

Other Options

Ok, so how do we make this more elegant? A little more automated? Well, we can certainly script it. Once you know exactly how the networking is going to show up in your OpenStack environment, you can make certain changes to the working image that will allow you to completely avoid the “test upload” process. You’ll get a feel for what’s going to fly and what’s not. But let’s face it, taking 5-10 minutes to test something “just in case” is still a good idea.

So what commands do we have at our disposal that we can script this? Easy:

  • qemu-img creates all sorts of image files from qcow2 to raw to vmdk.
  • virt-install is the command line version of the “virt-manager” install function, with many, many options available, including kickstart! In other words, we can get ~very~ specific with the builds.
  • guestfs is a tool that allows us to affect change on disk images while they are powered down. Change passwords, copy files, etc.
  • virt-sysprep is a tool that makes a VM more generic for cloning purposes. Way back when I still worked at NetApp and was working with the early (2.x !!!) versions of RHEV, I opened many RFEs (Requests for Enhancement) with Red Hat. I wrote a blog article called Don’t Kickstart.. Clone! back in 2011 and talked about “static artifacts” and “dynamic artifacts”. I wrote some scripts to help make boot LUNs and VMs more generic. Then I opened an RFE for Red Hat to take over. They responded in spades.
  • virt-sparsify is a tool that effectively rips out the empty space. When fully booted, you might have an 8GB VM, but in storage, the image might be 400MB or 1.5GB.

So how about an example script before we go?

#!/bin/sh

IMAGE=demo
KSTARTTREE="http://192.168.200.2:8080"
KSIMAGE="http://192.168.200.2:8080/ks-apache.cfg"

qemu-img create -f qcow2 /tmp/$IMAGE-working.qcow2 8G

virt-install -r 2048 --vcpus 1 \
 --name="$IMAGE-working" \
 --disk="/tmp/$IMAGE-working.qcow2" \
 --os-type=linux --os-variant=rhel7 \
 --location=$KSTARTTREE \
 --graphics none \
 --console pty,target_type=serial \
 --extra-args "console=ttyS0 ks=$KSIMAGE"

virt-sysprep -a /tmp/$IMAGE-working.qcow2
virt-sparsify /tmp/$IMAGE-working.qcow2 /tmp/$IMAGE.qcow2
ls -lsh /tmp/$IMAGE.qcow2

Of course this assumes that everything is just as we want it, as it includes the ‘sysprep’ and ‘sparsify’ commands, but you get the general idea. And I didn’t include “guestfs” or “guestfish”, as I think I’ll save that for a later post; it’s pretty cool. Besides, this should be enough to get you going in the meantime. Again, you can likely make this as complex as you need. I just threw this together as an example.

So why didn’t I just start with the commands and the script? Because I want you to know, A) how it all fits together and, B) how to break it all down into smaller steps for troubleshooting.

We’ll pick up on Fedora or CentOS next time.

Hope this helps,

Captain KVM

The post Custom Cloud Images for OpenStack pt1 appeared first on Captain KVM.

by CaptainKVM at March 19, 2015 07:02 PM

OpenStack Superuser

Why your first OpenStack commit will always be the hardest

This post is part of the Women of OpenStack Open Mic Series to spotlight women in various roles within our community, who have helped make OpenStack successful. With each post, we will learn more about each woman’s involvement in the community and how they see the future of OpenStack taking shape. If you’d like to be featured, please email editor@openstack.org.

Susanne Balle is a distinguished technologist with HP Cloud working on platform services. She has been involved in OpenStack since the Essex OpenStack summit. Her current focus is on Neutron load-balancer-as-a-service (LBaaS), Neutron Advanced Services, and Octavia.

Balle has more than 20 years of experience in the high tech industry including dealing with open source software. While working in high-performance computing, she contributed to SLURM – a scalable and high-performing open-source workload manager designed for Linux clusters of all sizes. Her areas of expertise include cloud computing, grid computing, high performance computing and distributed-memory matrix computations.

Balle lives in snowy Southern New Hampshire with her husband and two wonderful kids.

What's your role in the OpenStack community?

In the past, I have been involved in the Content Distributed Network (CDN) and Swift, briefly in Sahara and have attended all the OpenStack Design Summits since Essex.

Currently I am focusing on Neutron LBaaS V2, Neutron Advanced Services, and Octavia, an operator-grade open source scalable load balancer under development in Stackforge.

I spearheaded an ad hoc meeting at the 2014 Atlanta Summit to discuss splitting the Advanced Networking Services out of Neutron due to the lack of velocity in those services. The services were stuck and not in a good place. LBaaS was not production ready and it wasn’t clear that the current LBaaS implementation could get us there. The meeting was attended by more than 70 people, which was encouraging since that showed that people cared. We had representatives from Rackspace, BlueBox, eBay, F5, A10, and many more. It was clear that the developers working on the Advanced Networking Services were different than those working on the Neutron core, so separating the two made a lot of sense.

It took the full Juno release for us to convince the community, but in Kilo, the Advanced Networking Services’ repos are now split from the Neutron repo allowing Neutron to focus on layer 2 and 3 while the Advanced Networking Services focus on Layer 4 to 7. We are hoping that this will increase the velocity of the individual projects LBaaS, FW and VPC. We have seen a great increase in participation and velocity in the LBaaS project for Kilo.

Why do you think it's important for women to get involved with OpenStack?

OpenStack is the driving force in open source cloud computing software and presents a huge opportunity for technologists at many levels. Having women participate in OpenStack will increase their ability to get interesting and challenging opportunities at a wide range of companies. Their participation brings more diverse thinking to the community and the resulting OpenStack platform. The benefit of participating in an open, community-driven developer community is huge. You have the sense of belonging and you work with a network of people across the globe.

What obstacles do you think women face when getting involved in the OpenStack community?

The first commit has a steep learning curve, especially if you haven’t had experience with git, gerrit, and the rest of the OpenStack tooling. My experience has been that the community is very welcoming and always ready to help a newbie with his/her first commit.

What do you think can help get women more involved with OpenStack?

I think we have to start at the root of the problem, namely that there are fewer women in engineering and computer science, which explains the smaller number of women involved in OpenStack. So getting involved at the primary and secondary education level with STEM-like activities and convincing women that technical careers are interesting and worth considering is important.

Efforts such as the Women of OpenStack group are important to help women get more involved in OpenStack.

In 2014, I was part of the committee at HP that awarded an HP OpenStack Scholarship to four very deserving women grad students. The scholarship also included some mentorship to help them through the OpenStack maze.

Of the Women of OpenStack events you have attended, which was your favorite and why?

My favorite was the breakfast event in Portland. I really enjoyed chatting with other women in OpenStack and had the opportunity to meet 70 women stackers at that event. It was a great networking opportunity.

What do you want to see out of the Women of OpenStack group in the near and distant future?

The Women of OpenStack group should be more involved at the college level and evangelize the benefits of computer science careers as well as participation in an open, community-driven developer community.

As a developer, where is your favorite place to code? How did you learn to code?

I usually write code in my office. In theory I learned how to code at university, but I really gained coding experience while writing the parallel Singular Value Decomposition (SVD) algorithm for the Connection Machine CM-5’s CMSSL math library while being a visiting scholar at Thinking Machines. That was fun!

Create an original OpenStack GIF or haiku

Open, community-driven developer community
A network of people across the globe with a purpose
OpenStack is here to stay

Cover Photo by Brian Yap // CC BY NC

by Hong Nga Nguyen at March 19, 2015 12:39 AM

March 18, 2015

Percona

Getting started guide for OpenStack contributors

So you want to contribute to OpenStack? I can help!

For the last year or so I have been involved with OpenStack, and more specifically the Trove (DBaaS) project, as a sort of ambassador for Percona, contributing bits of knowledge, help and debugging wherever I could. I thought I would share some of my experience with others who want to get involved with OpenStack development, documentation, testing, etc. Getting started with OpenStack contributions is also the idea behind my talk next month at Percona OpenStack Live 2015. (Percona Live attendees have access to OpenStack Live.)

Back at the last OpenStack Conference and Design Summit in Paris last November, I had the amazing opportunity to attend the two-day OpenStack Upstream Training hosted by Stefano Maffulli, Loic Dachary and several other very kind and generous folks. If you ever find yourself in a position to attend one of these training sessions, I strongly suggest that you take advantage of the opportunity, you will not be disappointed.

Using some of the material from the OpenStack Foundation and a little personal experience, I’m going to go through some of the basics of what you’ll need to know if you want to contribute. There are several steps but they are mostly painless:

– It all starts with a little bit of legal work, such as signing either an individual or a corporate contributor agreement.

– You will need to decide on a project or projects that you want to contribute to. Chances are that you already have one in mind.

– Find the various places where other contributors to that project hang out; usually there is a mailing list and an IRC channel. Log on, introduce yourself, make some friends and sit and listen to what they are working on. Find the PTL (Project Team Lead) and remember his/her name. Let him/her know who you are, who you work for, what you are interested in, etc. Sit in on their meetings, ask questions but don’t be a pest. Observe a little etiquette, be polite and humble and you will reap many rewards later on.

– Eventually you will need to find and get the code and install whatever tools are necessary for that project, build it, stand up a test/working environment, play with it and understand what the various moving parts are. Ask more questions, etc.

– Do you think you are ready to do some coding and submit a patch? Talk to the PTL and get a lightweight bug or maybe a documentation task to work on.

– In order to submit a patch you will need to understand the workflow of the OpenStack gerrit review system, which takes a little bit of time if you have never used gerrit before. You’ll need to find and install git-review. Here is where making friends above really helps out: in every project there are usually a few folks around with the time and patience to help you work through your first review (see the sketch after this list).

– Find a bit of a mentor to help you with the mechanics in case you run into trouble (it could just be the PTL if he/she has the time), make your patch, send it in and work through the review process.

– As with most peer review situations, you’ll need to remember never to take things personally. A negative review comment is not an insult to you and your family! Eventually your patch will either be accepted and merged upstream (yay!) or rejected and possibly abandoned in favor of some alternative (boo!). If rejected, fret not! Talk to the PTL and your new friends to try to understand the reason why, if the review comments were unclear, and simply try again.
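In practice, the mechanics of that first patch look something like this (a sketch; ‘<project>’ is a placeholder):

$ pip install git-review
$ git clone https://git.openstack.org/openstack/<project>
$ cd <project>
$ git review -s                  # one-time setup; installs the Change-Id hook
$ git checkout -b my-first-fix
$ git commit -a                  # after making your change
$ git review                     # pushes the change to gerrit for review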

It is that easy!

Come join me on Tuesday, April 14th in Santa Clara, California and we’ll chat about how you can begin contributing to OpenStack.

The post Getting started guide for OpenStack contributors appeared first on MySQL Performance Blog.

by George O. Lorch III at March 18, 2015 01:00 PM

March 17, 2015

Ravello Systems

SUSE OpenStack Cloud 4 lab environment on AWS and Google Cloud

SUSE logo

The SUSE OpenStack Cloud distribution is among the leading OpenStack platforms. As with any OpenStack KVM setup, you need hardware to deploy it and a lot of time to set up and configure the environment to get it up and running.

The SUSE OpenStack ISV engineering group was looking for a way to make it easy for their ISV partners to get access to SUSE Cloud 4 lab environments, where they could test their products for interoperability. While they had an internal lab setup fulfilling some of this need, increasing demand from their ISV partners compelled them to look for an on-demand model.

The SUSE team wanted to leverage AWS or Google Cloud to spin up these OpenStack KVM environments. However, it is not normally possible to set up a bare-metal-like environment on a public cloud on top of which to install KVM and then OpenStack. To solve this problem, SUSE turned to Ravello, which made it possible to run such environments on AWS and Google Cloud.

SUSE partnered with Ravello and set up their OpenStack Cloud 4 based environment so that it could be made available to their ISV partners on demand. Here is a link to a detailed write-up on how this environment is configured.

The test environment is now available as a “SUSE OPENSTACK Cloud 4 - Final Release” blueprint in Ravello. Any ISV partner that needs a fully functional SUSE Cloud 4 environment just needs to create an account with www.ravellosystems.com and request that this blueprint be moved to their account. Then, with a single click, they can provision one or multiple test environments from this blueprint, running on either AWS or Google Cloud.

The post SUSE OpenStack Cloud 4 lab environment on AWS and Google Cloud appeared first on The Ravello Blog.

by Manisha Arora at March 17, 2015 09:54 PM

Daniel P. Berrangé

Announce: Entangle “Charm” release 0.7.0 – an app for tethered camera control & capture

I am pleased to announce a new release 0.7.0 of Entangle is available for download from the usual location:

  http://entangle-photo.org/download/

The main features introduced in this release are a brand new logo, a plugin for automated capture of image sequences, and the start of a help manual. The full set of changes is:

  • Require GLib >= 2.36
  • Import new logo design
  • Switch to using zanata.org for translations
  • Set default window icon
  • Introduce an initial help manual via yelp
  • Use shared library for core engine to ensure all symbols are exported to plugins
  • Add framework for scripting capture operations
  • Workaround camera busy problems with Nikon cameras
  • Add a plugin for repeated capture sequences
  • Replace progress bar with spinner icon

The Entangle project has a bit of a quantum physics theme in its application name and release code names. So the primary inspiration for the new logo was the interference patterns formed by (electromagnetic) waves. As well as being an alternate representation of an interference pattern, the connecting filaments can also be seen as the (USB) cable connecting camera and computer. This screenshot of the about dialog shows the new logo used in the application:

[Screenshot: Entangle about dialog showing the new logo]

by Daniel Berrange at March 17, 2015 09:44 PM

Kashyap Chamarthy

Minimal DevStack with OpenStack Neutron networking

This post discusses a way to set up a minimal DevStack (an OpenStack development environment built from git sources) with Neutron networking, in a virtual machine.

(a) Set up a minimal DevStack environment in a VM

Prepare VM, take a snapshot

Assuming you have a Linux virtual machine set up (Fedora 21 or above, or any of the Debian variants, with at least 8GB of memory and 40GB of disk space), take a quick snapshot. The command below creates a QCOW2 internal snapshot (which means your disk image must be a QCOW2 image); you can invoke it live or offline:

 $ virsh snapshot-create-as devstack-vm cleanslate

That way, if something goes wrong, you can revert to this clean state by simply doing:

 $ virsh snapshot-revert devstack-vm cleanslate

Set up DevStack

There are plenty of configuration variants for setting up DevStack. The upstream documentation has its own recommended minimal configuration. The configuration below has a much smaller footprint, enabling only the Nova (Compute, Scheduler, API and Conductor), Keystone, Neutron and Glance (Image service) services.

$ mkdir -p $HOME/sources/cloud
$ git clone https://git.openstack.org/openstack-dev/devstack
$ chmod go+rx $HOME
$ cd devstack
$ cat << 'EOF' > local.conf
[[local|localrc]]
DEST=$HOME/sources/cloud
DATA_DIR=$DEST/data
SERVICE_DIR=$DEST/status
SCREEN_LOGDIR=$DATA_DIR/logs/
LOGFILE=$DATA_DIR/logs/devstacklog.txt
VERBOSE=True
USE_SCREEN=True
LOG_COLOR=True
RABBIT_PASSWORD=secret
MYSQL_PASSWORD=secret
SERVICE_TOKEN=secret
SERVICE_PASSWORD=secret
ADMIN_PASSWORD=secret
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta
SERVICE_HOST=127.0.0.1
NETWORK_GATEWAY=10.1.0.1
FIXED_RANGE=10.1.0.0/24
FIXED_NETWORK_SIZE=256
FORCE_CONFIG_DRIVE=always
VIRT_DRIVER=libvirt
# To use nested KVM, un-comment the below line
# LIBVIRT_TYPE=kvm
IMAGE_URLS="http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img"
# If you have the `dnf` package manager, use it to speed up DevStack setup/teardown
export YUM=dnf
EOF

NOTE: If you’re using KVM-based virtualization under the hood, refer to this upstream documentation on setting it up with DevStack, so that the VMs in your OpenStack cloud (i.e. Nova instances) run considerably faster than with plain QEMU emulation. If you have the relevant hardware, you might want to set that up before proceeding further.

Invoke the install script:

 $ ./stack.sh 

[27MAR2015 Update]: Don’t forget to systemctl enable the below services so they start on reboot — this allows you to successfully start all OpenStack services when you reboot your DevStack VM:

 $ systemctl enable openvswitch mariadb rabbitmq-server 

(b) Configure Neutron networking

Once DevStack installation completes successfully, let’s set up Neutron networking.

Set Neutron router and add security group rules

(1) Source the user tenant (‘demo’ user) credentials:

 $ . openrc demo

(2) Enumerate Neutron security group rules:

$ neutron security-group-list

(3) Create a few environment variables, for convenience, capturing the IDs of the Neutron public and private networks and the router (PRIV_NET is used when booting an instance later):

$ PUB_NET=$(neutron net-list | grep public | awk '{print $2;}')
$ PRIV_NET=$(neutron net-list | grep private | awk '{print $2;}')
$ ROUTER_ID=$(neutron router-list | grep router1 | awk '{print $2;}')

(4) Set the Neutron gateway for router:

$ neutron router-gateway-set $ROUTER_ID $PUB_NET

(5) Add security group rules to enable ping and ssh:

$ neutron security-group-rule-create --protocol icmp \
    --direction ingress --remote-ip-prefix 0.0.0.0/0 default
$ neutron security-group-rule-create --protocol tcp  \
    --port-range-min 22 --port-range-max 22 --direction ingress default

Boot a Nova instance

Source the ‘demo’ user’s Keystone credentials, add a Nova key pair, and boot an ‘m1.small’ flavored CirrOS instance:

$ . openrc demo
$ nova keypair-add oskey1 > oskey1.priv
$ chmod 600 oskey1.priv
$ nova boot --image cirros-0.3.3-x86_64-disk \
    --nic net-id=$PRIV_NET --flavor m1.small \
    --key_name oskey1 cirrvm1 --security_groups default

Create a floating IP and assign it to the Nova instance

The below sequence of commands enumerates the Nova instances and finds the Neutron port ID for a specific instance. It then creates a floating IP and associates it with the Nova instance. Finally, it enumerates the Nova instances again, so you can see both the floating and fixed IPs for it:

 
$ nova list
$ neutron port-list --device-id $NOVA-INSTANCE-UUID
$ neutron floatingip-create public
$ neutron floatingip-associate $FLOATING-IP-UUID $PORT-ID-OF-NOVA-INSTANCE
$ nova list

A new tenant network can be trivially created with a short script, like the sketch below.
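
For illustration, here is a minimal sketch of such a script (the network name, CIDR and gateway below are placeholder values, and $ROUTER_ID is the variable captured earlier):

$ . openrc demo
$ neutron net-create demo-net2
$ neutron subnet-create demo-net2 10.2.0.0/24 --name demo-subnet2 --gateway 10.2.0.1
$ neutron router-interface-add $ROUTER_ID demo-subnet2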

Optionally, test the networking setup by trying to ping or ssh into the CirrOS Nova instance.


Given the procedural nature of the above, all of this can be trivially scripted to suit your needs — in fact, upstream OpenStack Infrastructure does use such automated DevStack environments to gate (as part of CI) every change that is submitted to any of the various OpenStack projects.

Finally, to find out how minimal it really is, one way to test is to check the memory footprint inside the DevStack VM using the ps_mem tool, and compare that with a different DevStack environment that has more OpenStack services enabled. Edit: a quick memory profile of a minimal DevStack environment is here — 1.3GB of memory without any Nova instances running (but with the OpenStack services running).
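
If you want to reproduce such a measurement, something along these lines should work (assuming you have fetched the ps_mem.py script from its upstream repository):

$ sudo python ps_mem.py | tail -n 15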


by kashyapc at March 17, 2015 07:35 PM

Daniel P. Berrangé

QEMU QCow2 built-in encryption: just say no. Deprecated now, to be deleted soon

A little over 5 years ago now, I wrote about how libvirt introduced support for QCow2 built-in encryption. The use cases for built-in qcow2 encryption were compelling back then, and remain so today. In particular while LUKS is fine if your disk backend is already a kernel visible block device, it is not a generically usable alternative for QEMU since it requires privileged operation to set it up, would require yet another I/O layer via a loopback or qemu-nbd device, and finally is entirely Linux specific. The direction QEMU has taken over the past few years has in fact been to take the kernel out of the equation for more & more functionality. For example, QEMU can now natively connect to RBD, Gluster, iSCSI and NFS servers with no kernel assistance – the client code is implemented entirely within QEMU block driver layer, which precludes the use of LUKS there.

At the time I wrote that blog post, no one had seriously looked at the QCow2 encryption design to see if it was in any way sane from a security POV. At least if they had, AFAIK, they didn’t make their analysis public. Over time though, various QEMU maintainers did eventually look at the QCow2 encryption code and their conclusions were not positive. The consensus opinion amongst QEMU maintainers today is that QCow2 encryption is terminally flawed in a number of serious ways, including but not limited to:

  • The AES-CBC cipher is used with predictable initialization vectors based on the sector number. This makes it vulnerable to chosen plaintext attacks which can reveal the existence of encrypted data.
  • The user passphrase is directly used as the encryption key.
    • A poorly chosen or short passphrase will compromise the security of the encryption.
    • In the event of the passphrase being compromised there is no way to change the passphrase to protect data in any qcow images.
    • It is difficult to make the data permanently inaccessible upon file deletion – at best you can try to destroy data with shred, though even this is ineffective with many modern storage technologies.

By comparison the LUKS encryption format does not suffer from any of these problems. With LUKS the initialization vectors typically use ESSIV to ensure unpredictability; the passphrase is only indirectly used to unlock the master encryption key material, so can be changed at will; the passphrase is put through a PBKDF2 function to mitigate the effects of short sequences of letters; the size of the master key material is artificially inflated with an anti-forensic algorithm to increase the difficulty of recovering the key from deleted volumes.

The QCow2 encryption scheme is a prime example of why merely using a well known standard algorithm (AES) is not sufficient to guarantee a secure implementation. In January 2014, I submitted an update for the QEMU docs to explicitly warn users about the security limitations of QCow2 encryption, which made it into the 1.5.0 release of QEMU. This week Markus has gone one step further and explicitly deprecated use of QCow2 encryption for the forthcoming 2.3.0 release of QEMU. Any attempt to use an encrypted QCow2 file with the QEMU system emulator will now result in a warning being printed to stderr, which in turn ends up in the libvirt logfile for that guest. As well as the security issues, Markus’ other motivation for deprecating this is that the way it is integrated into QEMU block driver layer causes a number of technical & usability problems. So even if we want encrypted block devices in QEMU, the internals for encryption need a complete rewrite from scratch.

In the 2.4.0 release, the intention is to go one step further and actually delete support for QCow2 encryption from the QEMU system emulator entirely, as well as all the infrastructure for block device encryption. We will keep support for decrypting images in the qemu-img program only, to provide a way for users to get their previously encrypted data out into a supported format.
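
As a sketch of that escape path, converting an encrypted image to a plain one should be a single qemu-img invocation (file names here are placeholders; qemu-img prompts for the passphrase of the encrypted source):

$ qemu-img convert -f qcow2 -O qcow2 encrypted-disk.qcow2 plain-disk.qcow2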

In the immediate future, the recommendation is that users who need encryption for virtual disks should use LUKS on the host, despite the limitations that I noted earlier. At some point in the next 6 months my intention is to start working on a QEMU block driver implementation of the LUKS format, which will enable QEMU to add encryption to any of its virtual disk backends, not merely QCow2. This will require designing new infrastructure for handling decryption keys inside QEMU too, to replace the unsatisfactory approach used today. By using the LUKS format directly though, QEMU will benefit from the security knowledge of those who designed and analysed this format over many years to identify its strengths & weaknesses. It will also provide good interoperability: e.g. an encrypted qcow2-luks file will be able to be converted to/from a block device for access by the kernel’s LUKS driver with no need to re-encrypt the data, which is very desirable as it lets users decide whether to use in-QEMU or in-kernel block device backends at the flick of a switch.
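
For the host-side LUKS approach recommended above, the basic flow looks something like this (a sketch assuming a dedicated block device /dev/sdb1; it also illustrates why this path requires privileged operations on the host):

# Format the device with a LUKS header and set a passphrase
$ sudo cryptsetup luksFormat /dev/sdb1

# Open it; the guest can then be given /dev/mapper/guest-disk as a raw disk
$ sudo cryptsetup luksOpen /dev/sdb1 guest-disk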

So just to sum up. Do not ever use QCow2 built-in encryption as it exists today. It is unfixably broken by design. It is deprecated in QEMU 2.3.0 and is likely to be deleted in QEMU 2.4.0.

by Daniel Berrange at March 17, 2015 05:43 PM

Mirantis

Verify before you deploy: The Mirantis OpenStack Hardware Certification program


Whether you distribute hardware with Mirantis OpenStack pre-installed, or you’re a customer about to deploy it, you need to know ahead of time that the installation will operate smoothly. You can now test your equipment with the new Mirantis Hardware Certification program to make sure everything works as it should before shipping Mirantis OpenStack to customers or going live with Mirantis OpenStack in your own environment. In this article, we cover the details of the certification process and instructions for hardware self-certification with Mirantis OpenStack.

Certification overview

When you certify your hardware with the Mirantis program, you and your customers know deployments will be successful. If you’re an end-user, you know before you deploy that Mirantis OpenStack works in your environment. Once the certification is complete, you can request that Mirantis post your successful results, and you can also advertise your certification. 

The process typically involves three steps:

  1. Perform the testing procedure with the instructions below, using an installation of Mirantis OpenStack.
  2. Send the results file created during testing to mos-hw-cert@mirantis.com. We use your findings to authenticate your certification and to improve and refine the certification process.
  3. With your approval, Mirantis publishes your certification on our public certification list, which we update on an ongoing basis.

Note: You must send the test results to obtain a certificate.

Certify your hardware

To complete the hardware certification process, you’ll need to set up Fuel for OpenStack deployment and management, follow the other preliminary steps outlined below, perform the testing itself, and then send the results to Mirantis.

Preliminary steps

  1. Set up Fuel by following these directions.

  2. Make sure your computer has a bash console with access to the Fuel master node. To verify, execute the following command, using the personal Fuel URL generated in Step 1:

wget FUEL_URL

  3. Verify that you are using Python 2.7.
  4. Download the hardware certification tool from https://certification.mirantis.com/mos_certification_tool.sh

Start Testing

Note: For hardware certification, use simple configurations of 2-3 nodes for best results.

  1.  To begin, copy the certification tool you downloaded in Step 4 above to the MOS master node as follows:

$ scp PATH_TO_SCRIPT root@MASTER_NODE_IP:SOME_FOLDER

 For example:

$ scp /tmp/mos_certification_tool.sh root@10.20.0.1:/tmp    

  2.  Log into the MOS master node and run the script with the Fuel URL from Step 1 in “Preliminary Steps” above, using your auth and MOS login credentials:

$ ssh root@MASTER_NODE_IP
# cd PATH_TO_SCRIPT_FOLDER
# bash mos_certification_tool.sh -a LOGIN:PASSWD:TENANT --fuel-ssh-creds MOS_NODE_CREDS FUEL_URL

For example:

$ ssh root@10.20.0.2
# cd /tmp
# bash mos_certification_tool.sh -a admin:admin:admin --fuel-ssh-creds root:masterkey http://172.16.50.200:8000/

The verification tool uses all nodes currently available (no fewer than two). If you want to create a cluster of at least XX nodes, add the '--min-nodes XX' parameter, for example:

# bash mos_certification_tool.sh -a admin:admin:admin --fuel-ssh-creds root:masterkey --min-nodes 10 http://172.16.50.200:8000/

  3.  When your nodes are ready, the tool creates a test cluster and asks you to set up network parameters. Open the FUEL_CLUSTER_URL, log into Fuel, and set up all required network parameters, as shown in the Mirantis OpenStack 6.0 documentation.

Then type ‘ready’ and press “Enter” to install the cluster.

Certification Results

After testing is done, the Mirantis hardware certification tool displays the test results and saves them to a file with a naming convention such as “HW_cert_report_XXXX.txt”, where “XXXX” is the current Unix time. Copy and paste the successful results into an email and send it to mos-hw-cert@mirantis.com to receive official Mirantis hardware certification. We will also use your files to make ongoing improvements to hardware testing.

You’ll quickly see the benefits of the Mirantis hardware certification program, which takes the guesswork out of installing Mirantis OpenStack on your hardware. You’ll know before you begin that deployment will be smooth sailing, whether for customers using your hardware or in your own environment with Mirantis OpenStack.

 

The post Verify before you deploy: The Mirantis OpenStack Hardware Certification program appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Mikhail Semenov at March 17, 2015 04:39 AM

Doug Hellmann

Handling High Email Volume with sup

Over the last year, the openstack-dev mailing list has averaged 2500 messages every month. Staying on top of that much email can be challenging, especially with some of the consumer-grade email clients available today. I’ve recently upgraded my email setup to use sup, a terminal-based mail client that is helping me process the mailing list and even keep up with Gerrit at the same time.

Read more...

March 17, 2015 04:00 AM

March 16, 2015

Ben Nemec

QuintupleO Success!

Yep, that's right. I've successfully deployed a cloud in a cloud using a third cloud. I have a video, but I'm not sharing it just yet because it was done using some slightly broken pre-release overcloud images. On the plus side they had exactly the same issues as the non-QuintupleO environments so I'm declaring success. :-) As soon as I have a chance to re-record with a fully functioning overcloud I'll post it here.

Since my previous posts have been a little light on details of how someone might try doing this themselves, I thought I'd do a how-to post. Be forewarned that this is a brand spanking new thing and changing all the time. Even since I started writing this post some important details have changed, so keep that in mind if you're trying to follow the process outlined below.

Environment

I'm currently using a single-node devstack installation with the Nova and Neutron patches from my first QuintupleO update. I've updated that post with the latest diffs of my changes, but be aware that they have both changed since it was originally written, so they may get out of date again. The actual changes are fairly simple though, so hopefully you can figure out what to do even if the diffs don't apply cleanly.

Baremetal Instance Setup

In my current QuintupleO setup, each fake baremetal instance has a corresponding OpenStack BMC instance. This is rather like a real baremetal environment with a BMC managing a baremetal box.

Baremetal Image

First, I create an empty image to start the baremetal instances with. This can be done with the following command:

qemu-img create -f qcow2 empty.qcow2 40G

Then I upload the image to Glance and call it, creatively enough, empty. :-)
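
For reference, the Glance upload is just a standard image-create call, roughly like this (a sketch using the glance CLI of the era; flags may vary by client version):

glance image-create --name empty --disk-format qcow2 --container-format bare --file empty.qcow2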

BMC Image

Before you can deploy anything, you'll need a BMC image. These can be created with my openstack-bmc diskimage-builder element. When deployed using the Heat templates below, this image will automatically configure itself to manage a specified baremetal instance. Do note that it's still hard-coded to my devstack installation, so it may need minor tweaking for other environments. Eventually I'll make it configurable.

I called this image openstack-bmc.

Networking

You can see a diagram of the networking layout I'm using in my BMC Post. public is essentially my home network where floating IPs are allocated, private is the network all of the instances get attached to, and undercloud is the provisioning network where DHCP is disabled so the undercloud Neutron can handle that.

Deploying a QuintupleO Environment

I've written a couple of tools to help with setting the rest of the environment up. It's hardly a foolproof end-to-end system, but it automates a lot of the most tedious bits.

Deploying through Heat

The first major piece is a Heat template for deploying the images created above. That can be found here: OpenStack BMC Template, and used with Heat as follows:

heat stack-create -f ./openstack-bmc.yaml -e ./resource-registry.yaml -P node_count=[baremetal nodes to deploy] baremetal

Again, the defaults are hard-coded for my environment. There are some parameters that can be configured, but as of this writing I'm fairly certain not everything that should be configurable is. They're fairly simple templates so it shouldn't be too difficult to tweak them to your liking though.

The template now generates a separate flavor for each baremetal instance so Ironic can set the boot method on a per-instance basis. Because of this, you will need to enable the nova-flavor contrib extension in Heat: https://github.com/openstack/heat/tree/master/contrib/nova_flavor This was not available by default in my Devstack installation.

Collecting IPMI Details

Now you should have a bunch of baremetal_* and bmc_* instances running, but you still need to collect all of their IP and MAC details for Ironic so it can manage them. Luckily, there's a script for that too. It is also configurable in case you named things differently, and will dump out a file called nodes.json containing the details of your baremetal instances. This file is suitable for the os-cloud-config register-nodes command. Note that the IPMI credentials are currently hard-coded the same as in the openstack-bmc image element, so if you made changes there you'll need to make them here as well.
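
To give a rough idea of the shape of that file, a nodes.json entry looks something like the following (values are made up and the exact keys are defined by os-cloud-config, so treat this as an illustrative sketch rather than a reference):

[
  {
    "pm_type": "pxe_ipmitool",
    "pm_addr": "10.0.0.101",
    "pm_user": "admin",
    "pm_password": "password",
    "mac": ["fa:16:3e:aa:bb:cc"],
    "cpu": "1",
    "memory": "4096",
    "disk": "40",
    "arch": "x86_64"
  }
]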

Deploying to a QuintupleO Environment

My testing lately has been using instack-undercloud to install an undercloud instance. In theory devtest should be able to do essentially the same thing, but I've been neck-deep in instack-undercloud lately so that's what I used. :-)

If everything is set up correctly, you should be able to do a baremetal-style deployment to the instances deployed via Heat earlier. Note that unlike a virt-based setup in either devtest or instack-undercloud, this is using the ipmitool driver in Ironic so there's no need to copy around ssh keys for power control.

That's pretty much it for the major steps, at least as I remember them. I realize everything above is assuming a fair amount of OpenStack experience, but if you're too new to know how to set up your own Neutron networks and such you probably want to start somewhere simpler than QuintupleO anyway. ;-)

Hope this was useful and as always feel free to contact me with any questions or comments.

by bnemec at March 16, 2015 08:13 PM

Rafael Knuth

Cloud Online Meetup: Instant Cloud w/ Stackato Cloud Foundry & Mirantis OpenStack

We are hosting our Cloud Online Meetup on Tuesday, March 24 at 9.00 - 10.00 am Pacific!  In order to...

March 16, 2015 04:32 PM

Sébastien Han

OpenStack Glance NFS and Compute local direct fetch

This feature has been around for quite a while now; if I remember correctly, it was introduced in the Grizzly release. However, I never really got the chance to play around with it. Let’s assume that you use NFS to store Glance images. By default, booting an instance means fetching the image from Glance to the Nova compute node, which basically streams the image over the network and makes the boot process longer. OpenStack Nova can instead be configured to access Glance images directly from a local filesystem path, which is ideal for our NFS scenario.

The setup is a bit tricky, so first configure your glance-api.conf and apply the following flags:

[DEFAULT]
show_multiple_locations = True
filesystem_store_metadata_file = /etc/glance/nfs.json

Create your /etc/glance/nfs.json file:

{
    "id": "f5e1eee7-9160-493e-9b6f-d4b1c34eaa23",
    "mountpoint": "/srv/nfs/glance/"
}

Make sure to restart both the Glance API and registry services.

Now, in your nova.conf on your compute nodes only, append the following values in the appropriate sections:

[DEFAULT]
...
...
allowed_direct_url_schemes = file
...
...

[image_file_url]
filesystems = nfs

[image_file_url:nfs]
id = f5e1eee7-9160-493e-9b6f-d4b1c34eaa23
mountpoint = /srv/nfs/glance/

During the boot sequence, Nova will issue a cp instead of fetching the Glance image through the network. Et voilà!

March 16, 2015 03:12 PM

Opensource.com

Charting the OpenStack galaxy, under the hood at TryStack, and more

The Opensource.com weekly look at what is happening in the OpenStack community and the open source cloud at large.

by Jason Baker at March 16, 2015 07:00 AM

March 14, 2015

Anne Gentle

Be sure to read about my Stacker journey

The editors at The New Stack do great things with their articles, and mine is no exception! Be sure to read Anne Gentle: One Stacker’s Journey.

Even though I’ve lived in Austin, Texas, for over 14 years now, I tend to say I’m a midwesterner when asked. There are traits associated with the spirit of the midwest that I’ll always identify in myself: hard work, resource creativity and conservation, humility, and a sense of wonder at trends being set somewhere in the world. Read more…

by annegentle at March 14, 2015 07:49 PM

OpenStack Superuser

Superuser weekend reading

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community.

Got something you think we should highlight? Tweet, blog, or email us!

In Case You Missed It

Red Hat software engineer and OpenStack Foundation individual director Russell Bryant shares his thoughts on the many different facets of OpenStack HA and how work can be done in or around OpenStack to better support legacy workloads.

Meanwhile, Jodi Smith over at Mirantis explains how the HP-championed project TripleO appears to be losing steam. The project was originally intended to simplify deployment by using OpenStack as an undercloud on which to deploy OpenStack, but attention has shifted to using container technology to approach OpenStack implementation.

History fact of the day: Rear Admiral Grace Hopper coined the phrase, "It's easier to ask forgiveness than it is to get permission." In the early days of OpenStack, it was just that. Thierry Carrez believes that now "we pushed the regulation and 'ask for permission' cursor so far we actually prevent things from happening." See his take on why the Technical Committee should step away and act as an appeals board.

And finally, Arthur Cole explains on the Tesora blog how OpenStack is undergoing the tech equivalent of a complete psycho-analysis. Read on to see his thoughts on OpenStack's survival.


We feature user conversations throughout the week, so tweet, blog, or email us your thoughts!

Cover photo by LendingMemo via Flickr // CC BY-NC

by Hong Nga Nguyen at March 14, 2015 04:57 PM

March 13, 2015

OpenStack Blog

OpenStack Community Weekly Newsletter (Mar 6 – 13)

Five years in: Charting the OpenStack galaxy

The Starship Enterprise had a five year mission, to “explore strange new worlds,” among other things, and as we approach the five year mark in our own mission, it’s fun to think about the worlds we’ve seen and contemplate where to go next. Did anyone think we’d have a non-profit foundation in year three, with VMware joining right after we started it? Strange new worlds, indeed.

Check out what’s under the hood at TryStack

Billed as the easiest way to try out OpenStack, this free service lets you test what the cloud can do for you, offering networking, storage, and compute instances, without having to go all in with your own hardware. Dan Radez, whose day job is at Red Hat, unveiled TryStack’s new gear in a lightning show-and-tell talk at the recent Mid-Cycle Ops Meetup.

The Road to Vancouver

Relevant Conversations

Deadlines and Development Priorities

Security Advisories and Notices

Reports from Previous Events

Tips ‘n Tricks

Upcoming Events

OpenStack @ PyCon 2015: Booth info, looking for volunteers, posting of jobs, OpenStack presentations

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

OpenStack Reactions


After taking one day off the mailing list …

 

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

 

by Stefano Maffulli at March 13, 2015 11:28 PM

About

Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.
