May 05, 2015

Mirantis

How to allocate, price and utilize Your OpenStack private cloud resources


OpenStack is often deployed to help improve efficiency for an organization, but without adequate attention, a deployment can run inefficiently and significantly reduce ROI. In this post, we will cover best practices and lessons learned from monitoring allocation and utilization within OpenStack private cloud environments, and explain several techniques for properly allocating and pricing your resources.

Before deploying OpenStack, it’s important to be aware of your private cloud management capabilities and available resources so that you can allocate them properly. To do that, you must monitor your system with a specific set of tools that analyze usage and trends, enabling you to properly price your resources and appropriately plan capacity.

We’ll start by looking at allocation, then delve into pricing.

Avoiding imbalanced Virtual Machine (VM) allocation

The first step in properly utilizing your OpenStack system is to avoid unoptimized CPU and memory allocation. To ensure efficient and organized allocation in any private cloud environment, you’ll need to define VM types, which are often referred to as ‘flavors’ or ‘families’. The hardware in the example below has a capacity of 16GB of RAM and 16 virtual CPUs (a ratio of 1:1), and the servers are partitioned according to two flavors:

  1. The first flavor is composed of small, medium, and large instances. The small instance has 1 virtual CPU and 1GB of memory; the medium instance has 2 virtual CPUs and 2GB of memory; and the large instance has 4 virtual CPUs and 4GB of memory.
  2. The second flavor is composed of a small instance with 1 virtual CPU and 2GB of memory, and a medium instance with 2 virtual CPUs and 4GB of memory.

In the diagram below, you can clearly see that Flavor 1, configured with a 1:1 ratio of CPU to RAM, optimally allocates the available hardware. However, Flavor 2, with a 1:2 ratio of CPU to RAM, doesn’t demonstrate optimal allocation because all of the RAM is used up with plenty of CPU left over.

As a general rule of thumb to prevent imbalanced allocation, we recommend that both the virtual and hardware layers have the same CPU-to-RAM ratio.

Each flavor should maintain the ratio, while doubling the resources of the previous one.  For example, you might have flavors of:

[Table: Cloudyn_Article_CPU_RAM – example flavors that maintain the CPU:RAM ratio while doubling resources]
By matching your flavors to the CPU/RAM ratio of your hardware, you can prevent underutilization of that hardware.
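To make this concrete, here is a minimal Python sketch (the flavor names and sizes are hypothetical, not taken from this article) of a flavor family that keeps the host’s 1:1 vCPU-to-GB-of-RAM ratio while doubling each step, so any mix of instances packs evenly onto a 16 vCPU / 16GB host:

# Hypothetical flavor family: each flavor keeps a 1:1 vCPU-to-GB ratio
# and doubles the resources of the previous flavor.
FLAVORS = {
    "small":  {"vcpus": 1, "ram_gb": 1},
    "medium": {"vcpus": 2, "ram_gb": 2},
    "large":  {"vcpus": 4, "ram_gb": 4},
    "xlarge": {"vcpus": 8, "ram_gb": 8},
}

HOST = {"vcpus": 16, "ram_gb": 16}  # the example hardware from this article

def cpu_ram_ratio(spec):
    """CPU-to-RAM ratio of a flavor or host."""
    return spec["vcpus"] / spec["ram_gb"]

# Every flavor matches the host ratio, so neither CPU nor RAM is stranded.
assert all(cpu_ram_ratio(f) == cpu_ram_ratio(HOST) for f in FLAVORS.values())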

Now let’s look at what these resources are actually costing you.

Calculating the cost of the private cloud

Resource costs in an OpenStack environment take into account a combination of several factors, some of which involve passing expenses back to each business unit.

Resource unit costs

Cloud cost allocation is critical and challenging for enterprise financial and accounting teams, especially when calculating costs for each business unit or project in the cloud. It’s eminently valuable to associate resources with their respective costs within any given group or department. Pricing server flavors is not an easy task. First, it is important to calculate the overall cost of the hardware, server room, and manpower that it takes to manage a certain amount of CPU and RAM (i.e., the Total Cost of Ownership, or TCO). You can then divide the TCO by each resource unit, which reveals the specific internal cost of each resource. We’ll refer to this as the “unit list price”.
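As a minimal illustration (all of the numbers below are made-up assumptions, not figures from this article), the unit list price is simply the TCO for a pool of hardware divided across the resource units it provides:

# Hypothetical monthly TCO for one rack: hardware depreciation, server room,
# and the staff needed to operate it.
monthly_tco = 12_000.00     # assumed figure
total_vcpus = 400           # vCPUs the rack provides (assumed)
total_ram_gb = 1_600        # GB of RAM the rack provides (assumed)

# Assume an even 50/50 split of the TCO between CPU and RAM; real models
# weight CPU, RAM, storage, and network differently.
vcpu_unit_price = (monthly_tco * 0.5) / total_vcpus      # 15.00 per vCPU
ram_gb_unit_price = (monthly_tco * 0.5) / total_ram_gb   # 3.75 per GB

print(f"unit list price: {vcpu_unit_price:.2f}/vCPU, {ram_gb_unit_price:.2f}/GB RAM")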

Private cloud chargeback costs

When it comes to the private cloud, enterprise IT needs to worry about more than just controlling provisioned capacity; there’s also the need to perform resource cost allocation and chargebacks for each business unit. Once you set a resource unit cost, the next step involves calculating the price of each flavor. The calculation is simple and straightforward: the cost of a single unit is multiplied by the number of units that the flavor holds.

The actual cost of chargebacks per flavor is driven by physical resource utilization. In order to calculate the unit cost per resource, you need to multiply the “actual usage” by the “unit list price”. For example, if hardware is underutilized, the IT department needs to perform a chargeback, compensating for the idle capacity. This means that the cost per resource will be higher than its “list price”. Conversely, if you overcommit CPU usage, the cost per resource is lower.

Eventually, the flavor cost comes out to:

Flavor cost (unit list price, number of units) = unit list price x number of resource units (per flavor) x (average underutilization / average overcommit)

* The average is based on monitoring metrics of hardware utilization during a specific period of time (monthly, quarterly).

Obviously, in order to set the chargeback, IT needs to continuously monitor the average utilization and usage and be able to dynamically set flavor costs.
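One way to read that formula in code (a sketch only; the prices and the 80% utilization figure below are illustrative assumptions) is that the measured average utilization scales the list price up when hardware sits idle and down when it is overcommitted:

def flavor_cost(unit_list_price, units_in_flavor, avg_utilization):
    """Chargeback price for one flavor over the billing period.

    avg_utilization is the monitored average hardware utilization
    (e.g. monthly or quarterly). Values below 1.0 (underutilization)
    raise the effective price to recover idle capacity; values above
    1.0 (overcommit) lower it.
    """
    return unit_list_price * units_in_flavor / avg_utilization

# Hypothetical medium flavor: 2 vCPUs at a 15.00 list price per vCPU,
# on hardware that averaged 80% utilization this month.
print(flavor_cost(unit_list_price=15.00, units_in_flavor=2, avg_utilization=0.80))
# 37.5 -- higher than the 30.00 list price because idle capacity is charged back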

Capacity planning

In private clouds such as OpenStack, jumps in cost tend to occur when more hardware is purchased, requiring new employees for operational support and maintenance, so it’s preferable to avoid that as much as possible in order to maximize ROI. On the other hand, you need to make sure that you do have enough capacity, so private cloud capacity planning involves defining your utilization policy.

For example, if your policy is to keep at least 15% of capacity free to allow for sudden spikes, you’ll need to take that into account when planning purchases. When you know the utilization levels of specific hardware resources, of your environment, or even of the overall cloud, you can make better decisions regarding allocation, as well as purchasing additional hardware.
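A minimal sketch of such a policy check (the 15% headroom and the capacity figures are illustrative assumptions):

HEADROOM_POLICY = 0.15  # keep at least 15% of capacity free for spikes (assumed policy)

def needs_more_hardware(used_vcpus, total_vcpus, headroom=HEADROOM_POLICY):
    """Return True when free capacity falls below the policy threshold."""
    free_fraction = 1.0 - (used_vcpus / total_vcpus)
    return free_fraction < headroom

# Hypothetical environment: 540 of 600 vCPUs allocated leaves only 10% free,
# so it is time to plan the next hardware purchase.
print(needs_more_hardware(used_vcpus=540, total_vcpus=600))  # True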

Continuous Monitoring and Optimization

Strategically allocating your hardware doesn’t guarantee that the hardware will be properly utilized. Unlike static, traditional data center environments, an OpenStack environment’s usage and utilization need to be closely monitored due to its flexible and dynamic nature. Additionally, monitoring utilization levels, including idle capacity trends within specific business units and across the organization, is crucial for hardware allocation, remodeling, and resource or workload re-allocation. Maintaining transparency and proactive optimization will generate an efficient environment, avoid redundant purchases of new IT capacity, and maximize your OpenStack cloud ROI.


About Cloudyn

Cloudyn uses Ceilometer, a built-in OpenStack and open source metering module, to measure performance metrics for resources. Cloudyn’s enterprise customers enjoy cloud comparison, monitoring, and optimization tools. Cloudyn’s cost monitoring for OpenStack provides proper visibility and better predictability for cost, resource association, attachment to particular projects or applications, and even the ability to compare OpenStack to public cloud alternatives. Cloudyn offers these tools to help customers engage in balanced decision-making in the modern and heterogeneous enterprise IT environment.

The post How to allocate, price and utilize Your OpenStack private cloud resources appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Guest Post at May 05, 2015 06:17 AM

Ceph vs Swift – An Architect’s Perspective


When engineers talk about storage and Ceph vs Swift, they usually agree that one of them is great and the other a waste of time. Trouble is, they usually don’t agree on which one is which.

I frequently get the same question from customers who say, “We heard this Ceph thing replaces all other storage. Can’t we use that for everything?”

I’ll be discussing Ceph vs Swift from an architectural standpoint at the OpenStack Summit in Vancouver, sharing details on how to decide between them, and advising on solutions including both platforms. For now, let’s look at some of their architectural details and differences.

A Closer Look

Swift has been around since the dawn of OpenStack time – which is a bare five years ago. It is one of the core projects of OpenStack and has been tested and found stable and useful time and again.

Trouble is, Swift’s design comes up short in both transfer speed and latency. A major reason for these issues is that the traffic to and from the Swift cluster flows through the proxy servers.

Another reason many people think Ceph is the better alternative is that Swift does not provide block or file storage.

Finally, latency rears its ugly head when object replicas aren’t necessarily updated at the same time, which can cause requesters to receive an old version of an object after the first write of the new version. This behavior is known as eventual consistency.

Ceph, on the other hand, has its own set of issues, especially in a cloud context. Its multi-region support, while often cited as an advantage, is also a master-slave model. With replication possible only from master to slave, you see uneven load distribution in an infrastructure that covers more than two regions.

Ceph’s two-region design is also impractical as writes are only supported on the master, with no provision to block writes on the slave. In a worst case scenario, such a configuration can corrupt the cluster.

Another drawback to Ceph is security. RADOS clients on cloud compute nodes communicate directly with the RADOS servers over the same network Ceph uses for unencrypted replication traffic. If a Ceph client node gets compromised, an attacker could observe traffic on the storage network.

In light of Ceph’s drawbacks, you might ask why we don’t just build a Ceph cluster that spans two regions? One reason is that Ceph writes only synchronously and requires a quorum of writes to return successfully.

With those issues in mind, let’s imagine a cluster with two regions, separated by a thousand miles, 100ms latency, and a fairly slow network connection. Let’s further imagine we are writing two copies into the local region and two more to the remote region. Now the quorum of our four copies is three, which means the write request is not going to return before at least one remote copy is written. It also means that even a small write will be delayed by 0.2 seconds, and larger writes are going to be seriously hampered by the throughput restriction.
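To make that arithmetic explicit, here is a small sketch of the reasoning (the replica counts and the 100ms latency mirror the hypothetical cluster above; this is not Ceph code, just the math):

# Hypothetical two-region cluster: 2 local and 2 remote copies,
# 100 ms one-way latency to the remote region.
local_copies, remote_copies = 2, 2
total_copies = local_copies + remote_copies
one_way_latency_s = 0.100

# A simple majority quorum of 4 copies is 3 ...
quorum = total_copies // 2 + 1          # -> 3

# ... so at least one remote copy must be acknowledged before the write
# returns, adding a full round trip even to the smallest write.
extra_delay_s = 2 * one_way_latency_s if quorum > local_copies else 0.0
print(quorum, extra_delay_s)            # 3 0.2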

On the other hand, Swift in the same two-region architecture will be able to write locally first and then replicate to the remote region over a period of time due to the eventual consistency design. Swift also requires a write quorum, but the write_affinity setting can configure the cluster to force a quorum of writes to the local region, so after the local writes are finished the write returns a success status.

So how do we decide between Ceph and Swift?

How To Choose

In a single-region deployment without plans for multi-region expansion, Ceph can be the obvious choice. Mirantis OpenStack offers it as a backend for both Glance and Cinder; however, once larger scale comes into play, Swift becomes more attractive as a backend for Glance. Its multi-region capabilities may trump Ceph’s speed and stronger consistency model.

In many cases, speed is not the deciding factor, with security being a bigger issue, and that favors Swift, with its closed-off replication network. On the other hand, if the cloud infrastructure is well-protected, security may be a lower priority, putting Ceph back in the running.

Rather than choosing one over the other, it may make sense to have both alternatives in the same cloud infrastructure. For example, you could use Ceph for local high performance storage while Swift could serve as a multi-region Glance backend where replication is important but speed is not critical. However, a solution with both components incurs additional cost, so it may be desirable to standardize on one of the options.

My personal recommendation from many customer engagements — Mirantis offers architectural design assessments to assist with the collection of requirements and parameters and provides a solution that fits individual use cases and business drivers — is a thorough assessment of all business, technical, and operational factors. You can then weigh the factors and check them against the capabilities and drawbacks of both options. And who knows? You may be surprised at the winner.

Of course, this is a pretty simplistic view of the topic.  I will be discussing this topic in depth on Monday, May 18 at 5:30 at the OpenStack Summit in Vancouver. I’d love to know what you’d like to hear about; please let me know in the comments below.

The post Ceph vs Swift – An Architect’s Perspective appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Christian Huebner at May 05, 2015 05:59 AM

Cloud Platform @ Symantec

Begun the cloud war has

Happy Star Wars Day - 12 posts on "begun the cloud war has"

Read More

by David T Lin at May 05, 2015 03:07 AM

IBM OpenTech Team

Horizon Kilo Cycle Recap

Background
Horizon is the OpenStack Dashboard project. It provides a single pane of glass interface with which users can communicate with the core OpenStack services. In addition, it gives them the ability to customize and seamlessly build additional dashboards. Horizon was created as a purely Python-based project, like the rest of OpenStack. It was built on Django, a Python web framework. This was because experienced Python developers wanted some way to get a dashboard up as efficiently as possible. With the growing need for a responsive, more modern UI, Horizon developers started to look into client-side frameworks. Change was slow for several releases, with small enhancements here and there, until the Paris Summit when a demo of a dashboard written completely in AngularJS made big waves. So the journey began!

Some key features for this release:

  • Launch Instance Wizard
  • Single Sign On Portal
  • Cinder Enhancements
  • Add Serial Console Support
  • Continued internationalization
  • Magic Search
  • Better Angular Test Coverage

Launch Instance Wizard
This is a key Horizon feature which had also been cited as the major pain point in various usability studies and Ops sessions. HP stepped up to the plate and put together a great team to lead this effort. This feature is currently in “Beta” and needs to be turned on in the local_settings.py file.


# Enable the new launch instance button, then go to Project > Instances, click the left, Launch Instance button
LAUNCH_INSTANCE_NG_ENABLED = True

# To disable the legacy launch instance button
LAUNCH_INSTANCE_LEGACY_ENABLED = False

The new wizard provides a step-by-step approach to guide the user, as well as context-based help panels to provide additional information without cluttering the forms. There is validation and error handling. There are also graphs to show you real-time resource usage.

As part of this initiative, many reusable components were created, including the restyled Angular smart table, transfer tables, actions, help panels, filters and formatters, and donut charts. These will be used in the further “angularization” of Horizon.

Single Sign On Portal
Single Sign-On (SSO) makes it more convenient for users and more secure for enterprises. Keystone is no longer required to store your credentials; your identity provider does that. Users are now able to authenticate into Horizon via the OpenID Connect or SAML2 protocol, where each protocol can be paired to a single identity provider.


# Enables keystone web single-sign-on if set to True.
#WEBSSO_ENABLED = False

# Determines which authentication choice to show as default.
#WEBSSO_INITIAL_CHOICE = "credentials"

# The list of authentication mechanisms
# which include keystone federation protocols.
# Current supported protocol IDs are 'saml2' and 'oidc'
# which represent SAML 2.0, OpenID Connect respectively.
# Do not remove the mandatory credentials mechanism.
#WEBSSO_CHOICES = (
# ("credentials", _("Keystone Credentials")),
# ("oidc", _("OpenID Connect")),
# ("saml2", _("Security Assertion Markup Language")))

Cinder Enhancements
There is a continued effort to provide consistency between the Cinder v2 service and the Horizon dashboard. Several features were added this release:

  • Add bootable flag: Expose the Cinder CLI command to the UI
  • Manage/unmanage volumes: Administrators can access existing volumes not managed by OpenStack.
  • Transfer volumes: Transfer volumes from one owner to another. The donor will create a transfer request and send the ID and authorization key to the recipient.
  • Expose encryption metadata: Users can now see the encryption metadata associated with a volume.

Add Serial Console Support
Previously, Horizon only supported the VNC proxy, which allowed users to connect to their launched instances. However, not all hypervisor platforms support VNC (for example, System z). This release added a full-screen serial console.

Continued Internationalization
A pseudo-translation tool is now available in Horizon. It allows Horizon developers to find untranslated strings visually on the dashboard. The tool also helps find hard-coded strings, concatenated strings, and improper ASCII character handling. To run a pseudo-translation for German (de), run the following commands:

./run_tests.sh --makemessages
./run_tests.sh --pseudo de
./run_tests.sh --compilemessages

Magic Search
Today’s search is based on the jQuery Tablesorter plugin. It is very rudimentary and only works on the visible (HTML) data on the page; we cannot filter data across multiple pages. Horizon will implement search capability based on Eucalyptus’s Magic Search widget. This is an Angular directive that provides a UI for both faceted filtering and typeahead search. It will depend on service API support. Moving in this direction will allow us to mask the inconsistent API support and allows for hooking up Elasticsearch down the road. However, the widget is not yet hooked up to any tables; that is slated for Liberty.

Better Angular Test Coverage
We find ourselves writing more JavaScript as Horizon shifts toward the client side. Better test coverage and quality control for the new Angular work just make sense. JSHint globals have been updated to run at the gate to ensure quality code. Additionally, new functionality coming in is required to be paired with a Jasmine spec for unit testing.

Things to Look Forward to in Liberty

Angularization of Tables
Launch Instance wizard is the first Angular piece to land. Work has already started to redo the Instance Details page and the Identity Users tables. This patch will pave the way for how tables are structured and written. We will also take advantage of the Magic Search widget for filtering. Considering that Horizon is composed primarily of tables, this work is a good starting point with a big imprint.

Single Page Navigation
There have already been mockups flying around for redesigning and restructuring the navigation piece. Navigation is crucial because it glues the various panels together, and it is currently the blocker for moving us toward a single-page application. We want to be able to navigate using the accordion sidebar without triggering a full-page refresh. This will facilitate a smoother, more pleasant user experience.

Network Workflow Improvements
The current setup for networks is very confusing. You have to understand Neutron in-depth in order to create and link various components together (network, subnet, router). The goal is to add a new Network Configuration form to walk Administrators through the process by asking for minimal input from the user and configuring as much as possible behind the scenes. This panel will also provide help text and error handling.

Multi-domain identity support (K2K)
Maintaining multiple projects across clouds is a popular use case. However, delivering seamless, cross-cloud authorization is not an easy feat. Since Juno, Keystone has supported identity federation between two or more Keystone instances. However, it is in an “experimental” stage and still improving. This feature can be extended to support project federation across two or more Keystone instances. Horizon should expose this capability. The login process will show a list of all the regions across the Keystone instances. When the user selects a region, the code will retrieve a properly scoped token from the corresponding Keystone instance.

Plugin-model for Angular Panels
We learned a lot from the Launch Instance work, which paved the way for further Angular work; examples include code format, file structure, and a plugin model for new dashboards. Traditionally, JavaScript only existed in the horizon library, but as we convert Python panels to Angular ones, there is a need to have JavaScript in the dashboard as well. In addition, we are taking advantage of the existing plugin architecture for Angular panels. This would allow deployers to write their own plugins and/or disable panels they do not want to include.

Documentation
Documentation for the new Angular work is a bit sparse. Newcomers may struggle to understand how the new client-side architecture works. We will provide guides on how everything works, Testing Best Practices, and more commentary. We will continue to use ngDoc to document the AngularJS source code.

Watch for us at the Vancouver Summit and future blogs!
Cindy Lu
Thai Tran

The post Horizon Kilo Cycle Recap appeared first on IBM OpenTech.

by clu at May 05, 2015 12:03 AM

May 04, 2015

OpenStack Superuser

Rebuilding an OpenStack instance and keeping the same fixed IP

OpenStack, and in particular the compute service, Nova, has a useful rebuild function that allows you to rebuild an instance from a fresh image while maintaining the same fixed and floating IP addresses, amongst other metadata. However if you have a shared storage back end, such as Ceph, you're out of luck as this function is not for you. Fortunately, there is another way.

Prepare for the Rebuild:

Note the fixed IP address of the instance that you wish to rebuild and the network ID:

$ nova show demoinstance0 | grep network
| DemoTutorial network                       | 192.168.24.14, 216.58.220.133                     |
$ export FIXED_IP=192.168.24.14
$ neutron floatingip-list | grep 216.58.220.133
| ee7ecd21-bd93-4f89-a220-b00b04ef6753 |                  | 216.58.220.133      |
$ export FLOATIP_ID=ee7ecd21-bd93-4f89-a220-b00b04ef6753
$ neutron net-show DemoTutorial | grep " id "
| id              | 9068dff2-9f7e-4a72-9607-0e1421a78d0d |
$ export OS_NET=9068dff2-9f7e-4a72-9607-0e1421a78d0d

You now need to delete the instance that you wish to rebuild:

$ nova delete demoinstance0
Request to delete server demoinstance0 has been accepted.

Manually Prepare the Networking:

Now you need to re-create the port and re-assign the floating IP, if it had one:

$ neutron port-create --name demoinstance0 --fixed-ip ip_address=$FIXED_IP $OS_NET
Created a new port:
+-----------------------+---------------------------------------------------------------------------------------+
| Field                 | Value                                                                                 |
+-----------------------+---------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                  |
| allowed_address_pairs |                                                                                       |
| binding:vnic_type     | normal                                                                                |
| device_id             |                                                                                       |
| device_owner          |                                                                                       |
| fixed_ips             | {"subnet_id": "eb5db27f-edad-480e-92cb-1f8fec8848a8", "ip_address": "192.168.24.14"}  |
| id                    | c1927578-451b-4682-8888-55c7163898a4                                                  |
| mac_address           | fa:16:3e:5a:39:67                                                                     |
| name                  | demoinstance0                                                                         |
| network_id            | 9068dff2-9f7e-4a72-9607-0e1421a78d0d                                                  |
| security_groups       | 5898c15a-4670-429b-a414-9f59671c4d8b                                                  |
| status                | DOWN                                                                                  |
| tenant_id             | gsu7j52c50804cf3aad71b92e6ced65e                                                      |
+-----------------------+---------------------------------------------------------------------------------------+
$ export OS_PORT=c1927578-451b-4682-8888-55c7163898a4
$ neutron floatingip-associate $FLOATIP_ID $OS_PORT
Associated floating IP ee7ecd21-bd93-4f89-a220-b00b04ef6753
$ neutron floatingip-list | grep $FIXED_IP
| ee7ecd21-bd93-4f89-a220-b00b04ef6753 | 192.168.24.14   | 216.58.220.133     | c1927578-451b-4682-8888-55c7163898a4 |

Re-build!

Now you need to boot the instance again and specify port you created:

$ nova boot --flavor=m1.tiny --image=MyImage --nic port-id=$OS_PORT demoinstance0
$ nova show demoinstance0 | grep network
| DemoTutorial network                       | 192.168.24.14, 216.58.220.133                     |

Now that your rebuild is complete, you’ve got your old IPs back and you're done. Enjoy :-)

This post was originally published on McWhirter's blog (https://mcwhirter.com.au/craige/blog/2015/Rebuilding_An_OpenStack_Instance_andKeeping_the_Same_Fixed_IP/). Superuser is always looking for interesting content; email us at editor@superuser.com to get involved.

Cover photo by Yann // CC BY NC-ND 2.0

by Craige McWhirter at May 04, 2015 09:20 PM

IBM OpenTech Team

Women in OPENTech: The Why, Where and How of Getting Started

The discussion of Women in Tech is gaining ground and it’s more inclusive than ever — women with skills in busdev, marketing, finance, and design (to name a few fields) are all being recognized for our impact. Inclusiveness in roles aside, there is another trend emerging — a laser-like focus on critical must-have skills for today’s developers: open technology.

For any woman in the computer sciences, getting involved in OpenTech is one of the best career decisions you can make. For today’s leading developers, it’s not enough to leverage free-for-use quasi-open tech. We’re talking about a sustained ability to engage in the leading open technology communities, contributing to the advancement of the technology as well as an ability to consume the technology within solutions. Ongoing collaboration within today’s top open source communities is a career must have.

Why OpenTech and why Open Governance-based communities?

OpenTech makes up around 70-80% of the code within today’s enterprise architectures, making open technology skills critical for today’s development managers. Contributing to open source communities is a great place to build your skills and your network simultaneously. And because there is a clear structure for how to work within the community, the quality of your coding effort matters more than gender or other subjective criteria. When this is not the case, the fact that you’ve chosen to work in an openly governed community (as opposed to an ad hoc or single-entity controlled, free-for-use project) means you’ll find there are resources and support to help you address any bad behaviors you might encounter – if you find you need help.

What’s even better is that having OSS skills enables you to better balance your commitments to family and career. You can stay connected to the developer community in your skill area through open source, keep your resume relevant, and keep your skills honed. And because these communities employ open governance models, they are not single-vendor oriented — this means that the skills you build are transferable from one job to another when you decide to make a change.

All these benefits aside, there are still too few women taking the CompSci path today and even lower percentages in open technology communities. After doing a survey of peers, here are three communities that you may want to consider. Their technologies are widely used, they are all openly governed, and they are doubling down on their efforts to help new members feel comfortable:

Interested in Cloud technologies? Check out OpenStack.

Cloud technologies are a must-have for today’s enterprise, making skills around open technologies for cloud a great investment for today’s developers. OpenStack is the single largest and most successful open source community for building public and private cloud infrastructures. Plus, the number of member companies participating in OpenStack development is dizzying. As OpenStack usage in the enterprise grows, so do the job opportunities. From IBM to Walmart to eBay and others, the growing user community means there is huge job potential for anyone learning how to use OpenStack. The organization even has a job board so member companies and developers can quickly find each other.

They have great resources for getting started at OpenStack (in multiple languages too!). Plus, because it’s a relatively new organization and it’s growing so fast, you’ll find you are not alone – lots of developers will be just getting started along with you.

If you are in the Vancouver area May 18-22, you might even pop into the next OpenStack Summit - it’s one of the most vibrant events in cloud development you’ll find out there, and this event is five days that can’t be beat in terms of meeting the people you’ll work with and getting up to speed on what the community is doing. The OpenStack technical sessions are a great way to kick off your cloud development career!

Interested in Application Development? The jQuery Foundation is the single largest openly governed .js community in the market today and a great place to start with open JavaScript frameworks.

JavaScript is not always easy to work with; there are so many options and new innovative frameworks coming forward every day. The fact is, over 30% of Internet sites use jQuery. So do many of the innovative .js frameworks. In addition, the number and types of jQuery projects are growing, based on the Foundation’s philosophy of embracing and extending support across the JavaScript community. For instance, their new ‘Globalize’ project just delivered a technology that gives any .js framework wanting to ensure global relevance an easier method for international standards compliance. Simply put, it opens up a world of possibilities beyond the jQuery projects themselves.

It’s a great place to get started. You can join the community at their upcoming conference in California, where you’ll meet folks working with all sorts of JavaScript frameworks. Or you can check out their Welcome materials and FAQs.

Interested in Big Data, Data Analytics or Mobile App dev? Check out the Apache Software Foundation’s (ASF) getting started page – their advice can’t be beat and they are working to make Open Source collaborations more newcomer friendly.

Why Apache for Big Data and Mobile? With every new person or device becoming connected, more and more data is being collected. Enterprises want to analyze the data so they can provide more value to their customers. If you are a web developer looking to quickly extend your skills into mobile app dev, Apache Cordova is a project for hybrid mobile development involving most of the world’s leading mobile platform vendors, meaning your web development skills can easily be extended to mobile app dev – you write an app and it can be deployed on multiple devices without rewriting it. If you are a woman who has other STEM-related skills in math and statistics, great! There is a shortage of data scientists. Apache is home to some of the most popular Big Data and analytics-related OSS projects like Hadoop, Kafka, CouchDB, and the up-and-coming graph technology, TinkerPop.

In fact, Apache is probably one of the best places to get started on ANY open source project! There are a number of reasons:

First, they have a large number of technology projects to choose from, offering lots of choice, and as one of the most openly transparent of all OSS efforts, they set the standard for how to work collaboratively from a distance. This helps Apache continue to grow their cadre of projects year over year, and it means your ‘how to’ skills are transferable from project to project. If you learn how to participate in Apache Kafka, and then when you switch jobs and companies your employer asks you to get involved in Apache Spark, you already know the Apache philosophy and processes, making it easier to get started in the new community.

Second, they have a program for newcomers to Apache OSS development through a regional mentor program, as well as an initiative specifically meant to grow the number of women working on Apache projects. IBM had the opportunity to sponsor the recent ‘Women of Apache’ luncheon at April’s ApacheCon 2015 conference and saw that their effort is one of the most inclusive women-in-tech programs, explicitly inviting ‘any person who identifies as female’ to join the discussion.

The Apache team recognizes it can be difficult to get started and build collaborative relationships, so they are investing in making it easier. When they say, ‘Ask any question’, they mean it. Their robust project incubator program means they are extremely skilled at coaching new OSS project teams into what they call ‘The Apache Way’. You can find out more on their Newcomers page.

Regardless of which community you choose, check out each community’s ‘getting started’ guides to learn more about the benefits of working within OpenTech communities in general. Oh and to you guys out there, all these benefits apply to you as well!

The post Women in OPENTech: The Why, Where and How of Getting Started appeared first on IBM OpenTech.

by Eli Cleary at May 04, 2015 07:54 PM

Nathan Kinder

Leveraging Linux Platform for Identity Management in Enterprise Web Applications

I gave a presentation this past weekend at Linuxfest Northwest on the topic of using a collection of Apache HTTPD modules and SSSD to provide identity management for web applications. This is an approach that is currently used by OpenStack, ManageIQ, and Foreman to allow the Apache HTTPD web server to handle all of the authentication and retrieval of user identity data and expose it to the web applications.

This is a nice approach that can remove the complexity of interfacing directly with a centralized identity source from the web application itself, with all of the great advantages that SSSD and the underlying Linux platform provides.  It’s particularly useful when using FreeIPA, as you can get Kerberos single sign-on and use centralized host-based access control (HBAC) to control access to your web applications.  If you’re developing web applications that have to deal with any sort of authentication and authorization (which is pretty much everything), it’s worth taking a look.

The slides from the presentation are available here.

There is also a great write-up about this approach from FreeIPA developer Jan Pazdziora on the FreeIPA wiki here.

by Nathan Kinder at May 04, 2015 06:31 PM

Opensource.com

The Kilo release is here, and other OpenStack news

Interested in keeping track of what's happening in the open source cloud? Opensource.com is your source for news in OpenStack, the open source cloud infrastructure project.

by Jason Baker at May 04, 2015 07:00 AM

May 01, 2015

OpenStack Superuser

Superuser weekend reading

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community.

Got something you think we should highlight? Tweet, blog, or email us!

In Case You Missed It

It's a big week for the OpenStack Foundation with the release of Kilo.

For context, check out Barb Darrow over at Fortune. Her main takeaway about the 11th release centers on the new identity (ID) federation capability "that, in theory, will let a customer in California use her local OpenStack cloud for everyday work, but if the load spikes, allocate jobs to other OpenStack clouds either locally or far, far away." Darrow, skeptical of how different brands of cloud and time zones will play together nicely, goes on to talk about the massive investment from the big dogs (AWS, Microsoft) that is shaping the data center landscape. Calling the Foundation "scrappy," she concludes "So the OpenStack gang, if they want to play in public cloud, need to make sure that this federation plan works as promised."

It's always nice to be right. Randy Bias over at the Cloud Scaling blog predicted the recent AWS earnings and now puts into context what this could mean for the rest of the cloud world. "I’ve always been bullish on public cloud and I think these numbers reinforce that it’s potentially a massively disruptive business model. Similarly, I’ve been disappointed that there has been considerable knee-jerk resistance to looking at AWS as a partner, particularly in OpenStack land."

For the 30,000-foot-view of the week: father of the iPod and internet-of-things pioneer Tony Fadell recently blew out his hamstrings while water skiing. Better data - managed in the cloud - might have prevented the Nest founder's downfall, he tells the Wall Street Journal. "As a male over 30, in frigid water, the odds of seriously hurting myself were incredibly high. One medical journal listed water skiing as among the most common causes of my injury—alongside bull riding. An enormous amount of information was all right there, it just wasn’t in front of me right when I needed it."

If you're coming to the OpenStack Vancouver Summit and looking for something fun to do, Rich Bowen suggests Geocaching. There are something like 500 of these treasures planted around the Vancouver convention center, Bowen notes. As a map lover (cartophile?), I can attest that this is a good way to visit a new place beyond eyeballing the monuments. "If you’re interested in Geocaching in Vancouver, let me know, and we’ll try to set something up," says Bowen, who works for Red Hat. Who's in?

We feature user conversations throughout the week, so tweet, blog, or email us your thoughts!


Cover Photo via morguefile.com

by Nicole Martinelli at May 01, 2015 10:54 PM

Snapshot of the OpenStack community behind Kilo

The OpenStack community that helped create the Kilo release has done some seriously heavy lifting.

For the 11th release, there were more contributors, more companies involved and more work across time zones than ever before.

Some 1,500 developers put their weight behind Kilo, merging over 19,500 patches, dispatching with nearly 14,000 tickets and exchanging more than 20,000 emails. The total number of contributors increased five percent over the previous release, with core contributors staying stable at around 200. This latest release, however, bulked up the number of casual developers by 13 percent.


“It usually happens that a very small number of contributors are responsible for 80 percent of the code,” says Stefano Maffulli, developer advocate at the OpenStack Foundation. “What makes OpenStack unique is the large number of people who are responsible for the remaining 20 percent. We’re attracting and retaining a large number of new contributors even if they make small and infrequent patches or fix bugs.”


Those contributors added heft to this release from around the world. A timezone analysis reveals a “continuous development community,” says Daniel Izquierdo Cortázar, chief data officer at Bitergia which maintains the OpenStack Activity Board. There are three main hubs of activity, he notes: the U.S., Europe and Africa, and Asia, creating a hive of development that never sleeps.

Cover Photo via Morguefile.

by Superuser at May 01, 2015 07:49 PM

OpenStack Blog

OpenStack Community Weekly Newsletter (Apr 24 – May 1)


Superuser Awards final faceoff: your vote counts

Four great finalists, but only one can win: it’s a close call as the voting deadline approaches for this edition of the Superuser Awards.

Snapshot of the OpenStack community behind Kilo

The OpenStack community that helped create the Kilo release has done some seriously heavy lifting. For the 11th release, there were more contributors, more companies involved and more work across time zones than ever before.

The Road to Vancouver

Relevant Conversations

Deadlines and Development Priorities

  • Relax for a day :)

Reports from Previous Events

Security Advisories and Notices

Tips ‘n Tricks

Upcoming Events

OpenStack Israel CFP Voting is Open

PyCon-AU Openstack miniconf CFP closes May 8th

Other News

OpenStack Reactions


Kilo released. Another smooth ride to the next one.

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

 

by Stefano Maffulli at May 01, 2015 07:14 PM

Tesora Corp

Short Stack: OpenStack Kilo release is here, Open source eating the database market

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it’s about OpenStack, we’ll find the best links we can to share with you every week. If you like what you see, […]

The post Short Stack: OpenStack Kilo release is here, Open source eating the database market appeared first on Tesora.

by Leslie Barron at May 01, 2015 02:03 PM

OpenStack Superuser

Superuser Awards final faceoff: your vote counts

Four great finalists, but only one can win: it’s a close call as the voting deadline approaches for this edition of the Superuser Awards.

Based on an impressive collection of nominations, the Superuser Editorial Advisory Board conducted the first round of judging and narrowed the pool to Comcast, eBay Inc., the National Supercomputer Center in Guangzhou, and Walmart.

When evaluating finalists for the Superuser Award, take into account the unique nature of use case(s), as well as integrations and applications of OpenStack performed by a particular team. Voting is limited to one choice per community member.

This is the first edition of the Superuser Awards to incorporate community voting. As it stands now, the frontrunners are very close. Help determine who should take the stage along with past winner CERN's Tim Bell at the Vancouver Summit.

If you haven’t voted yet, please weigh in.

You have until Monday, May 4 at 11:59 p.m. Central Time Zone to make your vote count!

For more information about the Superuser Awards, please visit http://superuser.openstack.org/awards.

Cover Photo by The Magic Tuba Pixie // CC BY NC

by Superuser at May 01, 2015 04:35 AM

April 30, 2015

Ronald Bradford

Understanding the different Openstack tox configs

Openstack projects use tox to manage virtual environments and run unit tests which I talked about here.

In this example I am using the oslo.config repo to look at the various tox configs used in OpenStack. The Governance Project Testing Interface is a starting point to read about project guidelines.

Get the current codebase

$ git clone git://git.openstack.org/openstack/oslo.config
$ cd oslo.config/
$ git rev-parse HEAD
7b1e157aeea426c58e3d4c9a76a231f0e5bc8241

The last line helps me identify the specific git commit I am working with. When moving between branches or when looking at a repo that may be a few days old, if I need to recreate this exact codebase all I need is this commit hash. For example, I can look at a prior version at 3ab403925e9cb2928ba8e893c4d0f4a6f4b27d72.

$ git checkout 3ab403925e9cb2928ba8e893c4d0f4a6f4b27d72
Note: checking out '3ab403925e9cb2928ba8e893c4d0f4a6f4b27d72'.
...
HEAD is now at 3ab4039... Merge "Added Raw Value Loading to Test Fixture"
$ git rev-parse HEAD
3ab403925e9cb2928ba8e893c4d0f4a6f4b27d72

To revert back to the current repo master.

$ git checkout master
Previous HEAD position was 3ab4039... Merge "Added Raw Value Loading to Test Fixture"
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.
$ git rev-parse HEAD
7b1e157aeea426c58e3d4c9a76a231f0e5bc8241

NOTE: You don’t need to specify the full commit hash. In this example 3ab4039 also works.

tox configuration

The tox.ini file contains various config sections.

  • [tox] are global options
  • [testenv] are the default options for each virtual environment
  • [testenv:NAME] are the specific test environments

tox.ini

$ cat tox.ini
[tox]
distribute = False
envlist = py33,py34,py26,py27,pep8

[testenv]
setenv = VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'

[testenv:pep8]
commands = flake8

[testenv:cover]
setenv = VIRTUAL_ENV={envdir}
commands =
  python setup.py testr --coverage

[testenv:venv]
commands = {posargs}

[testenv:docs]
commands = python setup.py build_sphinx

[flake8]
show-source = True
exclude = .tox,dist,doc,*.egg,build

NOTE: These files differ between projects. See later for an example comparison with python-openstackclient, nova and horizon.

Prerequisites

The default [testenv] options first refer to requirements.txt and test-requirements.txt, which define the specific packages and required versions. Versions can be a minimum (e.g. netaddr>=0.7.12), a range (e.g. stevedore>=1.3.0,<1.4.0) or something more specific (e.g. pbr>=0.6,!=0.7,<1.0).

requirements.txt

$ cat requirements.txt
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

pbr>=0.6,!=0.7,<1.0
argparse
netaddr>=0.7.12
six>=1.9.0
stevedore>=1.3.0,<1.4.0  # Apache-2.0

test-requirements.txt

$ cat test-requirements.txt
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

hacking>=0.10.0,<0.11

discover
fixtures>=0.3.14
python-subunit>=0.0.18
testrepository>=0.0.18
testscenarios>=0.4
testtools>=0.9.36,!=1.2.0
oslotest>=1.5.1,<1.6.0  # Apache-2.0

# when we can require tox>= 1.4, this can go into tox.ini:
#  [testenv:cover]
#  deps = {[testenv]deps} coverage
coverage>=3.6

# this is required for the docs build jobs
sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
oslosphinx>=2.5.0,<2.6.0  # Apache-2.0

# Required only for tests
oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0

# mocking framework
mock>=1.0

Style Guidelines (PEP8)

The first test we look at is pep8, run by flake8. This starts by reviewing the code against the Style Guide for Python Code and any specific OpenStack style guidelines.

$ tox -e pep8
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
pep8 inst-nodeps: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
pep8 runtests: PYTHONHASHSEED='3973315668'
pep8 runtests: commands[0] | flake8
_________ summary ___________
  pep8: commands succeeded
  congratulations :)

As with all unit tests you are wanting to see "The bar is green, the code is clean". An example of a failing test would look like:

$ tox -e pep8
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
pep8 inst-nodeps: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
pep8 runtests: PYTHONHASHSEED='820640265'
pep8 runtests: commands[0] | flake8
./oslo_config/types.py:51:31: E702 multiple statements on one line (semicolon)
        self.choices = choices; self.quotes = quotes
                              ^
ERROR: InvocationError: '/home/rbradfor/oslo.config/.tox/pep8/bin/flake8'
________ summary _________
ERROR:   pep8: commands failed


$ tox -e pep8
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
pep8 inst-nodeps: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
pep8 runtests: PYTHONHASHSEED='1937373059'
pep8 runtests: commands[0] | flake8
./oslo_config/types.py:52:13: E113 unexpected indentation
            self.quotes = quotes
            ^
./oslo_config/types.py:52:13: E901 IndentationError: unexpected indent
            self.quotes = quotes
            ^
ERROR: InvocationError: '/home/rbradfor/oslo.config/.tox/pep8/bin/flake8'
__________ summary __________
ERROR:   pep8: commands failed

Running tests

To run all tests for a given Python version you just specify said version.

$ tox -e py27
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
py27 inst-nodeps: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
py27 runtests: PYTHONHASHSEED='1822382852'
py27 runtests: commands[0] | python setup.py testr --slowest --testr-args=
running testr
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ . --list
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpbHjMgm
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpLA0oO0
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpMqT_s_
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpyJLbu8
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpF5KG5t
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpebkBDp
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpscXbNV
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpTv0jAn
Ran 1182 tests in 0.475s (-0.068s)
PASSED (id=4)
Slowest Tests
Test id                                                                                                Runtime (s)
-----------------------------------------------------------------------------------------------------  -----------
tests.test_cfg.ConfigFileOptsTestCase.test_conf_file_dict_value_no_colon                               0.029
oslo_config.tests.test_cfg.ConfigFileReloadTestCase.test_conf_files_reload_default                     0.024
oslo_config.tests.test_cfg.SubCommandTestCase.test_sub_command_resparse                                0.016
tests.test_cfg.ConfigFileOptsTestCase.test_conf_file_dict_ignore_dname                                 0.016
tests.test_cfg.ConfigFileOptsTestCase.test_conf_file_list_spaces_use_dgroup_and_dname                  0.016
tests.test_cfg.MultipleDeprecatedCliOptionsTestCase.test_conf_file_override_use_deprecated_multi_opts  0.015
oslo_config.tests.test_cfg.OverridesTestCase.test_default_override                                     0.014
oslo_config.tests.test_cfg.ConfigFileOptsTestCase.test_conf_file_list_default_wrong_type               0.014
oslo_config.tests.test_cfg.RequiredOptsTestCase.test_missing_required_group_opt                        0.012
tests.test_generator.GeneratorTestCase.test_generate(long_help,output_file)                            0.011
________ summary _______
  py27: commands succeeded
  congratulations :)

You can pass a specific test or tests via the command line, identifying the names by looking at the test classes.

$ ls -l oslo_config/tests/[^_]*.py
-rw-rw-r-- 1 rbradfor rbradfor  12788 Apr 30 12:46 oslo_config/tests/test_cfgfilter.py
-rw-rw-r-- 1 rbradfor rbradfor 144538 Apr 30 12:46 oslo_config/tests/test_cfg.py
-rw-rw-r-- 1 rbradfor rbradfor   4938 Apr 30 12:46 oslo_config/tests/test_fixture.py
-rw-rw-r-- 1 rbradfor rbradfor  16479 Apr 30 12:46 oslo_config/tests/test_generator.py
-rw-rw-r-- 1 rbradfor rbradfor   3865 Apr 30 12:46 oslo_config/tests/test_iniparser.py
-rw-rw-r-- 1 rbradfor rbradfor  13259 Apr 30 12:46 oslo_config/tests/test_types.py

NOTE: This project has a top-level /tests directory which uses the old import API and I am informed is being removed for Liberty.

$ tox -e py27 -- test_types
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
py27 create: /home/rbradfor/oslo.config/.tox/py27
py27 installdeps: -r/home/rbradfor/oslo.config/requirements.txt, -r/home/rbradfor/oslo.config/test-requirements.txt
py27 inst: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
py27 runtests: PYTHONHASHSEED='1505218584'
py27 runtests: commands[0] | python setup.py testr --slowest --testr-args=test_types
running testr
...
Ran 186 (-996) tests in 0.100s (-0.334s)
PASSED (id=6)
Slowest Tests
Test id                                                                                     Runtime (s)
------------------------------------------------------------------------------------------  -----------
tests.test_types.BooleanTypeTests.test_other_values_produce_error                           0.001
oslo_config.tests.test_types.DictTypeTests.test_equal                                       0.001
oslo_config.tests.test_types.BooleanTypeTests.test_not_equal_to_other_class                 0.000
tests.test_types.IntegerTypeTests.test_positive_values_are_valid                            0.000
tests.test_types.DictTypeTests.test_dict_of_dicts                                           0.000
oslo_config.tests.test_types.ListTypeTests.test_not_equal_with_non_equal_custom_item_types  0.000
tests.test_types.IntegerTypeTests.test_with_max_and_min                                     0.000
oslo_config.tests.test_types.FloatTypeTests.test_exponential_format                         0.000
tests.test_types.BooleanTypeTests.test_yes                                                  0.000
tests.test_types.ListTypeTests.test_repr                                                    0.000
________ summary ________
  py27: commands succeeded
  congratulations :)
$ echo $?
0
$ tox -epy27 -- '(test_types|test_generator)'

A failing test is going to produce the following.

$ tox -epy27 -- test_types
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
py27 create: /home/rbradfor/oslo.config/.tox/py27
py27 installdeps: -r/home/rbradfor/oslo.config/requirements.txt, -r/home/rbradfor/oslo.config/test-requirements.txt
py27 inst: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
py27 runtests: PYTHONHASHSEED='3672144590'
py27 runtests: commands[0] | python setup.py testr --slowest --testr-args=test_types
running testr
...
======================================================================
FAIL: oslo_config.tests.test_types.IPv4AddressTypeTests.test_ipv4_address
tags: worker-0
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rbradfor/oslo.config/oslo_config/tests/test_types.py", line 386, in test_ipv4_address
    self.assertConvertedValue('192.168.0.1', '192.168.0.2')
  File "/home/rbradfor/oslo.config/oslo_config/tests/test_types.py", line 27, in assertConvertedValue
    self.assertEqual(expected, self.type_instance(s))
  File "/usr/lib/python2.7/unittest/case.py", line 515, in assertEqual
    assertion_func(first, second, msg=msg)
  File "/usr/lib/python2.7/unittest/case.py", line 508, in _baseAssertEqual
    raise self.failureException(msg)
AssertionError: '192.168.0.2' != '192.168.0.1'
Ran 186 (-117) tests in 0.102s (-0.046s)
FAILED (id=8, failures=2 (+2))
error: testr failed (1)
ERROR: InvocationError: '/home/rbradfor/oslo.config/.tox/py27/bin/python setup.py testr --slowest --testr-args=test_types'
________ summary ________
ERROR:   py27: commands failed
$ echo $?
1

Testr

This is a wrapper for the underlying testr command (see the commands line of the [testenv] section). We can reproduce what it runs manually with:

$ source .tox/py27/bin/activate
(py27)$  python setup.py testr
running testr
...
Ran 1182 tests in 0.443s (-0.025s)
PASSED (id=5)

The current tox.ini config includes the --slowest argument, which is self-explanatory.

One benefit of running testr directly comes when writing failing tests first (i.e. the Test Driven Development (TDD) approach to agile software development): you do not really want to run the whole suite just to see one failure. The -f option helps.

$ testr run
...
Ran 1182 (+637) tests in 2.058s (+1.064s)
FAILED (id=12, failures=2 (+1))
$ testr run -- -f
...
Ran 545 (-637) tests in 1.075s (-0.900s)
FAILED (id=13, failures=1 (-1))
$ testr run test_types -- -f
...
Ran 34 (-152) tests in 0.030s (-0.000s)
FAILED (id=18, failures=1 (-1))

NOTE: It takes a while to get used to the tox and testr syntax, in particular where the double dash (--) goes. Once you work it out you realize you can reproduce this with tox directly using:

$ tox -e py27 -- test_types -- -f
...
Ran 151 (+117) tests in 0.125s (+0.120s)
FAILED (id=19, failures=2 (+1))
error: testr failed (1)
ERROR: InvocationError: '/home/rbradfor/oslo.config/.tox/py27/bin/python setup.py testr --slowest --testr-args=test_types -- -f'
________ summary ________
ERROR:   py27: commands failed

The reason for dropping into an activated virtual environment and running testr manually is that tox destroys and recreates your virtual environment each time the command is executed, which is time consuming.
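
For example, once you have a failure you can narrow the filter right down to a single test id taken from the failure output, which keeps the feedback loop very short (the id below is the failing test from the earlier run):

(py27)$ testr run oslo_config.tests.test_types.IPv4AddressTypeTests.test_ipv4_address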

The testr source can be found in the testrepository project, as identified by (py27)$ more `which testr`.

Testr syntax

Testr has multiple options and commands you can read about via various help options:

$ testr help
$ testr quickstart
$ testr commands
$ testr help run

Usage: testr run [options] testfilters* doubledash? testargs*
...

While debugging several testr commands were useful.

List all tests

$ testr list-tests
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ . --list
oslo_config.tests.test_cfg.ChoicesTestCase.test_choice_bad
oslo_config.tests.test_cfg.ChoicesTestCase.test_choice_default
oslo_config.tests.test_cfg.ChoicesTestCase.test_choice_good
oslo_config.tests.test_cfg.ChoicesTestCase.test_conf_file_bad_choice_value
oslo_config.tests.test_cfg.ChoicesTestCase.test_conf_file_choice_bad_default
oslo_config.tests.test_cfg.ChoicesTestCase.test_conf_file_choice_empty_value
...

(py27)$ testr list-tests | wc -l
1183

1183 lines, minus the leading "running=…" line, corresponds to the 1182 tests run.

Last run

This enables you to review the results of the last test run (for example, from a separate terminal) and also get the correct exit code.

(py27)$ testr last
Ran 1182 tests in 0.575s (+0.099s)
PASSED (id=27)
(py27)$ echo $?
0
(py27)$ testr last
======================================================================
FAIL: oslo_config.tests.test_types.IPAddressTypeTests.test_ipv4_address
tags: worker-6
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rbradfor/oslo.config/oslo_config/tests/test_types.py", line 386, in test_ipv4_address
    self.assertConvertedValue('192.168.0.1', '192.168.0.2')
  File "/home/rbradfor/oslo.config/oslo_config/tests/test_types.py", line 27, in assertConvertedValue
    self.assertEqual(expected, self.type_instance(s))
  File "/usr/lib/python2.7/unittest/case.py", line 515, in assertEqual
    assertion_func(first, second, msg=msg)
  File "/usr/lib/python2.7/unittest/case.py", line 508, in _baseAssertEqual
    raise self.failureException(msg)
AssertionError: '192.168.0.2' != '192.168.0.1'
======================================================================
FAIL: oslo_config.tests.test_types.IPv4AddressTypeTests.test_ipv4_address
tags: worker-7
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rbradfor/oslo.config/oslo_config/tests/test_types.py", line 386, in test_ipv4_address
    self.assertConvertedValue('192.168.0.1', '192.168.0.2')
  File "/home/rbradfor/oslo.config/oslo_config/tests/test_types.py", line 27, in assertConvertedValue
    self.assertEqual(expected, self.type_instance(s))
  File "/usr/lib/python2.7/unittest/case.py", line 515, in assertEqual
    assertion_func(first, second, msg=msg)
  File "/usr/lib/python2.7/unittest/case.py", line 508, in _baseAssertEqual
    raise self.failureException(msg)
AssertionError: '192.168.0.2' != '192.168.0.1'
Ran 1182 tests in 0.445s (-0.130s)
FAILED (id=28, failures=2 (+2))
(py27)$ echo $?
1

Code Coverage

The tox.ini also provides a section for code coverage.

$ tox -e cover
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
cover inst-nodeps: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
cover runtests: PYTHONHASHSEED='546795877'
cover runtests: commands[0] | python setup.py testr --coverage
running testr
...
Ran 1182 tests in 0.493s (-0.046s)
PASSED (id=26)
_________ summary _________
  cover: commands succeeded
  congratulations :)

Which is a wrapper for:

$ python setup.py testr --coverage
...
Ran 1182 tests in 0.592s (+0.116s)
PASSED (id=27)

These commands produce a /cover directory (which is not currently in .gitignore). The contents are HTML. I suspect there is an option for a more CLI-readable format; however, for simplicity we publish the HTML to an available running web server.
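
If you only want a quick text summary rather than the HTML report, coverage.py itself can produce one from the same data. This is a minimal sketch using the cover virtualenv that tox created; if the run left per-worker .coverage.* data files you need to combine them first:

$ source .tox/cover/bin/activate
(cover)$ coverage combine
(cover)$ coverage report -m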

Apache Setup

In order to view what code coverage produces I configured Apache with a separate port and vhost in this devstack environment.

$ echo "ServerName "`hostname` | sudo tee /etc/apache2/conf-enabled/servername.conf
$ echo "Listen 81

<VirtualHost *:81>
    DocumentRoot /var/www/html
    <Directory /var/www/html>
        Options FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>

    LogLevel warn
    ErrorLog \${APACHE_LOG_DIR}/localhost.error.log
    CustomLog \${APACHE_LOG_DIR}/localhost.access.log combined
</VirtualHost>" | sudo tee /etc/apache2/sites-enabled/localhost.conf
$ sudo apache2ctl graceful

Then I simply copied the project's coverage output as a quick hack to view it.

$ sudo cp -r cover/ /var/www/html/
$ sudo apt-get install lynx-cur
$ lynx http://localhost:81/cover
                             Module                            statements missing excluded coverage
   Total                                                       12         0       0        100%
   .tox/py27/lib/python2.7/site-packages/oslo/config/__init__  6          0       0        100%
   .tox/py27/lib/python2.7/site-packages/oslo/config/cfg       1          0       0        100%
   .tox/py27/lib/python2.7/site-packages/oslo/config/cfgfilter 1          0       0        100%
   .tox/py27/lib/python2.7/site-packages/oslo/config/fixture   1          0       0        100%
   .tox/py27/lib/python2.7/site-packages/oslo/config/generator 1          0       0        100%
   .tox/py27/lib/python2.7/site-packages/oslo/config/iniparser 1          0       0        100%
   .tox/py27/lib/python2.7/site-packages/oslo/config/types     1          0       0        100%

   coverage.py v3.7.1

Documentation

The last testenv setup in oslo.config is for documentation.

$ tox -e docs
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
docs create: /home/rbradfor/oslo.config/.tox/docs
docs installdeps: -r/home/rbradfor/oslo.config/requirements.txt, -r/home/rbradfor/oslo.config/test-requirements.txt
docs inst: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
docs runtests: PYTHONHASHSEED='4293391351'
docs runtests: commands[0] | python setup.py build_sphinx
running build_sphinx
creating /home/rbradfor/oslo.config/doc/build
creating /home/rbradfor/oslo.config/doc/build/doctrees
creating /home/rbradfor/oslo.config/doc/build/html
Running Sphinx v1.2.3
loading pickled environment... not yet created
Using openstack theme from /home/rbradfor/oslo.config/.tox/docs/local/lib/python2.7/site-packages/oslosphinx/theme
building [html]: all source files
updating environment: 15 added, 0 changed, 0 removed
reading sources... [100%] types
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] types
writing additional files... genindex py-modindex search
copying static files... WARNING: html_static_path entry u'/home/rbradfor/oslo.config/doc/source/static' does not exist
done
copying extra files... done
dumping search index... done
dumping object inventory... done
build succeeded, 1 warning.
creating /home/rbradfor/oslo.config/doc/build/man
Running Sphinx v1.2.3
loading pickled environment... done
Using openstack theme from /home/rbradfor/oslo.config/.tox/docs/local/lib/python2.7/site-packages/oslosphinx/theme
building [man]: all source files
updating environment: 0 added, 0 changed, 0 removed
looking for now-outdated files... none found
writing... osloconfig.1 { cfg opts types configopts cfgfilter helpers fixture parser exceptions namespaces styleguide generator faq contributing }
build succeeded.
________ summary ________
  docs: commands succeeded
  congratulations :)

This creates a /doc directory (in .gitignore), which I copied to the previously configured web server to view the HTML.

$ sudo cp -r doc/ /var/www/html/
$ lynx http://localhost:81/doc/build/html

Other tox.ini configuration

As I navigate around other OpenStack projects I have noticed some differences, including:

Alternative global settings

[tox]
minversion = 1.6
skipsdist = True

More detailed [testenv]

[testenv]
setenv = VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'

[testenv]
usedevelop = True
install_command = pip install -U {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = python setup.py testr --testr-args='{posargs}'
whitelist_externals = bash

Some fancy output coloring.

[testenv]
usedevelop = True
install_command = pip install -U {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
         NOSE_WITH_OPENSTACK=1
         NOSE_OPENSTACK_COLOR=1
         NOSE_OPENSTACK_RED=0.05
         NOSE_OPENSTACK_YELLOW=0.025
         NOSE_OPENSTACK_SHOW_ELAPSED=1
# Note the hash seed is set to 0 until horizon can be tested with a
# random hash seed successfully.
         PYTHONHASHSEED=0
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = /bin/bash run_tests.sh -N --no-pep8 {posargs}

[testenv]
usedevelop = True
# tox is silly... these need to be separated by a newline....
whitelist_externals = bash
                      find
install_command = pip install -U --force-reinstall {opts} {packages}
# Note the hash seed is set to 0 until nova can be tested with a
# random hash seed successfully.
setenv = VIRTUAL_ENV={envdir}
         OS_TEST_PATH=./nova/tests/unit
         LANGUAGE=en_US
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands =
  find . -type f -name "*.pyc" -delete
  bash tools/pretty_tox.sh '{posargs}'
# there is also secret magic in pretty_tox.sh which lets you run in a fail only
# mode. To do this define the TRACE_FAILONLY environmental variable.

Alternative [testenv:NAME] sections

[testenv:functional]
commands = bash -x {toxinidir}/functional/harpoon.sh

[testenv:debug]
commands = oslo_debug_helper -t openstackclient/tests {posargs}

[tox:jenkins]
downloadcache = ~/cache/pip

[testenv:jshint]
commands = nodeenv -p
           npm install jshint -g
           /bin/bash run_tests.sh -N --jshint

[testenv:genconfig]
commands =
  bash tools/config/generate_sample.sh -b . -p nova -o etc/nova

Different Style guidelines

[flake8]
show-source = True
exclude = .tox,dist,doc,*.egg,build

[flake8]
show-source = True
exclude =  .venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build,tools
[testenv:pep8]
commands = flake8

[testenv:pep8]
commands =
  /bin/bash run_tests.sh -N --pep8
  /bin/bash run_tests.sh -N --makemessages --check-only

Different Code Coverage

[testenv:cover]
commands = python setup.py testr --coverage --testr-args='{posargs}'

[testenv:cover]
# Also do not run test_coverage_ext tests while gathering coverage as those
# tests conflict with coverage.
commands =
  coverage erase
  python setup.py testr --coverage \
    --testr-args='{posargs}'
  coverage combine
  coverage html --include='nova/*' --omit='nova/openstack/common/*' -d covhtml -i

Different Docs

[testenv:docs]
commands = python setup.py build_sphinx

[testenv:docs]
commands =
  python setup.py build_sphinx
  bash -c '! find doc/ -type f -name *.json | xargs -t -n1 python -m json.tool 2>&1 > /dev/null | grep -B1 -v ^python'

Additional sections

[testenv:pip-missing-reqs]
# do not install test-requirements as that will pollute the virtualenv for
# determining missing packages
# this also means that pip-missing-reqs must be installed separately, outside
# of the requirements.txt files
deps = pip_missing_reqs
       -rrequirements.txt
commands=pip-missing-reqs -d --ignore-file=nova/tests/* nova
[hacking]
import_exceptions = oslo_log._i18n

What's Next

In a follow-up blog I will talk about debugging with pdb and how to use it with tox.

References

by ronald at April 30, 2015 08:22 PM

Solinea

Image Creation - Packer and OpenStack

In today's post we will be talking about image creation. If you have been around the cloud game for any length of time, you know how important it is to get your images right from the outset. It's important to have all the basics baked in, security settings just right, and users and keys already created. You probably also know, if you have done this by hand, that it can be a huge pain and that you never quite trust your image. The key to mitigating this is having a repeatable process for each environment you deploy into, with a set of documented steps you can point to. Enter Packer. Packer is a tool developed by HashiCorp (the people behind Vagrant) to help you create identical cloud images for a variety of different environments. It also allows you to create image templates that are easy to version control and that make clear what happens during the image creation process.
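
As a taste of the workflow (the template file name below is only a placeholder), a Packer template is an ordinary file you can keep in version control, and the Packer CLI validates and builds it:

$ packer validate openstack-image.json
$ packer build openstack-image.json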

by Spencer Smith (spencer@solinea.com) at April 30, 2015 03:00 PM

IBM OpenTech Team

A Guide to the OpenStack Kilo Release


Kilo Release of OpenStack: Dedicated to the loving memory of Chris Yeoh

Sadly, during this past release the OpenStack and IBM communities lost Chris Yeoh, an incredible contributor, leader, and dear friend to all of us. Chris was simply an amazingly talented OpenStack contributor and was beloved by many for his kindness, generosity, and his willingness to help others no matter how much he had on his plate. Please consider sharing your fond memories of Chris with his family by clicking here.

Highlights of the Kilo Release

Increased enterprise security, improved interoperability, quality improvements, and user experience enhancements are just a few of the highlights of the latest release of OpenStack codenamed Kilo. As the OpenStack ecosystem celebrates yet another successful milestone, I wanted to honor all of OpenStack’s developers by highlighting some of the best new features users can expect in this latest release as well as describing some truly amazing contributions by team IBM.

Ecosystem Growth

The OpenStack ecosystem continues to experience outstanding growth. In the Kilo release, contributors from 112 organizations made over 19,000 commits, added over 390 features, and fixed over 7,200 bugs. Like many of the loyal contributors to OpenStack, IBM remains committed to the success of OpenStack by increasing contributions focused on improving the OpenStack ecosystem for the benefit of the entire OpenStack community and accelerating its growth. I’m excited to have the opportunity to present an early preview of the key contributions for this latest release.

Increased Enterprise Security

IBM continues to lead several enterprise integration efforts in OpenStack’s Identity Service project codenamed Keystone and we have been the top contributor to Keystone for the past four OpenStack releases. IBMers enabled remote administration of domain configuration attributes without requiring a shutdown/restart of Keystone. In addition, we added OpenID Connect support to the Keystone federation framework, and increased CADF audit support coverage for Keystone operations on users, groups, domains, trusts, regions, endpoints, policies, and roles.

In addition, IBMers collaborated heavily with contributors from CERN, Rackspace, Yahoo, HP, and UFCG-LSD to add new enhancements to improve Keystone’s hybrid cloud support based on industry-standard federation protocols. These efforts are key to enabling interoperable hybrid clouds built on Keystone-to-Keystone federation. For users this means it will be simpler to combine their on-premise OpenStack cloud with the public cloud of their choice.

Improved Interoperability

One of the key themes for the OpenStack Kilo release was improved interoperability. IBM is one of the key leaders of the Refstack and Refstack-client projects which are the OpenStack projects focused on improving interoperability. IBM is currently the #1 contributor to refstack-client and the #2 contributor for Refstack. IBMers defined specifications for Refstack and Refstack-client, added configuration documentation, and developed APIs to retrieve and render individual test run data uploaded by vendors to the Refstack server.

Block Storage Improvements

For three consecutive release cycles, IBM has continued to lead in contributions to OpenStack’s Block Storage project codenamed Cinder. In this release, IBMers focused on improving the quality of Cinder. Key quality improvements were made to volume migration, volume replication, lazy translation, and its oslo support. Additionally, IBMers added a new Flashsystem volume driver, and consistency group support for the GPFS, storwize SVC and XIV/DS8k drivers.

Enhanced User Experience: Horizon and OpenStackClient Enhancements

As we continue to improve the OpenStack user experience, IBM increased its contributions in the Kilo cycle to both the OpenStack dashboard project (Horizon) and the OpenStackClient. IBMers contributed over 1600 code reviews and also were Horizon’s third highest committer. New features added to Horizon by IBMers include serial console support, integration of the magic search functionality from Eucalyptus, and a new globalization verification tool. We also contributed Angular enhancements for performing password validation, confirmation dialogs, and launching instances. In addition, IBMers collaborated with contributors from Yahoo to add federation based web single sign-on support to Horizon.

In this release of OpenStack, the OpenStackClient command line project became officially integrated into OpenStack. The OpenStackClient is a key user experience enhancement as it provides a single, unified command line interface for all the OpenStack integrated projects. Without this unified client, consumers of OpenStack would have to learn separate command line interfaces for each OpenStack sub-project. IBMers have been contributing to the OpenStackClient since its early days as a little-known incubator project, and in the Kilo release we were the top contributor to this project. We are very happy to see that the OpenStackClient has reached a level of maturity such that the OpenStack Technical Committee has approved it as an OpenStack project.

Database as a Service Enhancements

In this release of OpenStack, IBMers began contributing to OpenStack’s database as a service project codenamed Trove. IBMers added both Apache CouchDB and DB2 Express C plugins to Trove. Additionally, we added CADF based audit support to several key Trove supported operations. With this new functionality, we are helping to grow Trove’s applicability and consumability.

Heat Enhancements

In this release of OpenStack, the TOSCA Heat Translator was officially accepted and integrated as part of the OpenStack Heat orchestration program. This project was initially started as an incubator by IBMers and grew to have its own substantial ecosystem. Its key function is to support the translation of portable applications written in the OASIS TOSCA open standard language to run as HOT templates. The project provides a framework to enable other orchestration template languages to also be translated to HOT templates. Thus, this component serves as a conduit for any orchestration technology to be deployed on Heat as HOT templates.

Furthermore, IBMers added multi-region support to Heat. This allows for deploying parts of an overall stack, defined as nested templates, to different regions in OpenStack. This long requested feature is a critical enabler for important enterprise and Telco use cases.

Join us in Vancouver!

Unfortunately it is simply not possible in this article to cover all the innovations that have been added to OpenStack by IBM in the Kilo release. Furthermore, there are many other outstanding contributions in this release by active contributors from other companies. Please join us at the next OpenStack summit in Vancouver May 18-22 for a much more comprehensive overview of the advances and improvements in the latest version of OpenStack. I look forward to seeing you in Vancouver!

The post A Guide to the OpenStack Kilo Release appeared first on IBM OpenTech.

by Brad Topol at April 30, 2015 02:44 PM

April 29, 2015

Nir Yechiel

Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part II: Walking Through the Implementation

nyechiel:

Second part of the SR-IOV networking post I wrote for the Red Hat Stack blog.

Originally posted on Red Hat Stack:

In the previous blog post in this series we looked at what single root I/O virtualization (SR-IOV) networking is all about and we discussed why it is an important addition to Red Hat Enterprise Linux OpenStack Platform. In this second post we would like to provide a more detailed overview of the implementation, some thoughts on the current limitations, as well as what enhancements are being worked on in the OpenStack community.

Note: this post does not intend to provide a full end to end configuration guide. Customers with an active subscription are welcome to visit the official article covering SR-IOV Networking in Red Hat Enterprise Linux OpenStack Platform 6 for a complete procedure.

Setting up the Environment

In our small test environment we used two physical nodes: one serves as a Compute node for hosting virtual machine (VM) instances, and the other serves as both the OpenStack Controller and…



by nyechiel at April 29, 2015 10:04 PM

Red Hat Stack

Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part II: Walking Through the Implementation

In the previous blog post in this series we looked at what single root I/O virtualization (SR-IOV) networking is all about and we discussed why it is an important addition to Red Hat Enterprise Linux OpenStack Platform. In this second post we would like to provide a more detailed overview of the implementation, some thoughts on the current limitations, as well as what enhancements are being worked on in the OpenStack community.

Note: this post does not intend to provide a full end to end configuration guide. Customers with an active subscription are welcome to visit the official article covering SR-IOV Networking in Red Hat Enterprise Linux OpenStack Platform 6 for a complete procedure.

 

Setting up the Environment

In our small test environment we used two physical nodes: one serves as a Compute node for hosting virtual machine (VM) instances, and the other serves as both the OpenStack Controller and Network node. Both nodes are running Red Hat Enterprise Linux 7.

 

Compute Node

This is a standard Red Hat Enterprise Linux OpenStack Platform Compute node, running KVM with the Libvirt Nova driver. As the ultimate goal is to provide OpenStack VMs running on this node with access to SR-IOV virtual functions (VFs), SR-IOV support is required on several layers on the Compute node, namely the BIOS, the base operating system, and the physical network adapter. Since SR-IOV completely bypasses the hypervisor layer, there is no need to deploy Open vSwitch or the ovs-agent on this node.

 

Controller/Network Node

The other node which serves as the OpenStack Controller/Network node includes the various OpenStack API and control services (e.g., Keystone, Neutron, Glance) as well as the Neutron agents required to provide network services for VM instances. Unlike the Compute node, this node still uses Open vSwitch for connectivity into the tenant data networks. This is required in order to serve SR-IOV enabled VM instances with network services such as DHCP, L3 routing and network address translation (NAT). This is also the node in which the Neutron server and the Neutron plugin are deployed.

 

Topology Layout

For this test we are using a VLAN tagged network connected to both nodes as the tenant data network. Currently there is no support for SR-IOV networking on the Network node, so this node still uses a normal network adapter without SR-IOV capabilities. The Compute node on the other hand uses an SR-IOV enabled network adapter (from the Intel 82576 family in our case).
(Topology diagram: the Controller/Network node and the SR-IOV enabled Compute node attached to the VLAN tagged tenant data network.)

 

Configuration Overview

Preparing the Compute node

  1. The first thing we need to do is to make sure that Intel VT-d is enabled in the BIOS and activated in the kernel. The Intel VT-d specifications provide hardware support for directly assigning a physical device to a virtual machine.
  2. Recall that the Compute node is equipped with an Intel 82576 based SR-IOV network adapter. For proper SR-IOV operation, we need to load the network adapter driver (igb) with the right parameters to set the maximum number of Virtual Functions (VFs) we want to expose. Different network cards support different values here, so you should consult the proper documentation from the card vendor. In our lab we chose to set this number to seven (a minimal example follows this list). This configuration effectively enables SR-IOV on the card itself, which otherwise defaults to regular (non-SR-IOV) mode.
  3. After a reboot, the node should come up ready for SR-IOV. You can verify this by utilizing the lspci utility that lists detailed information about all PCI devices in the system.
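
As a rough sketch of steps 1 and 2 on a RHEL 7 Compute node with the igb driver (the file names and the max_vfs value here are only examples; consult your NIC vendor's documentation for the supported mechanism and limits):

# Add intel_iommu=on to GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the bootloader config:
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# Ask the igb driver to expose seven VFs per port:
$ echo "options igb max_vfs=7" | sudo tee /etc/modprobe.d/igb.conf
$ sudo reboot
# After the reboot the VFs should be visible (step 3):
$ lspci | grep -i "Virtual Function"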

 

Verifying the Compute configuration

Using lspci we can see the Physical Functions (PFs) and the Virtual Functions (VFs) available to the Compute node. Our network adapter is a dual-port card, so we get a total of two PFs (one PF per physical port), and seven VFs available for each PF:

 

# lspci  | grep -i 82576
05:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
05:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
05:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.7 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)

 

You can also get all the VFs assigned to a specific PF:

 

# ls -l /sys/class/net/enp5s0f1/device/virtfn*
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn0 -> ../0000:05:10.1
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn1 -> ../0000:05:10.3
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn2 -> ../0000:05:10.5
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn3 -> ../0000:05:10.7
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn4 -> ../0000:05:11.1
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn5 -> ../0000:05:11.3
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn6 -> ../0000:05:11.5

 

One parameter you will need to capture for future use is the PCI vendor ID (in vendor_id:product_id format) of your network adapter. This can be extracted from the output of the lspci command with the -nn flag. Here is the output from our lab; the vendor and product IDs appear in square brackets, and the one we need for the Virtual Function is 8086:10ca:

 

# lspci  -nn | grep -i 82576
05:00.0 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10c9] (rev 01)
05:00.1 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10c9] (rev 01)
05:10.0 Ethernet controller [0200]: Intel Corporation 82576 Virtual Function [8086:10ca] (rev 01)

Note: this parameter may be different based on your network adapter hardware.

Setting up the Controller/Network node

  1. In the Neutron server configuration, ML2 should be configured as the core Neutron plugin. Both the Open vSwitch (OVS) and SR-IOV (sriovnicswitch) mechanism drivers need to be loaded.
  2. Since our design requires a VLAN tagged tenant data network, ‘vlan’ must be listed as a type driver for ML2. An alternative would be a ‘flat’ networking configuration, which allows transparent forwarding with no specific VLAN tag assignment.
  3. The VLAN configuration itself is done through Neutron ML2 configuration file, where you can set the appropriate VLAN range and the physical network label. This is the VLAN range you need to make sure is properly configured for transport (i.e., trunking) across the physical network fabric. We are using ‘sriovnet’ as our network label with 80-85 as the VLAN range: network_vlan_ranges = sriovnet:80:85
  4. One of the great benefits of the SR-IOV ML2 driver is the fact that it is not bound to any specific NIC vendor or card model. The ML2 driver can be used with different cards as long as they support the standard SR-IOV specification. As Red Hat Enterprise Linux OpenStack Platform is supported on top of Red Hat Enterprise Linux, we inherit RHEL's rich support for SR-IOV enabled network adapters. In our lab we use the igb/igbvf driver, which is included in RHEL 7 and is used to interact with our Intel SR-IOV NIC. To set up the ML2 driver so that it can communicate properly with our Intel NIC, we need to configure the PCI vendor ID we captured earlier in the ML2 SR-IOV configuration file (under supported_pci_vendor_devs), then restart the Neutron server. The format of this config is vendor_id:product_id, which is 8086:10ca in our case (a minimal configuration sketch follows this list).
  5. To allow proper scheduling of SR-IOV devices, Nova scheduler needs to use the FilterScheduler with the PciPassthroughFilter filter. This configuration should be applied on the Controller node under the nova.conf file.
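
To make the list above concrete, here is a minimal configuration sketch. The file paths are the typical ones on a RHEL-based installation and may differ on your system; merge these settings into your existing configuration rather than replacing it:

# /etc/neutron/plugin.ini (ML2 core plugin configuration)
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch,sriovnicswitch

[ml2_type_vlan]
network_vlan_ranges = sriovnet:80:85

# /etc/neutron/plugins/ml2/ml2_conf_sriov.ini (SR-IOV mechanism driver configuration)
[ml2_sriov]
supported_pci_vendor_devs = 8086:10ca

# /etc/nova/nova.conf on the Controller (append PciPassthroughFilter to your existing filter list)
[DEFAULT]
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter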

 

Mapping the required network

To enable scheduling of SR-IOV devices, the Nova PCI whitelist has been enhanced to allow tags to be associated with PCI devices. PCI devices available for SR-IOV networking should be tagged with the physical network label. The network label needs to match the label we used previously when setting the VLAN configuration on the Controller/Network node (‘sriovnet’).

Using the pci_passthrough_whitelist under the Nova configuration file, we can map the VFs to the required physical network. After configuring the whitelist there is a need to restart the nova-compute service for the changes to take effect.

In the below example, we set the Whitelist so that the Physical Function (enp5s0f1) is associated with the physical network (sriovnet). As a result, all the Virtual Functions bound to this PF can now be allocated to VMs.

# pci_passthrough_whitelist={"devname": "enp5s0f1", "physical_network": "sriovnet"}
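
After updating nova.conf with the whitelist, restart the compute service for the change to take effect; on a RHEL 7 based Compute node the unit is typically named as follows:

$ sudo systemctl restart openstack-nova-compute.service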

 

Creating the Neutron network

Next we will create the Neutron network and subnet; make sure to use the --provider:physical_network option and specify the network label as configured on the Controller/Network node (‘sriovnet’). Optionally, you can also set a specific VLAN ID from the range:

# neutron net-create sriov-net1 --provider:network_type=vlan --provider:physical_network=sriovnet --provider:segmentation_id=83

 

# neutron subnet-create sriov-net1 10.100.0.0/24

 

Creating an SR-IOV instance

After setting up the base configuration on the Controller/Network node and Compute node, and after creating the Neutron network, now we can go ahead and create our first SR-IOV enabled OpenStack instance.

In order to boot a Nova instance with an SR-IOV networking port, you first need to create the Neutron port and specify its vnic-type as ‘direct’. Then in the ‘nova boot’ command you need to explicitly reference the port-id you have created using the --nic option, as shown below:

 

# neutron port-create <sriov-net1 net-id> --binding:vnic-type direct

# nova boot --flavor m1.large --image <image> --nic port-id=<port> <vm name>

 

Examining the results

  • On the Compute node, we can now see that one VF has been allocated:

# ip link show enp5s0f1
12: enp5s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
    link/ether 44:1e:a1:73:3d:ab brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
    vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
    vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
    vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
    vf 4 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
    vf 5 MAC fa:16:3e:0e:3f:0d, vlan 83, spoof checking on, link-state auto
    vf 6 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

 

In the above example enp5s0f1 is the Physical Function (PF), and ‘vf 5’ is the one allocated to an instance: it is the only VF showing a real MAC address, and it is configured with VLAN ID 83, which was allocated from the range in our configuration.

  • On the Compute node, we can also verify the virtual interface definition in the Libvirt XML:

Locate the instance_name of the VM and the hypervisor it is running on:

# nova show <vm name>

The relevant fields are OS-EXT-SRV-ATTR:host and OS-EXT-SRV-ATTR:instance_name.
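
For example, to pull just those two fields out of the nova show output:

# nova show <vm name> | grep OS-EXT-SRV-ATTR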

 

On the compute node run:

# virsh dumpxml <instance_name>

 

<SNIP>
<interface type='hostdev' managed='yes'>
  <mac address='fa:16:3e:0e:3f:0d'/>
  <driver name='vfio'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x5'/>
  </source>
  <vlan>
    <tag id='83'/>
  </vlan>
  <alias name='hostdev0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>

 

  • On the virtual machine instance, running the ‘ifconfig’ command shows an ‘eth0’ interface exposed to the guest operating system with an IP address assigned:

 

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
        inet 10.100.0.2  netmask 255.255.255.0  broadcast 10.100.0.255
        inet6 fe80::f816:3eff:feb9:d855  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:0e:3f:0d  txqueuelen 1000  (Ethernet)
        RX packets 182  bytes 25976 (25.3 KiB)

 

Using ‘lspci’ in the instance we can see that the interface is indeed a PCI device:

 

# lspci  | grep -i 82576

00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)

Using ‘ethtool’ in the instance we can see that the interface driver is ‘igbvf’ which is Intel’s driver for 82576 Virtual Functions:

 

# ethtool -i eth0
driver: igbvf
version: 2.0.2-k
firmware-version:
bus-info: 0000:00:04.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

 

As you can see, the interface behaves as a regular one from the instance point of view, and can be used by any application running inside the guest. The interface was also assigned an IPv4 address from Neutron which means that we have proper connectivity to the Controller/Network node where the DHCP server for this network resides.

 

As the interface is directly attached to the network adapter and the traffic does not flow through any virtual bridges on the Compute node, it’s important to note that Neutron security groups cannot be used with SR-IOV enabled instances.

 

What’s Next?

Red Hat Enterprise Linux OpenStack Platform 6 is the first version in which SR-IOV networking was introduced. While the ability to bind a VF into a Nova instance with an appropriate Neutron network is available, we are still looking to enhance the feature to address more use cases as well as to simplify the configuration and operation.

Some of the items we are currently considering include the ability to plug/unplug an SR-IOV port on the fly (currently not available), launching an instance with an SR-IOV port without explicitly creating the port first, and the ability to allocate an entire PF to a virtual machine instance. There is also active work to enable Horizon (Dashboard) support.

One other item is Live Migration support. An SR-IOV Neutron port may be directly connected to its VF as shown above (vnic_type ‘direct’), or it may be connected via a macvtap device that resides on the Compute node (vnic_type ‘macvtap’), which is then connected to the corresponding VF. The macvtap option provides a baseline for implementing Live Migration for SR-IOV enabled instances.

Interested in trying the latest OpenStack-based cloud platform from the world’s leading provider of open source solutions? Download a free evaluation of Red Hat Enterprise Linux OpenStack Platform 6 or learn more about it from the product page.

 

by Nir Yechiel at April 29, 2015 06:48 PM

James Page

Neutron, ZeroMQ and Git – Ubuntu OpenStack 15.04 Charm release!

Alongside the Ubuntu 15.04 release on the 23rd April, the Ubuntu OpenStack Engineering team delivered the latest release of the OpenStack charms for deploying and managing OpenStack on Ubuntu using Juju.

Here are some selected highlights from this most recent charm release.

OpenStack Kilo support

As always, we’ve enabled charm support for OpenStack Kilo alongside development. To use this new release use the openstack-origin configuration option of the charms, for example:

juju set cinder openstack-origin=cloud:trusty-kilo

NOTE: Setting this option on an existing deployment will trigger an upgrade to Kilo via the charms – remember to plan and test your upgrade activities prior to production implementation!

Neutron

As part of this release, the team have been working on enabling some of the new Neutron features that were introduced in the Juno release of OpenStack.

Distributed Virtual Router

One of the original limitations of the Neutron reference implementation (ML2 + Open vSwitch) was the requirement to route all north/south and east/west network traffic between instances via network gateway nodes.

For Juno, the Distributed Virtual Router (DVR) function was introduced to allow routing capabilities to be distributed more broadly across an OpenStack cloud.

DVR pushes a lot of the layer 3 network routing function of Neutron directly onto compute nodes – instances which have floating IPs no longer have the restriction of routing via a gateway node for north/south traffic. This traffic is now pushed directly to the external network by the compute nodes via dedicated external network ports, bypassing the requirement for network gateway nodes.

Network gateway nodes are still required for SNAT north/south routing for instances that don't have floating IP addresses.

For the 15.04 charm release, we’ve enabled this feature across the neutron-api, neutron-openvswitch and neutron-gateway charms – you can toggle this capability using configuration in the neutron-api charm:

juju set neutron-api enabled-dvr=true l2-population=true \
    overlay-network-type=vxlan

This feature requires that every compute node have a physical network port onto the external public facing network – this is configured on the neutron-openvswitch charm, which is deployed alongside nova-compute:

juju set neutron-openvswitch ext-port=eth1

NOTE: Existing routers will not be switched into DVR mode by default – this must be done manually by a cloud administrator.  We’ve also only tested this feature with vxlan overlay networks – expect gre and vlan enablement soon!

Router High Availability

For Clouds where the preference is still to route north/south traffic via a limited set of gateway nodes, rather than exposing all compute nodes directly to external network zones, Neutron has also introduced a feature to enable virtual routers in highly available configurations.

To use this feature, you need to be running multiple units of the neutron-gateway charm – again it’s enabled via configuration in the neutron-api charm:

juju set neutron-api enable-l3ha=true l2-population=false

Right now Neutron DVR and Router HA features are mutually exclusive due to layer 2 population driver requirements.

Our recommendation is that these new Neutron features are only enabled with OpenStack Kilo as numerous features and improvements have been introduced over the last 6 months since first release with OpenStack Juno.

Initial ZeroMQ support

The ZeroMQ lightweight messaging kernel is a library which extends the standard socket interfaces with features traditionally provided by specialised messaging middleware products, without the requirement for a centralized message broker infrastructure.

Interest and activity around the 0mq driver in Oslo Messaging has been gathering pace during the Kilo cycle, with numerous bug fixes and improvements being made into the driver code.

Alongside this activity, we’ve enabled ZeroMQ support in the Nova and Neutron charms in conjunction with a new charm – ‘openstack-zeromq’:

juju deploy redis-server
juju deploy openstack-zeromq
juju add-relation redis-server openstack-zeromq
for svc in nova-cloud-controller nova-compute \
    neutron-api neutron-openvswitch quantum-gateway; do
    juju deploy $svc
    juju add-relation $svc openstack-zeromq
done

The ZeroMQ driver makes use of a Redis server to maintain a catalog of topic endpoints for the OpenStack cloud so that services can figure out where to send RPC requests.

We expect to enable further charm support as this feature matures upstream – so for now please consider this feature for testing purposes only.

Deployment from source

A core set of the OpenStack charms have also grown the capability to deploy from git repositories, rather than from the usual Debian package sources from Ubuntu.   This allows all of the power of deploying OpenStack using charms to be re-used with deployments from active development.

For example, you’ll still be able to scale-out and cluster OpenStack services deployed this way –  seeing a keystone service deploy from git, running with haproxy, corosync and pacemaker as part of a fully HA deployment is pretty awesome!

This feature is currently tested with the stable/icehouse and stable/juno branches – we’re working on completing testing of the kilo support and expect to land that as a stable update soon.

This feature is considered experimental and we expect to complete further improvements and enablement across a wider set of charms – so please don’t use it for production services!

And finally…

Alongside the features delivered in this release, we’ve also been hard at work resolving bugs across the charms – please refer to milestone bug report for the full details.

We’ve also introduced features to enable easier monitoring with Nagios and support for Keystone PKI tokens as well as some improvements in the failure detection capabilities of the percona-cluster charm when operating in HA mode.

You can get the full low down on all of the changes in this release from the official release notes.


by JavaCruft at April 29, 2015 04:25 PM

Adam Young

Creating Hierarchical Projects in Keystone

Hierarchical Multitenancy is coming. Look busy.

Until we get CLI support for creating projects with parent relationships, we have to test via curl. This has given me a chance to clean up a few little techniques for using jq and heredocs.

#!/usr/bin/bash -x
. ./keystonerc_admin

TOKEN=$( curl -si  -H "Content-type: application/json"  -d@- $OS_AUTH_URL/auth/tokens <<EOF | awk '/X-Subject-Token/ {print $2}'
{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "domain": {
                        "name": "$OS_USER_DOMAIN_NAME"
                    },
                    "name": "admin",
                    "password": "$OS_PASSWORD"
                }
            }
        },
        "scope": {
            "project": {
                "domain": {
                    "name": "$OS_PROJECT_DOMAIN_NAME"
                },
                "name": "$OS_PROJECT_NAME"
            }
        }
    }
}
EOF
)

PARENT_PROJECT=$( curl  -H "Content-type: application/json" -H"X-Auth-Token:$TOKEN"  -d@- $OS_AUTH_URL/projects <<EOF |  jq -r '.project  | {id}[]  '
{
    "project": {
        "description": "parent project",
        "domain_id": "default",
        "enabled": true,
        "name": "Parent"
    }
}
EOF
)

echo $PARENT_PROJECT


curl  -H "Content-type: application/json" -H"X-Auth-Token:$TOKEN"  -d@- $OS_AUTH_URL/projects <<EOF 
{
    "project": {
        "description": "demo-project",
        "parent_project_id": "$PARENT_PROJECT",
        "domain_id": "default",
        "enabled": true,
        "name": "child"
    }
}
EOF
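
To confirm the hierarchy was recorded, you can ask Keystone for the parent project's subtree. This is a sketch that assumes your Keystone supports the subtree_as_ids query parameter from the hierarchical multitenancy work; the child project's id should appear nested under the parent:

curl -s -H "X-Auth-Token:$TOKEN" "$OS_AUTH_URL/projects/$PARENT_PROJECT?subtree_as_ids" | jq .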


Note that this uses V3 of the API. I have the following keystonerc_admin:

export OS_USERNAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_PASSWORD=cf8dcb8aae804722
export OS_AUTH_URL=http://192.168.1.80:5000/v3/

export OS_IDENTITY_API_VERSION=3

export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(keystone_admin)]\$ '

by Adam Young at April 29, 2015 03:54 PM

Craige McWhirter

Rebuilding An OpenStack Instance and Keeping the Same Fixed IP

OpenStack and in particular the compute service, Nova, has a useful rebuild function that allows you to rebuild an instance from a fresh image while maintaining the same fixed and floating IP addresses, amongst other metadata.

However if you have a shared storage back end, such as Ceph, you're out of luck as this function is not for you.

Fortunately, there is another way.

Prepare for the Rebuild:

Note the fixed IP address of the instance that you wish to rebuild and the network ID:

$ nova show demoinstance0 | grep network
| DemoTutorial network                       | 192.168.24.14, 216.58.220.133                     |
$ export FIXED_IP=192.168.24.14
$ neutron floatingip-list | grep 216.58.220.133
| ee7ecd21-bd93-4f89-a220-b00b04ef6753 |                  | 216.58.220.133      |
$ export FLOATIP_ID=ee7ecd21-bd93-4f89-a220-b00b04ef6753
$ neutron net-show DemoTutorial | grep " id "
| id              | 9068dff2-9f7e-4a72-9607-0e1421a78d0d |
$ export OS_NET=9068dff2-9f7e-4a72-9607-0e1421a78d0d

You now need to delete the instance that you wish to rebuild:

$ nova delete demoinstance0
Request to delete server demoinstance0 has been accepted.

Manually Prepare the Networking:

Now you need to re-create the port and re-assign the floating IP, if it had one:

$ neutron port-create --name demoinstance0 --fixed-ip ip_address=$FIXED_IP $OS_NET
Created a new port:
+-----------------------+---------------------------------------------------------------------------------------+
| Field                 | Value                                                                                 |
+-----------------------+---------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                  |
| allowed_address_pairs |                                                                                       |
| binding:vnic_type     | normal                                                                                |
| device_id             |                                                                                       |
| device_owner          |                                                                                       |
| fixed_ips             | {"subnet_id": "eb5db27f-edad-480e-92cb-1f8fec8848a8", "ip_address": "192.168.24.14"}  |
| id                    | c1927578-451b-4682-8888-55c7163898a4                                                  |
| mac_address           | fa:16:3e:5a:39:67                                                                     |
| name                  | demoinstance0                                                                         |
| network_id            | 9068dff2-9f7e-4a72-9607-0e1421a78d0d                                                  |
| security_groups       | 5898c15a-4670-429b-a414-9f59671c4d8b                                                  |
| status                | DOWN                                                                                  |
| tenant_id             | gsu7j52c50804cf3aad71b92e6ced65e                                                      |
+-----------------------+---------------------------------------------------------------------------------------+
$ export OS_PORT=c1927578-451b-4682-8888-55c7163898a4
$ neutron floatingip-associate $FLOATIP_ID $OS_PORT
Associated floating IP ee7ecd21-bd93-4f89-a220-b00b04ef6753
$ neutron floatingip-list | grep $FIXED_IP
| ee7ecd21-bd93-4f89-a220-b00b04ef6753 | 192.168.24.14   | 216.58.220.133     | c1927578-451b-4682-8888-55c7163898a4 |

Re-build!

Now you need to boot the instance again and specify the port you created:

$ nova boot --flavor=m1.tiny --image=MyImage --nic port-id=$OS_PORT demoinstance0
$ nova show demoinstance0 | grep network
| DemoTutorial network                       | 192.168.24.14, 216.58.220.133                     |

Now that your rebuild has completed, you've got your old IPs back and you're done. Enjoy :-)

by Craige McWhirter at April 29, 2015 06:39 AM

OpenStack Superuser

OpenStack superusers deliver the goods at CONNECT

MELBOURNE, Australia -- OpenStack all-stars converged on the scene at CONNECT 2015, a large annual trade show organized in partnership with the Victorian Government.

The daylong conference dedicated to OpenStack was specifically designed for cloud leaders, technology decision-makers and heads of infrastructure and innovation to learn more about OpenStack; to discover the benefits of open source and its community; and to outline concrete steps that businesses can take to decide if an OpenStack Cloud is right for them.

Randy Bias delivers the keynote.

By all accounts they delivered. Randy Bias, currently vice president of technology at EMC and a well-known and outspoken supporter of OpenStack, led the charge with a keynote titled “Bringing OpenStack into the Enterprise,” that you can catch the flavor of with his slide deck.


Bias also moderated a lively panel on the rise of the superuser which featured David Medberry of Time Warner Cable, Mike Dorman of GoDaddy, Glenn Moloney from the NeCTAR project, and Rik Harris at telecommunications giant Telstra.

There was standing-room only to hear Erez Yarkoni, the CIO of Telstra, speak on “The next step in our cloud partner journey,” a peek into the company’s new cloud-first approach. Yarkoni hit the crowd with some eye-opening numbers, saying that 80 percent of Fortune 500 companies use OpenStack and that the number of developers will triple by 2020. In that landscape, he said, the open cloud platform is growing quickly as enterprises look to avoid vendor lock-in.

Other talks included Michael Still, senior software development manager at Rackspace and the project team lead for Nova Compute – the primary OpenStack project. His talk on deployment recommendations, touching on issues including scale, hypervisors, networking and testing, was based on a post from his personal blog. Mike Dorman of GoDaddy spoke about how the OpenStack cloud platform enables innovation and efficiencies and shared his slides from the talk, too.


A fireside chat brought the heat at midday with OpenStack Foundation board director Tristan Goode, CEO of Aptira; Tom Fifield, community manager at the OpenStack Foundation; and members of the Foundation ecosystem, including Red Hat and Brocade.

If you missed them in Melbourne, you can catch many of these ace speakers at the upcoming OpenStack Summit in Vancouver.

 

Cover Photo by Michael Theis // CC BY NC

by Nicole Martinelli at April 29, 2015 01:45 AM

IBM OpenTech Team

Season of Summits – Cloud Foundry and OpenStack

May is on the horizon, and with it starts the season of summer conferences. Two of my favorite conferences are happening next month, the Cloud Foundry and OpenStack Summits. I love them not only because they are great open source technologies, which beyond doubt they are, but also because my team has been personally invested in the joint success of these two great projects to come out of the open source community.

Over the course of the last year, we have participated and given talks around the intersection of Cloud Foundry and OpenStack at each of their conferences, which can be viewed here and here. We have also worked with community members like Pivotal, Rackspace, Intel to deliver related talks at the meetups my team runs in each of their home cities.

We have also led design workshops at the last two OpenStack Summits around BOSH, which plays a major role in binding these technologies. If you are interested, the blueprints can be accessed here and here.

I am often asked during meetups these days what's coming next from my team around the intersection of Cloud Foundry and OpenStack. So let me shed some light on our upcoming talks:


During the Cloud Foundry summit, my colleagues Daniel Krook, Manuel Silveyra and I are going to talk about how to build strong user communities and meetup groups around Cloud Foundry, and how to then ensure we can sustain, grow, and take them to next level. We will also discuss the kind of organizational changes we can introduce to ensure a culture of creating a socially relevant, open source oriented organization which has a great community presence. Again a cause where we have invested significant efforts  – so if you are interested, and want to be a Cloud Foundry rockstar in the community, do join our session.

Other than this, there are great and exciting talks happening around core Cloud Foundry technical topics like Microservices, Diego, Docker and more.  Cloud Foundry foundation CEO Sam Ramji joined us yesterday at our Silicon Valley Cloud Foundry meetup to talk about the strides IBM and countless others are making with Cloud Foundry, and why you should join CF Summit. So go, register, and see you all there.


By the way, if you are wondering what’s next from us at the junction of Cloud Foundry and OpenStack, we are coming up with two very exciting talks at the OpenStack Summit in Vancouver; details will be posted in my next blog on this topic. Bluemix, a very popular member of our offering portfolio, is going to play a significant role in these talks, so stay tuned!

 

The post Season of Summits – Cloud Foundry and OpenStack appeared first on IBM OpenTech.

by Animesh Singh at April 29, 2015 12:01 AM

April 28, 2015

IBM OpenTech Team

reStructuredText markups in manuals and the translation tips

OpenStack manuals are gradually changing from being written in DocBook format to reStructuredText (RST). This has caused a number of issues for the translation team when taking a document and translating it from English to another language. The translation process itself remains very similar to what it has been in the past: slicing, uploading, translating, downloading, and building the documents. However, after the process is complete, the resulting documents contain markups which were previously represented as XML tags. In the past this made it very easy for translators to know exactly what needed translating and what was markup. In reStructuredText, the markups are harder to tell apart, because they look like common text in the translation strings. So, to help out the translation community, I have made this cheat-sheet that will hopefully make your job easier when translating OpenStack documentation.

[Image: RST markup cheat-sheet for translators]
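To make the problem concrete, here is a small illustration of my own (not taken from the cheat-sheet image above). In a gettext translation string such as the following, the ``...`` inline literal and the :ref:`...` role are RST markup, not prose, and must be copied into the translated string unchanged:

msgid "Install the client with ``pip install python-openstackclient`` and see :ref:`usage` for details."

Only the surrounding sentence should be translated; everything between the double backquotes, and the whole :ref:`usage` construct, must appear verbatim in the msgstr.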

The post reStructuredText markups in manuals and the translation tips appeared first on IBM OpenTech.

by Daisy.Guo at April 28, 2015 10:31 PM

Ronald Bradford

Running openstack tests with tox

Recently the OSC (python-openstackclient) project removed the run_tests.sh (#177066) and tools/install_venv.py (#177086) scripts.

As I was very new to OpenStack development practices, this threw me. I had read several OpenStack documentation pages, including Getting the code, which in Hacking on your laptop and running unit tests specifically points to an example Setting Up a Developer Environment, and I had consulted a friend who is an ATC; this was the way I had learned to set up virtual environments and run tests.

The Testing OpenStack Projects documentation also refers to run_tests.sh, however with the caveat: “There is an older convention, as follows. Most projects have a shell script, named “run_tests.sh”, that runs the unit tests of that project.” (i.e. the devil is in the details).

With run_tests.sh and tools/install_venv.py removed, what is the *correct* way?

Setting up a virtual environment with tox

First you create the tox virtual environments. The tox configuration is held in tox.ini, and this supports multiple environments for compatibility testing. You can see the environments that are defined with:

$ grep envlist tox.ini
envlist = py33,py34,py26,py27,pep8

I have chosen to specify the Python 2.7 version. I am running Ubuntu 14.04.2 LTS, which has Python 2.7 and Python 3.4. The HACKING.rst docs make reference to “OpenStackClient strives to be Python 3.3 compatible.” Side note: Python 3.4 fails to work with the openstackclient codebase; see later for the issues I am seeing.

I set up the 2.7 virtual environment without running any tests.

$ tox -e py27 --notest
py27 create: /home/rbradfor/tmp/python-openstackclient/.tox/py27
py27 installdeps: -r/home/rbradfor/tmp/python-openstackclient/requirements.txt, -r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt
py27 develop-inst: /home/rbradfor/tmp/python-openstackclient
_________ summary ____________
  py27: skipped tests
  congratulations :)

I can then reference the openstack binary directly in this virtual environment with:

$ .tox/py27/bin/openstack --version
openstack 1.1.0

You can use this virtual environment without requiring any pathing by activating it:

$ source .tox/py27/bin/activate
$ openstack

This actually adds the applicable /bin directory to PATH and not PYTHONPATH.

$ which openstack
/home/rbradfor/tmp/python-openstackclient/.tox/py27/bin/openstack
$ openstack --version
openstack 1.1.0
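
When you are finished with the activated environment, you can leave it with the standard virtualenv command (my addition, not part of the original walkthrough):

$ deactivate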

Running Tests

There are now several ways I can run individual or full tests.

$ tox -epy27 -- test_shell
py27 create: /home/rbradfor/tmp/python-openstackclient/.tox/py27
py27 installdeps: -r/home/rbradfor/tmp/python-openstackclient/requirements.txt, -r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt
py27 develop-inst: /home/rbradfor/tmp/python-openstackclient
py27 runtests: PYTHONHASHSEED='2928878700'
py27 runtests: commands[0] | python setup.py testr --testr-args=test_shell
running testr
...
Ran 32 tests in 0.148s (+0.029s)
PASSED (id=6)
_____ summary _________________
  py27: commands succeeded
  congratulations :)

NOTE: It seems every second invocation of this fails. This is what I see.

$ tox -epy27 -- test_shell
py27 recreate: /home/rbradfor/tmp/python-openstackclient/.tox/py27
ERROR: invocation failed (exit code 3), logfile: /home/rbradfor/tmp/python-openstackclient/.tox/py27/log/py27-0.log
ERROR: actionid=py27
msg=getenv
cmdargs=['/usr/bin/python', '-m', 'virtualenv', '--setuptools', '--python', '/home/rbradfor/tmp/python-openstackclient/.tox/py27/bin/python2.7', 'py27']
env={'LESSOPEN': '| /usr/bin/lesspipe %s', 'SSH_CLIENT': '192.168.1.2 60030 22', 'LOGNAME': 'rbradfor', 'USER': 'rbradfor', 'PATH': '/home/rbradfor/tmp/python-openstackclient/.tox/py27/bin:/home/rbradfor/tmp/python-openstackclient/.tox/py27/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games', 'HOME': '/home/rbradfor', 'PS1': '(py27)\\[\\e]0;\\u@\\h: \\w\\a\\]${debian_chroot:+($debian_chroot)}\\u@\\h:\\w\\$ ', 'LANG': 'en_US.UTF-8', 'TERM': 'xterm', 'SHELL': '/bin/bash', 'SHLVL': '1', 'PYTHONHASHSEED': '4072653076', 'XDG_RUNTIME_DIR': '/run/user/1000', 'VIRTUAL_ENV': '/home/rbradfor/tmp/python-openstackclient/.tox/py27', 'XDG_SESSION_ID': '12', '_': '/usr/local/bin/tox', 'SSH_CONNECTION': '192.168.1.2 60030 192.168.1.60 22', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'SSH_TTY': '/dev/pts/2', 'OLDPWD': '/home/rbradfor', 'PWD': '/home/rbradfor/tmp/python-openstackclient', 'MAIL': '/var/mail/rbradfor', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:'}
The executable /home/rbradfor/tmp/python-openstackclient/.tox/py27/bin/python2.7 (from --python=/home/rbradfor/tmp/python-openstackclient/.tox/py27/bin/python2.7) does not exist

ERROR: InvocationError: /usr/bin/python -m virtualenv --setuptools --python /home/rbradfor/tmp/python-openstackclient/.tox/py27/bin/python2.7 py27 (see /home/rbradfor/tmp/python-openstackclient/.tox/py27/log/py27-0.log)
______ summary ___________
ERROR:   py27: InvocationError: /usr/bin/python -m virtualenv --setuptools --python /home/rbradfor/tmp/python-openstackclient/.tox/py27/bin/python2.7 py27 (see /home/rbradfor/tmp/python-openstackclient/.tox/py27/log/py27-0.log)

Or, by using the command syntax from tox.ini directly:

$ python setup.py testr --slowest --testr-args=test_shell
...

PASSED (id=5)
Slowest Tests
Test id                                                                             Runtime (s)
----------------------------------------------------------------------------------  -----------
openstackclient.tests.test_shell.TestShellHelp.test_help_options                    0.026
openstackclient.tests.test_shell.TestShellPasswordAuth.test_only_url_flow           0.015
openstackclient.tests.test_shell.TestShellPasswordAuth.test_only_project_id_flow    0.009
openstackclient.tests.test_shell.TestShellTokenAuth.test_empty_auth                 0.009
openstackclient.tests.test_shell.TestShellTokenEndpointAuth.test_only_url           0.008
openstackclient.tests.test_shell.TestShellPasswordAuth.test_only_auth_type_flow     0.007
openstackclient.tests.test_shell.TestShellPasswordAuth.test_only_project_name_flow  0.007
openstackclient.tests.test_shell.TestShellPasswordAuth.test_only_trust_id_flow      0.007
openstackclient.tests.test_shell.TestShellTokenAuthEnv.test_only_auth_url           0.007
openstackclient.tests.test_shell.TestShellTokenAuthEnv.test_only_token              0.007
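
If the virtual environment is activated, you can also invoke testr directly with a test filter (my addition; this assumes the testrepository package pulled in by test-requirements.txt):

$ testr run test_shell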

References

The following is recommended reading.

Thanks to Jeremy Stanley (fungi) and Doug Hellmann from the openstack-dev mailing list for setting me on the correct path.

Problems with Python 3.4 on Ubuntu 14.04.2 LTS

I was unable to run tests with Python 3.x. I have not spent the time to investigate why there are issues with libyaml, which is not listed as a core dependency in requirements.txt.
UPDATE: This also turned out to be a simple problem: I did not have the -dev package installed.

$ sudo apt-get install -y python3.4-dev

And it’s all fine. Thanks Doug for that insight.
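
With the header files in place, the py34 environment can be rebuilt by forcing tox to recreate it; for example (my addition, using tox’s -r/--recreate flag):

$ tox -r -e py34 --notest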

My next objective is to also install Python 3.3, as this is referenced as the baseline compatibility of the project.

$ git rev-parse HEAD
416d840dc4cb00026bac2512b259ce88a0e4a765

$ tox -epy34 -- notests
py34 create: /home/rbradfor/tmp/python-openstackclient/.tox/py34
py34 installdeps: -r/home/rbradfor/tmp/python-openstackclient/requirements.txt, -r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt
ERROR: invocation failed (exit code 1), logfile: /home/rbradfor/tmp/python-openstackclient/.tox/py34/log/py34-1.log
ERROR: actionid=py34
msg=getenv
cmdargs=[local('/home/rbradfor/tmp/python-openstackclient/.tox/py34/bin/pip'), 'install', '-U', '-r/home/rbradfor/tmp/python-openstackclient/requirements.txt', '-r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt']
env={'XDG_RUNTIME_DIR': '/run/user/1000', 'VIRTUAL_ENV': '/home/rbradfor/tmp/python-openstackclient/.tox/py34', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'SSH_CLIENT': '192.168.1.2 60030 22', 'LOGNAME': 'rbradfor', 'USER': 'rbradfor', 'HOME': '/home/rbradfor', 'PATH': '/home/rbradfor/tmp/python-openstackclient/.tox/py34/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games', 'XDG_SESSION_ID': '12', '_': '/usr/local/bin/tox', 'SSH_CONNECTION': '192.168.1.2 60030 192.168.1.60 22', 'LANG': 'en_US.UTF-8', 'TERM': 'xterm', 'SHELL': '/bin/bash', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'SHLVL': '1', 'SSH_TTY': '/dev/pts/2', 'OLDPWD': '/home/rbradfor', 'PWD': '/home/rbradfor/tmp/python-openstackclient', 'PYTHONHASHSEED': '1330227753', 'MAIL': '/var/mail/rbradfor', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:'}
Collecting pbr!=0.7,<1.0,>=0.6 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 4))
  Using cached pbr-0.10.8-py2.py3-none-any.whl
Collecting six>=1.9.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 5))
  Using cached six-1.9.0-py2.py3-none-any.whl
Collecting Babel>=1.3 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 7))
  Using cached Babel-1.3.tar.gz
Collecting cliff>=1.10.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 8))
  Using cached cliff-1.12.0.tar.gz
Collecting cliff-tablib>=1.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 9))
  Using cached cliff-tablib-1.1.tar.gz
Collecting os-client-config (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 10))
  Using cached os-client-config-0.8.0.tar.gz
Collecting oslo.config>=1.9.3 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 11))
  Using cached oslo.config-1.11.0-py2.py3-none-any.whl
Collecting oslo.i18n>=1.5.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 12))
  Using cached oslo.i18n-1.6.0-py2.py3-none-any.whl
Collecting oslo.utils>=1.4.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 13))
  Using cached oslo.utils-1.5.0-py2.py3-none-any.whl
Collecting oslo.serialization>=1.4.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 14))
  Using cached oslo.serialization-1.5.0-py2.py3-none-any.whl
Collecting python-glanceclient>=0.15.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached python_glanceclient-0.18.0-py2.py3-none-any.whl
Collecting python-keystoneclient>=1.1.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 16))
  Using cached python_keystoneclient-1.4.0-py2.py3-none-any.whl
Collecting python-novaclient>=2.22.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 17))
  Using cached python_novaclient-2.24.1-py2.py3-none-any.whl
Collecting python-cinderclient>=1.1.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 18))
  Using cached python_cinderclient-1.2.0-py2.py3-none-any.whl
Collecting python-neutronclient<3,>=2.3.11 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 19))
  Using cached python_neutronclient-2.5.0-py2.py3-none-any.whl
Collecting requests!=2.4.0,>=2.2.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 20))
  Using cached requests-2.6.2-py2.py3-none-any.whl
Collecting stevedore>=1.3.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 21))
  Using cached stevedore-1.4.0-py2.py3-none-any.whl
Collecting hacking<0.11,>=0.10.0 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 4))
  Using cached hacking-0.10.1-py2.py3-none-any.whl
Collecting coverage>=3.6 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 6))
  Using cached coverage-3.7.1.tar.gz
Collecting discover (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 7))
  Using cached discover-0.4.0.tar.gz
Collecting fixtures>=0.3.14 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 8))
  Using cached fixtures-1.0.0.tar.gz
Collecting mock>=1.0 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 9))
  Using cached mock-1.0.1.tar.gz
Collecting oslosphinx>=2.5.0 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 10))
  Using cached oslosphinx-2.5.0-py2.py3-none-any.whl
Collecting oslotest>=1.5.1 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 11))
  Using cached oslotest-1.6.0-py2.py3-none-any.whl
Collecting requests-mock>=0.6.0 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 12))
  Using cached requests_mock-0.6.0-py2.py3-none-any.whl
Collecting sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 13))
  Using cached Sphinx-1.2.3-py3-none-any.whl
Collecting testrepository>=0.0.18 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 14))
  Using cached testrepository-0.0.20.tar.gz
Collecting testtools!=1.2.0,>=0.9.36 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 15))
  Using cached testtools-1.7.1-py2.py3-none-any.whl
Collecting WebOb>=1.2.3 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 16))
  Using cached WebOb-1.4.1.tar.gz
Requirement already up-to-date: pip in ./.tox/py34/lib/python3.4/site-packages (from pbr!=0.7,<1.0,>=0.6->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 4))
Collecting pytz>=0a (from Babel>=1.3->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 7))
  Using cached pytz-2015.2-py2.py3-none-any.whl
Collecting argparse (from cliff>=1.10.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 8))
  Using cached argparse-1.3.0-py2.py3-none-any.whl
Collecting cmd2>=0.6.7 (from cliff>=1.10.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 8))
  Using cached cmd2-0.6.8.tar.gz
Collecting PrettyTable<0.8,>=0.7 (from cliff>=1.10.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 8))
  Using cached prettytable-0.7.2.tar.bz2
Collecting pyparsing>=2.0.1 (from cliff>=1.10.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 8))
  Using cached pyparsing-2.0.3-py2.py3-none-any.whl
Collecting tablib (from cliff-tablib>=1.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 9))
  Using cached tablib-0.10.0-py2.py3-none-any.whl
Collecting PyYAML>=3.1.0 (from os-client-config->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 10))
  Using cached PyYAML-3.11.tar.gz
Collecting netaddr>=0.7.12 (from oslo.config>=1.9.3->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 11))
  Using cached netaddr-0.7.14-py2.py3-none-any.whl
Collecting iso8601>=0.1.9 (from oslo.utils>=1.4.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 13))
  Using cached iso8601-0.1.10-py33-none-any.whl
Collecting netifaces>=0.10.4 (from oslo.utils>=1.4.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 13))
  Using cached netifaces-0.10.4.tar.gz
Collecting msgpack-python>=0.4.0 (from oslo.serialization>=1.4.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 14))
  Using cached msgpack-python-0.4.6.tar.gz
Collecting pyOpenSSL>=0.11 (from python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached pyOpenSSL-0.15.1-py2.py3-none-any.whl
Collecting warlock<2,>=1.0.1 (from python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached warlock-1.1.0.tar.gz
Collecting simplejson>=2.2.0 (from python-novaclient>=2.22.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 17))
  Using cached simplejson-3.6.5.tar.gz
Collecting flake8==2.2.4 (from hacking<0.11,>=0.10.0->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 4))
  Using cached flake8-2.2.4.tar.gz
Collecting pep8==1.5.7 (from hacking<0.11,>=0.10.0->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 4))
  Using cached pep8-1.5.7-py2.py3-none-any.whl
Collecting mccabe==0.2.1 (from hacking<0.11,>=0.10.0->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 4))
  Using cached mccabe-0.2.1.tar.gz
Collecting pyflakes==0.8.1 (from hacking<0.11,>=0.10.0->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 4))
  Using cached pyflakes-0.8.1-py2.py3-none-any.whl
Collecting testscenarios>=0.4 (from oslotest>=1.5.1->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 11))
  Using cached testscenarios-0.4.tar.gz
Collecting python-subunit>=0.0.18 (from oslotest>=1.5.1->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 11))
  Using cached python-subunit-1.1.0.tar.gz
Collecting mox3>=0.7.0 (from oslotest>=1.5.1->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 11))
  Using cached mox3-0.7.0.tar.gz
Collecting docutils>=0.10 (from sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 13))
  Using cached docutils-0.12.tar.gz
Collecting Jinja2>=2.3 (from sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 13))
  Using cached Jinja2-2.7.3.tar.gz
Collecting Pygments>=1.2 (from sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 13))
  Using cached Pygments-2.0.2-py3-none-any.whl
Collecting unittest2>=1.0.0 (from testtools!=1.2.0,>=0.9.36->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 15))
  Using cached unittest2-1.0.1-py2.py3-none-any.whl
Collecting traceback2 (from testtools!=1.2.0,>=0.9.36->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 15))
  Using cached traceback2-1.4.0-py2.py3-none-any.whl
Collecting extras (from testtools!=1.2.0,>=0.9.36->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 15))
  Using cached extras-0.0.3.tar.gz
Collecting python-mimeparse (from testtools!=1.2.0,>=0.9.36->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 15))
  Using cached python-mimeparse-0.1.4.tar.gz
Collecting cryptography>=0.7 (from pyOpenSSL>=0.11->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached cryptography-0.8.2.tar.gz
Collecting jsonschema<3,>=0.7 (from warlock<2,>=1.0.1->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached jsonschema-2.4.0-py2.py3-none-any.whl
Collecting jsonpatch<2,>=0.10 (from warlock<2,>=1.0.1->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached jsonpatch-1.9.tar.gz
Collecting markupsafe (from Jinja2>=2.3->sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 13))
  Using cached MarkupSafe-0.23.tar.gz
Collecting linecache2 (from traceback2->testtools!=1.2.0,>=0.9.36->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 15))
  Using cached linecache2-1.0.0-py2.py3-none-any.whl
Collecting pyasn1 (from cryptography>=0.7->pyOpenSSL>=0.11->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached pyasn1-0.1.7.tar.gz
Collecting setuptools (from cryptography>=0.7->pyOpenSSL>=0.11->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached setuptools-15.2-py2.py3-none-any.whl
Collecting cffi>=0.8 (from cryptography>=0.7->pyOpenSSL>=0.11->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached cffi-0.9.2.tar.gz
Collecting jsonpointer>=1.5 (from jsonpatch<2,>=0.10->warlock<2,>=1.0.1->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached jsonpointer-1.7.tar.gz
Collecting pycparser (from cffi>=0.8->cryptography>=0.7->pyOpenSSL>=0.11->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached pycparser-2.12.tar.gz
Installing collected packages: pbr, six, pytz, Babel, argparse, pyparsing, cmd2, PrettyTable, stevedore, cliff, tablib, cliff-tablib, PyYAML, os-client-config, netaddr, oslo.config, oslo.i18n, iso8601, netifaces, oslo.utils, msgpack-python, oslo.serialization, pyasn1, setuptools, pycparser, cffi, cryptography, pyOpenSSL, requests, jsonschema, jsonpointer, jsonpatch, warlock, python-keystoneclient, python-glanceclient, simplejson, python-novaclient, python-cinderclient, python-neutronclient, pyflakes, pep8, mccabe, flake8, hacking, coverage, discover, linecache2, traceback2, unittest2, extras, python-mimeparse, testtools, fixtures, mock, oslosphinx, testscenarios, python-subunit, mox3, testrepository, oslotest, requests-mock, docutils, markupsafe, Jinja2, Pygments, sphinx, WebOb
  Running setup.py install for Babel
  Running setup.py install for cmd2
  Running setup.py install for PrettyTable
  Running setup.py install for cliff
  Running setup.py install for cliff-tablib
  Running setup.py install for PyYAML
    Complete output from command /home/rbradfor/tmp/python-openstackclient/.tox/py34/bin/python3.4 -c "import setuptools, tokenize;__file__='/tmp/pip-build-p19auoc2/PyYAML/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-xlgu_evx-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/rbradfor/tmp/python-openstackclient/.tox/py34/include/site/python3.4/PyYAML:
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.4
    creating build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/representer.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/tokens.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/constructor.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/reader.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/__init__.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/error.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/scanner.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/loader.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/parser.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/nodes.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/serializer.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/cyaml.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/emitter.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/events.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/dumper.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/resolver.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/composer.py -> build/lib.linux-x86_64-3.4/yaml
    running build_ext
    creating build/temp.linux-x86_64-3.4
    checking if libyaml is compilable
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.4m -I/home/rbradfor/tmp/python-openstackclient/.tox/py34/include/python3.4m -c build/temp.linux-x86_64-3.4/check_libyaml.c -o build/temp.linux-x86_64-3.4/check_libyaml.o
    checking if libyaml is linkable
    x86_64-linux-gnu-gcc -pthread build/temp.linux-x86_64-3.4/check_libyaml.o -lyaml -o build/temp.linux-x86_64-3.4/check_libyaml
    building '_yaml' extension
    creating build/temp.linux-x86_64-3.4/ext
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.4m -I/home/rbradfor/tmp/python-openstackclient/.tox/py34/include/python3.4m -c ext/_yaml.c -o build/temp.linux-x86_64-3.4/ext/_yaml.o
    ext/_yaml.c:8:22: fatal error: pyconfig.h: No such file or directory
     #include "pyconfig.h"
                          ^
    compilation terminated.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

    ----------------------------------------
    Command "/home/rbradfor/tmp/python-openstackclient/.tox/py34/bin/python3.4 -c "import setuptools, tokenize;__file__='/tmp/pip-build-p19auoc2/PyYAML/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-xlgu_evx-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/rbradfor/tmp/python-openstackclient/.tox/py34/include/site/python3.4/PyYAML" failed with error code 1 in /tmp/pip-build-p19auoc2/PyYAML

ERROR: could not install deps [-r/home/rbradfor/tmp/python-openstackclient/requirements.txt, -r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt]; v = InvocationError('/home/rbradfor/tmp/python-openstackclient/.tox/py34/bin/pip install -U -r/home/rbradfor/tmp/python-openstackclient/requirements.txt -r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt (see /home/rbradfor/tmp/python-openstackclient/.tox/py34/log/py34-1.log)', 1)
________________________________________________________________________________________________________________________ summary ________________________________________________________________________________________________________________________
ERROR:   py34: could not install deps [-r/home/rbradfor/tmp/python-openstackclient/requirements.txt, -r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt]; v = InvocationError('/home/rbradfor/tmp/python-openstackclient/.tox/py34/bin/pip install -U -r/home/rbradfor/tmp/python-openstackclient/requirements.txt -r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt (see /home/rbradfor/tmp/python-openstackclient/.tox/py34/log/py34-1.log)', 1)

by ronald at April 28, 2015 05:13 PM

Jon Proulx

Cloudy Sysadmin Wanted

This is the down & dirty: the position isn’t posted yet, and this is the description our technical team has worked up so far; it hasn’t been through the HR wringer yet. But we want to fill this as quickly as we can possibly find the right person, so I’m hoping this informal post saves us a week or two while the bureaucracy churns.

The really, really short version is that we want someone who can help with cloud ops and planning (OpenStack, Ceph, etc.) as well as direct, random Linux support. Now at CSAIL we do support some pretty amazing people, so that part isn’t all bad.

Once it’s posted the wording may be a bit different, but we’ll still be looking for more or less the same thing. Screening is done pretty much entirely by technical staff, so you don’t need to tick off buzzwords to get through a pre-screener who doesn’t really understand what you’re talking about.

If you have questions, feel free to ask in the comments (or ask me privately if you prefer).

If you’re interested, keep an eye on http://careers.mit.edu/ (search for Department “Comp Sci & Artificial Intelligence Lab” and Employment Type “Full-Time”). There are only 6 listings today (without this one) and none of them are even vaguely similar, so it should be obvious.

I’m sure I’ll tweet it. Hopefully I’ll edit this post as well, and while it can’t hurt to drop me an email (jon at csail.mit.edu), I can’t promise it will help either.

So, the somewhat long-winded description…

Systems Administrator MIT/CSAIL

Would you like to help provide the computing environment for one of the premiere Computer Science research labs in the world?

The MIT Computer Science and Artificial Intelligence Laboratory seeks a strong Linux generalist to join our systems administration staff.

The position is about half service development and server operations and half customer support (generally as 2nd level to our existing help desk). It is roughly equivalent to a USENIX/LISA Level III: Intermediate/Advanced System Administrator position.

Required Skills

  • Strong interpersonal and communication skills; ability to write purchase justifications, train users in complex topics, make presentations to an internal audience, and interact positively with technical and non-technical users

  • Independent problem-solving, self-direction

  • Comfort with most aspects of operating system administration; for example, managing processes and services, configuration of mail systems, system installation and configuration, printer systems, and fundamentals of security

  • Familiarity with the principles and practice of system configuration management using modern declarative tools; ability to model and reduce complex system requirements as configuration declarations

  • A solid understanding of the operating systems in use at the site (Ubuntu, Debian, FreeBSD, MacOS, Windows); understanding of paging and swapping, inter-process communication, devices and what device drivers do, and filesystem concepts, for this position Linux variants are most important

  • Familiarity with fundamental networking/distributed computing environment concepts; ability to configure file sharing (NFS/SAMBA or Windows); ability to query DNS records; understanding of basic routing concepts

  • Works well both independently and on a small team

Required Background

  • Three to five years of system administration experience

Desirable Background and Skills

  • A degree in computer science or a related field

  • Ability to do minimal debugging and modification of C or Java programs

Responsibilities

  • Receives general instructions for new responsibilities from supervisor

  • Initiates some new responsibilities and helps to plan for the future of the site

  • Mentor novice system administrators or operators

  • Receive mentoring from senior administrators

  • Evaluate and/or recommend purchases; has strong influence on purchasing process

Current Technologies

Roles often adapt to the person who fills them here, so we have a certain expectation that the person we hire will not be doing exactly the same set of things as the person who leaves. This is not an absolute set of requirements so much as it is a wish list.

We try to stay at the forefront of emerging technologies. Our environment is constantly evolving and we value strong fundamental understanding with a demonstrated ability to learn and adapt over simply having a rote understanding of our current checklist of applications and products.

Core Systems (things your predecessor spent a lot of time on)

  • Ubuntu GNU/Linux
  • Puppet
  • Ceph
  • OpenStack
  • TiBS (backup software)
  • FAI (PXE based deployment)

Core Tools (things you’ll need to use to get your job done)

  • Git
  • Request Tracker (RT)
  • Nagios

Languages

  • Shell Scripting ability is a must

  • And at least one of Ruby or Python; bonus points for Rails or Django knowledge.

Secondary (underlying technologies you’ll frequently encounter in our environment)

  • Kerberos
  • OpenAFS
  • MySQL
  • Postgresql
  • Apache
  • NFS
  • ZFS
  • FreeBSD

Bonus (things you could optionally dive into)

  • HTCondor
  • ELK (or other logging/trending-fu)
  • Hadoop
  • Spark
  • AWS
  • Drupal
  • JunOS
  • MacOS
  • Windows

Hours and Location

CSAIL is located in building 32 (the Stata Center) on MIT’s Cambridge Massachusetts campus.

As a new hire you will be expected to be on site 9-5. That said once you’ve come up to speed more flexibility to do remote work is possible if you need some quieter time to focus on a project or occasionally to deal with life’s unexpected events (like contractors or sick kids).

This position is part of an on-call rotation. On-call duties are shared between a primary and secondary responder for one-week shifts. Frequency fluctuates somewhat depending on staffing and vacations, but typically in a six-week period you’ll be primary once and secondary once; in other words, you will be on call roughly once a month as either primary or secondary responder.

April 28, 2015 01:59 PM

Kyle Mestery

Subnetpools in Neutron

One of the most interesting new features in Neutron for the Kilo release is the addition of subnetpools to the API. This work was initially targeted to integrate nicely with pluggable IPAM. Unfortunately, this did not make it into Kilo and is targeted at Liberty. But even without pluggable IPAM as a consumer, the subnetpools addition is quite useful. Here’s an example of how you might use it.

neutron net-create webapp
neutron subnetpool-create --default-prefixlen 24 --pool-prefix 10.10.0.0/16 webpool
neutron subnet-create --subnetpool webpool websubnet

What I’ve shown above is the creation of a network, pool, and subnet for an example web application. The subnetpool was created with a default prefix length of /24. Thus, when the user creates the subnet ‘websubnet’, it will end up with a CIDR such as “10.10.0.0/24”.

Creating another subnet from the pool will result in the allocation code selecting another range:

neutron subnet-create --subnetpool webpool dbsubnet

This would result in the subnet ‘dbsubnet’ having a CIDR of “10.10.1.0/24”.

As you can see, the subnetpool code is really powerful because it removes the need for a user to have to select a CIDR. An admin could create a shared subnetpool with a large block of addresses and let tenants create subnets from that. It’s a powerful addition to the Neutron API, and will integrate nicely with the pluggable IPAM addition we expect to land during Liberty.
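
As a sketch of that admin workflow (my example, not from the original post; the --shared flag is assumed from the standard Neutron CLI, while the other flags match those used above):

neutron subnetpool-create --shared --default-prefixlen 26 --pool-prefix 172.16.0.0/12 sharedpool
neutron subnet-create --subnetpool sharedpool tenant-subnet

Each tenant subnet created this way would be carved out of 172.16.0.0/12 as a /26, without the tenant having to pick a CIDR.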

April 28, 2015 01:41 PM

Mirantis

Mirantis OpenStack: real open community development

The post Mirantis OpenStack: real open community development appeared first on Mirantis | The #1 Pure Play OpenStack Company.

When you talk about a company “doing open source,” people frequently assume that employees build the product in-house, then release the source code to the community. When it comes to Mirantis and OpenStack, however, the situation is just the opposite. We start with upstream OpenStack and build our product through a combination of testing, hardening patches, and additional open source features that make it easier to use without creating vendor lock-in.

Once you add a commercial distribution such as Mirantis OpenStack to the mix, things can seem complicated — particularly given that every piece of OpenStack code we work on is open source. Let’s take a look at how this process works so you can understand which parts of Mirantis’ work are, and are not, open.

How Mirantis works in the community

Although it would be simple to assume that Mirantis builds code in-house and then sends the result into the community, that’s not, in fact, how it works.

To succeed in open source, a company must first nurture the community on which it aims to build its business. For Mirantis, that’s OpenStack. That means that when we work on new features, we do them in public, in the community. When we find bugs, we fix them in public, in the community. The goal is to make OpenStack as strong as possible, so that we can provide the best value for our customers using it.

 Mirantis OpenStack has zero proprietary code in it.

 Let me say that again.

 Mirantis OpenStack has zero proprietary code in it.

Of course, that doesn’t mean that Mirantis OpenStack is identical to the OpenStack trunk; far from it. When we package up Mirantis OpenStack, it includes the following:

  •  The latest stable branch of OpenStack

  • Hardening packages that include fixes for issues that we discover in testing

  • Select additional bug fixes that may or may not have been merged/backported by the community yet

Let’s dig down a little and get some more details.

Mirantis OpenStack components

When you download Mirantis OpenStack, you get OpenStack code with multiple components that work together for optimized functionality.

  • OpenStack packages: The Mirantis OpenStack distribution is based on OpenStack project packages from the latest stable project release; for example, Mirantis OpenStack 6.x release is based on the OpenStack Juno release.

  • Fuel: Fuel is a deployment and management tool for OpenStack, related community projects, and plugins, enabling provisioning, deployment, and lifecycle management.

  • Continuous Integration (CI) Infrastructure: Mirantis OpenStack includes components to implement and test your OpenStack environment, such as OpenStack unit tests, Fuel unit and system tests, and the Rally and Tempest projects.

  • Mirantis OpenStack Express service: Mirantis OpenStack Express enables you to build your hybrid OpenStack cloud and add compute and data capacity on demand. This service provides a self-service portal, hardware on demand, and, of course, documentation.

All of these components are based on work from the Mirantis engineering team as well as from the OpenStack community, and virtually all of it is open source, as you can see from Figure 1:

Figure 1. Mirantis OpenStack Components

In Figure 1, anything in green is open source, while items in blue are not. The latter include Mirantis OpenStack Express components and hardening patches for OpenStack packages.

While the Mirantis OpenStack Express self-service portal and hardware-on-demand service are essential to the functioning of Mirantis OpenStack Express, they aren’t actually part of OpenStack; they just enable us to provide OpenStack as a service.

Hardening patches for OpenStack packages is where a lot of people get tripped up. The actual patches that we use for hardening OpenStack are closed, but all of the code provided in them is open. (We’ll explain how that’s possible, as well as why we do it this way, in a moment.)

Let’s take a look at how all of this works in practice.

Fixing bugs

There are two ways we might discover a bug; we might discover it as part of our testing and development, or we might discover it while resolving a customer support request.  In either case, the process is the same:

  •  Check to see if a bugfix already exists on the master branch of the affected component. If so, we submit a backport of the fix to the corresponding stable release branches in our internal fuel-infra repo.

  • If there is no existing bugfix, the Mirantis OpenStack components team:

    1. Reports the bug to the affected OpenStack component (if it has not already been reported).

    2. Documents the relation of the upstream bug in a comment or description of the corresponding Mirantis OpenStack bug report on Launchpad (in the Mirantis OpenStack project).

    3. Proposes a fix for the master branch of the public git repository on review.openstack.org, and gathers initial feedback from the community to make sure it’s on track.

    4. Ports the fix to the stable release branches in a private git repository on review.fuel-infra.org.

At this point, the bug fix becomes available to our customers while we wait for it to be accepted and merged by the OpenStack community.
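
As a generic illustration of what step 4 looks like in Gerrit terms (this is a common backport pattern, not Mirantis’s exact internal tooling; the branch names and commit placeholder are hypothetical):

$ git checkout -b backport-fix origin/stable/juno
$ git cherry-pick -x <master-commit-sha>
$ git review stable/juno

The -x flag records the original master commit in the backport’s commit message, which keeps the relation to the upstream fix documented.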

We follow a similar process when we discover bugs in Linux packages such as KVM, libvirt, Open vSwitch, and even the Linux kernel, as well as for Python dependencies of OpenStack components, such as kombu, and tools and libraries required for Fuel, such as MCollective, Puppet, and Cobbler. One significant distinction here, however, is that the git repository on review.fuel-infra.org is public, not private.

We track bugs in Fuel and its Python and Ruby dependencies in the Fuel project on Launchpad. Bugs in all other packages are tracked in the Mirantis OpenStack project.

Hardening patches for OpenStack packages

Mirantis stays as close as possible to upstream OpenStack code, and as I said, we commit 100% of our bug fixes and patches upstream. However, reviewing those patches and getting them included in a stable OpenStack release takes time, so Mirantis provides some bug fixes to you, our users, ahead of the upstream to allow you to take advantage of them immediately.

While the source code of all software packages in the Mirantis OpenStack distribution is open, the breakdown of the difference between the code shipped in Mirantis hardened packages and the corresponding public stable release of OpenStack into individual patches and their relation to bugs (and other reasoning behind specific code changes) constitutes the core of Mirantis intellectual property, and is closed.

So what does that mean? Well, consider the thousands of patches that are submitted to OpenStack each release cycle. Because we don’t add proprietary extensions to Mirantis OpenStack, you know that every bit of code we ship is included in one of those patches. However, we don’t necessarily include every single patch in Mirantis OpenStack.

The “hardening patches” are simply the log that shows which specific bug fixes we chose to apply, why, and in what order — the record of how to get from the community stable branch to Mirantis OpenStack.

In other words, hardened packages are the log of our build of working OpenStack packages, a build that is designed for successful production deployment. Not all upstream patches are created with production deployment in mind; it’s that selection and application process, established through testing and review of patches and bugs, that distinguishes Mirantis OpenStack from any arbitrary build or application of patches.

Which Mirantis OpenStack components are open

All other Mirantis OpenStack product components, including Mirantis patches for non-OpenStack packages, are available to the general public and, where applicable, released under open source licenses, with Apache License 2.0 preferred. Some examples include:

Why do we do this? Because we recognize that open sourcing these components is a benefit to both the wider community and to the Mirantis product engineering team. For example, when Mirantis publishes the source code for Python libraries and Linux packages, users can build an ISO easily, without hacking to get the correct versions of third-party Python libraries. When users contribute bug fixes to the OpenStack master branch, the Mirantis OpenStack team can spend less time porting bug fixes across OpenStack releases.  It also means our engineers don’t have to spend time porting bug fixes between Mirantis OpenStack and the “vanilla” version.

The lifecycle of changes going into Mirantis OpenStack is summarized in Figure 2:

Figure 2. Mirantis OpenStack changes lifecycle

Once again, green boxes represent code repositories and issue trackers that are open to the community, while blue boxes represent trackers and repositories that are only available to Mirantis employees.

In particular, we use closed git repositories to track the hardening patches for OpenStack components, and a closed support portal to protect confidential customer information.

While we keep the intermediate builds of Mirantis OpenStack private until we have a build that our Quality Assurance team has approved for general availability, we publish builds based on the same code with the Mirantis branding removed. This way, users can experiment with the features from the future releases of Mirantis OpenStack, and yet can count on our quality standards and our support when deploying production environments using officially branded Mirantis OpenStack release builds.

Handling security vulnerabilities

Mirantis follows the responsible disclosure rules for non-zero-day security vulnerabilities. That means that embargoed vulnerabilities are reported and fixed in private, and not published until the disclosure date coordinated with other software vendors. Security-related bugs in Fuel and Mirantis OpenStack projects on Launchpad are marked private until the fix is published via the Mirantis OpenStack Technical Bulletins Process. Security vulnerabilities in base operating system packages are resolved based on the corresponding Linux distribution’s security announcements and security updates channels.

Package maintenance

Although base operating systems such as CentOS and Ubuntu include RPM and .deb packages for most OpenStack and Fuel dependencies, not all of these packages have the versions required by Mirantis OpenStack. To solve this problem, the Mirantis components team often builds RPM and .deb packages of specific versions using available sources.

This process is completely open. The teams responsible for specific packages for Fuel, Mirantis OpenStack components, and Mirantis OpenStack Linux produce both code and package specs for their packages. The Fuel OpenStack continuous integration team creates all required git repos, Gerrit projects, and branches.

Understanding the coordination of Mirantis OpenStack code

As you can see, Mirantis works within the community to make OpenStack as strong as possible, while still providing the best possible options and creating new solutions for our customers. Mirantis is committed to making our code completely open and uses multiple methods to get you the quickest updates while working within the upstream community.

The post Mirantis OpenStack: real open community development appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Dmitry Borodaenko at April 28, 2015 06:15 AM

Mirantis OpenStack: real open community development

The post Mirantis OpenStack: real open community development appeared first on Mirantis | The #1 Pure Play OpenStack Company.

When you talk about a company “doing open source,” people frequently assume that employees build the product in-house, then release the source code to the community. When it comes to Mirantis and OpenStack, however, the situation is just the opposite. We start with upstream OpenStack and build our product through a combination of testing, hardening patches, and additional open source features that make it easier to use without creating vendor lock-in.

Once you add a commercial distribution such as Mirantis OpenStack to the mix, things can seem complicated — particularly given that every piece of OpenStack code we work on is open source. Let’s take a look at how this process works so you can understand which parts of Mirantis’ work are, and are not, open.

How Mirantis works in the community

Although it would be simple to assume that Mirantis builds code in-house and then sends the result into the community, that’s not, in fact, how it works.

To succeed in open source, a company must first nurture the community on which it aims to build its business. For Mirantis, that’s OpenStack. That means that when we work on new features, we do them in public, in the community. When we find bugs, we fix them in public, in the community. The goal is to make OpenStack as strong as possible, so that we can provide the best value for our customers using it.

 Mirantis OpenStack has zero proprietary code in it.

 Let me say that again.

 Mirantis OpenStack has zero proprietary code in it.

Of course, that doesn’t mean that Mirantis OpenStack is identical to the OpenStack trunk; far from it. When we package up Mirantis OpenStack, it includes the following:

  •  The latest stable branch of OpenStack

  • Hardening packages that include fixes for issues that we discover in testing

  • Select additional bug fixes that may or may not have been merged/backported by the community yet

Let’s dig down a little and get some more details.

Mirantis OpenStack components

When you download Mirantis OpenStack, you get OpenStack code with multiple components that work together for optimized functionality.

  • OpenStack packages: The Mirantis OpenStack distribution is based on OpenStack project packages from the latest stable project release; for example, Mirantis OpenStack 6.x release is based on the OpenStack Juno release.

  • Fuel: Fuel is a deployment and management tool for OpenStack, related community projects, and plugins, enabling provisioning, deployment, and lifecycle management.

  • Continuous Integration (CI) Infrastructure: Mirantis OpenStack includes components to implement and test your OpenStack environment, such as OpenStack unit tests, Fuel unit and system tests, and the Rally and Tempest projects.

  • Mirantis OpenStack Express service: Mirantis OpenStack Express enables you to build your hybrid OpenStack cloud and add compute and data capacity on demand. This service provides a self-service portal, hardware on demand, and, of course, documentation.

All of these components are based on work from the Mirantis engineering team as well as from the OpenStack community, and virtually all of it is open source, as you can see from Figure 1:

Mirantis_OpenStack_ComponentsFigure 1. Mirantis OpenStack Components

In Figure 1, anything in green is open source, while items in blue are not. The latter include Mirantis OpenStack Express components and hardening patches for OpenStack packages.

While the Mirantis OpenStack Express self-service portal and hardware-on-demand service are essential to the functioning of Mirantis OpenStack Express, they aren’t actually part of OpenStack; they just enable us to provide OpenStack as a service.

Hardening patches for OpenStack packages is where a lot of people get tripped up. The actual patches that we use for hardening OpenStack are closed, but all of the code provided in them is open. (We’ll explain how that’s possible–as well as why we do it this way–in a moment.) 

Let’s take a look at how all of this works in practice.

Fixing bugs

There are two ways we might discover a bug; we might discover it as part of our testing and development, or we might discover it while resolving a customer support request.  In either case, the process is the same:

  •  Check to see if a bugfix already exists on the master branch of the affected component. If so, we submit a backport of the fix to the corresponding stable release branches in our internal fuel-infra repo.

  • If there is no existing bugfix, the Mirantis OpenStack components team:

    1. Reports the bug to the affected OpenStack component (if it has not already been reported).

    2. Documents the relation of the upstream bug in a comment or description of the corresponding Mirantis OpenStack bug report on Launchpad (in the Mirantis OpenStack project).

    3. Proposes a fix for the master branch of the public git repository on review.openstack.org, and gathers initial feedback from the community to make sure it’s on track.

    4. Ports the fix to the stable release branches in a private git repository on review.fuel-infra.org.

At this point, the bug fix becomes available to our customers while we wait for it to be accepted and merged by the OpenStack community.

We follow a similar process when we discover bugs in Linux packages such as KVM, libvirt, Open vSwitch, and even the Linux kernel, as well as for Python dependencies of OpenStack components, such as kombu, and tools and libraries required for Fuel, such as, MCollective, Puppet, and Cobbler. One significant distinction here, however, is that the git repository on review.fuel-infra.org is public, not private.

We track bugs in Fuel and its Python and Ruby dependencies in the Fuel project on Launchpad. Bugs in all other packages are tracked in the Mirantis OpenStack project.

Hardening patches for OpenStack packages

Mirantis stays as close as possible to upstream OpenStack code, and as I said, we commit 100% of our bug fixes and patches upstream. However, reviewing those patches and getting them included in a stable OpenStack release takes time, so Mirantis provides some bug fixes to you, our users, ahead of the upstream to allow you to take advantage of them immediately.

While the source code of all software packages in the Mirantis OpenStack distribution is open, the breakdown of the difference between the code shipped in Mirantis hardened packages and the corresponding public stable release of OpenStack into individual patches and their relation to bugs (and other reasoning behind specific code changes) constitutes the core of Mirantis intellectual property, and is closed.

So what does that mean? Well, consider the thousands of patches that are submitted to OpenStack each release cycle. Because we don’t add proprietary extensions to Mirantis OpenStack, you know that every bit of code we ship is included in one of those patches. However, we don’t necessarily include every single patch in Mirantis OpenStack.

The “hardening patches” are simply the log that shows which specific bug fixes we chose to apply, why, and in what order — the record of how to get from the community stable branch to Mirantis OpenStack.

In other words, hardened packages are the log of our build of working OpenStack packages, a build that is designed for successful production deployment. Not all upstream patches are created with production deployment in mind; it’s that selection and application process, established through testing and review of patches and bugs, that distinguishes Mirantis OpenStack from any arbitrary build or application of patches.

Which Mirantis OpenStack components are open

All other Mirantis OpenStack product components, including Mirantis patches for non-OpenStack packages, are available to the general public and, where applicable, released under open source licenses, with Apache License 2.0 preferred. Some examples include:

Why do we do this? Because we recognize that open sourcing these components is a benefit to both the wider community and to the Mirantis product engineering team. For example, when Mirantis publishes the source code for Python libraries and Linux packages, users can build an ISO easily, without hacking to get the correct versions of third-party Python libraries. When users contribute bug fixes to the OpenStack master branch, the Mirantis OpenStack team can spend less time porting bug fixes across OpenStack releases.  It also means our engineers don’t have to spend time porting bug fixes between Mirantis OpenStack and the “vanilla” version.

The lifecycle of changes going into Mirantis OpenStack is summarized in Figure 2:

Mirantis_OpenStack_Changes-Lifestyle
Figure 2. Mirantis OpenStack changes lifecycle

Once again, green boxes represent code repositories and issue trackers that are open to the community, while blue boxes represent trackers and repositories that are only available to Mirantis employees.

In particular, we use closed git repositories to track the hardening patches for OpenStack components, and a closed support portal to protect confidential customer information.

While we keep the intermediate builds of Mirantis OpenStack private until we have a build that our Quality Assurance team has approved for general availability, we publish builds based on the same code with the Mirantis branding removed. This way, users can experiment with the features from the future releases of Mirantis OpenStack, and yet can count on our quality standards and our support when deploying production environments using officially branded Mirantis OpenStack release builds.

Handling security vulnerabilities

Mirantis follows the responsible disclosure rules for non-zero-day security vulnerabilities. That means that embargoed vulnerabilities are reported and fixed in private, and not published until the disclosure date coordinated with other software vendors. Security-related bugs in Fuel and Mirantis OpenStack projects on Launchpad are marked private until the fix is published via the Mirantis OpenStack Technical Bulletins Process. Security vulnerabilities in base operating system packages are resolved based on the corresponding Linux distribution’s security announcements and security updates channels.

Package maintenance

Although base operating systems such as CentOS and Ubuntu include RPM and .deb packages for most OpenStack and Fuel dependencies, not all of these packages have the versions required by Mirantis OpenStack. To solve this problem, the Mirantis components team often builds RPM and .deb packages of specific versions using available sources.

This process is completely open. The teams responsible for specific packages for Fuel, Mirantis OpenStack components, and Mirantis OpenStack Linux produce both code and package specs for their packages. The Fuel OpenStack continuous integration team creates all required git repos, Gerrit projects, and branches.

Understanding the coordination of Mirantis OpenStack code

As you can see, Mirantis works within the community to make OpenStack as strong as possible, while still providing the best possible options and creating new solutions for our customers. Mirantis is committed to making our code completely open and uses multiple methods to get you the quickest updates while working within the upstream community.

The post Mirantis OpenStack: real open community development appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Dmitry Borodaenko at April 28, 2015 06:15 AM

April 27, 2015

Robert Collins

Dealing with deps in OpenStack

We’ve got a problem in OpenStack: dependency management.

In this post I explore it as input to the design summit session on this in Vancouver.

Goals

We have some goals that are broadly agreed:

  1. Guarantee co-installability of a single release of OpenStack
  2. Be able to deliver known-good installs of OpenStack at any point in time – e.g. ‘this is known to work’
  3. Deliver good, clear dependency metadata to redistributors
  4. Support CD deployments of OpenStack from git. Both production and devstack for developers to hack on/with
  5. Avoid firedrills in CI – both internal situations where we run incompatible things we produced, and external situations where some dependency releases a broken version, like the pycparsing one last week
  6. Deployments using the Python dependencies should be up to date and secure
  7. Support doing upgrades in the same Python environment

Assumptions

And we have some baseline assumptions:

  1. We cooperate with the Python ecosystem – publishing our libraries to PyPI for instance
  2. Every commit of server projects is a ‘release’ from the perspective of e.g. schema management
  3. Other things release when they release, not per-commit

The current approach uses a single global list of acceptable install-requires for all our projects, and then merges that into the git trees being tested during the test. Note in particular that this doesn't take place for things not being tested, which we install from PyPI. We create a branch of that global list for each stable release, and we also create branches of nearly everything when we do the stable release, a system that has evolved in part due to the issues in CI when new releases would break stable releases. These new branches have tightly defined constraints – e.g. "DEP >= version-at-this-release, < next-point-release". The idea behind this is that if the transitive closure of deps is constrained, we can install such a version from PyPI and it won't bring in a different version. One of the reasons we needed that was pip bug 988, where pip takes the first occurrence of a dependency, and so servers would depend on oslo.utils which would depend on an unversioned cliff or some such, and if cliff wasn't already installed we'd get the next release's cliff. Now – semver says we're keeping those things compatible, but mistakes happen, and for stable branches there's really little reason to upgrade.

Issues

We have some practical issues with the current system:

  1. It takes just one uncapped dependency anywhere in the wider ecosystem (including packages outside of OpenStack) that depends on something we wanted to stay unchanged: if that dep is encountered first by the pip scanner, game over. Worse, there are components out there that introspect the installed dependencies and fail hard if one is not listed as compatible, which takes a 'testing with an unexpected version' situation and makes it a hard error
  2. We have to run stable branches for everything, even things like OpenStackClient which are intended for end users, and are aimed at a semver rather than branched release model
  3. Due to PIP bug 2687 each time we call pip may introduce the skew that breaks the gate
  4. We don’t deliver goal 1:- because we override the requirements at test time, the actual co-installability may be different, and we don’t know
  5. We deliver goal 2 but it's hard to use:- you have to dig through a specific CI log, and if the CI system has pruned it, you're toast
  6. We don’t avoid external firedrills:- because most of our external dependencies are broad, external releases break us trivially and frequently
  7. Lastly, our requirements are too tight to support upgrades: if bug 2687 were fixed, installing the first upgraded server component would error because its requirements are declared as being incompatible with the last release.

We do deliver goals 3, 4 and 6 though, which is good.

So what can we do differently? In an ideal world, can we get all seven goals?

Proposal

I think we can. Here’s one way it could work:

  1. We fix the two pip bugs above (I’m working on that now)
  2. We teach pip to apply constraints to a package *only if* that package is requested, without the constraints file itself requesting anything
  3. We change our project overrides in CI to use a single constraints file rather than merging into each projects requirements
  4. The single constraints file would be exactly specified: “DEP == VERSION”, not semver or compatible matched.
  5. We make changes to the single constraints file by running a proposed set of constraints
  6. We find out that we should change the constraints file by having a periodic task which compares the constraints file to the published versions on PyPI and proposes changes to the constraints repository automatically
  7. We loosen up the constraints in all our release branches to permit upgrade co-installability

And some optional bits…

  1. We could start testing new-library old-servers again
  2. We could potentially change our branching strategy for non-server components, but I don’t think it harms things – it may just be unnecessary
  3. We could add periodic jobs for testing with unreleased versions of dependencies

Working through each point. Bug 988 causes compatible requirements to be ignored – if we have one constraint of "X > 1.4" and another of "X > 1.3, !=1.5.1" but the "> 1.4" constraint is encountered first, we can end up with 1.5.1 installed, violating a known-bad constraint. Fixing this means that rather than having to have global knowledge of deps at the point where pip is being entered, we can have local knowledge about compatible versions in each package, and as long as the union of requirements is satisfiable, we'll be ok. Bug 2687 causes the constraints that thing A had when it was installed by pip to be ignored by the requirements checking for thing B. For instance, pip install python-openstackclient after pip install nova will meet python-openstackclient's requirements even if that means breaking nova's requirements.
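To make the first-match problem concrete, here is a minimal sketch using the packaging library; the package name X and the versions are purely illustrative:

from packaging.specifiers import SpecifierSet

# Two requirements on the same (hypothetical) package X from different
# places in the dependency graph.
first_seen = SpecifierSet(">1.4")             # the one pip happens to see first
also_declared = SpecifierSet(">1.3,!=1.5.1")  # the one bug 988 causes pip to ignore

candidate = "1.5.1"

# First-match-wins consults only the first specifier set:
print(candidate in first_seen)                    # True  -> 1.5.1 would be installed
# Taking the union of everything declared rejects the known-bad version:
print(candidate in (first_seen & also_declared))  # False -> pick something else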

The reason we can't just use a requirements file today is that a requirements file specifies what needs to be installed as well as what versions are acceptable. We don't want devstack, when configured for nova-network, to install neutron dependencies. But it would today unless we put in place a bunch of complex processing logic. Whereas pip could do this very easily internally.

Merging each requirement into things we’re installing from git fails when we install releases – e.g. of client libraries, in particular because of the interactions with bug 988 above. A single constraints file could include all known good versions of everything we might use, and would apply globally in concert with local project requirements. Best of both worlds, in theory :)

The use of inexact versions is a hard limitation today – we can't upgrade multiple project trees' local version needs atomically, and because we're supplying all the version constraints in one place – the project's merged install_requirements – they have to be broad enough to co-exist during changes to the requirements, and to remain co-installed during upgrades from release to release of OpenStack. But inexact versions lead to variation in CI – every single run becomes a gamble. The primary goal of CI is to tell us whether a new commit X meets all of our quality criteria – change one thing at a time. Running with every new version of every dependency doesn't tell us more about X, it tells us about ecosystem things. Using exact constraints will solve this: we'll decouple 'update dependencies' or 'pycparsing Y is broken' from testing X – e.g. 'improve nova cells'.

We need to be able to update those dependencies though, and the existing global requirements mechanisms are pretty much right; they just need to work with a constraints file instead of patching each repo at test time. We will still want to check that the local requirements are compatible with the global constraints file.
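As a rough sketch of what such a check could look like (the packaging library is used for illustration; the file contents and pins are made up, and this is not the real global-requirements tooling):

from packaging.requirements import Requirement

# Illustrative global constraints: exact pins only.
constraints_text = """\
oslo.config==1.9.3
six==1.9.0
"""

# Illustrative project-local requirements.
requirements_text = """\
oslo.config>=1.9.0
six>=1.7.0
"""

# Build a name -> pinned-version map from the constraints file.
pins = {}
for line in constraints_text.splitlines():
    name, _, version = line.partition("==")
    pins[name] = version

# Check every local requirement against the global pin.
for line in requirements_text.splitlines():
    req = Requirement(line)
    pinned = pins.get(req.name)
    if pinned is None:
        print("%s: no global constraint" % req.name)
    elif pinned in req.specifier:
        print("%s==%s satisfies %s" % (req.name, pinned, line))
    else:
        print("CONFLICT: %s pinned to %s but project wants %s" % (req.name, pinned, line))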

One of the big holes such approaches have is that we may miss out on important improvements – security, performance or just plain old features – if we don’t update our constraints. So we need to be on top of that. A small amount of automation can give us a lot of assistance on that. Just try the new versions and if they work – great. If they don’t, show a failing proposal where we can assess what to do.
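A periodic task along those lines could be as small as the following sketch; the handful of pins and the print-a-proposal step are illustrative assumptions rather than the actual automation, and the PyPI JSON endpoint shown is the current public one:

import requests

# Illustrative exact pins from the single constraints file.
pins = {
    "oslo.config": "1.9.3",
    "six": "1.9.0",
}

for name, pinned in sorted(pins.items()):
    # PyPI publishes the latest released version through its JSON API.
    info = requests.get("https://pypi.org/pypi/%s/json" % name).json()["info"]
    if info["version"] != pinned:
        # In the real workflow this would become a proposed review against the
        # constraints repository, gated by CI rather than applied blindly.
        print("propose bump: %s %s -> %s" % (name, pinned, info["version"]))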

As I mentioned earlier, today we can't actually upgrade: kilo's version locks exclude liberty versions of our libraries, meaning that trying to upgrade nova/kilo to nova/liberty will bring in library versions that conflict with the version deps neutron expresses. We need to open up the project local requirements to avoid this – and we also need to make some guarantees about compatibility with our prior release in our library development (otherwise rebooting a server with only one component upgraded will be a gamble).

Making those guarantees will either require testing every commit against the prior server, or if we can find some way of doing it, testing proposed releases against the prior servers – which would allow more latitude during development of our libraries. The use of constraints files will give us hermetic insulation against bad releases though – we’ll be able to stay productive while we fix the issue and issue a new better release. The crucial thing is to have a tight feedback loop though – so I’m in favour of us either testing each commit against last-stable, or figuring out the ‘tests before releases’ logic (perhaps by removing direct tag access and instead having a thing we propose the intent to as a review).

All this might be enough that we choose to make less stable branches of libraries and go back to plain semver – but it's not a requirement: that's something we can discuss in detail if people care, or just wait and see what the overheads and benefits of keeping those branches are.

Lastly, this new structure will make it possible, if we want to, to test that unreleased versions of external dependencies work with a given component, by using a periodic job. Why periodic? There are two sides to each dependency, and neither side would want their gate to wedge if an accident breaks the other side. E.g. using two of our own components – oslo.messaging and nova. oslo.messaging releases must not break nova, but an individual oslo.messaging commit isn't necessarily constrained (if we have the before-release testing described above). External dependencies are exactly the same, except even less closely aligned than intra-OpenStack components. So running tests with a git version of e.g. libvirt in a periodic job might give us (and libvirt) valuable prior warning about issues.


by rbtcollins at April 27, 2015 09:35 PM

IBM OpenTech Team

OpenStack Summit Demos, Offerings and Free Trials

As we approach the Vancouver Summit, I wanted to provide a brief prelude to some of the exciting things that we will be showing at the IBM booth as well as share the links to some resources to help you get more familiar with IBM offerings. More importantly, as the blog title implies, there are several free trials and tiers as well as promotions that make it easier and cheaper to get hands on experience.

Demos
OpenStack Local – Run and manage your OpenStack cloud in your own data center for the ultimate in visibility, control and security.

OpenStack Dedicated – Open, managed and dedicated cloud on SoftLayer bare metal resources, for superior performance and security.

OpenStack Shared – Open, managed and shared cloud bringing together SoftLayer, OpenStack and Bluemix to improve developer productivity and deployment flexibility.

Process Orchestration – IBM Cloud Orchestrator helps automate and manage hybrid cloud processes and tasks. It comes in a software version for running in your data center as well as a SaaS version that can be used to manage both on and off prem clouds. Check out the free trial.

SoftLayer Bare Metal Resources – Bare metal cloud hosted resources are a great way to get exactly what you want in the cloud without the concerns of noisy neighbors or sharing security. Run your own OpenStack distribution or anything else because the servers are as the name implies – bare and up to you to install and manage.

I’ll publish more as we finalize our demonstrations and promotions so check back for updates.

The post OpenStack Summit Demos, Offerings and Free Trials appeared first on IBM OpenTech.

by ShawnJaques at April 27, 2015 09:00 PM

Rich Bowen

Geocaching in Vancouver

Coming to the OpenStack Summit in Vancouver?

Like Geocaching?

It looks like there’s a lot of caches around the summit location. This map shows the 500 closest.

vancouver

 

I keep meaning to spend a little time at conferences walking around the area and geocaching. Perhaps if I have a few folks with me I’ll see more of it, and meet some interesting people as well.

If you're interested in Geocaching in Vancouver, let me know, and we'll try to set something up. I'll be there from Sunday night (May 17th) through Thursday night (May 21st), and although I know it's an incredibly busy week, I expect we can find an hour or two free in there somewhere.

I’ll also bring my new CryptoCard travel bug to drop off somewhere, since all of my other travel bugs have long since vanished.

I'm also hoping that by the time Red Hat Summit rolls around, I have some special Red Hat community project geocoins to accompany our geocaching outing. If this works out, I'll try to make it a regular feature of my conference trips. So, here's hoping.

 

 

by rbowen at April 27, 2015 06:09 PM

Xen Project Blog

Introducing the Xen Project – OpenStack CI Loop

We recently introduced the new Xen Project Test Lab, a key piece of infrastructure to improve the quality of our code and code coverage. As stated earlier, “we are serious and proactive when it comes to minimising and, whenever possible, eliminating any adverse effects from defects, security vulnerabilities or performance problems”. This also applies to Xen Project integration with OpenStack, which is why the Xen Project Advisory Board made available funds to create the Xen Project – OpenStack CI Loop in January 2015. We started work on setting up our OpenStack CI Loop immediately after the funds were made available, fixed a number of issues in the Xen Project Hypervisor, Libvirt and OpenStack Nova and are excited to announce that our Xen Project – OpenStack CI Loop is now live and in production.

I wanted to thank members of our Test Infrastructure Working Group, comprised of employees from AMD, Amazon Web Services, Citrix, Intel and Oracle, and community members from Rackspace and Suse who have been supporting the creation of the Xen Project – OpenStack CI Loop.

What does an OpenStack CI Loop Do?

An OpenStack external testing platform (or CI Loop) enables third parties to run tests against an OpenStack environment that is configured with that third party’s drivers or hardware and reports the results of those tests on the code review of a proposed OpenStack patch. It is easy to see the benefit of this real-time feedback by taking a look at a code review that shows how these platforms provide feedback.

In this screenshot, you can see a number of Verified +1 labels and one Verified -1 label added by CI loops to OpenStack Nova.

The figure below shows the OpenStack Nova drivers for hypervisors, which allow you to choose which hypervisor(s) to use for your Nova deployment. Note that these are classified into groups A, B and C.

This diagram shows the different Nova compute drivers and their quality status.

In a nutshell, the groups have the following meaning:

  • Group C: These drivers have minimal testing.
  • Group B: These drivers have unit tests that gate commits and functional testing provided by an external system that does not gate commits, but advises patch authors and reviewers of results in the OpenStack code review system.
  • Group A: These drivers have unit tests that gate commits and functional testing that gate commits.

With the introduction of the Xen Project – OpenStack CI Loop we are close to achieving our first goal of moving the Xen Project Hypervisor from Group C to B. This is a goal we first publicly stated at this year's FOSDEM'15, where some of you may have had the opportunity to hear Stefano Stabellini talk about using the Xen Project Hypervisor in OpenStack.

What do we test against?

Currently, our CI Loop tests all OpenStack Nova commits against Ubuntu Xen 4.4.1 packages with a number of patches applied, together with Libvirt 1.2.14, also with a number of patches applied (for more details see this specification). All of the patches are already integrated upstream. When off-the-shelf packages containing these changes are available in Linux distros, we will start testing against them.

What is next?

Monitor the CI loop test results against the official OpenStack CI: we will run the CI loop for a few weeks and monitor whether there are any discrepancies between the results of the official Nova CI loop and ours. If there are any, we will investigate and fix them. This is important, as we had a number of intermittent issues in the Xen Libvirt driver and we want to be sure we have fixed them all.

Fix two known issues, and enable two disabled tests: We have two test cases related to iSCSI support (test_volume_boot_pattern), which are currently disabled. In one case, we believe that the Tempest integration suite makes some KVM-specific assumptions: Xen Project community members are working with the OpenStack community to fix the issue. The second issue is related, but requires further investigation.

Make the Xen Project – OpenStack CI Loop voting: The next logical step is to work with the OpenStack community to make our CI loop voting. This is the first step in moving from group C to group B.

Beyond That: Build trust and become an active member of the OpenStack community. Our ultimate goal is to ensure that the Xen Project Hypervisor moves into group A, alongside KVM. We are also investigating whether it makes sense to run Xen Project + Libvirt against OpenStack Neutron, and looking at integrating OpenStack Tempest into our new Xen Project Test Lab so that we can ensure that upstream Xen and Libvirt will work with OpenStack Nova.

Can I Help?

If you use Xen Project with OpenStack and Libvirt, feel free to get in touch. Some members of the Xen Project community will be at the Vancouver OpenStack Summit and we are keen to learn what issues you have and whether we can help fix them. As community manager, I will help arrange meetings and build bridges: feel free to contact me at community dot manager at xenproject dot org. You can also ask questions on xen-devel@ and xen-users@ and connect with Xen Project Developers and Users.

Additional Resources

by Lars Kurth at April 27, 2015 04:00 PM

Cloudscaling Corporate Blog

What AWS Revenues Mean for Public Cloud and OpenStack More Generally

At the risk of sounding like "I told you so", I wanted to comment on the recent Amazon 10-Q report. If you were paying attention you likely saw it, as it was the first time that AWS revenues were reported broken out from the rest of Amazon.com, ending years of speculation on revenue. The net of it is that AWS revenues for Q1 2015 were $1.566B, putting it on a run rate of just over $6B this year, which is almost on the money for what I predicted at the 2011 Cloud Connect keynote I gave [ VIDEO, SLIDES ]. Predictions in cloud pundit land are tricky, as we're usually about as often wrong as we are right; however, I do find it somewhat gratifying to have gotten this particular prediction right, and I will explain why shortly.

The 2015 Q1 AWS 10-Q

If you don’t want to wade through the 10-Q, there are choice pieces in here that are quite fascinating.  For example as pointed out here AWS is actually the fastest growing segment of Amazon by a long shot.  It is also the most profitable in terms of gross margin according to the 10-Q.  I remember having problems convincing people that AWS was operating at a significant profit over the last 5 years, but here it is laid out in plain black and white numbers.

Other interesting highlights include:

  • Growth from Q1 2014 -> Q1 2015 is 50% y/o/y, matching my original numbers of 100% y/o/y growth in the early days scaling down to 50% in 2015/2016
  • Goodwill + acquisitions is 760M, more than that spent on Amazon.com (retail) internationally and a third of what is spent on Amazon.com in North America
  • 1.1B spent in Q1 2015 “majority of which is to support AWS and additional capacity to support our fulfillment operations”
  • AWS y/o/y growth is 49% compared to 24% for Amazon.com in North America and AWS accounts for 7% of ALL Amazon sales

Here is a choice bit from the 10-Q:

Property and equipment acquired under capital leases were $954 million and $716 million during Q1 2015 and Q1 2014. This reflects additional investments in support of continued business growth primarily due to investments in technology infrastructure for AWS. We expect this trend to continue over time.

The AWS Public Cloud is Here to Stay

I’ve always been bullish on public cloud and I think these numbers reinforce that it’s potentially a massively disruptive business model. Similarly, I’ve been disappointed that there has been considerable knee-jerk resistance to looking at AWS as a partner, particularly in OpenStack land [1].

What does it mean now that we can all agree that AWS has built something fundamentally new?  A single business comparable to all the rest of the U.S. hosting market combined?  A business focused almost exclusively on net new “platform 3” applications that is growing at an unprecedented pace?

It means we need to get serious about public and hybrid cloud. It means that OpenStack needs to view AWS as a partner and that we need to get serious about the AWS APIs.  It means we should also be looking closely at the Azure APIs, given it appears to be the second runner-up.

As the speculation ceases, let’s remember, this is about creating a whole new market segment, not about making incremental improvements to something we’ve done before.


[1] If you haven’t yet, make sure to check out the latest release we cut of the AWS APIs for OpenStack

by Randy Bias at April 27, 2015 03:20 PM

Swapnil Kulkarni

[OpenStack] OpenStack oslo packages not available in devstack? Here’s how you should do it

Prerequisites:
– DevStack setup requires to have 1 VM/ BM machine with internet connectivity.
– Setup a fresh supported Linux installation. (Ubuntu/Fedora/CentOs)
– Install Git

Steps
1. Clone devstack from its git repository.

$git clone https://github.com/openstack-dev/devstack.git

2. Create your local.conf as per your requirements.

3. Add the following line to your local.conf:

LIBS_FROM_GIT=cliff,debtcollector,oslo.concurrency,oslo.config,oslo.context,oslo.db,oslo.i18n,oslo.log,oslo.messaging,oslo.middleware,oslo.policy,oslo.rootwrap,oslo.serialization,oslo.utils,oslo.versionedobjects,oslo.vmware,pycadf,stevedore,taskflow,tooz,pbr

4. Deploy your Devstack

$cd devstack && ./stack.sh

After the script completes you will see the following message:

Horizon is now available at http://X.X.X.X/
Keystone is serving at http://X.X.X.X:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: xxxxxxx
This is your host ip: X.X.X.X

Check that your oslo packages were cloned from git:

$ls /opt/stack
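
Optionally, you can also confirm from Python that a library was installed from the git checkout (oslo.config here is just one example; the path in the comment assumes devstack's default /opt/stack destination):

import oslo_config

# When LIBS_FROM_GIT worked, the module is loaded from the git checkout,
# e.g. /opt/stack/oslo.config/oslo_config/__init__.py
print(oslo_config.__file__)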

Source credentials required for executing commands

For demo user

$source accr/demo/demo

For admin user

$source accr/admin/admin

by coolsvap at April 27, 2015 10:31 AM

[OpenStack] OpenStack Clients not available in devstack? Here’s how you should do it

Prerequisites:
– DevStack setup requires to have 1 VM/ BM machine with internet connectivity.
– Setup a fresh supported Linux installation. (Ubuntu/Fedora/CentOs)
– Install Git

Steps
1. Clone devstack from its git repository.

$git clone https://github.com/openstack-dev/devstack.git

2. Create your local.conf as per your requirements.

3. Add the following line to your local.conf:

LIBS_FROM_GIT=python-ceilometerclient,python-cinderclient,python-glanceclient,python-heatclient,python-ironicclient,python-keystoneclient,python-neutronclient,python-novaclient,python-saharaclient,python-swiftclient,python-troveclient,python-openstackclient

4. Deploy your Devstack

$cd devstack && ./stack.sh

After the script completes you will see the following message:

Horizon is now available at http://X.X.X.X/
Keystone is serving at http://X.X.X.X:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: xxxxxxx
This is your host ip: X.X.X.X

Check that your clients were cloned from git:

$ls /opt/stack
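
Optionally, you can also confirm from Python that a client was installed from the git checkout (novaclient here is just one example; the path in the comment assumes devstack's default /opt/stack destination):

import novaclient

# When LIBS_FROM_GIT worked, the module is loaded from the git checkout,
# e.g. /opt/stack/python-novaclient/novaclient/__init__.py
print(novaclient.__file__)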

Source credentials required for executing commands

For demo user

$source accr/demo/demo

For admin user

$source accr/admin/admin

by coolsvap at April 27, 2015 09:25 AM

Maish Saidel-Keesing

Why I Decided to Run for the OpenStack Technical Committee

Lately I have been thinking long and hard about whether I can contribute to the OpenStack community in a more effective way.

Almost all of my focus today is on OpenStack, on its architecture and how to deploy certain solutions on top of such an infrastructure.

What is the Technical Committee?

It is a group of 13 people elected by the OpenStack ATCs (Active Technical Contributors – the people who have been actively contributing code to the projects over the last year). There are seven spots up for election this term, in addition to the six TC members who were chosen six months ago for a term of one year.

The TC’s Mission is defined as follows:

The Technical Committee (“TC”) is tasked with providing the technical leadership for OpenStack as a whole (all official projects, as defined below). It enforces OpenStack ideals (Openness, Transparency, Commonality, Integration, Quality...), decides on issues affecting multiple projects, forms an ultimate appeals board for technical decisions, and generally has technical oversight over all of OpenStack.

On Thursday I decided to take the plunge. Here is the email where I announced my candidacy.

This is not a paid job; if anything, it is more of a "second" part-time job – a voluntary part-time job. There are meetings and email discussions on a regular basis.

There are a number of reasons that I am running for a spot on the TC.

Diversity

In my post The OpenStack Elections - Another Look, I noted that operators were not chosen for the board. This is something that I think is lacking in the OpenStack community today. The influence of the people who are actually using and deploying the software is minimal, if it exists at all. The influence they have comes mostly after the fact (at best), with little input into what they would like to see put into the product.

I am hoping to bring a new perspective to the TC, to help them understand the needs of those who actually deploy the software and have to deal with it day in and day out. They have valid pain points, and in my honest opinion they feel they are not being heard or not being taken into consideration, at least not enough in their eyes.

Acceptance of others

The people who vote are only those who contribute code. Those who have committed a patch to the OpenStack code repositories. That is the definition of an ATC.

It is not easy to get a patch committed. Not at all (at least that is my opinion). You have to learn how to use the tools that the OpenStack community has in place. That takes time. I tried to ease the process with a Docker container to help you along. But even with that, it still seems (to me) that to get into this group of contributors takes time.

It is understandable. There is a standard way of doing things (and rightfully so), so the chances of getting your change accepted the first time are slim, for a number of reasons that I will not go into in this post.

I think that the definition of contributor should be expanded and not limited only to those who write the code. There are a number of other ways to contribute.

I know that this will not be an easy “battle to win”. I am essentially asking the people to relinquish the way they have been doing things for the past 5 years and allow those who are not developers, those who do not write the code, to steer the technical direction of OpenStack.

I do think this will be in the best interest of everyone to extend the reach of OpenStack community, to branch out.

More information on the actual election that will run until April 30th can be found here. If you are one of the approximately 1,800 people who are ATCs, you should have received a ballot for voting.

It will be interesting to see the results which should be out in a few days.

As always your thoughts and comments are appreciated, please feel free to leave them below.

by Maish Saidel-Keesing (noreply@blogger.com) at April 27, 2015 09:00 AM

Opensource.com

The next OpenStack Summit, a bug-fixing hackathon, and more

The Opensource.com weekly look at what is happening in the OpenStack community and the open source cloud at large.

by Jason Baker at April 27, 2015 07:00 AM

IBM OpenTech Team

Debugging keystone tests and live deployments

Debugging an application is sometimes necessary, and keystone (OpenStack's Identity service) is like any other, though it does have its quirks. For the most part, developers will just add in import pdb; pdb.set_trace() and run the application, resulting in an interactive prompt.

Debugging keystone tests

The wrong way
Say a developer wants to debug a keystone federation test, test_create_idp in test_v3_federation.py. Most will use python’s debugger, pdb.

    def test_create_idp(self):
        """Creates the IdentityProvider entity associated to remote_ids."""

        import pdb; pdb.set_trace()
        keys_to_check = list(self.idp_keys)
        body = self.default_body.copy()
        body['description'] = uuid.uuid4().hex

However, if a developer runs tox -e py27 test_v3_federation to run the unit tests, with the above changes, they’ll see the following output:

steve$ tox -e py27 test_v3_federation
...
Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "keystone/tests/unit/test_v3_federation.py", line 2001, in setUp
        super(FederatedTokenTests, self).setUp()
      File "keystone/tests/unit/test_v3.py", line 158, in setUp
        super(RestfulTestCase, self).setUp(app_conf=app_conf)
      File "keystone/tests/unit/rest.py", line 66, in setUp
        self.load_fixtures(default_fixtures)
      File "keystone/tests/unit/test_v3_federation.py", line 2032, in load_fixtures
        self.load_federation_sample_data()
      File "keystone/tests/unit/test_v3_federation.py", line 669, in load_federation_sample_data
        self._inject_assertion(context, variant)
      File "keystone/tests/unit/test_v3_federation.py", line 184, in _inject_assertion
        import pdb; pdb.set_trace()
      File "/usr/lib/python2.7/bdb.py", line 53, in trace_dispatch
        return self.dispatch_return(frame, arg)
      File "/usr/lib/python2.7/bdb.py", line 91, in dispatch_return
        if self.quitting: raise BdbQuit
    bdb.BdbQuit

This is caused by an issue with testr and testtools; it is discussed further on the OpenStack Wiki.

The right way

What a developer should do is use Oslo Test’s debug helper, oslo_debug_helper. Simply run tox with the debug option instead of py27. Voilà! Your own debugging prompt.

steve$ tox -e debug test_v3_federation
debug develop-inst-nodeps: /opt/stack/keystone
debug runtests: commands[0] | oslo_debug_helper test_v3_federation
Tests running...
--Return--
/opt/stack/keystone/keystone/tests/unit/test_v3_federation.py(184)_inject_assertion();None
import pdb; pdb.set_trace()
(Pdb)

There is additional documentation on how to add the debug environment to any OpenStack project by viewing the Oslo Test documentation.

Debugging live keystone deployments

Now let's say a deployer wants to debug a live server where keystone is running under Apache (the recommended way to deploy keystone).

The wrong way

As in the example above, most folks will use pdb; let's try using it to debug an issue with listing users.

    @controller.filterprotected('domain_id', 'enabled', 'name')
    def list_users(self, context, filters):
        import pdb; pdb.set_trace()

Restart Apache so the changes take effect.

steve:devstack$ sudo service apache2 restart
 * Restarting web server apache2 - [ OK ] 

Attempting to list users will result in an error and upon checking the logs, a similar exception to the one seen above will be logged.

steve$ openstack user list
WARNING: openstackclient.shell The volume version  is not in supported versions 
ERROR: openstack An unexpected error prevented the server from fulfilling your request:  (Disable debug mode to suppress these details.) (HTTP 500) (Request-ID: req-7795e720-0196-4201-8a0a-44a136f4449e)

The right way

Use rpdb instead of pdb to remotely debug an application running under Apache. (Note, to install rpdb, simply run: pip install rpdb.)

Simply drop in import rpdb; rpdb.set_trace() instead of the usual import pdb; pdb.set_trace(). Attempting to run $ openstack user list again will cause the terminal to hang; this is expected. The keystone log will show a message similar to: 2015-04-27 02:02:44.041686 pdb is running on 127.0.0.1:4444. Connect to this service using the nc command in another terminal.
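
For reference, the list_users controller from the wrong-way example would look roughly like this with rpdb dropped in (a sketch; only the debugger line changes):

    @controller.filterprotected('domain_id', 'enabled', 'name')
    def list_users(self, context, filters):
        # rpdb listens on 127.0.0.1:4444 by default; attach to it with nc.
        import rpdb; rpdb.set_trace()

With that change in place and Apache restarted, the nc session in the other terminal looks like the output below.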

steve$ nc 127.0.0.1 4444
/opt/stack/keystone/keystone/identity/controllers.py(221)list_users()
hints = UserV3.build_driver_hints(context, filters)
(Pdb) 

Now you may debug your application. Happy debugging!

The post Debugging keystone tests and live deployments appeared first on IBM OpenTech.

by Steve Martinelli at April 27, 2015 06:16 AM

April 24, 2015

OpenStack Blog

OpenStack Community Weekly Newsletter (Apr 17 – 24)

Why you should attend an OpenStack Summit

OpenStack Summits don’t miss a beat – with a schedule full of diverse breakout sessions, captivating speakers, off-the-wall evening events and the occasional surprise, it’s the twice-yearly event you simply cannot miss. What would you add to the list of 10 most memorable summit moments to date?

Gnocchi 1.0: storing metrics and resources at scale

Gnocchi provides a scalable way to store and retrieve data and metrics from instances, volumes, networks and all the things that make an OpenStack cloud. Gnocchi also provides a REST API that allows the user to manipulate resources (CRUD) and their attributes, while preserving the history of those resources and their attributes. The Gnocchi team takes great pride in the quality of their documentation too, fully available online.

The Road to Vancouver

Relevant Conversations

Deadlines and Development Priorities

Reports from Previous Events

Security Advisories and Notices

Tips ‘n Tricks

Upcoming Events

OpenStack Israel CFP Voting is Open

PyCon-AU Openstack miniconf CFP open

Other News

OpenStack Reactions

 

Having to look for an error into a devstack log


 

 The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at April 24, 2015 10:09 PM

Zane Bitter

A Vision for OpenStack

One of the great things about forcing yourself to write down your thoughts is that it occasionally produces one of those lightbulb moments of clarity, where the jigsaw pieces you have been mentally turning over suddenly all fit together. I had one of those this week while preparing my platform for the OpenStack Technical Committee election.

I want to talk a little about Keystone, the identity management component of OpenStack. Although Keystone supports a database back-end for managing users, the most common way to deploy it in a private cloud is with a read-only LDAP connection to the organisation’s existing identity management system. As a consequence, a ‘user’ in Keystone parlance typically refers to a living, breathing human user with an LDAP entry and an HR file and a 401(k) account.

That should be surprising, because once you have gone to the trouble of building a completely automated system for allocating resources with a well-defined API the very least interesting thing you can do next is to pay a bunch of highly-evolved primates to press its buttons. That is to say, the transformative aspect of a ‘cloud’ is the ability for the applications running in it to interact with and control their own infrastructure. (Autoscaling is the obvious example here, but it is just the tip of an unusually dense iceberg.) I think that deserves to stand alongside multi-tenancy as one of the pillars of cloud computing.

Now when I think back to all the people who have told me they think OpenStack should provide “infrastructure only” I still do not understand their choice of terminology, but I think I finally understand what they mean. I think they mean that applications should not talk back. Like in the good old days.


I think the history of Linux in the server market is instructive here. Today, Linux is the preferred target platform for server applications, but imagine for a moment that this had never come to pass: cast your mind back 15 years to when Steve Ballmer was railing about communists and imagine that .NET had gone on to win the API wars. What would that world look like for Linux? Certainly not a disaster. A great many legacy applications would still have been migrated to Linux from the many proprietary UNIX platforms that proliferated in the 1990s. (Remember AIX? HP/UX? Me neither.) When hardware vendors stopped maintaining their own entire operating systems to focus on adding hardware support to a common open source kernel, everybody benefited (they scaled back an unprofitable line of business, their customers stopped bleeding money, platform vendors still made a healthy profit and the technology advances accrued to the community at large). Arguably, that transition may have funded a lot of the development of Linux over the past 15 years. Yet if that is all that had happened, we could not call it fully successful either.

Real success for open source platforms means applications written against open implementations of open APIs. Moving existing applications over is important, and may provide the bridge funding to accelerate development, but new applications are written every day. Each one written for a proprietary platform instead of an open one represents a cost to society. Linux has come to dominate the server platform, but applications are bigger than a single server now. They need to talk back to the cloud and if OpenStack is to succeed—really succeed—in the long term then it needs to be able to listen.

Microsoft understands this very well, by the way. The subject of Marxist theory and its similarities to the open source movement usually does not even come up when you launch a Linux VM on their cloud—the goal now is to lock you in to Azure, not .NET. Of course the other proprietary clouds (Amazon, Google) are doing exactly the same.

I am passionate about OpenStack because I think it is our fastest route to making an open source platform the preferred option for the applications of the (near) future. I hope you will join me. We can get started right now.


Having an application interact with the OpenStack APIs is really hard to do at the moment, because there is no way I am going to put the unhashed password that authenticates me to my corporate overlords on an application server connected to the Internet. The first step to fixing this actually already exists: Keystone now supports multiple domains, each with its own backend, so that application ‘user’ accounts in a database can co-exist with real meatspace-based user accounts in LDAP. The Heat project has cobbled together some workarounds that make use of this but they rely on Heat’s privileged position as one of the services deployed by the operator, and other projects do not automatically get the benefit either.
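
As a rough sketch of what that first step already allows, here is how an application account in a separate, database-backed domain might be created with python-keystoneclient's v3 API; the endpoint, credentials and names are placeholders, and the pattern is illustrative rather than a recommendation:

from keystoneclient.v3 import client

# Authenticate as a cloud admin (all values here are placeholders).
keystone = client.Client(auth_url='http://keystone.example.com:5000/v3',
                         username='admin',
                         password='secret',
                         project_name='admin',
                         user_domain_name='Default',
                         project_domain_name='Default')

# A database-backed domain for application accounts, kept apart from the
# read-only LDAP domain that holds the human users.
apps = keystone.domains.create(name='applications')

# The application's own identity: no LDAP entry, no HR file, no 401(k).
bot = keystone.users.create(name='autoscaler',
                            domain=apps,
                            password='an-app-specific-secret')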

The next obstacle is that the authorisation functionality provided by Keystone is too simplistic: all rules must be predefined by the operator; by default a user does not need any particular role in a tenant to be granted permission for most operations; and, incidentally, user interfaces have no way of determining which operations should be exposed to any given user. We need to put authorisation under user control by allowing users to decide which operations are authorised for an account, including filtering on tenant-specific data. To get this to work properly, every OpenStack service will need to co-operate at least to some extent.

That gets us a long way toward applications talking back to the cloud, but when the cloud itself talks it must do so asynchronously, without sacrificing reliability. Fortunately, the Zaqar team has already developed a reliable, asynchronous, multi-tenant messaging service for OpenStack. We now need to start the work of adopting it.

These are the first critical building blocks on which we can construct a consistent user experience for application developers across projects like Zaqar, Heat, Mistral, Ceilometer, Murano, Congress, and probably others I am forgetting. There is no need to take anything away from other projects or make them harder to deploy. What we will need is consensus on what we are trying to achieve.

by Zane Bitter at April 24, 2015 07:56 PM

OpenStack Superuser

OpenStack: disruptive and innovative by design

This post is part of the Women of OpenStack Open Mic Series to spotlight women in various roles within our community, who have helped make OpenStack successful. With each post, we learn more about each woman’s involvement in the community and how they see the future of OpenStack taking shape. If you’d like to be featured, please email editor@openstack.org.

Anne Gentle works on the OpenStack project at Rackspace, using open source techniques for API design and documentation. She is responsible for ensuring the OpenStack Documentation site contains relevant and accurate documentation for over 20 projects written in Python across 130 git repositories. She advocates for cloud users and administrators by providing accurate technical information to increase OpenStack adoption as a cloud for the world. You can connect with her on Twitter and through her blog.

What's your role in the OpenStack community?

I serve in two elected leadership roles. I'm the documentation project technical lead (PTL) and also serve on the OpenStack Technical Committee. I've worked on the project since just after it became an open source project in 2010. In 2012 I coordinated efforts to connect OpenStack to the GNOME Outreach Program for Women, now named Outreachy. We wanted to connect female interns to mentors who would work together on OpenStack projects. I like that Claire Massey at the OpenStack Foundation says I'm another API (application programming interface) for OpenStack. I just like connecting people, projects, processes together, whatever it takes.

Why is it important for women to get involved with OpenStack?

I started looking for more women to connect with as I kept going to the in-person Summits twice a year. I've been to every Summit except for the first in Austin. I kept looking around and didn't see many people quite like me. Not that I didn't feel a part of the groups or social scenes, but that I knew I'd have more fun and be more comfortable with technical women joining the community.

Rackspace has been a great place to connect with technical women, and we believe that diversity in both contributor base and user base can be a surprising advantage when making products and working in communities. As an early contributor--and by early I mean when the community numbered in the tens, not hundreds--I always felt like the men I worked with and went to lunch with wanted diversity as well. It's not just a feeling that it's important--I know so from talking with my male colleagues!

What obstacles do women face when getting involved in the OpenStack community?

Our community is large and complex with plenty of entry points, which doesn't make it easy for anyone to approach. I haven't found specific obstacles related to being female, but there are certainly intimidation factors for certain styles of communication. If you're in a Design Summit session, you have to speak up even if your voice shakes. If you're on IRC in an important meeting, you have to type fast and be prepared. If you feel you are not worthy of a leadership role (the old imposter syndrome creeping in), you have to fake it until you make it.

I fully realize these are not obstacles for women only, that some cultures will find our complex community difficult to match up with. What I want to be sure of is that we are considerate of lots of styles and appreciate the diversity of our community.

What can help get women more involved with OpenStack?

I really believe in the power of allies. For women, male allies can make a huge difference in the quality of daily interactions as well as at community events. I recently wrote a blog post about my own personal experiences with male allies since I often get asked the question, "How can men help?" as I go out and talk about the issues facing women in technology. And you know what happened? My male allies like Stefano Maffulli and Nick Chase who both write a weekly newsletter linked to my blog. That was a huge support boost for me and I thank them both. We can help raise the awareness of these issues and also do specific actions to make sure women are welcomed, safe, and thriving in our community.

What changes would you like to see in OpenStack by the time a 12-year-old is 22 and graduating from college? How might that differ if the 12-year-old is a girl?

As long as we're looking 10 years in the future, let's look 10 years back to 2005. Phrases with "iPod" were four of the top 10 Froogle searches for the year. Proprietary technology abounded in the consumer markets. Top-gaining searches were Orkut, Wikipedia, with Myspace at the very top.

We know the social technologies (Orkut, Myspace) dropped off but the open source one remained. Somehow, Wikipedia's staying power is formidable. I believe OpenStack has that sort of staying power, due to being open source in a disruptive and innovative way. In contrast, cloud computing itself will become a given, basically gone from our vocabulary as we just "get" computing power, storage power and interconnected virtual networks. Let's change OpenStack over the years to be self-sustaining and as ubiquitous and "invisible" as web servers.

Two weeks ago, another Racker mom and I gave cloud computing simulation demos to third, fourth and fifth-graders at our kids’ elementary school. We had the kids pretend to be cloud servers and networks with colored balls serving as the packets or files moving through the cloud. They loved it and even the teachers were engrossed in understanding "the cloud" by being "the cloud."

In 10 years these kids will be in higher-education classes, thinking back to simulations and finding ways to abstract away the layers of complexity to gain understanding. That simplification is what I hope for the current sixth-grade girls, to become women in technology who can make the complex so simple that we take it for granted. And please, let's strive to make OpenStack so simple that we don't even notice cloud computing, we just use it. In 10 years, let there be women participating in building and using OpenStack in such numbers that we don't even notice gender.

Cover Photo by Adam Arroyo // CC BY NC-ND 2.0

by Hong Nga Nguyen at April 24, 2015 06:14 PM

Kyle Mestery

Some Thoughts on the OpenStack TC Election

The spring 2015 OpenStack TC elections are underway right now. It's great to see the number of candidates, 19 in all, who have the desire to help shape OpenStack from the TC level. This shows there is still a real interest in being on the TC. Each candidate's email shows what they view as important and how they hope to address the issues they deem relevant to the tasks at hand. I highly encourage you to read all of these before voting.

I’m also encouraging you to think about a quote from Maru Newby’s candidacy email:

… and you are as frustrated with the status quo as I am, I hope you will consider voting for candidates that support the goal of building a more engaged TC focused on ensuring that participation in OpenStack becomes more enjoyable and sustainable.

Now go and vote!

April 24, 2015 05:44 PM

OpenStack Superuser

Superuser weekend reading

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community.

Got something you think we should highlight? Tweet, blog, or email us!

In Case You Missed It

Ahead of the Amazon earnings call, The Economist took a peek at the cloud computing industry. In its opinion, cloud providers run the risk of ending up "somewhat like airlines and mass-market carmakers: chronically afflicted by overcapacity, constantly struggling to achieve a decent margin and perennially hoping that their competitors will keel over first." It's always interesting to read 30,000 foot-view of a mainstream weekly (this is, at best, what your relatives will know about what you do all day) and don't miss the comments section.

Think big is always better? Think again. Mercado Libre's 14-person team handles one of the world's largest e-commerce sites, thanks to OpenStack. More about how they do it -- and the challenges of finding those stellar team members -- on Computing, free registration required.

Ed Leafe, OpenStack hacker at IBM, spent some time manning the OpenStack booth at the recent PyCon. His experience, recounted on his personal blog, will ring true for anyone who's been on the front lines.

"Some had heard the name, but not much else. Others knew it was “cloud something,” but weren’t sure what that something was. Others had installed and played around with it, and had very specific configuration questions. Many people, even those familiar with what OpenStack was, were surprised to learn that it is written entirely in Python, and that it is by far the largest Python project today. It was great to be able to talk to so many different people and share what the OpenStack community is all about."

Speaking of community, Citrix joined the OpenStack Foundation as a corporate sponsor this week. The press release features welcome aboard messages from fellow members at Cisco, Intel, Mirantis and Rackspace.

Veteran IT journalist Mike Vizard at Talkin' Cloud had this to say about the Foundation's newest member: "If nothing else, having Citrix contributing to the OpenStack Foundation should considerably reduce a level of noise in the open source community that for all intents and purposes at this point has become little more than a distraction."

Wondering what's up with open virtual networks (OVN)? Russell Bryant, Red Hat software engineer and individual director at OpenStack, has an OVN status update ready for you.

Diane Mueller, who lives near Vancouver, will be sharing her tips for discovering the best eats, drinks and activities in the city in Superuser's special Summit print edition. Until then, check these out:


We feature user conversations throughout the week, so tweet, blog, or email us your thoughts!

Cover Photo by Gerry Balding // CC BY NC

by Nicole Martinelli at April 24, 2015 05:13 PM

Opensource.com

8 new tutorials for success with OpenStack

The OpenStack community is full of helpful tutorials to help you with installing, deploying, and managing your open source cloud. Here are some of the best published in the last month.

by Jason Baker at April 24, 2015 07:00 AM

April 22, 2015

Rafael Knuth

Catching up with Cloud Online Meetups

There are three upcoming Cloud Online Meetups you may be interested in:* April 24: High Availability...

April 22, 2015 07:43 PM

OpenStack Superuser

China hosts OpenStack bug-fix hackathon

Dubbed the “Kilo Bug Fix Fest,” the recent hackathon held in China brought together 16 engineers in Shanghai.

During the three-day marathon, they focused on fixing Nova and Neutron bugs, although there was some work done on Ceilometer and Keystone, too. In total they identified 43 bugs and fixed 29 of them before the hackathon ended. (A virtual team will continue to work out the kinks on the remaining bugs.) You can check out the etherpad to see the list of attendees and bugs worked on.

Intel and Huawei hosted the event at Intel’s Zizhu campus from April 13-15. The hackathon was designed as a way to get more local engineers involved in OpenStack, helping them learn tips and tricks and gain a better understanding by working closely with others. Cores and project team leads (PTLs) were also on hand to review patches and vote for merges. The two host companies plan to make the bug-fixing fests a regular, twice-yearly event in China.

“It’s good to share ideas with so many friends in the open-source community,” said Shane Wang, engineering manager of the datacenter and cloud software (DCS) team at Intel's open source technology center. "One key thing we’re learning is to plan with core developers as early as possible, so we can have the best chance to impact an upcoming OpenStack release.”

by Superuser at April 22, 2015 05:19 PM

April 21, 2015

IBM OpenTech Team

Get your scheduling on: IBMers @ OpenStack Vancouver!

With the OpenStack Summit in Vancouver fast approaching - and schedule building underway - I wanted to share the 34 or so IBMers who will be presenting and the 18 sessions they will be participating in.  The quality of sessions at OpenStack Summits (like every other conference) is almost solely determined by the speaker: “doers” with deep expertise and previous presentation experience are pretty good bets.  By sharing a little background on each of these IBMers, I hope to make it easier for you to find those high-quality presenters - and sessions! - that lead to a successful conference experience.  And if you find someone who has knowledge you are looking for, be sure to reach out to them at the session, after the session, or any other way; they are there to share! 

First, our OpenStack Core Contributors who will be presenting:


  • Brad Topol is a core contributor to keystone-specs, pycadf, and heat-translator, contributor to multiple other OpenStack projects including keystone and devstack, and IBM Distinguished Engineer with cross-company responsibility for coordinating contributions. 
  • Cindy Lu is a core contributor on the Horizon project and has been an active technical contributor to OpenStack since Icehouse.
  • Henry Nash is a core contributor to the OpenStack Keystone project who spent considerable time on the Keystone v3 API, domains, and federation.
  • Jay Bryant is a core contributor to Cinder and serves as the subject matter expert within IBM for all things Cinder (especially storage driver development).
  • Matt Riedemann is a core contributor to multiple OpenStack projects, including Nova, and serves as a subject matter expert within IBM for all things Nova and OpenStack CI/CD and QA related questions.
  • Steve Martinelli is a core contributor to the Keystone project who focuses on better integration with enterprise environments.  Steve was responsible for adding OAuth support to Keystone and is currently working on Federated Identity.
  • Thai Tran is a core contributor to Horizon and an IBM developer specializing in front-end web development.

Finally, our active technical contributors, implementers, and researchers (among other roles) who will be presenting:


  • Animesh Singh is a Senior Cloud Architect for IBM Cloud Labs who works with customers designing cloud computing solutions on OpenStack and Cloud Foundry in industries such as telco, banking, and healthcare.
  • Ann Corrao is a Distinguished Engineer in the IBM Cloud Innovation Lab who works on the integration and exploitation of future storage technologies.
  • Bengi Karacali is part of the IBM Research with a focus on software defined networking and cloud computing.
  • Bill Owen is part of the IBM General Parallel File System (GPFS) team and has responsibility for integration with OpenStack.
  • Catherine Diep is a Senior Architect and Performance Engineer for IBM Cloud Labs providing technical leadership to proof-of-concept, scalability design & testing, and presale opportunities.
  • CV Venugopal is a global architect for Distributed Systems Management in IBM’s Global Technology Services (GTS).
  • Daniel Krook is a Senior Certified IT Specialist who is heavily involved in innovation around OpenStack, Cloud Foundry, and Docker. He also runs the New York City Cloud Foundry meetup and is co-organizer of the OpenStack New York and OpenStack Connecticut user groups.  Daniel previously spoke at both the Atlanta and Paris summits on OpenStack integration with Cloud Foundry and Docker.
  • Dean Hildebrand manages IBM’s Scale-out Storage Software Research group and has authored numerous scientific publications, including pioneering pNFS by demonstrating the feasibility of providing standard and scalable access to any parallel file system.
  • Dilip Krishnaswamy is a Senior Research Scientist in IBM Research with research interests in distributed data centers, network functions virtualization, edge services, wireless distributed computing, distributed analytics, distributed optimization, and nano-scale networks and systems.  Previously he served as Associate Editor-in-Chief of IEEE Wireless Communications.
  • Dmitry Sotnikov is a Researcher in the Cloud Storage group focused on all aspects of storage systems, from single disk performance up to Cloud storage environments.  Dmitry’s broad background in Object Storage includes work related to OpenStack Swift and data reduction technologies, including data compression and deduplication.
  • James Busche is a Cloud Systems Engineer for IBM Cloud Labs with a passion for cloud deployments & OpenStack.
  • Jason Anderson is a Cloud Architect for IBM Cloud Labs who has been engaging customers for the past 3 years with IBM Cloud technologies (including IBM PureApplication System).
  • John Kasperski has been involved with OpenStack Neutron since the Folsom release – most recently with a focus on hybrid connectivity - and serves as a Neutron subject matter expert within IBM.
  • John Tracey is in IBM Research focused on software defined networking for cloud data centers through OpenStack.
  • Joshua Packer is a developer with IBM Bluemix focused on defining and building a new breed of platform on OpenStack.
  • Kalonji Bankole is a developer with IBM Cloud Labs focused on the integration of OpenStack, Cloud Foundry, and Docker.
  • Manuel Silveyra is a Senior Cloud Solutions Architect focused on innovation and solutions using OpenStack, Docker, and Cloud Foundry. 
  • Michael Factor is an IBM Fellow focused on cloud storage services and object storage in particular.  He has presented on a range of storage topics at three of the last four OpenStack summits, including disaster recovery and adding computational capabilities to object storage.
  • Michael Hines is an IBM Researcher who creates and analyzes experimental systems software and networks, including manipulating OSes, VMs, containers, or network protocols.
  • Mike Williams is a Distinguished Engineer leading cloud and software defined environment initiatives with IBM’s services business.
  • Mohammad Banikazemi is an active contributor to the Neutron project and part of the IBM Research team focused on software defined networking and cloud computing.
  • Nilesh Bhosale is an active contributor to both Cinder and Manila in his role integrating IBM’s General Parallel File System (GPFS) with OpenStack.
  • Radha Ratnaparkhi is the Vice President for Software Defined Environments at IBM Research where she leads a global Research team towards creating differentiated offerings to meet the needs of Hybrid Cloud deployments.
  • Simon Lorenz is an IT Architect based in Germany focused on the integration of IBM’s General Parallel File System (GPFS) with OpenStack.
  • Todd Johnson is a developer on the IBM Cloud Manager with OpenStack team focused on OpenStack Neutron, including hybrid connectivity between on-premise and hosted OpenStack installations.
  • Venkata Jagana leads various cloud and software defined environments initiatives within IBM’s services business.  Previously he led and contributed to many opensource initiatives around Linux and served as IPv6 working group chair for the Linux Foundation to get Linux IPv6 ready for production deployments.
  • Vinit Jain is the architect responsible for IBM Cloud OpenStack Services (ICOS), a hosted, managed, single-tenant OpenStack offering.

Monday, May 18th

12:05pm - 12:45pm

A Conversation with Cinder Developers

Jay Bryant

4:40pm - 5:20pm

Tales From the Gate: How Debugging the Gate Helps Your Enterprise

Matt Riedemann

4:40pm - 5:20pm

From Archive to Insight: Debunking Myths of Analytics on Object Stores  

Dean Hildebrand, Simon Lorenz

Tuesday, May 19th

11:15am - 11:55am

How to Configure your Cloud and Tempest for Interoperability Testing

Catherine Diep

12:05pm - 12:45pm

Past, Present and Future of Fibre Channel in OpenStack

Jay Bryant

2:00pm - 2:40pm

Building a Production Grade PaaS platform like Bluemix on OpenStack, leveraging Container based scalable services

Jason Anderson, Animesh Singh, James Busche, Joshua Packer

2:00pm - 2:40pm

Standing Tall in the Room - Sponsored by the Women of OpenStack

Radha Ratnaparkhi

5:30pm - 6:10pm

OpenStack, Docker, and Cloud Foundry - How do the Leading Open Source Triumvirate Come Together

Animesh Singh, Daniel Krook, Manuel Silveyra, Kalonji Bankole

5:30pm - 6:10pm

New Advances in Federated Identity and Federated Service Provider Support for OpenStack Clouds

Brad Topol, Steve Martinelli

Wednesday, May 20th

9:50am - 10:30am

Network Connectivity in a Hybrid OpenStack Cloud

Todd Johnson, John Kasperski, Vinit Jain

1:50pm - 2:30pm

SwiftSight -- Leveraging open source tools to gain insight into OpenStack Swift

Dmitry Sotnikov, Michael Factor

1:50pm - 2:30pm

Keystone advanced authentication methods

Steve Martinelli, Henry Nash

2:40pm - 3:20pm

Helping Telcos go Green and save OpEx via Policy

Dilip Krishnaswamy

Thursday, May 21st

9:00am - 9:40am

Big Data Analytics and Docker: The Thrilla in Manila

Bill Owen, Dean Hildebrand,  Michael Hines, Nilesh Bhosale

9:50am - 10:30am

Role of NFV Research in Open Source and Open Standards

Dilip Krishnaswamy

1:30pm - 2:10pm

On-demand Disaster Recovery (DR) service enablement through Software Defined Environments under hybrid clouds

Venkata Jagana, Ramesh Palakodeti, CV Venugopal, Mike Williams, Ann Corrao

1:30pm - 2:10pm

OpenStack Networking: It's time to talk Performance

Bengi Karacali, John Tracey, Mohammad Banikazemi, George Almasi

4:10pm - 4:50pm

Beyond the Horizon: Innovating and Customizing Horizon using AngularJS

Cindy Lu, Thai Tran

The post Get your scheduling on: IBMers @ OpenStack Vancouver! appeared first on IBM OpenTech.

by Michael Fork at April 21, 2015 10:22 PM

Russell Bryant

OVN and OpenStack Status – 2015-04-21

It has been a couple weeks since the last OVN status update. Here is a review of what has happened since that time.

ovn-nbd is now ovn-northd

Someone pointed out that the acronym “nbd” is used for “Network Block Device” and may exist in the same deployment as OVN.  To avoid any possible confusion, we renamed ovn-nbd to ovn-northd.

ovn-controller now exists

ovn-controller is the daemon that runs on every hypervisor or gateway.  The initial version of this daemon has been merged.  The current version of ovn-controller performs two important functions.

First, ovn-controller populates the Chassis table of the OVN_Southbound database.  Each row in the Chassis table represents a hypervisor or gateway running ovn-controller.  It contains information that identifies the chassis and what encapsulation types it supports.  If you run ovs-sandbox with OVN support enabled, it will run the following commands to configure ovn-controller:

ovs-vsctl set open . external-ids:system-id=56b18105-5706-46ef-80c4-ff20979ab068
ovs-vsctl set open . external-ids:ovn-remote=unix:"$sandbox"/db.sock
ovs-vsctl set open . external-ids:ovn-encap-type=vxlan
ovs-vsctl set open . external-ids:ovn-encap-ip=127.0.0.1
ovs-vsctl add-br br-int

After setup is complete, we can check the OVN_Southbound database’s contents and see the corresponding Chassis entry:

Chassis table
_uuid                                encaps                                 gateway_ports name                                  
------------------------------------ -------------------------------------- ------------- --------------------------------------
2852bf00-db63-4732-8b44-a3bc689ed1bc [e1c1f7fc-409d-4f74-923a-fc6de8409f82] {}            "56b18105-5706-46ef-80c4-ff20979ab068"

Encap table
_uuid                                ip          options type 
------------------------------------ ----------- ------- -----
e1c1f7fc-409d-4f74-923a-fc6de8409f82 "127.0.0.1" {}      vxlan

The other important task performed by the current version of ovn-controller is to monitor the local switch for ports being added that match up to logical ports created in OVN.  When a port is created on the local switch with an iface-id that matches the OVN logical port’s name, ovn-controller will update the Bindings table to specify that the port exists on this chassis.  Once this is done, ovn-northd will report the port as up in the OVN_Northbound database.
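As a rough sketch (br-int comes from the ovs-sandbox setup above; the port name vif1 and logical port name sw0-port1 are just placeholders), binding a logical port to the local chassis looks something like this:

# create a local OVS port whose iface-id matches an existing OVN logical port name
ovs-vsctl add-port br-int vif1 -- \
    set Interface vif1 external-ids:iface-id=sw0-port1

ovn-controller should then add a Bindings row for this chassis, similar to the dump below.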

$ ovsdb-client dump OVN_Southbound
Bindings table
_uuid                                chassis                                logical_port                           mac parent_port tag
------------------------------------ -------------------------------------- -------------------------------------- --- ----------- ---
...
2dc299fa-835b-4e42-aa82-3d2da523b4d9 "81b0f716-c957-43cf-b34e-87ae193f617a" "d03aa502-0d76-4c1e-8877-43778088c55c" []  []          [] 
...

$ ovn-nbctl lport-get-up d03aa502-0d76-4c1e-8877-43778088c55c
up

The next steps for ovn-controller are to program the local switch to create tunnels and flows as appropriate based on the contents of the OVN_Southbound database.  This is currently being worked on.

The Pipeline Table

The OVN_Southbound database has a table called Pipeline.  ovn-northd is responsible for translating the logical network elements defined in OVN_Northbound into entries in the Pipeline table of OVN_Southbound.  The first version of populating the Pipeline table has been merged. One thing that is particularly interesting here is that ovn-northd defines logical flows.  It does not have to figure out the detailed switch configuration for every chassis running ovn-controller.  ovn-controller is responsible for translating the logical flows into OpenFlow flows specific to the chassis.

The OVN_Southbound documentation has a good explanation of the contents of the Pipeline table.  If you’re familiar with OpenFlow, the format will be very familiar.

As a simple example, let’s just use ovn-nbctl to manually create a single logical switch that has 2 logical ports.

ovn-nbctl lswitch-add sw0
ovn-nbctl lport-add sw0 sw0-port1 
ovn-nbctl lport-add sw0 sw0-port2 
ovn-nbctl lport-set-macs sw0-port1 00:00:00:00:00:01
ovn-nbctl lport-set-macs sw0-port2 00:00:00:00:00:02

Now we can check out the resulting contents of the Pipeline table.  The output of ovsdb-client has been reordered to group the entries by table_id and priority. I’ve also cut off the _uuid column since it’s not important for understanding here.

Pipeline table
match                          priority table_id actions                                                                 logical_datapath
------------------------------ -------- -------- ----------------------------------------------------------------------- ------------------------------------
"eth.src[40]"                  100      0        drop                                                                    843a9a4a-8afc-41e2-bea1-5fa58874e109
vlan.present                   100      0        drop                                                                    843a9a4a-8afc-41e2-bea1-5fa58874e109
"inport == \"sw0-port1\""      50       0        resubmit                                                                843a9a4a-8afc-41e2-bea1-5fa58874e109
"inport == \"sw0-port2\""      50       0        resubmit                                                                843a9a4a-8afc-41e2-bea1-5fa58874e109
"1"                            0        0        drop                                                                    843a9a4a-8afc-41e2-bea1-5fa58874e109

"eth.dst[40]"                  100      1        "outport = \"sw0-port2\"; resubmit; outport = \"sw0-port1\"; resubmit;" 843a9a4a-8afc-41e2-bea1-5fa58874e109
"eth.dst == 00:00:00:00:00:01" 50       1        "outport = \"sw0-port1\"; resubmit;"                                    843a9a4a-8afc-41e2-bea1-5fa58874e109
"eth.dst == 00:00:00:00:00:02" 50       1        "outport = \"sw0-port2\"; resubmit;"                                    843a9a4a-8afc-41e2-bea1-5fa58874e109

"1"                            0        2        resubmit                                                                843a9a4a-8afc-41e2-bea1-5fa58874e109

"outport == \"sw0-port1\""     50       3        "output(\"sw0-port1\")"                                                 843a9a4a-8afc-41e2-bea1-5fa58874e109
"outport == \"sw0-port2\""     50       3        "output(\"sw0-port2\")"                                                 843a9a4a-8afc-41e2-bea1-5fa58874e109

In table 0, we’re dropping anything with a broadcast/multicast source MAC. We’re also dropping anything with a logical VLAN tag, as that doesn’t make sense. Next, if the packet comes from one of the ports connected to the logical switch, we will continue processing in table 1. Otherwise, we drop it.

In table 1, we will output the packet to all ports if the destination MAC is broadcast/multicast. Note that the output action to the source port is implicitly handled as a drop. Finally, we’ll set the output variable based on destination MAC address and continue processing in table 2.

Table 2 does nothing but continue to table 3. In the ovn-northd code, table 2 is where entries for ACLs go. ovn-nbctl does not currently support adding ACLs. This table is where Neutron will program security groups, but that’s not ready yet, either.

Table 3 handles sending the packet to the right output port based on the contents of the outport variable set back in table 1.

The logical_datapath column ties all of these rows together as implementing a single logical datapath, which in this case is an OVN logical switch.

There is one other item supported by ovn-northd that is not reflected in this example. The OVN_Northbound database has a port_security column for logical ports. Its contents are defined as “A set of L2 (Ethernet) or L3 (IPv4 or IPv6) addresses or L2+L3 pairs from which the logical port is allowed to send packets and to which it is allowed to receive packets.” If this were set here, table 0 would also handle ingress port security and table 3 would handle egress port security.

We will look at more detailed examples in future posts as both OVN and its Neutron integration progress further.

Neutron Integration

There have also been several changes to the Neutron integration for OVN in the last couple of weeks.  Since ovn-northd and ovn-controller are becoming more functional, the devstack integration runs both of these daemons, along with ovsdb-server and ovs-vswitchd.  That means that as you create networks and ports via the Neutron API, they will be created in OVN and result in Bindings and Pipeline updates.
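As a hedged example of that flow (the network and port names here are arbitrary, and this assumes a devstack built with the OVN Neutron plugin enabled):

# create a network and a port through the Neutron API
neutron net-create demo-net
neutron port-create --name demo-port demo-net

# the corresponding logical switch and logical port should now show up in OVN
ovsdb-client dump OVN_Northbound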

We now also have a devstack CI job that runs against every patch proposed to the OVN Neutron integration.  It installs and runs Neutron with OVN.  Devstack also creates some default networks.  We still have a bit more work to do in OVN before we can expand this to actually test network connectivity.

Also related to testing, Terry Wilson submitted a patch to OVS that will allow us to publish the OVS Python bindings to PyPI.  The patch has been merged and Terry will soon be publishing the code to PyPI.  This will allow us to install the library for unit test jobs.

The original Neutron ML2 driver implementation used ovn-nbctl.  It has now been converted to use the Python ovsdb library, which should be much more efficient.  neutron-server will maintain an open connection to the OVN_Northbound database for all of its operations.

I’ve also been working on the necessary changes for creating a port in Neutron that is intended to be used by a container running inside a VM.  There is a python-neutronclient change and two changes needed to networking-ovn that I’m still testing.

There are some edge cases where a resource can be created in Neutron but fail before we’ve created it in OVN.  Gal Sagie is working on some code to get them back in sync.

Gal Sagie also has a patch up for the first step toward security group support.  We have to document how we will map Neutron security groups to rules in the OVN_Northbound ACL table.

One piece of information that is communicated back up to the OVN_Northbound database by OVN is the up state of a logical port.  Terry Wilson is working on having our Neutron driver consume that so that we can emit a notification when a port that was created becomes ready for use.  This notification gets turned into a callback to Nova to tell it the VIF is ready for use so the corresponding VM can be started.


by russellbryant at April 21, 2015 08:13 PM

Tesora Corp

You can try OpenStack before you, er, “buy”

On the fence about OpenStack? Need to see if your specific application will be straightforward to run? Enter TryStack.org, the OpenStack Sandbox, or, as the developers call it, “a free way to try OpenStack with your apps.” What’s the catch? There really isn’t one. TryStack is just what it appears to be — large clusters […]

The post You can try OpenStack before you, er, “buy” appeared first on Tesora.

by Valerie Silverthorne at April 21, 2015 07:04 PM

DreamHost

Blow off Steam with DreamHost, Akanda, & Cumulus Networks!

Ain’t no party like a DreamHost-Akanda-Cumulus Networks partyyy!!

Say whaaat?! That’s right — we’re throwing a TRIPLE  joint party at the OpenStack Summit in Vancouver, BC!

OpenStack Summit attendees, we’d like to invite you to join us on May 20 to celebrate the first OpenStack Summit of 2015! This is your chance to partake in stimulating conversation about OpenStack, Vancouver, open-source networking, and just life in general after a long hard day of nerd talk. Not only that, we’ll be providing delicious cocktails, cold craft beers (brewed on-site!), and yummy appetizers — all on us!

We’ll be at Vancouver’s famed steam-powered STEAMWORKS BREW PUB!

This is a FREE event, but space is limited — Sign up now at the link below!

RSVP here!

Please note this is a 21+ event. See you there!

by ellice at April 21, 2015 05:00 PM

OpenStack Superuser

Why you should attend an OpenStack Summit

OpenStack Summits don’t miss a beat - with a schedule full of diverse breakout sessions, captivating speakers, off-the-wall evening events and the occasional surprise, it’s the twice-yearly event you simply cannot miss.

With the Vancouver Summit kicking off in four weeks, here are 10 of the most memorable summit moments to date.

1) Surprise guests

The keynote stage has welcomed speakers from around the world, including iQIYI’s Eric Ye and BBVA’s Jose-Maria Sanjose. At the OpenStack Paris Summit, a surprise guest was in store for attendees when a BMW i8 rolled out in front of the audience, setting the stage for Dr. Stefan Lenz’s BMW keynote.

Wondering what surprise we’ll roll out at the OpenStack Vancouver Summit? There’s only one way to find out!

2) Dynamic Duos

Cars aren’t the only surprise guests that have graced the OpenStack Summit keynote stage. At the Atlanta Summit in May 2014, Guillaume Aubuchon, CTO for DigitalFilm Tree, joined OpenStack Foundation COO Mark Collier on stage, posing as Zach Galifianakis. The only rational thing to do at that point was to bring out two real ferns, spoofing Galifianakis’ famous “Between Two Ferns” comedy bit. With phrases like, “software-defined paper” and “proprietary clouds killed 25,000 people last year,” the skit resulted in an eruption of laughter and tweets, similar to when Dopenstack took the San Diego and Portland Summits by storm.

Video: https://www.youtube.com/embed/UZM114Ng2CM

3) CERN, featuring Tim Bell - enough said

No OpenStack Summit is complete without Tim Bell and his all-star team from CERN. First appearing at the OpenStack Boston Summit in October 2011, Bell discussed CERN’s experiences moving towards an environment around OpenStack. He’s been a highlight ever since. With the Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, fired up for its second three-year run, expect fireworks. Check out CERN’s most recent keynote in Paris.

Video: https://www.youtube.com/embed/QJll5nBclh4

4) OpenStack Summit PARTIES

What happens after the close of sessions? PARTIES! Nothing is better than free drinks, food and the opportunity to keep the summit rolling and hang out with OpenStack friends - old and new. The packed Marketplace Booth Crawl in Atlanta and HP’s circus-themed party in Paris are still fresh on the minds of many attendees.

This time around, you don’t want to miss the fantastic evening that HP has organized for Vancouver Summit attendees. In a nod to Vancouver’s history as a backdrop for horror flicks, they’ve put together a spook-tacular evening shindig on Tuesday, May 19.

5) The National Security Agency (NSA) keynote in the final weeks of a pre-Snowden era

Proving that anyone can open up at the summit, Nate Burton from the NSA shared how the NSA has leveraged OpenStack to build its own private cloud... Though he did leave a few key things [redacted].

6) The growing Women of OpenStack community

Since the April 2013 Portland Summit, the Women of OpenStack community has gathered to meet each other and discuss the ongoing initiative of increasing participation of women in the OpenStack community. Now, not all productive conversations have to be held within the many walls of a Summit venue. In Hong Kong, the women of OpenStack boarded a “junk” boat to mingle over wine and appetizers and then gathered again in November 2014 for a happy hour with a city-wide view of Paris, followed by a Tuesday morning working session to plan initiatives for the year ahead.

The Women of OpenStack are at it again in Vancouver and invite all women summit attendees to board a chartered yacht on Sunday afternoon and slowly cruise around the perimeter of Vancouver with a spectacular view of the city and mountains before heading to a happy hour at a nearby venue.

7) The explosion of international OpenStack Summits

In November 2013, the first international OpenStack Summit ventured to Hong Kong. Since then, the OpenStack Summit has traveled to Paris, and will be held in Vancouver in May 2015 and Tokyo in October 2015.

Want to know where the next international OpenStack Summit will be held? Attend the OpenStack Vancouver Summit where the 2016 OpenStack Summit locations will be revealed!

8) Live demos that don’t disappoint

Whether on the keynote stage or breakout session, live demonstrations take OpenStack from bullets on slides to real application and data. What could be better? Here are two of the most memorable demos to be shown at a summit:

  • Comcast Cable is the largest provider of video, Internet and telephone services in the United States, operating primarily under the XFINITY brand. At the April 2013 Summit in Portland, Mark Muehl, SVP of product engineering, performed a live demo of its set-top cable box running on OpenStack.
Video: https://www.youtube.com/embed/8qzAMcN8uic
  • Mark Shuttleworth, founder of Canonical, captivates the audience, wherever it may be, including the keynote stage at the OpenStack Atlanta Summit where he did a demo kicking off an OpenStack deployment with mixed hypervisors.

9) Early dev lounge

From the earliest days, the Summit has included a dev lounge for developers to chill and talk between sessions. While we instituted a strong "no suits" policy, the occasional politician did slip in.

Here Julian Castro, former mayor of San Antonio, TX and now U.S. Department of Housing and Urban Development Secretary, crashed the gates at the lounge.

10) Learning how the world’s biggest brands are running OpenStack

OpenStack Summits are THE place to hear directly from the world’s largest companies and how OpenStack plugs into their organizations. Hearing from users like the Walt Disney Company, Best Buy, Ctrip, and BMW gives attendees first-hand knowledge of how global companies are moving faster with OpenStack.

Attend the Vancouver Summit to hear from a wide range of users including Walmart, American Express and Adobe.

That’s a wrap - have a memory that we missed? Tweet @OpenStack to be included in a follow up post. Before you miss out on the next whirlwind of summit activities, register for the Vancouver Summit - don’t forget: prices increase in one week, April 28 at 2am CT!

by Allison Price at April 21, 2015 04:20 PM

Moving from CityCloud legacy to OpenStack – Part 1

During the Demo and Q&A on CityCloud’s OpenStack, there was a question that caught my attention. One of the participants asked the panel if they could provide a path to migrate existing first-generation servers to the new OpenStack platform.

The answer is that such a path or service is not provided by CityCloud at the moment. However, you can easily migrate from first-generation servers to OpenStack.

The following diagram captures the basic steps I follow for moving my gen-one cloud servers to the OpenStack Platform. I also use this opportunity to upgrade from CentOS 6 to 7.

[Diagram: migration steps from gen-one servers to OpenStack]

I defined the following steps:

  • Analysis
  • Deploy
  • Interconnect
  • Migrate/Sync Data
  • Switch or Set temporary DNS references
  • Testing
  • Decommission

This post focuses only on analysis and deploy. (Part two covers interconnectivity and moving your data.)

Analysis

This is the most important step of all. You need to know exactly what you have now (on your first-generation server) and how you want to set up this environment on OpenStack. As OpenStack offers many more features and much better scalability, you should seize the momentum to review and enhance your infrastructure. For instance, I completely reviewed networking and volumes/storage. Additionally, I will make use of LBaaS, security groups, keypairs and, once available, private images. So make sure you have your infrastructure drawing ready and document each feature you want to make use of.

Deploy

Actually, this is the easiest and most fun part. CityCloud’s Control Center provides an intuitive interface that allows you to create your servers and configure features (such as networking, volumes/storage, …) very quickly. For a large environment, however, I prefer to script the actual deployment by making use of the API functionality. A prerequisite is that you are familiar with the API syntax and have a reasonable knowledge of scripting.
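As a rough sketch of what such a script could look like, using the nova command-line client against the compute API (the image, flavor, network and key names below are placeholders, and you would source your API credentials first):

# source your OpenStack credentials, e.g. ". openrc", before running this
for name in web01 web02 db01; do
    nova boot --image centos7 --flavor m1.small \
         --nic net-id=INTERNAL_NET_UUID --key-name mykey "$name"
done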

More information on how to deploy can be found by viewing the recorded demo session: https://www.citycloud.com/openstack/openstack-live-demo-qa-video/

In the next post, I will talk about interconnecting your Gen1 environment with your deployed OpenStack servers and how to move your data.

Now go to work and start prepping your migration. Have fun!

This post by Luc Van Steen first appeared on the City Cloud blog. Superuser is always looking for interesting content, email us at editor@superuser.com to get involved.

Cover Photo by chollingsworth3 // CC BY NC

by Luc Van Steen at April 21, 2015 03:22 PM

Julien Danjou

Gnocchi 1.0: storing metrics and resources at scale

A few months ago, I wrote a long post about what I called back then the "Gnocchi experiment". Time passed and we – me and the rest of the Gnocchi team – continued to work on that project, finalizing it.

It's with great pleasure that we are going to release our first 1.0 version this month, roughly at the same time that the integrated OpenStack projects release their Kilo milestone. The first release candidate, numbered 1.0.0rc1, was released this morning!

The problem to solve

Before I dive into Gnocchi details, it's important to have a good view of what problems Gnocchi is trying to solve.

Most of the IT infrastructures out there consist of a set of resources. These resources have properties: some of them are simple attributes, whereas others might be measurable quantities (also known as metrics).

And in this context, cloud infrastructures are no exception. We talk about instances, volumes, networks… which are all different kinds of resources. The problem that arises with the cloud trend is the scalability of storing all this data and being able to request it later, for whatever usage.

What Gnocchi provides is a REST API that allows the user to manipulate resources (CRUD) and their attributes, while preserving the history of those resources and their attributes.

Gnocchi is fully documented and the documentation is available online. We are the first OpenStack project to require patches to integrate the documentation. We want to raise the bar, so we took a stand on that. That's part of our policy, the same way it's part of the OpenStack policy to require unit tests.

I'm not going to paraphrase the whole Gnocchi documentation, which covers things like installation (super easy), but I'll guide you through some basics of the features provided by the REST API. I will show you some examples so you can have a better understanding of what you could leverage using Gnocchi!

Handling metrics

Gnocchi provides a full REST API to manipulate time-series that are called metrics. You can easily create a metric using a simple HTTP request:

POST /v1/metric HTTP/1.1
Content-Type: application/json
 
{
  "archive_policy_name": "low"
}
 
HTTP/1.1 201 Created
Location: http://localhost/v1/metric/387101dc-e4b1-4602-8f40-e7be9f0ed46a
Content-Type: application/json; charset=UTF-8
 
{
  "archive_policy": {
    "aggregation_methods": [
      "std",
      "sum",
      "mean",
      "count",
      "max",
      "median",
      "min",
      "95pct"
    ],
    "back_window": 0,
    "definition": [
      {
        "granularity": "0:00:01",
        "points": 3600,
        "timespan": "1:00:00"
      },
      {
        "granularity": "0:30:00",
        "points": 48,
        "timespan": "1 day, 0:00:00"
      }
    ],
    "name": "low"
  },
  "created_by_project_id": "e8afeeb3-4ae6-4888-96f8-2fae69d24c01",
  "created_by_user_id": "c10829c6-48e2-4d14-ac2b-bfba3b17216a",
  "id": "387101dc-e4b1-4602-8f40-e7be9f0ed46a",
  "name": null,
  "resource_id": null
}


The archive_policy_name parameter defines how the measures that are being sent are going to be aggregated. You can also define archive policies using the API and specify what kind of aggregation period and granularity you want. In this case, the low archive policy keeps 1 hour of data aggregated over 1 second and 1 day of data aggregated to 30 minutes. The functions used for aggregations are the mathematical functions standard deviation, minimum, maximum, … and even 95th percentile. All of that is obviously customizable and you can create your own archive policies.
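As a quick sketch of defining your own policy through the API (the name and definition below are made up, the endpoint is assumed to be /v1/archive_policy, and authentication headers are omitted):

# one day of data aggregated into 5-minute buckets (288 points)
curl -X POST http://localhost/v1/archive_policy \
     -H "Content-Type: application/json" \
     -d '{"name": "medium-example",
          "back_window": 0,
          "definition": [{"granularity": "0:05:00", "points": 288}]}'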

If you don't want to specify the archive policy manually for each metric, you can also create an archive policy rule that will apply a specific archive policy based on the metric name, e.g. metrics matching disk.* will be high-resolution metrics, so they will use the high archive policy.

It's also worth noting Gnocchi is precise up to the nanosecond and is not tied to the current time. You can manipulate and inject measures that are years old and precise to the nanosecond. You can also inject points with old timestamps (i.e. old compared to the most recent one in the timeseries) with an archive policy allowing it (see back_window parameter).

It's then possible to send measures to this metric:

POST /v1/metric/387101dc-e4b1-4602-8f40-e7be9f0ed46a/measures HTTP/1.1
Content-Type: application/json
 
[
{
"timestamp": "2014-10-06T14:33:57",
"value": 43.1
},
{
"timestamp": "2014-10-06T14:34:12",
"value": 12
},
{
"timestamp": "2014-10-06T14:34:20",
"value": 2
}
]

HTTP/1.1 204 No Content


These measures are synchronously aggregated and stored into the configured storage backend. Our most scalable storage drivers for now are based on either Swift or Ceph, which are both scalable object storage systems.

It's then possible to retrieve these values:

GET /v1/metric/387101dc-e4b1-4602-8f40-e7be9f0ed46a/measures HTTP/1.1
 
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
 
[
[
"2014-10-06T14:30:00.000000Z",
1800.0,
19.033333333333335
],
[
"2014-10-06T14:33:57.000000Z",
1.0,
43.1
],
[
"2014-10-06T14:34:12.000000Z",
1.0,
12.0
],
[
"2014-10-06T14:34:20.000000Z",
1.0,
2.0
]
]


As older Ceilometer users might notice here, metrics are only storing points and values, nothing fancy such as metadata anymore.

By default, values eagerly aggregated using mean are returned for all supported granularities. You can obviously specify a time range or a different aggregation function using the aggregation, start and stop query parameters.
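For instance, reusing the metric from the earlier examples, fetching only that last half hour aggregated with max rather than mean would look roughly like this (authentication headers omitted):

# aggregation, start and stop are passed as query parameters
curl "http://localhost/v1/metric/387101dc-e4b1-4602-8f40-e7be9f0ed46a/measures?aggregation=max&start=2014-10-06T14:30&stop=2014-10-06T15:00"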

Gnocchi also supports doing aggregation across aggregated metrics:

GET /v1/aggregation/metric?metric=65071775-52a8-4d2e-abb3-1377c2fe5c55&metric=9ccdd0d6-f56a-4bba-93dc-154980b6e69a&start=2014-10-06T14:34&aggregation=mean HTTP/1.1
 
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
 
[
[
"2014-10-06T14:34:12.000000Z",
1.0,
12.25
],
[
"2014-10-06T14:34:20.000000Z",
1.0,
11.6
]
]


This computes the mean of mean for the metric 65071775-52a8-4d2e-abb3-1377c2fe5c55 and 9ccdd0d6-f56a-4bba-93dc-154980b6e69a starting on 6th October 2014 at 14:34 UTC.

Indexing your resources

Another object and concept that Gnocchi provides is the ability to manipulate resources. There is a basic type of resource, called generic, which has very few attributes. You can extend this type to specialize it, and that's what Gnocchi does by default by providing resource types known for OpenStack such as instance, volume, network or even image.

POST /v1/resource/generic HTTP/1.1
 
Content-Type: application/json
 
{
"id": "75C44741-CC60-4033-804E-2D3098C7D2E9",
"project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D"
}
 
HTTP/1.1 201 Created
Location: http://localhost/v1/resource/generic/75c44741-cc60-4033-804e-2d3098c7d2e9
ETag: "e3acd0681d73d85bfb8d180a7ecac75fce45a0dd"
Last-Modified: Fri, 17 Apr 2015 11:18:48 GMT
Content-Type: application/json; charset=UTF-8
 
{
"created_by_project_id": "ec181da1-25dd-4a55-aa18-109b19e7df3a",
"created_by_user_id": "4543aa2a-6ebf-4edd-9ee0-f81abe6bb742",
"ended_at": null,
"id": "75c44741-cc60-4033-804e-2d3098c7d2e9",
"metrics": {},
"project_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d",
"revision_end": null,
"revision_start": "2015-04-17T11:18:48.696288Z",
"started_at": "2015-04-17T11:18:48.696275Z",
"type": "generic",
"user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
}


The resource is created with the UUID provided by the user. Gnocchi handles the history of the resource, and that's what the revision_start and revision_end fields are for. They indicate the lifetime of this revision of the resource. The ETag and Last-Modified headers are also unique to this resource revision and can be used in a subsequent request using the If-Match or If-None-Match header, for example:

GET /v1/resource/generic/75c44741-cc60-4033-804e-2d3098c7d2e9 HTTP/1.1
If-None-Match: "e3acd0681d73d85bfb8d180a7ecac75fce45a0dd"
 
HTTP/1.1 304 Not Modified


This is useful for synchronizing and updating any view of the resources you might have in your application.

You can use the PATCH HTTP method to modify properties of the resource, which will create a new revision of the resource. The history of the resources is available via the REST API, obviously.
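As a hedged sketch (assuming ended_at is one of the attributes a generic resource lets you patch, and with authentication headers omitted), ending the lifecycle of the resource created above might look like:

# mark the resource as ended; this creates a new revision of it
curl -X PATCH http://localhost/v1/resource/generic/75c44741-cc60-4033-804e-2d3098c7d2e9 \
     -H "Content-Type: application/json" \
     -d '{"ended_at": "2015-04-18T00:00:00"}'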

The metrics properties of the resource allow you to link metrics to a resource. You can link existing metrics or create new ones dynamically:

POST /v1/resource/generic HTTP/1.1
Content-Type: application/json
 
{
"id": "AB68DA77-FA82-4E67-ABA9-270C5A98CBCB",
"metrics": {
"temperature": {
"archive_policy_name": "low"
}
},
"project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D"
}
 
HTTP/1.1 201 Created
Location: http://localhost/v1/resource/generic/ab68da77-fa82-4e67-aba9-270c5a98cbcb
ETag: "9f64c8890989565514eb50c5517ff01816d12ff6"
Last-Modified: Fri, 17 Apr 2015 14:39:22 GMT
Content-Type: application/json; charset=UTF-8
 
{
"created_by_project_id": "cfa2ebb5-bbf9-448f-8b65-2087fbecf6ad",
"created_by_user_id": "6aadfc0a-da22-4e69-b614-4e1699d9e8eb",
"ended_at": null,
"id": "ab68da77-fa82-4e67-aba9-270c5a98cbcb",
"metrics": {
"temperature": "ad53cf29-6d23-48c5-87c1-f3bf5e8bb4a0"
},
"project_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d",
"revision_end": null,
"revision_start": "2015-04-17T14:39:22.181615Z",
"started_at": "2015-04-17T14:39:22.181601Z",
"type": "generic",
"user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
}
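For completeness, here is a minimal Python version of that call. It also shows how to grab the UUID of the freshly created metric in order to push a measure to it; the measures endpoint and field names follow the Gnocchi v1 API as I understand it, but treat this as a sketch with placeholder URL, token and values.

# Sketch: create a resource with a dynamically created metric, then push a measure.
import requests

GNOCCHI_URL = "http://localhost"
HEADERS = {"X-Auth-Token": "<keystone-token>"}

resp = requests.post(
    GNOCCHI_URL + "/v1/resource/generic",
    headers=HEADERS,
    json={
        "id": "AB68DA77-FA82-4E67-ABA9-270C5A98CBCB",
        "metrics": {"temperature": {"archive_policy_name": "low"}},
        "project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
        "user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
    },
)
resp.raise_for_status()
resource = resp.json()

# The response maps the metric name to its UUID, as shown above.
temperature_id = resource["metrics"]["temperature"]

# Push one measure to the new metric (assumed /v1/metric/<id>/measures endpoint).
measures = requests.post(
    GNOCCHI_URL + "/v1/metric/%s/measures" % temperature_id,
    headers=HEADERS,
    json=[{"timestamp": "2015-04-17T15:00:00", "value": 21.5}],
)
measures.raise_for_status()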


Haystack, needle? Find!

With such a system, it becomes very easy to index all your resources, meter them and retrieve this data. What's even more interesting is to query the system to find and list the resources you are interested in!

You can search for a resource based on any field, for example:

POST /v1/search/resource/instance HTTP/1.1
Content-Type: application/json
 
{
"=": {
"user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
}
}


That query will return a list of all resources owned by the user_id bd3a1e52-1c62-44cb-bf04-660bd88cd74d.

You can do fancier queries such as retrieving all the instances started by a user this month:

POST /v1/search/resource/instance HTTP/1.1
Content-Type: application/json
Content-Length: 113
 
{
"and": [
{
"=": {
"user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
}
},
{
">=": {
"started_at": "2015-04-01"
}
}
]
}
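That query is easy to issue programmatically as well. Here is a minimal Python sketch, again with placeholder endpoint and token and assumed X-Auth-Token authentication.

# Sketch: list instances started by a user this month via the search API.
import requests

GNOCCHI_URL = "http://localhost"
TOKEN = "<keystone-token>"

query = {
    "and": [
        {"=": {"user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"}},
        {">=": {"started_at": "2015-04-01"}},
    ]
}

resp = requests.post(
    GNOCCHI_URL + "/v1/search/resource/instance",
    headers={"X-Auth-Token": TOKEN},
    json=query,
)
resp.raise_for_status()
for instance in resp.json():
    print(instance["id"], instance["started_at"])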


And you can do even fancier queries than those (still following?). What if we wanted to retrieve all the instances that were running on host foobar on 15 April 2015 and that already had at least an hour of uptime? Let's ask Gnocchi to look into the history!

POST /v1/search/resource/instance?history=true HTTP/1.1
Content-Type: application/json
Content-Length: 113
 
{
"and": [
{
"=": {
"host": "foobar"
}
},
{
">=": {
"lifespan": "1 hour"
}
},
{
"<=": {
"revision_start": "2015-04-15"
}
}
]
}


I could also mention that you can search for values in metrics. One feature that I will very likely include in Gnocchi 1.1 is the ability to search for resources whose metrics match some value: for example, searching for instances whose CPU consumption was over 80% during a month.

Cherries on the cake

While Gnocchi is well integrated with and based on common OpenStack technology, please do note that it is completely able to function without any other OpenStack component and is pretty straightforward to deploy.

Gnocchi also implements a full RBAC system based on the OpenStack standard oslo.policy, which allows pretty fine-grained control of permissions.


There is also some ongoing work to provide HTML rendering when browsing the API with a Web browser. While still simple, the goal is a minimal Web interface served on top of the API for the same price!

The Ceilometer alarm subsystem supports Gnocchi as of the Kilo release, meaning you can use it to trigger actions when a metric value crosses some threshold. OpenStack Heat also supports auto-scaling your instances based on Ceilometer+Gnocchi alarms.

And there are a few more API calls that I didn't talk about here, so don't hesitate to take a peek at the full documentation!

Towards Gnocchi 1.1!

Gnocchi is a different beast in the OpenStack community. It is under the umbrella of the Ceilometer program, but it's one of the first projects that is not part of the (old) integrated release. Therefore we decided on a release schedule that is not directly tied to OpenStack's, and we'll release more often than the rest of the old OpenStack components, probably once every two months or so.

What's coming next is a close integration with Ceilometer (e.g. moving the dispatcher code from Gnocchi to Ceilometer) and probably more features as we have more requests from our users. We are also exploring different backends such as InfluxDB (storage) or MongoDB (indexer).

Stay tuned, and happy hacking!

by Julien Danjou at April 21, 2015 03:00 PM
