October 30, 2014

Opensource.com

Contributing effectively to OpenStack's Neutron project

Kyle Mestery is an open source cloud computing architect working on the Neutron project in OpenStack, where he serves as PTL (program team lead). Neutron is the networking component of OpenStack, which handles the complex task of connecting machines in a virtual environment. Here's what he's working on now, ahead of his talk at the OpenStack Summit in Paris this year.

by Jen Wike Huger at October 30, 2014 11:00 AM

Sébastien Han

OpenStack Glance: allow user to create public images

Since Juno, it is no longer possible for a regular user to create public images or to make one of their images/snapshots public. Even though this new Glance policy is a good initiative, let's see how we can get the old behavior back.

As an administrator, edit the file /etc/glance/policy.json and change the following line:

"publicize_image": "role:admin",

With:

"publicize_image": "",

Then restart glance:

$ sudo glance-control all restart
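
To confirm that the old behavior is back, a regular (non-admin) user should now be able to publish one of their own images. A quick sanity check, assuming the Juno-era glance v1 client:

$ glance image-update --is-public True <image-id>
$ glance image-show <image-id> | grep is_public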

That’s all!

October 30, 2014 09:38 AM

hastexo

Automated Deployment of a Highly Available OpenStack Cloud

On Monday, November 3, Adam Spiers (from SUSE) and Florian are doing an OpenStack Summit tutorial about the Automated Deployment of a Highly Available OpenStack Cloud (16:20, Room 241). There was a similar tutorial in Atlanta, and from that a few lessons have been learned about making virtual environments available to attendees.

First up: it's up to you to decide whether you want to follow along interactively during the tutorial. You will take just as much out of the tutorial if you just watch while at the Summit, and then reproduce the interactive parts from the comfort of your home or office. However, if you do choose to follow along, please come prepared.

The entire tutorial runs on Vagrant with VirtualBox. Adam has put together an excellent list of prerequisites for your perusal. Please follow this list closely, and make sure you download and add the two Vagrant boxes suse/cloud4-admin and suse/sles11sp3 before you arrive at the conference. Please don't download those on the conference Wi-Fi; if you do so, you'll only end up annoying your network neighbors.
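
If you want to get a head start, fetching the boxes is a one-liner each (a sketch, assuming the boxes are published under those names on Vagrant Cloud; follow Adam's prerequisites list if they are distributed differently):

$ vagrant box add suse/cloud4-admin
$ vagrant box add suse/sles11sp3
$ vagrant box list    # confirm both boxes show up before you travel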

There is also an equally excellent walk-through for setup of the virtual environment itself, which we'll cover during the tutorial -- but it obviously doesn't hurt to familiarize yourself with the steps ahead of time.

See you on Monday!

read more

by florian at October 30, 2014 08:57 AM

October 29, 2014

SUSE Conversations

SUSE in the Spotlight at the Paris OpenStack Summit

The OpenStack universe will be expanding to the City of Lights from November 3rd – November 7th for the OpenStack Kilo Summit. It is a great time for those interested in OpenStack to learn about the latest features in the most recent OpenStack Juno release, hear how the OpenStack vendor ecosystem is integrating with OpenStack …

+read more

The post SUSE in the Spotlight at the Paris OpenStack Summit appeared first on Conversations.

by Douglas Jarvis at October 29, 2014 08:17 PM

SUSE OpenStack Summit Partner Theater

By delivering unique solutions that make it easier to deploy, manage and get more value out of running an OpenStack cloud, the collaborative OpenStack partner ecosystem is driving enterprise adoption.  In recognition of the importance of the ecosystem in extending the capabilities of OpenStack for enterprise deployments, SUSE is hosting a Partner Theater in our …

+read more

The post SUSE OpenStack Summit Partner Theater appeared first on Conversations.

by Douglas Jarvis at October 29, 2014 08:14 PM

IBM OpenStack Team

You can trust an OpenStack cloud — here’s why

By Matt Rutkowski

Can I trust an OpenStack cloud?

This question is probably not one that many of us developing in or on OpenStack ask ourselves or even consider. With the recent announcement of OpenStack Juno, the milestone 10th release, and considering all of its compelling new features culled from existing and new components, it seems that we can assume wide customer adoption of OpenStack to be a foregone conclusion. However, the reality is that a large number of enterprise customers face challenges moving to an OpenStack cloud.

These challenges have nothing to do with OpenStack lacking infrastructure as a service (IaaS) functions such as exposing vCPU topologies to guest images in Nova or supporting Backup and Restore for Couchbase in Trove. Instead, customers face the challenge of auditing and monitoring their valuable workloads and data in the cloud in accordance with their strict corporate, industry or regional policies.

(Related: A guide to the OpenStack Juno release)

Yes, you can trust OpenStack by using its built-in, standardized CADF auditing

I am here to let you know that since the Havana release, the OpenStack community has been continuously adding auditing support to OpenStack using an open standard called Cloud Auditing Data Federation (CADF), which is developed by the Distributed Management Task Force (DMTF). The analysis of CADF audit records can be an effective way to help customers prove their various policies are being enforced within OpenStack.

The CADF model answers the critical questions about any activity or event that might happen in any software service (in OpenStack or at other layers) in a normative manner using CADF’s seven Ws of auditing:

(Figure: the CADF auditing questions)

The ability to audit OpenStack started in the Havana release

During the Havana release, a Python implementation of the CADF standard (pyCADF) was created and introduced as both a library for components that wished to generate CADF-formatted audit events directly and also as a pluggable Web Server Gateway Interface (WSGI) middleware filter that uses the library. This audit filter can be added to the application programming interface (API) processing pipeline of any OpenStack component, including Nova, Neutron, Cinder and so forth.

Using Nova as an example, the following diagram shows how the filter gets inserted into the pipeline (after being included in the service’s api-paste.ini file):

(Diagram: Nova server API pipeline with the CADF audit filter)

The filter was initially designed and tested to handle the majority of OpenStack v2 APIs during the Havana release, but was primarily tested on the most popular Nova APIs.
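
For reference, wiring the filter in is a matter of defining it in the service's api-paste.ini and adding it to the API pipeline. The following is only a rough sketch, since the exact factory path and audit map location vary by release (the middleware shipped in pyCADF during this period and later moved to keystonemiddleware):

[filter:audit]
paste.filter_factory = pycadf.middleware.audit:AuditMiddleware.factory
audit_map_file = /etc/nova/api_audit_map.conf

The audit filter is then added to the composite API pipeline, typically right after the authentication filters and before the application itself.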

Icehouse and Juno releases see CADF auditing support increase

The auditing goal of the Icehouse release was to expand support and testing of the AuditMiddleware filter to formally include Glance, Cinder, Neutron and Swift, and to create customizable mapping files to give component owners better control of how OpenStack resource names get mapped to the CADF standard's normalized names. We also expanded the filter to handle nearly every Nova API, even ones that essentially "wrapped" other APIs in the body of the HTTP message. In recognition of its expanding support, the pyCADF library and filter were promoted to a core library and updated to use Oslo Messaging.

Also during Icehouse, I was pleased that Keystone core developers took notice of CADF and its potential and worked to directly emit CADF events for every user authentication request coming from every OpenStack service that uses Keystone. Now we had layered proof from CADF audit event records to show exactly what tenant administrators and users were doing at both the API and access control levels. At the Juno summit we were able to demonstrate Icehouse services sending CADF audit events to IBM Security QRadar SIEM (see this video). The standing-room-only audience of this general session was very receptive to seeing real customer policies and rules being enforced using CADF. We set up rules to alert admins in real time when strange or anomalous uses of an OpenStack cloud were happening.

In the Juno release, and with the incredible support of awesome Keystone developers such as Brad Topol, Brant Knudson, Dolph Mathews and Steve Martinelli, Keystone not only took ownership of the pyCADF library in order to assure quality of the codebase, but also expanded its CADF support. Now, Keystone emits CADF events that allow customers to track both federated identity management (FIM) authentication and role assignment (such as tenant or group) activities. This means that customers can monitor how their users are authenticated with any external identity provider (IdP) and can see all role-based access control (RBAC) permissions being granted to them as well.

What does the future hold for CADF in OpenStack?

As we prepare for the upcoming OpenStack Kilo summit, we are looking to continue getting the message out to new components that may not be aware of pyCADF or the CADF AuditMiddleware filter. We want these components to assist us in creating a custom mapping file that helps the filter accurately interpret their APIs and normalize the names of resources they manage to CADF standardized semantics that can be effectively analyzed. I have heard that Keystone is expanding its support to (security) policy management activities and that CADF can again be used to audit any new APIs or activities in this area.

CADF for real-time operational intelligence

From its current use in OpenStack, you may think CADF is all about traditional security auditing. Of course it is, but CADF is designed for much more. It is designed for handling operational metric data and real-time measurements from the actual cloud data center hardware and software.

This means that the actual compute hosts, networks and storage devices that underlie and fulfill the OpenStack services can generate data for OpenStack to audit workloads and data performance to assure they adhere to service level agreements (SLAs). Additionally, this means CADF metric data could be used to immediately detect when servers, networks or storage assigned to customer workloads underperform or fail and allow OpenStack clouds to take automatic corrective actions to scale or failover.

To this end, a new project called Monasca—an acronym for “monitoring at scale”—will be discussed at the upcoming summit and the sponsoring companies (HP, Rackspace and IBM) are looking at supporting CADF events as a normative format for such use cases.

Indeed, it is an exciting time when we can effectively apply open standards such as CADF to deliver to customers an enterprise-worthy OpenStack platform that they can trust!

If you want to know more about how to use CADF in OpenStack to produce data that can be used to audit or analyze your customers’ security, operational or business policies, feel free to comment below or look for me at the upcoming OpenStack Summit in Paris. During the Design Summit on Thursday and Friday, you can likely find me hanging out in Heat or Heat-Translator sessions.

The post You can trust an OpenStack cloud — here’s why appeared first on Thoughts on Cloud.

by IBM Cloud Staff at October 29, 2014 03:48 PM

Tesora Corp

OpenStack Paris Summit Could be a Turning Point for Project

We are about a week away from the OpenStack Summit in Paris, and it's worth doing a little soul searching before the community gathers in the City of Lights. The project has arrived, hasn't it? When you host your world party in Paris it would seem so, but as you celebrate the project, it's worth asking where it's going from here.

More than four years after Rackspace and NASA launched OpenStack as a way to compete with the growing power of Amazon Web Services, Google Cloud and Microsoft Azure, it's suddenly become quite fashionable to flash your OpenStack credentials or even to buy a hot young OpenStack-powered startup. Hey all the big kids are doing it, whether we are talking HP, IBM, Cisco, EMC or Red Hat. It seems the cloud is all the rage and the way to get you there is via OpenStack, but there is a danger in being the popular open source project of the moment.

As Dan Kusnetzky pointed out in a recent article, All Aboard The OpenStack Train, when everyone says they support something, maybe we should start to at least wonder whether all of these vendors can be as committed to the project as they say they are. As Kusnetzky wrote, they could be in it for their own reasons:

"Suppliers want to be, in the words of Robert Heinlein, the bride at every wedding and the corpse at every funeral. They want to always be in the hearts and minds of their customer base and be seen as a good solution to just about every IT problem. One way to do this is to be seen as an important member of just about every movement or trend."

Every vendor has its own agenda and it's worth remembering that, but at the same time many of them are making substantial contributions to the body of work at the community level. As a project begins to grow and develop, if you want it to be a real success those big suppliers need to be involved.

Up until now the project has belonged, for the most part, to the earliest adopters and geeks, but over the last year we have seen a broadening of the base as more people and more big companies take notice, whatever their reasons.

Meanwhile, just last week Mirantis got $100M in Series B funding. That's the biggest round ever for an open source company, and it went to a company that is offering a packaged version of OpenStack. However you feel about any individual vendor, it is a huge stamp of approval and validation for the OpenStack project, and it shows that investors believe in the project too.

As you gather next week in Paris, you have a lot to be proud of as a community regarding just how far OpenStack has come. It has clearly grown in leaps and bounds in a hurry, but it's the next year or two that is going to really define just what this project is going to be --and who's going to lead it.

Photo by Flickr user Emax-photo. Used under CC BY-SA 2.0 license.

by 693 at October 29, 2014 03:00 PM

Red Hat Stack

Red Hat, Nuage Networks, OpenStack, and KISS

Nuage Networks logo

RHOSCIPN_logo_small

The reality is that IT is serious money – IDC estimates that the Internet of Things (IoT) market alone will hit $7.1 trillion by 2020![1] But a lot of that money is due to the IT industry practice of “lock-in” – trapping a customer into a proprietary technology and then charging high costs, in some instances up to 10X, for every component. For some reason, customers object to having to pick one vendor's approach, being subject to limitations – whether technological or otherwise – paying high markups for every incremental extension, and then having to pay high switching costs for the next solution at end of life in five years or less.

As a consequence, many of those customers are taking a good, hard look at open source software (OSS) that can minimize vendor lock-in. OSS communities also encourage the development of software solutions that run on industry-standard and reasonably priced hardware. In particular, OpenStack has been well received by businesses of all sizes, and the OpenStack community is growing by leaps-and-bounds with 625% more participating developers and 307% more business members as of its fourth birthday![2] Since OpenStack can orchestrate operations for an entire datacenter, it offers a vision of the future where  customers are free from server, network, and storage lock-in.

However, legacy naysayers have always articulated three catches with OSS:
1)    Making it enterprise-grade in terms of scalability, reliability, and security
2)    Ensuring that the code base grows over time so that others can move the ball forward
3)    Getting enterprise-class support for the code base

That's where Red Hat and Nuage Networks come in. By working together, each company relies on the other's pedigree as a leader in its respective data center functions – server, storage and middleware for Red Hat, and networking for Nuage. The net result is a true enterprise-grade and integrated OpenStack solution designed to work for any cloud.

Both companies have well-developed reputations for enabling real scalability of their technologies, up to service provider levels. Scalability at that level is a critical requirement for truly enabling cloud environments for multi-national enterprises.

Both companies are also strong contributors to OpenStack. Red Hat products begin in the upstream communities, building from an open source technology landscape that is ever changing. Being a top code contributor to OpenStack helps Red Hat best represent our customers’ needs and requirements with regards to the various OpenStack components, as well as the Linux Kernel, the KVM hypervisor, Ceph storage, and other dependencies. All combined, Red Hat is uniquely positioned as a community contributor and developer of open technologies to provide maximum value to our OpenStack customers.

For its part, Nuage Networks has significantly increased its contributions to the OpenStack Juno release, with more to come…

Lastly and most importantly, both companies' businesses are based on providing support. Of all the issues above, support is probably the top concern for enterprises as they start adopting OSS platforms. Red Hat and Nuage Networks provide world-class, 24×7 support for their products. Hence, the KISS rule for OpenStack is "Keep It Superbly Supported"!

Stay tuned for more information about Red Hat and Nuage Networks in the very near future.

[1] IDC, “The Internet of Things Moves Beyond the Buzz: Worldwide Market Forecast to Exceed $7 Trillion by 2020, IDC Says”, press release, 6/3/2014, at: http://www.idc.com/getdoc.jsp?containerId=prUS24903114

[2] http://thoughtsoncloud.com/2014/07/openstack-anniversary-birthday/

 

by Scott Drennan at October 29, 2014 01:00 PM

Percona

MySQL and Openstack deep dive talk at OpenStack Paris Summit (and more!)

I will present a benchmarking talk next week (Nov. 4) at the OpenStack Paris Summit with Jay Pipes from Mirantis. In order to be able to talk about benchmarking, we had to be able to set up and tear down OpenStack environments really quickly. For the benchmarks, we are using a deployment on AWS (ironically) where the instances aren't actually started and the tenant network is not reachable, but all the backend operations still happen.

The first performance bottleneck we hit wasn't at the MySQL level. We used Rally to benchmark the environment, booting 1,000 fake instances as a first pass.

The first bottleneck we saw was neutron-server completely saturating a single CPU core. By default, neutron does everything in a single process. After configuring the API workers and the RPC workers, performance became significantly better.

api_workers = 64
rpc_workers = 32
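
Both options go in the [DEFAULT] section of /etc/neutron/neutron.conf (the worker counts above are what we used; size them to your core count), and neutron-server has to be restarted to pick them up:

$ sudo service neutron-server restart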

Before adding the options:

u'runner': {u'concurrency': 24, u'times': 1000, u'type': u'constant'}}
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action           | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| nova.boot_server | 4.125     | 9.336     | 15.547    | 11.795        | 12.362        | 100.0%  | 1000  |
| total            | 4.126     | 9.336     | 15.547    | 11.795        | 12.362        | 100.0%  | 1000  |
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
Whole scenario time without context preparation:  391.359671831

After adding the options:

u'runner': {u'concurrency': 24, u'times': 1000, u'type': u'constant'}}
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action           | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| nova.boot_server | 2.821     | 6.958     | 36.826    | 8.165         | 10.49         | 100.0%  | 1000  |
| total            | 2.821     | 6.958     | 36.826    | 8.165         | 10.49         | 100.0%  | 1000  |
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
Whole scenario time without context preparation:  292.163493156

Stop by our talk at the OpenStack Paris Summit for more details!

In addition to our talk, Percona has two additional speakers at the OpenStack Paris Summit. George Lorch, Percona software engineer, will speak with Vipul Sabhaya of the HP Cloud Platform Services team on “Percona Server Features for OpenStack and Trove Ops.” Tushar Katarki, Percona director of product management, will present a vBrownBag Tech Talk entitled “MySQL High Availability Options for OpenStack.” Percona is exhibiting at the OpenStack Paris Summit conference, as well – stop by booth E20 and say hello!

At Percona, we’re pleased to see the adoption of our open source software by the OpenStack community and we are working actively to develop more solutions for OpenStack users. We also provide Consulting assistance to organizations that are adopting OpenStack internally or are creating commercial services on top of OpenStack.

We are also pleased to introduce the first annual OpenStack Live, a conference focused on OpenStack and Trove, which is April 13 & 14, 2015 in Santa Clara, California. The call for speaking proposals is now open for submissions which will be reviewed by our OpenStack Live Conference Committee (including me!).

The post MySQL and Openstack deep dive talk at OpenStack Paris Summit (and more!) appeared first on MySQL Performance Blog.

by Peter Boros at October 29, 2014 12:59 PM

Tesora Corp

Short Stack: EMC's Hybrid Cloud Play, Canonical's OpenStack offering and Kilo specs

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Here we go with this week's links:

EMC made a huge hybrid cloud announcement yesterday: it purchased three cloud companies, introduced a hybrid cloud product, and reorganized the company around cloud delivery. Part of that announcement was the Cloudscaling purchase, which gives EMC a way to deliver a hybrid OpenStack Infrastructure as a Service play that could include its own VMware, OpenStack and Microsoft integration.
 
As OpenStack gains in popularity, it's not just the main OS distributions getting attention from investors. Some of the supporting pieces are getting into the act too, and this week SwiftStack, which provides storage services for OpenStack, got $16M in Series B funding to continue its efforts.
 
DreamHost throws its hat into the ring with an OpenStack public cloud offering, but instead of going after the enterprise as many of the bigger players are, DreamHost hopes to find its niche with developers who are looking to learn about OpenStack and deliver solutions to SMBs.
 
Specs for Kilo - Stillhq
In case you haven't caught on yet, OpenStack versions are making their way through the alphabet, and after Icehouse and Juno the next version is going to be called Kilo. One writer has jotted down a handful of thoughts ahead of next week's OpenStack Summit for people to consider for this version, which is scheduled for release in April 2015.
 
Since everyone is getting into OpenStack, why not Ubuntu too? While many people think of Ubuntu as a Linux desktop distro, Canonical also has a server product and is now spinning out an OpenStack version too. It's worth noting that they believe they will be in the conversation as one of the players, along with Red Hat and HP, who understand how this all comes together from a hardware, software and OS perspective.

by 693 at October 29, 2014 11:32 AM

Opensource.com

What software defined storage means for OpenStack

Recently, I had the opportunity to speak with Sage Weil, founder and chief architect of Ceph and a speaker at the upcoming OpenStack Summit in Paris.

I seized the chance to ask him a few questions about his talk and some of the things that matter most to him.

by dbhurley at October 29, 2014 11:00 AM

Aaron Rosen

Pushing patches to gerrit over https

I've been doing a good amount of traveling lately and have unfortunately found myself on several networks that only allow outbound HTTP(S) and DNS traffic. This makes it a little tricky to push patches to Gerrit (or review.openstack.org) unless you're able to do it over HTTP(S). Luckily, Gerrit supports this! To do this, first stage the commits you want to push like you normally would. Then navigate to Gerrit -> Settings -> HTTP Password and generate an HTTP password if you don't already have one.

 

Next, push your commit(s) with the following command:

$ git push https://review.openstack.org/<path-to-repo> HEAD:refs/for/<branch>

For example:

$ git push https://review.openstack.org/stackforge/python-congressclient HEAD:refs/for/master
Username for 'https://review.openstack.org': arosen
Password for 'https://arosen@review.openstack.org': 
Counting objects: 14, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 429 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Resolving deltas: 100% (2/2)
remote: Processing changes: new: 1, refs: 1, done    
remote: 
remote: New Changes:
remote:   https://review.openstack.org/131669
remote: 
To https://review.openstack.org/stackforge/python-congressclient
 * [new branch]      HEAD -> refs/for/master
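
As an optional convenience (not covered in the original output above), you can record the HTTPS URL as a named remote so future pushes are shorter; the remote name gerrit here is just an example:

$ git remote add gerrit https://review.openstack.org/stackforge/python-congressclient
$ git push gerrit HEAD:refs/for/master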

One important security note: you want to use https here; otherwise your password will be sent in clear text over the wire. If you do this by mistake, Gerrit lets you easily generate a new password by clicking the generate password button. Hopefully someone finds this helpful!

by arosen at October 29, 2014 08:58 AM

Flavio Percoco

Hiding unnecessary complexity

This post does not represent a strong opinion but something I've been thinking about for a bit. The content could be completely wrong or it could even make some sense. Regardless, I'd like to throw it out there and hopefully gather some feedback from people interested in this topic.

Before I get into the details, I'd like to share why I care. Since I started programming, I've had the opportunity to work with experienced and non experienced folks in the field. This allowed me to learn from others the things I needed and to teach others the things they wanted to learn that I knew already. Lately, I've dedicated way more time to teaching others and welcoming new people to our field. Whether they already had some experience or not is not relevant. What is indeed relevant, though, is that there's something that needed to be taught, which required a base knowledge to exist.

As silly as it may sound, I believe the process of learning, or simply the steps we follow to acquire new knowledge, can be represented in a directed graph. We can't learn everything at once, we must follow an order. When we want to learn something, we need to start somewhere and dig into the topic of our interest one step at a time.

The thing I've been questioning lately is how deep someone needs to go to consider something learned. When does the required knowledge to do/learn X end? Furthermore, I'm most interested in what we - as developers or creators of these abstractions that will then be consumed - can do to define this.

Learning new things is fascinating, at least for me. When I'm reading about a topic I know nothing about, I'd probably read until I feel satisfied with what I've discovered whereas when I'm digging into something I need to know to do something else, I'd probably read until I hit that a-ha moment and I feel I know enough to complete my task. Whether I'll keep digging afterwards or not depends on how interesting I think the topic is. However, the important bit here is that I'll focus on what I need to know and I leave everything else aside.

I believe the same thing happens when we're consuming an API - regardless of whether it's a library, a RESTful API, an RPC API, etc. We'll read the documentation - or just the API - and then we'll start using it. There's no need to read how it was implemented and, hopefully, no further reading will be necessary either. If we know enough and/or the API is simple enough - in terms of how it exposes the internal implementation, vocabulary, patterns, etc. - we won't need to dig into any other topics that we may not know already.

Whenever we are writing an API, we tend to either expose too many things or too few things. Finding the right balance between the things that should be kept private and the ones that should be made public is a never-ending crusade. Moreover, keeping the implementation simple and yet flexible becomes harder as we move on writing the API. Should we expose all the underlying context? What is the feeling a consumer of this API should have?

By now, you are probably thinking that I just went nuts and this is all nonsense and you're probably right but I'll ignore that and I'll keep going. Let me try to explain what I mean by using some, hopefully more realistic, examples.

Imagine you're writing an API for a messaging system - you saw this example coming, didn't you? - that is supposed to be simple, intuitive and yet powerful in terms of features and semantics. Now, before thinking about the API you should think about the things you want this service to support. As a full featured messaging service, you probably want it to support several messaging patterns. For the sake of this post, let's make a short list:

  • Producer/Consumer
  • Publish/Subscribe

These are the two messaging patterns - probably the most common ones - that you'd like to have support for in your API. Now, think about how you'd implement them.

For the Producer/Consumer case you'd probably expose endpoints that allow your users to post messages and get messages. So far so good; it's quite simple and straightforward. To make things a little bit more complicated, let's say you'd like to support grouping for messages. That is, you'd like to provide a simple way to keep a set of messages separated from another set of messages. A very simple way to do that is by supporting the concept of queues. However, a queue is probably a more complex type of resource which implicitly brings some properties into your system. For example, by adding queues to your API you're implicitly saying that messages have an order, therefore it's possible to walk through them - pagination, if you will - and these messages cannot - or shouldn't - be accessed randomly. You probably know all this, which makes the implementation quite simple and intuitive for you, but does the consumer of the API know this? Will consuming the API be as simple and intuitive as implementing it was for you? Should the consumer actually care about what a queue is? Keep in mind the only thing you wanted to add is grouping for messages.

You may argue that you could use lightweight queues or just call them something else to avoid bringing all these properties in. You could, for example, call them topics or even just groups. The downside of doing this is that you'd probably be reinventing a concept that already exists and assigning to it a different name and custom properties. Nothing wrong with that, I guess.

You've a choice to make now. Are you going to expose queues through the API for what they are? Or are you going to expose them in a simpler way and keep them as queues internally? Again, should your users actually care? What is it that they really need to know to use your API?

As far as your user is concerned, the important bit of your API is that messages can be grouped, posting messages is a matter of sending data to your server and getting them is a matter of asking for messages. Nonetheless, many messaging services with support for queues would require the user to have a queue instance where messages should be posted but again: should users actually care?

Would it be better for your API to be something like:

MyClient.Queue('bucket').post('this is my message')

or would it be simpler and enough to be something like:

MyClient.post('this is my message', group='bucket')

See the difference? Am I finally making a point? Leaving aside CS and OOP technicalities, really, should the final user care?

Let's move on to the second messaging pattern we would like to have support for: publish/subscribe. At this point, you've some things already implemented that you could re-use. For instance, you already have a way to publish messages, and the only thing you have to figure out for the publishing part of the pattern is how to route the message being published to the right class. This shouldn't be hard to implement; the thing to resolve is how to expose it through the API. Should the user know this is a different messaging pattern? Should the user actually know that this is a publisher and that messages are going to be routed once they hit the server? Is there a way all these concepts can be hidden from the user?

What about the subscriber? The simplest form of subscription for a messaging API is one that does not require a connection to persist. That is, you expose an API that allows users to subscribe an external endpoint - HTTP, APN, etc. - that will receive messages as they're pushed by the messaging service.

You could implement the subscription model by exposing a subscribe endpoint that users would call to register the above-mentioned receivers. Again, should this subscriber concept be hidden from the user? What about asking the user where messages published to group G should be forwarded to instead of asking the users to register subscribers for the publish/subscribe pattern?
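
To make the contrast concrete, in the same illustrative style as the earlier snippets (these method names are purely hypothetical), the two options might look something like:

MyClient.Subscriber('bucket').register('http://example.org/receiver')

versus:

MyClient.forward('bucket', to='http://example.org/receiver')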

Think about how email - I hate myself for bringing email up as a comparison - works. You've an inbox where all your emails are organized. Your inbox will normally be presented as a list. You can also send an email to some user - or group of users - and they'll receive that email as you receive others' emails. In addition to this, your email service also provides a way to forward email, filter email and re-organize it. Do you see where I'm going with this? Have you ever dug into how your email service works? Have you ever wondered how all these things are implemented server side? Is your email provider using a queue or just a simple database? You may have wondered all these things but, were they essential for you to understand how to use your email client? I'd bet they weren't.

Does the above make any sense? Depending on how you read the above it may seem like a silly and unnecessary way of reinventing concepts, theories and things that already exist or it may be a way to just ask the users to know what they really need to know to use your service as opposed to forcing them to dig into things they don't really need - or even care about. The more you adapt your service - or API - to what the user is expected to know, the easier it'll be for them to actually use it.

If you got to this point, I'm impressed. I'm kinda feeling I may be really going nuts but I think this post has got me to sort of a fair conclusion and probably an open question.

As much as purists may hate this, I think there's no need to force 'knowledge' into users just for the sake of it. People curious enough will dig into problems, concepts, implementations, etc. The rest of the people will do what you expect them to do, they'll use your API - for better or for worse - and they shouldn't care about the underlying implementation, theories or complexities. All these things should be hidden from the user.

Think about newcomers and how difficult it could be for a person not familiar at all with messaging systems to consume a library that requires Producers and Consumers to be instantiated separately. Think about this newcomer trying to understand why there are producers, consumers, publishers and subscribers. What if this newcomer just wanted to send a message?

As a final note, I'd probably add that the whole point here is not to use silly names for every well-known concept just to make lazy people happy. If that were the case, we wouldn't have sets and everything would be an array with random properties attached to it. The point being made here is that we tend to expose through our APIs lots of unnecessary theories and concepts that users couldn't care less about. When working on the APIs our users will consume, we should probably ask ourselves how likely it is for them to know all this already and how we can hide unnecessary concepts from them without preventing them from digging deeper into it.

Although all this may sound like "Writing APIs 101", I don't believe it is as obvious for everyone as it seems.

by FlaPer87 at October 29, 2014 12:29 AM

OpenStack Blog

2014 Summer of Interns at OpenStack

OpenStack has been a regular participant in community-led internship programs, such as the FOSS Outreach Program and, for the first time this year, the Google Summer of Code. Our wonderful mentors and coordinators have made it possible for OpenStack to have some great interns over the (northern hemisphere) summer. Julie Pichon has helped collect thoughts from the interns. Here is what they have to say about their experience:

Artem Shepelev worked on a scheduler solution based on the non-compute metrics: Working as a part of Google Summer of Code program was very interesting and useful for me. I liked the experience of working with a real project with all its difficulty, size and people involved with it. (Mentors: Yathiraj Udupi, Debojyoti Dutta)

Tzanetos Balitsaris worked on measuring the performance of the deployed Virtual Machines on OpenStack: The experience was really good. Of course one has to sacrifice some things over the summer, but at the end of the day, you have the feeling that it was worth it. (Mentors: Boris Pavlovic and Mikhail Dubov).

Rishabh Kumar: I worked on improving the benchmarking context mechanism in the Rally project. It was a really awesome experience to be part of such a vibrant and diverse community. Getting to know people from all sorts of geographies and the amazing things they are doing humbled me a lot. The code reviews were particularly good with so many people giving their reviews which made me a better programmer. (Mentors: Boris Pavlovic and Mikhail Dubov).

Prashanth Raghu: GSoC was a great opportunity for me to get started with learning about contributing to open source. During my project I was greatly backed by the community which helped me a lot in finally getting my project successfully shipped into the OpenStack Zaqar repository. It was great fun interacting with the team and I would like to thank all those who supported me in this wonderful experience. (Mentor: Alejandro Cabrera).

Ana: I am very grateful for being given the chance to participate in OPW. I had a really positive experience thanks to an amazing mentor, Eoghan Glynn, who explained everything clearly and was enthusiastic about the project and was patient with my many mistakes. I was working on Gnocchi, a new API for Ceilometer; my project was to add moving statistics to the available aggregation functionality. (Mentor: Eoghan Glynn).

Victoria: During my GSoC internship in OpenStack I researched the feasibility of adding AMQP 1.0 as a storage backend for the Messaging and Notifications Service (Zaqar). Since this was not possible, I changed the direction of my research to the transport layer and worked
on creating a POC for it. (Mentor: Flavio Percoco).

Masaru: Awesome experience, which was more than I expected at the beginning of my project about the VMware API! Also, great and considerate hackers there; I'm grateful to have participated in GSoC 2014 as one of the students from the OpenStack Foundation. (Mentor: Mr. Arnaud Legendre).

Nataliia: It was a fascinating opportunity. During the internship I worked with the Zaqar team, mainly on Python 3 support, but also with developing api-v1.1. Professionally I learnt a lot, about Python 3 of course, but also from reading and participating in discussions of other interns: about Redis and AMQP and how to do proper benchmarking. Socially-wise: There was no feeling of being “an intern”. The team considers all interns as teammates and treats them equally as any other developer. Anyone could (and actually can — why not?) actively participate in discussions and in making decisions. After finishing it, I helped with other tasks, in particular api-v1.1-response-document-changes. (Mentors: Flavio Percoco, Kurt Griffiths).

OpenStack doesn’t plan on stopping there and is already preparing for the next round of the FOSS Outreach Program, this time scheduled during the southern hemisphere summer round starting this December. Stay tuned for more announcements.

by Stefano Maffulli at October 29, 2014 12:14 AM

October 28, 2014

IBM OpenStack Team

IBM’s OpenStack offerings continue to grow

In March 2013, IBM committed to open source and open standards in our cloud services and software. Since that time we have invested our money and resources in products that support OpenStack and other key open source projects like Cloud Foundry and Docker.

Today, we take another leap forward by welcoming IBM Cloud OpenStack Services and IBM Object Storage for Bluemix to our growing family of OpenStack-based offerings. The breadth and depth of IBM’s OpenStack offerings is second to none in the industry. I’m proud to give you an inside peek at this latest announcement as well as provide a refresher on the full portfolio of IBM offerings currently supporting OpenStack.

OpenStack and IBM SoftLayer

Today's announcement is significant as it joins the industry-leading open source IaaS management software, OpenStack, with the industry-leading IaaS platform, SoftLayer, providing our customers with a flexible and reliable offering managed through an intuitive self-service portal. First released as a limited availability offering on OpenStack's marketplace, IBM Cloud OpenStack Services provides a managed private cloud environment running on SoftLayer dedicated bare metal resources, ensuring enhanced performance, security and agility.

This offering also delivers hybrid cloud at its best by enabling interoperability between existing IT systems and off-premises cloud workloads. Today’s announcement further demonstrates IBM’s commitment to bring OpenStack to the enterprise and assist clients in finding additional value in an open standards-based cloud approach.

IBM is also introducing object storage as a service on IBM Bluemix, IBM’s premier open cloud development platform. IBM Object Storage for Bluemix uses SoftLayer Object Storage, which is based on the OpenStack Swift project, to manage your data. Object Storage for Bluemix gives developers the means to use IBM Bluemix to create applications using object storage as a service to satisfy their object storage needs. This solution is more scalable than traditional file system storage. It requires less metadata than file systems to store and access files, significantly reducing metadata overhead. The result is a storage facility that is (almost) endlessly scalable.

Meet the rest of the IBM Cloud OpenStack family

When I reflect on IBM Cloud support for OpenStack, it’s quite an impressive list:

IBM Cloud Orchestrator: One of the first IBM Cloud offerings based on OpenStack, provides comprehensive automation of cloud services, supporting infrastructure as well as application and platform services.

IBM Cloud Manager with OpenStack: A lightweight cloud offering that provides a self-service portal for workload provisioning, virtualized image management, monitoring, security, automation, basic metering and integrated platform management.

IBM PowerVC: Based on OpenStack technology, delivers a comprehensive virtualization management offering facilitating virtual machine setup and management. 

IBM® PureApplication™ System: An integrated, highly-scalable system, providing an application-centric computing model in a cloud environment

IBM XIV Storage System: Incorporating data redundancy for consistent and predictable I/O performance that is always load balanced

IBM Storwize Family (including SAN Volume Controller): A virtualized midrange disk system that includes technologies that both complement and enhance virtual environments

IBM DS8000: An enterprise disk system long recognized for its ability to support the most demanding mission-critical workloads and applications

IBM Elastic Storage, based on IBM General Parallel File System (GPFS) technology: For a single, unified, scale-out data plane for file-based storage across private, public, or hybrid OpenStack cloud environments

IBM Tivoli Storage Manager (TSM): A data protection platform that gives enterprises a single point of control and administration for backup and recovery

Learn more about IBM Open Cloud Technologies

Yes, today is quite a significant day as IBM extends an already large portfolio of IBM Cloud offerings based on OpenStack. I encourage you to follow the links to these offerings to learn more. And, come visit us at the IBM Booth at the upcoming OpenStack Summit to see demos of these offerings and more. To learn more about IBM’s open cloud architecture, follow me @angelluisdiaz or check out my previous pontifications on open technologies on Thoughts on Cloud.

Video: http://www.youtube.com/embed/nqZ0NBg9J28

The post IBM’s OpenStack offerings continue to grow appeared first on Thoughts on Cloud.

by Angel Luis Diaz at October 28, 2014 06:44 PM

OpenStack Blog

OpenStack at LinuxCon / CloudOpen Europe

The Foundation had a great time meeting friends old and new as a Silver sponsor of this year's LinuxCon / CloudOpen Europe in Düsseldorf, Germany on October 13-15. A huge thank you to our Community heroes Tomasz Napierala, Oded Nahum, Christian Berendt, Adalberto Medeiros, Marton Kiss, Kamil Swiatkowski, and Jamie Hannaford who helped us staff our busy booth alongside Foundation Community Manager Stefano Maffulli and Marketing Associate Shari Mahrdt. Our swag (150 T-Shirts and Stickers) was gone by the morning of day 2, and we met a great many visitors who were very interested in OpenStack.

This year’s talks showed how leaders in various different industries are using the power of open source and collaboration for innovation and advancement in technology.

 

Highlights included:

  • VIP Reception at the oldest restaurant in Düsseldorf, the “Brauerei Zum Schiffchen”, which was open to speakers, sponsors and media, offered traditional German food and made sure every single guest had a glass of beer in their hand at all times :)
  • The closing party for all attendees at the “Nachtresidenz”, an architecturally unique club in Düsseldorf.

 

OpenStack related speaking sessions included:

 


 

(Photos by Tomasz Napierala)

 

by Shari Mahrdt at October 28, 2014 05:07 PM

Mirantis

Yes! You CAN upgrade OpenStack. Here’s how.

The need for an OpenStack cloud upgrade strategy has been around since OpenStack’s second release, Bexar, meant there was something to upgrade to. For many companies and teams, the complexity of upgrading has meant it was easier to simply migrate workloads from the old cloud to a new cloud rather than directly upgrading at all. Even today this is an option, of course, but creating a whole new cloud alongside the old one means essentially doubling your hardware and management investment.

Unfortunately, there is still no ideal and universal approach that guarantees a smooth upgrade experience. While the public OpenStack documentation includes some common suggestions for the upgrade process, it involves a great deal of manual work and customization, and even then special cases frequently derail the “standard” experience.

Here at Mirantis, we’ve faced this problem on multiple occasions at long-term services customers, so we decided it was time to develop an approach that will help us to bring the upgrade process to the next level, making it faster, more reliable, smoother, and more efficient.

We're still working on creating an automated procedure that we can truly call "universal", but because the whole idea of OpenStack is to be open and make things better for everyone, we would like to share some of our current thoughts with the community.

Upgrade approach overview

To start, the Mirantis OpenStack development team did a great job researching and tuning OpenStack, and came up with certain standards and best practices based on our experience in cloud technologies. As a result, we know all of the possible changes in the settings for OpenStack services, such as configuration settings, database schemas, and so on, from version to version.

Initially, we thought about trying to upgrade OpenStack without any additional hardware, but after a more detailed investigation we decided that, due to possible issues with package version conflicts, kernel module conflicts, and additional downtime and complexity, we'd have to go in a slightly different direction. The approach suggested in this article provides a more reliable way to upgrade.

After discarding the idea of upgrading with no additional hardware, we moved to the idea of first replacing the administrative nodes in the cluster. At a high level, the upgrade scheme looks something like Figure 1:


Figure 1 – High level upgrade scheme

Thanks to improvements in OpenStack Icehouse that decouple the controllers from the compute nodes, it's possible to upgrade them separately, and we're going to take advantage of that here. The general idea is that we want to deploy a new controller that runs the target OpenStack version, and then transfer the databases and services configuration to this new controller, making DB adoption and configuration changes during the transfer. The main goal of the controller node switch is to transfer the IP addresses for particular networks (management, storage, fixed) from the old controller to the new one.

In other words, when we turn off the old controller node, the new controller — with reconfigured network interfaces — will completely replace it in the cluster.

We should note that it’s not actually necessary to install additional hardware in order to do this.  You can just as easily use a virtualized server (not on the source OpenStack cluster, of course!) to perform the upgrade, then reprovision the old controller in the new cluster and move the data back onto it.  This process is beyond the scope of this article, however.

Another advantage of creating a new controller is that it leaves the old controller data intact, which enables us to revert changes in the event that any problems appear.

Once the administrative data has been transferred and the IP addresses have been switched to the new controller, compute nodes will start interacting with new services. These services will already know the state of the compute nodes, because the databases have been moved over.

The general methodology

Every service has its own idiosyncrasies, but in general upgrades follow this pattern (a generic shell sketch follows the list):

  1. Stop the service.

  2. Create a database dump for the source cloud.

  3. Replace the config files on the target cloud with a copy of the config files from the source cloud.

  4. Replace the target database for the service with the dump from the source cloud.

  5. If necessary, upgrade the structure of the database to match the target cloud’s release level.  (Fortunately, this can be done easily via provided scripts.)

  6. Adjust permissions on the relevant folders.

  7. Restart the service.
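
Expressed as a rough shell sketch (service names, credentials and paths are placeholders; the concrete commands for each service follow in the next sections):

# 1. stop the service
$ service openstack-<service> stop
# 2. dump the database on the source cloud
$ mysqldump -u <service> -p<password> <service> > <service>-backup.sql
# 3. replace the config files on the target with copies from the source
$ cp -r backup-havana/<service>-havana/* /etc/<service>/
# 4. replace the target database with the dump
$ mysql -uroot <service> < <service>-backup.sql
# 5. upgrade the database structure to the target release
$ <service>-manage db_sync
# 6. adjust permissions on the relevant folders
$ chown -R <service>:<service> /etc/<service>
# 7. restart the service
$ service openstack-<service> start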

Performing the migration

As a means of testing our new process, we updated a cloud running Mirantis OpenStack-4.1 (Havana) to Mirantis OpenStack-5.0 (Icehouse) by completely replacing the cloud controller.  (The process would be the same to move to later versions of OpenStack as well.)

Setting up

Don’t forget to make sure that the credentials for the database and AMQP services on the new controller are the same as those for the old controller so the compute nodes can interact with them correctly.

We started by deploying two clouds in the same configuration, one using OpenStack Havana and one using OpenStack Icehouse, using Fuel for deployment. Our clouds have the same settings for data storage: file-based storage for OpenStack Glance, and iSCSI over LVM for OpenStack Cinder. We also used shared networks (management, storage, internal) for both clouds.

Before starting the upgrade procedure, we made backups of the Havana configuration files and databases, as well as the openrc file. We also don't want any of the image files to have their contents changed while we're copying, so we need to disable any activity in the source cloud.

To avoid undesirable activity, turn off the API service on the controller node of the source cloud:

# service openstack-nova-api stop

Next, create directories for the backup config files:

# for i in keystone glance nova cinder neutron openstack-dashboard; \
do mkdir $i-havana; \
done

Then do the actual backup:

# for i in keystone glance nova cinder neutron openstack-dashboard; \
do cp -r /etc/$i/* $i-havana/; \
done

Finally, copy the storage data from the Havana installation to the Icehouse installation. In our case, that consists of Glance image files and Cinder volume data on LVM.

# scp -r /var/lib/glance <user_name>@<ip_of_Icehouse_controller>:/var/lib/

At this point you’ve finished the preparation phase and you’re ready to do the actual switchover.  

Upgrading Keystone

Start by creating the Keystone database dump on the source cloud:

# mysqldump -u keystone -pfDm3oOLv keystone > havana-keystone-db-backup.sql

Copy the output file to the Icehouse controller.

Next you’ll need to replace the keystone configuration files on the Icehouse controller with the configuration from the Havana installation.  Start by stopping the keystone process:

# service openstack-keystone stop

On the target server, change /etc/keystone/keystone.conf so that the credentials match what the database on Icehouse will expect. Once you've done that, drop the existing keystone database and create a new one with the needed permissions for the keystone user:

mysql> DROP DATABASE keystone;
mysql> CREATE DATABASE keystone;
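
If the keystone database user does not already exist on the new controller, you would also re-grant its access at this point; a sketch, where the password placeholder must match what keystone.conf expects:

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '<keystone_db_password>';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '<keystone_db_password>';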

Import the exported Keystone database:

# mysql -uroot keystone < backup-havana/havana-keystone-db-backup.sql

Next, we can use the keystone-manage tool, which enables us to upgrade the database structure to Icehouse:

# keystone-manage db_sync

Finally, change permissions on the keystone folder:

# chown -R keystone:keystone /etc/keystone/

Now start the service and make sure everything works:

# service openstack-keystone start
# source openrc.havana

You should get a list of endpoints:

# keystone endpoint-list
+----------------------------------+-----------+-------------------------------------------+---------------------------------------+---------------------------------------+----------------------------------+
| id | region | publicurl | internalurl | adminurl | service_id |
+----------------------------------+-----------+-------------------------------------------+---------------------------------------+---------------------------------------+----------------------------------+
| 199949f2481242fc96b8e50b95e0053e | RegionOne | http://172.16.40.43:9696 | http://10.0.0.4:9696 | http://10.0.0.4:9696 | c2c6fc9a8b7f4feeaeb970069174dacc |
| 2e2b3f5c1906465f8e269644e73a1268 | RegionOne | http://172.16.40.43:9292 | http://10.0.0.4:9292 | http://10.0.0.4:9292 | eca3144671534bb1be469e9acac22d60 |
| 33a1717ea672454ead5daf2aa3a05df6 | RegionOne | http://172.16.40.43:8773/services/Cloud | http://10.0.0.4:8773/services/Cloud | http://10.0.0.4:8773/services/Admin | c16c6ae8b9e445c197b66aaa4566a039 |
| 5869dee417c94c39865ab44acbbb021d | RegionOne | http://172.16.40.43:8004/v1/%(tenant_id)s | http://10.0.0.4:8004/v1/%(tenant_id)s | http://10.0.0.4:8004/v1/%(tenant_id)s | ca4a8d13471144bcbc7c4920b55890e3 |
| 8cb1c8216990434eb46e8180faf522bd | RegionOne | http://172.16.40.43:5000/v2.0 | http://10.0.0.4:5000/v2.0 | http://10.0.0.4:35357/v2.0 | 43e1d90586df44f399224c67ea6e9b97 |
| 9a8c4808244a48feb6f6df739b1cfcaa | RegionOne | http://172.16.40.43:8776/v1/%(tenant_id)s | http://10.0.0.4:8776/v1/%(tenant_id)s | http://10.0.0.4:8776/v1/%(tenant_id)s | 21bcde17c33347d1bb00bcbd8b84b447 |
| eb4adb395e72482f8447d50103299a6d | RegionOne | http://172.16.40.43:8774/v2/%(tenant_id)s | http://10.0.0.4:8774/v2/%(tenant_id)s | http://10.0.0.4:8774/v2/%(tenant_id)s | 438d65e654af44fb9c047cd661a65a14 |
+----------------------------------+-----------+-------------------------------------------+---------------------------------------+---------------------------------------+----------------------------------+

Upgrading Glance

The approach for other services will be similar to that for Keystone. Configuration files should be changed on the new controller, and databases from the old controller should be converted and imported.

First stop all glance services:

# for i in /etc/init.d/openstack-glance-*; do $i stop; done

Drop the glance database on the new controller, create a new one and import the glance database from the Havana cluster:

# mysql -uroot glance < backup-havana/havana-glance-db-backup.sql
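
The dump, drop, and create steps are analogous to the Keystone ones. As a sketch: on the source controller, dump the database with mysqldump (the glance database password placeholder is whatever your source cloud uses), and on the target controller drop and re-create it before importing:

# mysqldump -u glance -p<glance_db_password> glance > havana-glance-db-backup.sql
mysql> DROP DATABASE glance;
mysql> CREATE DATABASE glance;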

Make sure to convert the character set for each table to UTF-8:

# mysql -u root -p
mysql> SET foreign_key_checks = 0;
mysql> ALTER TABLE glance.image_locations CONVERT TO CHARACTER SET 'utf8';
mysql> ALTER TABLE glance.image_members CONVERT TO CHARACTER SET 'utf8';
mysql> ALTER TABLE glance.image_properties CONVERT TO CHARACTER SET 'utf8';
mysql> ALTER TABLE glance.image_tags CONVERT TO CHARACTER SET 'utf8';
mysql> ALTER TABLE glance.images CONVERT TO CHARACTER SET 'utf8';
mysql> ALTER TABLE glance.migrate_version CONVERT TO CHARACTER SET 'utf8';
mysql> SET foreign_key_checks = 1;
mysql> exit
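
If you would rather not list the tables by hand, the same conversion statements can be generated from information_schema (a sketch; adjust the MySQL credentials to your environment):

# { echo "SET foreign_key_checks = 0;"; \
  mysql -uroot -N -e "SELECT CONCAT('ALTER TABLE glance.', table_name, ' CONVERT TO CHARACTER SET utf8;') FROM information_schema.tables WHERE table_schema = 'glance';"; \
  echo "SET foreign_key_checks = 1;"; } > convert-glance-utf8.sql
# mysql -uroot < convert-glance-utf8.sql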

Update the glance database:

# glance-manage db_sync

Replace the glance configuration files on the Icehouse controller with the configuration from the Havana controller:

# cp -f backup-havana/glance-havana/* /etc/glance/

In /etc/glance/glance-api.conf, change the database and RabbitMQ credentials to match the values expected on the target cloud:

sql_connection=mysql://glance:1vCYsATB@127.0.0.1/glance?read_timeout=60
rabbit_use_ssl = False
rabbit_userid = nova
rabbit_password = ygrfyaNZ

And then in /etc/glance/glance-registry.conf:

sql_connection=mysql://glance:1vCYsATB@127.0.0.1/glance?read_timeout=60

Change the permissions for the /etc/glance directory:

# chown -R glance:glance /etc/glance

Copy the glance image-cache and images directories from the Havana OpenStack cluster to the same locations on the Icehouse OpenStack cluster. In our case we copied the contents of /var/lib/glance to the new server, into the same directory, and changed the permissions:

# chown -R glance:glance /var/lib/glance

Start the services and make sure they work.

# service openstack-glance-api start
# service openstack-glance-registry start
# glance image-list
+--------------------------------------+--------+-------------+------------------+----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+--------+-------------+------------------+----------+--------+
| 5bd625a0-cb69-4018-93c8-cf8bf0348f52 | TestVM | qcow2 | bare | 14811136 | active |
+--------------------------------------+--------+-------------+------------------+----------+--------+

Upgrading Nova

The upgrade approach for the Nova service is similar, but keep in mind that in this setup the Neutron service handles metadata on behalf of Nova, so we can’t run the nova-metadata service.
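
In practice that means leaving the metadata API out of the enabled_apis list in /etc/nova/nova.conf on the controller, for example (a sketch; it assumes a packaging where the metadata API is served by nova-api):

[DEFAULT]
enabled_apis = ec2,osapi_compute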

Stop all nova services on the Icehouse cluster:

# for i in /etc/init.d/openstack-nova-*;do $i stop; done

Drop the nova database, create the new one and import the nova database from the Havana cluster:

# mysql -uroot nova < backup-havana/havana-nova-db-backup.sql

Replace the nova configuration files on the Icehouse controller with the configuration from the Havana cluster:

# cp -f backup-havana/nova-havana/* /etc/nova/

Don’t forget to make sure that the RabbitMQ and MySQL credentials in the /etc/nova/nova.conf file on the controller and compute nodes are correct for the target cloud.
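
The relevant keys are the same kind you adjusted for Glance, for example (a sketch; all values are placeholders):

[DEFAULT]
sql_connection=mysql://nova:<nova_db_password>@<controller_ip>/nova?read_timeout=60
rabbit_userid = nova
rabbit_password = <rabbit_password>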

Change the permissions on the /etc/nova directory:

# chown -R nova:nova /etc/nova

Update the nova database:

# nova-manage db_sync

Since we’re going to connect the old Havana compute nodes to the Icehouse cluster, note that Nova supports a limited live upgrade model for compute nodes in Icehouse. To use it, upgrade the controller infrastructure (everything except nova-compute) first, but set the [upgrade_levels]/compute=icehouse-compat option. This enables the Icehouse controller services to talk to the Havana compute services. Upgrades of individual compute nodes can then proceed normally. When all the compute nodes are upgraded, unset the compute version option to restore the default and restart the controller services. Find the following section and key in /etc/nova/nova.conf and make sure the version is set to “icehouse-compat”:

[upgrade_levels]
# Set a version cap for messages sent to compute services. If
# you plan to do a live upgrade from havana to icehouse, you
# should set this option to "icehouse-compat" before beginning
# the live upgrade procedure. (string value)
compute=icehouse-compat

Note that icehouse-compat was missing in the first Havana release, so check that your Havana OpenStack cluster includes the needed patch (https://review.openstack.org/#/c/84755/).

Start the nova services, restart the openstack-nova-compute service on the compute nodes, and test. (Be careful not to start nova-metadata if you use the neutron-metadata-service.)
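
On the controller this looks roughly like the stop loop run in reverse (a sketch; init script names vary by distribution, and if your packaging ships a separate nova-metadata init script, leave it out of the loop):

# for i in /etc/init.d/openstack-nova-*; do $i start; done

And on each compute node:

# service openstack-nova-compute restart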

To make sure Nova is working correctly, use the nova-manage tool:

# nova-manage service list
Binary            Host               Zone      Status   State  Updated_At
nova-compute      node-3.domain.tld  nova      enabled  :-)    2014-09-03 15:17:31
nova-cert         node-1.domain.tld  internal  enabled  :-)    2014-09-03 15:17:25
nova-conductor    node-1.domain.tld  internal  enabled  :-)    2014-09-03 15:17:25
nova-consoleauth  node-1.domain.tld  internal  enabled  :-)    2014-09-03 15:17:25
nova-console      node-1.domain.tld  internal  enabled  :-)    2014-09-03 15:17:25
nova-scheduler    node-1.domain.tld  internal  enabled  :-)    2014-09-03 15:17:25

Services that are running properly will show a status of “:-)”. Should there be a problem, you’ll see a status of “XXX”.  You can also check the instances list:

# nova list
+--------------------------------------+------------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------+--------+------------+-------------+---------------------+
| 75eee200-5961-4e2e-bc9f-b6c276c68554 | instance-1 | ACTIVE | - | Running | net-1=192.168.100.2 |
| bc04ec96-21f6-4390-a968-b4a722fa544f | instance-2 | ACTIVE | - | Running | net-1=192.168.100.4 |
+--------------------------------------+------------+--------+------------+-------------+---------------------+

Upgrading Neutron

The upgrade approach for Neutron is also similar to the other services, but the upgrade sequence differs: the database migration is done in two phases, one to stamp the database with the source (Havana) version, and one to upgrade it to the target (Icehouse) version based on the service configuration and plugin list. Because in this particular example both clusters use Mirantis OpenStack and the same Open vSwitch plugin, the upgrade process is less complex, but even in the case of the ML2 plugin the process should be straightforward.

To perform the upgrade, start by stopping all neutron services on the target controller:

# for i in /etc/init.d/neutron-*; do $i stop; done

Drop the neutron database, create the new one and import the neutron database from the source cluster:

# mysql -uroot neutron < backup-havana/havana-neutron-db-backup.sql

Replace the neutron configuration files on the target controller with the configuration from the source cluster:

# cp -f backup-havana/neutron-havana/* /etc/neutron/

In /etc/neutron/neutron.conf, make sure the credentials for the database and AMQP match the actual values.

Change the permissions for the  /etc/neutron directory:

# chown -R neutron:neutron /etc/neutron

Update the neutron database:

# neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini stamp havana
# neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini upgrade icehouse

Start the neutron services and verify:

# for i in /etc/init.d/neutron-*; do $i start; done
# neutron net-list
+--------------------------------------+-----------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+-----------+-------------------------------------------------------+
| 36560f8d-73ee-4f41-8def-8aecf7516417 | net04 | 46009ba8-426b-40ad-bde8-88ad932fb706 192.168.111.0/24 |
| 50b9e0ed-8dd5-4ea8-b4bf-87ef2ca879a5 | net04_ext | b1252005-89f1-49e4-9443-c289602d18ea 172.16.40.32/27 |
| a325ff94-6860-400f-a261-b91979138d35 | net-1 | d4b25a0e-5393-4227-ad76-da00f1a08017 192.168.100.0/24 |
+--------------------------------------+-----------+-------------------------------------------------------+

Please note that occasionally Neutron displays a problem with “flapping” agents.  In this case, the neutron agent shows the wrong status: working agents are shown as down, and vice versa.  The OpenStack community is still working on fixing this issue.
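
You can see what Neutron currently reports for each agent with:

# neutron agent-list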

Upgrading Cinder

In our setup, both clouds have the LVM backend for Cinder on the controller, so we need to perform a few extra steps in order to properly complete the data transfer:

  1. Create a new volume on the target controller in advance. This new volume should be the same size as the volume on the source controller.  Then transfer the data from the source controller to the destination controller, as shown in the sketch after this list.

  2. Adapt the cinder database. If the iSCSI target for the compute service stays the same (in other words, it has the same IP address), then no further database changes are necessary. If the IP address changes, however, you’ll need to correct the database and fix the iSCSI targets so that the instances keep the connection to their attached volumes. In our test, we also moved the volumes from the storage network to the management network.
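
For an LVM-backed volume, the transfer in step 1 might look like the following sketch; it assumes the volume group is called cinder-volumes on both controllers and that the volume is not being written to during the copy. On the source controller, note the exact LV name and size:

# lvs cinder-volumes

On the target controller, create a logical volume of the same size:

# lvcreate -L <size> -n volume-<volume_id> cinder-volumes

Then stream the data across from the source controller:

# dd if=/dev/cinder-volumes/volume-<volume_id> bs=4M | ssh <user_name>@<ip_of_Icehouse_controller> "dd of=/dev/cinder-volumes/volume-<volume_id> bs=4M"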

Now you’re ready to move on with the upgrade as usual.

Stop all cinder services on the target cluster:

# for i in /etc/init.d/openstack-cinder-*; do $i stop; done

Drop the cinder database on the target controller, create a new one and import the cinder database from the source cluster:

# mysql -u root cinder < backup-havana/havana-cinder-db-backup.sql

Replace the cinder configuration files on the target controller with the configuration from the source cluster:

# cp -f backup-havana/cinder-havana/* /etc/cinder/

In /etc/cinder/cinder.conf, make sure the credentials for the database and AMQP match the actual values.

Change the permissions for the /etc/cinder directory:

# chown -R cinder:cinder /etc/cinder

Update the cinder database:

# cinder-manage db_sync

Start the cinder services and verify:

# service openstack-cinder-api start
# service openstack-cinder-volume start
# service openstack-cinder-scheduler start

Make sure to check that the new controller node and each compute node are connected through the storage network, because this network is used for the iSCSI protocol.

If you have a different IP address for the iSCSI target but volumes are already attached, you will need to change the iscsi_ip_address parameter in the /etc/cinder/cinder.conf file, as well as the connection information in the nova database on the controller node in the block_device_mapping table.
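
The cinder.conf part is a one-line change (a sketch; 10.0.0.4 is the management network address used in the example below):

iscsi_ip_address = 10.0.0.4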

For example, let’s assume that we have mysql on the destination controller node, and the nova database shows an instance with an attached volume (id=f374f730-4adc-40f3-bfd4-67c4ac14ba76). Before the service upgrade we might see:

mysql> select volume_id, connection_info from block_device_mapping where volume_id='f374f730-4adc-40f3-bfd4-67c4ac14ba76';
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| volume_id                            | connection_info                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| f374f730-4adc-40f3-bfd4-67c4ac14ba76 | {"driver_volume_type": "iscsi", "serial": "f374f730-4adc-40f3-bfd4-67c4ac14ba76", "data": {"access_mode": "rw", "target_discovered": false, "encrypted": false, "qos_specs": null, "target_iqn": "iqn.2010-10.org.openstack:volume-f374f730-4adc-40f3-bfd4-67c4ac14ba76", "target_portal": "192.168.1.4:3260", "volume_id": "f374f730-4adc-40f3-bfd4-67c4ac14ba76", "target_lun": 1, "device_path": "/dev/disk/by-path/ip-192.168.1.4:3260-iscsi-iqn.2010-10.org.openstack:volume-f374f730-4adc-40f3-bfd4-67c4ac14ba76-lun-1", "auth_password": "eNJy8GZUta39wqEiCr23", "auth_username": "3NRooC58UK3YiYRaPyAt", "auth_method": "CHAP"}} |
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)
mysql> update block_device_mapping set connection_info='{"driver_volume_type": "iscsi", "serial": "f374f730-4adc-40f3-bfd4-67c4ac14ba76", "data": {"access_mode": "rw", "target_discovered": false, "encrypted": false, "qos_specs": null, "target_iqn": "iqn.2010-10.org.openstack:volume-f374f730-4adc-40f3-bfd4-67c4ac14ba76", "target_portal": "10.0.0.4:3260", "volume_id": "f374f730-4adc-40f3-bfd4-67c4ac14ba76", "target_lun": 1, "device_path": "/dev/disk/by-path/ip-10.0.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-f374f730-4adc-40f3-bfd4-67c4ac14ba76-lun-1", "auth_password": "eNJy8GZUta39wqEiCr23", "auth_username": "3NRooC58UK3YiYRaPyAt", "auth_method": "CHAP"}}' where volume_id='f374f730-4adc-40f3-bfd4-67c4ac14ba76';
Query OK, 1 row affected (0.14 sec)

Rows matched: 1  Changed: 1  Warnings: 0

So in this case, the iSCSI IP has been changed from 192.168.1.4 to 10.0.0.4 — in other words, from the storage network to the management network.

After you make these changes, restart cinder and the tgtd daemons on the controller node.
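
In our environment that meant the following (a sketch; service names may differ on your distribution):

# service openstack-cinder-volume restart
# service tgtd restart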

To test the volumes, run the Cinder CLI commands:

# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 69717c74-2cdd-4ef7-b7dd-a38ce63baa54 | in-use | vol-2 | 2 | None | false | bc04ec96-21f6-4390-a968-b4a722fa544f |
| f374f730-4adc-40f3-bfd4-67c4ac14ba76 | in-use | vol-1 | 1 | None | false | 75eee200-5961-4e2e-bc9f-b6c276c68554 |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

You might also look for information about a particular volume, as in:

# cinder show <volume_id>

Upgrading the Dashboard

Last, but not least, we need to upgrade the OpenStack Dashboard UI.  

To adapt and configure Horizon, the first step is to change the default keystone role directive in the /etc/openstack-dashboard/local_settings file:

- Change the OPENSTACK_KEYSTONE_DEFAULT_ROLE key from "Member" to "_member_"

Next, make sure that the “OPENSTACK_HOST” variable is set to the correct controller IP address. It should point to the actual address of the new controller, such as:

- OPENSTACK_HOST = "10.0.0.4"

Finally, restart the service:

# service httpd restart

After the web server restarts, check the dashboard by accessing it through the browser.

Conclusion

By creating a single new controller and replacing the old one, we eliminate many of the problems that can happen during the upgrade, and we provide a way to roll back if necessary. This approach helps you perform a cloud upgrade with minimal effort, as well as very little new hardware and engineering time. This is just one upgrade example, of course; OpenStack deployments can vary widely, and service settings and the way services interact may be very different in your environment, but the ideas presented here should give you a good sense of how to proceed.

Moving forward, we’re working on automating the upgrade procedure. In upcoming articles we’ll also cover topics such as support for different Ceph backends during an upgrade, HA upgrades, and various network settings and plugins for Neutron. Stay tuned!

The post Yes! You CAN upgrade OpenStack. Here’s how. appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Oleksii Kolodiazhnyi and Igor Pukha at October 28, 2014 03:10 PM

Kenneth Hui

What EMC Is Up To With OpenStack Cloud Solutions


Today, EMC is announcing the availability of a family of EMC Enterprise Hybrid Cloud Solutions, including engineered IaaS solutions based on VMware, Microsoft, and OpenStack technologies.  The official press release and “Redefine Hybrid Cloud” launch event rightly focus on the Federation SDDC Edition of the EMC Enterprise Hybrid Cloud Solution family, given its immediate availability and the importance of VMware to the EMC Federation.  However, I want to provide readers some insights and an early preview of the OpenStack-powered Edition, which will be available in 2015, along with other solutions offerings that will be announced during the upcoming OpenStack Summit in Paris.


I rejoined EMC four months ago as a Business Development Manager in the Cloud Solutions Group, focusing specifically on EMC’s go-to-market OpenStack solutions strategy.  Since then, Team OpenStack @EMC has been working on creating solutions that help customers address their challenges with deploying, operating, and supporting OpenStack-powered clouds and enabling developers to be more agile and productive in deploying their applications. Let’s talk about what you might expect from the EMC Enterprise Hybrid Cloud OpenStack-powered Edition.  The caveat is that since the solution has not yet been released, specific implementation details are still in flux.

Solving Customer Challenges

Again, the goal of the EMC Enterprise Hybrid Cloud OpenStack-powered Edition is to help customers be successful with their choice to deploy OpenStack-powered clouds.  Customer reasons for choosing OpenStack can vary from a preference to go the open source route, to a desire to deploy multiple hypervisors, to a decision to deploy a cloud architecture that aligns with “born in the cloud” workloads, and so on.  But no matter the reasons, EMC is committed to honoring customer choice and to providing the best OpenStack solutions in the market.  To deliver on that commitment, we are creating an OpenStack-powered solution to address the needs and challenges of two groups of users – the Cloud Administrator and the Developer.


The primary reason that many enterprise developers have chosen to develop applications on public cloud platforms, such as Amazon Web Services, is how easy it is for them to access and manage infrastructure resources.  For enterprise IT to win users back, they must be able to provide the same self-service and resources on-demand capabilities in their private clouds that their developers have available to them in the public cloud.  However, Cloud administrators question the readiness of OpenStack for enterprise use.  Many of them have attempted the do-it-yourself approach using software from trunk and have recognized that taking this route requires enterprises to assume the burden of being their own systems vendor and integrator, who have to build their own solutions and provide their own support.  This requires a level of engineering, including software development, hardware certification, and system integration that many users do not have the capability or desire to take on.  The other alternative, however, is to choose a product or solution from among existing vendors, many of whom do not have the experience or resources to build and support an enterprise-grade OpenStack solution.

To enable customers to succeed, we are building an engineered turnkey OpenStack solution that addresses customer use cases for both the Cloud Administrator and the Developer.  This solution will provide the frictionless access to cloud resources that developers need while enabling Cloud Administrators to create a secure and protected infrastructure that can be trusted with the crown jewels of the Business, including confidential data and software intellectual property.  For example, we could give developers the ability to use the OpenStack APIs to self-provision cloud resources but ensure that these resources are provisioned according to policies set in advance by Cloud Administrators.  The same IaaS solution will also be easy to deploy, manage and support so that both groups of users can focus on working together and providing value to their companies. Enabling this type of turnkey OpenStack solution requires innovative engineering around a well-thought-out converged infrastructure system.

EMC Engineered Solution


Building an enterprise-grade OpenStack solution that can effectively address customer use cases must begin with an enterprise-grade OpenStack product as its core.  That core product for the EMC Enterprise Hybrid Cloud OpenStack-powered Edition comes to EMC via the recent Cloudscaling acquisition and their OpenStack-powered cloud operating system, Open Cloud System (OCS).  OCS has been deployed at many customer accounts and has proven itself as a leading OpenStack-powered distribution.  With the Cloudscaling team on board, EMC can continue to innovate and improve on OCS, while leveraging the valuable experience of folks like Randy Bias and Sean Winn.  Randy writes here about the roles of OCS and the Cloudscaling team in accelerating our OpenStack-powered hybrid cloud solution.  Chad Sakac, SVP of EMC Global Systems Engineering, also provides his take on why Cloudscaling at EMC.


With an enterprise-grade OpenStack-powered product, built on OCS as the core, we can then integrate other technologies that can help us build a solution to address all our identified customer use cases.  These technologies may come from the EMC Federation, such as EMC storage and data protection products, VMware NSX, RSA, or Pivotal CF.  They may come from partners such as CliQr or from open source projects, such as Docker or Mesos.  The goal is to choose the appropriate technologies that best address customer challenges and to integrate these tightly into a fully supported turnkey solution.  We want to ensure that any technologies we integrate into our OpenStack-powered solution are fully tested together, configured optimally, and integrated with our deployment and management tools so we can provide the best customer experience and outcomes possible.

Built On Converged Infrastructure

Chad has written in-depth on the vision for Converged Infrastructure (CI) at EMC, offering both a taxonomy for understanding the varieties of CI technologies and a view into EMC’s CI strategy.  As Chad explains, one of the values of CI is to provide the platform for creating turnkey IaaS solutions, including cloud platforms such as OpenStack.  So it should not be a surprise to anyone that the EMC Enterprise Hybrid Cloud OpenStack-powered Edition will be leveraging one or more CI systems.

CI Roadmap

In his most recent post, Chad proposes that Rackscale CI systems are “built for broad disaggregated and flexible commodity hardware and focus on “new application” (aka “platform 3” or “built for failure”) PaaS stacks and data fabrics.”  OpenStack has been designed from the very beginning to be an IaaS platform for these types of new applications and PaaS stacks.  So I would be in full agreement that an OpenStack-powered solution is ideally suited for Rackscale CI and it makes sense for EMC to focus our efforts there as the sweet spot.  It is also not very difficult for me to see an OpenStack-powered solution as the best IaaS platform for driving future Hyper-Rackscale CI systems, allowing enterprises and service providers to scale out resources like the Facebooks and Twitters of the world.

Where I think there is room for further discussion is the idea of running OpenStack on Integrated Infrastructures, such as Vblocks, and on Common Modular Building Block (CMBB) hyper-converged infrastructure systems, similar to EVO:RAIL.  The argument against this would be that since Integrated Infrastructures are designed for traditional Platform 2 workloads and CMBB is designed for ROBO use cases, neither is suitable infrastructure for an OpenStack solution.  Let me make a quick argument for why I would not rule either out and why you may see the EMC Enterprise Hybrid Cloud OpenStack-powered Edition running on multiple types of CI systems.


EMC has customers who have committed to deploying an OpenStack cloud with arrays such as the VNX and XtremIO and are pushing towards improving or adding features to OpenStack that would enable it to provide more infrastructure resiliency for traditional Platform 2 workloads.  There are other customers who are looking at running multi-hypervisor OpenStack solutions that use KVM for new applications and VMware vSphere for traditional applications.  While I myself have questioned if it makes sense for OpenStack to focus on traditional workloads, in both cases, an OpenStack solution running on an Integrated Infrastructure CI would not only solve real customer problems but could provide a transitional solution between Platform 2 and platform 3 IaaS.

CMBB provides another interesting CI option for hosting an OpenStack solution.  There are a number of customers who wish to start small with their OpenStack deployments, either on-premises or hosted at a service provider.  These small deployments are often proofs of concept, test/dev use cases, or short-term projects where starting with a small footprint is preferable.  Successful deployments here could be precursors to a larger scale-out solution.  As Chad mentions, a Vblock does not scale down well and neither does a Rackscale architecture.  So an OpenStack cloud running on a CMBB hyper-converged system might be an ideal option for providing a turnkey solution at a smaller scale.  In addition, for an IaaS where linear scaling of compute and storage together is not an issue, a CMBB solution can actually provide more than adequate scalability for an OpenStack cloud.

Concluding Thoughts

What is hopefully clear from this blog post is that EMC is serious about supporting customer choice, including providing options for how customers can succeed with their OpenStack deployments.  The announcement of the EMC Enterprise Hybrid Cloud OpenStack-powered Edition is a clear indication of that commitment.  Expect to hear me provide more details on our engineered turnkey solution as the release date draws near.  I would also suggest paying close attention to the upcoming OpenStack Summit where we will be announcing additional OpenStack solutions offerings.


Filed under: Cloud, Cloud Computing, Converged Infrastructure, EMC, Hybrid Cloud, IaaS, OpenStack, Private Cloud, Solutions, Vblock Tagged: Cloud, Cloud computing, Cloudscaling, Converged Infrastructure, EMC, Hybrid Cloud, IaaS, OpenStack, Private Cloud, Vblock

by kenhui at October 28, 2014 03:02 PM

Maish Saidel-Keesing

Nova-Docker on Juno

Containers are hot. They are the latest buzzword. Unfortunately, buzzwords are not always the right way to go, but I have been wanting to use containers as first-class citizens on OpenStack for a while.

In Icehouse, Heat has support for containers, but only in the sense that you can launch an instance and then launch a container within that instance (Scott Lowe has a good walkthrough for this; it is a great read).

First a bit of history.

The Docker driver is a hypervisor driver for OpenStack Nova Compute. It was introduced with the Havana release, but lives out-of-tree for Icehouse and Juno. Being out-of-tree has allowed the driver to reach maturity and feature-parity faster than would have been possible had it remained in-tree. It is expected the driver will return to mainline Nova in the Kilo release.

The Docker driver was removed from Nova due to CI issues and was migrated to Stackforge for the Icehouse release.

From the announcement for Juno

Many operational updates were also made this cycle including improvements for rescue mode that users requested as well as allowing per-network setting on nova-network code. Key drivers were added such as bare metal as a service (Ironic) and Docker support through StackForge.

I set out to try it out. This is my environment:

  • Fedora 20 (x64)
  • All in one RDO installation of OpenStack (2014.2)

First things first: get OpenStack up and running (I am not going to go into how that is done in this post).

The stages are as follows:

  1. Install Docker on the compute node
  2. Install required packages to install nova-docker driver
  3. Config file changes
  4. Dockerize all the things!!

Install Docker on the compute Node

Following the documentation (do so for your Linux distribution)

yum -y remove docker
yum -y install docker-io

Then start the Docker service and set it to run at startup:

systemctl start docker
systemctl enable docker

Now to test that Docker is working correctly without OpenStack

docker run -i -t ubuntu /bin/bash

If all is good then you should see something similar to the screenshots below.

docker run

docker ps

Now we know that Docker is working correctly.

Install required packages to install nova-docker driver

Following the OpenStack documentation for Docker.

There are two packages needed to start, pip (python-pip) and git.

yum install -y python-pip git

Then we get the nova-docker driver from Stackforge and install it.

pip install -e git+https://github.com/stackforge/nova-docker#egg=novadocker
cd src/novadocker/
python setup.py install

This will pull the files from GitHub and place them under your current working directory. Then you install the modules required for the driver.

Config file changes

The default compute driver needs to be changed, edit your /etc/nova/nova.conf and change the following option.

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

Create the directory /etc/nova/rootwrap.d, if it does not already exist, and inside that directory create a file "docker.filters" with the following content:

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user

[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

Glance is the place where all the images are stored. It used to be the case that you needed a private Docker registry, but this is no longer necessary; images can be added to Glance directly.

Edit the /etc/glance/glance-api.conf file and add docker to the supported container_formats value like the following example.

# Supported values for the 'container_format' image attribute
container_formats=ami,ari,aki,bare,ovf,ova,docker

We now need to restart the services for the new setting to take effect.

systemctl restart openstack-nova-compute
systemctl restart openstack-glance-api

If all is well and there were no configuration errors – then you are good to go.

Dockerize all the things!!

No demonstration is ever complete without showing the deployment of a Wordpress application (why in the hell is it always Wordpress???).

We pull the Wordpress container into the host and then push it into Glance (assuming you have already sourced the credentials for Keystone/Glance)

docker pull tutum/wordpress
docker save tutum/wordpress | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/wordpress

**The image name has to be the same as the container name

docker pull

glance image-create

image

And in the GUI

Horizon

And now to boot the new instance

nova boot --image "tutum/wordpress" --flavor m1.tiny test

nova boot

Here is the Console log

console log

Open a web browser to the IP address that the instance received from Neutron.
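
If you prefer the command line, a quick check from a host that can reach the instance might look like this (the IP is a placeholder):

curl -I http://<instance_ip>/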

And hey presto – Wordpress!

Hey - Wordpress

This was a preliminary test – still many things to check…

  • Automation (Heat)
  • Bug problems
  • and so on…

Happy Dockerizing!! (and yes it seems that is actually a word)

by Maish Saidel-Keesing (noreply@blogger.com) at October 28, 2014 02:00 PM

Opensource.com

Making cloud storage easy with OpenStack Swift

When you want to learn about object storage in OpenStack, John Dickinson is the guy to ask. John is the Director of Technology at SwiftStack, a company which relies on the OpenStack Swift project to provide unstructured data storage to customers around the world. He also serves as the Program Technical Lead (PTL) for OpenStack Swift and has been involved in the development of Swift since 2009.

by Jason Baker at October 28, 2014 11:00 AM

October 27, 2014

IBM OpenStack Team

IBM to host sponsor sessions at the OpenStack Summit in Paris

In my previous blog post, I went through each of the IBM technical sessions that were selected for the OpenStack Summit in Paris. In this post, I want to shift gears a bit and focus on the agenda for the IBM sponsor sessions on Wednesday.

OpenStack SummitBuilding this agenda, we wanted to avoid the standard vendor sales pitches and focus on sessions that appeal to everyone—community builders, developers, operators and deployment specialists. The outcome is a series of presentations that start at the top with the open technologies that are shaping the new cloud model, proceeds through how these open technologies are going about building healthy, vibrant communities and how you can help, and finally leads into the various ways IBM makes OpenStack available and shows them in action.

Without further ado, here are the IBM sponsor sessions:

Wednesday, November 5

9:50–10:30
Room 243
Step on the Gas: See How Open Technologies are Driving the Future of the Enterprise

Dave Lindquist is a prestigious IBM Fellow and CTO of Cloud and Smarter Infrastructure Software, and Todd Moore has responsibility for open technologies and partnerships and serves as the IBM representative on the OpenStack Board of Directors. Together they will talk through how open source technologies such as OpenStack, Cloud Foundry and Docker are shaping a new cloud model built on interoperability.

This session will appeal to anyone interested in understanding where the future of cloud lies, the concepts and technologies that will drive it and how we will get there.

11:50–12:30
Room 212/213
IBM and OpenStack: Collaborations Beyond the Code

Daniel Krook and Manuel Silveyra, two leading architects on the IBM Cloud Labs team, will take you through the myriad ways IBM contributes to OpenStack—including code contributions, conference and summit content, local meetups and social media—with the goals of soliciting feedback to improve IBM community engagement and enabling you to become a larger contributor.

This session will appeal to non-coders and coders alike who want to contribute to OpenStack, as well as anyone interested in understanding the breadth and depth of the IBM commitment to OpenStack.

13:50–14:30
Room 212/213
A Use Case Driven View of IBM’s OpenStack Based Offerings

Moe Abdula, Vice President of Cloud Strategy responsible for providing direction across the cloud portfolio, will introduce the use cases driving OpenStack adoption and the corresponding IBM solution. He will cover IBM Cloud Manager with OpenStack (the IBM hardened OpenStack distribution), IBM PowerVC Virtualization Center (virtualization management for Power Systems), IBM Cloud OpenStack Service (private, hosted, and managed version of OpenStack on IBM SoftLayer), and IBM Cloud Orchestrator (orchestrated delivery of cloud services).

This session will appeal to anyone looking to understand some of the available options for deploying OpenStack.

14:40–15:20
Room 212/213
IBM OpenStack Offerings in Action

Building on the previous session, Moe will show these IBM offerings in action, starting from building an OpenStack installation through delivering production workloads on top.

This session will appeal to anyone wanting to see the power of OpenStack in action, especially deployment specialists and application builders.

Between the IBM technical and sponsor sessions, this should prove to be an informative and exciting OpenStack Summit! If you still have questions on any of these sessions, find me on Twitter @mjfork.

(Related: IBM’s contributions to OpenStack go beyond the code)

The post IBM to host sponsor sessions at the OpenStack Summit in Paris appeared first on Thoughts on Cloud.

by Michael J. Fork at October 27, 2014 02:04 PM

Opensource.com

OpenStack Kilo planning, Juno reviews, and more

Interested in keeping track of what's happening in the open source cloud? Opensource.com is your source for what's happening right now in OpenStack, the open source cloud infrastructure project.

by Jason Baker at October 27, 2014 09:00 AM

October 26, 2014

Cloudwatt

How to use Affinity and Anti-Affinity in OpenStack Icehouse

Starting with the Icehouse release, OpenStack allows a user to explicitly specify whether or not a group of virtual machines (VMs) should share the same host. These policies are called “Affinity” and “Anti-Affinity”.

  • Affinity: the policy that forces Nova to host all the VMs of the group on the same hypervisor.
  • Anti-Affinity: the policy that forces Nova to host each VM of the group on a different hypervisor.

Although the Affinity and Anti-Affinity features already appeared in Havana, they were mutually exclusive: the administrator had to choose which one to enable. Icehouse is the first release in which the two features coexist, with the choice of policy left to the user.

Affinity

Affinity and Anti-Affinity in Icehouse

OpenStack Icehouse introduces a new notion: the ServerGroup. A ServerGroup is a collection of VMs with some relation between them. The idea is for the user to define a common policy (Affinity or Anti-Affinity) that will be applied to all members of a group. A newly added member will automatically inherit the policy defined in the group. The user creates a group with a policy, then creates the VMs in the group. OpenStack Icehouse does not yet support adding an existing VM to, or removing it from, a ServerGroup; a VM can only be added to a group at creation time. Deleting a VM belonging to a group will automatically remove it from the group’s member list.

For instance, Fig.1 illustrates four groups: two Affinity groups and two Anti-Affinity groups. All VMs in each Affinity group are hosted on the same hypervisor, while no two VMs of the same Anti-Affinity group are hosted on the same hypervisor.

For the administrator: How to enable Affinity/Anti-Affinity

Enabling Affinity and Anti-Affinity is simply done by adding ServerGroupAffinityFilter and ServerGroupAntiAffinityFilter to scheduler_default_filters. By default they are already included, so if you haven’t touched this parameter, they are already enabled.

scheduler_default_filters = ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter

For the user: How to use Affinity/Anti-Affinity

The OpenStack API includes a new set of commands for managing ServerGroups: creating, listing and deleting them. At the moment it is not possible to modify the policy of a ServerGroup.

1- Creating ServerGroup:

With nova-client:

nova server-group-create <group_name> [< policy >]
  • < group_name >: Name of the ServerGroup
  • < policy >: Policy of the ServerGroup (“affinity” or “anti-affinity”)

Example:

nova server-group-create gr-anti anti-affinity

With REST API:

POST /v2/< tenant_id >/os-server-groups

Message content:

{
    "server_group": {
        "name": "< group_name >",
        "policies": ["< policy >"]
    }
}

In which:

  • < tenant_id >: Tenant ID
  • < group_name >: Name of the ServerGroup
  • < policy >: Policy of the ServerGroup (affinity or anti-affinity)

Example :

curl  -i 'http://172.16.40.20:8774/v2/b4abcf06398a4a1aabbb439d88ba54d1/os-server-groups' -X POST -H "Accept: application/json" -H "Content-Type: application/json" -H "X-Auth-Token: b8b080f9bb6b707b94ca4c6180f8848816310c02" -d '{"server_group": {"name": "gr-anti-aff", "policies": ["anti-affinity"]}}'

2- Getting all ServerGroups:

With nova-client:

nova server-group-list

With REST API:

GET /v2/< tenant_id >/os-server-groups

In which:

  • < tenant_id >: Tenant ID

Example :

curl  -i http://172.16.40.20:8774/v2/b4abcf06398a4a1aabbb439d88ba54d1/os-server-groups -X GET  -H "Accept: application/json" -H "Content-Type: application/json" -H "X-Auth-Token: b8b080f9bb6b707b94ca4c6180f8848816310c02"

3- Getting a ServerGroup’s details:

With nova-client:

nova server-group-get < group_id >

In which:

  • < group_id >: ID of the ServerGroup

Example:

nova server-group-get aa8a69f4-2567-47c6-a28c-2934a8c7959c

With REST API:

GET /v2/< tenant_id >/os-server-groups/< group_id >

In which:

  • < tenant_id >: Tenant ID
  • < group_id >: ID of the ServerGroup

Example :

curl  -i http://172.16.40.20:8774/v2/b4abcf06398a4a1aabbb439d88ba54d1/os-server-groups/aa8a69f4-2567-47c6-a28c-2934a8c7959c -X GET  -H "Accept: application/json" -H "Content-Type: application/json" -H "X-Auth-Token: b8b080f9bb6b707b94ca4c6180f8848816310c02"

4- Deleting a ServerGroup:

With nova-client:

nova server-group-delete < group_id >

In which:

  • < group_id >: ID of the ServerGroup

Example:

nova server-group-delete aa8a69f4-2567-47c6-a28c-2934a8c7959c

With REST API:

DELETE /v2/< tenant_id >/os-server-groups/< group_id >

In which:

  • < tenant_id >: Tenant ID
  • < group_id >: ID of the ServerGroup

Example :

curl  -i 'http://172.16.40.20:8774/v2/b4abcf06398a4a1aabbb439d88ba54d1/os-server-groups/aa8a69f4-2567-47c6-a28c-2934a8c7959c' -X DELETE  -H "Accept: application/json" -H "Content-Type: application/json" -H "X-Auth-Token: b8b080f9bb6b707b94ca4c6180f8848816310c02"

5- Creating a virtual machine in a ServerGroup:

This is done by simply adding the ServerGroup ID into the VM creation command.

With nova-client: Add ServerGroup ID into nova boot:

        --hint group=< group_id >

In which:

  • < group_id >: Group ID of the ServerGroup

Example:

nova boot  --image ubuntu --flavor m1.small --hint group=aa8a69f4-2567-47c6-a28c-2934a8c7959c vm1

With REST API: Add the following to the nova POST /servers request:

        "os:scheduler_hints": {"group": "< group_id >"}

In which:

  • < group_id >: ID of the ServerGroup

Example :

curl -i 'http://172.16.40.20:8774/v2/b4abcf06398a4a1aabbb439d88ba54d1/servers' -X POST -H "Accept: application/json" -H "Content-Type: application/json" -H "X-Auth-Token: b8b080f9bb6b707b94ca4c6180f8848816310c02" -d '{"server": {"min_count": 1, "max_count": 1, "flavorRef": "1", "imageRef": "3278bfa3-3102-4863-af56-e3d0dbea82fe", "name": "vm1"}, "os:scheduler_hints": {"group": "aa8a69f4-2567-47c6-a28c-2934a8c7959c"}}'

Important notes:

1- Deleting a ServerGroup does not delete any VM of the group.

2- The ServerGroup API is available from python-novaclient version 2.17.0.6 or above. You can install the latest version of python-novaclient via pip:

pip install python-novaclient

3- Users can verify whether their VMs are hosted on the same hypervisor by looking at the “hostId” field in the result of the command:

nova show < instance_id >

in which < instance_id > is the ID of the VM. This field shows a code corresponding to a hypervisor: if two VMs of a user have the same hostId, they are hosted on the same hypervisor. The field is user-dependent: different users will see different codes for the same hypervisor.
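
For example, to compare two VMs quickly (a sketch; the instance names are placeholders):

nova show vm1 | grep hostId
nova show vm2 | grep hostId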

4- The use of the Affinity and Anti-Affinity policies is limited by resource capacity. If there is no room left to satisfy the policy, OpenStack will return a simple NoHostError (“No valid host was found”). The Affinity policy is limited by the available resources of the hypervisor that currently hosts the group, while Anti-Affinity is limited by the number of available hypervisors in the datacenter (after filtering).

For instance, in Fig.1 the user cannot create a new VM in the group Affinity-1 since there is no space left on Hypervisor 1; a NoHostError will be returned if the user tries to create a new VM in this group. The user can, however, create a new VM in group Affinity-2 since there are still resources available on Hypervisor 2.

By the same logic, it is impossible to create a new VM in the group Anti-Affinity-1, since every hypervisor already hosts at least one VM of that group. Creating a new VM in the group Anti-Affinity-2 is still possible; the new VM will be hosted on Hypervisor 4, the only one that does not yet host a VM of this group.

by Toan at October 26, 2014 11:00 PM

Sébastien Han

Interested in Ceph? Join us at the OpenStack summit in Paris!

The next OpenStack summit is just around the corner and as usual Josh Durgin and I will lead the Ceph and OpenStack design session. This session is scheduled for November 3 from 11:40 to 13:10, find the description link here. The etherpad is already available here so don’t hesitate to add your name to the list along with your main subject of interest. See you in Paris!

October 26, 2014 12:46 PM

Cloud Platform @ Symantec

Change endpoint IP addresses after OpenStack installation with DevStack

When working with DevStack, I occasionally run into situations where the IP address of the VM changes due to external factors.  To get back up and running with the new IP address, endpoints for the OpenStack services need to be updated in configuration files and the database.  For example, you'll need to update this value in the nova.conf file:

 

auth_uri = http://[old IP address]:5000/v2.0

 

and change the IP to the new address.  Updating the IP addresses can be automated by running the unstack.sh command and then rerunning stack.sh, but this will destroy any custom updates you've made to the database during development and will remove other objects you've created in OpenStack as well.  Updating each one manually is a painful process, so this blog post contains a few simple commands to change IP addresses for all endpoints without having to restack the environment.

 

Prerequisites

You have a single node DevStack installation using mysql for the database that was working properly before the IP address changed.  If you have important data in your environment that can't be lost, make sure to take a backup of all configuration files, data files, and databases before making further changes to your environment.

 

Shut Down Services

 

Stop all the running OpenStack services with the unstack.sh script:

 

~/devstack$ ./unstack.sh

 

Modifications

 

To modify the endpoints in the database, you'll need to update values in the endpoints table of the keystone schema.

Log into mysql:

 

~/devstack$ mysql -u root -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 267
Server version: 5.5.40-0ubuntu0.12.04.1-log (Ubuntu)

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

 

Switch to the keystone schema:

 

mysql> use keystone
Database changed

 

Modify endpoint IPs:

 

mysql> update endpoint set url = REPLACE(url, '[old IP address]', '[new IP address]');

 

The number of rows this command updates will vary by how many endpoints your installation uses.  In my case, it was 30.

 

Validate that the endpoint urls were updated:

 

mysql> select url from endpoint;

 

Log out of mysql:

 

mysql> quit
Bye

 

Update configuration files in /etc:

 

$ grep -rl '[old IP address]' /etc | xargs sed -i 's/[old IP address]/[new IP address]/g'

 

Update configuration files in /opt/stack (necessary if you've got tempest or other components that put configurations in /opt/stack):

 

$ grep -rl '[old IP address]' /opt/stack | xargs sed -i 's/[old IP address]/[new IP address]/g'

 

Check whether .my.cnf in your home directory needs an update.  In my case, the IP address in this file was set to 127.0.0.1, so it didn't need an update.
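
A quick way to check (a sketch, using the same placeholder convention as the commands above):

$ grep '[old IP address]' ~/.my.cnf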

 

Restart Services

 

Run the rejoin-stack.sh command, which starts the services again without making changes to the database or configuration files:

 

~/devstack$ ./rejoin-stack.sh

 

The services will now come up using the new IP address definitions.  I hope this saves you some time working with DevStack!

by brad_pokorny at October 26, 2014 12:06 AM

October 24, 2014

OpenStack Blog

OpenStack Community Weekly Newsletter (Oct 17 – 24)

OpenStack Startup/Venture Capital Ecosystem – it’s real and coming to Paris!

Recently OpenStack has been generating financial headlines with the acquisitions of OpenStack ecosystem startups eNovance, Metacloud, Cloudscaling and OpenStack veteran Mirantis raising $100M in venture capital this week. At the OpenStack Summit in Paris next week, we are launching a new track called “CloudFunding” where we will hear from startups that have been successful in attracting essential capital and ventures capitalists who are actively investing in OpenStack startups.

OpenStack Foundation Staffing News!

The Board of Directors approved the promotion of Lauren Sell to Vice President of Marketing and Community Services. Lauren has been instrumental in the growth of OpenStack from the beginning. Thierry Carrez, who has managed the OpenStack releases from the beginning, has taken on the role of Director of Engineering and is building out a team of technical leaders. Be sure to check out our open positions if you’d like to join our team!

Peer Reviews for Neutron Core Reviewers

Food for thought from members of the Neutron community: they have started an exploration of how to improve the process by which we understand a core’s responsibilities, and the process by which we can judge how cores are performing against that standard. Join the conversation and comment on Neutron PTL Kyle Mestery’s blog post.

Numerical Dosimetry in the cloud

What’s the connection between a dentist’s chair and OpenStack? Fascinating post by Patrik Eschle about the practical uses of the clouds we’re building.

The Road To Paris 2014 – Deadlines and Resources

Full access sold out! Only a few spots left for Keynotes and Expo Hall passes.

Report from Events

Relevant Conversations

Tips ‘n Tricks

Security Advisories and Notices

Upcoming Events

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

Welcome New Reviewers, Developers and Core Reviewers

Dmitry Nikishov Peng Xiao
Weidong Shao Jiri Suchomel
Roman Dashevsky Amaury Medeiros
Peng Xiao Chris Grivas
M. David Bennett Sridhar Ramaswamy
Edmond Kotowski Jun Hong Li
Amandeep Jorge Niedbalski
Wayne Warren Alan Erwin
Amaury Medeiros Y L Sun
Vijayaguru Guruchave
Sagar Damani
Daniel Wakefield

OpenStack Reactions

getting-told-its-feature-free

Getting told that this feature is not going to be accepted before next release

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at October 24, 2014 09:31 PM

Red Hat Stack

Delivering the Complete Open-Source Cloud Infrastructure and Software-Defined-Storage Story

Authored by Neil Levine, Director Product Marketing, Red Hat and Sean Cohen, Principal Technical Product Manager, Red Hat

The OpenStack summit in Paris not only marks the release of Juno to the public but also the 6 month mark since Red Hat acquired Inktank, the commercial company behind Ceph. The acquisition not only underscored Red Hat’s commitment to use open source to disrupt the storage market, as it did in the operating system market with Linux, but also its investment in OpenStack where Ceph is a market leading scale-out storage platform, especially for block.

Even prior to the acquisition, Inktank’s commercial product – Inktank Ceph Enterprise – had been certified with Red Hat Enterprise Linux OpenStack Platform and over the past 6 months, the product teams have worked to integrate the two products even more tightly.
The first phase of this work has been focused on simplifying the installation experience. The new Red Hat Enterprise Linux OpenStack Platform installer now handles configuration of the Ceph components on the controller and compute side, from installing the packages to configuring Cinder, Glance and Nova to creating all the necessary authentication keys. With the Ceph client-side components now directly available in RHEL OpenStack Platform, much of what was a manual effort has now been transformed & automated. In addition the RHEL OpenStack Platform installer also takes responsibility for the configuration of the storage cluster network topology and will boot and configure the hosts that will be used by the Ceph storage cluster.

The Inktank Ceph Enterprise installer has also been modified to take pre-seeded configuration files from RHEL OpenStack Platform and use them to build out the storage cluster. With some of the Ceph services architected to run co-resident on the controller nodes, the number of physical nodes needed has been reduced without sacrificing security or performance.

The benefits to the customer are a 100% open-source cloud infrastructure for compute and software-defined storage – both block (via the Ceph block device) and object (via the Ceph Object Gateway) – with a solution that is backed by Red Hat’s extensive QA and global support services team. With the largest number of committers to both OpenStack Juno and Ceph, Red Hat is the only vendor able to deliver full top-to-bottom support for the combined stack with a “single throat to choke” model.

Looking ahead to the next six months, customers should expect many of the Ceph-specific features of Juno to arrive in the next version, Red Hat Enterprise Linux OpenStack Platform 6. In particular, full support for ephemeral volumes backed by the Ceph block device (RBD) means that Ceph can handle all block storage for both Nova and Cinder, opening the possibility of ‘diskless’ compute nodes and near-instantaneous booting of new VMs. A second phase of integration work around the installation process is also starting, with the goal of creating a single, unified installation workflow for both RHEL OpenStack Platform and Inktank Ceph Enterprise that will allow for flexible topologies and “touch of a button” scale-out.
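
To illustrate what RBD-backed ephemeral storage means on the compute side, the libvirt driver is simply pointed at a Ceph pool; a minimal sketch, with pool, user and UUID values as placeholders:

# /etc/nova/nova.conf on a compute node -- ephemeral disks served from Ceph
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

With the image type set to rbd, new instances clone their root disks from images already stored in Ceph, which is where the near-instantaneous boot comes from.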

Within the Ceph project, there is an ongoing focus on improving performance in general, which will be of particular interest to customers looking to back Trove databases with RBD storage. Finally, during the Kilo cycle, Red Hat’s Ceph and OpenStack teams will be focusing on blueprint work that extends the multi-site and disaster recovery options. Here, the ongoing work around RBD mirroring and the volume replication capabilities in OpenStack Cinder are high on the list – though extending Glance to handle legacy image types and prepare them for RBD will also receive attention.

You can see a list of all of the Ceph related sessions on the OpenStack Summit website and developers are invited to attend the Ceph design session held at the summit. The week before the summit the Ceph project will also be running its regular Ceph Design Summit for the upcoming Hammer release.

Red Hat’s goal to be the default vendor for the next generation of IT architectures is becoming a practical reality through its support of and deep investment in OpenStack and Ceph. As ever, we welcome feedback on where customers would like to see deeper integration, and we look forward to seeing everyone at the upcoming Paris summit.

The new Red Hat Enterprise Linux OpenStack Platform installer deploying Ceph

by neilwlevine at October 24, 2014 12:11 PM

OpenStack Blog

OpenStack Trainings and Certifications Available in Paris

It’s almost here!  The OpenStack Summit in Paris is just around the corner and we wanted to update you on some upcoming OpenStack Training and Certification classes that will take place in Paris around the Summit dates.  For those of you traveling, you might want to take advantage of these offers and make the most of your visit.

 

Training Offerings:

Mirantis Training – OpenStack Bootcamp

  • Dates: October 29 – 31, 2014 (week prior to OpenStack Paris Summit)
  • Time: 9:00 a.m. – 5:00 p.m.
  • Location: 27/29 rue Bassano – 75008 Paris, France

Red Hat OpenStack Administration Training and exam bundle: 50% discount

  • Dates: October 27 – 31, 2014 (week prior to OpenStack Paris Summit)
  • Fee: 50% discount for OpenStack Summit attendees. Register now using code RHOSS to unlock the discount.
  • Location: eNovance 11 bis rue Roquépine – 75008 Paris, France ‘Ada Lovelace’ Room, Ground floor

 

Certification Exams:

Free Red Hat OpenStack Exams: Red Hat Certified System Administrator in Red Hat OpenStack exam

  • Session 1 Registration
  • Session 2 Registration
  • Date: November 6, 2014 (after OpenStack Paris Summit, onsite registration available)
  • Times: Session 1: 9:30 a.m. – 1:00 p.m., Session 2: 2:30 p.m. – 6:00 p.m.
  • Fee: Free exams for OpenStack Summit attendees. Use code RHOSS to receive the promotional pricing.
  • Location: eNovance 11 bis rue Roquépine – 75008 Paris, France ‘Ada Lovelace’ Room, Ground floor

 

Mirantis Certification

  • Dates: November 5, 2014 (after OpenStack Paris Summit, onsite registration available)
  • Time: 9:00 a.m.- 5:00 p.m.
  • Location: 27/29 rue Bassano – 75008 Paris, France

 

If you have any questions regarding the above Training and Certifications, please contact the Member companies directly for more information.  Can’t wait to see you in Paris!

by Allison Price at October 24, 2014 11:00 AM

Michael Still

Specs for Kilo

Here's an updated list of the specs currently proposed for Kilo. I wanted to produce this before I start travelling for the summit in the next couple of days because I think many of these will be required reading for the Nova track at the summit.

API

  • Add instance administrative lock status to the instance detail results: review 127139 (abandoned).
  • Add more detailed network information to the metadata server: review 85673.
  • Add separated policy rule for each v2.1 api: review 127863.
  • Add user limits to the limits API (as well as project limits): review 127094.
  • Allow all printable characters in resource names: review 126696.
  • Expose the lock status of an instance as a queryable item: review 85928 (approved).
  • Implement instance tagging: review 127281 (fast tracked, approved).
  • Implement tags for volumes and snapshots with the EC2 API: review 126553 (fast tracked, approved).
  • Implement the v2.1 API: review 126452 (fast tracked, approved).
  • Microversion support: review 127127.
  • Move policy validation to just the API layer: review 127160.
  • Provide a policy statement on the goals of our API policies: review 128560.
  • Support X509 keypairs: review 105034.


Administrative

  • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705 (abandoned).
  • Enforce instance uuid uniqueness in the SQL database: review 128097 (fast tracked, approved).


Containers Service



Hypervisor: Docker



Hypervisor: FreeBSD

  • Implement support for FreeBSD networking in nova-network: review 127827.


Hypervisor: Hyper-V

  • Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190 (approved).


Hypervisor: Ironic



Hypervisor: VMWare

  • Add ephemeral disk support to the VMware driver: review 126527 (fast tracked, approved).
  • Add support for the HTML5 console: review 127283.
  • Allow Nova to access a VMWare image store over NFS: review 126866.
  • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (fast tracked, approved).
  • Enable the mapping of raw cinder devices to instances: review 128697.
  • Implement vSAN support: review 128600 (fast tracked, approved).
  • Support multiple disks inside a single OVA file: review 128691.
  • Support the OVA image format: review 127054 (fast tracked, approved).


Hypervisor: libvirt



Instance features



Internal

  • Move flavor data out of the system_metadata table in the SQL database: review 126620 (approved).
  • Transition Nova to using the Glance v2 API: review 84887.


Internationalization

  • Enable lazy translations of strings: review 126717 (fast tracked).


Performance

  • Dynamically alter the interval nova polls components at based on load and expected time for an operation to complete: review 122705.


Scheduler

  • Add an IOPS weigher: review 127123 (approved).
  • Add instance count on the hypervisor as a weight: review 127871 (abandoned).
  • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530 (abandoned).
  • Convert the resource tracker to objects: review 128964 (fast tracked, approved).
  • Create an object model to represent a request to boot an instance: review 127610.
  • Decouple services and compute nodes in the SQL database: review 126895.
  • Implement resource objects in the resource tracker: review 127609.
  • Isolate the scheduler's use of the Nova SQL database: review 89893.
  • Move select_destinations() to using a request object: review 127612.


Security

  • Provide a reference implementation for console proxies that uses TLS: review 126958 (fast tracked).
  • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.


Tags for this post: openstack kilo blueprint spec
Related posts: One week of Nova Kilo specifications; Compute Kilo specs are open; On layers; Juno nova mid-cycle meetup summary: slots; My candidacy for Kilo Compute PTL; Juno nova mid-cycle meetup summary: nova-network to Neutron migration

Comment

October 24, 2014 03:27 AM

October 23, 2014

Mirantis

Thakker: Early bet on Mirantis validated by $100M in new funding

Guest Post by Dharmesh Thakker, Intel Capital

At Intel Capital, where I manage Enterprise Software investments, we have a long-term view on secular shifts in computing, working alongside strategic partners and industry leaders like VMware, Cisco, HP and Amazon Web Services. A couple of years ago, we spent a fair bit of time evaluating cloud operating frameworks that provide the same flexibility, cost structure and agility as Linux did for x86 infrastructure a decade ago.

We had conviction in OpenStack, and we were the first investors in Mirantis in early 2013 – driven mostly by the team’s clarity of vision, deep technical bench and open source community influence. Just 18 months later, and on the back of solid customer acquisition, the company this week announced a $100M Series B round. I’m glad to join forces with Insight Venture Partners and August Capital, who’ve been behind Splunk, Docker, New Relic and others, to collectively help drive Mirantis as a key ingredient of hyper-scale cloud infrastructure.

The Big Deal That Almost Wasn’t!

I first heard about Mirantis from our investment team in Moscow. At Intel Capital, we believe innovation is global, and open source software has democratized enterprise innovation. As a result, we have investors looking for enterprise opportunities across Israel, Eastern Europe, India and China.

My team had gotten wind of a company in Saratov, more than 500 miles from the Russian capital. Once a fortress city that protected trade along the Volga River, today it boasts some of the best engineering schools in the country.

Mirantis had some of the brightest minds on staff, many with Ph.D. degrees in math and computer science, including a couple of worldwide gold medalists in ACM software contests. After hearing their founder’s presentation at a scantily attended OpenStack conference in late 2012, I boarded a plane to Moscow and then Saratov (on a retired Soviet jet – the only way to get there!) to spend a week with the core engineering team.

At first glance, the team didn’t have the typical attributes venture investors look for. What they did have is clarity of vision to jump on OpenStack two years ahead of the market; recognition of the fact that they had a once-in-a-lifetime opportunity to build a lasting enterprise; and a relentless drive to make the most of that opportunity.

The three founders complemented each other very well: Alex Freedland could find and coach stellar engineers, Boris Renski had strong influence on the open source community, and Adrian Ionel combined product vision and stellar sales and partnership execution. We decided to lead the company’s $10M Series A round, and we’ve been working alongside them as they’ve increased revenues manyfold and acquired more than 130 OpenStack customers. Those include leading enterprises (like Comcast, Expedia and Symantec) as well as leading telecommunications/cloud service providers (such as Huawei, NTT, Orange and Tata Communications). The company also signed the largest OpenStack deal to date with Ericsson. And it has expanded its presence across China and Europe (Poland, France, Netherlands) to best support that global customer base.

Onward and Upward

As Moore’s Law continues to march forward, it’s delivering the processing power to enable the next-gen, software-defined datacenter. OpenStack, I believe, unleashes the power of hyper-scale SD infrastructure, providing the flexibility and IT agility most enterprises today seek. Having been through a similar transition from vertically integrated mainframes to Linux enabled x86 infrastructure, Intel Capital is familiar with the opportunity the massive OpenStack transition represents.

From here on, the Mirantis team has enormous opportunity to support bare-metal, virtual and container-oriented infrastructure at the core. Along the way, our partnership is focused on integrating with orchestration, performance management and apps-deployment tools at higher levels of the stack.

We’re in the early stages of the datacenter stack fundamentally being re-written. The Mirantis team is working hard with other OpenStack ecosystem vendors to enable that evolution. The $100M in new funding from top-tier partners – Insight and August – isn’t just a validation of our early bet on Mirantis, but it’s also a key driver that will accelerate OpenStack’s evolution within enterprise hybrid clouds.

We look forward to helping the Mirantis team on the next leg of that exciting journey.

Dharmesh Thakker leads Cloud Software and Big Data Investments globally at Intel Capital. If you have an interesting application in these areas, drop him a note at: dharmesh.d.thakker@intel.com.

This blog is cross-posted from Intel Capital.

The post Thakker: Early bet on Mirantis validated by $100M in new funding appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Guest Post at October 23, 2014 09:44 PM

OpenStack Blog

OpenStack Startup/Venture Capital Ecosystem – it’s real and coming to Paris!

Recently OpenStack has been generating financial headlines with the acquisitions of OpenStack ecosystem startups eNovance, Metacloud, Cloudscaling and OpenStack veteran Mirantis raising $100M in venture capital this week.

Startups have always been critical to driving innovation in software and platforms, and OpenStack has spawned a vibrant ecosystem of startups that are finding new ways to serve customer needs and deliver value on top of OpenStack.  Access to capital to support development and growth within these companies is critical.  Storm Ventures recently gathered data on OpenStack startups and venture activity, estimating the number of startups currently offering OpenStack-related products at 63.  These companies have collectively raised approximately $1.8 billion from corporate and traditional venture investors.  And these investors are already starting to see returns.

Storm Ventures OpenStack infographic

At the OpenStack Summit in Paris next week, we are launching a new track called “CloudFunding” where we will hear from startups that have been successful in attracting essential capital and from venture capitalists who are actively investing in OpenStack startups.  We hope you’re as excited about this new track as we are and will join us at these sessions to learn more.

by Heidi Bretz at October 23, 2014 09:33 PM

IBM OpenStack Team

Node.js takes a positive step toward open governance

This morning, the Node community announced it is moving toward open governance. If you haven’t heard the news, Node.js is establishing an Open Governance Advisory Board. This board will advise the community as it develops its new governance structure and roadmap, and will also provide a forum for the broader Node.js community to discuss how to drive greater adoption of the technology. IBM is pleased to be named as one of the Advisory Board member companies and is committed to helping the community move forward towards full open governance.

The short story is that the establishment of an advisory board is a first step toward full open governance. It’s an important step, and one that the Node community is embarking on eagerly for all the right reasons.

Enterprises are grappling with the convergence of technologies like cloud, analytics, mobile and social business.  This convergence is a phenomenon that will literally transform industries. It’s only fitting that it transforms the communities where the component technologies are developed and managed.

The Node.js community is a vibrant, talented group, eager to bring even more innovation to the forefront of today’s enterprise transformation, but its governance was holding it back from reaching its full potential.

Over the years, Node contributions have come from a global community and the members have built a technology that enterprises are starting to adopt as a JavaScript runtime. The same enterprises that realize the power Node brings to their dynamic hybrid clouds increasingly recognize that without full open governance, these technologies can present as much risk to their enterprise success as proprietary clouds.

Supporting and building on open technologies is in our DNA. We like to say that IBM is open by design. The IBM portfolio of offerings is strongly influenced by the innovations developed in collaboration with open communities. IBM can attest to the need for high quality governance for these communities. As an active participant in open technology development for over 15 years, we know the most successful communities find a path to an inclusive model to drive rapid community growth and success.

Today’s move sets the Node community on a path to deliver real benefits. Expect the Node community to expand both in size and in the diversity of the development community making technical contributions. This will unleash the creativity of the entire community and enable Node to address more customer scenarios and operating environments than it currently does. With success, we can expect accelerated adoption of Node in the enterprise, as enterprises are much more likely to adopt a technology as mission critical if it is managed in an open and transparent manner. Together, these two factors will unleash a “virtuous cycle” that will drive the Node ecosystem to the next level of industry adoption.

I can’t overstate the value of a level playing field for open communities. Projects where all contributors have voice in how the technology evolves and where all adopters are empowered are good for innovation. Congratulations to the entire Node community. And thank you for inviting IBM to begin this journey with you.

What does Node bring to the Enterprise architecture?

Node helps with orchestrating back-end application services (e.g. RESTful APIs). This makes it a good fit for enterprise applications that must support a diverse set of client devices, in both the “on premises” and cloud deployment scenarios. Node’s large ecosystem of reusable modules, managed via the Node Package Manager, promises to accelerate enterprise development even further, and the ability to use the same programming language on the client and server side allows enterprise development teams to collaborate even more closely and deliver better results. For all these reasons, IBM has been an enthusiastic supporter of the Node community and has demonstrated that commitment by porting Node to multiple IBM platforms (available for download here).

The post Node.js takes a positive step toward open governance appeared first on Thoughts on Cloud.

by Angel Luis Diaz at October 23, 2014 07:15 PM

Rob Hirschfeld

Unicorn captured! Unpacking multi-node OpenStack Juno from ready state.

OpenCrowbar Packstack install demonstrates that abstracting hardware to ready state smooths install process.  It’s a working balance: Crowbar gets the hardware, O/S & networking right while Packstack takes care of OpenStack.

The Crowbar team produced the first open OpenStack installer back in 2011 and it’s been frustrating to watch the community fragment around building a consistent operational model.  This is not an OpenStack specific problem, but I think it’s exaggerated in a crowded ecosystem.

When I step back from that experience, I see an industry wide pattern of struggle to create scale deployments patterns that can be reused.  Trying to make hardware uniform is unicorn hunting, so we need to create software abstractions.  That’s exactly why IaaS is powerful and the critical realization behind the OpenCrowbar approach to physical ready state.

So what has our team created?  It’s not another OpenStack installer – we just made the existing one easier to use.

We build up a ready state infrastructure that makes it fast and repeatable to use Packstack, one of the leading open OpenStack installers.  OpenCrowbar can do the same for the OpenStack Chef cookbooks or Salt Formula.  It can even use SaltStack, Chef and Puppet together (which we do for the Packstack work)!  Plus, we can do it on multiple vendors’ hardware and with different operating systems.  And we build the correct networks!

For now, the integration is available as a private beta (inquiries welcome!) because our team is not in the OpenStack support business – we are in the “get scale systems to ready state and integrate” business.  We are very excited to work with people who want to take this type of functionality to the next level and build truly repeatable, robust and upgradable application deployments.


by Rob H at October 23, 2014 06:51 PM

Mirantis

Whats New In OpenStack Juno Q&A

Last week we held our semi-annual “What’s New in OpenStack” webinar to talk about the major new features in OpenStack’s Tenth Release, Juno.  As always we had a great response, and we didn’t have a chance to get to all of the questions, so we’re providing all of the questions and answers here.

The webinar was hosted by me and my colleague, Christopher Aedo.  If you missed it, you can still catch the replay.

Why didn’t you cover TripleO?

Nick Chase: According to the Foundation’s definitions, TripleO isn’t so much a “project” as it is a deployment tool used to deploy OpenStack.  Since we already had a plethora of projects to cover, we elected to skip it.  That’s not to say that the team isn’t working hard; I understand they’ve accomplished a lot this cycle.

What is QuintupleO?

Nick Chase: Ah, QuintupleO.  TripleO, or OpenStack on OpenStack, is designed to let people use OpenStack to deploy OpenStack, so that they don’t have to go through the fuss of wrangling bare metal servers into submission.  However, not everybody has bare metal servers to work with. QuintupleO, or OpenStack on OpenStack on OpenStack, is designed to create a virtual environment where you can create a virtual OpenStack on top of OpenStack.  (Did we say “OpenStack” enough in that answer?)

Do the old L2 agents work with the new ML2 plugin?

Christopher Aedo: Yes!

How hard is it to upgrade from one version of OpenStack to another, really?

Nick Chase: Depending on how you’ve got things architected, not as hard as people would like you to believe.  Teams have been working hard to make this easier, and every project comes with a script for upgrading its database.  We’ve got a blog post on one method coming up shortly, and we’ve just published another about an open source project, Pumphouse, that will make it easier as well.
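
For a concrete idea of what those per-project scripts look like, the database step of an upgrade is typically a series of manage commands run after the new packages are installed; a sketch only, as exact commands and options vary by project and release:

keystone-manage db_sync
glance-manage db_sync
nova-manage db sync
cinder-manage db sync
neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugin.ini upgrade head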

What is Spark?

Nick Chase: Spark is Apache Spark, a data processing tool similar to Hadoop.

When will Fuel support Juno?

Christopher Aedo:  Mirantis OpenStack 6.0, which includes Fuel, is due for a preview release supporting Juno at the end of October, with general availability in November.

I have questions about HA, please help me (maybe at the end of your presentation): (1) How can I do HA for instances (VMs)? (2) Which practical solution for block storage? (3) How can I use multiple public IP ranges, or the same idea with multiple L3 agents? (4) HA for multiple L3 agents?

Christopher Aedo:

1) For help designing HA for instances, see Designing for the cloud.

2) Practical solutions for block storage: Ceph is a good choice, though there are many commercial options including SolidFire, EMC, Equalogic…

3) Public IP pools allow you to add multiple distinct ranges of IP addresses; just add them to additional pools (see the example after this list)

4) Not a question really, so I’ll go with … YES!
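
For the public IP ranges in question 3, with Neutron an additional range can be added as another subnet on the existing external network; a sketch, with the network name and addresses as placeholders:

neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet2 \
    --disable-dhcp --gateway 203.0.113.1 \
    --allocation-pool start=203.0.113.10,end=203.0.113.250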

But what IS NFV?

Nick Chase: NFV, or Network Function Virtualization, is one level down from Software Defined Networking.  Rather than defining the networks themselves, it provides programmatic control over what actually goes on in those networks, such as load balancing or firewalls.  It’s designed to enable chaining of these services so that large operators can remove dedicated hardware and become more agile.

Is neutron ready for production? Which opensource backend plugin is recommended, ovs?

Christopher Aedo: Yes, in fact there are several really large clouds running Neutron in production.  As for which plugin, right now it’s OVS but we may see this change as vendors get more involved.

Will there ever be an automatic evacuation instance feature in case of a nova-compute failure?

Christopher Aedo: It’s unlikely to be a native OpenStack feature, as it would require compute nodes to use shared storage for VMs, but the tooling is there to enable this today (for instance, using Ceph for VM root storage). A monitor outside the OpenStack environment can watch the health of the compute node and evacuate when appropriate.
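
As a sketch of the final step, once an external monitor decides a compute node is gone, the rebuild can be driven with the standard client; instance and host names are placeholders, and --on-shared-storage assumes Ceph or similar shared instance storage:

nova evacuate --on-shared-storage <instance-uuid> <target-compute-host>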

Are the networking events hooks only for nova networking or for both nova and neutron networking?

Nick Chase: As far as I know this is only available for nova-network, but if I’m wrong, please say so in the comments.

What does L3 HA really mean?

Nick Chase: L3 agents are responsible for deploying virtual routers.  Unfortunately, if a host goes down, the routers they deployed are all lost. L3 HA is the process of making sure that they stay up, and if they go down, making sure that the router functionality doesn’t disappear.

What is the difference between IP assignment in IPv4 and IPv6?

Nick Chase: In IPv4, IP addresses are either static, as in assigned specifically on a device or interface, or dynamic, as in assigned by a DHCP server.  In that case, the DHCP server has to keep track of what IP addresses are available.  In IPv6, it’s a whole other ballgame.  First, the address space is MUCH larger.  Second, the subnet portion is typically advertised by the router, and the device assigns itself the host portion based on its MAC address.  (There are actually several ways to do IP assignment in IPv6, but that’s the gist of it.)
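
In Neutron terms, the SLAAC flavour of IPv6 assignment looks roughly like this when creating a subnet (the network name and prefix are placeholders):

neutron subnet-create --ip-version 6 --name v6-subnet \
    --ipv6-ra-mode slaac --ipv6-address-mode slaac \
    private-net 2001:db8:1234::/64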

Whats a good place to learn more about the dvr and how it works.. ?

Christopher Aedo: You can try the OpenStack wiki and the mailing list (https://wiki.openstack.org/wiki/Distributed_Router_for_OVS)

With dvr, who/what decides which incoming flows go to which router agent?

Nick Chase: Like normal routing, it’s based on the IP addresses in the frame, with the Distributed Virtual Router IP used as the source address.  You can find more information here: https://wiki.openstack.org/wiki/Neutron/DVR_L2_Agent#How_Distributed_Routing_Works

Not sure I understood what artifacts are in glance. Is there something more than images/snapshots? Any example?

Nick Chase: Really, anything can be an artifact, but Glance is meant to be involved in deploying new VMs, so perhaps the best example would be a Heat stack.

What is the status of the SSL certificate store project ?

Nick Chase: I’m assuming you’re referring to Barbican, which provides keystore services.  Barbican is now an incubated project.

What is your next step in integrating Mirantis OpenStack with vSphere for VMware Engineers?

Nick Chase:  Watch this space.

When will we get instance tagging in Horizon?

Nick Chase: We weren’t able to find a concrete answer to this question, sorry!  

I have a question here, IMIO, openstack implements an elastic cloud for cattles, but for NFV, for telco appliances, it seems that openstack is used for pets, any idea?

Christopher Aedo: For general application architecture, most OpenStack architects recommend the “cattle” model where multiple VMs are considered disposable. The main focus of NFV is allowing certain VMs to have the highest performance possible in order to make virtualization of these network functions perform well; the VMs hosting the NFV components could still be treated as cattle as long as they are sharing the necessary state information (i.e. how it’s done with L3 HA)

So Juno will support more than one external network interface in the tenant router?

Nick Chase: That’s how I’m interpreting the spec, yes. (http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/nfv-multiple-if-1-net.html)

Will Gnocchi be supported by Gluten? Seriously, when will someone step up and start creating associative product names….! :)

Christopher Aedo: Soon, I hope.

“Gnocchi” is an Italian dish and the right pronunciation is “ñokki” :)

Nick Chase: In the words of The Doctor, “I’m sorry.  I’m so sorry.”

Is the EMC extremeIO cinder driver replacing EMC SMI-S?

Christopher Aedo: Not as far as I can tell. From what I’ve seen online, they’re driving high-IO storage access towards the ViPR driver in the future.

What’s about the live migration without shared storage?

Christopher Aedo: Live migration without shared storage is possible, for instance by using block-migration under KVM. Without shared storage, there’s potential for the VM to be unavailable for a longer duration, but it’s still possible to migrate the VM from one compute node to another.
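
A sketch of the block-migration form under KVM/libvirt, with instance and host names as placeholders:

# the instance disk is copied over the network during the migration
nova live-migration --block-migrate <instance-uuid> <target-compute-host>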

A question: I’m surprised that the Ryu Neutron plugin will be deprecated. Does that mean Ryu cannot be used with OpenStack? Or is the Ryu plugin migrating to a Ryu ML2 mechanism driver?

Christopher Aedo: The OFAgent ML2 mechanism driver should allow you to use Ryu with OpenStack/Neutron. (https://ask.openstack.org/en/question/27704/ryu-ml2-support/)

Thanks for joining us, and feel free to ask other questions in the comments!

The post Whats New In OpenStack Juno Q&A appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Nick Chase at October 23, 2014 03:42 PM

Opensource.com

OpenStack for humanity's fast moving technology

Niki Acosta is the Director of Cloud Evangelism at Metacloud, now a part of Cisco. She is one of those technologists who strives to pull together all aspects of the OpenStack community for the betterment of everyone. As an active OpenStack participant, tweeter, and blogger, she has become a recognized name in the cloud industry. Read more in our interview with her prior to her talk at the OpenStack Summit in Paris 2014.

by Matt Micene at October 23, 2014 11:00 AM

ICCLab

Setup a Kubernetes Cluster on OpenStack with Heat

In this post we take a look at Kubernetes and help you setup a Kubernetes Cluster on your existing OpenStack Cloud using its Orchestration Service Heat. This Kubernetes Cluster should only be used as a Proof of Concept.

Technology involved:
Kubernetes: https://github.com/GoogleCloudPlatform/kubernetes
CoreOS: https://coreos.com/
etcd: https://github.com/coreos/etcd
fleet: https://github.com/coreos/fleet
flannel: https://github.com/coreos/flannel
kube-register: https://github.com/kelseyhightower/kube-register

The Heat Template used in this Post is available on Github.

What is Kubernetes?

Kubernetes allows the management of docker containers at scale. Its core concepts are covered in this presentation, held at the recent OpenStack&Docker Usergroup meetups.

A complete overview of Kubernetes is found on the Kubernetes Repo.

Architecture

The provisioned cluster consists of 5 VMs. The first one, discovery, is a dedicated etcd host. This allows easy etcd discovery thanks to a static IP address.

A Kubernetes master host is set up with the Kubernetes components apiserver, scheduler, kube-register, controller-manager, as well as proxy. This machine also gets a floating IP assigned and acts as an access point to your Kubernetes cluster.

Three Kubernetes minion hosts are set up with the Kubernetes components kubelet and proxy.

HowTo

Follow the instructions on the Github repo to get your Kubernetes cluster up and running:

https://github.com/icclab/kubernetes-on-openstack-demo 
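
Broadly, launching the cluster then comes down to a single Heat call against the template from that repo; the sketch below is illustrative only, since the actual template file name and parameter names are defined in the repo and may differ:

heat stack-create kubernetes -f kubecluster.yaml \
    -P key_name=mykey -P external_network=public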

Examples

Two examples are provided in the repo:

by Michael Erne at October 23, 2014 09:43 AM

October 22, 2014

OpenStack Blog

OpenStack Foundation Staffing News!

I’m very excited to report that on Monday the Board of Directors approved the promotion of Lauren Sell to Vice President of Marketing and Community Services. Lauren has been instrumental in the growth of OpenStack from the beginning.  Under her leadership the Summits have grown from just 75 attendees to over 4,000, and the OpenStack brand has gone from zero to Wall Street Journal in record time.  Since we started the Foundation 2 years ago, she’s built out a high-performing marketing and community services team, including recent additions Allison Price, Shari Mahrdt, and Chris Hoge. When not taking OpenStack to new heights, Lauren is known to spoil her cat Rhiley.

Rhiley

I’m also happy to report that we’ve continued to expand the OpenStack team over the past several weeks.  Thierry Carrez, who has managed the OpenStack releases from the beginning, has taken on the role of Director of Engineering and is building out a team of technical leaders.  One of his first hires was Clark Boylan, who joins us as an infrastructure engineer along with Jeremy Stanley and Stefano Maffulli.  Thierry is a strong leader in the OpenStack community, who was once again voted in as a member of the Technical Committee and as its chairman!

Thierry Carrez

We continue to hire in support of the growth of OpenStack.  Be sure to check out our open positions if you’d like to join our team!

If you’re coming to Paris, I hope you have a good time with OpenStack, and don’t forget to say hello to the whole Foundation team!

Mark Collier

COO, OpenStack Foundation

@sparkycollier

 

by Mark Collier at October 22, 2014 09:08 PM

Tesora Corp

Short Stack: OpenStack powers CERN, Juno reviewed, John Engates interview

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Here we go with this week's links:

John Engates: Rackspace CTO. Cloud evangelist. Drone hobbyist. | TechRepublic

John Engates is CTO at Rackspace and in this interview he talks about how he came to work at Rackspace, why he's stayed so long and what he does when he's not working including insight into his hobbies, his musical taste and the latest book he read.

HP extends OpenStack support to disaster recovery | SiliconANGLE

HP wants a piece of the OpenStack market and they have been making a series of moves lately to make sure that happens. Their latest attempt to get the attention of the Enterprise IT demographic is offering an enhanced OpenStack disaster recovery tool that works on a variety of configurations.

Cloudera, Red Hat Partner Around Hadoop-OpenStack Solutions | Data Center Knowledge

Red Hat is another company that has made it crystal clear it wants a piece of the OpenStack market and hopes to be one of the top players in this space as it transitions from client-server enterprise Linux to the cloud. Last week, it announced a partnership with Cloudera around a Hadoop OpenStack solution.

OpenStack Juno packs in features, pursues wider adoption | InfoWorld

The latest version of OpenStack code-named Juno came out recently and this article looks at the features that make this version special including Hadoop support and enhanced database support. OpenStack Juno also introduced storage policies for more fine-grained control over storage --and much more.

How OpenStack powers the research at CERN | Opensource.com

OpenStack has a big presence at the world-famous  physics research institute, CERN, and in this interview Tim Bell from CERN explains its role. The organization produces a ton of physics research data and OpenStack infrastructure provides the compute power to process and understand it.

by 693 at October 22, 2014 12:15 PM

Short Stack: Mirantis gets $100M, OpenStack powers CERN, Juno reviewed

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Here we go with this week's links:

Mirantis hopes massive $100M round will pave the road to IPO-ville | GigaOm

Mirantis wants to be the leader of enterprise OpenStack and this week it got $100M in funding to continue to pursue that goal. It's the biggest funding round for an open source company ever, and it paves the way for a possible IPO in 2016.

HP extends OpenStack support to disaster recovery | SiliconANGLE

HP wants a piece of the OpenStack market and they have been making a series of moves lately to make sure that happens. Their latest attempt to get the attention of the Enterprise IT demographic is offering an enhanced OpenStack disaster recovery tool that works on a variety of configurations.

Cloudera, Red Hat Partner Around Hadoop-OpenStack Solutions | Data Center Knowledge

Red Hat is another company that has made it crystal clear it wants a piece of the OpenStack market and hopes to be one of the top players in this space as it transitions from client-server enterprise Linux to the cloud. Last week, it announced a partnership with Cloudera around a Hadoop OpenStack solution.

OpenStack Juno packs in features, pursues wider adoption | InfoWorld

The latest version of OpenStack code-named Juno came out recently and this article looks at the features that make this version special including Hadoop support and enhanced database support. OpenStack Juno also introduced storage policies for more fine-grained control over storage --and much more.

How OpenStack powers the research at CERN | Opensource.com

OpenStack has a big presence at the world-famous  physics research institute, CERN, and in this interview Tim Bell from CERN explains its role. The organization produces a ton of physics research data and OpenStack infrastructure provides the compute power to process and understand it.

by 693 at October 22, 2014 12:15 PM

Opensource.com

Free software hacker on open source telemetry project for OpenStack

Julien Danjou is a free software hacker almost all of the time. At his day job, he hacks on OpenStack for eNovance. And, in his free time, he hacks on free software projects like Debian, Hy, and awesome. Julien has also written The Hacker's Guide to Python and given talks on OpenStack and the Ceilometer project, among other things. Prior to his talk at OpenStack Summit 2014 in Paris this year, we interviewed him about his current work and got some great insight into the work going on for the Ceilometer project, the open source telemetry project for OpenStack.

by Jen Wike Huger at October 22, 2014 11:00 AM

October 21, 2014

Mirantis

Turbocharging the Software Revolution with OpenStack

It’s a big day for us at Mirantis.

We’re proud to welcome great new investors to Mirantis: Insight Venture Partners and August Capital, two of the most successful firms in the business. They’ve backed iconic companies like Twitter, Skype, New Relic and Splunk, among many others. We couldn’t wish for better partners as we build the leading pure-play OpenStack company.

I am also immensely grateful to our customers, partners, existing investors and my colleagues. They believed in us when the odds for success looked slim, yet had so much faith in our mission and our team.

They saw what we did: OpenStack can change the world. It turbocharges the software revolution by empowering developers and end users with easy and instant access to computing power, free of any vendor lock-in.

So, what can you expect from us in the months and years ahead?

We will double down on our R&D investment to make OpenStack the cloud platform of choice for the enterprise: easy to use, reliable and completely open. Mirantis is committed to building the best OpenStack software, and contributing code back upstream.

We will invest in helping our partners add their unique value to Mirantis and build a successful business together with us. Our customers have diverse use cases that don’t lend themselves to a one-size-fits-all, single-vendor approach. A rich and vibrant partner ecosystem for OpenStack is key to success.

Finally, we want every Mirantis customer to feel as if they are our only customer. We will continue to invest heavily in our 24/7 support and operations team so that people can rely on us with complete confidence.

Opportunities like this don’t come along often in one’s lifetime. We owe it to our customers, our investors and ourselves to make the most of it. For us, this means giving it our all and doing our very best work, every single day.

 

Adrian Ionel is CEO of Mirantis.

The post Turbocharging the Software Revolution with OpenStack appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Adrian Ionel at October 21, 2014 10:47 PM

Come listen to Mirantis presentations at OpenStack Summit Paris

With OpenStack Summit Paris less than two weeks away, now is a good time to plan which sessions to attend while you’re there. If you’ll be at the summit, be sure to check out some of the presentations that Mirantis will be speaking in — we’ll be featured in more than 20 talks (our largest number ever), ranging from technical topics to CloudFunding to customer case studies. Click on any of the links below, and you’ll be able to add the presentation to your summit schedule.

Monday, November 3

11:40-13:10  Using Heat and Other Tools to Deploy Highly Available Environments
12:30-13:10  Panel: Experience with OpenStack in Telco Infrastructure Transformation
12:30-13:10  Resiliency and Performance Engineering for OpenStack at Enterprise Scale
14:30 – 15:10  Evaluating Vendors and Drivers in OpenStack Deployments with Rally + OSProfiler
14:30 – 15:10  How Do I Get My Hardware OpenStack-Ready?
15:20 – 16:00  Tales from the Ship: Navigating the OpenStack Community Seas
16:20 – 17:00  Panel: Open Source OpenStack Provisioning Tools: What, Why, and How?
15:40 – 16:20  OpenStack and vSphere/vCenter: Best Practices for ‘Classic’ Enterprise and Cloud-ready Apps
17:10 – 17:50  Ask the Experts: OpenStack as a Service or as a Distribution?

Tuesday, November 4

11:15-11:55  MySQL and OpenStack Deep Dive
12:05-12:45  Fireside Chat: Getting VCs to Believe Your OpenStack Story
12:05-12:45  Pumphouse: Workload Migration and Rolling Upgrades of OpenStack Cloud
12:05-12:45  Walk on Water: 20 Stepping Stones to Reach Production OpenStack Cloud (for Execs, PMs, Architects)
14:50-15:30  Building Telco Grade Cloud Services with OpenStack at Orange
16:40-17:20  The OpenStack Thunderdome

Wednesday, November 5

9:00-9:40  4 Years In
11:00-11:40  Rethinking Ceilometer metric storage with Gnocchi: Time-series as a Service
11:30-11:50  Designing for Scale: Data Storage in OpenStack and Galera Cluster
11:50-12:30  Glance Artifacts: It’s Not Just for Images Anymore
11:50-12:30  Altruism as a Service: An Essential Pillar of OpenStack
14:40-15:20  How We Fought for OpenStack HA
16:30-17:10  How to Take the CI/CD Plunge or How I Learned to Stop Caring and Love the Bomb

Mirantis engineer Denis Makogon presents at OpenStack Summit Atlanta in May.

The post Come listen to Mirantis presentations at OpenStack Summit Paris appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Michelle Yakura at October 21, 2014 09:50 PM

Kyle Mestery

OpenDaylight and OpenStack: Now With Helium!

This is just a quick post to note that the devstack support for OpenDaylight was recently updated to use the Helium release of OpenDaylight. For anyone who wants to pull down devstack and have it spin up Neutron with OpenDaylight, you will now get the latest and greatest OpenDaylight release as part of this. My blog post on how to use this is still relevant, so if you’re looking for instructions please look there.

Happy SDN’ing!

by mestery at October 21, 2014 08:54 PM

Peer Reviews for Neutron Core Reviewers

As I was recently given the chance to serve as Neutron PTL for a second cycle, I thought it would be a good idea for me to share some insight into what I’m hoping to achieve upstream in Kilo. I’ll have some upcoming posts on what we’re planning on accomplishing, but I wanted to first start with a post about the actual people who are allowed to merge code into Neutron, the core reviewers.

Core Reviewers

Neutron has 14 total core reviewers. You can see a list of these, along with some notes, on our wiki. Cores are responsible for reviewing code submitted to Neutron’s Gerrit, as well as merging those submissions. They are also responsible for other things, most of which fall into a grey zone and aren’t documented that well. We’ll come back to this part in a bit.

The Neutron team has added a small handful of cores as the project has gone on, and we’ve also lost a small handful of people. But for the most part, once you’re a core, you remain a core forever. While this approach has served its purpose, there are issues with it.

Improving the Core Experience

OpenStack is always trying to improve things around not only code quality but also governance, so members of the Neutron community have taken it upon themselves to improve the process by which we understand a core’s responsibilities, and also a process under which we can judge how cores are performing up to that standard. The idea is to allow for the constant and consistent evaluation of existing Neutron cores. The long-term goal is to use the mechanism to also vet potential new cores. It’s an ambitious goal, and it takes into account more than just reviews, but also the grey zone aspects of being core which are hard to document.

This grey zone includes things such as community leadership, how you interact with the rest of the community, and participation in things like weekly meetings, IRC chats, and mailing list conversations. It includes mentoring of new contributors (something we haven’t recognized officially, but which happens). It also includes interactions with other OpenStack projects, and the leadership around being a liaison from Neutron into these other projects. It even includes evangelism for Neutron. All of these things are done by Neutron core team members.

Neutron Core Team Peer Review

The result of this has led us to begin implementing a Peer Review process for Neutron core team members. This is currently documented on an etherpad, and we’re in the process of collecting one more round of feedback. I’m highlighting it here so people can provide input. The goal is to keep this lightweight at first and collect a good amount of actionable information to relay back to the existing core reviewers. See the etherpad link for more info.

The end result of this is that we as a Neutron core team hope to better define what it means to be a Neutron core team member. We also hope to provide actionable feedback to existing members who may have strayed from this definition. A stronger Neutron core team benefits the entire OpenStack ecosystem. I’m proud of the work our Neutron core team is doing, and I hope we can continue to grow and evolve Neutron cores in the future by using this new Peer Review process.

by mestery at October 21, 2014 08:27 PM

IBM OpenStack Team

IBM contributions to OpenStack go beyond the code

Co-authored by Manuel Silveyra

In just four years, OpenStack has become the largest and most active open source project—not just the hottest cloud technology. As of the October 16, 2014, Juno release, the overall number of contributors has surpassed 2,500 and there have been nearly 130,000 code commits. In 2014 alone, there’s been an average of 4,000 source code improvements per month.

As is the case with most open source projects, code contributions are the most high profile indicator of project vitality, as you can tell by the metrics we called out first. But there are other important activities around an open source project that also contribute to community health and user uptake.

Our colleague Brad Topol recently summarized the major advancements made by the community in the most recent OpenStack Juno release. He also highlighted the IBM specific additions, which fall into five major categories:

  • Enterprise security: Contributions to Keystone to enable better hybrid cloud integration and auditing
  • Block storage: Improvements to the resiliency and troubleshooting of Cinder storage volumes
  • User experience: Internationalization and usability upgrades for Horizon
  • Compute management: Improved automation and integration by simplifying the Nova application programming interfaces (APIs)
  • Interoperability: Leading work to ensure that OpenStack vendor implementations are compatible.

These technical contributions are great, but they are only one part of the overall support that IBM has provided for the OpenStack project. Like Linux, Apache and Eclipse before it, OpenStack benefits from IBM activities such as:

If you are attending the Summit in Paris next month, come to our session to learn about these and other IBM contributions to OpenStack. Our goal is to show that there are many ways for individuals and organizations to contribute to an open source project, beyond writing code, and we would like to encourage others to take part.

(Related: IBM technical sessions at the OpenStack Summit)

We also want to hear your suggestions on how IBM can better contribute to Kilo, the next major OpenStack release after Juno. Let us know at the Summit, or on Twitter @DanielKrook and @manuel_silveyra.

Manuel Silveyra is a Senior Cloud Solutions Architect working for the IBM Cloud Performance team in Austin. He is dedicated to bringing open cloud technologies such as OpenStack, Cloud Foundry, and Docker to enterprise clients.

The post IBM contributions to OpenStack go beyond the code appeared first on Thoughts on Cloud.

by Daniel Krook at October 21, 2014 06:28 PM

James E. Blair

Simple Terminal Broadcasting

Here is a very simple way to live broadcast a terminal session without needing a shared machine (as you would for screen):

Open two terminals (or screen windows).  In the first run:

tcpserver -v 0.0.0.0 8888 tail -f /tmp/script

That will start a server on all local IP addresses listening on port 8888.  The -v means verbose so you can see when people connect.  You could remove it and then background this process and only use one terminal.

In the second terminal, run:

script -f /tmp/script

And then whatever you do in that terminal will be broadcast to anyone that connects to your server.

Anyone that wants to view your session can simply run:

telnet $HOSTNAME 8888

When you are finished, hit ctrl-d to stop recording and then ctrl-c in the tcpserver terminal. You will be left with a transcript of your session in /tmp/script. Note that script has a "-t" option to record a timing file. With those two files, you may be able to create a playable recording of your session, possibly with the help of TermRecord.
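
For the record-and-replay variant, a sketch (file paths are arbitrary):

# record with timing information...
script -f -t 2>/tmp/timing /tmp/script
# ...and later play it back at the original speed
scriptreplay /tmp/timing /tmp/script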

The OpenStack Infrastructure team plans on using this method along with an Asterisk audio conferencing server for simple and fully free-software screen broadcasting events.

by James E. Blair (corvus@gnu.org) at October 21, 2014 06:20 PM

Solinea

Making the Case for OpenStack—Critical Success Factors (Part 2)

Last week I wrote about some of the challenges to successfully implementing OpenStack in the enterprise.  The biggest obstacles have nothing to do with technology, but rather with governance, processes and skills.



by Francesco Paola (fpaola@solinea.com) at October 21, 2014 02:00 PM

OpenStack in Production

Kerberos and Single Sign On with OpenStack

External Authentication with Keystone

One of the most commonly requested features by the CERN cloud user community is support for authentication using Kerberos on the command line and single-sign on with the OpenStack dashboard.

In our Windows and Linux environment, we run Active Directory to provide authentication services. During the Essex cycle of OpenStack, we added support for authentication based on Active Directory passwords. However, this had several drawbacks:
  • When using the command-line clients, users had the choice of storing their password in environment variables, such as with the local openrc script, or re-typing their password with each OpenStack command. Passwords in environment variables have significant security risks since they are passed to any sub-command and can be read by the system administrator of the server you are on (a typical openrc is sketched after this list).
  • When logging in with the web interface, users were entering their password into the dashboard. Most of CERN's applications use a single sign-on package with Active Directory Federation Services (ADFS). Recent problems such as Heartbleed show the risks of entering passwords into web applications.
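
For context, a typical openrc simply exports the credentials into the shell environment, which is exactly where the exposure comes from (values are placeholders):

export OS_AUTH_URL=https://mykeystone/main/v2.0
export OS_TENANT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=mysecret   # readable by every child process
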
The following describes how we configured this functionality.

Approach

With our upgrade to Icehouse completed last week, Keystone's v3 identity API now supports several authentication mechanisms through plugins. By default, password, token and external authentication are provided. In this scenario, other authentication methods such as Kerberos or X.509 can be used with a proper Apache configuration and the external plugin provided in Keystone. Unfortunately, when enabling these methods in Apache, there is no way to make them optional so that the client can choose the most appropriate one.

Also, when checking which projects they can access, the client normally performs two operations against Keystone: one to retrieve a token, and a second, using that token, to retrieve the project list. Even if the version is specified in the environment variables, the second call always uses the catalog, so if the catalog advertises version 2 while we are using version 3, the API call raises an exception.

Requirements

In this case we need a solution that allows us to use Kerberos, X.509 or another authentication mechanism in a transparent and backwards-compatible way, so we can offer both APIs and let users choose whichever is most appropriate for their workflow. This will allow us to migrate services from one API version to the next with no downtime.

In order to allow external authentication for our clients, we need to cover two parts, client side and server side: the client side to determine which auth plugin to use, and the server side to allow multiple auth methods and API versions at once.

Server Solution

In order to have different entry points under the same API, we need a load balancer; in this particular case we use HAProxy. From this load balancer we call two different sets of backend machines, one for version 2 of the API and the other for version 3. In the load balancer we can inspect the version in the URL the client is connecting to, so we can redirect it to the appropriate set. Each backend runs Keystone under Apache and is connected to the same database. We need this so that tokens can be validated no matter which version the client uses. The only difference between the backend sets is the catalog: the identity service entry is different on each, pointing the client to the version available on that set. For this purpose we use a templated catalog.


This solves the multi-version issue in our OpenStack environment, but it does not yet enable Kerberos or X.509. As these methods are not optional, we need a different entry point for each authentication plugin used: one for OpenStack authentication (password, token), one for Kerberos and one for X.509. There is no issue with the catalog if we enable these methods; all of them can be registered in the service catalog like normal OpenStack authentication, because any subsequent call on the system uses token-based authentication.
So in the Apache v3 backend we have the following URLs defined:

https://mykeystone/main/v3
https://mykeystone/admin/v3
https://mykeystone/krb/v3
https://mykeystone/x509/v3

If you post an authentication request to the Kerberos URL, a valid Kerberos token is required; if none is sent, the server initiates a challenge. After validating the token, Apache sets the principal as REMOTE_USER. For client certificate authentication, you use the X.509 URL, which requires a valid certificate; in this case the certificate DN is used as REMOTE_USER. Once this variable is set, Keystone can take over and check the user in the Keystone database.
There is a small caveat: we cannot offload SSL client authentication onto HAProxy, so for X.509 the client needs to connect directly to the configured backends, on a different port (8443). So for X.509 authentication we use 'https://mykeystone:8443/x509/v3'.
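
To give an idea of the Apache side, the /krb/v3 entry point is just a location protected by mod_auth_kerb; a minimal sketch, where the keytab path is an assumption and the realm and WSGI wiring specific to each deployment are omitted:

<Location /krb/v3>
    AuthType Kerberos
    AuthName "Kerberos Login"
    KrbMethodNegotiate On
    KrbMethodK5Passwd Off
    KrbServiceName HTTP
    Krb5KeyTab /etc/httpd/http.keytab
    Require valid-user
</Location>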

Client Solution

On the client side, the plugin mechanism will only be available in the common CLI (python-openstackclient) and not in the rest of the toolset (nova, glance, cinder, ...). There is no upstream code yet that implements the plugin selection, so in order to provide a short-term implementation based on our current architecture, we select the plugin from the OS_AUTH_URL for the moment. The final upstream implementation will almost certainly differ here, either by using a parameter or by discovering the available auth plugins; in that case the client implementation may change, but it is likely to remain close to this initial one.

In openstackclient/common/clientmanager.py
...
        if 'krb' in auth_url and ver_prefix == 'v3':
            LOG.debug('Using kerberos auth %s', ver_prefix)
            self.auth = v3_auth_kerberos.Kerberos(
                auth_url=auth_url,
                trust_id=trust_id,
                domain_id=domain_id,
                domain_name=domain_name,
                project_id=project_id,
                project_name=project_name,
                project_domain_id=project_domain_id,
                project_domain_name=project_domain_name,
            )
        elif 'x509' in auth_url and ver_prefix == 'v3':
            LOG.debug('Using x509 auth %s', ver_prefix)
            self.auth = v3_auth_x509.X509(
                auth_url=auth_url,
                trust_id=trust_id,
                domain_id=domain_id,
                domain_name=domain_name,
                project_id=project_id,
                project_name=project_name,
                project_domain_id=project_domain_id,
                project_domain_name=project_domain_name,
                client_cert=client_cert,
            )
        elif self._url:
...

HAproxy configuration

global
  chroot  /var/lib/haproxy
  daemon
  group  haproxy
  log  mysyslogserver local0
  maxconn  8000
  pidfile  /var/run/haproxy.pid
  ssl-default-bind-ciphers  ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128:AES256:AES:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK
  stats  socket /var/lib/haproxy/stats
  tune.ssl.default-dh-param  2048
  user  haproxy

defaults
  log  global
  maxconn  8000
  mode  http
  option  redispatch
  option  http-server-close
  option  contstats
  retries  3
  stats  enable
  timeout  http-request 10s
  timeout  queue 1m
  timeout  connect 10s
  timeout  client 1m
  timeout  server 1m
  timeout  check 10s

frontend cloud_identity_api_production
  bind 188.184.148.158:443 ssl no-sslv3 crt /etc/haproxy/cert.pem verify none
  acl  v2_acl_admin url_beg /admin/v2
  acl  v2_acl_main url_beg /main/v2
  default_backend  cloud_identity_api_v3_production
  timeout  http-request 5m
  timeout  client 5m
  use_backend  cloud_identity_api_v2_production if v2_acl_admin
  use_backend  cloud_identity_api_v2_production if v2_acl_main

frontend cloud_identity_api_x509_production
  bind 188.184.148.158:8443 ssl no-sslv3 crt /etc/haproxy/cert.pem ca-file /etc/haproxy/ca.pem verify required
  default_backend  cloud_identity_api_v3_production
  rspadd  Strict-Transport-Security:\ max-age=15768000
  timeout  http-request 5m
  timeout  client 5m
  use_backend  cloud_identity_api_v3_production if { ssl_fc_has_crt }

backend cloud_identity_api_v2_production
  balance  roundrobin
  stick  on src
  stick-table  type ip size 20k peers cloud_identity_frontend_production
  timeout  server 5m
  timeout  queue 5m
  timeout  connect 5m
  server cci-keystone-bck01 128.142.132.22:443 check ssl verify none
  server cci-keystone-bck02 188.184.149.124:443 check ssl verify none
  server p01001453s11625 128.142.174.37:443 check ssl verify none

backend cloud_identity_api_v3_production
  balance  roundrobin
  http-request  set-header X-SSL-Client-CN %{+Q}[ssl_c_s_dn(cn)]
  stick  on src
  stick-table  type ip size 20k peers cloud_identity_frontend_production
  timeout  server 5m
  timeout  queue 5m
  timeout  connect 5m
  server cci-keystone-bck03 128.142.159.38:443 check ssl verify none
  server cci-keystone-bck04 128.142.164.244:443 check ssl verify none
  server cci-keystone-bck05 128.142.132.192:443 check ssl verify none
  server cci-keystone-bck06 128.142.146.182:443 check ssl verify none

listen stats
  bind 188.184.148.158:8080
  stats  uri /
  stats  auth haproxy:toto1TOTO$

peers cloud_identity_frontend_production
  peer cci-keystone-load01.cern.ch 188.184.148.158:7777
  peer cci-keystone-load02.cern.ch 128.142.153.203:7777
  peer p01001464675431.cern.ch 128.142.190.8:7777

Apache configuration

WSGISocketPrefix /var/run/wsgi

Listen 443

<VirtualHost *:443>
  ServerName keystone.cern.ch
  DocumentRoot /var/www/cgi-bin/keystone
  LimitRequestFieldSize 65535

  SSLEngine On
  SSLCertificateFile      /etc/keystone/ssl/certs/hostcert.pem
  SSLCertificateKeyFile   /etc/keystone/ssl/keys/hostkey.pem
  SSLCertificateChainFile /etc/keystone/ssl/certs/ca.pem
  SSLCACertificateFile    /etc/keystone/ssl/certs/ca.pem
  SSLVerifyClient         none
  SSLOptions              +StdEnvVars
  SSLVerifyDepth          10
  SSLUserName             SSL_CLIENT_S_DN_CN
  SSLProtocol             all -SSLv2 -SSLv3

  SSLCipherSuite          ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128:AES256:AES:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK
  SSLHonorCipherOrder     on
  Header add Strict-Transport-Security "max-age=15768000"


  WSGIDaemonProcess keystone user=keystone group=keystone processes=2 threads=2
  WSGIProcessGroup keystone

  WSGIScriptAlias /admin /var/www/cgi-bin/keystone/admin
  <Location "/admin">
    SSLRequireSSL
    SSLVerifyClient       none
  </Location>

  WSGIScriptAlias /main /var/www/cgi-bin/keystone/main
  <Location "/main">
    SSLRequireSSL
    SSLVerifyClient       none
  </Location>

  WSGIScriptAlias /krb /var/www/cgi-bin/keystone/main

  <Location "/krb">
    SSLRequireSSL
    SSLVerifyClient       none
  </Location>

  <Location "/krb/v3/auth/tokens">
    SSLRequireSSL
    SSLVerifyClient       none
    AuthType              Kerberos
    AuthName              "Kerberos Login"
    KrbMethodNegotiate    On
    KrbMethodK5Passwd     Off
    KrbServiceName        Any
    KrbAuthRealms         CERN.CH
    Krb5KeyTab            /etc/httpd/http.keytab
    KrbVerifyKDC          Off
    KrbLocalUserMapping   On
    KrbAuthoritative      On
    Require valid-user
  </Location>

  WSGIScriptAlias /x509 /var/www/cgi-bin/keystone/main

  <Location "/x509">
    Order allow,deny
    Allow from all
  </Location>

  WSGIScriptAliasMatch ^(/main/v3/OS-FEDERATION/identity_providers/.*?/protocols/.*?/auth)$ /var/www/cgi-bin/keystone/main/$1

  <LocationMatch /main/v3/OS-FEDERATION/identity_providers/.*?/protocols/saml2/auth>
    ShibRequestSetting requireSession 1
    AuthType shibboleth
    ShibRequireSession On
    ShibRequireAll On
    ShibExportAssertion Off
    Require valid-user
  </LocationMatch>

  <LocationMatch /main/v3/OS-FEDERATION/websso>
    ShibRequestSetting requireSession 1
    AuthType shibboleth
    ShibRequireSession On
    ShibRequireAll On
    ShibExportAssertion Off
    Require valid-user
  </LocationMatch>

  <Location /Shibboleth.sso>
    SetHandler shib
  </Location>

  <Directory /var/www/cgi-bin/keystone>
    Options FollowSymLinks
    AllowOverride All
    Order allow,deny
    Allow from all
  </Directory>
</VirtualHost>

References

The code of python-openstackclient, as well as the python-keystoneclient code that we are using for this implementation, is available at:


We will be working with the community in the Paris summit to find the best way to integrate this functionality into the standard OpenStack release.

Credits

The main author is Jose Castro Leon with help from Marek Denis.

Many thanks to the Keystone core team for their help and advice on the implementation.

by Tim Bell (noreply@blogger.com) at October 21, 2014 01:19 PM

ICCLab

8th Swiss Openstack meetup


Last week, on 16 Oct 2014, we had great participation at the OpenStack User Group meeting @ICCLab Winterthur, which we co-located with the Docker CH meetup. Around 60 participants from both user groups attended.

For this event, we organised the agenda to have a good mix of presentations from big players and developers. Goals: analysis of OpenStack and Docker solutions, deployments and container orchestration.

Final Agenda  start: 18.00

Snacks and drinks were kindly offered by ZHAW and Mirantis.

We had some interesting technical discussions and Q&A with some speakers during the evening apero, as usual.

 


by Antonio Cimmino at October 21, 2014 12:20 PM

Joshua Hesketh

OpenStack infrastructure swift logs and performance

Turns out I’m not very good at blogging very often. However I thought I would put what I’ve been working on for the last few days here out of interest.

For a while the OpenStack Infrastructure team have wanted to move away from storing logs on disk to something more cloudy – namely, swift. I’ve been working on this on and off for a while and we’re nearly there.

For the last few weeks the openstack-infra/project-config repository has been uploading its CI test logs to swift as well as storing them on disk. This has given us the opportunity to compare the last few weeks of data and see what kind of effects we can expect as we move assets into an object storage.

  • I should add a disclaimer/warning, before you read, that my methods here will likely make statisticians cringe horribly. For the moment though I’m just getting an indication for how things compare.

The set up

Fetching files from an object store is nothing particularly new or special (CDNs have been doing it for ages). However, for our usage we want to serve logs through os-loganalyze, giving us the opportunity to hyperlink to timestamp anchors or filter by log severity.

First though we need to get the logs into swift somehow. This is done by having the job upload its own logs. Rather than using (or writing) a Jenkins publisher we use a bash script to grab the job's own console log (pulled from the Jenkins web UI) and then upload it to swift using credentials supplied to the job as environment variables (see my zuul-swift contributions).

This does, however, mean part of the logs are missing. For example the fetching and upload processes write to Jenkins’ console log but because it has already been fetched these entries are missing. Therefore this wants to be the very last thing you do in a job. I did see somebody do something similar where they keep the download process running in a fork so that they can fetch the full log but we’ll look at that another time.
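
As a rough sketch of the upload step (the real jobs use a bash script with the credentials provided through zuul-swift, not this code), the idea is simply to pull the console log over HTTP and push it into a container; the environment variable names and container below are made up:

import os
import urllib.request

from swiftclient import client as swift_client

# Grab the job's own console log from the Jenkins web UI.
console = urllib.request.urlopen(os.environ['JOB_CONSOLE_URL']).read()

# Upload it to swift with the credentials supplied to the job.
conn = swift_client.Connection(authurl=os.environ['SWIFT_AUTH_URL'],
                               user=os.environ['SWIFT_USER'],
                               key=os.environ['SWIFT_KEY'])
conn.put_object('logs', os.environ['LOG_PATH'] + '/console.html', console)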

When a request comes into logs.openstack.org, it is handled like so:

  1. apache vhost matches the server
  2. if the request ends in .txt.gz, console.html or console.html.gz rewrite the url to prepend /htmlify/
  3. if the requested filename is a file or folder on disk, serve it up with apache as per normal
  4. otherwise rewrite the requested file to prepend /htmlify/ anyway

os-loganalyze is set up as a WSGIScriptAlias at /htmlify/. This means all files that aren’t on disk are sent to os-loganalyze (or if the file is on disk but matches a file we want to mark up it is also sent to os-loganalyze). os-loganalyze then does the following:

  1. Checks the requested file path is legitimate (or throws a 400 error)
  2. Checks if the file is on disk
  3. Checks if the file is stored in swift
  4. If the file is found, markup (such as anchors) is optionally added and the request is served
    1. When serving from swift, the file is fetched via the swiftclient by os-loganalyze in chunks and streamed to the user on the fly. Obviously fetching from swift will have larger network consequences.
  5. If no file is found, 404 is returned

If the file exists both on disk and in swift then step #2 can be skipped by passing ?source=swift as a parameter (thus only attempting to serve from swift). In our case the files exist both on disk and in swift since we want to compare the performance so this feature is necessary.
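
In rough form, the disk-then-swift dispatch described above boils down to something like the following simplified sketch; this is not the actual os-loganalyze code, and the helper name and swift connection object are illustrative only:

import os


def fetch_log(path, swift_conn, container='logs', chunk_size=64 * 1024):
    """Yield the requested log in chunks: disk first, then swift."""
    if os.path.isfile(path):
        # Step 2: the file exists on disk, stream it from the filesystem.
        with open(path, 'rb') as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                yield chunk
        return
    # Step 3: fall back to swift. python-swiftclient streams the body when
    # resp_chunk_size is given, so the whole file never sits in memory.
    headers, body = swift_conn.get_object(container, path.lstrip('/'),
                                          resp_chunk_size=chunk_size)
    for chunk in body:
        yield chunk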

So now that we have the logs uploaded into swift and stored on disk we can get into some more interesting comparisons.

Testing performance process

My first attempt at this was simply to fetch the files from disk and then from swift and compare the results. A crude little python script did this for me: http://paste.openstack.org/show/122630/

The script fetches a copy of the log from disk and then from swift (both through os-loganalyze and therefore marked-up) and times the results. It does this in two scenarios:

  1. Repeatably fetching the same file over again (to get a good average)
  2. Fetching a list of recent logs from gerrit (using the gerrit api) and timing those

I then ran this in two environments.

  1. On my local network the other side of the world to the logserver
  2. On 5 parallel servers in the same DC as the logserver

Running on my home computer likely introduced a lot of errors due to my limited bandwidth, noisy network and large network latency. To help eliminate these errors I also tested it on 5 performance servers in the Rackspace cloud next to the log server itself. In this case I used ansible to orchestrate the test nodes thus running the benchmarks in parallel. I did this since in real world use there will often be many parallel requests at once affecting performance.

The following metrics are measured for both disk and swift:

  1. request sent – time taken to send the http request from my test computer
  2. response – time taken for a response from the server to arrive at the test computer
  3. transfer – time taken to transfer the file
  4. size – filesize of the requested file

The total time can be found by adding the first 3 metrics together.
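
A much simplified version of that timing loop (not the original paste, which also separates out the time taken to send the request) might look like this, with a placeholder URL:

import time
import urllib.request


def time_fetch(url):
    start = time.time()
    resp = urllib.request.urlopen(url)   # returns once the response headers arrive
    responded = time.time()
    body = resp.read()                   # transfer the whole body
    done = time.time()
    return {'response (ms)': (responded - start) * 1000,
            'transfer (ms)': (done - responded) * 1000,
            'size (KB)': len(body) / 1024.0}


log = 'http://logs.openstack.org/some/job/console.html'  # placeholder URL
for source in ('', '?source=swift'):                      # disk first, then swift
    print(source or 'disk', time_fetch(log + source))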

 

Results

Home computer, sequential requests of one file

 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=2145436239&amp;format=interactive" width="100%"></iframe>

The complementary colours are the same metric and the darker line represents swift’s performance (over the lighter disk performance line). The vertical lines over the plots are the error bars while the fetched filesize is the column graph down the bottom. Note that the transfer and file size metrics use the right axis for scale while the rest use the left.

As you would expect the requests for both disk and swift files are more or less comparable. We see a more noticeable difference in the responses though, with swift being slower. This is because disk is checked first, and if the file isn’t found on disk then a connection is sent to swift to check there. Clearly this is going to be slower.

The transfer times are erratic and varied. We can’t draw much from these, so lets keep analyzing deeper.

The total time from request to transfer can be seen by adding the times together. I didn’t plot this because, when requesting files of different sizes (as in the next scenario), the totals aren’t comparable. Arguably we could compare them anyway, as the log sizes for identical jobs are similar, but I didn’t think it was interesting.

The file sizes are there for interest sake but as expected they never change in this case.

You might notice that the end of the graph is much noisier. That is because I’ve applied some rudimentary data filtering.

                     Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)    54.89516183      43.71917948       283.9594368    282.5074598
response (ms)        56.74750291      194.7547117       373.7328851    531.8043908
transfer (ms)        849.8545127      838.9172066       5091.536092    5122.686897
size (KB)            7.121600095      7.311125275       1219.804598    1220.735632

 

I know it’s argued as poor practice to remove outliers using twice the standard deviation, but I did it anyway to see how it would look. I only did one pass at this even though I calculated new standard deviations.

 

                     Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)    13.88664039      14.84054789       274.9291111    276.2813889
response (ms)        44.0860569       115.5299781       364.6289583    503.9393472
transfer (ms)        541.3912899      515.4364601       5008.439028    5013.627083
size (KB)            7.038111654      6.98399691        1220.013889    1220.888889

 

I then moved the outliers to the end of the results list instead of removing them completely and used the newly calculated standard deviation (ie without the outliers) as the error margin.
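
In code, that rudimentary two-sigma filtering amounts to something like this small numpy sketch (my own illustration, not the analysis script itself):

import numpy as np


def split_outliers(samples):
    """Separate values more than two standard deviations from the mean."""
    data = np.asarray(samples, dtype=float)
    mean, std = data.mean(), data.std()
    keep = np.abs(data - mean) <= 2 * std
    kept, outliers = data[keep], data[~keep]
    # Second-pass statistics, computed without the outliers.
    return kept, outliers, kept.mean(), kept.std()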

Then to get a better indication of what are average times I plotted the histograms of each of these metrics.

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=732438212&amp;format=interactive" width="100%"></iframe>

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=115390465&amp;format=interactive" width="100%"></iframe>

Here we can see a similar request time.
 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=1644363181&amp;format=interactive" width="100%"></iframe>

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=434940837&amp;format=interactive" width="100%"></iframe>

Here it is quite clear that swift is slower at actually responding.
 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=1719303791&amp;format=interactive" width="100%"></iframe>

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=1964116949&amp;format=interactive" width="100%"></iframe>

Interestingly both disk and swift sources have a similar total transfer time. This is perhaps an indication of my network limitation in downloading the files.

 

Home computer, sequential requests of recent logs

Next from my home computer I fetched a bunch of files in sequence from recent job runs.

 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=1688949678&amp;format=interactive" width="100%"></iframe>

 

Again I calculated the standard deviation and average to move the outliers to the end and get smaller error margins.

First pass
                     Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)    54.89516183      43.71917948       283.9594368    282.5074598
response (ms)        194.7547117      56.74750291       531.8043908    373.7328851
transfer (ms)        849.8545127      838.9172066       5091.536092    5122.686897
size (KB)            7.121600095      7.311125275       1219.804598    1220.735632

Second pass without outliers
                     Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)    13.88664039      14.84054789       274.9291111    276.2813889
response (ms)        115.5299781      44.0860569        503.9393472    364.6289583
transfer (ms)        541.3912899      515.4364601       5008.439028    5013.627083
size (KB)            7.038111654      6.98399691        1220.013889    1220.888889

 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=963200514&amp;format=interactive" width="100%"></iframe>

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=1689771820&amp;format=interactive" width="100%"></iframe>

What we are probably seeing here with the large number of slower requests is network congestion in my house. Since the script alternates its requests (disk, swift, disk, swift and so on), the congestion is spread across both sources, causing the latency seen in each.
 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=346021785&amp;format=interactive" width="100%"></iframe>

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=10713262&amp;format=interactive" width="100%"></iframe>

Swift is very much slower here.

 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=1488676353&amp;format=interactive" width="100%"></iframe>

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=1384537917&amp;format=interactive" width="100%"></iframe>

Although comparable in transfer times. Again this is likely due to my network limitation.
 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=1494494491&amp;format=interactive" width="100%"></iframe>

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1xWNXG2RGK6AhAF4QsdHbPtC2TFM9wNB8OYPMvMvgd1U/pubchart?oid=604459439&amp;format=interactive" width="100%"></iframe>

The size histograms don’t really add much here.
 

Rackspace Cloud, parallel requests of same log

Now to reduce latency and other network effects I tested fetching the same log over again in 5 parallel streams. Granted, it may have been interesting to see a machine close to the log server do a bunch of sequential requests for the one file (with little other noise) but I didn’t do it at the time unfortunately. Also we need to keep in mind that others may be accessing the log server, and therefore any request, in both my testing and normal use, is going to have competing load.
 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/16UtKwF-KaLAh22QpTbglhLYLjE_bwWRc702n8y8XAz4/pubchart?oid=1688949678&amp;format=interactive" width="100%"></iframe>

I collected a much larger amount of data here making it harder to visualise through all the noise and error margins etc. (Sadly I couldn’t find a way of linking to a larger google spreadsheet graph). The histograms below give a much better picture of what is going on. However out of interest I created a rolling average graph. This graph won’t mean much in reality but hopefully will show which is faster on average (disk or swift).
 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/16UtKwF-KaLAh22QpTbglhLYLjE_bwWRc702n8y8XAz4/pubchart?oid=1484304295&amp;format=interactive" width="100%"></iframe>

You can see now that we’re closer to the server that swift is noticeably slower. This is confirmed by the averages:

 

First pass
                     Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)    32.42528982      9.749368282       4.87337544     4.05191168
response (ms)        245.3197219      781.8807534       39.51898688    245.0792916
transfer (ms)        1082.253253      2737.059103       1553.098063    4167.07851
size (KB)            0                0                 1226           1232

Second pass without outliers
                     Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)    1.375875503      0.8390193564      3.487575109    3.418433003
response (ms)        28.38377158      191.4744331       7.550682037    96.65978872
transfer (ms)        878.6703183      2132.654898       1389.405618    3660.501404
size (KB)            0                0                 1226           1232

 

Even once outliers are removed we’re still seeing a large latency from swift’s response.

The standard deviation of the requests has now become very small. We’ve clearly made a difference by moving closer to the logserver.

 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/16UtKwF-KaLAh22QpTbglhLYLjE_bwWRc702n8y8XAz4/pubchart?oid=963200514&amp;format=interactive" width="100%"></iframe>

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/16UtKwF-KaLAh22QpTbglhLYLjE_bwWRc702n8y8XAz4/pubchart?oid=1689771820&amp;format=interactive" width="100%"></iframe>

Very nice and close.
 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/16UtKwF-KaLAh22QpTbglhLYLjE_bwWRc702n8y8XAz4/pubchart?oid=346021785&amp;format=interactive" width="100%"></iframe>

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/16UtKwF-KaLAh22QpTbglhLYLjE_bwWRc702n8y8XAz4/pubchart?oid=10713262&amp;format=interactive" width="100%"></iframe>

Here we can see that for roughly half the requests the response time was the same for swift as for the disk. It’s the other half of the requests bringing things down.
 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/16UtKwF-KaLAh22QpTbglhLYLjE_bwWRc702n8y8XAz4/pubchart?oid=1488676353&amp;format=interactive" width="100%"></iframe>

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/16UtKwF-KaLAh22QpTbglhLYLjE_bwWRc702n8y8XAz4/pubchart?oid=1384537917&amp;format=interactive" width="100%"></iframe>

The transfer for swift is consistently slower.

 

Rackspace Cloud, parallel requests of recent logs

Finally I ran just over a thousand requests in 5 parallel streams from computers near the logserver for recent logs.

 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1LXoyF-JausOJArkum-WlKpb19kxxVTau9y4Qled7kxc/pubchart?oid=1688949678&amp;format=interactive" width="100%"></iframe>

Again the graph is too crowded to see what is happening so I took a rolling average.

 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1LXoyF-JausOJArkum-WlKpb19kxxVTau9y4Qled7kxc/pubchart?oid=1484304295&amp;format=interactive" width="100%"></iframe>

 

First pass
                     Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)    0.7227904332     0.8900549012      3.515711867    3.56191383
response (ms)        434.8600827      909.095546        145.5941102    189.947818
transfer (ms)        1913.9587        2132.992773       2427.776165    2875.289455
size (KB)            6.341238774      7.659678352       1219.940039    1221.384913

Second pass without outliers
                     Std dev (disk)   Std dev (swift)   Mean (disk)    Mean (swift)
request sent (ms)    0.4798803247     0.4966553679      3.379718381    3.405770445
response (ms)        109.6540634      171.1102999       70.31323922    86.16522485
transfer (ms)        1348.939342      1440.2851         2016.900047    2426.312363
size (KB)            6.137625464      7.565931993       1220.318912    1221.881335

 

The averages here are much more reasonable than when we continually tried to request the same file. Perhaps we’re hitting limitations with swift’s serving abilities.

 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1LXoyF-JausOJArkum-WlKpb19kxxVTau9y4Qled7kxc/pubchart?oid=963200514&amp;format=interactive" width="100%"></iframe>

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1LXoyF-JausOJArkum-WlKpb19kxxVTau9y4Qled7kxc/pubchart?oid=1689771820&amp;format=interactive" width="100%"></iframe>

I’m not sure why we see a sinc-like function here. A network expert may be able to tell you more. As far as I know this isn’t important to our analysis, other than the fact that both disk and swift match.
 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1LXoyF-JausOJArkum-WlKpb19kxxVTau9y4Qled7kxc/pubchart?oid=346021785&amp;format=interactive" width="100%"></iframe>

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1LXoyF-JausOJArkum-WlKpb19kxxVTau9y4Qled7kxc/pubchart?oid=10713262&amp;format=interactive" width="100%"></iframe>

Here we can now see swift keeping a lot closer to disk results than when we only requested the one file in parallel. Swift is still, unsurprisingly, slower overall.
 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1LXoyF-JausOJArkum-WlKpb19kxxVTau9y4Qled7kxc/pubchart?oid=1488676353&amp;format=interactive" width="100%"></iframe>

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1LXoyF-JausOJArkum-WlKpb19kxxVTau9y4Qled7kxc/pubchart?oid=1384537917&amp;format=interactive" width="100%"></iframe>

Swift still loses out on transfers but again does a much better job of keeping up.
 

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1LXoyF-JausOJArkum-WlKpb19kxxVTau9y4Qled7kxc/pubchart?oid=1494494491&amp;format=interactive" width="100%"></iframe>

<iframe height="400px" src="https://docs.google.com/spreadsheets/d/1LXoyF-JausOJArkum-WlKpb19kxxVTau9y4Qled7kxc/pubchart?oid=604459439&amp;format=interactive" width="100%"></iframe>

Error sources

I haven’t accounted for any of the following swift intricacies (in terms of caches etc) for:

  • Fetching random objects
  • Fetching the same object over and over
  • Fetching in parallel multiple different objects
  • Fetching the same object in parallel

I also haven’t done anything to account for things like file system caching, network profiling, noisy neighbours etc etc.

os-loganalyze tries to keep authenticated with swift, however

  • This can timeout (causes delays while reconnecting, possibly accounting for some spikes?)
  • This isn’t thread safe (are we hitting those edge cases?)

We could possibly explore getting longer authentication tokens or having os-loganalyze pull from an unauthenticated CDN to add the markup and then serve. I haven’t explored those here though.

os-loganalyze also handles all of the requests not just from my testing but also from anybody looking at OpenStack CI logs. In addition to this it also needs to deflate the gzip stream if required. As such there is potentially a large unknown (to me) load on the log server.

In other words, there are plenty of sources of errors. However I just wanted to get a feel for the general responsiveness compared to fetching from disk. Both sources had noise in their results so it should be expected in the real world when downloading logs that it’ll never be consistent.

Conclusions

As you would expect the request times are pretty much the same for both disk and swift (as mentioned earlier) especially when sitting next to the log server.

The response times vary but looking at the averages and the histograms these are rarely large. Even in the case where requesting the same file over and over in parallel caused responses to go slow these were only in the magnitude of 100ms.

The response time is the important one as it indicates how soon a download will start for the user. The total time to stream the contents of the whole log is seemingly less important if the user is able to start reading the file.

One thing that wasn’t tested was streaming of different file sizes. All of the files were roughly the same size (being logs of the same job). For example, what if the asset was a few gigabytes in size, would swift have any significant differences there? In general swift was slower to stream the file but only by a few hundred milliseconds for a megabyte. It’s hard to say (without further testing) if this would be noticeable on large files where there are many other factors contributing to the variance.

Whether or not these latencies are an issue is relative to how the user is using/consuming the logs. For example, if they are just looking at the logs in their web browser on occasion they probably aren’t going to notice a large difference. However if the logs are being fetched and scraped by a bot then it may see a decrease in performance.

Overall I’ll leave deciding on whether or not these latencies are acceptable as an exercise for the reader.

by Joshua Hesketh at October 21, 2014 11:44 AM

Opensource.com

How OpenStack powers the research at CERN

OpenStack has been in a production environment at CERN for more than a year. One of the people who has been key to implementing the OpenStack infrastructure is Tim Bell. He is responsible for the CERN IT Operating Systems and Infrastructure group, which provides a set of services to CERN users ranging from email, web and operating systems to the Infrastructure-as-a-Service cloud based on OpenStack.

We had a chance to interview Bell in advance of the OpenStack Summit Paris 2014 where he will deliver two talks. The first session is about cloud federation while the second session is about multi-cell OpenStack.

by jhibbets at October 21, 2014 11:00 AM

ICCLab

Numerical Dosimetry in the cloud

What is it all about?

We’re using a bunch of VMs to do numerical dosimetry and are very satisfied with the service and performance we get. Here I try to give some background on our work.
Assume yourself sitting in the dentists chair for an x-ray image of your teeth. How much radiation will miss the x-ray film in your mouth and instead wander through your body? That’s one type of question we try to answer with computer models. Or numeric dosimetry, as we call it.

The interactions between ionizing radiation – e.g. x-rays – and atoms are well known. However, there is a great deal of randomness, so-called stochastic behavior. Let's go back to the dentist's chair and follow a single photon (the particle x-rays are composed of). This sounds a bit like ray tracing, but it is far noisier, as you'll see.

The image below shows a voxel phantom (built of Lego bricks made of bone, fat, muscle etc.) during a radiography of the left breast.

torso_view_beam

Tracing a photon

The photon is just about to leave the x-ray tube. We take a known distribution of photon energies, throw dice and pick one energy at random. Then we decide – again by throwing dice – how long the photon will fly until it comes close to an atom. How exactly will it hit the atom? Which of the many processes (e.g. Compton scattering) will take place? How much energy will be lost and in what direction will it leave the atom? The answer – you may have guessed it already – lies in rolling the dice. We repeat the process until the photon has lost all its energy or leaves our model world.

During its journey the photon has created many secondary particles (e.g. electrons kicked out of an atomic orbit). We follow each of them and their children again. Finally, all particles have come to rest and we know in detail what happened to that single photon and to the matter it crossed. This process takes some 100 micro seconds on an average cloud CPU.

Monte Carlo (MC)

This method of problem solving is called Monte Carlo, after the roulette tables. You apply MC whenever there are too many parameters to solve a problem deterministically. One well-known application is the so-called rain drop Pi: by counting the fraction of random points that fall within a circle you can approximate the number Pi (3.141...).
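
For illustration, a tiny version of that rain drop estimate (my own sketch, not part of our dosimetry code) looks like this:

import random


def raindrop_pi(n=1000000):
    """Estimate Pi from the fraction of random points inside the unit quarter circle."""
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / n


print(raindrop_pi())  # approaches 3.141... as n grows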

Back to the dentist: unfortunately, with our single photon we do not see any energy deposit in your thyroid gland (located at the front of your neck) yet. This first photon passed through, by pure chance, without any interaction. So we just start another one, 5'000 per second, 18 million per hour and so on, until we have collected enough dose in your neck. Only a tiny fraction q of the N initial photons ends up in our target volume, and the energy deposit shows fluctuations that typically decrease in proportion to 1/sqrt(qN). So we need some 1E9 initial photons to get 1E5 into the target volume and keep the relative error smaller than 1 %. This would take about 2 CPU days.
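
Plugging these numbers into the 1/sqrt(qN) estimate (with q taken as 1E5 / 1E9 = 1E-4) confirms the quoted error level:

import math

q, N = 1e-4, 1e9               # fraction reaching the target, initial photons
print(1.0 / math.sqrt(q * N))  # ~0.0032, i.e. a relative error of about 0.3 %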

MC and the cloud

This type of MC problem is CPU-bound and trivial to parallelize, since the photons are independent of each other (remember that a drop of water contains 1E23 molecules; our 1E9 photons will not disturb that). So with M CPUs my waiting time is simply reduced by a factor of M. In the above example, with 50 CPUs I have a result after 1 hour instead of 2 days.

This is a quantitative progress on the one hand. But on the other hand and more important for my work is the progress in quality: During one day, I can play with 10 different scenarios, I can concentrate on problem solving and do not waste time unwinding the stack in my head after a week. The cloud helps to improve the quality of our work.

Practical considerations

The code we use is Geant4 (geant4.cern.ch), a free C++ library to propagate particles through matter. Code development is done locally (e.g. Ubuntu in a virtual box) and then uploaded with rsync to the master node.

Our CPUs are distributed over several virtual machines deployed in ICCLab's OpenStack cloud. From the master we distribute code and collect results via rsync; job deployment and status checks are done through small bash scripts. The final analysis is then done locally with Matlab.

Code deployment and result collection is done within 30 seconds, which is negligible compared to run times of hours. So even on the job scale our speedup is M.

by Patrik Eschle at October 21, 2014 09:05 AM

Mirantis

Mirantis Raises $100 Million Series B, Challenging Incumbents as the Pure-Play OpenStack Leader

Insight Venture Partners Leads Largest Series B in Open Source Software History

Mirantis, the pure-play OpenStack company, today announced $100 million in Series B funding led by Insight Venture Partners. The financing is the largest Series B open source investment in history, and one of the largest Series B investments in B2B software, validating Mirantis as the breakaway independent pure-play OpenStack vendor. Insight Venture Partners was joined by August Capital, as well as existing investors Intel Capital, WestSummit Capital, Ericsson, and Sapphire Ventures (formerly SAP Ventures). Alex Crisses, managing director at Insight Venture Partners, will join the Mirantis board of directors.

“OpenStack adoption is accelerating worldwide, driven by the need for low cost, scalable cloud infrastructure. 451 Research estimates a market size of $3.3 billion by 2018,” said Alex Crisses. “Mirantis delivers on OpenStack’s promise of cloud computing at a fraction of the time and cost of traditional IT vendors, and without the compromise of vendor lock-in. Their customer traction has been phenomenal.”

“Mirantis is already leading the OpenStack ecosystem. We are committed to helping it become the principal cloud vendor,” said Vivek Mehra, general partner at August Capital, whose partners have helped grow Splunk, Microsoft, Sun Microsystems, Seagate, Skype, and Tegile Systems into dominant technology players. “Its unique pure-play approach will trump the lock-in of traditional IT vendors.”

Mirantis has helped more than 130 customers implement OpenStack – more than any other vendor –  including Comcast, DirectTV, Ericsson, Expedia, NASA, NTT Docomo, PayPal, Symantec, Samsung, WebEx and Workday.  Among these is the largest OpenStack deal on record: a five-year software licensing agreement with Ericsson. Mirantis is also the largest provider of OpenStack products and services for the telecommunications industry, serving Huawei, NTT Docomo, Orange, Pacnet, Tata Communications, and others.

“Our mission is to move companies from an expensive, lock-in infrastructure to an open cloud that empowers developers and end-users at a fraction of the cost. Customers are seeing the value; we’ve gone from signing about $1 million in new business every month to $1 million every week,” said Mirantis President and CEO, Adrian Ionel. “People choose us because we have the best software and expertise for OpenStack, foster an open partner ecosystem, and are a major upstream contributor, influencing the technology’s direction.”

“Mirantis OpenStack is the only truly hardened and commercially-supported OpenStack distribution today that you can just download from the website, install using an intuitive GUI driven process and be up and running in no time,” said Nicholas Summers, Cloud Architect at Home Depot, a Mirantis customer. “With everyone else, you either get raw upstream code or need to engage in an elaborate sales discussion before even getting your hands on the commercial version.”

Mirantis will use the funds to double its engineering investments. It will focus on development of its zero lock-in OpenStack software, including its downloadable distribution, Mirantis OpenStack, and its on-demand, hosted option, Mirantis OpenStack Express. Mirantis is currently the No. 3 contributor to OpenStack and will continue contributing to the community, with particular focus on enterprise-grade reliability and ease-of-use. The funds will also be used to accelerate international expansion in Europe and Asia-Pacific, deepen its bench of support engineers, and grow its open partner ecosystem.

“Driving the accessibility of software defined infrastructure and cloud computing to data centers around the world is an imperative for Intel,” said Jason Waxman, vice president of Intel’s Data Center Group and general manager of Intel’s Cloud Platforms Group. “Mirantis plays a key role in the OpenStack movement, and our investment is designed to accelerate industry adoption of cost-effective workload orchestration solutions.”

About Mirantis
Mirantis is the world’s leading OpenStack company. Mirantis delivers all the software, services, training and support needed for running OpenStack. More customers rely on Mirantis than any other company to get to production deployment of OpenStack at scale. Among the top three companies worldwide in contributing open source software to OpenStack, Mirantis has helped build and deploy some of the largest OpenStack clouds at companies such as Cisco, Comcast, DirectTV, Ericsson, Expedia, NASA, NTT Docomo, PayPal, Symantec, Samsung, WebEx and Workday.

Mirantis is venture-backed by Insight Venture Partners, August Capital, Ericsson, Red Hat, Intel Capital, Sapphire Ventures and WestSummit Capital, with headquarters in Mountain View, California. For more information, visit www.mirantis.com or follow us on Twitter at @mirantisit.

Contact Information:
Sarah Bennett
PR Manager, Mirantis
sbennett@mirantis.com

The post Mirantis Raises $100 Million Series B, Challenging Incumbents as the Pure-Play OpenStack Leader appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Sarah Bennett at October 21, 2014 07:01 AM

October 20, 2014

Florent Flament

Splitting Swift cluster

At Cloudwatt, we have been operating a nearly hundred-node Swift cluster in a single datacenter for a few years. The decision to split the cluster across two datacenters was taken recently. The goal is to have at least one replica of each object on each site, in order to avoid data loss in case of the destruction of a full datacenter (fire, plane crash, ...).

Constraints when updating a running cluster

Some precautions have to be taken when updating a running cluster with customers' data. We want to ensure that no data is lost or corrupted during the operation and that the cluster's performance isn't hurt too badly.

In order to ensure that no data is lost, we have to follow some guidelines including:

  • Never move more than 1 replica of any object at any given step; that way we ensure that 2 copies out of 3 are left intact in case something goes wrong.
  • Process by small steps to limit the impact in case of failure.
  • Check during each step that there is no unusual data corruption, and that corrupted data is correctly handled and fixed.
  • Check after each step that data has been moved (or kept) at the correct place.
  • If any issue were to happen, rollback to previous step.

To limit the impact on cluster's performance, we have to address to following issues:

  • Assess the availability of cluster resources (network bandwidth, storage nodes' disks & CPU availability) at different times of day and week. This allows us to choose the best time to perform our steps.
  • Assess the load that the planned split steps will put on the cluster.
  • Choose steps small enough so that:
    • it fits time frames where cluster's resources are more available;
    • the load incurred by the cluster (and its users) is acceptable.

A number of these requirements have been addressed by Swift for a while:

  • When updating Swift ring files, the swift-ring-builder tool doesn't move more than 1 replica during reassignment of cluster's partitions (unless something really went wrong). By performing only one reassignment per process step, we ensure that we don't move more than 1 replica at each step.
  • Checking for data corruption is made easy by Swift. 3 processes (swift-object-auditor, swift-container-auditor and swift-account-auditor) running on storage nodes are continuously checking and fixing data integrity.
  • Checking that data is at the correct location is also made easy by the swift-dispersion-report provided.
  • Updating the location of data is made seamless by updating and copying the Ring files to every Swift nodes. Once updated, the Ring files are loaded by Swift processes without the need of being restarted. Rollbacking data location is easily performed by replacing the new Ring files by previous ones.

However, being able to control the amount of data moved to a new datacenter at a given step is a brand new feature, introduced in version 2.2.0 of Swift, released on October 4th, 2014.

Checking data integrity

Swift auditor processes (swift-object-auditor, swift-container-auditor and swift-account-auditor) running on storage nodes are continuously checking data integrity, by checking files' checksums. When a corrupted file is found, it is quarantined; the data is removed from the node and the replication mechanism takes care of replacing the missing data. Below is an example of what concretely happens when manually corrupting an object.

Let's corrupt data by hand:

root@swnode0:/srv/node/d1/objects/154808/c3a# cat 972e359caf9df6fdd3b8e295afd4cc3a/1410353767.57579.data
blabla
root@swnode0:/srv/node/d1/objects/154808/c3a# echo blablb > 972e359caf9df6fdd3b8e295afd4cc3a/1410353767.57579.data

The corrupted object is 'quarantined' by the object-auditor when it checks the files integrity. Here's how it appears in the /var/log/syslog log file:

Sep 10 13:56:44 swnode0 object-auditor: Quarantined object /srv/node/d1/objects/154808/c3a/972e359caf9df6fdd3b8e295afd4cc3a/1410353767.57579.data: ETag 9b36b2e89df94bc458d629499d38cf86 and file's md5 6235440677e53f66877f0c1fec6a89bd do not match
Sep 10 13:56:44 swnode0 object-auditor: ERROR Object /srv/node/d1/objects/154808/c3a/972e359caf9df6fdd3b8e295afd4cc3a failed audit and was quarantined: ETag 9b36b2e89df94bc458d629499d38cf86 and file's md5 6235440677e53f66877f0c1fec6a89bd do not match
Sep 10 13:56:44 swnode0 object-auditor: Object audit (ALL) "forever" mode completed: 0.02s. Total quarantined: 1, Total errors: 0, Total files/sec: 46.71, Total bytes/sec: 326.94, Auditing time: 0.02, Rate: 0.98

The quarantined object is then overwritten by the object-replicator of a node that has the appropriate replica uncorrupted. Below is an extract of the log file on such node:

Sep 10 13:57:01 swnode1 object-replicator: Starting object replication pass.
Sep 10 13:57:01 swnode1 object-replicator: <f+++++++++ c3a/972e359caf9df6fdd3b8e295afd4cc3a/1410353767.57579.data
Sep 10 13:57:01 swnode1 object-replicator: Successful rsync of /srv/node/d1/objects/154808/c3a at 192.168.100.10::object/d1/objects/154808 (0.182)
Sep 10 13:57:01 swnode1 object-replicator: 1/1 (100.00%) partitions replicated in 0.21s (4.84/sec, 0s remaining)
Sep 10 13:57:01 swnode1 object-replicator: 1 suffixes checked - 0.00% hashed, 100.00% synced
Sep 10 13:57:01 swnode1 object-replicator: Partition times: max 0.2050s, min 0.2050s, med 0.2050s
Sep 10 13:57:01 swnode1 object-replicator: Object replication complete. (0.00 minutes)

The corrupted data has been replaced by the correct data on the initial storage node (where the file had been corrupted):

root@swnode0:/srv/node/d1/objects/154808/c3a# cat 972e359caf9df6fdd3b8e295afd4cc3a/1410353767.57579.data
blabla

Checking data location

Preparation

We can use the swift-dispersion-report tool provided with Swift to monitor our data dispersion ratio (ratio of objects on the proper device / number of objects). A dedicated Openstack account is required that will be used by swift-dispersion-populate to create containers and objects.

Then we have to configure appropriately the swift-dispersion-report tool with the /etc/swift/dispersion.conf file:

[dispersion]
auth_url = http://SWIFT_PROXY_URL/auth/v1.0
auth_user = DEDICATED_ACCOUNT_USERNAME
auth_key = DEDICATED_ACCOUNT_PASSWORD

Once properly set, we can initiate dispersion monitoring by populating our new account with test data:

cloud@swproxy:~$ swift-dispersion-populate
Created 2621 containers for dispersion reporting, 4m, 0 retries
Created 2621 objects for dispersion reporting, 2m, 0 retries

Our objects should have been placed on appropriate devices. We can check this:

cloud@swproxy:~$ swift-dispersion-report
Queried 2622 containers for dispersion reporting, 2m, 31 retries
100.00% of container copies found (7866 of 7866)
Sample represents 1.00% of the container partition space
Queried 2621 objects for dispersion reporting, 45s, 1 retries
There were 2621 partitions missing 0 copy.
100.00% of object copies found (7863 of 7863)
Sample represents 1.00% of the object partition space

Monitoring data redistribution

Once the updated ring has been pushed to all storage nodes and proxy servers, we can follow the data redistribution with swift-dispersion-report. The migration is complete when the percentage of object copies found reaches 100%. Here's an example of results obtained on a 6-node cluster.

cloud@swproxy:~$ swift-dispersion-report
Queried 2622 containers for dispersion reporting, 3m, 29 retries
100.00% of container copies found (7866 of 7866)
Sample represents 1.00% of the container partition space
Queried 2621 objects for dispersion reporting, 33s, 0 retries
There were 23 partitions missing 0 copy.
There were 2598 partitions missing 1 copy.
66.96% of object copies found (5265 of 7863)
Sample represents 1.00% of the object partition space

# Then some minutes later
cloud@swproxy:~$ swift-dispersion-report
Queried 2622 containers for dispersion reporting, 5m, 0 retries
100.00% of container copies found (7866 of 7866)
Sample represents 1.00% of the container partition space
Queried 2621 objects for dispersion reporting, 26s, 0 retries
There were 91 partitions missing 0 copy.
There were 2530 partitions missing 1 copy.
67.82% of object copies found (5333 of 7863)
Sample represents 1.00% of the object partition space

Limiting the amount of data to move

There has been a number of recent contributions to Swift that have been done in order to allow the smooth addition of nodes to a new region.

With versions of swift-ring-builder earlier than Swift 2.1, when adding a node to a new region, 1 replica of every object was moved to the new region in order to maximize the dispersion of objects across different regions. Such an algorithm had severe drawbacks. Consider a one-region Swift cluster with 100 storage nodes: adding 1 node to a second region had the effect of transferring 1/3 of the cluster's data to the new node, which would not have the capacity to store data previously distributed over 33 nodes. So in order to add a new region to our cluster, we had to add, in a single step, enough nodes to store 1/3 of our data. Say we add 33 nodes to the new region. While these nodes have enough capacity to receive 1 replica of every object, such an operation would trigger the transfer of petabytes of data to the new nodes. With a 10 Gigabit/second link between the 2 datacenters, such a transfer would take days if not weeks, during which the cluster's network and the destination nodes' disks would be saturated.

With commit 6d77c37 ("Let admins add a region without melting their cluster"), released with Swift 2.1, the number of partitions assigned to nodes in a new region is determined by the weights of the nodes' devices. This feature allowed a Swift cluster operator to limit the amount of data transferred to a new region. However, because of bug 1367826 ("swift-ringbuilder rebalance moves 100% partitions when adding a new node to a new region"), even when limiting the amount of data transferred to the new region, a large amount of data was moved uselessly inside the initial region. For instance, it could happen that after a swift-ring-builder rebalance operation, 3% of partitions were assigned to the new region, but 88% of partitions were reassigned to different nodes inside the first region. This would uselessly load the cluster's network and storage nodes.

Eventually, commit 20e9ad5 ("Limit partition movement when adding a new tier") fixed bug 1367826. This commit has been released with Swift 2.2. It allows an operator to choose the amount of data that flows between regions when adding nodes to a new region, without side effects. This enables the operator to perform a multi-step cluster split, by first adding devices with very low weights to the new region and then progressively increasing the weights step by step, until 1 replica of every object has been transferred to the new region. Since the number of partitions assigned to the new region depends on the weights assigned to the new devices, the operator has to compute the appropriate weights.

Computing new region weight for a given ratio of partitions

In order to assign a given ratio of partitions to a new region, a Swift operator can compute the devices' weights by using the following formula.

Given:

  • w1 is the weight of a single device in region r1
  • r1 has n1 devices
  • W1 = n1 * w1 is the full weight of region r1
  • r2 has n2 devices
  • w2 is the weight of a single device in region r2
  • W2 = n2 * w2 is the full weight of region r2
  • r is the ratio of partitions we want in region r2

We have:

  • r = W2 / (W1 + W2)
  • <=> W2 = r * W1 / (1 - r)
  • <=> w2 = r * W1 / ((1 - r) * n2)

w2 is the weight to set to each device of region r2
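
As a small helper (our own illustration, not one of the scripts referenced below), the formula translates directly into code:

def weight_for_ratio(W1, n2, r):
    """Weight for each new device: w2 = r * W1 / ((1 - r) * n2)."""
    return r * W1 / ((1.0 - r) * n2)

# Example: region 1 has a total weight of 10000 and we add 12 devices that
# should receive 3% of the partitions:
# weight_for_ratio(10000, 12, 0.03) -> ~25.77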

Computing new devices weight for a given number of partitions

In some cases the operator may prefer to specify the number of partitions (rather than its ratio) that he wishes to assign to the devices of a new region.

Given:

  • p1 the number of partitions in region r1
  • W1 the full weight of region r1
  • p2 the number of partitions in region r2
  • W2 the full weight of region r2

We have the following equality:

  • p1 / W1 = p2 / W2
  • <=> W2 = W1 * p2 / p1
  • <=> w2 = (W1 * p2) / (n2 * p1)

w2 is the weight to set to each device of region r2
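
Again as an illustrative helper, the partition-count variant is:

def weight_for_partitions(W1, p1, p2, n2):
    """Weight for each new device: w2 = (W1 * p2) / (n2 * p1)."""
    return (W1 * p2) / float(n2 * p1)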

Some scripts to compute weights automatically

I made some Swift scripts available to facilitate adding nodes to a new region. swift-add-nodes.py allows adding nodes to a new region with a minimal weight, so that only 1 partition will be assigned to each device (the number and names of devices are set in a constant at the beginning of the script and have to be updated). Then swift-assign-partitions.py allows assigning a chosen ratio of partitions to the new region.

Example of deployment

Here's an example of the steps a Swift operator can follow in order to split a one-region cluster into 2 regions smoothly. A first step may consist of adding some new nodes to the new region and assigning 1 partition to each device. This would typically move from a few hundred megabytes to a few gigabytes of data, allowing us to check that everything (network, hardware, ...) is working as expected. We can use the swift-add-nodes.py script to easily add nodes to our new region with a minimal weight so that only 1 partition will be assigned to each device:

$ python swift-add-nodes.py object.builder object.builder.s1 2 6000 127.0.0.1 127.0.0.2 127.0.0.3
Adding device: {'weight': 5.11, 'zone': 0, 'ip': '127.0.0.1', 'region': 2, 'device': 'sdb1', 'port': 6000}
Adding device: {'weight': 5.11, 'zone': 0, 'ip': '127.0.0.1', 'region': 2, 'device': 'sdc1', 'port': 6000}
Adding device: {'weight': 5.11, 'zone': 0, 'ip': '127.0.0.1', 'region': 2, 'device': 'sdd1', 'port': 6000}
Adding device: {'weight': 5.11, 'zone': 0, 'ip': '127.0.0.1', 'region': 2, 'device': 'sde1', 'port': 6000}
Adding device: {'weight': 5.11, 'zone': 0, 'ip': '127.0.0.2', 'region': 2, 'device': 'sdb1', 'port': 6000}
Adding device: {'weight': 5.11, 'zone': 0, 'ip': '127.0.0.2', 'region': 2, 'device': 'sdc1', 'port': 6000}
Adding device: {'weight': 5.11, 'zone': 0, 'ip': '127.0.0.2', 'region': 2, 'device': 'sdd1', 'port': 6000}
Adding device: {'weight': 5.11, 'zone': 0, 'ip': '127.0.0.2', 'region': 2, 'device': 'sde1', 'port': 6000}
Adding device: {'weight': 5.11, 'zone': 0, 'ip': '127.0.0.3', 'region': 2, 'device': 'sdb1', 'port': 6000}
Adding device: {'weight': 5.11, 'zone': 0, 'ip': '127.0.0.3', 'region': 2, 'device': 'sdc1', 'port': 6000}
Adding device: {'weight': 5.11, 'zone': 0, 'ip': '127.0.0.3', 'region': 2, 'device': 'sdd1', 'port': 6000}
Adding device: {'weight': 5.11, 'zone': 0, 'ip': '127.0.0.3', 'region': 2, 'device': 'sde1', 'port': 6000}

$ swift-ring-builder object.builder.s1 rebalance
Reassigned 12 (0.00%) partitions. Balance is now 0.18.

Subsequent steps may consist of increasing the ratio of partitions assigned to the new region in steps of a few percent (let's say 3%), until one third of the total cluster data is stored in the new region. The swift-assign-partitions.py script assigns a chosen ratio of partitions to the new region:

$ python swift-assign-partitions.py object.builder.s2 object.builder.s3 2 0.03
Setting new weight of 10376.28 to device 1342
Setting new weight of 10376.28 to device 1343
Setting new weight of 10376.28 to device 1344
Setting new weight of 10376.28 to device 1345
Setting new weight of 10376.28 to device 1346
Setting new weight of 10376.28 to device 1347
Setting new weight of 10376.28 to device 1348
Setting new weight of 10376.28 to device 1349
Setting new weight of 10376.28 to device 1350
Setting new weight of 10376.28 to device 1351
Setting new weight of 10376.28 to device 1352
Setting new weight of 10376.28 to device 1353

$ swift-ring-builder object.builder.s3 rebalance
Reassigned 25119 (9.58%) partitions. Balance is now 0.25.
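
For completeness, the weight update performed at each step could look roughly like the following sketch, under the same assumptions as above (Swift's RingBuilder API, placeholder file names, region 2 being the new region); it simply applies the ratio formula from the first section and sets the resulting weight on every device of the new region:

from swift.common.ring.builder import RingBuilder

def assign_partitions(builder_in, builder_out, new_region, ratio):
    builder = RingBuilder.load(builder_in)
    devs = [d for d in builder.devs if d is not None]
    old_devs = [d for d in devs if d['region'] != new_region]
    new_devs = [d for d in devs if d['region'] == new_region]
    W1 = sum(d['weight'] for d in old_devs)          # weight of the existing region(s)
    w2 = ratio * W1 / ((1 - ratio) * len(new_devs))  # per-device weight for the new region
    for d in new_devs:
        print('Setting new weight of %.2f to device %d' % (w2, d['id']))
        builder.set_dev_weight(d['id'], w2)
    builder.save(builder_out)

assign_partitions('object.builder.s2', 'object.builder.s3', 2, 0.03)

As before, the new builder file has to be rebalanced with swift-ring-builder for the partitions to actually be reassigned.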

Thanks & related links

Special thanks to Christian Schwede for the awesome work he did to improve the swift-ring-builder.

Interested in more details about how the OpenStack Swift ring works?

Want to know more about all of this? Come see our talk, Using OpenStack Swift for Extreme Data Durability, at the next OpenStack Summit in Paris!

by Florent Flament at October 20, 2014 09:25 PM

OpenStack Blog

OpenStack Workshop At Grace Hopper Open Source Day 2014

This year, OpenStack participated in Open Source Day (OSD) at the Grace Hopper Celebration of Women in Computing (GHC) for the second time. The main focus of this year’s Open Source Day was humanitarian applications. Along with OpenStack, participating open source projects included Microsoft Disaster Recovery, Ushahidi, Sahana Software Foundation and others.

As important as it is to build humanitarian applications, it is equally important that they are up and running in times of need and disaster. Hence, the OpenStack code-a-thon focused on building fault tolerant and scalable architectures using servers, databases and load balancers.

The six-hour code-a-thon started at 12:30 p.m. on October 8. OpenStack had more than 55 participants, ranging from college and university students to professors and teachers to professionals from various software companies. The day kicked off with a presentation by Egle Sigler on the basics of cloud computing and OpenStack, and on what factors one must keep in mind when designing a fault tolerant architecture.

We divided the participants into smaller groups of five to six, and each group had a dedicated volunteer. We had two activities planned for the OSD. During the first activity, the participants wrote a Python script to deploy two web servers with an HAProxy server and a database server. The second activity involved deploying a demo Ushahidi application on cloud servers using Heat templates. Along with completing these activities, the participants were encouraged to architect and deploy their own solutions on the public cloud.

We had some pre-written base code in the GitHub repository to help the participants get started. We used OpenStack-powered Rackspace Cloud Servers for the deployments. Some of the participants were more adventurous and even wrote code to back up their information using Swift/Cloud Files.

The participants came with different skill levels. For some, it was their first time getting accustomed to the command line and using git; for others, it was their first time trying out OpenStack. Everyone who attended the code-a-thon got to learn something new!

At the end of the day, one of the participants, Yanwei Zhang, demoed how, after decommissioning one of the two Apache servers, the application could still be accessed using the load balancer IP.

We received some great feedback from the participants. Here are some of the responses we received in an anonymous survey:

“Got to learn about OpenStack deployment and meet some great women.”

“It was fun learning something new.”

“I liked the participation of the volunteers; their experience was great to hear!”

The Open Source Day would not have been possible without the help of the amazing volunteers who inspired the participants to keep hacking and learning. One of the participants mentioned: “The helpers were awesome, very positive, and obviously very enthusiastic about their work. Good job.” Overall, we had 14 volunteers, a mix of Rackers and graduates from the GNOME OPW program: Victoria Martínez de la Cruz, Jenny Vo, Sabeen Syed, Anne Gentle, Dragana Perez, Riddhi Shah, Zaina Afoulki, Lisa Clark, Zabby Damania, Cindy Pallares-Quezada, Besan Abu Radwan, Veera Venigalla, Carla Crull and Benita Daniel.

This is a post written and contributed by Egle Sigler and Iccha Sethi.

Egle Sigler is a Principal Architect on a Private Cloud Solutions team at Rackspace. In addition to working with OpenStack and related technologies, Egle is a governing board member for POWER (Professional Organization of Women Empowered at Rackspace), Rackspace’s internal employee resource group dedicated to empowering women in technology. Egle holds a M.S. degree in Computer Science.

Iccha Sethi is a long time contributor to OpenStack and has worked on the Cloud Images (Glance) and Cloud Databases (Trove) OpenStack products at Rackspace. She has been involved in several community initiatives including being a mentor for GNOME OPW program and is the founder of Let’s Code Blacksburg!

by Iccha Sethi at October 20, 2014 08:17 PM

Percona

Autumn: A season of MySQL-related conferences. Here’s my list

Autumn is a season of MySQL-related conferences, and I’m about to hit the road to speak at and attend quite a few of them.

This week I’ll participate in All Things Open, a local conference for me here in Raleigh, N.C., and therefore one I do not have to travel for. All Things Open explores open source, open tech and the open web in the enterprise. I’ll be speaking on SSDs for Databases at 3:15 p.m. on Thursday, Oct. 23, and I’ll also be participating in a book signing for the High Performance MySQL book at 11:45 p.m. at the “Meet the Speaker” table. We are also proud to be a sponsor of this show, so please stop by and say “Hi” at our booth in the expo hall.

Following this show I go to Moscow, Russia, for the Highload++ conference. This is a wonderful show for people interested in high-performance solutions for Internet applications, and I attend almost every year. It has a great lineup of speakers from leading Russian companies as well as many top international speakers covering a lot of diverse technologies. I have three talks at this show, covering Application Architecture, Using Indexes in MySQL, and SSD and Flash Storage for Databases. I’m looking forward to reconnecting with my many Russian friends at this show.

From Highload I go directly to Percona Live London 2014 (Nov. 3-4), which is the show we’re putting together – which of course means it is filled with great in-depth information about MySQL and its variants. I think this year we have a good balance of talks from MySQL users such as Facebook, GitHub, Booking.com, eBay, Spil Games and the IE Domain Registry, as well as from vendors with in-depth information about their products and experience with many customer environments – MySQL @ Oracle, HP, HGST, Percona, MariaDB, Pythian, Codership, Continuent, Tokutek, FromDual, OlinData. It looks like it is going to be a great show (though of course I’m biased), so do not forget to get registered if you have not already. (On Twitter use hashtag #PerconaLive)

The show I’m sorry to miss is the OpenStack Paris Summit. Even though it is so close to London, the additional visa logistics make it unfeasible for me to visit. There will be a fair number of Perconians at the show, though. Our guys will be speaking about a MySQL and OpenStack Deep Dive as well as Percona Server Features for OpenStack and Trove Ops. We’re also exhibiting at this show, so please stop by our booth and say “hi.”

Finally, there is AWS re:Invent in Las Vegas Nov. 11-14. I have not submitted any talks for this one, but I’ll drop in for a day to check it out. We’re also exhibiting at this show, so if you’re around please stop by and say “hi.”

This is going to be quite a busy month with a lot of events! There are actually more events that we’re speaking at or attending. If you’re interested in the events we’re participating in, there is a page on our web site that tells you just that! I also invite you to submit papers to speak at the new OpenStack Live 2015 conference April 13-14, which runs parallel to the annual Percona Live MySQL Conference and Expo 2015 April 13-16 – both at the Hyatt Regency Santa Clara & The Santa Clara Convention Center in Silicon Valley.

The post Autumn: A season of MySQL-related conferences. Here’s my list appeared first on MySQL Performance Blog.

by Peter Zaitsev at October 20, 2014 02:58 PM