July 24, 2014

Sean Dague

Splitting up Git Commits

Human review of code takes a lot of time. It takes even longer if the proposed code has a bunch of unrelated things going on in it. A very common piece of review commentary is “this is unrelated, please put it in a different patch”. You may be thinking to yourself “gah, so much work”, but it turns out git has built-in tools for this. Let me introduce you to git add -p.

Let’s look at this Grenade review - https://review.openstack.org/#/c/109122/1. This was the result of a day’s worth of hacking to get some things in order. Joe correctly pointed out there was at least 1 unrelated change in that patch (I think he was being nice, there were probably at least 4 things going on that should have been separate). Those things are:

  • The quiesce time for shutdown, which actually fixes bug 1285323 all on its own.
  • The reordering of the directory creates so it works on a system without /opt/stack
  • The conditional upgrade function
  • The removal of the stop short circuits (which probably shouldn’t have been done)

So how do I turn this 1 patch, which is at the bottom of a patch series, into 3 patches, plus drop out the bit that I did wrong?

Step 1: rebase -i master

I start by running git rebase -i master on my tree to put myself into interactive rebase mode. In this case I want to edit the first commit in order to split it out.
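As a sketch (the hashes and subject lines here are made up), the todo list git opens looks something like this; the todo is ordered oldest-first, so changing pick to edit on the first line tells git to stop right after applying the commit I want to split:

edit a1b2c3d grenade: the big patch to split up
pick d4e5f6a second patch in the series
pick 0e1f2a3 third patch in the series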


Step 2: reset the changes

git reset ##### will unstage all the changes back to the referenced commit, so I’ll be working from a blank slate to add the changes back in. So in this case I need to figure out the last commit before the one I want to change, and do a git reset to that hash.
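Since the interactive rebase stopped right at the commit being split, its parent is simply HEAD^, so this is equivalent (a sketch; git’s default mixed reset leaves all the changes in the working tree, just unstaged):

git reset HEAD^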


Step 3: commit in whole files

Unrelated change #1 was fully isolated in a whole file (stop-base), so that’s easy enough to do a git add stop-base and then git commit to build a new commit with those changes. When splitting commits always do the easiest stuff first to get it out of the way for tricky things later.
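Concretely, that step is just:

git add stop-base
git commit   # write the commit message for unrelated change #1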

Step 4: git add -p 

In this change grenade.sh needs to be split up all by itself, so I ran git add -p to start the interactive git add process. You will be presented with a series of patch hunks and a prompt about what to do with them: y = yes add it, n = no don’t, and lots of other options for trickier situations.
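The interaction looks roughly like this (hunk body elided; the exact option list varies a little between git versions):

$ git add -p
diff --git a/grenade.sh b/grenade.sh
...
Stage this hunk [y,n,q,a,d,/,j,J,g,e,?]?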


In my particular case the first hunk is actually 2 different pieces of function, so y/n isn’t going to cut it. In that case I can type ‘e’ (edit), and I’m dumped into my editor staring at the patch, which I can interactively modify to be the patch I want.


I can then delete the pieces I don’t want in this commit. Those deleted pieces will still exist in the uncommitted work, so I’m not losing anything; I’m just not dealing with it yet.


Ok, that looks like just the part I want, as I’ll come back to the upgrade_service function in patch #3. So save it, then find all the other hunks in the file that are related to that change and add them to this patch as well.


After saying yes to those remaining hunks, as well as one other towards the end, this commit is ready to be ‘git commit’ed.

Now what’s left is basically just the upgrade_service function changes, which means I can git add grenade.sh as a whole. I actually decided to fix up the stop calls before doing that, just by editing grenade.sh before adding the final changes. After it’s done, git rebase --continue rebases the rest of the changes on top of this, giving me a shiny new 5 patch series that’s a lot clearer than the 3 patch one I had before.

Step 5: Don’t forget the idempotent ID

One last important thing. This was a patch in gerrit before, which means when I started I had an idempotent ID on the change. In splitting 1 change into 3, I added that ID back to patch #3 so that reviewers would understand it was an update to something they had reviewed before.
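In gerrit terms, the idempotent ID is the Change-Id line that gerrit’s commit-msg hook adds to the bottom of the commit message; a made-up example of what gets carried over:

Change-Id: I3a1b2c4d5e6f708192a3b4c5d6e7f8091a2b3c4d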

It’s almost magic

As a git user, git add -p is one of those things, like git rebase -i, that you really need in your toolkit to work with anything more than trivial patches. It takes practice to build the right intuition here, but once you do, you can really slice up patches in a way that’s much easier for reviewers to work with, even if that wasn’t how the code was written the first time.

Code that is easier for reviewers to review wins you lots of points, and will help with landing your patches in OpenStack faster. So getting used to this upfront is well worth your time.

by Sean Dague at July 24, 2014 12:33 PM

Mirantis

Meet Your OpenStack Training Instructor: Reza Roodsari

Next up in our “Meet your OpenStack Training Instructor” series, we spend a few moments talking with Reza Roodsari.


Tell us more about your background. How did you become involved in OpenStack training?


Just like the satisfaction a person receives after they put together a good puzzle, I have always had passion for working with complex, intricate systems.

For me, the cloud is a perfect extension of this passion – and playing in the OpenStack playground, with its ever-evolving conglomeration of open-source components, is just as intriguing as a Mandelbrot set. Just like a Mandelbrot set, an OpenStack environment is a grand, captivating, and immense landscape.

It is a place where one is never bored.

What do you enjoy most about training?

Any successful teacher must possess the innate ability to step back, structure and articulate information in a manner that transfers knowledge and facilitates learning. For me this has always been one of teaching’s biggest rewards. I truly enjoy the challenge of discovering new ways to take difficult subjects, and present them in an easy to understand, efficient manner.

Of course, as is the case with any discipline or domain, intuitive understanding comes with a commitment to continued practice and education. For me, this has always been an added benefit that comes from my passion for teaching. It allows me to approach a subject from a deeply personal level, and it opens up the opportunity to gain an intuitive understanding.


What do you find the biggest challenge in training students to use OpenStack?

Simply put: A good teacher makes complex topics easy to understand. A struggling teacher makes an easy subject difficult to understand.

In terms of OpenStack training, with its ever-evolving collection of moving parts, the challenge is staying committed to taking a complex subject and presenting it in an easy-to-understand manner.

Here at Mirantis Training, our promise and our challenge has always been to ensure that every student walks away with a deep understanding of OpenStack. Our mission is to deliver the knowledge, skillset and tools they need to tackle the challenge of a real-world OpenStack environment.

What kinds of professionals are most likely to benefit from participating in this class?


In our introductory course, we cover all the fundamentals you need to know before you dive into architecting and deploying a cloud based OpenStack environment. While our classes are structured to prepare students from an array of different backgrounds and skillsets, networking professionals and IT engineers comprise a high percentage of our classes.

What advice would you give our readers who want to learn more about OpenStack?

It all begins with time and commitment to learning. Everything we teach at Mirantis, you can learn on your own, but you need to be prepared to invest the time and effort – and I would start with http://www.openstack.org/software/start/.

Of course, the advantage of an OpenStack training course from Mirantis is that this learning curve is reduced dramatically. I encourage any IT professional committed to expanding their understanding of OpenStack to seriously consider attending one of our trainings.


Read more about our instructors on the Mirantis Training website.

The post Meet Your OpenStack Training Instructor: Reza Roodsari appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Lana Zhudina at July 24, 2014 11:14 AM

Rafael Knuth

Google+ Hangout: Clocker - Creating a Docker Cloud w/ Apache Brooklyn

In this meetup we introduce Clocker - an Apache software licensed open source project which lets you...

July 24, 2014 09:56 AM

Percona

DBaaS, OpenStack and Trove 101: Introduction to the basics

We’ll be publishing a series of posts on OpenStack and Trove over the next few weeks, diving into their usage and purpose. For readers who are already familiar with these technologies, there should be no doubt as to why we are incredibly excited about them, but for those who aren’t, consider this a small introduction to the basics and concepts.

What is Database as a Service (DBaaS)?
In a nutshell, DBaaS – as it is frequently referred to – is a loose moniker for the concept of providing a managed cloud-based database environment accessible to users, applications or developers. Its aim is to provide a full-fledged database environment while minimizing the administrative turmoil and pains of managing the surrounding infrastructure.

Real life example: Imagine you are working on a new application that has to be accessible from multiple regions. Building and maintaining a large multiregion setup can be very expensive. Furthermore, it introduces additional complexity and strain on your system engineers once timezones start to come into play. The challenge of having to manage machines in multiple datacenters won’t simplify your release cycle, nor increase your engineers’ happiness.

Let’s take a look at some of the questions DBaaS could answer in a situation like this:

- How do I need to size my machines, and where should I locate them?
Small environments require less computing power and can be a good starting point, although this also means they may not be as well-prepared for future growth. Buying larger-scale hardware and hosting can be very expensive and a big stumbling block for a brand new development project. Hosting machines in multiple DCs could also introduce administrative difficulties, like having different SLAs and potential issues setting up WAN or VPN communications. DBaaS introduces an abstraction layer, so these considerations aren’t yours, but those of the company offering it, while you get to reap all the rewards.

- Who will manage my environment from an operational standpoint?
Staffing considerations and taking on the knowledge required to properly maintain a production database are often either temporarily swept under the rug or, when the situation turns out badly, the cause of the untimely demise of quite a few young projects. Rather than think about how long ago you should have applied that security patch, wouldn’t it be nice to just focus on managing the data itself, and be otherwise confident that the layers beneath it are managed responsibly?

- Have a sudden need to scale out?
Once you’re up and running, enjoying the success of a growing user base, your environment will need to scale accordingly. Rather than think long and hard about the many options available, as well as the logistics attached to those changes, your DBaaS provider could handle this transparently.

Popular public options: Here are a few names of public services you may have come across already that fall under the DBaaS moniker:

- Amazon RDS
- Rackspace cloud databases
- Microsoft SQLAzure
- Heroku
- Clustrix DBaaS

What differentiates these services from a standard remote database is the abstraction layer that fully automates their backend, while still offering an environment that is familiar to what your development team is used to (be it MySQL, MongoDB, Microsoft SQLServer, or otherwise). A big tradeoff to using these services is that you are effectively trusting an external company with all of your data, which might make your legal team a bit nervous.

Private cloud options?
What if you could offer your team the best of both worlds? Or even provide a similar type of service to your own customers? Over the years, a lot of platforms have been popping up to allow effective management and automation of virtual environments such as these, allowing you to effectively “roll your own” DBaaS. To get there, there are two important layers to consider:

  • Infrastructure Management, also referred to as Infrastructure-as-a-Service (IaaS), focusing on the logistics of spinning up virtual machines and keeping their required software packages running.
  • Database Management, previously referred to as DBaaS, transparently coordinating multiple database instances to work together and present themselves as a single, coherent data repository.

Examples of IaaS products:
- OpenStack
- OpenQRM

Example of DBaaS:
- Trove
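For a taste of what this looks like in practice, here is a minimal sketch of provisioning a database instance with Trove’s command line client, circa 2014 (the instance name, flavor id and volume size are made up):

trove create my-first-db 7 --size 2   # name, flavor id, volume size in GB
trove list                            # watch the new instance become ACTIVE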

Main Advantages of DBaaS
For reference, the main reasons why you might want to consider using an existing DBaaS are as follows:

- Reduced Database management costs

DBaaS reduces the amount of maintenance you need to perform on isolated DB instances. You offload the system administration of hardware, OS and database to either a dedicated service provider or, in the case where you are rolling your own, allow your database team to more efficiently manage and scale the platform (public vs. private DBaaS).

- Simplifies certain security aspects

If you are opting to use a DBaaS platform, the responsibility of worrying about this or that patch being applied falls to your service provider, and you can generally assume that they’ll keep your platform secure from the software perspective.

- Centralized management

One system to rule them all. A guarantee of no nasty surprises concerning that one ancient server that should have been replaced years ago, but you never got around to it. As a user of DBaaS, all you need to worry about is how you interface with the database itself.

- Easy provisioning

Scaling of the environment happens transparently, with minimal additional management.

- Choice of backends

Typically, DBaaS providers offer you the choice of a multitude of database flavors, so you can mix and match according to your needs.

Main Disadvantages
- Reduced visibility of the backend

Releasing control of the backend requires a good amount of trust in your DBaaS provider. There is limited or no visibility into how backups are run and maintained, which configuration modifications are applied, or even when and which updates will be implemented. Just as you offload your responsibilities, you in turn need to rely on an SLA contract.

- Potentially harder to recover from catastrophic failures

Similarly to the above, unless your service providers have maintained thorough backups on your behalf, the lack of direct access to the host machines means that it could be much harder to recover from database failure.

- Reduced performance for specific applications

There’s a good chance that you are working on a shared environment. This means the amount of workload-specific performance tuning options is limited.

- Privacy and Security concerns

Although it is much easier to maintain and patch your environment, having a centralized system also means you’re more prone to potential attacks targeting your dataset. Whichever provider you go with, make sure you are intimately aware of the measures they take to protect you, and what is expected from your side to help keep it safe.

Conclusion: While DBaaS is an interesting concept that introduces a completely new way of approaching an application’s database infrastructure, and can bring enterprises easily scalable and financially flexible platforms, it should not be considered a silver bullet. Some big tradeoffs need to be considered carefully from the business perspective, and any move there should be accompanied by careful planning and investigation of options.

Embracing the immense flexibility these platforms offer, though, opens up a lot of interesting perspectives too. More and more companies are looking at ways to roll their own “as-a-Service”, provisioning completely automated hosted platforms for customers on-demand, and abstracting their management layers to allow them to be serviced by smaller, highly focused technical teams.

Stay tuned: Over the next few weeks we’ll be publishing a series of posts focusing on the combination of two technologies that allow for this type of flexibility: OpenStack and Trove.

The post DBaaS, OpenStack and Trove 101: Introduction to the basics appeared first on MySQL Performance Blog.

by Dimitri Vanoverbeke at July 24, 2014 07:00 AM

Mika Ayenson

Openstack Re-Heat



Hello All,

My name is Mika Ayenson and I have the privilege to intern at Johns Hopkins - Applied Physics Lab. I’m really excited to release the latest proof of concept, “Re-Heat”. Re-Heat is a JHU/APL-developed tool to help OpenStack users quickly rebuild their OpenStack environments via OpenStack’s Heat.

I have included the abstract to our paper here:

Abstract

OpenStack has experienced tremendous growth since its initial release just over four years ago. Many of the enhancements, such as the Horizon interface and Heat, make deploying complex network environments in the cloud from scratch easier. The Johns Hopkins University Applied Physics Lab (JHU/APL) has been using the OpenStack environment to conduct research, host proofs-of-concept, and perform testing & experimentation. Our experience reveals that during the environment development lifecycle, users and network architects constantly change the environments (stacks) they originally deployed. Once development has reached a point at which experimentation and testing is prudent, scientific methodology requires recursive testing to determine the repeatability of the phenomena observed. This requires the same entry point (an identical environment) into the testing cycle. Thus, it is necessary to capture all the changes made to the initial environment during the development phase and modify the original Heat template. However, OpenStack has not had a tool to help automate this process. In response, JHU/APL developed a proof-of-concept automation tool called “Re-Heat,” which this paper describes in detail.

I hope you all enjoy this as I have truly enjoyed playing with Heat and developing Re-Heat.

Cheers,
Mika

by Mika ayenson (noreply@blogger.com) at July 24, 2014 04:17 AM

Adam Young

Devstack mounted via NFS

Devstack allows the developer to work with the master branches for upstream OpenStack development. But Devstack performs many operations (such as replacing pip) that might be viewed as corrupting a machine, and should not be done on your development workstation. I’m currently developing with Devstack on a Virtual Machine running on my system. Here is my setup:

Both my virtual machine and my base OS are Fedora 20. To run the virtual machine, I use KVM and virt-manager. My VM is fairly beefy, with 2 GB of RAM allocated and a 28 GB hard disk.

I keep my code in git repositories on my host laptop. To make the code available to the virtual machine, I export them via NFS, and mount them on the host VM in /opt/stack, owned by the ayoung user, which mirrors the setup on the base system.

Make sure NFS is running with:

sudo systemctl enable nfs-server.service 
sudo systemctl start  nfs-server.service

My /etc/exports:

/opt/stack/ *(rw,sync,no_root_squash,no_subtree_check)

And to apply the changes in this file (re-exporting everything in /etc/exports):

sudo exportfs -r

Make sure firewalld has the port for NFS open, but only for the internal network. For me, this is the interface:

virbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255

I used the firewall-config application to modify firewalld:

For both, make sure the Configuration select box is set to Permanent, or you will be making this change each time you reboot.

First add the interface:

[screenshot: firewalld NFS interfaces]

And enable NFS:

[screenshot: firewalld NFS ports]
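If you would rather script it, a roughly equivalent command line version with firewall-cmd (a sketch, assuming the internal zone is the right one for virbr0 on your setup):

sudo firewall-cmd --permanent --zone=internal --add-interface=virbr0
sudo firewall-cmd --permanent --zone=internal --add-service=nfs
sudo firewall-cmd --reload   # pick up the permanent changes now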

In the Virtual machine, I added a user (ayoung) with the same numeric userid and group id from my base laptop. To find these values:

$ getent passwd ayoung
ayoung:x:14370:14370:Adam Young:/home/ayoung:/bin/bash

I admit I created them when I installed the VM, which I did using the Anaconda installer and a DVD net-install image. However, the same thing can be done using useradd. I also added the user to the wheel group, which simplifies sudo.

On the remote machine, I created /opt/stack and let the ayoung user own them:

$ sudo mkdir /opt/stack ; sudo chown ayoung:ayoung /opt/stack

To mount the directory via nfs, I made an /etc/fstab entry:

192.168.122.1:/opt/stack /opt/stack              nfs4  defaults 0 0 

And now I can mount the directory with:

$ sudo mount /opt/stack

I went through and updated the git repos in /opt/stack using a simple shell script.

 for DIR in */ ; do pushd $DIR ; git fetch ; git rebase origin/master ; popd ; done

The alternative is setting RECLONE=yes in /opt/stack/devstack/localrc.

When running devstack, I had to make sure that the directory /opt/stack/data was created on the host machine. Devstack attempted to create it, but got an error induced by nfs.

Why did I go this route? I need to work on code running in HTTPD, namely Horizon and Keystone. That precluded me from doing all of my work in a venv on my laptop. The NFS mount gives me a few things:

  • I keep my Git repo intact on my laptop. This includes the Private key to access Gerrit
  • I can edit using PyCharm on my Laptop.
  • I am sure that the code on my laptop and in my virtual machine is identical.

This last point is essential for remote debugging. I just got this to work for Keystone, and have submitted a patch that enables it. I’ll be working up something comparable for Horizon here shortly.

by Adam Young at July 24, 2014 01:14 AM

July 23, 2014

Arx Cruz

Deleting OpenStack Instances directly from database

Today I had a problem with my CI. Basically, one of my compute nodes went down, and all the VMs created on that compute node stopped working (of course!). Since I hate to do a nova list and see a lot of VMs in ERROR state, and I wasn’t being able...

by Arx Cruz at July 23, 2014 11:27 PM

openSUSE Lizards

OpenStack Infra/QA Meetup

Last week, around 30 people from around the world met in Darmstadt, Germany to discuss various things about OpenStack and its automatic testing mechanisms (CI).
The meeting was well-organized by Marc Koderer from Deutsche Telekom.
We were shown plans of what the Telekom intends to do with virtualization in general and OpenStack in particular and the most interesting one to me was to run clouds in dozens of datacenters across Germany, but have a single API for users to access.
There were some introductory sessions about the use of git review and gerrit, that mostly had things I (and I guess the majority of the others) already learned over the years. It included some new parts such as tracking “specs” – specifications (.rst files) in gerrit with proper review by the core reviewers, so that proper processes could already be applied in the design phase to ensure the project is moving in the right direction.

On the second day we learned that the infra team manages servers with puppet, about jenkins-job-builder (jjb) that creates around 4000 jobs from yaml templates. We learned about nodepool that keeps some VMs ready so that jobs in need will not have to wait for them to boot. 180-800 instances is quite an impressive number.
And then we spent three days on discussing and hacking things, the topics and outcomes of which you can find in the etherpad linked from the wiki page.
I got my first infra patch merged, and a SUSE Cloud CI account set up, so that in the future we can test devstack+tempest on openSUSE and have it comment in Gerrit. And maybe some day we can even have a test to deploy crowbar+openstack from git (including the patch from an open review) to provide useful feedback, but for that we might first want to move crowbar (which consists of dozens of repos – one for each module) to stackforge – which is the openstack-provided Gerrit hosting.

see also: pleia2′s post

Overall for me it was a nice experience to work together with all these smart people, and we certainly had a lot of fun.

by bmwiedemann at July 23, 2014 01:54 PM

Tesora Corp

Red Hat and Mirantis battle in the OpenStack market and VMware needs to find a way into the fight

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week. If you like what you see, please consider subscribing.

Here we go with this week's links:

Red Hat releases Inktank Ceph Enterprise 1.2 | ZDNet

If you wanted proof that Red Hat is serious about OpenStack, look no further than its purchase of Inktank in April. Just months after acquiring the company, Red Hat has already turned around an enterprise release of Inktank Ceph. Red Hat says it's all part of an overall strategy to be an OpenStack powerhouse and bringing enterprise-class software defined storage to OpenStack via Ceph is a big part of that.

Oracle, Mirantis team up to grab Red Hat's OpenStack crown | InfoWorld

While Red Hat has made its desire to dominate OpenStack clear, the rest of the industry isn't sitting idly by and ceding anything to them. Last week, Mirantis announced a deal with Oracle to sell OpenStack services to Oracle Linux and VM customers. It's part of a larger strategy by Mirantis to team with corporate players. Last month, Mirantis announced a similar deal with IBM.

The Cloudcast #152 - How Large does Mirantis Loom Over OpenStack? | Javalobby

And speaking of Mirantis, the company is clearly making a concerted effort to blunt Red Hat's growing influence on the OpenStack community. In this podcast interview, Mirantis CEO Adrian Ionel talks about Mirantis' role in the community and the growing demand for OpenStack in Europe.

VMware Must Conquer the OpenStack Battleground if It Wants to Grow - TheStreet

As companies like Red Hat and Mirantis exert growing influence on the quickly evolving OpenStack community, Wall Street has taken notice and VMware is a company that has to evolve to continue to stay relevant. This article suggests that VMware could find the next growth path by embracing and conquering the OpenStack market.

What does project management mean to OpenStack? | Opensource.com

In this case, the author is talking about OpenStack as an open source project and how you manage that and the changing needs of users. He wonders whether the project could benefit from more management, and concludes it's a double-edged sword. It could gain and lose something by having more tightly controlled management, but changing community needs could drive whether tighter management of the project is warranted.

by 693 at July 23, 2014 12:26 PM

July 22, 2014

Sean Dague

OpenStack Failures

Last week we had the bulk of the brain power of the OpenStack QA and Infra teams all in one room, which gave us a great opportunity to spend a bunch of time diving deep into the current state of the Gate, figure out what’s going on, and how we might make things better.

Over the course of 45 minutes we came up with this picture of the world.


We have a system that’s designed to merge good code, and keep bugs out. The problem is that while it’s doing a great job of keeping big bugs out, subtle bugs – ones that are low percentage (showing up in, say, only 1% of test runs) – can slip through. These bugs don’t go away; they instead just build up inside of OpenStack.

As OpenStack expands in scope and function, these bugs increase as well. They might grow or shrink based on seemingly unrelated changes, dependency changes (which we don’t gate on), or timing impacts from anything in the underlying OS.

As OpenStack has grown no one has a full view of the system any more, so even identifying that a bug might or might not be related to their patch is something most developers can’t do. The focus of an individual developer is typically just wanting to land their code, not diving into the system as a whole. This might be because they are on a schedule, or just that landing code feels more fun and productive, than digging into existing bugs.

From a social aspect we seem to have found that there is some threshold failure rate in the gate that we always return to. Everyone ignores these races until we get to that failure rate, and once we get above it for long periods of time, everyone assumes fixing it is someone else’s responsibility. We had an interesting experiment recently where we dropped 300 Tempest tests by turning off Nova v3 by default, which gave us a short term failure drop, but within a couple of months we were back up to our unpleasant failure rate in the gate.

Part of the visibility question is also that most developers in OpenStack don’t actually understand how the CI system works today, so when it fails, they feel powerless. It’s just a big black box blocking their code, and they don’t know why. That’s incredibly demotivating.

Towards Solutions

Every time the gate fail rates get high, debates show up in IRC channels and on the mailing list with ideas to fix it. Many of these ideas are actually features that were added to the system years ago. Some are ideas that are provably wrong, like autorecheck, which would just increase the rate of bug accumulation in the OpenStack code base.

A lot of good ideas were brought up in the room, over the next week Jim Blair and I are going to try to turn these into something a little more coherent to bring to the community. The OpenStack CI system tries to be the living and evolving embodiment of community values at any point in time. One of the important things to remember is those values aren’t fixed points either.

The gate doesn’t exist to serve itself, it exists because before OpenStack had one, back in the Diablo days, OpenStack simply did not work. HP Cloud had 1000 patches to Diablo to be able to put it into production, and took 2 years to migrate from it to another version of OpenStack.

by Sean Dague at July 22, 2014 05:00 PM

Maish Saidel-Keesing

OpenStack Summit - It’s all about the Developers

This one has been sitting in the drafts for a while.

What pushed me to publish and finish this post was an article posted by Brian Gracely,
Will Paris be the last OpenStack Summit?

The OpenStack Summit is actually two separate tracks – one for users, and a second for developers. It is just by “chance” (not really) that they are held at the same location – at the same time – because they cater to two very different audiences.

This is very apparent – even in the logo for the summits.

[image: OpenStack Summit logo]

It is even confusing sometimes in regards to what the name of the summit is. Will this be the Juno summit (if you ask an Operator/User – yes it will) or is it the Kilo summit (Developers will give you a thumbs up here)?

How does the event work?

Five days: the first three are the Main Conference, and the last four are the Design Summit (the two overlap in the middle of the week).

[image: summit schedule]

And of course, from the mouths of babes…

The Design Summit sessions are collaborative working sessions where the community of OpenStack developers come together twice annually to discuss the requirements for the next software release and connect with other community members. It is not a classic track with speakers and presentations. (The Design Summit is not the right place to get started or learn the basics of OpenStack.)

Steve Ballmer – you remember him? He loved his developers….

[video: http://www.youtube.com/v/8To-6VIJZRE?hl=en&hd=1]
Developers, Developers, Developers

The OpenStack Foundation treats the OpenStack Developers differently. They are the people who create the product. Therefore they receive special treatment.

And by special treatment I mean:

  • The Design Summit is called a Summit, the rest of it is called the Main Conference
    (see above)
  • A completely different part of the conference only for developers – this includes:
    • Separate rooms
    • Separate schedule
    • Separate website for schedule
    • Separate submission process and voting for Design sessions
  • Constant refreshments and treats (M&M’s and Snickers bars galore, drinks, fruit)
  • Brainstorming area outside the discussion rooms
  • Multiple power outlets in every single room and everywhere
  • Every single ATC (Active Technical Contributor) receives a free pass to the summit.

    Individual Members who committed a change to a repository under any of the official OpenStack programs (as defined above) over the last two 6-month release cycles are automatically considered ATC.

Is this unfair? Perhaps – but then again, these are the people who are creating the product – so it is in the Foundation’s best interest to keep them engaged, comfortable, happy and available to continue to contribute to the community and the products.

Back to Brian Gracely’s post. Because of the developers there will always be an OpenStack Summit. Will it be the same as the past and upcoming summits – I do not know. But it is in the best interest of the Foundation to have the people developing the products and the projects come together, talk, schmooze and hash out the details of what will happen in the upcoming 6 months and the future directions of the product.

So in response to Brian – I still think that the Foundation will hold a summit – and it will always be its central event. The same way that all the major vendors have their own big conference (Cisco Live, Red Hat Summit, VMworld, etc.) every single year while also making sure they have booths at all the other conferences (as a sponsor), it will be the same for OpenStack.

I think that the summit will continue to be here next year in 2015 and beyond.

by Maish Saidel-Keesing (noreply@blogger.com) at July 22, 2014 02:00 PM

Christian Berendt

OpenStack @ EuroPython 2014

OpenStack booth @ EuroPython 2014: We are at EuroPython 2014 in Berlin at the moment. The OpenStack booth is in the basement. If you are there, visit us. We still have some OpenStack 2014 T-shirts remaining.

by berendt at July 22, 2014 11:15 AM

Opensource.com

OpenStack product management: wisdom or folly?

Two recent, excellent, blog posts have touched on a topic I've been wrestling with since May's OpenStack Summit: What is the role of the Product Management function, if any, in the OpenStack development process?

by Jim Haselmaier at July 22, 2014 09:00 AM

July 21, 2014

Piston

What is SDN and Should You Buy Into the Hype?

Hi. I’m Ben. I work on SDN integrations within Piston OpenStack™ along with Noel Burton-Krahn and Nick Bartos. For those of you unfamiliar with SDN: the initials (one of many in the world of IT) stand for Software Defined Networking. It’s a buzzword that’s been going around the networking blogs, yet everyone still grapples with the definition, benefits, and overall use case in the enterprise. In this blog, I’ll tackle this overused and mostly misunderstood topic: SDN, and SDN in OpenStack®. I won’t be able to get to all of the nitty gritty details of how SDN can help in every situation, in every datacenter. That would certainly take more than just a blog post.

So, I apologize in advance if you are in need of some clarification on SDN and encourage you to please ask the questions I may not have answered for you already (after all, that’s what the comment box below is for).

Now, let’s begin.

Before we dig in, let’s role play for a minute.

You are the architect of a very important project that will rely on a very particular, perhaps even exotic, network infrastructure. It will certainly be more complex than connecting everything directly to Top of Rack switches and then connecting those to a router or routers. You describe this network to the people who will wire it up for you. Maybe you work for a small team at a university and an intern will be pulling cables for you, or maybe you work at a large corporation and a team of professionals will construct your vast network infrastructure for you.

Either way, you draw the network diagram on a white board and do your best to make sure your people understand each part of it. They then go off to assemble your network. You hope that you described the network properly; you hope that they do not make any mistakes and plug a host into the wrong switch; you hope that they don’t accidentally leave one end of a network cable unplugged. Long story short? Plan to do a lot of hoping.

What is SDN? How does it work? How do you build it?

A simple description is that there are three parts: the physical network, the logical network, and the controller. The physical network is the actual hardware. The routers and switches and cables. The logical network is what hosts and VMs connected to the network perceive as the actual network. The controller is what talks to the physical network and configures it to behave the way that is required to create the logical network.

Why is SDN so awesome?

The Dilbert Cartoon at the top exaggerates the situation, but is pretty representative of how little work you would need to do if you implemented SDN. Things like the aforementioned hypothetical networking nightmare can cause your project to become delayed, or worse, remain unnoticed until your project goes into production and then cause all sorts of hard-to-debug problems. If you had a software defined network you wouldn’t have to deal with problems like that. Instead of drawing diagrams and trying to explain the network to humans, you would be describing it to the SDN controller. The SDN controller would then communicate with your physical networking hardware and have it reconfigure itself to create a logical network that behaved exactly as you described. Without any of the time-consuming and error-prone physical steps, you would have the network you desired.

With SDN, your important project’s network would be done faster and with fewer headaches, so you could focus on the more critical work that relied on that network. You would no longer need to worry about touching your critical networking infrastructure. Instead you would reconfigure the easily manipulated logical network that exists on top of it.

How do I use OpenStack for a SDN?

The simple answer? You play nice with Neutron.

OpenStack is made up of many pieces, each with a specialized goal: Nova, Cinder, Glance, Keystone and so on. The networking part of OpenStack is called Neutron. Neutron has many different parts. At the simplest level it provides a way for the other parts of OpenStack to inspect and manage the network. But the most powerful part of Neutron is the ability to use different SDN plugins. There is already a large variety of plugins from many well-known developers. Being able to use and manage an SDN directly through OpenStack is incredibly useful. Instead of running your cloud on top of a network that is configured from an external SDN, you can manage that network with the same tools you manage the rest of your cloud.
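As a small illustration of describing the network to the controller rather than to humans, this is roughly what building a logical network looked like with the 2014-era neutron CLI (all the names and the subnet range here are made up):

neutron net-create demo-net                            # a logical network
neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet
neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet   # wire the subnet to the router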

So is SDN just hype?

I don’t know if anyone remembers when VMs were first a “thing”. There was a lot of hype behind them. I think it’s similar with SDN – it’s going to become a thing. It may have a little ways to go, but the reality is that it’s too useful for it not to be a thing.

Managing and changing your network shouldn’t be a day spent in the datacenter. It shouldn’t take down an entire server. It should only take a few minutes, and from a single panel dashboard. Most importantly, it shouldn’t affect your workloads. The feature I work on for Piston OpenStack integrates with various SDNs via the Neutron plug-in. It keeps everything up and running, it only takes one person to change the network configuration, and, best of all, it doesn’t take an entire day. And that’s awesome.

I hope I’ve given you some insight into SDN and its benefits. Is it hype? As someone who’s seen it deployed and who’s seen it work, I believe the practicality of SDN outweighs the hype. It’s awesome to see it in practice, and you should try it out for yourself with Piston OpenStack. You can schedule a demo or download it here.

Photo credit: Dilbert.com

by Ben Brosenberg at July 21, 2014 07:40 PM

OpenStack Blog

OpenStack Community Celebrates Four Years!

User maturity, software maturity and a focus on cloud software operations are now established areas of focus for OpenStack, and none of it would be possible without the consistent growth of the OpenStack community. In the four years since the community was established, OpenStack has grown to 70+ active user groups and thousands of active members spread across 139 different countries! Throughout the month of July, we are celebrating our community milestones and progress over the past four years, as well as Superusers who support the OpenStack mission. This year, we also launched the Superuser publication to chronicle the work of users and their many accomplishments, individually and organizationally, amplifying their impact among the community.


We invite you all to join the party and celebrate 4 awesome years of OpenStack:

  • Check out the OpenStack 4th Birthday page featuring the latest stats, infographic and a web badge to download
  • Attend the birthday party in Portland, Oregon during OSCON, Tuesday, July 22
  • Attend your local birthday party, more than 50 are taking place around the world this month!
  • Visit the Superuser publication to learn about the contributors and user groups who make OpenStack successful
  • Join the conversation on Twitter today using the hashtag #OpenStack4Bday
Here are some community leaders’ perspectives reflecting on the past four years with OpenStack and their predictions for the future.

 

by Allison Price at July 21, 2014 07:36 PM

The Official Rackspace Blog » OpenStack

That Time When OpenStack Turned Four

This is huge. Really huge. If someone told me four years ago that OpenStack would be where it is today – just a mere four years in – I would’ve shrugged my shoulders and said “maybe, we’ll see.”

I am absolutely astounded by how far we’ve come as a community and as a project. Think about it: as of May 2014, OpenStack boasted 16,266 individual members in 139 countries from 355 organizations. There were 2,130 contributors, 466 average monthly contributors and 17,209 patches merged. Let’s compare that to May 2013, when there were 9,511 individual members from 209 organizations: 998 total contributors, 230 average monthly contributors and 7,260 patches merged.

Oh, and the Atlanta Summit this past May was the biggest ever, with more than 4,500 attendees from 55 different countries.

As the project continues to evolve into its fifth year, I’m excited to see increased operator participation. While developers and users are key cornerstones for OpenStack, the operators can tell us what works and what works at scale. One of our big goals for this past year was to close the feedback loop between operators and developers. Moving forward, we as a community have to continue to foster close relationships between the developers and the operators to continue innovation and balance stability. The launch this year of DefCore, a set of standards and tests that will help the community understand which projects are stable, widely used and key to interoperability, will help this progress. If you want to learn more, Rackspace is hosting the next OpenStack Ops Meetup August 25 and 26 in San Antonio.

We’ve also made great strides in making OpenStack more stable and have made great progress defining OpenStack core, two things we will continue to hammer on.

And the production use and the maturity of use cases are incredible. If you’ve been to any of the recent OpenStack Summits, you’ve seen household names talking about how they use OpenStack – Comcast, Sony, Disney, eBay, Wells Fargo, AT&T and more showed how they’re using it in production to run very real, critical workloads. More than 1,200 user surveys have been completed by users detailing their OpenStack deployments. There are more than 70 user groups and more than 9,000 members joined a user group this year alone.

At Rackspace, we are co-founders of OpenStack, but we’re also among its largest users. It’s been a boon for us and our business. Our public and private clouds are built on it. It’s a key pillar of our managed cloud strategy. And it powers much of what we do. We’ve been able to rebuild our public cloud for massive scale and OpenStack has empowered us to innovate quickly and be agile (Have you heard of OnMetal yet? That was built with OpenStack Ironic, the bare-metal provisioning program).

I’m as optimistic about OpenStack’s future as I am humbled and inspired by its growth. It’s truly a project that we – the community – have taken from a handful of lines of code to a production-ready cloud operating system that world-beating enterprises use and trust.

Year five is a big one. So let’s celebrate how far we’ve come, and look forward to where we’ll go.

by Paul Voccio at July 21, 2014 03:24 PM

DreamHost

Happy Fourth Birthday, OpenStack!

Four years ago the open and collaborative world of open source software needed a reliable cloud stack that was created not by an army of MBAs and business analysts, but by the engineers and developers that would be using and supporting it every day.

OpenStack’s founders saw to that need, and today OpenStack is much more than simply an open source cloud stack – it’s an entire movement!

The community supporting OpenStack has grown to span members in 139 countries with over two thousand contributors actively committing improvements and enhancements to make it the absolute best that it can be.

It’s no wonder that DreamHost selected OpenStack as the foundation behind DreamCompute, our public cloud solution now being put through a gauntlet of stress tests from beta testers the world over – and you can join them!  Just enter your email address right here to receive an invitation to our cost-free beta!

You can learn more about the technology behind DreamCompute, along with a great look deeper into our use of OpenStack, in this revealing look behind the blue curtain by Jonathan LaCour, DreamHost’s VP of Cloud.

DreamHost is proud to co-sponsor this month’s OpenStack Los Angeles user group meetup! If you’re in or around Metacloud’s office in Pasadena on Thursday, July 31st, RSVP today!

Happy 4th Birthday, OpenStack

 

by Brett Dunst at July 21, 2014 03:00 PM

IBM OpenStack Team

OpenStack celebrates fourth birthday

Here at IBM, we’re very excited to celebrate OpenStack’s fourth birthday. This is a great opportunity to reflect on the significant accomplishments to date as well as look forward to exciting technology advancements ahead. And of course it’s not a birthday celebration without a present for OpenStack, so make sure you stay for the end of the party when we open gifts!

Unstoppable growth

OpenStack’s growth since the foundation formation has been unprecedented, exceeding the growth of the Linux Foundation, which is no small feat. I thought it would be fun to look at some of the stats we like to track to show just how dramatic the growth has been in just two years:

[infographic: OpenStack birthday stats]

Wow, pretty amazing stats for a four year old! And OpenStack is showing no signs of stopping either with almost 130,000 overall commits with 61,000 of those in the last 12 months—not to mention more than 4,000 monthly commits since the beginning of 2014. If your developers are not hanging out with OpenStack in some shape or form, it’s time to get on board because this IaaS train is moving fast.

Notable milestones

So, those are impressive stats, you might say, but what does this mean in terms of OpenStack functionality today? As I outlined in my recent “Guide to Icehouse” blog, OpenStack has a lot to be proud of in this department as well. In this latest release of OpenStack, key areas of concern were addressed, including security, authentication, orchestration and quality assurance. IBM worked collaboratively with our partners to specifically help drive improvements to quality assurance (Tempest), compute (Nova), authentication and security (Keystone), storage (Cinder & Swift) and orchestration (Heat & HOT). Icehouse left no question that this latest release of OpenStack is enterprise ready.

(Related: A guide to OpenStack Icehouse)

IBM is proud to partner for success

As a founding member and platinum sponsor, IBM is proud to be a top contributor to the OpenStack Foundation from supporting governance in our role on the board of directors to our team of developers contributing code, reviews and debug skills from across the company. Specifically, we are very proud of our leadership on several development efforts focused on improving OpenStack to ensure it meets the needs of enterprise customers and the greater OpenStack user community. IBM contributors are spearheading efforts in OpenStack’s identity provider (Keystone) to deliver critical enhancements in the areas of federated identity support as well as adding new cross-cloud authentication and authorization mechanisms for enabling hybrid clouds based on OpenStack.

In addition, IBM contributors are driving support for standard based (Distributed Management Task Force’s CADF) auditing support for OpenStack to enable consistent reporting of audit data across cloud providers for the purposes of meeting regulatory compliance requirements.

IBM contributors also continue to drive enhancements into OpenStack’s orchestration layer (Heat) to ensure that it aligns well with popular industry cloud workload orchestration standards such as OASIS TOSCA. Finally, IBMers are also leading the refstack initiative, which is focused on making sure OpenStack distributions are meeting quality assurance requirements necessary to receive official OpenStack branding.

IBM Cloud offerings are built on OpenStack

IBM remains committed to supporting the OpenStack Foundation to provide an open, best-of-breed solution for IaaS as well as to offer our premier line of IBM Cloud offerings based on OpenStack. Please join us at the next OpenStack Summit in Paris where we will showcase our IBM Cloud offerings running on OpenStack, including IBM’s Power8 Server, IBM Bluemix, IBM Cloud Orchestrator, IBM Cloud Manager with OpenStack and Power VC.

Did you say something about a birthday present?

Why yes I did! We’re so thrilled to celebrate OpenStack’s fourth birthday that we thought a birthday present was more than appropriate. To further showcase the outstanding contributions made to OpenStack every day, we’re launching a new OpenStack developers corner blog to feature IBM’s best and brightest contributors to OpenStack code. In this blog series, our developers will give you the real scoop and inside details on day-to-day development activities and achievements at OpenStack. Some of the initial blog topics you can look forward to include deep dives on keystone, heat/hot and horizon, all from the developer’s perspective. Stay tuned.

Happy birthday, OpenStack, from the hundreds of OpenStackers at IBM who proudly contribute to your success. Here’s to an absolutely fantastic fifth year!

The post OpenStack celebrates fourth birthday appeared first on Thoughts on Cloud.

by Brad Topol at July 21, 2014 01:25 PM

Maish Saidel-Keesing

Recording of my Presentation at OpenStack Israel 2014

Embedded below you can find the recording of my session
"OpenStack in the Enterprise - Are you Ready?"

[video: http://www.youtube.com/embed/AvdesnmCjYU]

You are welcome to go over the blog post I wrote about the event.

The full playlist of all the sessions can be viewed here

I have already submitted a few sessions for the upcoming summit in Paris.

by Maish Saidel-Keesing (noreply@blogger.com) at July 21, 2014 12:30 PM

Opensource.com

A new OpenStack book, advice for contributing, and more

Interested in keeping track of what's happening in the open source cloud? Opensource.com is your source for what's happening right now in OpenStack, the open source cloud infrastructure project.

OpenStack around the web

There's a lot of interesting stuff being written about OpenStack. Here's a sampling:

by Jason Baker at July 21, 2014 07:00 AM

July 19, 2014

OpenStack in Production

OpenStack plays Tetris: Stacking and Spreading a full private cloud

At CERN, we're running a large scale private cloud which is providing compute resources for physicists analysing the data from the Large Hadron Collider. With 100s of VMs created per day, the OpenStack scheduler has to perform a Tetris like job to assign the different flavors of VMs falling to the specific hypervisors.

As we increase the number of VMs that we're running on the CERN cloud, we see the impact of a number of configuration choices made early on in the cloud deployment. One key choice is how to schedule VMs across a pool of hypervisors.

We provide our users with a mixture of flavors for their VMs (for details, see http://openstack-in-production.blogspot.fr/2013/08/flavors-english-perspective.html).

During the past year in production, we have seen a steady growth in the number of instances to nearly 7,000.


At the same time, we're seeing an increasingly elastic load as the user community explores potential ways of using clouds for physics.



Given that CERN has a fixed resource pool and the budget available is defined and fixed, the underlying capacity is not elastic and we are now starting to encounter scenarios where the private cloud can become full. Users see this as errors when they request VMs for which no free hypervisor can be located.

This situation occurs more frequently for the large VMs. Physics programs can make use of multiple cores to process physics events in parallel and our batch system (which runs on VMs) benefits from a smaller number of hosts. This accounts for a significant number of large core VMs.


The problem occurs as the cloud approaches being full. Using the default OpenStack configuration (known as 'spread'), VMs are evenly distributed across the hypervisors. If the cloud is running at low utilisation, this is an attractive configuration as CPU and I/O load are also spread and little hardware is left idle.

However, as the utilisation of the cloud increases, the resources free on each hypervisor are reduced evenly. To take a simple case, consider a cloud with two 24-core compute nodes handling a variety of flavors. If there are requests for two 1-core VMs followed by one 24-core flavor, the alternative approaches can be simulated.

In a spread configuration,
  • The first VM request lands on hypervisor A leaving A with 23 cores available and B with 24 cores
  • The second VM request arrives and following the policy to spread the usage, this is scheduled to hypervisor B, leaving A and B with 23 cores available.
  • The request for one 24 core flavor arrives and no hypervisor can satisfy it despite there being 46 cores available and only 4% of the cloud used.
In the stacked configuration,

  • The first VM request lands on hypervisor A leaving A with 23 cores available and B with 24 cores
  • The second VM request arrives and following the policy to stack the usage, this is scheduled to hypervisor A, leaving A with 22 cores and B with 24 cores available.
  • The request for one 24 core flavor arrives and is satisfied by B
A stacked configuration is achieved by making the RAM weight negative (i.e. prefer machines with less free RAM). This has the effect of packing the VMs, and is done through a nova.conf setting as follows:

ram_weight_multiplier=-1.0
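For context, a minimal sketch of where this sits in a 2014-era nova.conf on the scheduler node (the comment is ours):

[DEFAULT]
# Negative multiplier: prefer hosts with less free RAM, i.e. pack (stack) VMs
ram_weight_multiplier=-1.0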


When a cloud is initially being set up, the question of maximum packing does not often come up. However, once the cloud has workloads running under spread, it can be disruptive to move to stacked, since the existing VMs will not be moved to match the new policy.

Thus, it is important as part of the cloud planning to reflect on the best approach for each different cloud use case and avoid more complex resource rebalancing at a later date.

References

  • OpenStack configuration reference for scheduling at http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html



by Tim Bell (noreply@blogger.com) at July 19, 2014 09:39 PM

Elizabeth K. Joseph

OpenStack QA/Infrastructure Meetup in Darmstadt

I spent this week at the QA/Infrastructure Meetup in Darmstadt, Germany.

Our host was Marc Koderer of Deutsche Telekom, who sorted out all logistics for having our event at their office in Darmstadt. Aside from the summer heat (the conference room lacked air conditioning), it all worked out very well: we had a lot of space to work, the food was great, and we had plenty of water. It was also nice that the hotel most of us stayed at was an easy walk away.

The first day kicked off with an introduction by Deutsche Telekom that covered what they're using OpenStack for in their company. Since they're a network provider, networking support was a huge component, but they use other components as well to build an infrastructure, as they plan to have a quicker software development cycle that's less tied to hardware lifetimes. We also got a quick tour of one of their data centers and a demo of some of the running prototypes for quicker provisioning and changing of service levels for their customers.

Monday afternoon was spent on an on-boarding tutorial for newcomers to contributing to OpenStack, and on Tuesday we transitioned into an overview of the OpenStack Infrastructure and QA systems that we'd be working on for the rest of the week. Beyond the overview of the infrastructure presented by James E. Blair, key topics included jeepyb presented by Jeremy Stanley, devstack-gate and Grenade presented by Sean Dague, Tempest presented by Matthew Treinish (including the very useful Tempest Field Guide) and our Elasticsearch, Logstash and Kibana (ELK) stack presented by Clark Boylan.

Wednesday we began the hacking/sprint portion of the event, where we moved to another conference room and moved tables around so we could get into our respective working groups. Anita Kuno presented the Infrastructure User Manual which we're looking to flesh out, and gave attendees a task of helping to write a section to help guide users of our CI system. This ended up being a great thing for newcomers to get their feet wet with, and I hope to have a kind of entry level task at every infrastructure sprint moving forward. Some folks worked on getting support for uploading log files to Swift, some on getting multinode testing architected, and others worked on Tempest. In the early afternoon we had some discussions covering recheck language, next steps I'd be taking when it comes to the evaluation of translation tools, a "Gerrit wishlist" for items that developers are looking for as Khai Do prepares to attend a Gerrit hack event, and more. I also took time on Wednesday to dive into some documentation I noticed needed some updating after the tutorial day the day before.

Thursday the work continued, I did some reviews, helped out a couple of new contributors and wrote my own patch for the Infra Manual. It was also great to learn and collaborate on some of the aspects of the systems we use that I’m less familiar with and explain portions to others that I was familiar with.


Zuul supervised my work

Friday was a full day of discussions, which were great but a bit overwhelming (might have been nice to have had more on Thursday). Discussions kicked off with strategies for handling the continued publishing of OpenStack Documentation, which is currently just being published to a proprietary web platform donated by one of the project sponsors.

A very long discussion was then had about managing the gate runtime growth. Managing developer and user expectations for our gating system (thorough, accurate testing) while balancing the human and compute resources that we have available on the project is a tough thing to do. Some technical solutions to ease the pain on some failures were floated and may end up being used, but the key takeaway I had from this discussion was that we’d really like the community to be more engaged with us and each other (particularly when patches impact projects or functionality that you might not feel is central to your patch). We also want to stress that the infrastructure is a living entity that evolves and we accept input as to ideas and solutions to problems that we’re encountering, since right now the team is quite small for what we’re doing. Finally, there were some comments about how we run tests in the process of reviewing, and how scalable the growth of tests is over time and how we might lighten that load (start doing some “traditional CI” post merge jobs? having some periodic jobs? leverage experimental jobs more?).

The discussion I was most keen on was around the refactoring of our infrastructure to make it more easily consumable by 3rd parties. Our vision early on was that we were an open source project ourselves, but that all of our customizations were a kind of example for others to use, not that they’d want to use them directly, so we hard coded a lot into our special openstack_projects module. As the project has grown and more organizations are starting to use the infrastructure, we’ve discovered that many want to use one largely identical to ours and that making this easier is important to them. To this end, we’re developing a Specification to outline the key steps we need to go through to achieve this goal, including splitting out our puppet modules, developing a separate infra system repo (what you need to run an infrastructure) and project stuff repo (data we load into our infrastructure) and then finally looking toward a way to “productize” the infrastructure to make it as easily consumable by others as possible.

The afternoon finished up with discussions about vetting and signing of release artifacts, ideas for possible adjustment of the job definition language and how teams can effectively manage their current patch queues now that the auto-abandon feature has been turned off.

And with that – our sprint concluded! And given the rise in temperature on Friday and how worn out we all were from discussions and work, it was well-timed.

Huge thanks to Deutsche Telekom for hosting this event, being able to meet like this is really valuable to the work we’re all doing in the infrastructure and QA for OpenStack.

Full (read-only) notes from our time spent throughout the week available here: https://etherpad.openstack.org/p/r.OsxMMUDUOYJFKgkE

by pleia2 at July 19, 2014 11:07 AM

July 18, 2014

OpenStack Blog

OpenStack Community Weekly Newsletter (July 11 – 18)

DefCore Update: Input Request for Havana Capabilities

As part of our community’s commitment to interoperability, the OpenStack Board of Directors has been working to make sure that “downstream” OpenStack-branded commercial products offer the same baseline functionality and include the same upstream, community-developed code. The work to define these required core capabilities and code has been led by the DefCore Committee co-chaired by Rob Hirschfeld (his DefCore blog) and Joshua McKenty (his post). You can read more about the committee history and rationale in Mark Collier’s blog post. The next deadlines are: OSCON on July 21, 11:30 am PDT and the Board Meeting on July 22nd.

And the K cycle will be named… Kilo !

The results of the poll are just in, and the winning proposal is "Kilo". "k" is the unit symbol for "kilo", an SI unit prefix (derived from the Greek word χίλιοι, which means "thousand"). "Kilo" is often used as a shorthand for "kilogram", and the kilogram is the last SI base unit to be tied to a reference artifact (stored near Paris in the Pavillon de Breteuil in Sèvres).

Five Days + Twelve Writers + One Book Sprint = One Excellent Book on OpenStack Architecture

A dozen OpenStack experts and writers from companies across the OpenStack ecosystem gathered at VMware's Palo Alto campus for the OpenStack Architecture Design Guide book sprint. The intent was to deliver a completed book, aimed at architects and evaluators, on designing OpenStack clouds, in just five days.

Only developers should file specifications and blueprints

If you try to solve a problem with the wrong tool, you're likely going to have a frustrating experience. OpenStack developers use blueprints to define the roadmap for the various projects; the specifications attached to a blueprint are used to discuss the implementation details before code is submitted for review. Operators and users in general don't need to dive into the details of how OpenStack developers organize their work, and they definitely should never be asked to use tools designed for and by developers.

Third Party CI group formation and minutes

At this week's meeting the Third-Party group continued to discuss documentation patches, including a new terminology proposal, as well as CI system naming, logging and test timing. There was also a summary review of the current state of Neutron driver CI rollout. Anyone deploying a third-party test system or interested in easing third-party involvement is welcome to attend the meetings. Minutes of ThirdParty meetings are carefully logged.

The Road To Paris 2014 – Deadlines and Resources

Security Advisories and Notices

Tips ‘n Tricks

Upcoming Events

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

Welcome New Reviewers and Developers

Will Foster
zhangtralon
Walter Heck
Gael Chamoulaud
Mithil Arun
Fabrizio Fresco
Kieran Forde
badveli_vishnuus
JJ Asghar
Ryan Lucio
Gilles Dubreuil
Martin Falatic
Emily Hugenbruch
Bryan Jones
Christian Hofstädtler
Tri Hoang Vo
Steven Hillman
Ryan Rossiter
Rajesh Tailor
Mohit
akash
Tushar Katarki
Rajini Ram
Pawel Skowron
Pawel Skowron
Karthik Natarajan
Abhishek L
Ryan Brown
takehirokaneko
Keith Basil
Kate Coyne
Ju Lim

Latest Activity In Projects

Do you want to see at a glance the bugs filed and solved this week? Latest patches submitted for review? Check out the individual project pages on OpenStack Activity Board – Insights.

OpenStack Reactions

youwelcome

Trivial fix on someone else's review while they're asleep so Jenkins can pass

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment

by Stefano Maffulli at July 18, 2014 07:56 PM

July 17, 2014

Arx Cruz

One controller to rule them all

These days I had a problem with my environment. I had two controllers: one for production, and a second for development. The production controller had a public network interface, where you could connect to your VM directly; however, the development...

by Arx Cruz at July 17, 2014 08:27 PM

Cody Bunch

OSCON Lab Materials

tl;dr Download our OSCON lab materials here.

As a follow-up on my coming to OSCON, I thought it prudent to provide some info & downloads for the lab ahead of time.

Lab Materials

While we will have USB keys in the tutorial for everyone, we figure some of y’all might want to get started early. With that in mind, the lab materials can be downloaded here, but be aware, it’s about 4GB of stuff to download.

  • Slides – Both the PPT & PDF of the slides
  • openstackicehouse.ova – The vApp we will use in the lab
  • OpenStack_command_guide_reference.pdf – A quick reference for OpenStack CLI commands
  • Access_virtualbox_allinone.pdf – A guide for accessing the lab
  • cirros-0.3.1-x86_64-disk.img – Used in the labs
  • Osco Solutions/ – All of the labs we will be doing
  • Couch to OpenStack/ – An additional 12 hours of Getting Started with OpenStack Material
  • VirtualBox/ – Contains the VirtualBox installer for OSX, Linux, and Windows

Really, you can get the materials here

Prerequisites

To be successful in the lab, there are a few things you will need. None of these are too complex or too deep, but having them will improve your experience overall.

  • A laptop with a minimum of 4GB free RAM
  • VirtualBox or VMware Fusion/Workstation/Player installed
  • An SSH client. On Windows, Putty works well.

Some Random Statistics

Building the USB keys was an exercise in insanity. The setup looks kinda like this:
https://pbs.twimg.com/media/BstHHTaCMAACUTk.jpg

The fan was added after the first batch nearly melted the USB hub. The smell of burnt silicon was pretty intense.

  • Each key contains about 4GB of data.
  • We’re copying them 24 at a time and seeing:
    • 40 min to finish all 24 disks
    • 45MB/sec (Yes Megabytes) sustained transfer
    • 12,000 IOPS largely write

by OpenStackPro at July 17, 2014 07:56 PM

Arx Cruz

OpenStack 3rd Party CI - Part III - Configuring your puppet recipes

Last time, we talked about Puppetboard. Now let’s start to work with recipes to install our services. For that I’ve created a github project called openstack-puppet-recipes I will continue to update the github as we progress in this series of...

by Arx Cruz at July 17, 2014 07:15 PM

July 16, 2014

Cody Bunch

USB Key Duplication on OSX on the Cheap

Edit: As I got a bit deeper into the copies, a new method was needed.

Common

First, make an image of the usb disk in question. To do this, open Disk Utility, and then:

  1. Click File
  2. Click New
  3. Click “New Image From Folder…”
  4. Select your folder
  5. Wait

Next, find the image file in Finder and mount it, and record where it was mounted.
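If you'd rather script this step than click through Disk Utility, hdiutil can produce the same result. A minimal sketch, assuming your source folder is ~/oscon-lab:

# Build a compressed, read-only image from the folder.
hdiutil create -srcfolder ~/oscon-lab -format UDZO oscon-lab.dmg
# Mount it and note the mount point it reports (e.g. /Volumes/oscon-lab).
hdiutil attach oscon-lab.dmg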

Methodology 1

This is a variant of the work here.

Now that you’ve got the image and it’s mounted, plug in the USB hub containing your keys and run the following from your terminal:

$ diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
... snip
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *8.2 GB     disk3
   1:                 DOS_FAT_32 NO NAME                 8.2 GB     disk3s1

What you are looking for here are the first and last /dev/disk# entries that represent your USB keys. In my case this is 3 – 23. From there we start the copy:

# jot 21 3 emits 21 numbers starting at 3, i.e. disks 3 through 23
for i in `jot 21 3`; do asr --noverify --erase --noprompt --source /Volumes/No\ Name --target /dev/disk${i}s1 & done

In the above, note that --source specifies the /Volumes/No\ Name mount point where the image was mounted. The loop then runs over each USB disk, copying the data from the image onto it in parallel.

Methodology 2

This is a variant of the work here.

Now that you’ve got the image and it’s mounted, plug in the USB hub containing your keys and run the following from your terminal:

$ diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
... snip
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *8.2 GB     disk3
   1:                 DOS_FAT_32 NO NAME                 8.2 GB     disk3s1

What you are looking for here are the first and last /dev/disk# entries that represent your USB keys. In my case this is 3 – 27.

First unmount the disks:

for i in `jot 25 3`; do diskutil unmountDisk /dev/disk${i}; done

Next, use Homebrew to install pv if you don't have it:

brew install pv

Finally start the copy:

sudo dd if=/dev/disk2 |pv| tee >(sudo dd of=/dev/disk3 bs=16m) >(sudo dd of=/dev/disk4 bs=16m) >(sudo dd of=/dev/disk5 bs=16m) >(sudo dd of=/dev/disk6 bs=16m) >(sudo dd of=/dev/disk7 bs=16m) >(sudo dd of=/dev/disk8 bs=16m) >(sudo dd of=/dev/disk9 bs=16m) >(sudo dd of=/dev/disk10 bs=16m) >(sudo dd of=/dev/disk11 bs=16m) >(sudo dd of=/dev/disk12 bs=16m) >(sudo dd of=/dev/disk13 bs=16m) >(sudo dd of=/dev/disk14 bs=16m) >(sudo dd of=/dev/disk15 bs=16m) >(sudo dd of=/dev/disk16 bs=16m) >(sudo dd of=/dev/disk17 bs=16m) >(sudo dd of=/dev/disk18 bs=16m) >(sudo dd of=/dev/disk19 bs=16m) >(sudo dd of=/dev/disk20 bs=16m) >(sudo dd of=/dev/disk21 bs=16m) >(sudo dd of=/dev/disk22 bs=16m) >(sudo dd of=/dev/disk23 bs=16m) >(sudo dd of=/dev/disk24 bs=16m) >(sudo dd of=/dev/disk25 bs=16m) >(sudo dd of=/dev/disk26 bs=16m) | sudo dd of=/dev/disk27 bs=16m

Ok, that is a single line. It is also terrible, terrible, terrible, but it works. Some notes: you need a >(sudo dd) section for each disk except the last one, and you will need to change the disk numbers to match your environment.
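If typing two dozen process substitutions by hand sounds error-prone (it is), the command can be generated instead. A rough, untested sketch under the same assumptions (source on disk2, targets disk3 through disk27, bash as the shell):

# Build >(...) redirections for disks 3-26; the final dd writes disk27,
# so each of the 25 target disks receives the stream exactly once.
targets=""
for i in `jot 24 3`; do
  targets="$targets >(sudo dd of=/dev/disk${i} bs=16m)"
done
eval "sudo dd if=/dev/disk2 | pv | tee $targets | sudo dd of=/dev/disk27 bs=16m"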

by OpenStackPro at July 16, 2014 09:15 PM

Rob Hirschfeld

OpenStack DefCore Review [interview by Jason Baker]

I was interviewed about DefCore by Jason Baker of Red Hat as part of my participation in OSCON Open Cloud Day (speaking Monday 11:30am).  This is just one of fifteen in a series of speaker interviews covering everything from Docker to Girls in Tech.

This interview serves as a good review of DefCore so I’m reposting it here:

Without giving away too much, what are you discussing at OSCON? What drove the need for DefCore?

I’m going to walk through the impact of the OpenStack DefCore process in real terms for users and operators. I’ll talk about how the process works and how we hope it will make OpenStack users’ lives better. Our goal is to take steps towards interoperability between clouds.

DefCore grew out of a need to answer hard and high stakes questions around OpenStack. Questions like “is Swift required?” and “which parts of OpenStack do I have to ship?” have very serious implications for the OpenStack ecosystem.

It was impossible to reach consensus about these questions in regular board meetings so DefCore stepped back to base principles. We’ve been building up a process that helps us make decisions in a transparent way. That’s very important in an open source community because contributors and users want ground rules for engagement.

It seems like there has been a lot of discussion over the OpenStack listservs over what DefCore is and what it isn’t. What’s your definition?

First, DefCore applies only to commercial uses of the OpenStack name. There are different rules for the integrated code base and community activity. That’s the place of most confusion.

Basically, DefCore establishes the required minimum feature set for OpenStack products.

The longer version includes that it’s a board managed process that’s designed to be very transparent and objective. The long-term objective is to ensure that OpenStack clouds are interoperable in a measurable way and that we also encourage our vendor ecosystem to keep participating in upstream development and creation of tests.

A final important component of DefCore is that we are defending the OpenStack brand. While we want a vibrant ecosystem of vendors, we must first have a community that knows what OpenStack is and trusts that companies using our brand comply with a meaningful baseline.

Are there other open source projects out there using “designated sections” of code to define their product, or is this concept unique to OpenStack? What lessons do you think can be learned from other projects’ control (or lack thereof) of what must be included to retain the use of the project’s name?

I’m not aware of other projects using those exact words. We picked up ‘designated sections’ because the community felt that ‘plug-ins’ and ‘modules’ were too limited and generic. I think the term can be confusing, but it was the best we found.

If you consider designated sections to be plug-ins or modules, then there are other projects with similar concepts. Many successful open source projects (Eclipse, Linux, Samba) are functionally frameworks that have very robust extensibility. These projects encourage people to use their code base creatively and then give back some (not all) of their lessons learned in the form of code contributions. If the scope of returning value to upstream is too broad, then sharing back can become onerous and forking ensues.

All projects must work to find the right balance between collaborative areas (which have community overhead to join) and independent modules (which allow small teams to move quickly). From that perspective, I think the concept is very aligned with good engineering design principles.

The key goal is to help the technical and vendor communities know where it’s safe to offer alternatives and where they are expected to work in the upstream. In my opinion, designated sections foster innovation because they allow people to try new ideas and to target specialized use cases without having to fight about which parts get upstreamed.

What is it like to serve as a community elected OpenStack board member? Are there interests you hope to serve that are different from the corporate board spots, or is that distinction even noticeable in practice?

It’s been like trying to row a dragon boat down class III rapids. There are a lot of people with oars in the water but we’re neither all rowing together nor able to fight the current. I do think the community members represent different interests than the sponsored seats but I also think the TC/board seats are different too. Each board member brings a distinct perspective based on their experience and interests. While those perspectives are shaped by their employment, I’m very happy to say that I do not see their corporate affiliation as a factor in their actions or decisions. I can think of specific cases where I’ve seen the opposite: board members have acted outside of their affiliation.

When you look back at how OpenStack has grown and developed over the past four years, what has been your biggest surprise?

Honestly, I’m surprised about how many wheels we’ve had to re-invent. I don’t know if it’s cultural or truly a need created by the size and scope of the project, but it seems like we’ve had to (re)create things that we could have leveraged.

What are you most excited about for the “K” release of OpenStack?

The addition of platform services like Database as a Service, DNS as a Service, and Firewall as a Service. I think these IaaS-adjacent services are essential to completing the cloud infrastructure story.

Any final thoughts?

In DefCore, we've moved slowly and deliberately to ensure people have a chance to participate. We've also pushed some problems into the future so that we could resolve the central issues first. We need the community to speak up (either for or against) in order for us to accelerate: silence means we must pause for more input.


by Rob H at July 16, 2014 07:54 PM

DreamHost

How DreamHost is reinventing itself with OpenStack

Original post came from OpenSource.com - http://opensource.com/business/14/7/dreamhost-and-openstack-love-story

Founded in 1997, DreamHost is a seasoned internet business home to over 400,000 happy customers, 1.5 million sites and applications, and hundreds of thousands of installs of WordPress, the dominant open source CMS. Open source is in our blood, and has powered every aspect of our services since 1997. DreamHost is built on a foundation of Perl, Linux, Apache, MySQL, and countless other open source projects. In our 16+ years of existence, DreamHost has seen the realities of internet applications and hosting drastically evolve. Our journey to the cloud requires a bit of history and context, so let's dive right in.

The rise of the black box cloud

Nearly a decade ago, Amazon created the market for cloud infrastructure services with the introduction of the immensely popular S3 for storage and EC2 for compute. The years that followed have been dominated by sweeping changes to the way that infrastructure is consumed and, more importantly, to the underlying design and architecture of software. There has also been a larger, hidden consequence to the rise of opaque cloud infrastructure services.

While the cloud has been revolutionary it has also been largely a black box. The software and systems that power Amazon Web Services, Microsoft Azure, and many other clouds are closed to prying eyes, leaving users in the dark about the implementation of the most critical component of their application stacks. The era prior to the cloud represented the rise of the open internet – Linux, Apache, MySQL, and languages like PHP, Perl, Python, and Ruby, where developers, engineers, and IT organizations had a large degree of transparency about the software that powered their applications. In the early cloud era much of that transparency disappeared.

A new hope

In 2010, two unlikely partners, NASA and Rackspace Hosting, founded the OpenStack project to create open source cloud software for the creation of private and public clouds. In the years since its inception the OpenStack project has exploded, aiming to live up to its potential as the Linux of cloud. More than 200 companies and countless individuals are now a part of the project, working in concert to create open source software and APIs that power private and public clouds globally.

DreamHost joined OpenStack early in its life, committing code, financial backing, and leadership to the project. We joined the OpenStack Foundation as a Gold member, and DreamHost CEO Simon Anderson was elected to represent us on the OpenStack Foundation Board of Directors. Our commitment to the success of the project runs deep.

Why OpenStack?

DreamHost wouldn’t exist today without a strong commitment to the open source philosophy. We don’t want to live in a future that is again dominated by closed, technically opaque, “magical” cloud platforms. Many traditional hosting customers are interested in the adoption of cloud services, either in addition to, or as a replacement for, their existing shared, VPS, and dedicated hosting, and we believe that they too are looking for a simple and affordable upgrade path. Given our DNA, it makes sense for DreamHost to build our customers what they want using best-of-breed open source software.

Introducing DreamCompute

DreamHost’s first product built on OpenStack is DreamCompute, which allows customers to create virtual machines, block devices, and networks on-demand via the standard OpenStack APIs and command-line tools or via an intuitive web-based user interface. DreamCompute puts more power in the hands of our customers than they’ve ever had access to before, and is built on a large library of open source software. In true DreamHost fashion, even the architecture of DreamCompute is open.

DreamCompute runs on a mixture of high-end Dell servers running Ubuntu Linux. We have two basic types of servers: storage nodes and hypervisor nodes. The hypervisor nodes are optimized for hosting virtual machines running on top of the open source KVM hypervisor, and feature 64 AMD cores and 192 GB of RAM. Our storage nodes are lower-powered, higher-density servers, each with twelve 3 TB disks, and are running Ceph, the open source, massively distributed, fault tolerant storage system that DreamHost helped build.

DreamCompute also features a "cockpit" pod, which represents the "brain" of the cloud. In the cockpit, we run OpenStack and its supporting services on a mixture of bare metal and virtual machines, including Horizon, Glance, Nova, Neutron, Keystone, and Cinder, along with Apache, HAProxy load balancers, MySQL databases, and RabbitMQ queueing systems. The entire system is configured and managed by Chef, and is monitored using open source tools like logstash, graphite, collectd, and nagios.

Even the networking hardware and software in DreamCompute are based upon open platforms and technology. DreamHost has sourced high-performance, 48 port 10 Gig switches directly from manufacturers. The switches run Cumulus Linux, which is a Linux network operating system from our friends at Cumulus Networks. This unique setup enables us to provision, monitor, and operate our networking infrastructure using the same tools and processes that we use for our compute and storage nodes, greatly minimizing operational overhead.

DreamCompute is compatible with the standard OpenStack Compute, Network, Image and Storage APIs, and is at its core an OpenStack deployment. That said, DreamCompute also has some unique features that set it apart from other clouds. It should come as no surprise that the foundation for these features are, in fact, based upon open source software that DreamHost created.

Fear the Cephalopod

Every virtual machine in DreamCompute boots from a virtual block device backed by a multi-petabyte Ceph storage cluster. Operating system images themselves are stored in the same cluster as these block devices, enabling DreamCompute to leverage Ceph’s Copy-on-Write (COW) functionality. Rather than downloading the operating system image from a central store to a hypervisor (which is time consuming) and then provisioning a new block device, Ceph enables our virtual machines to boot nearly instantly from a thin-provisioned copy of the OS image. As a result, virtual machines in DreamCompute can be created and fully operational in as little as 40 seconds.

Ceph also provides DreamCompute users with confidence that their data is safe, as every piece of data that is stored in the cluster is replicated a total of three times. When disks, servers, or racks fail, the Ceph cluster springs into action to automatically heal itself, ensuring that the proper number of replicas exist. When new capacity is added, Ceph responds by immediately putting it to good use, rebalancing data across the cluster.

Virtualize all the things. Including the network!

Server and storage virtualization are very familiar concepts to most, but network virtualization is a relatively new idea. DreamCompute was built from the ground up to provide full network virtualization for every customer. In DreamCompute, the physical network represents an “underlay,” which is invisible to the customer. A virtual network fabric – an “overlay” – is then layered on top, providing every customer in DreamCompute with a virtual OSI Layer 2 (L2) switch, which is completely isolated at L2 from every other customer.

On top of this virtual L2 network, tenants are provided with a virtualized software router, which provides L3+ services like routing, firewalling, and more. DreamHost has open-sourced this project, named it Akanda, and published it under a liberal open source license on GitHub.

DreamCompute is also built from the ground-up to support IPv6 as the exhaustion of IPv4 address space is nearly upon us. Every virtual machine in DreamCompute is automatically assigned an IPv6 address along with its private IPv4 address.

By connecting network virtualization technology with OpenStack’s Neutron Networking APIs, customers have fully programmable control of their network from L2-L7, with isolation.

The future of the open source cloud is bright

DreamCompute represents the continuation of a long partnership between DreamHost and the open source community. We’re excited to further our contributions to OpenStack, and to be part of a vibrant ecosystem of cloud service providers who provide OpenStack-based services. The future of the open source cloud is very bright, and we’re delighted to be on the forefront.

DreamHost’s DreamCompute is currently in private beta. To register your interest in joining the free beta period, visit DreamCompute and register today.

by Jonathan LaCour at July 16, 2014 07:21 PM

Manishanker Talusani

How to contribute to Openstack


My contribution: link

First of all, let's answer the question of who can contribute to OpenStack. Anyone. Yes, anyone can contribute to OpenStack. Whether you are interested in developing a new feature, in documentation, or in fixing bugs, you are welcome. That's how open source projects work.

Let's answer another question: why should anyone contribute to OpenStack? The answer would be: to learn more about the project. By contributing you learn a lot of things. You are making the system better and helping others all over the world who use OpenStack.

Let's begin

This is where you should start. The link has all the information on how to contribute, and all the commands used here are from that link; in case you want more info, please use it. My mentor suggested that I fix a bug in OpenStack. A bug can be a very small one, like fixing a typo in a code message, or it can be a critical one. Both are considered contributions to OpenStack. You are not just fixing a typo; you are fixing a bug in OpenStack and making it better. I went with a very small bug in Sahara (obviously I don't have adequate knowledge to fix a critical bug, but I made my contribution with what I can do). I had to replace all instances of the word "components" in the code with "component(s)". Seems simple? Yes, it is!


Where to begin?

Launchpad Account

Launchpad has all the information about bugs, project overviews, etc. Get a Launchpad account by registering here. Click on OpenStack and you will be redirected to its main Launchpad page. All the projects in OpenStack, along with links to documentation, IRC, the mailing list, etc., can be found there.

Join the OpenStack Foundation

Fill out the details and join the foundation link

Log Into Gerrit Review System

Log in here with your Launchpad account. This is where all the code reviews happen. Sign the OpenStack Individual Contributor License Agreement and upload your SSH keys.

Uploading SSH keys

Once you are logged into review.openstack.org, click here to upload your SSH keys. Follow this link to generate SSH keys. Once you have generated them, copy the public key and upload it to review.openstack.org. Change to the directory where you created your SSH keys, list the contents of the folder, and look for a file ending in .pub, which is your public key.
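On Ubuntu, generating the key pair and printing the public half looks like this (the email is just a comment label on the key):

ssh-keygen -t rsa -C "your_email@youremail.com"
# The .pub file is what you paste into review.openstack.org.
cat ~/.ssh/id_rsa.pub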

Install git 

I am using Ubuntu 12.04 LTS, so you can begin by opening a terminal ;)

Open a terminal (ctrl + alt + t) and type "sudo apt-get install git". After installing git, use the command "git --version" to check for a successful installation; if you can see the version of git, then it's installed. Next, set your username and email id by typing these commands in the terminal:

git config --global user.name "Firstname Lastname"
git config --global user.email "your_email@youremail.com"
 
To check that you have configured everything properly, inspect your configuration with this command:

git config --list
 
You would get information about your email id, username, and other details. Learn more about git here.

Install git-review

Install git-review. Follow this link for different Linux distros. In Ubuntu it would be "sudo apt-get install git-review".

Let's track some bugs

Log in to your Launchpad account and click on bugs. Out of all those bugs, the ones tagged low-hanging-fruit are easy for beginners to fix. Click on the low-hanging-fruit tag and select a bug which is easy to fix. Click on it, and you can see its description, who reported it, which part of OpenStack it affects, and other details. Look for bugs which are Triaged or Confirmed and which are not assigned to anyone. Triaged bugs will, in most cases, contain information on how to fix them. If you need any help understanding the bug or finding a fix for it, you can comment on the bug page or reach out to the reporter directly on IRC.

Once you have selected a bug, and if you feel you can fix it, assign it to yourself. This is the bug I assigned to myself. The next thing would be to clone the part of OpenStack which has this bug.

I cloned Sahara since the bug was in it. To clone a project to your local machine, open a terminal and use "git clone https://github.com/openstack/urproject.git". You will get a message saying cloning into <whateverproject>, receiving objects, receiving deltas, then done. In my case, I cloned the sahara project from GitHub using this command: "git clone https://github.com/openstack/sahara.git". Change into the project directory with "cd sahara". Next, type the command "git review -s". This basically checks that you can log in to Gerrit with your SSH keys. If your git username is different from your gerrit (review.openstack.org) username, use this command:

git config --global gitreview.username yourgerritusername
 
Your gerrit username would be the one in your profile tab in review.openstack.org. To verify your configuration use git config --list.

Once again, verify your SSH keys and gerrit username in your git config. If you get the error "We don't know where your gerrit is.", you will need to add a new git remote. The url should be in the error message; copy that and create the new remote:

git remote add gerrit ssh://<username>@review.openstack.org:29418/openstack/urproject.git

Next, list the contents of the folder using "ls -la", which will also list the hidden folders and files. You should see a .git hidden folder and a .gitreview hidden file. Now try "git review -s" again.

Everyone continuously makes changes to the master branch on GitHub. To get the most up-to-date code with all the changes, use the following commands:

git remote update
git checkout master
git pull --ff-only origin master
 
To learn more about what origin, remote, and master mean, use this link. It has a nice way of teaching all the useful commands.

***Important Steps***

Create a topic branch. Since I am trying to fix a bug, I would do git checkout -b "bug/1312908", where 1312908 is the bug id. Each bug is associated with a bug id, which can be found on its page. You will see the message "switched to a new branch 'bug/<bugid>'". Now use "git status" to check your branch.
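Concretely, for the bug in this post:

git checkout -b bug/1312908
git status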

Fixing the Bug

Now that we have everything in place and properly configured, let's fix the bug. To fix the bug in Sahara, all I had to do was follow the link Sergey gave in the comments of the bug page, which listed all the occurrences of the word "components". I opened those files and changed all the occurrences. Remember that we are currently on the branch "bug/<bugId>". Follow this link to learn how to run the test cases; you have to run the unit tests to check that everything is working.

Fixed the bug, what to do next?

The next thing would be to commit the changes that you have made to fix the bug. This is where I made a mistake and had to commit twice. Please strictly follow the pattern given in the Commit messages page, so as to save your time.
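For reference, the expected pattern is a short summary line, a blank line, an explanatory body, and footers. Something like the sketch below (the body wording is my own illustration; Gerrit's hook appends the Change-Id, as described next):

Replace "components" with "component(s)"

Make the wording consistent across the affected files.

Closes-Bug: #1312908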

Note that in most cases the Change-Id line should be automatically added by a Gerrit commit hook that you will want to install. See Project Setup for details on configuring your project for Gerrit. If you already made the commit and the Change-Id was not added, do the Gerrit setup step and run:

git commit --amend
 
The commit hook will automatically add the Change-Id when you finish amending the commit message, even if you don't actually make any changes.
Make your changes, commit them, and submit them for review:

git commit -a
git review
 
Caution: Do not check in changes on your master branch. Doing so will cause merge commits when you pull new upstream changes, and merge commits will not be accepted by Gerrit.

Submitted for review, Whats next

Once you have submitted your change, it appears on https://review.openstack.org; wait for the code reviewers to review it. More info about the review process can be found here. Follow the comments and make the necessary changes accordingly. Once everything is correct, it will be reviewed by the core developer team, and Jenkins will test all the components. You can check the status of the gate jobs for your review at http://status.openstack.org/zuul/. After that, it will be merged into the master branch.


This was my experience of my first bug fix. I learned a lot of new things by doing this. Hope this helps :-)

by MANISHANKER TALUSANI (noreply@blogger.com) at July 16, 2014 03:21 PM

Sean Roberts

OpenStack is turning 4

It’s a birthday, so we are throwing a party! Join us 7-10pm, 30 July 2014, at the 111 Minna Gallery in San Francisco to celebrate the event. Speakers will include Randy Bias, Joshua McKenty, Monty Taylor, Chris Kemp, Alex Freedland, and Sean Roberts. Light food and drinks will be served. There is limited room, so […]

by sean roberts at July 16, 2014 03:10 PM

OpenStack @ NetApp

Overview of Manila from Atlanta OpenStack Summit

Manila in Atlanta: OpenStack Summit Recap

Welcome to the OpenStack @ NetApp blog – there’s a lot going on here at NetApp around OpenStack, so we thought starting a blog would be a great way to get the word out! I’m Bob Callaway, Technical Marketing Engineer within the Cloud Solutions Group - the business unit within NetApp tasked with helping our customers harness the power of NetApp’s storage and data management solutions in the cloud – public, private, or hybrid – to become stewards of their data wherever it resides.

About two months ago, NetApp sent a large contingent of folks to the biannual OpenStack Summit in Atlanta – where developers, operators, and users converge – (yes, both suits and hoodies were there) – to talk about their experiences around OpenStack, learn more about what’s new in the ecosystem, and help design the next release of OpenStack!

While there was a lot of energy around what NetApp is doing in OpenStack, I was most excited to see the energy around Manila – (not the city in the Philippines - we were in Atlanta, after all) – the OpenStack File Share Service! Manila allows users of OpenStack clouds to provision and securely manage shared file systems through a simple REST API. Manila’s share network concept links the tenant-specific Neutron network and the storage system providing the file share together to ensure a secure, logically isolated connection. Manila has a modular driver architecture, similar to Cinder, that allow different, heterogeneous storage solutions to serve as provisioning backends for file shares.
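As a rough sketch of what that looks like from the command line (flag names here follow my recollection of the python-manilaclient of this era, so treat them as assumptions):

# Request a 1 GB NFS share on a tenant's share network, then list shares.
manila create NFS 1 --name demo-share --share-network demo-net
manila list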

Manila Overview Diagram

There was a great 60-minute general session on Manila, which gave an overview of Manila, its API structure and key concepts, an architectural overview of the service, and then information on the growing number of drivers being integrated into the project. In the spirit of the community – we had seven presenters – representing NetApp, Red Hat, EMC, IBM, and Mirantis – all vendors who are active within the Manila project. Here’s a link to the recording in case you weren’t able to join us - https://www.youtube.com/watch?v=fR-X7jbG5QM

Manila Session Picture

Leveraging the fact that we had many of the key project leaders all together in the same city – and wanting to harness the great energy level from earlier in the day - NetApp sponsored a Manila design session different from all others held in conjunction with the summit. We all gathered for a great technical discussion at a local sports bar where we discussed the state of the project, the key design and delivery items for the Juno release, and enjoyed some great food and beverages!

Want to learn more about Manila? Get started by checking our wiki page @ http://wiki.openstack.org/wiki/Manila or jump on IRC - we’re always hanging out in #openstack-manila on freenode! We’ve got weekly meetings at XX:XX UTC in #openstack-meeting on freenode as well.

July 16, 2014 02:42 PM

Tesora Corp

Red Hat casts a wide shadow in OpenStack, new features in Swift, and another bet on OpenStack

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Here we go with this week's links:

Drop your fantasy of a king of OpenStack — there won’t ever be one | VentureBeat.com

There's been a lot of commentary and assertions about who will become the dominant OpenStack player. (Just look at this edition of the Short Stack; we're about to spill a lot of ink on Red Hat.) Alan Clark, board chairman of the OpenStack Foundation, addresses the speculation head-on, and he says: don't send out invitations to the coronation just yet.

Red Hat CEO Whitehurst on VMware, OpenStack and CentOS  | ZDNet.com

ZDNet's Editor in Chief sat down with Jim Whitehurst, CEO of Red Hat, in a wide-ranging conversation that covered Red Hat's recent acquisitions, its future vision and its main rival: VMware.

RHEL OpenStack 5: Red Hat hogs the OpenStack spotlight | InfoWorld

Red Hat's new OpenStack distribution shows how much it wants to be the leader of the pack for the cloud platform, or so the author claims. Is Red Hat exerting too tight a grip on OpenStack's development? The company continues to claim that its motives are pure.

Here’s what Swift’s ‘most significant update’ brings to OpenStack | SiliconANGLE.com 

Swift is a core component of OpenStack used for storing large swaths of highly varied data on cheap commodity hardware. Supporters believe the community advantage will help them catch up to — and eventually surpass — the proprietary platforms that dominate enterprise storage today. Here’s a review of the new features that bring that goal a lot closer.

How DreamHost is reinventing itself with OpenStack | opensource.com

DreamHost, a web hosting and cloud services provider for entrepreneurs and developers, is doubling down on OpenStack. They say open source is part of their DNA, and they've built their cloud architecture around OpenStack. It makes sense. They were a major backer of Inktank, recently bought by Red Hat.

by 693 at July 16, 2014 12:00 PM

Opensource.com

DefCore brings a definition to OpenStack

What's in a name? Quite a bit, actually. To ensure compatibility between products sharing the same name, it's important that users can expect a core set of features to be consistent across different distributions. This is especially true with large projects like OpenStack which are made up of many interlocking components.

by Jason Baker at July 16, 2014 09:00 AM

OlinData

High Availability for Openstack and OpenNebula


If we look at the news over the last 8 months or so, we can see that the technology scene is moving towards the cloud. The terms public cloud, private cloud, and hybrid cloud usually crop up, and vendors come up with all sorts of solutions/products for us to use/dissect/troll/shoot. Popular choices are usually OpenStack by Rackspace, Eucalyptus by Canonical, CloudStack by the Apache Foundation, and OpenNebula by the OpenNebula Project. Fret not, for I shall not add another OpenStack vs OpenNebula vs CloudStack vs Eucalyptus blog post to the internet sphere. I believe there are more than enough blog posts or articles out there that highlight the differences between the mentioned cloud management platforms. Instead, I will share my findings on the high availability strategies provided by OpenStack and OpenNebula, to keep your services running all the time and clients off your back.

Openstack

OpenStack is an open source solution where all of its components are either core components or were formerly incubated and then integrated as part of the official product. These components perform different functions at various levels such as compute, controller, dashboard, networking, and so forth. Its modular design allows third parties to provide other components which can perform functionality outside of the core.

"On one side, there are businesses that understand cloud as an AWS-like cloud on-premise; hence looking for a provisioning tool to supply virtualized resources on-demand."

OpenNebula

A solution that is designed to be an open source alternative to VMware's vCenter and targeted at enterprise users. Unlike OpenStack, all of its components come as a single unit, akin to vCenter.

"On the other side, there are businesses that understand cloud as an extension of virtualization in the datacenter; hence looking for a VMware vCloud-like infrastructure automation tool to orchestrate and simplify the management of the virtualized resources."

Infrastructure HA

From this point onwards, I will lay out the techniques and strategies used by both OpenStack and OpenNebula to provide an HA solution in their stacks. As a general rule of thumb, we provide HA for the services to keep our phones from ringing in the middle of a date. First we need to look at the core components of both stacks. OpenStack comprises nova (compute), keystone (identity), glance (image), cinder (storage), and neutron (networking). Apart from these, MySQL (database) and RabbitMQ (message queue) also need to be part of the HA solution.

OpenStack on its own does not provide an HA solution built into the product, but its modular design opens up a whole range of possible HA designs. As per OpenStack's documentation, we can implement active/passive HA designs in OpenStack using Pacemaker for cluster resource management, Corosync (usually coupled with Pacemaker) for cluster messaging, Galera for the MySQL database, and various solutions like Ceph, DRBD, GlusterFS, or SAN for storage.

If we take a look at OpenStack's documentation here, it provides step-by-step detail on how to set up Pacemaker, DRBD, Galera, and Corosync to create the HA setup.

DRBD is used to provide distributed block storage across the cluster for the MySQL data directory, and for Cinder's block storage replication across the active/passive setup. Galera then provides MySQL with a multi-master replication solution. Pacemaker, using Corosync to communicate with all the cluster members, controls all the resources (the OpenStack services) in this setup whilst monitoring them at the same time. In the event that services fail on the active node, Pacemaker will start the services on the passive node, and all services will fail over to that node when the failure threshold is reached.
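To make that concrete, here is a minimal sketch of a Pacemaker resource definition in crm shell syntax for a virtual IP plus the Keystone service. The agent names and addresses follow the style of the OpenStack HA guide and are illustrative assumptions, not taken from this post:

# Virtual IP that floats between the active and passive nodes.
primitive p_ip_api ocf:heartbeat:IPaddr2 \
    params ip="192.168.42.103" cidr_netmask="24" \
    op monitor interval="30s"
# The Keystone service itself, restarted elsewhere on failure.
primitive p_keystone ocf:openstack:keystone \
    op monitor interval="30s" timeout="30s"
# Keep the VIP and the service together on the same node.
group g_keystone p_ip_api p_keystone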

OpenNebula, on the other hand, comprises four components that need to be configured: OpenNebula Core (orchestration, compute), Scheduler (resource allocation), Sunstone (GUI), and MySQL (database). OpenNebula uses Ricci (a package that contains a daemon and a client which allow a cluster to be configured and managed remotely) and RedHat's cluster suite (comprising the cman, ccs, and rgmanager packages). The setup guide can be found here. Although it seems a lot simpler than OpenStack, it is essentially the same approach.

Looking at both stacks, OpenStack requires more components in the HA setup, owing to the fact that it has a lot of modular components, whilst OpenNebula is pretty much an all-in-one kind of stack. OpenNebula offers a simpler way of managing the HA setup, but it is rather tied down to the specific packages provided by the RedHat Cluster Management suite.

Virtual Machine Recovery

Both OpenStack and OpenNebula use common hypervisors that utilize libvirt, such as KVM and Xen, which allows some form of virtual machine migration. Migrations can be done via either the non-live method or the live migration method, and specific configuration is needed to migrate VMs from one compute node to another. The difference between the two stacks is in how we manage the recovery of the VMs in the event of a failure.

For OpenStack, the stack must be set up with shared storage or a distributed filesystem in order to prevent a total shutdown of the virtual machines. Dedicated monitoring of service and server health is very important for pre-emptive measures. The nova (compute) component of OpenStack comes with a command that allows the user to "evacuate" all the virtual machines from one compute node to another.
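The workflow looks roughly like this with the nova client of this era (the hosts are placeholders; check nova help evacuate for the exact flags in your release):

# Rebuild one VM from a failed node on another host, reusing its
# disk when instances live on shared storage.
nova evacuate --on-shared-storage <instance> <target-host>
# python-novaclient also provides a helper to move everything at once.
nova host-evacuate --on-shared-storage <failed-host>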

OpenNebula, however, has a more interesting approach: hooks are available to provide information in order to prepare for failures in the virtual machines or physical nodes, and to recover from them. These failures are categorized depending on whether they come from the physical infrastructure or from the virtualized infrastructure.

These hooks are similar to git hooks, in that an action can be triggered based on the state of the physical/virtual hosts. The use of hooks in OpenNebula allows for a much simpler HA setup and strategy on the infrastructure.
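For example, oned.conf ships with a host failure hook along these lines; this sketch follows my reading of the OpenNebula 4.x documentation, so the exact arguments are an assumption:

# -m asks for the VMs running on the failed host to be recreated elsewhere.
HOST_HOOK = [
    name      = "host_error",
    on        = "ERROR",
    command   = "ft/host_error.rb",
    arguments = "$ID -m",
    remote    = "no" ]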

As I've laid out the HA strategies recommended by both OpenStack and OpenNebula, I would prefer the modular approach of OpenStack for an HA setup, which allows me to choose whichever HA setup I want for the different components in the stack without being restricted in any manner. But I would love to see the feature of having hooks that trigger actions on the stack when an error is detected, to minimize the potential downtime that might occur. After all, I love automated setups without the need for human intervention.

by Choon Ming Goh at July 16, 2014 06:47 AM

July 15, 2014

SwiftStack Team

Building Innovative New Features as a Community

After our big OpenStack Swift 2.0 release last week, it's fun to look back at where we've been and how we got here over the past year. The OpenStack Swift contributor community is made up of a lot of people and a lot of companies. SwiftStack is one participant, but storage policies are the result of the entire community, not just one person or company. In an active open-source project like Swift, how does this happen? Getting people in the same company to work together can sometimes be a daunting task. Getting people to work together for a common goal when their employers directly compete with one another is quite a bit harder.

When we started this journey a year ago, we didn't even know we were going to have storage policies. Joe Arnold and I were listening to some customer requests around erasure code support in Swift, and we started sketching out what changes would be needed. Based on our initial whiteboard designs, I worked with some of the other Swift core developers here at SwiftStack, and we came up with a plan for how to answer the customer's needs.

One of the first things we realized after looking at the customer requirements and the proposed design is that supporting a cluster that only used erasure codes isn't nearly as useful or maintainable as one that can do both. We realized that supporting multiple storage policies in the same cluster is what bests solves the real-world use cases we see, both for our customers and in the community at large.

Swift is ideal for unstructured data that can grow without bound. It's great at supporting massive concurrency across the entire data set. These qualities mean that Swift has found a lot of success with video streaming, web and mobile content, and sync-and-share applications. Replicated storage, with the simple and fast access it offers, is ideal in these use cases. But we know that not all data is the same, and being able to support different replication policies, and eventually non-replicated storage, would offer users the power and flexibility to tailor their storage infrastructure to exactly match their use case.

After refining our design outline for storage policies, we blogged about it and started working on it. And then a great thing happened: others joined in. Storage policies solve some very real pain points. The diverse participation from SwiftStack, Intel, Red Hat, HP, Rackspace, and others demonstrates both the strength of the community and the strong need for modern storage.

I'll be talking more about how the open-source community worked together to build storage policies over the next several months. The biggest single lesson learned is that communication is key. We used and developed some tools to help us do this better, and I'm excited to use these lessons and tools to keep the community moving forward to build erasure code support into OpenStack Swift.

July 15, 2014 04:25 PM

Cody Bunch

Hi-ho Hi-ho, Off to OSCON We Go!

For those that didn't get the message on the twitters, I will be at OSCON this year. Specifically, I will be helping Egle run a "Getting Started with OpenStack" tutorial.

The tutorial will begin with an overview of OpenStack and its different components. We will then provide the participants with access to individual OpenStack instances, and walk them through using OpenStack's web interface, followed by a command line tutorial.
The tutorial will cover instance life cycle (creation, management, deletion), networking, user management, and how to utilize different storage services available in OpenStack.

If you will be there, please drop by and say hello!

by OpenStackPro at July 15, 2014 03:11 PM

Opensource.com

How DreamHost is reinventing itself with OpenStack

Founded in 1997, DreamHost is a seasoned internet business home to over 400,000 happy customers, 1.5 million sites and applications, and hundreds of thousands of installs of WordPress, the dominant open source CMS. Open source is in our blood, and has powered every aspect of our services since 1997. DreamHost is built on a foundation of Perl, Linux, Apache, MySQL, and countless other open source projects. In our 16+ years of existence, DreamHost has seen the realities of internet applications and hosting drastically evolve. Our journey to the cloud requires a bit of history and context, so let’s dive right in.

by Jonathan LaCour at July 15, 2014 11:00 AM

Rafael Knuth

Automating OpenStack: Join StackStorm’s Online Meetup July 17

This post was originally published at StackStorm by Evan Powell. We’re looking forward to co-hosting...

July 15, 2014 07:14 AM

Sean Roberts

OpenStack Operators Mid-Cycle Summit

The Operators Summit is all about getting implementors’ input into the development cycle. We ran the first one about six months ago and it was a great success. Come and join us, but be prepared to interact with the group. This is very much a round table discussion, rather than a series of talks. If […]

by sean roberts at July 15, 2014 03:21 AM

Widening the Engagement of the Hidden OpenStack Influencers

This is a follow-up post to the OpenStack Hidden Influencers post. After the OpenStack July Gold Membership meeting today, the problem of engaging the developer community’s line management seemed a bit clearer to me. The Enterprise / OpenStack User Committee research that Intel has been spearheading is a useful model to copy for other verticals […]

by sean roberts at July 15, 2014 02:55 AM

July 14, 2014

OpenStack Blog

Five Days + Twelve Writers + One Book Sprint = One Excellent Book on OpenStack Architecture

Update: You can now download the OpenStack Architecture Design Guide here.

One thing about OpenStack is that you can find lots of information on how to do specific things, such as start an instance or install a test cloud on VirtualBox, but there isn’t much out there to give you the Big Picture, such as how to design a massively-scalable OpenStack cloud, or a cloud that’s optimized for delivering streaming content. That’s why this past week a dozen OpenStack experts and writers from companies across the OpenStack ecosystem gathered at VMware’s Palo Alto campus for the OpenStack Architecture Design Guide book sprint. The intent was to  deliver a completed book on designing OpenStack clouds — in just five days.

Now, I wrote my first book — a pretty straightforward introduction to Active Server Pages 3.0 — in seven weeks, and then it went through months of editing before arriving at the printer. I never wrote a more significant book that took less than six months.  So when I volunteered for the sprint, I confess that I didn’t expect much.  Oh, I knew that at the end of the week we’d have a book.  I just didn’t expect it to be the really great book that actually emerged.

How a book sprint works

Screen Shot 2014-07-14 at 4.45.58 PM

The process of actually writing the book was pretty regimented, but because we felt like we had control over the direction, we didn’t feel stifled by it.  We started by discussing the audience — architects designing OpenStack systems or evaluating it for use — and brainstorming a likely structure.

After deciding that we’d basically cover groupings of use cases for OpenStack clouds, we brainstormed all the different types we might cover, putting them on Post-its and grouping them on the whiteboard. (Let’s just say that “CI/CD” and “dev/test” were on a lot of our minds.)  Before long it was clear that we had seven major categories, such as “compute focused” or “massively scalable”.

We then broke into two groups, each of which was to take half an hour and brainstorm a structure for these categories.  Interestingly, although we used different terms, the structures the two groups emerged with were virtually identical.  (Which meant there was no fight to the death, which is always nice.)

From there our group of 12 broke into 3 groups of 4, each diving into a section.  At the end of Monday, we had 15,000 words already written (of which we’re still sure 10,000 came from Beth Cohen).

I was stunned.

I wasn’t stunned because we had so much content; I was stunned because it was, well, actually pretty good content.

By Wednesday morning, the book was pretty much written, and it was on to editing.  Groups read through sections written by others to try and fill in any holes, and Beth and I began editing, to try and even out the tone.  After that came two more passes: copyediting (by Alexandra Settle, Scott Lowe, and Sean Winn) and fact checking.

Long before Friday, we had a book that we could be proud of.

Screen Shot 2014-07-14 at 4.47.14 PM

What the OpenStack Architecture Design Guide covers

The OpenStack Architecture Design Guide is for architects and evaluators; deployment is covered in the OpenStack Operations Guide, so we didn’t cover that. The Design Guide covers the following types of OpenStack clouds:

  • General Purpose
  • Compute Focused
  • Storage Focused
  • Network Focused
  • Multi-site
  • Hybrid Cloud
  • Massively Scalable
  • Special cases (clouds that don’t fit into those categories, such as multi-hypervisor)

We talked about the different issues, such as user requirements, technical considerations, and operational considerations for each type of cloud, then talked about the actual architecture and provided some prescriptive examples to make things more concrete and easier to understand.

What community really means

Perhaps the most interesting thing about the book sprint is that it was, in many ways, a microcosm of OpenStack itself.  We all work for different companies, some of which don’t particularly get along, but in that room, it didn’t matter. We were just people getting a job done, and doing it in the best way we knew how, working long hours and joking about our evil overlords (sprint facilitators Adam Hyde and Faith Bosworth) and laughing about anything and everything to keep from going stir crazy.

We watched Alex learn that American Mountain Dew is very different from the stuff they have in Australia, and we saw her transform from a nervous newcomer to a confident writer and editor (though I’m still going to use two spaces after a period, sorry).  Anthony Viega and Sean Collins consistently impressed us with their knowledge of networking.  Sebastian Gutierrez showed how passionate he is about storage, and especially the wonders of Ceph. Vinny Valedez produced more great diagrams in two days than I did all of last year. Maish Saidel-Keesing and Kevin Jackson continuously inspired us to be better with their hard work and good humor. I’m still laughing at Steve Gordon’s deadpan humor.  (And I apologize to anyone who still has the music from Doctor Who stuck in their head.)

Our goal was to provide a resource for the OpenStack community, to help adoption of a tool we’re all passionate about. Did we joke about it?  Of course we did.  But at the end of the day, we wouldn’t have been there if we didn’t believe in the future of OpenStack, and what it can do, when it’s done right.

The OpenStack Architecture Design Guide will be available electronically free of charge as part of the OpenStack documentation, and like the Operations Guide and the Security Guide before it, it will be available for anyone to submit patches to, a living document that will only get better.  It will also be available for purchase in hard copy through Lulu.  Watch this space for a link!

by Nick Chase at July 14, 2014 09:48 PM

Stefano Maffulli

Only developers should file specifications and blueprints

If you try to solve a problem with the wrong tool you’re likely going to have a frustrating experience. OpenStack developers have been using the same workflow to plan and manage development of OpenStack software over time, and they chose a set of tools appropriate for the project’s software development. Developers use blueprints to define the roadmap for the various projects; the specifications attached to a blueprint are used to discuss the implementation details before code is submitted for review.

These are tools for developers, and that doesn’t mean blueprints and specifications are the only way to interact with developers. Operators and users in general don’t need to dive into the details of how OpenStack developers organize their work, and they should definitely never be asked to use tools designed for and by developers. When I read Dafna Ron’s post the place of a non-developer in the openstack community I immediately felt her pain: she was asked to use the wrong tool to solve a problem. I think this case is a major misunderstanding, but the comments on the post signal that it is not an isolated one.


The most common way for a non-developer to highlight a flaw in OpenStack is to file a new bug report. Bugs can be defects that need to be fixed, or they can be requests for enhancement. In this case, Dafna filed a bug report and, interacting with the triager, they agreed that the reported bug was not a defect per se but more of a request for enhancement, a wishlist item.

In order to fix a defect or implement a wishlist item, developers need to file a blueprint (and a specification) before they can start writing code. If the person who filed the original bug report is also a developer, then s/he can carry on with the process: file a blueprint to solve the bug and write specifications (when needed) to describe the details of the implementation. Users interested in the bug can chime in by adding comments to the bug and to the specs, providing input to developers.

The process above works in a similar way when the person filing the bug is not a developer, as in Dafna’s case. The proper flow of bug #1323578 would not have required Dafna to file a blueprint and specs; a developer would have done that. Users are expected to interact closely with the developers to discuss the implementation details, and that’s where the new specifications process helps. Gerrit may not have the best UI, but it’s definitely better than holding discussions on a mailing list. Adding comments and holding conversations with the developer assigned to resolve the issue on the bug report itself is also a valid option.

While I think that as a community we have plenty of ways to improve how we include users and operators in our conversations, in this particular case I think the frustration came from being pulled into the wrong place from the beginning.

PS: To be super-clear, in this post I’m using the terms developer and operator to describe a role, not a person. One person can be a developer and an operator at the same time, acting at times as a developer writing blueprints and submitting code, and at other times as an operator filing bugs and commenting on specifications.

 



by stefano at July 14, 2014 06:17 PM

OpenStack Blog

DefCore Update: Input Request for Havana Capabilities

As part of our community’s commitment to interoperability, the OpenStack Board of Directors has been working to make sure that “downstream” OpenStack-branded commercial products offer the same baseline functionality and include the same upstream, community-developed code. The work to define these required core capabilities and code has been led by the DefCore Committee co-chaired by Rob Hirschfeld (his DefCore blog) and Joshua McKenty (his post).  You can read more about the committee history and rationale in Mark Collier’s blog post.

The DefCore Committee has introduced two key concepts that will be used to define the standard requirements across commercial products: Capabilities and Designated Sections. Capabilities represent the functionality that is exposed by an OpenStack-based cloud through APIs, which can be tested and reported on—for instance starting or stopping a virtual server. Designated sections are portions of upstream code from various OpenStack projects that are required in addition to the API-based capabilities. These requirements can change with each OpenStack release, and the DefCore committee has started with the Havana release to create an “advisory” set of requirements.  After community review on Havana, the Board will repeat the process for Icehouse requirements and then enforce those for the commercial trademark license programs.

Get Involved: Next week, the DefCore Committee will host two meetings for community input on the Capabilities they’ve scoped for the Havana release. The meetings will take place Wednesday, July 16, at 8 am PDT (1500 UTC) and 6 pm PDT (0100 UTC on July 17) to accommodate as many time zones as possible. You can reference the DefCore Committee’s current proposal, and join the meetings using the links on the following page:

After getting community input, the DefCore Committee plans to bring the proposed Havana Capabilities to the Board for approval at the next face-to-face meeting, taking place July 22nd in Portland, OR.  If approved, focus will then shift to the Designated Sections for Havana.

If you’d like to catch up on the work the Committee has been doing since the OpenStack Summit Atlanta, the following links contain the notes from their recent meetings:

https://etherpad.openstack.org/p/DefCoreLighthouse.1
https://etherpad.openstack.org/p/DefCoreLighthouse.2
https://etherpad.openstack.org/p/DefCoreLighthouse.F2F
https://etherpad.openstack.org/p/DefCoreLighthouse.3

 

by OpenStack at July 14, 2014 04:24 PM

Rob Hirschfeld

OpenStack DefCore Update & 7/16 Community Reviews

The OpenStack Board effort to define “what is core” for commercial use (aka DefCore) continues. I have blogged extensively about this topic and rely on you to review that material, because this post focuses on updates from recent activity.

First, Please Join Our Community DefCore Reviews on 7/16!

We’re reviewing the current DefCore process & timeline, then talking about the Advisory Havana Capabilities Matrix (decoder).

To support global access, there are TWO meetings (both will also be recorded):

  1. July 16, 8 am PDT / 1500 UTC
  2. July 16, 6 pm PDT / 0100 UTC July 17

Note: I’m presenting about DefCore at OSCON on 7/21 at 11:30!

We want community input!  The Board is going to discuss and, hopefully, approve the matrix at our next meeting on 7/22.  After that, the Board will be focused on defining Designated Sections for Havana and Icehouse (the TC is not owning that as previously expected).

The DefCore process is gaining momentum.  We’ve reached the point where there are tangible (yet still non-binding) results to review.  The Refstack effort to collect community test results from running clouds is underway: the Core Matrix will be fed into Refstack to validate against the DefCore required capabilities.

Now is the time to make adjustments and corrections!  

In the next few months, we’re going to be locking in more and more of the process as we get ready to make it part of the OpenStack by-laws (see bottom of minutes).

If you cannot make these meetings, we still want to hear from you!  The most direct way to engage is via the DefCore mailing list, but 1×1 email works too!  Your input is important to us!


by Rob H at July 14, 2014 03:17 PM

The Official Rackspace Blog » OpenStack

Inside My Home Rackspace Private Cloud, OpenStack Lab, Part 7: LBaaS

With a useful OpenStack lab up and running, it’s time to take advantage of some more advanced features. The first feature I want to look at is adding OpenStack Networking LBaaS (Load Balancing as a Service) to my Rackspace Private Cloud. This is currently a Tech Preview and unsupported feature of Rackspace Private Cloud v4.2, and it is not considered for use in production at this time. To add it to RPC we simply make a change to the environment and run chef-client across the nodes.

More information on LBaaS and Rackspace Private Cloud can be found here.

Adding LBaaS to Rackspace Private Cloud Lab

1. Edit /opt/base.env.json (or the name of the JSON describing your environment) and add in the following:

 "neutron": {
  "ovs": {
    ...
  },
  "lbaas": {
    "enabled": true
  }
},
...

"horizon": {
  "neutron": {
    "enable_lb": "True"
  }
},

2. Save and then load this into the environment:

knife environment from file /opt/base.env.json

3. Now run chef-client on the controllers:

# openstack1 
chef-client  

# openstack2 
knife ssh "role:ha-controller2" chef-client

That’s it. LBaaS is now enabled in our RPC Lab!

Creating a Load Balancer

The first thing we do is create a load balancer pool.

1. To do this, get the UUID of the private subnet we want our load balancer to live on.

neutron subnet-list

2. Next we create a pool:

neutron lb-pool-create \
    --lb-method ROUND_ROBIN \
    --name mypool \
    --protocol HTTP \
    --subnet-id 19ab172a-87af-4e0f-82e8-3d275c9430ca

3. We can now add members to this pool. For this I’m using two instances running Apache, whose addresses are listed by:

nova list

neutron lb-member-create --address 192.168.1.152 --protocol-port 80 mypool
neutron lb-member-create --address 192.168.1.153 --protocol-port 80 mypool

4. We can now create a health monitor and associate it with the pool. The monitor tests each member’s availability and controls whether traffic is sent to that member:

neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3

neutron lb-healthmonitor-associate 5479a729-ab81-4665-bfb8-992aab8d4aaf mypool
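
The UUID passed to lb-healthmonitor-associate is the one returned by the create command above. If you didn’t capture it, you can look it up again with:

neutron lb-healthmonitor-list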

5. With that in place, we now create the load-balanced VIP using the subnet UUID and the name of the pool (mypool). The VIP is the address of the load-balanced pool.

neutron lb-vip-create \
    --name myvip \
    --protocol-port 80 \
    --protocol HTTP \
    --subnet-id 19ab172a-87af-4e0f-82e8-3d275c9430ca mypool

This allocates an IP (192.168.1.154 in this case) from the subnet we created the load balancer pool on, and we can now use it to access our load-balanced web pool of two Apache instances, for example: http://192.168.1.154/.
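
As a quick sanity check, repeated requests to the VIP should be answered by the two members in turn. A minimal sketch, assuming each Apache instance serves a page that identifies which backend answered:

# four requests; with ROUND_ROBIN each member should answer twice
for i in 1 2 3 4; do
    curl -s http://192.168.1.154/
done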

Viewing details about the load balancers

To list the load balancer pools issue the following:

neutron lb-pool-list

To see information about a pool, including its members, issue the following:

neutron lb-pool-show mypool

Horizon

In Horizon this looks like the following:

For more information on OpenStack Networking LBaaS visit here.

To find out how we got here, check out the previous posts in this series.

by Kevin Jackson at July 14, 2014 03:00 PM

Opensource.com

OpenStack Juno previews, Docker all the things, and more

Interested in keeping track of what's happening in the open source cloud? Opensource.com is your source for what's happening right now in OpenStack, the open source cloud infrastructure project.

by Jason Baker at July 14, 2014 03:00 PM

July 12, 2014

Amar Kapadia

The Number One Inhibitor to Cloud Storage (Part 2 of 2)!

The number one inhibitor is Access! (Part 2)

I've been feeling bad about delaying this second part of my blog, but in hindsight the delay was good: EMC acquired TwinStrata in the meantime, validating the whole premise of this post!

Anyway, a few weeks ago I talked about how access, in my view, is the biggest inhibitor to cloud storage. Specifically, the five issues are:

1. How do I get massive amounts of data in-and-out of the cloud?
2. How do I get my application to interface with cloud storage?
3. How do I get cloud storage to fit within my current workflow?
4. How do I figure out what data to move to the cloud?
5. Once the data is moved, how do I know it's in the cloud?

Read more »

by Amar Kapadia (noreply@blogger.com) at July 12, 2014 12:15 AM

July 11, 2014

OpenStack Blog

OpenStack Community Weekly Newsletter (July 4 – 11)

OpenStack Swift 2.0 Released and Storage Policies Have Arrived

OpenStack Swift 2.0.0 has been released. This release includes storage policies – the culmination of a year of work from many members of the Swift contributor community. Storage policies are the biggest thing to happen in Swift since it was open-sourced four years ago; they allow you to tailor your storage infrastructure to exactly match your use case. This release marks a significant milestone in the life of the project that will lead to further adoption and community growth. You can get Swift 2.0 from http://tarballs.openstack.org/swift/swift-2.0.0.tar.gz. As always, you can upgrade to this version without any client downtime.
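
If you’re wondering what using storage policies actually looks like, here’s a minimal sketch (the policy names and container are illustrative, not taken from the release): policies are declared in swift.conf, each additional policy gets its own object ring, and a container is pinned to a policy at creation time via the X-Storage-Policy header.

# declare two policies in /etc/swift/swift.conf; policy 0 is the
# legacy/default policy, and policy 1 needs its own object ring
# (object-1.ring.gz) built alongside the existing one
cat >> /etc/swift/swift.conf <<'EOF'
[storage-policy:0]
name = gold
default = yes

[storage-policy:1]
name = silver
EOF

# create a container that stores its objects under the "silver" policy
swift post -H 'X-Storage-Policy: silver' my-container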

Wrapping up the Travel Support Program – Juno

The OpenStack Foundation brought 21 people to Atlanta for the Summit in May, thanks to the grants offered by the Travel Support Program, sponsored by VMware. The Travel Support Program is based on the promise of Open Design, and its aim is to facilitate the participation of key contributors in the OpenStack Design Summit. The Travel Support Application for the November Summit in Paris is NOW OPEN! You can apply to the Travel Support Program, which covers costs for travel and accommodation.

Missing Building Blocks for Enterprise OpenStack: Part 1 – High Availability

In the long-term debate of pets vs. cattle, OpenStack has always been on the side of cattle. Dmitriy Novakovskiy shared his thoughts on why pets are good and how far away OpenStack is from supporting the more ‘legacy’ applications (TL;DR: not too far away).

Third Party CI group formation and minutes

At the Juno Summit in Atlanta, Kurt Taylor, Anita Kuno, and Jay Pipes agreed to form a group focused on the Third Party experience, including but not limited to continuous integration. Part of the mission of the group is to focus on the quality of Third Party testing for OpenStack through improving documentation, gathering requirements, and easing the deployment of third party testing systems. The group has been working to improve the consumability of the components and documentation. They’re inviting all people involved in CI testing to join and help make the Third Party experience easier for developers and administrators to understand and deploy. The group holds regular weekly meetings. This week they discussed timelines for Cinder and Neutron testing, requirements for documentation patches, and a proposal for system terminology, and helped openATTIC solve its issues starting up its CI system.

Kudos Corner

It’s a great pleasure to highlight good examples of first-time contributors to OpenStack getting through their first changeset proposal. Jeegn Chen‘s first changeset is one such case. Kudos to him and to the community that helped him fix bug #1327497.

The Road To Paris 2014 – Deadlines and Resources

Reports from Previous Events

Security Advisories and Notices

Tips ‘n Tricks

Upcoming Events

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

Welcome New Reviewers and Developers

Matthew Printz
Slawomir Gonet
Fabio Massimo Di Nitto
Prashanth Prahalad
azher ullah khan
takehirokaneko
João Cravo
azher ullah khan
Romain Soufflet
Wayne
deven
Vasiliy Artemev
Julia Kreger
Dave Neary
Fabrizio Fresco
Arnaldo Hernandez
Anant Patil
Amit Kumar Das
Liyi Meng
Shivakumar M
Richard Jones
Richard Hagarty
Maurice Leeflang
Michael Chase-Salerno
sh.huang
jizhilong
Rajesh Tailor
Chris Crownhart
Aleksandr Shaposhnikov
Matjaz Pancur
Alok Kumar Maurya
Jyoti
Andrey Epifanov
Abhishek L
Łukasz Oleś
Victor Chima
FeihuJiang
Mike King

Latest Activity In Projects

Do you want to see at a glance the bugs filed and solved this week? Latest patches submitted for review? Check out the individual project pages on OpenStack Activity Board – Insights.

OpenStack Reactions

wyfail

The elastic-recheck bot pointing us to which bug we need to recheck against

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at July 11, 2014 10:10 PM

Rafael Knuth

Google+ Hangout: Shaping your OpenStack Journey w/ IGATE

It’s no longer about when to implement OpenStack; it’s about how. But each step along your...

July 11, 2014 10:48 AM

Opensource.com

Understanding the metrics behind open source projects

What do the numbers behind an open source project tell us about where it is headed? That's the subject of Jesus M. Gonzalez-Barahona's OSCON 2014 talk later this month, where he looks at four open source cloud computing projects—OpenStack, CloudStack, Eucalyptus, and OpenNebula—and turns those numbers into a meaningful analysis.

by Jason Baker at July 11, 2014 09:00 AM

July 10, 2014

Zane Bitter

OpenStack Orchestration Juno Update

As the Juno (2014.2) development cycle ramps up, now is a good time to review the changes we saw in Heat during the preceding Icehouse (2014.1) cycle and have a look at what is coming up next in the pipeline. This update is also available as a webinar that I recorded for the OpenStack Foundation, as are the other PTL updates. The RDO project is collecting a list of written updates like this one.


While absolute statistics are not always particularly relevant, a comparison between the Havana and Icehouse release cycles shows that the Heat project continues to grow rapidly. In fact, Heat was second only to Nova in numbers of commits for the Icehouse release. As well as building contributor depth we are also rotating the PTL position to build leadership depth, so the project is in very healthy shape.

Changes in Icehouse

The biggest change in Icehouse is the addition of software configuration and deployment resource types. These enable template authors to define software configurations separately from the servers on which they are to be deployed. This makes, amongst other things, for much easier re-usability of artifacts. Software deployments can integrate with your existing configuration management tools - in some cases the shims to do so are already available, and we expect to add more during the Juno cycle.

The Heat Orchestration Template format (HOT) is now frozen at version 2013-05-12. Any breaking changes we make to it in future will be accompanied by a bump in the version number, so you can start using the HOT format with confidence that templates should continue to work in the future.

In order to enable that, template formats and the intrinsic functions that they provide are now pluggable. In Icehouse this is effectively limited to different versions of the existing template types, but in future operators will be able to easily deploy arbitrary template format plugins.

Heat now offers custom parameter constraints - for example, you can specify that a parameter must name a valid Glance image - that provide earlier and better error messages to template users. These are also pluggable, so operators can deploy their own, and more will be added in the future.
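
To make the last two points concrete, here is a minimal sketch (the flavor, image parameter, and stack name are illustrative): a template pinned to the frozen HOT version, using the glance.image custom constraint so that a bad image name fails early at validation rather than partway through a create.

# a tiny HOT template exercising the frozen format version and an
# Icehouse custom parameter constraint
cat > minimal.yaml <<'EOF'
heat_template_version: 2013-05-12

parameters:
  image:
    type: string
    description: name or ID of a Glance image
    constraints:
      - custom_constraint: glance.image

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: m1.small
EOF

# an image name Glance doesn't know about is rejected up front
heat stack-create -f minimal.yaml -P image=cirros demo-stack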

There are now OpenStack-native resource types for autoscaling, meaning that you can now scale resource types other than AWS::EC2::Instance. In fact, you can scale not just OS::Nova::Server resources, but any type of resource (including provider resources). Eventually there will be a separate API for scaling groups along the lines of these new resource types.

The heat-engine process is now horizontally scalable (though not yet stateless). Each stack is processed by a single engine at a time, but incoming requests can be spread across multiple engines. (The heat-api processes, of course, are stateless and have always been horizontally scalable.)

The API is gaining additions to help operators manage a Heat deployment - for example, to allow a cloud administrator to get a list of all stacks created by all users in Heat. These improvements will continue into Juno, and will eventually result in a v2 API to tidy up some legacy cruft.

Finally, Heat no longer requires a user to be an administrator in order to create some types of resources. Previously resources like wait conditions required the admin role, because they involved creation of a user with limited access that could authenticate to post data back to Heat. Creating a user requires admin rights, but in Icehouse Heat creates the user itself in a separate domain to avoid this problem.

Juno Roadmap

Software configurations made their debut in Icehouse, and will get more powerful still in Juno. Template authors will be able to specify scripts to handle all of the stages of an application’s life-cycle, including delete, suspend/resume, and update.

Up until now if the creation of a stack or the rollback of an update failed, or if an update failed with rollback disabled, there was nothing further you could do with the stack apart from delete it. In Juno this will finally change - you will be able to recover from a failure by doing another stack update.

There also needs to be a way to cancel a stack update that is still in progress, and we plan to introduce a new API for that.

We are working toward making autoscaling more robust for applications that are not quite stateless (examples include TripleO and Platforms as a Service like OpenShift). The plan is to allow notifications prior to modifying resources to give the application the chance to quiesce the server (this will probably be extended to all resources managed by Heat), and also to allow the application to have a say in which nodes get removed on scaling down.

At the moment, Heat relies very heavily on polling to detect changes in the state of resources (for example, while a Nova server is being built). In Juno, Heat will start listening for notifications to reduce the overhead involved in polling. (Polling is unlikely to go away altogether, but it can be reduced markedly.) In the long term, beyond the Juno horizon, this is leading to continuous monitoring of a stack’s status, but for now we are laying down the foundations.

There will also be other performance improvements, particularly with respect to database access. TripleO relies on Heat and has some audacious goals for deployment sizes, so that is driving performance improvements for all users. We can now profile Heat using the Rally project, so that should help us to identify more bottlenecks.

In Juno, Heat will gain an OpenStack-native Heat stack resource type, and it will be capable of deploying nested stacks in remote regions. That will allow users to deploy multi-region applications using a single tree of nested stacks.

Adopting and abandoning stack resources makes it possible to transition existing applications to and from Heat’s control. These features are actually available already in Icehouse, but they are still fairly rough around the edges; we hope they will be cleaned up for Juno. This is always going to be a fairly risky operation to perform manually, but it provides a viable option for automatic migrations (Trove is one potential user).

Operations Considerations

There are a few changes in the pipeline that OpenStack operators should take note of when planning their future upgrades.

Perhaps the most pressing is version 3 of the Keystone API. Heat increasingly relies on features available only in the v3 API. While there is a v2 shim to allow basic functionality to work without it for now, operators should look to start testing and deploying the v3 API alongside v2 as soon as possible.

Heat has now adopted the released Oslo messaging library for RPC messages (previously it used the Oslo incubator code). This may require some configuration changes, so operators should be aware of it when upgrading to Juno.

Finally, we expect the Heat engine to begin splitting into multiple servers. The first one is likely to be an “observer” process tasked with listening for notifications, but expect more to follow as we distribute the workload more evenly across systems. We expect everything split out from the Heat engine to be horizontally scalable from the beginning.

by Zane Bitter at July 10, 2014 09:05 PM

Red Hat Stack

Juno Preview for OpenStack Compute (Nova)

Originally posted on blog.russellbryant.net.

We’re now well into the Juno release cycle. Here’s my take on a preview of some of what you can expect in Juno for Nova.

NFV

One area receiving a lot of focus this cycle is NFV. We’ve started an upstream NFV sub-team for OpenStack that is tracking and helping to drive requirements and development efforts in support of NFV use cases. If you’re not familiar with NFV, here’s a quick overview that was put together by the NFV sub-team:

NFV stands for Network Functions Virtualization. It refers to replacing the usually stand-alone appliances used for high- and low-level network functions, such as firewalls, network address translation, intrusion detection, caching, gateways, and accelerators, with a virtual instance or set of virtual instances, which are called Virtual Network Functions (VNFs). In other words, it could be seen as replacing some of the hardware network appliances with high-performance software that takes advantage of high-performance para-virtual devices, other acceleration mechanisms, and smart placement of instances. NFV originates from a working group of the European Telecommunications Standards Institute (ETSI) whose work is the basis of most current implementations. The main consumers of NFV are service providers (telecommunication providers and the like) who are looking to accelerate the deployment of new network services; to do that, they need to eliminate the constraint of the slow renewal cycle of hardware appliances, which do not autoscale and limit their innovation.

NFV support for OpenStack aims to provide the best possible infrastructure for such workloads to be deployed in, while respecting the design principles of an IaaS cloud. In order for VNFs to perform correctly in a cloud world, the underlying infrastructure needs to provide a certain number of functionalities, ranging from scheduling to networking and from orchestration to monitoring capacities. This means that to correctly support NFV use cases in OpenStack, implementations may be required across most, if not all, main OpenStack projects, starting with Neutron and Nova.

The opportunities for OpenStack in the NFV space appear to be huge. The work being tracked by the NFV sub-team spans more than just Nova, but here are some of the NFV related projects for Nova:

Upgrades

The road to live upgrades has been a long one. Progress has been made over the last several releases. The Icehouse release was the first release that supported live upgrades in some form. From Havana to Icehouse, you can do a control plane upgrade with some API downtime without having to upgrade your compute nodes at the same time. You can roll through upgrading the compute nodes with the control plane already upgraded to Icehouse.
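
For the curious, here is a rough sketch of that rolling pattern (hostnames, package, and service names are illustrative, RDO-style; your distribution and tooling will differ):

# control plane is already on Icehouse; now roll the compute nodes
# one at a time while the others keep serving workloads
for host in compute1 compute2 compute3; do
    ssh "$host" 'yum -y update openstack-nova-compute &&
                 service openstack-nova-compute restart'
done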

For Juno we are continuing to improve on this in several areas. Since Nova is a highly distributed system, one of the biggest requirements for doing this is versioning everything about the interactions between components. First we went through and versioned all of the interfaces between components. Next we have been versioning all of the data passed between components. This versioning of the data is part of what Nova Objects provide. Nova Objects are an internal implementation detail, but are critical to upgrade support. The entire code base had not been converted as of the Icehouse release. For Juno we continue to do conversions over to this new object model.

The other major improvement being looked at this release cycle is how we can reduce the downtime needed on the control plane by reducing how long it takes for database schema migrations to run. This is largely about developing new best practices about how migrations need to work going forward.

Finally, for the Icehouse release we added some basic testing of the live upgrade scenario to the OpenStack CI system. This testing runs OpenStack using the previous release and then upgrades everything except the nova-compute service. At that point, everything should continue to work. One goal for the Juno cycle is to improve this testing to verify that we can also run an older instance of the nova-network service with an upgraded control plane. This is critical for deployments that use nova-network in multi-host mode. In that case, you have nova-network running on each compute node, so we need to support a mixed version environment for nova-network, as well as nova-compute.

Scheduler

There’s always a lot of interest in improving the way host scheduling works in Nova. In the Icehouse cycle we identified that we wanted to split the scheduler out into a new project (codenamed Gantt). Doing so requires decoupling Nova’s scheduler as much as possible from the rest of Nova. This decoupling effort is the primary goal for the Juno cycle. Once the scheduler is independent of Nova, we can investigate ways to integrate other projects so that scheduling can use information that currently only lives in other projects such as Neutron or Cinder.

Docker

The Docker driver for Nova was moved to Stackforge during the Icehouse development cycle. The primary reason was the lack of CI running for the driver, but there were also a number of feature gaps that made it hard to get CI with tempest working as it needed to. Moving to Stackforge gave the team working on this driver an opportunity to iterate quicker and fill those gaps.

There has been a lot of progress on the Docker driver in the Juno cycle. Some of the feature gap work has resulted in improvements to Docker itself, which is really great to see. For example, Docker now supports pause and unpause, which is a feature of the Nova API that the Docker driver is now able to support. Another area that has seen some focus is Cinder support. To make this work, we have to be able to expose block devices to Docker containers at creation time, as well as later on after they are already running. There has been work on Docker itself in this area as well, which we expect to eventually lead to support in the Nova Docker driver.

Finally, there has been ongoing work to get CI with tempest running. It’s now running in OpenStack’s CI infrastructure. The progress is great to see, but it also seems most likely that the driver will return to Nova in the K release cycle instead of Juno.

Ironic

Nova introduced the baremetal driver in the Grizzly release.  This driver allows you to use Nova’s API to do provisioning of bare metal instead of virtual machines.  There was immediately a lot of interest in this functionality for OpenStack.  Soon after this driver was introduced, it was decided that we should start a new project dedicated to bare metal management.  That project is Ironic.

Ironic has come a long way since then.  The project is currently incubated and could potentially graduate for the K release.  One of the major tasks in moving towards graduation is getting the Ironic driver for Nova merged.  The spec has been approved and the code seems to be in good shape.  I’m very hopeful that we will have this step completed in the Juno release.

Database Integration

OpenStack has been a long time user of the SQLAlchemy library for its integration with relational databases.  More recently, some OpenStack projects have begun using Alembic for managing database schema migrations.  Michael Bayer, author of SQLAlchemy and Alembic, recently joined Red Hat to help with OpenStack, as well as continue to maintain SQLAlchemy and Alembic.  He has been surveying OpenStack’s current usage of SQLAlchemy and identifying areas where we can improve.  He has written up a fascinating wiki page with his findings.  I expect this to result in some very nice improvements to many OpenStack projects, including Nova.

Other

There are many other features being worked on right now for Nova. The best place to get an idea of what’s going on is to look at either the list of approved design specs or the list of specs under review.

by russellbryant at July 10, 2014 05:47 PM

Piston

What is SDN and Should You Buy Into the Hype?

Hi. I’m Ben. I work on SDN integrations within Piston OpenStack™ along with Noel Burton-Krahn and Nick Bartos. For those of you unfamiliar with SDN, the initials (one of many in the world of IT) stand for Software Defined Networking. It’s a buzzword that’s been going around the networking blogs, yet everyone still grapples with its definition, benefits, and overall use case in the enterprise. In this blog, I’ll tackle this overused and mostly misunderstood topic: SDN, and SDN in OpenStack®. I won’t be able to get to all of the nitty-gritty details of how SDN can help in every situation, in every datacenter. That would certainly take more than just a blog post.

So, I apologize in advance if you are in need of some clarification on SDN and encourage you to please ask the questions I may not have answered for you already (after all, that’s what the comment box below is for).

Now, let’s begin.

Before we dig in, let’s role play for a minute.

You are the architect of a very important project that will rely on a very particular, perhaps even exotic, network infrastructure. It will certainly be more complex than connecting everything directly to Top of Rack switches and then connecting those to a router or routers. You describe this network to the people who will wire it up for you. Maybe you work for a small team at a university and an intern will be pulling cables for you, or maybe you work at a large corporation and a team of professionals will construct your vast network infrastructure for you.

Either way, you draw the network diagram on a white board and do your best to make sure your people understand each part of it. They then go off to assemble your network. You hope that you described the network properly; you hope that they do not make any mistakes and plug a host into the wrong switch; you hope that they don’t accidentally leave one end of a network cable unplugged. Long story short? Plan to do a lot of hoping.

What is SDN? How does it work? How do you build it?

A simple description is that there are three parts: the physical network, the logical network, and the controller. The physical network is the actual hardware. The routers and switches and cables. The logical network is what hosts and VMs connected to the network perceive as the actual network. The controller is what talks to the physical network and configures it to behave the way that is required to create the logical network.

Why is SDN so awesome?

The Dilbert cartoon at the top exaggerates the situation, but is pretty representative of how little work you would need to do if you implemented SDN. Things like the aforementioned hypothetical networking nightmare can cause your project to become delayed, or worse, go unnoticed until your project is in production and then cause all sorts of hard-to-debug problems. If you had a software defined network you wouldn’t have to deal with problems like that. Instead of drawing diagrams and trying to explain the network to humans, you would be describing it to the SDN controller. The SDN controller would then communicate with your physical networking hardware and have it reconfigure itself to create a logical network that behaved exactly as you described. Without any of the time-consuming and error-prone physical steps, you would have the network you desired.

With SDN, your important project’s network would be done faster and with fewer headaches, so you could focus on the more critical work that relied on that network. You would no longer need to worry about touching your critical networking infrastructure; instead you would reconfigure the easily manipulated logical network that exists on top of it.

How do I use OpenStack for a SDN?

The simple answer? You play nice with Neutron.

OpenStack is made up of very many pieces, each with a specialized goal: Nova, Cinder, Glance, Keystone and so on. The networking part of OpenStack is called Neutron. Neutron has many different parts. At the simplest level it provides a way for the other parts of OpenStack to inspect and manage the network. But the most powerful part of Neutron is the ability to use different SDN plugins. There is already a large variety of plugins from many well-known developers. The power of being able to use and manage an SDN directly through OpenStack is incredibly useful. Instead of running your cloud on top of a network that is configured by an external SDN controller, you can manage that network with the same tools you manage the rest of your cloud.
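
As a concrete illustration (a sketch; option values vary by release and plugin), the plugin a deployment uses is selected in Neutron’s configuration, while everything above it keeps talking to the same Neutron API:

# select the ML2 core plugin in neutron.conf (crudini is just a
# convenient ini editor; editing the file by hand works equally well)
crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2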

So is SDN just hype?

I don’t know if anyone remembers when VMs were first a “thing”. There was a lot of hype behind them. I think it’s similar with SDN – it’s going to become a thing. It may have a little ways to go, but the reality is that it’s too useful not to become one.

Managing and changing your network shouldn’t be a day spent in the datacenter. It shouldn’t take down an entire server. It should only take a few minutes, and from a single panel dashboard. Most importantly, it shouldn’t affect your workloads. The feature I work on for Piston OpenStack integrates with various SDNs via the Neutron plug-in. It keeps everything up and running, it only takes one person to change the network configuration, and, best of all, it doesn’t take an entire day. And that’s awesome.

I hope I’ve given you some insight into SDN and its benefits. Is it hype? As someone who’s seen it deployed and who’s seen it work, I believe the practicality of SDN outweighs the hype. It’s awesome to see it in practice, and you should try it out for yourself with Piston OpenStack. You can schedule a demo or download it here.

Photo credit: Dilbert.com

by Ben Rosenberg at July 10, 2014 04:30 PM

OpenStack Blog

Open Mic Spotlight, 4th Birthday Edition: Kashyap Chamarthy

This post is part of the OpenStack Open Mic series to spotlight the people who have helped make OpenStack successful. Each week, a new contributor will step up to the mic and answer five questions about OpenStack, cloud, careers and what they do for fun. For the month of July, we’re focusing on Q&A specific to OpenStack’s 4th birthday. If you’re interested in being featured, please choose five questions from this form and submit!

Kashyap currently works for Red Hat on most things related to open source virtualization and cloud projects (OpenStack). He works remotely, from India. Kashyap enjoys reading, traveling, and learning to live consciously, minimally, and in an ecologically sustainable way.

1. Where were you when you first heard of OpenStack? What were you doing?

It was in 2012 in Brussels, Belgium. I was there to participate in the (no-nonsense) FOSDEM conference. For most of the second (and the last) day of the conference I was hanging out in the “Virtualization Dev” room and attended the final session of the day: an OpenStack community panel discussion moderated by Thierry Carrez (current OpenStack release manager) & co. Most of the debates in that session were around evolving project governance, the roles of Linux distributions, the release process, and plenty of related topics. That’s when I learned about OpenStack.

2. What drew you to OpenStack?

I got involved in OpenStack around 2013 through the RDO project (a community OpenStack distribution that stays close to upstream trunk, started by Red Hat). I’d say it’s the sheer range of areas one can contribute to in many useful ways. By the time I was starting with OpenStack, it clearly helped to have been closely familiar with some of the under-the-hood open source virtualization technologies (like libvirt, QEMU, KVM and a ton of tooling around them) that OpenStack relies on. I feel it’s a nice progression to work on a higher-level project like OpenStack that takes advantage of these and connects them all together in a meaningful way (and not as some afterthought bolt-on).

Other factors would be OpenStack’s commitment to technical meritocracy, its fair (walking-the-walk style) approach to governance and community interactions, and the flat-out fun of participating in such a large community-based software project.

3. What does “open source” mean to you?

To me, it’s first the strong belief that it is the most sensible approach to developing software. Secondly, the realization that “hey, I get to benefit immensely from the work of scores of open source communities (at the tap of a keystroke — thanks to innovations like the GPL, Creative Commons and the like), so it’s just fair to contribute back to those communities on whose labour I’m building my existing work.”

4. Which OpenStack debate gets you the most fired up? Why?

Hmm, off the top of my head I can’t single out something. But there are a lot of interesting technical/community related debates on the very high-traffic upstream openstack-dev mailing list. It’s a great experience for a new person to learn the community culture by following discussions (with some good mail filters), getting a sense of tone on the lists, what kind of topics to bring up (and how) and many more things — just by plain old observation.

I don’t mean to imply that everything is rainbows and butterflies. Sure, there are (open/closed) conflicts too – like any massive project with a lot of moving parts, but the civilized manner in which most of them are resolved is heartening to see.

5. What is your favorite memory from an OpenStack summit?

I haven’t been to an OpenStack summit, yet. But I was at an OpenStack meetup (“OpenStack in Action 4″ by eNovance, last November in Paris). In the conference lobby, I noticed Mark McClain (current Neutron project PTL) passing by – I walked up, politely introduced myself and had a brief conversation. Before I left him alone, I asked him to share a piece of wisdom that could help one wrap his/her head around the complexity of Neutron (the OpenStack Networking project) and its associated open source plugins. “Read the ‘iproute2′ man pages, read carefully and experiment more, it’s full of useful details,” Mark said. And I still haven’t gotten to it. So, by saying it out loud here, hopefully I’ll get my act together and spend some quality time with it. :-)

by OpenStack at July 10, 2014 04:15 PM

Flavio Percoco

Juno preview for Glance and Marconi

Yo!

You probably know that I spend most of my time on OpenStack. I love tackling many things, but I'm mostly focused on storage and queuing technologies - you can't do it all - so I thought I'd give you a heads-up on what's being baked into the two projects where I spend most of my time.

Glance

Glance's team will focus on Glance artifacts. The plan for Juno is to implement the models, the API, and everything else needed for this feature without changing anything in the images API. That means images will remain the same during Juno and will be migrated later on, during K or L, depending on the status of the artifacts implementation. The artifacts work means Glance will move away from being simply an image registry to something more generic: a catalog of various data assets. In fact, the mission statement has already been changed.

Another thing that will happen in Glance during Juno is that the code for the store libraries will be pulled out of the code base into its own library. This work started during Icehouse and is now almost complete. The new library - glance.store - contains the old, already supported store drivers with a slightly different API that supports random access to image data, removes the dependency on global configuration objects, and a couple more things.

The goal behind this library is to pull the reusable parts out of Glance, and to allow external consumers to better support direct access to image data by using the same library Glance uses to manage such data.

There's one more thing worth mentioning about Glance's plans for Juno. The async workers work is still moving forward. There's some support for it already - the tasks base has been merged - and in the upcoming months the project will adopt taskflow as much as possible. There's still some work to do here and the feature is, unfortunately, moving slowly. An interesting thing about this new feature is that it'll allow Glance to do more with the resources it has. For example, it'd be possible to do image introspection, and to convert and resize images, without blocking requests.

Marconi

As for Marconi, the plan is to complete API v1.1. This version of the API is just like the previous one, but it addresses some of the feedback received from the community. Some of the things that will change are:

  • Support for pop endpoints (get and delete); see the sketch after this list
  • Queues are now lazy resources, which means they don't have to be created in advance.
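
As a sketch of what the pop semantics might look like on the wire (the host, port, and queue name are illustrative, and the exact URL shape may still change before v1.1 ships), claiming and deleting up to two messages in a single call:

# pop two messages: claim + delete in one request
curl -i -X DELETE \
    -H "Client-ID: $(uuidgen)" \
    -H "X-Auth-Token: $OS_TOKEN" \
    "http://localhost:8888/v1.1/queues/demo/messages?pop=2"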

On the storage side of Marconi, the team will add a new storage driver for Redis, and support for storage engines is in the works. With storage engines (flavors) it'll be possible to create and tag clusters of storage and then use them based on their capabilities. This allows for more granular billing and more scalable deployments.

On top of the aforementioned storage engines, the team will add support for queue migrations between pools of the same type (flavor). Cross-type migrations should be possible too, but the team prefers a more conservative approach: test the algorithm first and then improve it as needed.

Hope you find the above useful; any feedback is very welcome.

by FlaPer87 at July 10, 2014 02:26 PM

Dafna Ron

the place of a non-developer in the openstack community

For a while now, I have been contemplating the role of non-developers in the community.
As a non-developer I sometimes find it hard to navigate the open source community, and I find that people often think that if someone does not contribute code then that person is not a contributor.
I think that many of you reading this are now thinking that I am being ridiculous and that you yourselves have never excluded anyone from the openstack community.
That may be correct of you as individuals; however, I think that as a community, the decisions, procedures, and even the influence of individuals are measured by coders, for coders.

I have never written code or automated tests, and if I am honest I do not think I have the skills to do so. I do have the skills to learn the product flows, find the soft spots, and bring a user’s and administrator’s point of view, allowing me not only to locate the bugs a developer will not always find, in places they may never think of looking, but also to contribute to making the product better for its intended target – the user.
All that said, in my first week working in the openstack community I heard the following line from several people: “If you don’t contribute code, you’re not really a contributor, so learn python.”

Since I started working on openstack I have realized that the main discussion in meetings is code. Even QE meetings are about new automation tests written for tempest, and if you are not an automation QE you will feel left out.

I will jump to the latest decision made in the community: blueprints and specs.
It appears to be a good decision, making people who suggest a new feature also suggest a solution.
However, as a non-developer it took me the better part of a day to understand what I needed to do.

It all started when I opened a bug, which was closed with a request to open a blueprint.
I opened the blueprint and then sent it to the engineering mailing list, where I was asked to open a spec.
Since I had never worked with git, it took me a while to understand what I needed to do and to submit a spec.
To my surprise, at this point I was told I should not have submitted the spec in the first place, since I have no way of proposing a technical solution for it.

I found it very frustrating that once I finally submitted the spec I was told that the blueprint + spec procedure was created for developers only.
Not only did I spend a long time working on something I should not have been working on in the first place, but I also believe that by deciding on this procedure the community has further limited the options a non-coder has to contribute to the openstack community.

A few of my colleagues suggested that I write a blog post on the hardships I had submitting the spec. During that discussion, someone suggested adding a blog to planet.openstack.org, and a second person immediately commented that you need to submit code in order to add a blog to planet.openstack.org.

Hence I have decided that perhaps it is time to raise the question: what exactly is the openstack community for non-developers? Perhaps the reason the community is largely run and created by developers is that non-developers quickly understand they have no room or place in it?

I would like to call on the openstack community to be the first community to change the way open source communities work, and to start changes that would open doors for non-coders to contribute and integrate into the community.


by daffiduck at July 10, 2014 02:25 PM