August 02, 2014

Aptira

OpenStack Identity (Keystone) on FreeBSD

Late last night I was struck by a flash of inspiration and wondered to myself how hard it would be to get OpenStack working on FreeBSD, given that Oracle could do it with OpenSolaris.

 

Over the coming months (whenever I get some free time) I’m going to see how far I can get in running the various OpenStack services on FreeBSD. I imagine most of the “control plane” components will be relatively painless to get going. I might even have a go at writing a nova-compute driver for FreeBSD Jails, based on the OpenSolaris Zones work or perhaps the nova-docker or LXC drivers, and see if something similar can be done for OpenStack Networking (or nova-network if necessary).

 

But for today let’s start at the easy end of the scale and see what it takes to get the OpenStack Identity (Keystone) service running on FreeBSD!

 

First up I will add a FreeBSD 10 VirtualBox box to Vagrant (I tried a few on vagrantcloud.com and this seemed the best one). If you're not familiar with Vagrant, I definitely recommend checking out the documentation as it's a great tool:

 

$ vagrant box add hfm4/freebsd-10.0

 

and produce a simple Vagrantfile for it:

 

VAGRANTFILE_API_VERSION = "2"

 

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

 config.vm.define "bsdstack" do |bsdstack|

    bsdstack.vm.box = "hfm4/freebsd-10.0"

    bsdstack.vm.hostname = "bsdstack.testbed.aptira.com"

    bsdstack.vm.provider "virtualbox" do |v|

     v.customize ['modifyvm', :id, '--memory', '2048']

    end

 end

end

 

After a quick

 

$ vagrant up

 

to bring up my FreeBSD 10 virtual machine and then

 

$ vagrant ssh

$ sudo -i

 

to log in to it and switch to root, my testbed environment is ready!

 

Now before we continue any further I will stress that what I'm implementing here is a proof of concept, so security is not really a consideration; keep that in mind if you ever decide to attempt this yourself on any internet-connected server.

 

Installing the python, git and wget packages:

 

# pkg install python git wget

 

Installing pip from PyPI:

 

# wget https://bootstrap.pypa.io/get-pip.py

# python get-pip.py

 

Installing libxslt:

 

# pkg install libxslt

 

I generally use MariaDB for my backend these days, so let's install and start that too, and create a database called keystone; then we can get into the configuration steps:

 

# pkg install mariadb55-server mariadb55-client

# echo mysql_enable=\"YES\" >> /etc/rc.conf

# service mysql-server start

# mysql -u root -e "CREATE DATABASE keystone;"
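
Since this is a throwaway PoC I'm sticking with the passwordless root account, but if you'd rather not, a dedicated database user only takes two more statements (a sketch; KEYSTONE_DBPASS is a placeholder of mine, not something from the setup above):

 

# mysql -u root -e "CREATE USER 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';"

# mysql -u root -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost';"

 

The connection string further down would then become mysql://keystone:KEYSTONE_DBPASS@localhost/keystone.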

 

 

Clone the keystone git repository and install it with setup.py:

 

# git clone https://github.com/openstack/keystone.git

# cd keystone/

# python setup.py install
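
(If you'd prefer to keep keystone and its dependencies out of the system site-packages, the same install should work inside a virtualenv; a sketch, not what I did here:)

 

# pip install virtualenv

# virtualenv /usr/local/venvs/keystone

# . /usr/local/venvs/keystone/bin/activate

# python setup.py install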

 

We will also need a couple of PyPI packages not installed by the above process:

 

# pip install pbr

# pip install MySQL-python

 

and with those simple steps, keystone is installed and ready to use! That was pretty painless!

 

The next step is to copy the sample keystone config to /etc/keystone, rename the samples and configure them (these commands assume you are running them from inside the cloned git repository):

 

# cp -r etc/ /etc/keystone

# cd /etc/keystone

# mv keystone.conf.sample keystone.conf

# mv logging.conf.sample logging.conf

 

Edit the keystone.conf file with your favorite editor; the following changes, in the appropriate sections, are all that's really required:

 

admin_token=ADMIN

connection=mysql://root@localhost/keystone

provider=keystone.token.providers.uuid.Provider
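
For orientation, here is roughly where those three settings live in keystone.conf. The section names below are from the sample config of this era, so double-check them against your checkout:

 

[DEFAULT]
admin_token=ADMIN

[database]
connection=mysql://root@localhost/keystone

[token]
provider=keystone.token.providers.uuid.Provider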

 

Now we can do a database sync and start keystone:

 

# /usr/local/bin/keystone-manage db_sync

# /usr/local/bin/keystone-all &
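
Backgrounding keystone-all with & is obviously a quick hack. The more idiomatic FreeBSD route would be a small rc.d script built on daemon(8); something like the sketch below (untested, and the names are mine) dropped into /usr/local/etc/rc.d/keystone and made executable would let you manage it with service(8), after adding keystone_enable="YES" to /etc/rc.conf:

 

#!/bin/sh
# Illustrative rc.d wrapper for keystone-all -- a sketch, not part of
# the walkthrough above.
# PROVIDE: keystone
# REQUIRE: mysql

. /etc/rc.subr

name="keystone"
rcvar="keystone_enable"
# daemon(8) detaches keystone-all and (-f) redirects its output
command="/usr/sbin/daemon"
command_args="-f /usr/local/bin/keystone-all"

load_rc_config $name
run_rc_command "$1"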

 

If we have done everything correctly we should be able to authenticate against the service endpoint of keystone with the admin token and make a call to verify it worked (note there will be no output, just a blank line, since no users exist yet).

 

# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ user-list
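
As an extra smoke test that doesn't depend on the keystone client at all, the service should also answer a plain HTTP request on the admin port with a JSON document listing the available API versions (if memory serves, it responds with a 300 Multiple Choices):

 

# curl -s http://localhost:35357/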

 

Next, let's set up an admin tenant/user, an admin role, service and endpoints:

 

# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ tenant-create --name=admin

# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ user-create --name=admin --tenant=admin

# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ user-password-update --pass=test123 admin

# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ role-create --name=admin

# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ user-role-add --user=admin --tenant=admin --role=admin

# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ service-create --name=identity --type=identity

# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ endpoint-create --service=identity --publicurl=http://localhost:5000/v2.0 --internalurl=http://localhost:5000/v2.0 --adminurl=http://localhost:35357/v2.0
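
I'll keep spelling out the credential flags in the commands below for clarity, but the keystone client also reads the standard OS_* environment variables, so you could stash the credentials in a small rc file instead:

 

# cat > ~/keystonerc <<'EOF'
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=test123
export OS_AUTH_URL=http://localhost:5000/v2.0
EOF

# . ~/keystonerc

# /usr/local/bin/keystone user-list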

 

Once that is done we can test the new user we created and see whether everything is working:

 

# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://localhost:5000/v2.0 user-list

# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://localhost:5000/v2.0 tenant-list

# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://localhost:5000/v2.0 user-role-list --user=admin --tenant=admin

# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://localhost:5000/v2.0 endpoint-list

# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://localhost:5000/v2.0 service-list

 

and there we go! OpenStack Identity running on FreeBSD!

 

Join us next time when we will try to set up the OpenStack Image (Glance) service on FreeBSD.

by Sina Sadeghi (sina@aptira.com) at August 02, 2014 03:59 AM


July 31, 2014

Cloudify Engineering

Voting for OpenStack Summit talks is now live! Get your vote on for Team Cloudify


Voting is now live for the OpenStack Summit in Paris and we need your vote!

We’ve submitted a bunch of great talks and you can help them be selected. All you need to do is click on the links below and sign into your OpenStack account. 

If you do not have an account, you can register at https://www.openstack.org/join/register/

 

Thanks for voting and sharing! 

by cloudify-engineering at July 31, 2014 03:51 PM

eNovance Engineering Teams

Brace yourself, DevStack Ceph is here!

For more than a year, Ceph has become increasingly popular and has seen several deployments inside and outside OpenStack. For those of you who do not know it, Ceph is a unified, distributed and massively scalable open source storage technology that provides several ways to access and consume your data, such as object, block and filesystem. The community and Ceph itself have greatly matured, and more developers have joined the project as well. Since I joined eNovance, I have been thinking about building Ceph support for DevStack. DevStack is a documented collection of shell scripts to build complete OpenStack development environments. I finally got some time to make this happen. It was not easy though: after 7 months and 42 patch sets (42 was the answer, I guess), my patch got merged into DevStack. Here is the link to the review: https://review.openstack.org/#/c/65113/

It took me a while to get this into DevStack; however, thanks to this patch, DevStack gained several improvements and new capabilities that we will discuss in this article.

 

What does it do?

Basically, the patch configures everything for you. It will bootstrap a Ceph cluster and then configure all the relevant OpenStack services, namely Glance, Cinder, Cinder backup and Nova. Many things are configurable, such as the Ceph size, pool names, user names and replica level. Setting a replica count greater than 1 does not make sense unless you want to look at Ceph replication: using a replica count of 2 will bootstrap 2 OSDs on the exact same loopback device, so Ceph will report having twice the amount of space to store objects, which is not true. So be careful with that. I believe this might be improved in the future, but once again DevStack is a development platform, not a production one. DevStack Swift does the exact same thing for replication.

This patch relies on a recent addition to DevStack: the Cinder multi-backend support. Thanks to Dean Troyer, we can now use several backends for Cinder; this was really critical in order to get Ceph into DevStack. To use it, simply add the flag CINDER_ENABLED_BACKENDS to your localrc file, set to a comma-separated list of backend names.

A new capability was introduced because of Ceph as well: the ability to perform a pre-install phase for extras.d plugins. This is an additional call hook for extras.d plugins that runs before any service installation occurs, between the installation of the system packages listed as prerequisites and the installation of the actual services.
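
To make the hook mechanism concrete, an extras.d plugin dispatches on positional arguments, with the new phase slotting in before "install". The skeleton below is only illustrative; the file and function names are placeholders of mine, not code lifted from the actual patch:

# extras.d/60-ceph.sh -- illustrative skeleton
if is_service_enabled ceph; then
    if [[ "$1" == "source" ]]; then
        # pull in the library of install/configure functions
        source $TOP_DIR/lib/ceph
    elif [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
        # new phase: runs after the system prerequisites,
        # before any OpenStack service is installed
        install_ceph
    elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
        # point Glance/Cinder/Nova at the bootstrapped cluster
        configure_ceph
    fi
fi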

 

./stack.sh

Below you will find a complete localrc example with every variable that you can use. Of course, the user, pool and PG variables are not mandatory; we have default values for those:

# Misc
DATABASE_PASSWORD=password
ADMIN_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
RABBIT_PASSWORD=password

# Enable Logging
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs

# Prerequisite
ENABLED_SERVICES=rabbit,mysql,key

# Ceph!
ENABLED_SERVICES+=,ceph
CEPH_LOOPBACK_DISK_SIZE=10G
CEPH_CONF=/etc/ceph/ceph.conf
CEPH_REPLICAS=1

# Glance - Image Service
ENABLED_SERVICES+=,g-api,g-reg
GLANCE_CEPH_USER=glancy
GLANCE_CEPH_POOL=imajeez

# Cinder - Block Device Service
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
CINDER_DRIVER=ceph
CINDER_CEPH_USER=cindy
CINDER_CEPH_POOL=volumeuh
CINDER_CEPH_UUID=6d52eb95-12f3-47e3-9eb9-0c1fe4142426
CINDER_BAK_CEPH_POOL=backeups
CINDER_BAK_CEPH_USER=cind-backeups
CINDER_ENABLED_BACKENDS=ceph,lvm

# Nova - Compute Service
ENABLED_SERVICES+=,n-api,n-crt,n-cpu,n-cond,n-sch,n-net
NOVA_CEPH_POOL=vmz

 

Why is it useful?

Well, as mentioned in the introduction, many organizations are interested in Ceph and have thus committed to new functionality. As DevStack is the de facto platform for developing on OpenStack, the need for DevStack Ceph support was natural.

This patch is part of my commitment during the Juno cycle to the effort of integrating Ceph into OpenStack. I would like to thank the community for its support with this patch. It was important to see that many people want Ceph to be in DevStack; this helped me a lot and gave me the motivation to persevere. More is coming: we are currently working on getting Ceph into the CI gate, so our patches will be more easily accepted, and also for Cinder, since it requires a CI per volume backend.

Happy DevStacking!

 

 

by Sebastien Han at July 31, 2014 02:56 PM

eNovance talks at next OpenStack Summit Paris

Next November in Paris, the OpenStack community is going to design our future Kilo release.
Together, developers, users and companies are building and sharing thoughts about how OpenStack can become the most scalable, the most efficient, the most open source cloud that has ever existed.
eNovance is proud to be part of this community project and to be an active contributor to the code. Our talented experts have proposed the following talks:

OpenStack and Ceph: match made in the Cloud by Sébastien Han

For more than a year, Ceph has become increasingly popular and has seen several deployments inside and outside OpenStack. The community and Ceph itself have greatly matured. Ceph is a fully open source distributed object store, network block device, and file system designed for reliability, performance, and scalability from terabytes to exabytes. Ceph utilizes a novel placement algorithm (CRUSH), active storage nodes, and peer-to-peer gossip protocols to avoid the scalability and reliability problems associated with centralized controllers and lookup tables. Since the beginning we have been constantly putting effort into integrating Ceph into OpenStack; things have gotten really serious from Grizzly all the way to Icehouse. All of this will certainly encourage people to use Ceph in OpenStack. Ceph is excellent for backing OpenStack platforms, no matter how big and complex the platform. Continuing his series of talks, Sébastien Han from eNovance will go through all the new things that appeared during the Juno cycle and will expose the roadmap for K.

Transforming to OpenStack? A roadmap to work as a service by Nick Barcet  

This talk is a follow-up on the best practices that have proven successful in operating the transformation. We will first focus on identifying the right use cases for a generic enterprise, then define a roadmap with an organizational and a technical track, and finish with the definition of what would be our success criteria for the group. This will take the form of a “workshop summary” based on the multiple engagements eNovance has delivered over the past 2 years.

Javascript in Openstack or: How I learned to stop worrying and love the APIs by Maxime Vidori

Accessing the OpenStack API through the browser without any backend: how to use your browser as a powerful REST client and easily develop new web applications. Thanks to its RESTful API, OpenStack web applications can be freed from the backend part.

In this talk I will explain how to configure your OpenStack to allow this kind of connection, and present a simple JavaScript library which wraps the API calls in reusable modules.

There are a couple of reasons why a JavaScript library can be a better fit than Python or shell scripts. This talk will go over four parts to show you the advantages of JavaScript.

Unthrottling the Network: Delivering high throughput with OpenStack by Nick Barcet

Presentation with Vincent Jardin, 6Wind

Based on the requirements we gathered from our various customers in the telco industry, one of the key elements we found missing was the ability to run VMs that deliver near-wirespeed throughput when connected to gigabit wires. Working together with 6Wind, we took on this requirement and built a lab. The results are in and we are now ready to share some amazing numbers. This presentation will explain how 6Gate and OpenStack can do this, and which tricks we had to apply.

Rethinking Ceilometer metric storage with Gnocchi: Time-series as a Service by Julien Danjou

Recent evolution in OpenStack and Ceilometer pushed us to rethink how we should treat and store metrics. This led to the creation of the Gnocchi project, a service providing time-series indexing and storage via an API, following the OpenStack philosophy.

This talk will explain the design ideas underpinning Gnocchi, how it differs from the classic metering store provided by Ceilometer, and how it has been integrated into the Ceilometer metrics pipeline. We will also discuss the potential for performance & scalability improvements, and how the new metrics store will co-exist with the existing v2 Ceilometer API.

Code, Review, Repeat – Code the OpenStack way by James Kulina

The transformation of enterprises operations and applications delivery methods with the adoption of OpenStack cloud, provides the perfect inflection point to evaluate how best to deliver applications and the methods and tools their developers utilize to build applications.

During this presentation, we will showcase the eNovance Software Factory platform and explain how enterprises can adopt the “OpenStack Way of Coding” to begin to transform how their distributed software development teams can deliver superior quality code at scale and velocity.

Neutron is not broken: real world use cases and deployments of Neutron without breaking the bank by Carl Perry

The goal of this talk is to dispel the myth that Neutron is broken and worthless. To do that, we are going to tackle some sacred cows: a lot of what you want from OpenStack is not what everyone else wants from OpenStack. Networking is not one-size-fits-all, and not all solutions work in all environments. You can deploy a large environment without a commercial solution. You can have VLANs and Neutron too. You can have IPv6. And you can have all of this today, and you can have it with Neutron.

A framework for Continuous Delivery of OpenStack by Sandro Maziotta

The goal of this talk is to present our eNovance Continuous Delivery platform. We will first introduce our business objectives and then present the different components. We will then focus on the key technical deliverables around OpenStack Heat and TripleO that allowed us to build the Continuous Delivery platform.

How to accommodate OpenStack to fit with NFV requirements by Sandro Maziotta

The goal of this talk is to share our experience contributing upstream, in Nova and Neutron, the missing features that enable NFV. Based on some real customer examples, we will first introduce the NFV challenges and gaps, then cover some concrete contributions that we are pushing within the community.

Your Margins Suck! (or, You’re Not Going to Like This) by Pano Xinos

Today's telcos and data center operators find themselves in an increasingly commoditized world, despite their continued reliance on the sale of bandwidth and other dated services such as hosted virtual servers rebranded as cloud computing. This presentation will briefly discuss the evolution of the telco and data center business model, and how these organizations can further adapt and become service providers in their own right, thereby helping to maintain and grow revenues and succeed in the age of cloud and big data.

Neutron Roundtable: Overlay or full SDN? by Nick Barcet 

Neutron offers multiple ways to implement networking. It's not only a matter of vendor choice, but also a choice of networking models. Should the tenants of your cloud be allowed to place requests that directly modify the configuration of your hardware, or would you like them confined to virtual land? What are the limits of each model, and can they be combined? Why would you need access to BGP/OSPF layers from Neutron? What about VPNs or MPLS?

In this roundtable we will ask 5 OpenStack Networking experts to prepare a 5 min position statement on which model they prefer and for what purposes, then we will open the floor to a debate within the group and with the public.

OpenStack on a silver Platter by Emilien Macchi and Frederic Lepied 

Over the last few months, we have seen more and more OpenStack deployments running in production.

Installing an OpenStack cloud that can scale does not just mean setting up packages and running your favorite automation tool to configure all the projects together.

It also means:

  • test deployments (the ability to reproduce the infrastructure)
  • add new features and fix bugs in components (continuous integration)
  • manage upgrades (OpenStack releases, dependencies and operating systems)
  • migrate the production environment to new features (continuous delivery) without downtime

Here are the challenges: how to stay as close as possible to the latest features available in OpenStack, and how to upgrade an OpenStack cloud in production as often as possible.

Looking across the OpenStack distro market, we can see that they all provide a nice way to deploy OpenStack in 5 minutes from a great GUI. But do they really care about upgrades? Are they much more flexible than other solutions? Are they really production-ready?

Adopt TripleO tools for your own project by Goneri Le Bouder

TripleO aims at installing, upgrading and operating OpenStack clouds using OpenStack's own cloud facilities as the foundation.

TripleO itself is actually a collection of different tools. Most of them are standalone projects that can be used independently, including:

  • templates for Heat, the OpenStack Orchestration program,
  • the configuration management tools, such as os-apply-config,
  • or diskimage-builder, the golden image generator.

TripleO makes use of some interesting paradigms, like the use of specialized images and the tight integration with OpenStack Heat. During this presentation, we will give some examples of this integration and its benefits.

Using OpenStack Swift for extreme data durability by Christian Schwede

OpenStack Swift is a very powerful object store that is used in several of the largest object storage deployments around the globe. It ensures a very high level of data durability and can withstand epic disasters if set up in the right way. During this talk we want to give you an overview of the mechanisms used within Swift to ensure data durability and availability, how to design your cluster using regions and zones, and the things you need to pay attention to, especially when growing an existing cluster.

Win the Enterprise: Application high availability by Nick Barcet

During the Atlanta summit, and as mandated by the Board of Directors, a group of users and integrators gathered to focus on what was needed for OpenStack to thrive in the enterprise. This group was subdivided into focus groups, and I had the privilege of driving the Application High Availability activity. This presentation, which will be given jointly with the other members of the group, will provide an overview of the use cases we chose to work on and explain where we currently are in terms of our effort. It will end with a Q&A where you will be able to provide your feedback, volunteer help or ask questions.

[use-case] From devs to ops : deploy, upgrade and rule an OpenStack platform in production by Nicolas Auvray 

This will describe a production use case: deploying and running an OpenStack platform from the very beginning to the bloody end.

How do we install it in an industrial and scalable way? How can we keep it up and running? How do we manage HA? What about backup, monitoring, logging and all that operational stuff?

This talk is mostly about the pain we had installing it, the downtimes we had during upgrades, the strategies we adopted in production to recover a machine as fast as possible, and the technologies we used to handle all of it: Puppet & Ansible.

Performance does matter by Erwan Velu

Deploying clouds is on everybody's mind, but how do you make an efficient deployment?

After setting up the hardware, it's mandatory to make a deep inspection of server performance.

In a farm of supposedly identical servers, many mis-installations or mis-configurations could seriously degrade performance.

If you want to discover such under-performance before users complain about their VMs, you have to detect it before installing any software.

Another performance metric to know is “how many VMs could I load on top of my servers?”.

By using the same methodology it is possible to compare how a set of VMs performs against the bare metal capabilities.

The challenge is here: how do you automatically detect servers that underperform? How do you ensure that a new server entering a farm will not degrade it? How do you measure the overhead of all the virtualization layers from the VM point of view?

In this presentation, I will show how we deal with these issues at eNovance by using open source tools and a strong benchmark methodology. Automated testing is the key to success.

Rowin’ in the wind by Alexis Monville

The last summit gave us the opportunity to explain how we use agility to scale our distributed team contributing to OpenStack.

In this session we will go over how we set up an Agile Guild missioned to groom an Agile and open source culture that pervades the whole organization.

Rowin’ in the Wind is the talk for everyone who wants to go deeper into the subject. We will cover how we work with our distributed teams, how we involve people outside our company, and how we interact with the different OpenStack.

by Laura ZANETTI at July 31, 2014 02:52 PM

Opensource.com

Coding all summer long in OpenStack

The end of Google Summer of Code (GSoC) is near, so I wanted to share with you how things worked out for me as an intern with OpenStack. Specifically, I wanted to let you know my perception of what it takes to participate in GSoC, the blockers you may encounter and how to overcome them, what to expect after the internship, and to give a brief description of what I have been doing during my internship.

by vmartinezdelacruz at July 31, 2014 11:00 AM

July 30, 2014

Rob Hirschfeld

DefCore Advances at the Core > My take on the OSCON’14 OpenStack Board Meeting

Last week’s day-long Board Meeting (Jonathan’s summary) focused on three major topics: DefCore, Contribution Licenses (CLA/DCO) and the “Win the Enterprise” initiative. In some ways, these three topics are three views into OpenStack’s top issue: commercial vs. individual interests.

But first, let’s talk about DefCore!

DefCore took a major step with the passing of the advisory Havana Capabilities (the green items are required). That means that vendors in the community now have Board-approved minimum requirements. These are not enforced for Havana, so that the community has time to review and evaluate.

For all that progress, we only have half of the Havana core definition complete. Designated Sections, the other component of Core, will be defined by the DefCore committee for Board approval in September. Originally, we expected the TC to own this part of the process; however, they felt it was related to commercial interests (not technical ones) and asked the Board to manage it.

The coming meetings will resolve the “is Swift code required” question and that topic will require a dedicated post.  In many ways, this question has been the challenge for core definition from the start.  If you want to join the discussion, please subscribe to the DefCore list.

The majority of the board meeting was spent discussing other weighty topics that are worth a brief review.

Contribution Licenses revolve around the developer vs. broader community challenge. This issue is surprisingly high stakes for many in the community. I see two primary issues:

  1. Tension between corporate (CLA) vs. individual (DCO) control and approval
  2. Concern over barriers to contribution (sadly, there are many, but this one is in the board’s control)

Win the Enterprise was born from product management frustration and a fragmented user base. My read on this topic is that we’re pushing on the donkey. I’m hearing serious rumbling about OpenStack operability, upgrades and scale. This group is doing a surprisingly good job of documenting these requirements so that we will have an official “we need this” statement. It’s not clear how we are going to turn that statement into either carrots or sticks for the donkey.

Overall, there was a very strong existential theme for OpenStack at this meeting: are we companies collaborating or individuals contributing? Clearly, OpenStack is both, but the proportions remain unclear.

Answering this question is ultimately at the heart of all three primary topics. I expect DefCore will be on the front line of this discussion over the next few weeks (meeting 1, 2, and 3). Now is the time to get involved if you want to play along.


by Rob H at July 30, 2014 06:15 PM

OpenStack Reactions

Trying to use devstack


me trying

by chmouel at July 30, 2014 04:56 PM

Daniel P. Berrangé

Announce: gerrymander 1.3 “Any history of sanity in the family?” – a client API and command line tool for gerrit

I’m pleased to announce the availability of a new release of gerrymander, version 1.3. Gerrymander provides a Python command line tool and APIs for querying information from the Gerrit review system, as used in OpenStack and many other projects. You can get it from PyPI

# pip install gerrymander

Or straight from GitHub

# git clone git://github.com/berrange/gerrymander.git

If you’re the impatient type, then go to the README file which provides a quick start guide to using the tool.

This release contains a mixture of bug fixes and two new features. When displaying a list of changes, one of the fields that can be shown per-change is the approvals. This is rendered as a list of all the -2/-1/+1/+2 votes made against the current patch set. The text is also coloured to make it easier to tell at a glance what the overall state of the change is. There are two problems with this: first, when there are a lot of votes on a change, the list gets rather too wide. The bigger problem, though, has been the high level of false failures in the OpenStack CI testing system. These result in many patches receiving -1's from testing, which causes gerrymander to colour them in red:

+-------------------------------------+-------------------------------------------------------+----------+-----------------------+
| URL                                 | Subject                                               | Created  | Approvals             |
+-------------------------------------+-------------------------------------------------------+----------+-----------------------+
| https://review.openstack.org/68942  | Power off commands should give guests a chance to ... | 186 days | w= v=1,1,1,1 c=-2,-1  |
| https://review.openstack.org/77027  | Support image property for config drive               | 152 days | w= v=1,-1,-1,-1 c=-1  |
| https://review.openstack.org/82409  | Fixes a Hyper-V list_instances localization issue     | 128 days | w= v=1,-1 c=-1        |
| https://review.openstack.org/88067  | Allow deleting instances while uuid lock is held      | 104 days | w= v=1,1,1,1 c=2      |
| https://review.openstack.org/108013 | Fixes Hyper-V agent force_hyperv_utils_v1 flag iss... | 12 days  | w= v=1,1,1,-1 c=1,1,1 |

My workflow is to focus on things which do not have negative feedback, and so I found this was discouraging me from reviewing changes that were only marked negative due to bogus CI failures. So in this new release, the display uses separate columns to report test votes, code review votes and workflow votes, each column being separately coloured. Also, instead of showing each individual vote, we only show the so-called “casting vote”, i.e. the one that's most important (the order is -2, +2, -1, +1):

+-------------------------------------+-------------------------------------------------------+----------+-------+---------+----------+
| URL                                 | Subject                                               | Created  | Tests | Reviews | Workflow |
+-------------------------------------+-------------------------------------------------------+----------+-------+---------+----------+
| https://review.openstack.org/68942  | Power off commands should give guests a chance to ... | 186 days | 1     | -2      |          |
| https://review.openstack.org/77027  | Support image property for config drive               | 152 days | -1    | -1      |          |
| https://review.openstack.org/82409  | Fixes a Hyper-V list_instances localization issue     | 128 days | -1    | -1      |          |
| https://review.openstack.org/88067  | Allow deleting instances while uuid lock is held      | 104 days | 1     | 2       |          |
| https://review.openstack.org/108013 | Fixes Hyper-V agent force_hyperv_utils_v1 flag iss... | 12 days  | -1    | 1       |          |

The second new feature is the ‘patchreviewrates’ command, which reports on the review comment activity of people over time. We already have the ‘patchreviewstats’ command, which gives information about review activity over a fixed window, but this doesn’t let us see long-term trends. With the new command we report on the daily number of review comments per person, averaged over a week, and reported for the last 52 weeks. This lets us see how review activity from contributors goes up and down over the course of a year (or 2 dev cycles). I used this to produce a report which I then imported into LibreOffice to create a graph showing the nova-core team activity over the past two cycles (click image to enlarge)
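
Usage is along these lines; I'm quoting the flag from memory, so treat it as an assumption and check gerrymander patchreviewrates --help (or the README) for the exact spelling:

$ gerrymander patchreviewrates --project openstack/nova > nova-review-rates.csv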


Nova core team review rates

In summary, the changes in version 1.3 of gerrymander are:

  • Exclude own changes in the todo lists
  • Add CSV as an output format for some reports
  • Add patchreviewrate report for seeing historical approvals per day
  • Replace ‘Approvals’ column with ‘Test’, ‘Review’ and ‘Workflow’ columns in change reports
  • Allow todo lists to be filtered per branch
  • Reorder sorting of votes to prioritize +2/-2s over +1/-1s
  • Avoid exception from unexpected approval vote types
  • Avoid creating empty cache file when Ctrl-C’ing ssh client
  • Run ssh in batch mode to avoid hang when host key is unknown

Thanks to everyone who contributed patches that went into this new release.

by Daniel Berrange at July 30, 2014 03:32 PM


Tesora Corp

Short Stack: OpenStack turns four, SAP joins the fray and a conversation with OpenStack COO and executive director

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week.

If you like what you see, please consider subscribing.

Here we go with this week's links:

Happy Fourth Birthday, OpenStack | SUSE Conversations

It's hard to believe that OpenStack celebrated its fourth birthday last week. The SUSE Conversations blog looks back at how far it has come in a relatively short time and what it needs to do to keep the momentum going as it continues to garner attention from vendors big and small.

SAP supports open source Cloud Foundry and OpenStack for cloud | ZDNet

And as though to prove how far OpenStack has come, SAP announced it would be a major contributor as part of its growing cloud strategy, and hopes to help guide the project going forward. Whether it can just jump in and have that kind of influence remains to be seen, but being involved will force it to partner with Oracle, which should be pretty entertaining in itself.

Rackspace Announces OnMetal Server Availability, Pricing | eWeek

Meanwhile, Rackspace, one of the founding OpenStack companies, announced a new product last week that gives customers access to dedicated cloud servers driven by the OpenStack API. These so-called bare metal servers give customers who need more consistent performance the ability to run critical software in the cloud without suffering performance inconsistencies.

Is OpenStack the future of Cloud Computing | ZDNet

This vendor-written post from Red Hat points out that OpenStack is making great headway in Asia: the Chinese user group is already the second largest in the world outside of the US, and it's still growing, much like the project itself. Beyond the marketing rhetoric, it's interesting to note that OpenStack is gaining traction across the world, which very likely bodes well for it as a project.

OpenStack execs: Red Hat, Yahoo, Comcast are our adopters -- and contributors | InfoWorld

In this wide-ranging interview at OSCON, InfoWorld talked to OpenStack executive director Jonathan Bryce and COO Mark Collier about the state of the OpenStack project. It's interesting to note the level of corporate involvement in the project, and that many companies are involved not because they want to build a business around OpenStack, but because they want to use OpenStack to meet their own needs.

by 693 at July 30, 2014 12:28 PM

July 29, 2014

Cloudwatt

Cloudwatt Talks proposals for November OpenStack Summit in Paris up for vote

The next OpenStack Summit will be held in Paris in November. As a committed OpenStack user and contributor, Cloudwatt will be there. Our talented experts have therefore submitted the following talks, sometimes in collaboration with partners.

We hope they offer valuable lessons from experience, so if you’re interested, please go ahead and vote for us! The vote will be open from July 30th to August 8th.

Scaling Neutron for public cloud usage with OpenContrail.

Speakers: Edouard Thuleau (Cloudwatt) / Pedro Marques (Juniper) / Sylvain Afchain (eNovance)

This talk presents our experience deploying Neutron at large scale for a public cloud. We present the rationale for choosing OpenContrail as the Neutron plugin for this deployment, as well as the pros and cons versus the default Neutron plugin. The talk covers both the architecture and implementation differences between the two approaches, as well as the operational impact of some design choices.


Launching with OpenContrail: the implementation of a collaborative Agile Delivery Model.

Speakers: Foucault de Bonneval (Cloudwatt) / Jenny Lin (Juniper)

This talk will cover the technical & operational challenges that were addressed to launch Cloudwatt with OpenContrail, and the innovative co-development model created with Juniper to address them. We will also shine a light on the OpenContrail CI architecture that was created as a result.


Openstack Toolbox: Give your users the power to do it themselves.

Speakers: Ala Rezmerita (Cloudwatt) / Jordan Pittier (Cloudwatt)

This session will present three tools developed by Cloudwatt’s team to address the typical needs of OpenStack developers: ‘OSPurge’, a standalone Python script that aims at deleting all resources; ‘Flame’, a HOT Heat template/user-data generator for already existing infrastructures; and ‘OSit’, an OpenStack image tester. This session details the why, the how and the future of these tools.


Using Openstack Swift for extreme data durability.

Speakers: Florent Flament (Cloudwatt) + Christian Schwede (eNovance)

OpenStack Swift is a very powerful object storage system that is used in several of the largest object storage deployments around the globe. It ensures a very high level of data durability and can withstand epic disasters if set up in the right way. During this talk we want to give you an overview of the mechanisms used within Swift to ensure data durability and availability, how to design your cluster using regions and zones, and the things you need to pay attention to, especially when enhancing an existing cluster.


Eating our own dog food: Upgrading Cloudwatt’s development model to leverage OpenStack.

Speakers: Regis Allegre (Cloudwatt) + Hugues Obolonsky (Cloudwatt)

Cloudwatt has always been a firm believer in the dogfooding principle, and consequently has always tried to leverage OpenStack for its internal development needs as well as its public cloud. This proved productive in many ways: providing elasticity to our development teams, growing OpenStack operational expertise, and helping us understand the impact of major OpenStack releases before they reached our production. We’ll also use this talk to go over the dev & test use cases that proved compatible with public cloud usage, and the ones that remained resistant.

by Régis Allègre at July 29, 2014 10:00 PM


OpenStack Blog

Upcoming Industry Events & CFP Deadlines!

Fall is quickly approaching and there are some great industry events around the world coming up on the calendar, as well as Call for Proposals deadlines!

The Global Events Calendar is the primary resource to know what events are approaching. It is fully editable, so you can update the following criteria:

  • If your organization is attending, sponsoring or exhibiting (COLUMN G)
  • Provide feedback or ideas on events (COLUMN H)
  • Add vendor-independent industry events to the calendar (complete ALL criteria)

Here are the upcoming industry events planned for the second half of 2014:

PyCon AU: August 1-5, Brisbane, Australia

  • The OpenStack Miniconf will be held on August 1st and features speakers, including Jamie Lennox, Robert Collins, Matthew Treinish, Anita Kuno & more!

CloudOpen NA: August 20 – 22, Chicago

  • If you are knowledgeable in OpenStack and would like to volunteer at the OpenStack booth, please email events@openstack.org

Interop Mumbai: September 4-5, Mumbai, India
Cloud Connect China: September 16-18, China

  • Attend the half-day OpenStack workshop the afternoon of September 16 for hands-on experience with OpenStack.

DEVIEW: September 29-30, Seoul, South Korea
Gartner Symposium/IT Expo North America: October 5-9, Orlando, FL
CloudOpen Europe: October 13-15, Dusseldorf, Germany
FUTURECOM: October 13-16, Sao Paolo, Brazil
Gartner Symposium/IT Expo Japan: October 28-30, Tokyo, Japan
Open World Forum: October 31- November 1, Paris, France
USENIX LISA: November 9-14, Seattle, WA

  • Stay tuned for the date and time of the half-day OpenStack workshop at LISA 2014!

OW2 Annual Conference: November 4-6, Paris, France
Supercomputing: November 16-21, New Orleans, LA
Gartner Data Center Conference: December 2-5, Las Vegas, NV

  • Interested in exhibiting to this Infrastructure and Operations IT management audience?  Join us in the OpenStack Pavilion.  Contact events@openstack.org for more information.

Here are the approaching CFP deadlines:
Ubucon.DE - July 31

If you have any questions, or you would like to plan a regional OpenStack Day, please contact events@openstack.org

by Allison Price at July 29, 2014 04:16 PM

eNovance Engineering Teams

Multi-tenant Docker with OpenStack Heat

While Heat exists mostly as an orchestration tool for OpenStack, it is also an interesting system for describing interactions with APIs in templates. There has been a resource to talk to the Docker API in Heat for a few months now [1], and we’ve seen some great examples of how to use it. Most of them expect Docker deployed on your Heat node, with all your users talking to it. With colleagues, we thought about how you could instead use Nova servers with Docker installed and talk to the remote API [2]. This way we get a tenant-specific Docker instance over which we have full control.
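
As a quick smoke test of that idea (a hypothetical example; the Nova server address is yours, and port 2345 matches the template below), the Docker remote API can be exercised with plain curl once the daemon listens on TCP:

# List containers through the tenant’s Docker endpoint
$ curl http://<nova-server-ip>:2345/containers/json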

While exploring those capabilities, I discovered that Docker introduced several exciting features that would make the template description much nicer. I wrote a patch to be able to use them [3]; the following example relies on it, and the good news is that it was recently merged into the Heat master branch. You also need to enable the Docker resource in your Heat deployment [4].

The main issue to solve when deploying such a template is making sure that your Docker service is ready before you start creating containers in it. The newly supported way to do this is to use software deployment resources. You need tools on your base image to talk to Heat, as used by TripleO. Building such an image is described in the heat-templates repository [5].

We’ll build the following example, creating our Docker server and two containers inside it for a WordPress deployment:

[diagram: Heat and Docker]

This is what an example template looks like:

heat_template_version: 2013-05-23

description: >
  Heat Docker template using software deployments.

parameters:

  key_name:
    type: string
    description : Name of a KeyPair to enable SSH access to the instance
    default: heat

  instance_type:
    type: string
    description: Instance type for WordPress server
    default: m1.small

  image:
    type: string
    description: >
      Name or ID of the image to use for the Docker server.  This needs to be
      built with os-collect-config tools from a fedora base image.

resources:
  docker_sg:
    type: OS::Neutron::SecurityGroup
    properties:
      description: Ping, SSH, Docker
      rules:
      - protocol: icmp
      - protocol: tcp
        port_range_min: 22
        port_range_max: 22
      - protocol: tcp
        port_range_min: 80
        port_range_max: 80
      - protocol: tcp
        port_range_min: 2345
        port_range_max: 2345

  docker_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash -v
        setenforce 0
        yum -y install docker-io
        cp /usr/lib/systemd/system/docker.service /etc/systemd/system/
        sed -i -e '/ExecStart/ { s,fd://,tcp://0.0.0.0:2345, }' /etc/systemd/system/docker.service
        systemctl start docker.service

  docker_deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: docker_config}
      server: {get_resource: docker_host}

  docker_host:
    type: OS::Nova::Server
    properties:
      image: {get_param: image}
      flavor: {get_param: instance_type}
      key_name: {get_param: key_name}
      security_groups:
        - {get_resource: docker_sg}
      user_data_format: SOFTWARE_CONFIG

  database_password:
    type: OS::Heat::RandomString

  database:
    type: DockerInc::Docker::Container
    depends_on: [docker_deployment]
    properties:
      image: mysql
      name: db
      docker_endpoint:
        str_replace:
          template: http://host:2345/
          params:
            host: {get_attr: [docker_host, networks, private, 0]}
      env:
        - {str_replace: {template: MYSQL_ROOT_PASSWORD=password,
                         params: {password: {get_attr: [database_password, value]}}}}

  wordpress:
    type: DockerInc::Docker::Container
    depends_on: [database]
    properties:
      image: wordpress
      links:
        db: mysql
      port_bindings:
        80/tcp: [{"HostPort": "80"}]
      docker_endpoint:
        str_replace:
          template: http://host:2345/
          params:
            host: {get_attr: [docker_host, networks, private, 0]}

outputs:
  url:
    description: Public address of the web site
    value:
      str_replace:
        template: http://host/wordpress
        params:
          host: {get_attr: [docker_host, networks, private, 0]}

You can deploy the template simply by calling stack-create:

heat stack-create -f docker_sd.yaml -P image=fedora-software-config -P instance_type=m1.large docker_stack

The first part of the template is about exposing a Nova server with a Docker API endpoint listening for commands. We then create a database container and a WordPress container that we link to it. Using the port_bindings configuration, we expose the container on the Docker host. The URL output gives the private address where the service should be up once the stack is deployed.

An alternative to software deployments is to simply use wait conditions. While less powerful, they solve our problem easily; the corresponding template is shown at [6].
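
For illustration, here is a minimal sketch of the wait condition approach (assuming a Heat release that provides the native OS::Heat::WaitCondition resources; the resource names are mine):

  docker_wait_handle:
    type: OS::Heat::WaitConditionHandle

  docker_wait:
    type: OS::Heat::WaitCondition
    properties:
      handle: {get_resource: docker_wait_handle}
      timeout: 600

The server’s boot script would signal Heat with the {get_attr: [docker_wait_handle, curl_cli]} command once the Docker daemon is listening, and the containers would then use depends_on: [docker_wait] instead of depending on the software deployment.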

This being a first shot at solving the problem, there are several ways it can be improved. The main one is that while you have a per-tenant Docker endpoint, it’s open without any authentication. It would be nice to use client certificates as shown at [7]. Then, for reproducibility, having an image with Docker preinstalled and configured would simplify the template quite a bit.

We haven’t heard the final word on how Docker and containers will integrate into OpenStack. There is a driver for Nova [8] under active development, but it’s unclear (at least to me) whether it fits the Nova model, and it certainly doesn’t expose all the features Docker has to offer. I expect something like a container service to emerge, but in the meantime Heat gives you some nice capabilities.

[1] http://docs.openstack.org/developer/heat/template_guide/contrib.html#dockerinc-resource

[2] https://docs.docker.com/reference/api/docker_remote_api/

[3] https://review.openstack.org/106120

[4] https://github.com/openstack/heat/blob/master/contrib/docker/docker/README.md

[5] https://github.com/openstack/heat-templates/blob/master/hot/software-config/elements/README.rst

[6] https://gist.github.com/therve/0e1148296c6c9b43cb55

[7] https://docs.docker.com/articles/https/

[8] https://github.com/stackforge/nova-docker

by Thomas Herve at July 29, 2014 04:04 PM

Spilgames Engineering

Using Ceilometer with Graphite

At Spil Games we love OpenStack and we love metrics. We tried to run Ceilometer in the past but experienced performance issues. Since we heavily use Graphite to store metrics, we thought it would be a good idea to push Ceilometer metrics into Graphite. The data is sent directly from the compute node to the Graphite backend, so there are […]

The post Using Ceilometer with Graphite appeared first on Spil Games Engineering.

by Robert van Leeuwen at July 29, 2014 01:34 PM

Rafael Knuth

Google+ Hangout: Swift 2.0 Released – Now with Storage Policies

Storage Policies are the biggest feature to be added to OpenStack Swift since the project began....

July 29, 2014 09:17 AM

July 28, 2014

Amar Kapadia

The Number One Inhibitor to Cloud Storage (Part 2 of 2)!

The number one inhibitor is Access! (Part 2)

I've been feeling bad about delaying this second part of my blog, but in hindsight the delay was good: EMC acquired TwinStrata in the meantime, validating the whole premise of this blog!

Anyway, a few weeks ago I talked about how access, in my view, is the biggest inhibitor to cloud storage. Specifically, the five issues are:

1. How do I get massive amounts of data in-and-out of the cloud?
2. How do I get my application to interface with cloud storage?
3. How do I get cloud storage to fit within my current workflow?
4. How do I figure out what data to move to the cloud?
5. Once the data is moved, how do I know it's in the cloud?

Also, the publisher for my OpenStack Swift book is having this contest:
==============================
Book Give-away:

Get a chance to win a free copy of Implementing Cloud Storage with OpenStack Swift, just by commenting about the book with the link - http://www.packtpub.com/implementing-cloud-storage-with-openstack-swift/book! For the contest we have 7 e-copies of the book to give away to 7 lucky winners.

How you can win:

To win your copy of this book, all you need to do is come up with a comment below highlighting the reason “why you would like to win this book”, along with the link to the book - Implementing Cloud Storage with OpenStack Swift

Note – To win, the winners must also mention the book link in their comments  - http://www.packtpub.com/implementing-cloud-storage-with-openstack-swift/book

Duration of the contest & selection of winners:
The contest is valid for 1 week (from 7/25/14 to 8/1/14) and is open to everyone. Winners will be selected on the basis of their posted comments.
==============================
Read more »

by Amar Kapadia (noreply@blogger.com) at July 28, 2014 07:30 PM

Opensource.com

Last call for OpenStack Summit speakers, Juno security updates, and more

Interested in keeping track of what's happening in the open source cloud? Opensource.com is your source for what's happening right now in OpenStack, the open source cloud infrastructure project.

OpenStack around the web

There's a lot of interesting stuff being written about OpenStack. Here's a sampling:

by Jason Baker at July 28, 2014 04:00 PM

Cloudify Engineering

Bringing New Intelligence to Cloud Orchestration with Cloudify 3.0

Cloudify has been completely re-architected to provide Intelligent Orchestration of applications on the cloud. With this product rewrite, the new Cloudify orchestration platform simplifies the application deployment, management and scaling experience on OpenStack, VMware vSphere and other clouds and environments.

"To deliver this next generation, intelligent orchestration, we needed to rethink Cloudify’s design. With a new language of code, adoption of industry standards and development of scalable and custom workflows, we created something that few are doing today - orchestration of the entire app lifecycle that encompasses both pre-deployment and post-deployment management with a single platform.” - Yaron Parasol, VP of Product at GigaSpaces. 

Cloudify 3 highlights include automated reactions with a powerful workflow and policy engine, easy integration with any tool chain, native integration with OpenStack technology, support for VMware vSphere, CloudStack, Softlayer and other clouds and much more.

Read more from this announcement to hear what’s new with Cloudify 3.

by cloudify-engineering at July 28, 2014 12:12 PM

July 27, 2014

Arx Cruz

Deleting OpenStack Instances directly from database

Today I had a problem with my CI. Basically, one of my compute nodes went down, and all the VMs created on that compute node stopped working (of course!). Since I hate doing a nova list and seeing a lot of VMs in ERROR state, and I wasn’t able...

by Arx Cruz at July 27, 2014 05:58 PM

July 25, 2014

Ana Malagon

Notes on Stevedore

This is mostly taken from the very helpful documentation on Stevedore; when I started working on Gnocchi I found myself wondering a lot about the functions of two modules in particular, stevedore and pecan. The other day I needed to use stevedore to load a plugin and so finally had the chance to use it in practice. These are some notes from the process – hopefully they can be useful for someone needing to use stevedore for the first time.

So my basic understanding of stevedore is that it is used for managing plugins, or pieces of code that you want to load into an application. The manager classes work with plugins defined through entry points to load and enable the code.

In practice, this is what the process looks like:

  • Create a plugin

The documentation, which I believe is authored by Doug Hellmann, recommends making a base class with the abc module, as good API practice. In my case, I wanted to make a class that would calculate the moving average of some data. So my base class, defined in the init file of my directory (/gnocchi/statistics), looked like this:

import abc
import six

@six.add_metaclass(abc.ABCMeta)
class CustomStatistics(object):

    @abc.abstractmethod
    def compute(self, data):
        '''Return the custom statistic of the data.'''

The code is implemented in the class MovingAverage (/gnocchi/statistics/moving_statistics.py):

from gnocchi import statistics

class MovingAverage(statistics.CustomStatistics):

  def compute(self, data):
      ... do stuff ...
      return averaged_data

  • Create the entry point

The next step is to define an entry point for the code in your setup.cfg file. The syntax for the entry point format is

plugin_namespace=
 name = module.path:thing_you_want_to_import_from_the_module

so I had

[entry_points]
gnocchi.statistics =
    moving-average = gnocchi.statistics.moving_statistics:MovingAverage

The stevedore documentation on registering plugins has more information on how to package a library in general using setuptools.

  • Load the Plugins

You can use the driver, hook, or extension pattern to load your plugins. I ended up starting with drivers and then moving to extensions. The difference between them is whether you want to load a single plugin (use drivers) or multiple plugins at a time (extensions). I believe hooks also allow you to load many plugins at once, but are meant to be used for multiple entry points with the same name. This lets you invoke several functions with a single call… that’s about the limit of my knowledge on hooks.

The syntax for a driver is the following:

from stevedore import driver

mgr = driver.DriverManager(
    namespace='gnocchi.statistics',
    name='moving-average',
    invoke_on_load=True,
)

output = mgr.driver.compute(data)

The invoke_on_load argument lets you call the object when it is loaded. Here the object is an instance of the MovingAverage class. You access it through the driver property and then call its methods (in this case, compute). You can also pass in arguments through DriverManager; see the documentation for more detail.
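
For instance, a hypothetical sketch (it assumes a plugin whose __init__ accepts a window size and a 'center' flag; the MovingAverage class above takes no such arguments):

from stevedore import driver

window_size = 10

mgr = driver.DriverManager(
    namespace='gnocchi.statistics',
    name='moving-average',
    invoke_on_load=True,
    invoke_args=(window_size,),     # positional args passed to __init__
    invoke_kwds={'center': False},  # keyword args passed to __init__
)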

I ended up going with extensions instead of drivers, as there were multiple statistical functions I had as plugins and I wanted to load all the entry points at once. The syntax is then

from stevedore import extension

mgr = extension.ExtensionManager(
    namespace='gnocchi.statistics',
    invoke_on_load=True,
)

This loads all of the plugins in the namespace. In my case I wanted to make a dictionary of all the function names and the extension objects so I did:

configured_statistics = dict((x.name, x.obj) for x in mgr)

When a GET request to Gnocchi included a query for computing statistics on the data, the dict was consulted to see if there was a match with a configured statistics function name. If so, the compute() method was called on the extension object:

 output = configured_statistics[user_query].compute(data)

The documentation shows an example using map() to call all the plugins. For the code below, results would be a sequence of pairs of function names and the resulting data once the statistic is applied:

def compute_data(ext, data):
    return (ext.name, ext.obj.compute(data))

results = mgr.map(compute_data, data)

If you need the order to matter when loading the extension objects, you can use NamedExtensionManager.
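
A minimal sketch of that (the 'median' plugin name is made up for illustration):

from stevedore import named

mgr = named.NamedExtensionManager(
    namespace='gnocchi.statistics',
    names=['moving-average', 'median'],  # only these entry points load
    invoke_on_load=True,
    name_order=True,  # extensions come back in the order given above
)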

That’s about it for my notes on stevedore – it’s a clean, well-designed module and I’m glad I got to learn about it.

July 25, 2014 07:41 PM

OpenStack Blog

OpenStack Community Weekly Newsletter (July 18 – 25)

How to Effectively Contribute to An Open Source Project Such As OpenStack Neutron

As Neutron’s Project Technical Lead (PTL), Kyle Mestery has been mostly heads-down working to ensure the Neutron project has a successful Juno release. Increasingly, and especially near OpenStack Juno milestone deadlines, he’s forced to make hard choices and turn new features down in order to focus on shipping good-quality code for Juno. He sent an email to the openstack-dev mailing list this morning addressing the pressure his team is under. He also wrote a longer blog post to expand upon that email.

OpenStack Failures

Last week the bulk of the brain power of the OpenStack QA and Infra teams was in one room, in real life. This was a great opportunity to spend a bunch of time diving deep into the current state of the Gate, figure out what’s going on, and how we might make things better. Sean Dague, Jim Blair, Elizabeth K. Joseph and bmwiedemann wrote summaries of the week.

OpenStack plays Tetris : Stacking and Spreading a full private cloud

CERN is running a large-scale private cloud which provides compute resources for physicists analysing the data from the Large Hadron Collider. With hundreds of VMs created per day, the OpenStack scheduler has to perform a Tetris-like job to assign the different flavors of VMs to specific hypervisors.

Juno Updates – Security, Authentication and other neat things

There is a lot of development work going on in Juno in security related areas. Nathan Kinder wrote up some of the more notable efforts that are under way in Keystone, Barbican, Kite and other projects.

The Road To Paris 2014 – Deadlines and Resources

During the Paris Summit there will be a working session for the Women of OpenStack to frame up more defined goals and lay out a blueprint for the group moving forward. We encourage all women in the community to complete this very short survey to provide input for the group.

Security Advisories and Notices

Tips ‘n Tricks

Upcoming Events

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

Welcome New Reviewers and Developers

Mike Smith Roman Vasilets
Mika Ayenson Motohiro Otsuka
Claudiu Nesa David Yuan
Scott Reeve Amey Ghadigaonkar
Pawel Palucki Alexandr Naumchev
Travis McPeak Michele Paolino
Livnat Peer Marcus V R Nascimento
zhangtralon daya kamath
Ryan Brown arkady kanevsky
David Caudill Travis McPeak
FeihuJiang ChingWei Chang
Anusha JJ Asghar
Lee Yarwood Neetu Jain
François Magimel
Ashraf Vazeer

Latest Activity In Projects

Do you want to see at a glance the bugs filed and solved this week? Latest patches submitted for review? Check out the individual project pages on OpenStack Activity Board – Insights.

OpenStack Reactions

My reaction the first time I had a contribution merged in OpenStack

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at July 25, 2014 06:31 PM

Nathan Kinder

Juno Updates – Security

There is a lot of development work going on in Juno in security related areas.  I thought it would be useful to summarize what I consider to be some of the more notable efforts that are under way in the projects I follow.

Keystone

Nearly everyone I talk with who is using Keystone in anger is integrating it with an existing identity store such as an LDAP server.  Using the SQL identity backend is really a poor identity management solution, as it only supports basic password authentication, lacks password policy support, and offers fairly limited user management capabilities.  Configuring Keystone to use an existing identity store has its challenges, but some of the changes in Juno should make this easier.  In Icehouse and earlier, Keystone can only use one single identity backend.  This means that all regular users and service users must exist in the same identity backend.  In many real-world scenarios, the LDAP server used for users and credentials is considered to be read-only by anything other than the normal user provisioning tools.  A common problem is that the OpenStack service users are not wanted in the LDAP server.  In Juno, it will be possible to configure Keystone to use multiple identity backends.  This will allow a deployment to use an LDAP server for normal users and the SQL backend for service users.  In addition, this should allow multiple LDAP servers to be used by a single Keystone instance when using Keystone Domains (which previously only worked with the SQL identity backend).
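
As a sketch of how this looks in configuration (hostnames and file names are hypothetical, and option names may still shift before Juno ships), the feature builds on Keystone’s domain-specific configuration files:

# /etc/keystone/keystone.conf
[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains

# /etc/keystone/domains/keystone.users.conf -- one file per domain
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ldap.example.com
suffix = dc=example,dc=com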

I mentioned above that Keystone’s SQL identity backend is not ideal.  In many ways, Keystone’s LDAP identity backend is also not ideal.  Authentication is currently limited to the LDAP simple bind operation, which requires users to send their clear-text password to Keystone, which then sends it to the LDAP server (hopefully all over SSL/TLS protected connections).  Keystone already allows for stronger authentication via external authentication, but there are some barriers to adoption that should be eliminated in Juno.  Using external authentication requires that Keystone is run in Apache httpd.  Unfortunately, the size of Keystone’s PKI formatted tokens can easily get large enough to cause problems with the httpd/mod_wsgi interaction due to the amount of service catalog information contained within the token.  In Juno, the default token format is a compressed PKI format called PKIZ.  This significantly reduces the size of tokens such that running Keystone in httpd is feasible.  This will make it possible for Keystone deployments to leverage a number of httpd modules that allow for strong forms of authentication such as Kerberos and X.509 client certificates.  The Keystone team even switched all of its gate jobs to use httpd recently, as it is considered to be the recommended deployment method going forward.
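
In keystone.conf terms the change is just the token provider; a sketch (explicitly setting what should become the Juno default):

[token]
provider = keystone.token.providers.pkiz.Provider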

Keystone has an existing Federation extension that allows one to define mappings to translate SAML assertions into Keystone role assignments.  In Juno, this mapping functionality is being made more general purpose to allow it to be used with external authentication via Apache httpd modules.  This will allow for some very interesting use-cases, such as having Apache provide all of the user and group information necessary to figure out role assignment without the need for Keystone to maintain its own identity store.  For example, one should be able to use httpd with mod_lookup_identity to allow SSSD on the underlying platform to provide Keystone with all of the user and group information from an external backend identity provider (FreeIPA, Active Directory).  This offloads all of the LDAP complexity to SSSD, which provides LDAP connection pooling and caching to allow for continued service even if the LDAP server is down.  Combined with strong authentication like Kerberos, this offers a performant and secure approach to authentication and identity while leaving Keystone to focus on its main task of authorization within OpenStack.
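
A rough httpd sketch of that combination (the path and realm are hypothetical; the directives come from mod_auth_kerb and mod_lookup_identity):

<Location "/keystone/main">
    AuthType Kerberos
    AuthName "Kerberos Login"
    KrbServiceName HTTP
    Require valid-user
    # Pull user attributes and groups from SSSD over D-Bus
    LookupUserAttr mail REMOTE_USER_EMAIL
    LookupUserGroupsIter REMOTE_USER_GROUP
</Location>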

Barbican

The Barbican project looks to be progressing nicely through the incubation process.  Barbican was initially designed with a plug-in model, with a single hardware security module plug-in.  There has been quite a bit of interest in implementing various new plug-ins, which has highlighted the need to re-architect the plug-in interface to allow for multiple plug-in types.  This re-architecture has been one of the big focus items of the Juno cycle thus far, as it affects any new functionality that is being implemented as a plug-in.

A plug-in was implemented to allow the Dogtag PKI DRM (data recovery manager) subsystem to be used for key generation and storage.  This should allow for easy integration for those with existing Dogtag installations as well as being an attractive well-proven key archival and recovery solution for new deployments.

Barbican is expanding its functionality to allow for more than its initial use-case of storing (and optionally generating) symmetric keys.  The ability to store asymmetric key pairs and certificates is being added in Juno as well, which is of particular interest for things like LBaaS.  The ability to act as an interface for handling certificate requests and interacting with a CA (certificate authority) is also being worked on, with plug-ins for Dogtag and Symantec CAs.

Kite

The Kite project continues to work its way through implementation, albeit slowly.  There has only been one developer working part-time on the implementation, so it’s not likely to be in a usable state until the Kilo release.  The good news is that an additional contributor has recently started contributing to this effort, working from the Oslo messaging library side of things.  Hopefully this speeds things along so that Kite is available for other projects to start developing against early in the Kilo cycle.

Cross-Project

A new Session object has been added to Keystoneclient for use by the client code in other projects.  The Session object centralizes the responsibility of authentication and transport handling.  From a security standpoint, this is very nice since it centralizes all of the HTTPS client-side code across all projects, as opposed to the current situation of having many different implementations.  The configuration of things like certificate validation will also be consistent across all projects once they utilize the Session object, a particular item that is a bit of a mess in Icehouse and earlier releases.  Nova has already been converted to use the new Session object, and conversion of the Cinder and Neutron projects is in progress.

There has been some work in Devstack that is worth noting, even though its code doesn’t directly affect actual deployments.  Patches have been proposed to allow Devstack to easily be set up with SSL/TLS support for many of the main services.  The goal of this work is to allow SSL/TLS to eventually be regularly tested as part of the continuous integration gate jobs.  This is an area that is not currently well tested, even though it is crucial functionality for secure deployments.  This work should result in more robust SSL/TLS code, which benefits everyone.

by Nathan Kinder at July 25, 2014 03:49 PM

Opensource.com

Celebrating sysadmins in the cloudy future

System administration can be a thankless job. To all of the tireless administrators out there who keep the systems we rely upon up and running, today is the day that we say thank you!

by Jason Baker at July 25, 2014 11:00 AM

Rafael Knuth

Google+ Hangout: Set up OpenStack on AWS or GCE for dev/test w/ Ravello Systems

OpenStack is awesome. But, in order to try out the latest releases you typically need more hardware...

July 25, 2014 09:40 AM

Opensource.com

Docker acquires Orchard, SAP supports OpenStack, ODF and more

Open source news for your reading pleasure.

July 19 - 25, 2014


In this week's edition of our open source news roundup, we take a look at Docker acquiring Orchard, SAP's support for Cloud Foundry and OpenStack, the UK government making ODF its official document standard, and more!

by robinmuilwijk at July 25, 2014 09:00 AM

July 24, 2014

Kyle Mestery

How to Effectively Contribute to An Open Source Project Such As OpenStack Neutron

Since being elected as the OpenStack Neutron PTL, I’ve been mostly heads down working to ensure the Neutron project has a successful Juno release. Increasingly, and especially near OpenStack Juno milestone deadlines, I’m seeing frustration from new contributors around their contributions to Neutron. I sent an email to the openstack-dev mailing list this morning addressing this in a terse form, this blog is an attempt to expand upon that email.

An increasing concern I see from people who are new contributors is the perceived issues in getting their code merged into Juno. The common concerns come from one of the following possible ideas:

  1. OpenStack Neutron is a “closed development” environment where only a select few are allowed to make changes.
  2. OpenStack Neutron is part of some grand conspiracy by vendor X, and as I work for vendor Y, my changes never get merged.
  3. OpenStack Neutron needs more core developers, this would solve all the velocity problems.

Addressing the above concerns in order: OpenStack Neutron is not a “closed development” environment. We do all of our work in the open using mailing lists and IRC (for both meetings and discussions). Anyone can submit a patch to OpenStack Neutron once you’ve signed the CLA and followed the process here. We welcome new contributors and do our best to work with them! My advice for new contributors, echoed below, is to start small, listen and learn.

The second concern, around conspiracy theories, comes from people who are frustrated that their specification or blueprint isn’t prioritized for inclusion in a specific Juno release. They may also work for a competitor of an existing Neutron core developer’s company and think there are political machinations going on. While it’s true upstream developers work for corporations (we like to get paid and feed our families, it’s true!), it’s also true the core team in particular works together as a team. This means that people who work for Cisco do in fact work with people from VMware, as an example. We’re all working to drive stability and innovation upstream, but we do it in an open and collaborative manner.

The third concern is a tricky one. While adding more core reviewers would certainly help spread the load, we’re generally picky about adding new ones and have a policy in place around how we do this. The reason we’re picky is because being a core reviewer is a big responsibility, and we want dedicated developers who can contribute time upstream for the role. We’re in the process of coming up with a mentoring program to groom Neutron core developers, which will hopefully help here.

I’d like to reiterate from the email a list of effective ways to contribute to an upstream Open Source project:

  1. Get involved in the ways the project uses to communicate, including IRC, mailing lists, Hangouts, phone calls, whatever they use.
  2. Work upstream to fix some bugs. Documentation bugs and low-hanging fruit bugs are a good place to start.
  3. Understand how the existing team works. It’s best to understand this before suggesting any changes.
  4. Come to weekly meetings. Make sure you spend a little while listening to understand how the existing team works before jumping in with your own agenda.
  5. Build relationships. Doing the 4 points above will naturally lead you to this point.

An Open Source project is really a large development project being done in the open. To effectively join such a project, you need to learn how it operates and contribute in small ways initially. As in closed-source development, gaining the trust of the existing contributors is key to having influence over the direction of the project. The existing developers for a project such as Neutron (both core and non-core) all spend varying amounts of time working upstream. But the key point is that they have taken the time to develop their contributions as well as grow relationships with existing upstream developers.

Upstream effectiveness is really about the time you put into your contributions. As existing developers upstream, we’re responsible for the current and future state of a project like OpenStack Neutron. While we have policies in place dictating how we work, we’re also responsible for things like gate failures, bug regressions, packaging issues, broad community interaction, and core developer grooming, among other things.

This post isn’t meant to discourage anyone from being an upstream developer. Quite the contrary, actually. I’m hoping to highlight effective ways to work upstream, which is beneficial for both new contributors as well as existing contributors. At the end of the day, a project like OpenStack Neutron involves people from across a wide spectrum of companies and interests. As the PTL, it’s my job to lead these people to a common goal. Growing the base of contributors is part of this responsibility as well. I look forward to seeing new contributors in Neutron and OpenStack in general!

by mestery at July 24, 2014 07:28 PM

SUSE Conversations

Happy Fourth Birthday OpenStack

The OpenStack project turned four this past week.  While birthdays are always a time to celebrate they are also a time to reflect on past accomplishments and anticipate the excitement of future endeavors.  And, since OpenStack has been maturing in dog years, some contemplation of where it has been and where it’s headed seems in …

+read more

by Douglas Jarvis at July 24, 2014 06:13 PM

Piston

Bring New Apps and Services to Market Faster with SDN and OpenStack

In this new “data revolution” economy, being able to move apps quickly from testing to staging to production is a key competitive differentiator for most businesses today. Being able to automate the use of an API-driven infrastructure is the essential ingredient to moving at the breakneck speed of today’s software-driven businesses.

In light of this revolution, the founders and CTOs of Piston and PLUMgrid got together for an online fireside chat (you can check out the replay below) to discuss this phenomenon.

Some highlights from their conversation included:

  • How a software-defined cloud architecture enables scalability, performance monitoring, automation, and zero downtime for your infrastructure
  • Technical and business benefits of a simplified OpenStack deployment model
  • Common use cases for deploying Piston OpenStack with the PLUMgrid OpenStack Networking Suite
Watch the replay: http://player.vimeo.com/video/101468842

Piston and PLUMgrid’s turn-key solution offers a scalable, secure, and extensible platform to speed time to market for new apps and services. Interested in learning more about how Piston and PLUMgrid’s joint solution can help you bring new apps and services to market quickly? Reach out to a member of our team today and we’d be happy to assist you! Start today!

by Piston Staff at July 24, 2014 04:30 PM

Sean Dague

Splitting up Git Commits

Human review of code takes a bunch of time. It takes even longer if the proposed code has a bunch of unrelated things going on in it. A very common piece of review commentary is “this is unrelated, please put it in a different patch”. You may be thinking to yourself “gah, so much work”, but it turns out git has built-in tools to do this. Let me introduce you to git add -p.

Let’s look at this Grenade review - https://review.openstack.org/#/c/109122/1. This was the result of a day’s worth of hacking to get some things in order. Joe correctly pointed out there was at least 1 unrelated change in that patch (I think he was being nice; there were probably at least 4 things going on that should have been separate). Those things are:

  • The quiesce time for shutdown, which actually fixes bug 1285323 all on its own.
  • The reordering of the directory creation so it works on a system without /opt/stack
  • The conditional upgrade function
  • The removal of the stop short circuits (which probably shouldn’t have been done)

So how do I turn this 1 patch, which is at the bottom of a patch series, into 3 patches, plus drop the bit that I did wrong?

Step 1: rebase -i master

I start by running git rebase -i master on my tree to put myself into interactive rebase mode. In this case I want to be editing the first commit, to split it out.

[screenshot: the interactive rebase todo list]

Step 2: reset the changes

git reset ##### will unstage all the changes back to the referenced commit, so I’ll be working from a blank slate to add the changes back in. In this case I need to figure out the last commit before the one I want to change, and do a git reset to that hash.
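
Concretely, it looks something like this (hashes made up for illustration):

$ git log --oneline
1a2b3c4 the commit I am splitting (HEAD)
5d6e7f8 the last commit before it
$ git reset 5d6e7f8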

[screenshot: git reset back to the prior commit]

Step 3: commit in whole files

Unrelated change #1 was fully isolated in a whole file (stop-base), so that’s easy enough: do a git add stop-base and then git commit to build a new commit with those changes. When splitting commits, always do the easiest stuff first to get it out of the way of the trickier things later.

Step 4: git add -p 

In this change, grenade.sh needs to be split up all by itself, so I ran git add -p to start the interactive git add process. You will be presented with a series of patch hunks and a prompt about what to do with each of them: y = yes, add it; n = no, don’t; and lots of other options for being trickier.
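
The prompt looks roughly like this (hunk content elided):

$ git add -p
diff --git a/grenade.sh b/grenade.sh
...
Stage this hunk [y,n,q,a,d,/,j,J,g,e,?]?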

[screenshot: the git add -p hunk prompt]

In my particular case the first hunk is actually 2 different pieces of function, so y/n isn’t going to cut it. In that case I can type ‘e’ (edit), and I’m dumped into my editor staring at the patch, which I can interactively modify to be the patch I want.

[screenshot: editing the hunk in the editor]

I can then delete the pieces I don’t want in this commit. Those deleted pieces will still exist in the uncommitted work, so I’m not losing any work, I’m just not yet dealing with it.

[screenshot: the trimmed-down hunk]

Ok, that looks like just the part I want, as I’ll come back to the upgrade_service function in patch #3. So save it, and find all the other hunks in the file that are related to that change to add them to this patch as well.

[screenshot: staging the remaining related hunks]

Yes, to both of these, as well as one other towards the end, and this commit is ready to be ‘git commit’ed.

Now what’s left is basically just the upgrade_service function changes, which means I can git add grenade.sh as a whole. I actually decided to fix up the stop calls first, just by editing grenade.sh before adding the final changes. Once that’s done, git rebase --continue rebases the rest of the changes on top of this, giving me a shiny new 5-patch series that’s a lot clearer than the 3-patch one I had before.

Step 5: Don’t forget the idempotent ID

One last important thing. This was a patch in gerrit before, which means when I started I had an idempotent ID (gerrit’s Change-Id) on every change. In splitting 1 change into 3, I added that ID back to patch #3 so that reviewers would understand it was an update to something they had reviewed before.

It’s almost magic

As a git user, git add -p is one of those things, like git rebase -i, that you really need in your toolkit to work with anything more than trivial patches. It takes practice to build the right intuition here, but once you do, you can really slice up patches in a way that is much easier for reviewers to work with, even if that wasn’t how the code was written the first time.

Code that is easier for reviewers to review wins you lots of points, and will help with landing your patches in OpenStack faster. So taking the time upfront to get used to this is well worth your time.

by Sean Dague at July 24, 2014 12:33 PM

Mirantis

Meet Your OpenStack Training Instructor: Reza Roodsari

Next up in our “Meet your OpenStack Training Instructor” series, we spend a few moments talking with Reza Roodsari.


Tell us more about your background. How did you become involved in OpenStack training?


Just like the satisfaction a person receives after they put together a good puzzle, I have always had passion for working with complex, intricate systems.

For me, the cloud is a perfect extension of this passion – and playing in the OpenStack playground, with its ever-evolving conglomeration of open-source components, is just as intriguing as a Mandelbrot set. Like a Mandelbrot set, an OpenStack environment is a grand, captivating, and immense landscape.

It is a place where one is never bored.

What do you enjoy most about training?

Any successful teacher must possess the innate ability to step back, structure and articulate information in a manner that transfers knowledge and facilitates learning. For me this has always been one of teaching’s biggest rewards. I truly enjoy the challenge of discovering new ways to take difficult subjects, and present them in an easy to understand, efficient manner.

Of course, as is the case with any discipline or domain, intuitive understanding comes with a commitment to continued practice and education. For me, this has always been an added benefit that comes from my passion for teaching. It allows me to approach a subject from a deeply personal level, and it opens up the opportunity to gain an intuitive understanding.


What do you find the biggest challenge in training students to use OpenStack?

Simply put: A good teacher makes complex topics easy to understand. A struggling teacher makes an easy subject difficult to understand.

In terms of OpenStack training, with its ever-evolving collection of moving parts, the challenge is remaining committed to taking a complex subject and presenting it in an easy-to-understand manner.

Here at Mirantis Training, our promise and our challenge has always been to ensure that every student walks away with a deep understanding of OpenStack. Our mission is to deliver the knowledge, skillset and tools they need to tackle the challenge of a real-world OpenStack environment.

What kinds of professionals are most likely to benefit from participating in this class?


In our introductory course, we cover all the fundamentals you need to know before you dive into architecting and deploying a cloud based OpenStack environment. While our classes are structured to prepare students from an array of different backgrounds and skillsets, networking professionals and IT engineers comprise a high percentage of our classes.

What advice would you give our readers who want to learn more about OpenStack?

It all begins with time and commitment to learning. Everything we teach at Mirantis, you can learn on your own, but you need to be prepared to invest the time and effort – and I would start with http://www.openstack.org/software/start/.

Of course, the advantage of an OpenStack training course from Mirantis is that this learning curve is reduced dramatically. I encourage any IT professional committed to expanding their understanding of OpenStack to seriously consider attending one of our trainings.


Read more about our instructors on the Mirantis Training website.

The post Meet Your OpenStack Training Instructor: Reza Roodsari appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Lana Zhudina at July 24, 2014 11:14 AM

Rafael Knuth

Google+ Hangout: Clocker - Creating a Docker Cloud w/ Apache Brooklyn

In this meetup we introduce Clocker - an Apache software licensed open source project which lets you...

July 24, 2014 09:56 AM

Percona

DBaaS, OpenStack and Trove 101: Introduction to the basics

We’ll be publishing a series of posts on OpenStack and Trove over the next few weeks, diving into their usage and purpose. For readers who are already familiar with these technologies, there should be no doubt as to why we are incredibly excited about them, but for those who aren’t, consider this a small introduction to the basics and concepts.

What is Database as a Service (DBaaS)?
In a nutshell, DBaaS – as it is frequently referred to – is a loose moniker for the concept of providing a managed cloud-based database environment accessible to users, applications or developers. Its aim is to provide a full-fledged database environment while minimizing the administrative turmoil and pains of managing the surrounding infrastructure.

Real life example: Imagine you are working on a new application that has to be accessible from multiple regions. Building and maintaining a large multiregion setup can be very expensive. Furthermore, it introduces additional complexity and strain on your system engineers once timezones start to come into play. The challenge of having to manage machines in multiple datacenters won’t simplify your release cycle, nor increase your engineers’ happiness.

Let’s take a look at some of the questions DBaaS could answer in a situation like this:

- How do I need to size my machines, and where should I locate them?
Small environments require less computing power and can be a good starting point, although this also means they may not be as well-prepared for future growth. Buying larger-scale, more expensive hardware and hosting can be a big stumbling block for a brand new development project. Hosting machines in multiple DCs could also introduce administrative difficulties, like having different SLAs and potential issues setting up WAN or VPN communications. DBaaS introduces an abstraction layer, so these considerations aren’t yours, but those of the company offering it, while you get to reap all the rewards.

- Who will manage my environment from an operational standpoint?
Staffing considerations and taking on the knowledge required to properly maintain a production database are often either temporarily swept under the rug or, when the situation turns out badly, a cause for the untimely demise of quite a few young projects. Rather than think about how long ago you should have applied that security patch, wouldn’t it be nice to just focus on managing the data itself, and be otherwise confident that the layers beneath it are managed responsibly?

- Have a sudden need to scale out?
Once you’re up and running, enjoying the success of a growing user base, your environment will need to scale accordingly. Rather than think long and hard about the many options available, as well as the logistics attached to those changes, your DBaaS provider could handle this transparently.

Popular public options: Here are a few names of public services you may have come across already that fall under the DBaaS moniker:

- Amazon RDS
- Rackspace cloud databases
- Microsoft SQLAzure
- Heroku
- Clustrix DBaaS

What differentiates these services from a standard remote database is the abstraction layer that fully automates their backend, while still offering an environment that is familiar to what your development team is used to (be it MySQL, MongoDB, Microsoft SQLServer, or otherwise). A big tradeoff to using these services is that you are effectively trusting an external company with all of your data, which might make your legal team a bit nervous.

Private cloud options?
What if you could offer your team the best of both worlds? Or even provide a similar type of service to your own customers? Over the years, a lot of platforms have been popping up to allow effective management and automation of virtual environments such as these, allowing you to effectively “roll your own” DBaaS. To get there, there are two important layers to consider:

  • Infrastructure Management, also referred to as Infrastructure-as-a-Service (IaaS), focusing on the logistics of spinning up virtual machines and keeping their required software packages running.
  • Database Management, referred to above as DBaaS, transparently coordinating multiple database instances to work together and present themselves as a single, coherent data repository.

Examples of IaaS products:
- OpenStack
- OpenQRM

Example of DBaaS (a minimal CLI sketch follows below):
- Trove
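
To make that concrete, here is a hypothetical session with the python-troveclient CLI (the instance name, flavor id and sizes are invented):

# assumes sourced OpenStack credentials and a running Trove endpoint
$ trove create my-db 7 --size 5 --databases myapp   # flavor id 7, 5 GB volume
$ trove list                                        # wait for ACTIVE
$ trove show my-db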

Main Advantages of DBaaS
For reference, the main reasons why you might want to consider using an existing DBaaS are as follows:

- Reduced Database management costs

DBaaS reduces the amount of maintenance you need to perform on isolated DB instances. You offload the system administration of hardware, OS and database to either a dedicated service provider or, in the case where you are rolling your own, allow your database team to more efficiently manage and scale the platform (public vs. private DBaaS).

- Simplifies certain security aspects

If you are opting to use a DBaaS platform, the responsibility of worrying about this or that patch being applied falls to your service provider, and you can generally assume that they’ll keep your platform secure from the software perspective.

- Centralized management

One system to rule them all. A guarantee of no nasty surprises concerning that one ancient server that should have been replaced years ago, but you never got around to it. As a user of DBaaS, all you need to worry about is how you interface with the database itself.

- Easy provisioning

Scaling of the environment happens transparently, with minimal additional management.

- Choice of backends

Typically, DBaaS providers offer you the choice of a multitude of database flavors, so you can mix and match according to your needs.

Main Disadvantages
- Reduced visibility of the backend

Releasing control of the backend requires a good amount of trust in your DBaaS provider. There is limited or no visibility into how backups are run and maintained, which configuration modifications are applied, or even when and which updates will be implemented. Just as you offload your responsibilities, you in turn need to rely on an SLA contract.

- Potentially harder to recover from catastrophic failures

Similarly to the above, unless your service providers have maintained thorough backups on your behalf, the lack of direct access to the host machines means that it could be much harder to recover from database failure.

- Reduced performance for specific applications

There’s a good chance that you are working on a shared environment. This means the amount of workload-specific performance tuning options is limited.

- Privacy and Security concerns

Although it is much easier to maintain and patch your environment, having a centralized system also means you’re more prone to potential attacks targeting your dataset. Whichever provider you go with, make sure you are intimately aware of the measures they take to protect you, and of what is expected from your side to help keep it safe.

Conclusion: While DBaaS is an interesting concept that introduces a completely new way of approaching an application’s database infrastructure, and can bring enterprises easily scalable, financially flexible platforms, it should not be considered a silver bullet. Some big tradeoffs need to be considered carefully from the business perspective, and any move there should be accompanied by careful planning and investigation of options.

Embracing the immense flexibility these platforms offer, though, opens up a lot of interesting perspectives too. More and more companies are looking at ways to roll their own “as-a-Service”, provisioning completely automated hosted platforms for customers on-demand, and abstracting their management layers to allow them to be serviced by smaller, highly focused technical teams.

Stay tuned: Over the next few weeks we’ll be publishing a series of posts focusing on the combination of two technologies that allow for this type of flexibility: OpenStack and Trove.

The post DBaaS, OpenStack and Trove 101: Introduction to the basics appeared first on MySQL Performance Blog.

by Dimitri Vanoverbeke at July 24, 2014 07:00 AM

Mika Ayenson

Openstack Re-Heat



Hello All,

My name is Mika Ayenson and I have the privilege to intern at the Johns Hopkins Applied Physics Lab. I’m really excited to release the latest proof of concept, “Re-Heat”. Re-Heat is a JHU/APL-developed tool that helps OpenStack users quickly rebuild their OpenStack environments via OpenStack’s Heat.

I have included the abstract of our paper here:

Abstract

OpenStack has experienced tremendous growth since its initial release just over four years ago.  Many of the enhancements, such as the Horizon interface and Heat, make deploying complex network environments in the cloud from scratch easier.  The Johns Hopkins University Applied Physics Lab (JHU/APL) has been using the OpenStack environment to conduct research, host proofs-of-concept, and perform testing & experimentation.  Our experience reveals that during the environment development lifecycle, users and network architects constantly change the environments (stacks) they originally deployed.  Once development has reached a point at which experimentation and testing is prudent, scientific methodology requires that testing be repeated to determine the repeatability of the phenomena observed.  This requires the same entry point (an identical environment) into the testing cycle.  Thus, it was necessary to capture all the changes made to the initial environment during the development phase and modify the original Heat template accordingly.  However, OpenStack has not had a tool to help automate this process.  In response, JHU/APL developed a proof-of-concept automation tool called “Re-Heat,” which this paper describes in detail.

I hope you all enjoy this as I have truly enjoyed playing with Heat and developing Re-Heat.

Cheers,
Mika

by Mika ayenson (noreply@blogger.com) at July 24, 2014 04:17 AM

Adam Young

Devstack mounted via NFS

Devstack allows a developer to work with the master branches of upstream OpenStack development. But Devstack performs many operations (such as replacing pip) that might be viewed as corrupting a machine, and that should not be done on your development workstation. I currently develop with Devstack in a virtual machine running on my system. Here is my setup:

Both my virtual machine and my base OS are Fedora 20. To run the virtual machine, I use KVM and virt-manager. My VM is fairly beefy, with 2 GB of RAM allocated and a 28 GB hard disk.

I keep my code in git repositories on my host laptop. To make the code available to the virtual machine, I export the repositories via NFS and mount them in the VM at /opt/stack, owned by the ayoung user, which mirrors the setup on the base system.

Make sure NFS is running with:

sudo systemctl enable nfs-server.service 
sudo systemctl start  nfs-server.service

My /etc/exports:

/opt/stack/ *(rw,sync,no_root_squash,no_subtree_check)

And to enable changes in this file

sudo exportfs

Make sure firewalld has the port for NFS open, but only for the internal network. For me, this is the interface:

virbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255

I used the firewall-config application to modify firewalld:

For both changes, make sure the Configuration select box is set to Permanent, or you will be making these changes again each time you reboot.

First add the interface:

[screenshot: firewall-config, adding the virbr0 interface]

And enable NFS:

[screenshot: firewall-config, enabling the nfs service]

In the virtual machine, I added a user (ayoung) with the same numeric user ID and group ID as on my base laptop. To find these values:

$ getent passwd ayoung
ayoung:x:14370:14370:Adam Young:/home/ayoung:/bin/bash

I admit I created them when I installed the VM, which I did using the Anaconda installer and a DVD net-install image. However, the same thing can be done with useradd, as sketched below. I also added the user to the wheel group, which simplifies sudo.
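
A hypothetical equivalent with useradd (the numeric IDs are from my laptop; yours will differ):

# create a matching group and user inside the VM
sudo groupadd -g 14370 ayoung
sudo useradd -u 14370 -g 14370 -G wheel -m ayoung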

On the remote machine, I created /opt/stack and let the ayoung user own them:

$ sudo mkdir /opt/stack ; sudo chown ayoung:ayoung /opt/stack

To mount the directory via nfs, I made an /etc/fstab entry:

192.168.122.1:/opt/stack /opt/stack              nfs4  defaults 0 0 

And now I can mount the directory with:

$ sudo mount /opt/stack

I went through and updated the git repos by running a simple shell script from within /opt/stack:

 for DIR in `ls` ; do pushd $DIR ; git fetch ; git rebase origin/master ; popd ; done

The alternative is setting RECLONE=yes in /opt/stack/devstack/localrc.

When running devstack, I had to make sure that the directory /opt/stack/data was created on the host machine; Devstack attempted to create it, but got an error induced by NFS.

Why did I go this route? I need to work on code running in HTTPD, namely Horizon and Keystone. That precluded me from doing all of my work in a venv on my laptop. The NFS mount gives me a few things:

  • I keep my Git repo intact on my laptop. This includes the Private key to access Gerrit
  • I can edit using PyCharm on my Laptop.
  • I am sure that the code on my laptop and in my virtual machine is identical.

This last point is essential for remote debugging. I just got this working for Keystone, and have submitted a patch that enables it. I’ll be working up something comparable for Horizon shortly.

by Adam Young at July 24, 2014 01:14 AM

July 23, 2014

openSUSE Lizards

OpenStack Infra/QA Meetup

Last week, around 30 people from around the world met in Darmstadt, Germany to discuss various things about OpenStack and its automatic testing mechanisms (CI).
The meeting was well-organized by Marc Koderer from Deutsche Telekom.
We were shown plans of what the Telekom intends to do with virtualization in general and OpenStack in particular and the most interesting one to me was to run clouds in dozens of datacenters across Germany, but have a single API for users to access.
There were some introductory sessions about the use of git review and gerrit that mostly covered things I (and I guess the majority of the others) had already learned over the years. They included some new parts, such as tracking “specs” – specifications (.rst files) – in gerrit with proper review by the core reviewers, so that proper processes can already be applied in the design phase to ensure the project is moving in the right direction.

On the second day we learned that the infra team manages servers with puppet, about jenkins-job-builder (jjb) that creates around 4000 jobs from yaml templates. We learned about nodepool that keeps some VMs ready so that jobs in need will not have to wait for them to boot. 180-800 instances is quite an impressive number.
And then we spent three days on discussing and hacking things, the topics and outcomes of which you can find in the etherpad linked from the wiki page.
I got my first infra patch merged, and a SUSE Cloud CI account set up, so that in the future we can test devstack+tempest on openSUSE and have it comment in Gerrit. And maybe some day we can even have a test to deploy crowbar+openstack from git (including the patch from an open review) to provide useful feedback, but for that we might first want to move crowbar (which consists of dozens of repos – one for each module) to stackforge – which is the openstack-provided Gerrit hosting.

see also: pleia2’s post

Overall it was a nice experience for me to work together with all these smart people, and we certainly had a lot of fun.

by bmwiedemann at July 23, 2014 01:54 PM

Tesora Corp

Red Hat and Mirantis battle in the OpenStack market and VMware needs to find a way into the fight

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week. If you like what you see, please consider subscribing.

Here we go with this week's links:

Red Hat releases Inktank Ceph Enterprise 1.2 | ZDNet

If you wanted proof that Red Hat is serious about OpenStack, look no further than its purchase of Inktank in April. Just months after acquiring the company, Red Hat has already turned around an enterprise release of Inktank Ceph. Red Hat says it's all part of an overall strategy to be an OpenStack powerhouse and bringing enterprise-class software defined storage to OpenStack via Ceph is a big part of that.

Oracle, Mirantis team up to grab Red Hat's OpenStack crown | InfoWorld

While Red Hat has made its desire to dominate OpenStack clear, the rest of the industry isn't sitting idly and ceding anything to them. Last week, Mirantis announced a deal with Oracle to sell OpenStack services to Oracle Linux and VM customers. It's part of a larger strategy by Mirantis to team with corporate players. Last month, Mirantis announced a similar deal with IBM.

The Cloudcast #152 - How Large does Mirantis Loom Over OpenStack? | Javalobby

And speaking of Mirantis, the company is clearly making a concerted effort to blunt Red Hat's growing influence on the OpenStack community. In this podcast interview, Mirantis CEO Adrian Ionel talks about Mirantis' role in the community and the growing demand for OpenStack in Europe.

VMware Must Conquer the OpenStack Battleground if It Wants to Grow - TheStreet

As companies like Red Hat and Mirantis exert growing influence on the quickly evolving OpenStack community, Wall Street has taken notice and VMware is a company that has to evolve to continue to stay relevant. This article suggests that VMware could find the next growth path by embracing and conquering the OpenStack market.

What does project management mean to OpenStack? | Opensource.com

In this case, the author is talking about OpenStack as an open source project and how you manage that and the changing needs of users. He wonders whether the project could benefit from more management, and concludes it's a double-edged sword. It could gain and lose something by having more tightly controlled management, but changing community needs could drive whether tighter management of the project is warranted.

by 693 at July 23, 2014 12:26 PM

July 22, 2014

Sean Dague

OpenStack Failures

Last week we had the bulk of the brain power of the OpenStack QA and Infra teams all in one room, which gave us a great opportunity to spend a bunch of time diving deep into the current state of the Gate, figure out what’s going on, and how we might make things better.

Over the course of 45 minutes we came up with this picture of the world.

[whiteboard diagram]

We have a system that’s designed to merge good code and keep bugs out. The problem is that while it’s doing a great job of keeping big bugs out, subtle bugs – ones that are low percentage (showing up in only 1% of test runs, say) – can slip through. These bugs don’t go away; they just build up inside of OpenStack.

As OpenStack expands in scope and function, these bugs increase as well. They might grow or shrink based on seemingly unrelated changes: dependency changes (which we don’t gate on), or timing impacts from anything in the underlying OS.

As OpenStack has grown, no one has a full view of the system any more, so even identifying whether a bug might be related to their patch is something most developers can’t do. The focus of an individual developer is typically just wanting to land their code, not diving into the system as a whole. This might be because they are on a schedule, or just that landing code feels more fun and productive than digging into existing bugs.

From a social aspect we seem to have found that there is some threshold failure rate in the gate that we always return to. Everyone ignores base races until we get to that failure rate, and once we get above it for long periods of time, everyone assumes fixing it is someone else’s responsibility. We had an interesting experiment recently where we dropped 300 Tempest tests in turning off Nova v3 by default, which gave us a short term failure drop, but within a couple months we’re back up to our unpleasant failure rate in the gate.

Part of the visibility question is also that most developers in OpenStack don’t actually understand how the CI system works today, so when it fails, they feel powerless. It’s just a big black box blocking their code, and they don’t know why. That’s incredibly demotivating.

Towards Solutions

Every time the gate fail rates get high, debates show up in IRC channels and on the mailing list with ideas to fix it. Many of these ideas are actually features that were added to the system years ago. Some are ideas that are provably wrong, like autorecheck, which would just increase the rate of bug accumulation in the OpenStack code base.

A lot of good ideas were brought up in the room; over the next week Jim Blair and I are going to try to turn these into something a little more coherent to bring to the community. The OpenStack CI system tries to be the living and evolving embodiment of community values at any point in time. One of the important things to remember is that those values aren’t fixed points either.

The gate doesn’t exist to serve itself, it exists because before OpenStack had one, back in the Diablo days, OpenStack simply did not work. HP Cloud had 1000 patches to Diablo to be able to put it into production, and took 2 years to migrate from it to another version of OpenStack.

by Sean Dague at July 22, 2014 05:00 PM

Maish Saidel-Keesing

OpenStack Summit - It’s all about the Developers

This one has been sitting in the drafts for a while.

What pushed me to publish and finish this post was an article posted by Brian Gracely,
Will Paris be the last OpenStack Summit?

The OpenStack Summit is actually two separate tracks – one for users, and a second for developers. It is just by “chance” (not really) that they are held at the same location at the same time, because they cater to two very different audiences.

This is very apparent – even in the logo for the summits.

[summit logo]

It is sometimes even confusing what the name of the summit actually is. Will this be the Juno summit (ask an Operator/User – yes it will) or is it the Kilo summit (Developers will give you a thumbs up here)?

How does the event work?

Five days: the first three are the Main Conference, and the last four are the Design Summit (so the two overlap).

[summit schedule]

And of course from the mouth of babes..

The Design Summit sessions are collaborative working sessions where the community of OpenStack developers come together twice annually to discuss the requirements for the next software release and connect with other community members. It is not a classic track with speakers and presentations. (The Design Summit is not the right place to get started or learn the basics of OpenStack.)

Steve Ballmer – you remember him? He loved his developers….

<object height="252" width="448"><param name="movie" value="http://www.youtube.com/v/8To-6VIJZRE?hl=en&amp;hd=1"/><embed height="252" src="http://www.youtube.com/v/8To-6VIJZRE?hl=en&amp;hd=1" type="application/x-shockwave-flash" width="448"></embed></object>
Developers, Developers, Developers

The OpenStack Foundation treats the OpenStack Developers – differently. They are the people who create the product. Therefore they receive special treatment.

And by special treatment I mean:

  • The Design Summit is called a Summit, the rest of it is called the Main Conference
    (see above)
  • A completely different part of the conference only for developers – this includes:
    • Separate rooms
    • Separate schedule
    • Separate website for schedule
    • Separate submission process and voting for Design sessions
  • Constant refreshments and treats (M&M’s and Snicker bars galore, drinks, fruit)
  • Brainstorming area outside the discussion rooms
  • Multiple power outlets in every single room and everywhere
  • Every single ATC (Active Technical Contributor) receives a free pass to the summit.

    Individual Members who committed a change to a repository under any of the official OpenStack programs (as defined above) over the last two 6-month release cycles are automatically considered ATC.

Is this unfair – perhaps – but then again – these are the people who are creating the product – so it is in the Foundation’s best interest to keep them engaged, comfortable, happy and available to continue to contribute to the community and the products.

Back to Brian Gracely’s post. Because of the developers there will always be an OpenStack summit. Will it be the same as the past and upcoming summits – I do not know. But it is in the best interest of the Foundation to have the people developing the products and the projects come together, talk, schmooze and hack out the details of what will happen in the upcoming six months and the future direction of the product.

So in response to Brian – I still think that the Foundation will hold a summit, and that it will always be its central event. The same way all the major vendors hold their own big conference every single year (Cisco Live, Red Hat Summit, VMworld, etc.) while still making sure they have sponsor booths at all the other conferences, so it will be for OpenStack.

I think that the summit will continue to be here next year in 2015 and beyond.

by Maish Saidel-Keesing (noreply@blogger.com) at July 22, 2014 02:00 PM

Christian Berendt

OpenStack @ EuroPython 2014

[photo: OpenStack booth @ EuroPython 2014] We are at EuroPython 2014 in Berlin at the moment. The OpenStack booth is in the basement. If you are there, visit us. We still have some OpenStack 2014 T-shirts remaining.

by berendt at July 22, 2014 11:15 AM

Opensource.com

OpenStack product management: wisdom or folly?

Two recent, excellent, blog posts have touched on a topic I've been wrestling with since May's OpenStack Summit: What is the role of the Product Management function, if any, in the OpenStack development process?

by Jim Haselmaier at July 22, 2014 09:00 AM

July 21, 2014

Piston

What is SDN and Should You Buy Into the Hype?

Hi. I’m Ben. I work on SDN integrations within Piston OpenStack™ along with Noel Burton-Krahn and Nick Bartos. For those of you unfamiliar with SDN, the initials (one set of many in the world of IT) stand for Software Defined Networking. It’s a buzzword that’s been going around the networking blogs, yet everyone still grapples with the definition, benefits, and overall use case in the enterprise. In this blog, I’ll tackle this overused and mostly misunderstood topic: SDN, and SDN in OpenStack®. I won’t be able to get to all of the nitty-gritty details of how SDN can help in every situation, in every datacenter. That would certainly take more than just a blog post.

So, I apologize in advance if you are in need of some clarification on SDN and encourage you to please ask the questions I may not have answered for you already (after all, that’s what the comment box below is for).

Now, let’s begin.

Before we dig in, let’s role play for a minute.

You are the architect of a very important project that will rely on a very particular, perhaps even exotic, network infrastructure. It will certainly be more complex than connecting everything directly to Top of Rack switches and then connecting those to a router or routers. You describe this network to the people who will wire it up for you. Maybe you work for a small team at a university and an intern will be pulling cables for you, or maybe you work at a large corporation and a team of professionals will construct your vast network infrastructure for you.

Either way, you draw the network diagram on a white board and do your best to make sure your people understand each part of it. They then go off to assemble your network. You hope that you described the network properly; you hope that they do not make any mistakes and plug a host into the wrong switch; you hope that they don’t accidentally leave one end of a network cable unplugged. Long story short? Plan to do a lot of hoping.

What is SDN? How does it work? How do you build it?

A simple description is that there are three parts: the physical network, the logical network, and the controller. The physical network is the actual hardware. The routers and switches and cables. The logical network is what hosts and VMs connected to the network perceive as the actual network. The controller is what talks to the physical network and configures it to behave the way that is required to create the logical network.

Why is SDN so awesome?

The Dilbert cartoon at the top exaggerates the situation, but is pretty representative of how little work you would need to do if you implemented SDN. Things like the aforementioned hypothetical networking nightmare can delay your project or, worse, go unnoticed until the project is in production and then cause all sorts of hard-to-debug problems. If you had a software defined network you wouldn’t have to deal with problems like that. Instead of drawing diagrams and trying to explain the network to humans, you would describe it to the SDN controller. The SDN controller would then communicate with your physical networking hardware and have it reconfigure itself to create a logical network that behaved exactly as you described. Without any of the time-consuming and error-prone physical steps, you would have the network you desired.

With SDN, your important project’s network would be done faster and with fewer headaches, so you could focus on the more critical work that relied on that network. You would no longer need to worry about touching your critical networking infrastructure. Instead you would reconfigure the easily manipulated logical network that exists on top of it.

How do I use OpenStack for a SDN?

The simple answer? You play nice with Neutron.

OpenStack is made up of very many pieces, each with a specialized goal. Nova, Cinder, Glance, Keystone and so on. The networking part of OpenStack is called Neutron. Neutron has many different parts. At the simplest level it provides a way for the other parts of OpenStack to inspect and manage the network. But the most powerful part of Neutron is the ability to use different SDN plugins. There is already a large variety of plugins from many well-known developers. The power of being able to use and manage a SDN directly through OpenStack is incredibly useful. Instead of running your cloud on top of a network that is configured from an external SDN, you can manage that network with the same tools you manage the rest of your cloud.
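To give a flavor of what that looks like in practice, building a simple logical network with the neutron CLI of the day went roughly like this (the names are made up for illustration):

$ neutron net-create demo-net
$ neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet
$ neutron router-create demo-router
$ neutron router-interface-add demo-router demo-subnet

Whichever SDN plugin sits behind Neutron then translates those API calls into the corresponding changes on the physical network.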

So is SDN just hype?

I don’t know if anyone remembers when VMs were first a “thing”. There was a lot of hype behind it. I think it’s similar with SDN – It’s going to become a thing. It may have a little ways to go, but the reality is that it’s too useful for it not to be a thing.

Managing and changing your network shouldn’t be a day spent in the datacenter. It shouldn’t take down an entire server. It should only take a few minutes, and from a single panel dashboard. Most importantly, it shouldn’t affect your workloads. The feature I work on for Piston OpenStack integrates with various SDNs via the Neutron plug-in. It keeps everything up and running, it only takes one person to change the network configuration, and, best of all, it doesn’t take an entire day. And that’s awesome.

I hope I’ve given you some insight into SDN and its benefits. Is it hype? As someone who’s seen it deployed and seen it work, I believe the practicality of SDN outweighs the hype. It’s awesome to see it in practice, and you should try it out for yourself with Piston OpenStack. You can schedule a demo or download it here.

Photo credit: Dilbert.com

by Ben Brosenberg at July 21, 2014 07:40 PM

OpenStack Blog

OpenStack Community Celebrates Four Years!

User maturity, software maturity and a focus on cloud software operations are now established areas of focus for OpenStack, and none of it would be possible without the consistent growth of the OpenStack community. In the four years since the community was established, OpenStack has grown to 70+ active user groups and thousands of active members spread across 139 different countries! Throughout the month of July, we are celebrating our community milestones and progress over the past four years, as well as the Superusers who support the OpenStack mission. This year, we also launched the Superuser publication to chronicle the work of users and their many accomplishments, individually and organizationally, amplifying their impact across the community.

[infographic: OpenStack 4th birthday]

We invite you all to join the party and celebrate 4 awesome years of OpenStack:

  • Check out the OpenStack 4th Birthday page featuring the latest stats, infographic and a web badge to download
  • Attend the birthday party in Portland, Oregon during OSCON, Tuesday, July 22
  • Attend your local birthday party, more than 50 are taking place around the world this month!
  • Visit the Superuser publication to learn about the contributors and user groups who make OpenStack successful
  • Join the conversation on Twitter today using the hashtag #OpenStack4Bday
Here are some community leaders’ perspectives reflecting on the past four years with OpenStack and their predictions for the future:

 

by Allison Price at July 21, 2014 07:36 PM

The Official Rackspace Blog » OpenStack

That Time When OpenStack Turned Four

This is huge. Really huge. If someone told me four years ago that OpenStack would be where it is today – just a mere four years in – I would’ve shrugged my shoulders and said “maybe, we’ll see.”

I am absolutely astounded by how far we’ve come as a community and as a project. Think about it: as of May 2014, OpenStack boasted 16,266 individual members in 139 countries from 355 organizations. There were 2,130 contributors, 466 average monthly contributors and 17,209 patches merged. Let’s compare that to May 2013, when there were 9,511 individual members from 209 organizations, with 998 total contributors, 230 average monthly contributors and 7,260 patches merged.

Oh, and the Atlanta Summit this past May was the biggest ever, with more than 4,500 attendees from 55 different countries.

As the project continues to evolve into its fifth year, I’m excited to see increased operator participation. While developers and users are key cornerstones for OpenStack, the operators can tell us what works and what works at scale. One of our big goals for this past year was to close the feedback loop between operators and developers. Moving forward, we as a community have to continue to foster close relationships between the developers and the operators to continue innovation and balance stability. The launch this year of DefCore, a set of standards and tests that will help the community understand which projects are stable, widely used and key to interoperability, will help this progress. Rackspace is hosting the next OpenStack Ops Meetup August 25 and 26 in San Antonio if you want to learn more.

We’ve also made great strides in making OpenStack more stable and have made great progress defining OpenStack core, two things we will continue to hammer on.

And the production use and the maturity of use cases are incredible. If you’ve been to any of the recent OpenStack Summits, you’ve seen household names talking about how they use OpenStack – Comcast, Sony, Disney, eBay, Wells Fargo, AT&T and more showed how they’re using it in production to run very real, critical workloads. More than 1,200 user surveys have been completed detailing OpenStack deployments. There are more than 70 user groups, and more than 9,000 members joined a user group this year alone.

At Rackspace, we are co-founders of OpenStack, but we’re also among its largest users. It’s been a boon for us and our business. Our public and private clouds are built on it. It’s a key pillar of our managed cloud strategy. And it powers much of what we do. We’ve been able to rebuild our public cloud for massive scale and OpenStack has empowered us to innovate quickly and be agile (Have you heard of OnMetal yet? That was built with OpenStack Ironic, the bare-metal provisioning program).

I’m as optimistic about OpenStack’s future as I am humbled and inspired by its growth. It’s truly a project that we – the community – have taken from a handful of lines of code to a production-ready cloud operating system that world-beating enterprises use and trust.

Year five is a big one. So let’s celebrate how far we’ve come, and look forward to where we’ll go.

by Paul Voccio at July 21, 2014 03:24 PM

DreamHost

Happy Fourth Birthday, OpenStack!

Four years ago the open and collaborative world of open source software needed a reliable cloud stack that was created not by an army of MBAs and business analysts, but by the engineers and developers that would be using and supporting it every day.

OpenStack’s founders saw to that need, and today OpenStack is much more than simply an open source cloud stack – it’s an entire movement!

The community supporting OpenStack has grown to span members in 139 countries with over two thousand contributors actively committing improvements and enhancements to make it the absolute best that it can be.

It’s no wonder that DreamHost selected OpenStack as the foundation behind DreamCompute, our public cloud solution now being put through a gauntlet of stress tests from beta testers the world over – and you can join them!  Just enter your email address right here to receive an invitation to our cost-free beta!

You can learn more about the technology behind DreamCompute, along with a great look deeper into our use of OpenStack, in this revealing look behind the blue curtain by Jonathan LaCour, DreamHost’s VP of Cloud.

DreamHost is proud to co-sponsor this month’s OpenStack Los Angeles user group meetup!  If you’re in or around Metacloud’s office in Pasadena on Thursday, July 31st, RSVP today!

Happy 4th Birthday, OpenStack

 

by Brett Dunst at July 21, 2014 03:00 PM

IBM OpenStack Team

OpenStack celebrates fourth birthday

Here at IBM, we’re very excited to celebrate OpenStack’s fourth birthday. This is a great opportunity to reflect on the significant accomplishments to date as well as look forward to exciting technology advancements ahead. And of course it’s not a birthday celebration without a present for OpenStack, so make sure you stay for the end of the party when we open gifts!

Unstoppable growth

OpenStack’s growth since the foundation’s formation has been unprecedented, exceeding the growth of the Linux Foundation, which is no small feat. I thought it would be fun to look at some of the stats we like to track to show just how dramatic the growth has been in just two years:

[infographic: OpenStack growth stats]

Wow, pretty amazing stats for a four year old! And OpenStack is showing no signs of stopping either with almost 130,000 overall commits with 61,000 of those in the last 12 months—not to mention more than 4,000 monthly commits since the beginning of 2014. If your developers are not hanging out with OpenStack in some shape or form, it’s time to get on board because this IaaS train is moving fast.

Notable milestones

So, those are impressive stats, you might say, but what does this mean in terms of OpenStack functionality today? As I outlined in my recent “Guide to Icehouse” blog, OpenStack has a lot to be proud of in this department as well. In this latest release of OpenStack, key areas of concern were addressed, including security, authentication, orchestration and quality assurance. IBM worked collaboratively with our partners to specifically help drive improvements to quality assurance (Tempest), compute (Nova), authentication and security (Keystone), storage (Cinder & Swift) and orchestration (Heat & HOT). Icehouse left no question that this latest release of OpenStack is enterprise ready.

(Related: A guide to OpenStack Icehouse)

IBM is proud to partner for success

As a founding member and platinum sponsor, IBM is proud to be a top contributor to the OpenStack Foundation from supporting governance in our role on the board of directors to our team of developers contributing code, reviews and debug skills from across the company. Specifically, we are very proud of our leadership on several development efforts focused on improving OpenStack to ensure it meets the needs of enterprise customers and the greater OpenStack user community. IBM contributors are spearheading efforts in OpenStack’s identity provider (Keystone) to deliver critical enhancements in the areas of federated identity support as well as adding new cross-cloud authentication and authorization mechanisms for enabling hybrid clouds based on OpenStack.

In addition, IBM contributors are driving support for standard based (Distributed Management Task Force’s CADF) auditing support for OpenStack to enable consistent reporting of audit data across cloud providers for the purposes of meeting regulatory compliance requirements.

IBM contributors also continue to drive enhancements into OpenStack’s orchestration layer (Heat) to ensure that it aligns well with popular industry cloud workload orchestration standards such as OASIS TOSCA. Finally, IBMers are also leading the refstack initiative, which is focused on making sure OpenStack distributions are meeting quality assurance requirements necessary to receive official OpenStack branding.

IBM Cloud offerings are built on OpenStack

IBM remains committed to supporting the OpenStack Foundation to provide an open, best-of-breed solution for IaaS as well as to offer our premier line of IBM Cloud offerings based on OpenStack. Please join us at the next OpenStack Summit in Paris where we will showcase our IBM Cloud offerings running on OpenStack, including IBM’s Power8 Server, IBM Bluemix, IBM Cloud Orchestrator, IBM Cloud Manager with OpenStack and Power VC.

Did you say something about a birthday present?

Why yes I did! We’re so thrilled to celebrate OpenStack’s fourth birthday that we thought a birthday present was more than appropriate. To further showcase the outstanding contributions made to OpenStack every day, we’re launching a new OpenStack developers corner blog to feature IBM’s best and brightest contributors to OpenStack code. In this blog series, our developers will give you the real scoop and inside details on day-to-day development activities and achievements at OpenStack. Some of the initial blog topics you can look forward to include deep dives on keystone, heat/hot and horizon, all from the developer’s perspective. Stay tuned.

Happy birthday, OpenStack, from the hundreds of OpenStackers at IBM who proudly contribute to your success. Here’s to an absolutely fantastic fifth year!

The post OpenStack celebrates fourth birthday appeared first on Thoughts on Cloud.

by Brad Topol at July 21, 2014 01:25 PM

Maish Saidel-Keesing

Recording of my Presentation at OpenStack Israel 2014

Embedded below you can find the recording of my session
"OpenStack in the Enterprise - Are you Ready?"

The recording is available at http://www.youtube.com/watch?v=AvdesnmCjYU

You are welcome to go over the blog post I wrote about the event.

The full playlist of all the sessions can be viewed here

I have already submitted a few sessions for the upcoming summit in Paris.

by Maish Saidel-Keesing (noreply@blogger.com) at July 21, 2014 12:30 PM

Opensource.com

A new OpenStack book, advice for contributing, and more

Interested in keeping track of what's happening in the open source cloud? Opensource.com is your source for what's happening right now in OpenStack, the open source cloud infrastructure project.

OpenStack around the web

There's a lot of interesting stuff being written about OpenStack. Here's a sampling:

by Jason Baker at July 21, 2014 07:00 AM

July 19, 2014

OpenStack in Production

OpenStack plays Tetris : Stacking and Spreading a full private cloud

At CERN, we're running a large scale private cloud which is providing compute resources for physicists analysing the data from the Large Hadron Collider. With 100s of VMs created per day, the OpenStack scheduler has to perform a Tetris-like job, assigning VMs of different flavors to specific hypervisors.

As we increase the number of VMs that we're running on the CERN cloud, we see the impact of a number of configuration choices made early on in the cloud deployment. One key choice is how to schedule VMs across a pool of hypervisors.

We provide our users with a mixture of flavors for their VMs (for details, see http://openstack-in-production.blogspot.fr/2013/08/flavors-english-perspective.html).

During the past year in production, we have seen a steady growth in the number of instances to nearly 7,000.


At the same time, we're seeing an increasing elastic load as the user community explores potential ways of using clouds for physics.



Given that CERN has a fixed resource pool and the budget available is defined and fixed, the underlying capacity is not elastic and we are now starting to encounter scenarios where the private cloud can become full. Users see this as errors when they request VMs and no free hypervisor can be located.

This situation occurs more frequently for the large VMs. Physics programs can make use of multiple cores to process physics events in parallel and our batch system (which runs on VMs) benefits from a smaller number of hosts. This accounts for a significant number of large core VMs.


The problem occurs as the cloud approaches being full. Using the default OpenStack configuration (known as 'spread'), VMs are evenly distributed across the hypervisors. If the cloud is running at low utilisation, this is an attractive configuration as CPU and I/O load are also spread and little hardware is left idle.

However, as the utilisation of the cloud increases, the resources free on each hypervisor are reduced evenly. To take a simple case, consider a cloud with two 24-core compute nodes handling a variety of flavors. If there are requests for two 1-core VMs followed by one 24-core VM, the alternative approaches can be simulated (a toy script below walks through both).

In a spread configuration,
  • The first VM request lands on hypervisor A leaving A with 23 cores available and B with 24 cores
  • The second VM request arrives and following the policy to spread the usage, this is scheduled to hypervisor B, leaving A and B with 23 cores available.
  • The request for one 24 core flavor arrives and no hypervisor can satisfy it despite there being 46 cores available and only 4% of the cloud used.
In the stacked configuration,

  • The first VM request lands on hypervisor A leaving A with 23 cores available and B with 24 cores
  • The second VM request arrives and following the policy to stack the usage, this is scheduled to hypervisor A, leaving A with 22 cores and B with 24 cores available.
  • The request for one 24 core flavor arrives and is satisfied by B
A stacked configuration is achieved by making the RAM weight negative (i.e. preferring the machines with less free RAM), which has the effect of packing the VMs. This is done through a nova.conf setting as follows:

ram_weight_multiplier=-1.0
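To make the difference concrete, here is a toy re-run of the two-host example above as a shell script (just a sketch; the real nova scheduler weighs many more factors than free cores):

#!/bin/bash
# Toy spread-vs-stacked scheduler: two hosts, 24 free cores each,
# requests of 1, 1 and 24 cores (the scenario from the lists above).
schedule () {
  local policy=$1 A=24 B=24 host free req
  for req in 1 1 24; do
    if [ "$policy" = spread ]; then
      [ $A -ge $B ] && host=A || host=B   # prefer the emptiest host
    else
      [ $A -le $B ] && host=A || host=B   # prefer the fullest host
    fi
    eval free=\$$host
    if [ $free -lt $req ]; then           # fall back to the other host
      [ $host = A ] && host=B || host=A
      eval free=\$$host
    fi
    if [ $free -lt $req ]; then
      echo "$policy: ${req}-core VM fails (A=$A, B=$B cores free)"
    else
      eval $host=$((free - req))
      echo "$policy: ${req}-core VM lands on $host"
    fi
  done
}
schedule spread    # the 24-core request fails with 46 cores still free
schedule stacked   # the 24-core request lands on host B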


When a cloud is initially being set up, the question of maximum packing does not often come up in the early days. However, once the cloud has workload running under spread, it can be disruptive to move to stacked since the existing VMs will not be moved to match the new policy.

Thus, it is important as part of the cloud planning to reflect on the best approach for each different cloud use case and avoid more complex resource rebalancing at a later date.

References

  • OpenStack configuration reference for scheduling at http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html



by Tim Bell (noreply@blogger.com) at July 19, 2014 09:39 PM

Elizabeth K. Joseph

OpenStack QA/Infrastructure Meetup in Darmstadt

I spent this week at the QA/Infrastructure Meetup in Darmstadt, Germany.

Our host was Marc Koderer of Deutsche Telekom, who sorted out all the logistics for having our event at their office in Darmstadt. Aside from the summer heat (the conference room lacked air conditioning) it all worked out very well: we had a lot of space to work, the food was great, and we had plenty of water. It was also nice that the hotel most of us stayed at was an easy walk away.

The first day kicked off with an introduction by Deutsche Telekom that covered what they’re using OpenStack for in their company. Since they’re a network provider, networking support was a huge component, but they use other components as well to build an infrastructure as they plan to have a quicker software development cycle that’s less tied to the hardware lifetime. We also got a quick tour of one of their data centers and a demo of some of the running prototypes for quicker provisioning and changing of service levels for their customers.

Monday afternoon was spent on an on-boarding tutorial for newcomers to contributing to OpenStack, and on Tuesday we transitioned into an overview of the OpenStack Infrastructure and QA systems that we’d be working on for the rest of the week. Beyond the overview of the infrastructure presented by James E. Blair, key topics included jeepyb presented by Jeremy Stanley, devstack-gate and Grenade presented by Sean Dague, Tempest presented by Matthew Treinish (including the very useful Tempest Field Guide) and our Elasticsearch, Logstash and Kibana (ELK) stack presented by Clark Boylan.

Wednesday we began the hacking/sprint portion of the event, where we moved to another conference room and moved tables around so we could get into our respective working groups. Anita Kuno presented the Infrastructure User Manual which we’re looking to flesh out, and gave attendees a task of helping to write a section to help guide users of our CI system. This ended up being a great thing for newcomers to get their feet wet with, and I hope to have a kind of entry level task at every infrastructure sprint moving forward. Some folks worked on getting support for uploading log files to Swift, some on getting multinode testing architected, and others worked on Tempest. In the early afternoon we had some discussions covering recheck language, next steps I’d be taking when it comes to the evaluation of translations tools, a “Gerrit wishlist” for items that developers are looking for as Khai Do prepares to attend a Gerrit hack event and more. I also took time on Wednesday to dive into some documentation I noticed needed some updating after the tutorial day the day before.

Thursday the work continued, I did some reviews, helped out a couple of new contributors and wrote my own patch for the Infra Manual. It was also great to learn and collaborate on some of the aspects of the systems we use that I’m less familiar with and explain portions to others that I was familiar with.


[photo: Zuul supervised my work]

Friday was a full day of discussions, which were great but a bit overwhelming (might have been nice to have had more on Thursday). Discussions kicked off with strategies for handling the continued publishing of OpenStack Documentation, which is currently just being published to a proprietary web platform donated by one of the project sponsors.

A very long discussion was then had about managing the gate runtime growth. Managing developer and user expectations for our gating system (thorough, accurate testing) while balancing the human and compute resources that we have available on the project is a tough thing to do. Some technical solutions to ease the pain on some failures were floated and may end up being used, but the key takeaway I had from this discussion was that we’d really like the community to be more engaged with us and each other (particularly when patches impact projects or functionality that you might not feel is central to your patch). We also want to stress that the infrastructure is a living entity that evolves and we accept input as to ideas and solutions to problems that we’re encountering, since right now the team is quite small for what we’re doing. Finally, there were some comments about how we run tests in the process of reviewing, and how scalable the growth of tests is over time and how we might lighten that load (start doing some “traditional CI” post merge jobs? having some periodic jobs? leverage experimental jobs more?).

The discussion I was most keen on was around the refactoring of our infrastructure to make it more easily consumable by 3rd parties. Our vision early on was that we were an open source project ourselves, but that all of our customizations were a kind of example for others to use, not that they’d want to use them directly, so we hard coded a lot into our special openstack_projects module. As the project has grown and more organizations are starting to use the infrastructure, we’ve discovered that many want to use one largely identical to ours and that making this easier is important to them. To this end, we’re developing a Specification to outline the key steps we need to go through to achieve this goal, including splitting out our puppet modules, developing a separate infra system repo (what you need to run an infrastructure) and project stuff repo (data we load into our infrastructure) and then finally looking toward a way to “productize” the infrastructure to make it as easily consumable by others as possible.

The afternoon finished up with discussions about vetting and signing of release artifacts, ideas for possible adjustment of the job definition language and how teams can effectively manage their current patch queues now that the auto-abandon feature has been turned off.

And with that – our sprint concluded! And given the rise in temperature on Friday and how worn out we all were from discussions and work, it was well-timed.

Huge thanks to Deutsche Telekom for hosting this event, being able to meet like this is really valuable to the work we’re all doing in the infrastructure and QA for OpenStack.

Full (read-only) notes from our time spent throughout the week available here: https://etherpad.openstack.org/p/r.OsxMMUDUOYJFKgkE

by pleia2 at July 19, 2014 11:07 AM

July 18, 2014

OpenStack Blog

OpenStack Community Weekly Newsletter (July 11 – 18)

DefCore Update: Input Request for Havana Capabilities

As part of our community’s commitment to interoperability, the OpenStack Board of Directors has been working to make sure that “downstream” OpenStack-branded commercial products offer the same baseline functionality and include the same upstream, community-developed code. The work to define these required core capabilities and code has been led by the DefCore Committee co-chaired by Rob Hirschfeld (his DefCore blog) and Joshua McKenty (his post). You can read more about the committee history and rationale in Mark Collier’s blog post. The next deadlines are: OSCON on July 21, 11:30 am PDT and the Board Meeting on July 22nd.

And the K cycle will be named… Kilo !

The results of the poll are just in, and the winner proposal is “Kilo”. “k” is the unit symbol for “kilo”, a SI unit prefix (derived from the Greek word χίλιοι which means “thousand”). “Kilo” is often used as a shorthand for “kilogram”, and the kilogram is the last SI base unit to be tied to a reference artifact (stored near Paris in the Pavillon de Breteuil in Sèvres).

Five Days + Twelve Writers + One Book Sprint = One Excellent Book on OpenStack Architecture

A dozen OpenStack experts and writers from companies across the OpenStack ecosystem gathered at VMware’s Palo Alto campus for the OpenStack Architecture Design Guide book sprint. The intent was to deliver a completed book, aimed at architects and evaluators, on designing OpenStack clouds — in just five days.

Only developers should file specifications and blueprints

If you try to solve a problem with the wrong tool you’re likely going to have a frustrating experience. OpenStack developers use blueprints to define the roadmap for the various projects; the specifications attached to a blueprint are used to discuss the implementation details before code is submitted for review. Operators and users in general don’t need to dive into the details of how OpenStack developers organize their work, and they should definitely never be asked to use tools designed for and by developers.

Third Party CI group formation and minutes

At this week’s meeting the Third-Party group continues to discuss documentation patches, including a new terminology proposal, as well as CI system naming, logging and test timing. There was also a summary review of the current state of Neutron driver CI rollout. Anyone deploying a third-party test system or interested in easing third-party involvement is welcome to attend the meetings. Minutes of ThirdParty meetings are carefully logged.

The Road To Paris 2014 – Deadlines and Resources

Security Advisories and Notices

Tips ‘n Tricks

Upcoming Events

Other News

Got Answers?

Ask OpenStack is the go-to destination for OpenStack users. Interesting questions waiting for answers:

Welcome New Reviewers and Developers

Will Foster
zhangtralon
Walter Heck
Gael Chamoulaud
Mithil Arun
Fabrizio Fresco
Kieran Forde
badveli_vishnuus
JJ Asghar
Ryan Lucio
Gilles Dubreuil
Martin Falatic
Emily Hugenbruch
Bryan Jones
Christian Hofstädtler
Tri Hoang Vo
Steven Hillman
Ryan Rossiter
Rajesh Tailor
Mohit
akash
Tushar Katarki
Rajini Ram
Pawel Skowron
Karthik Natarajan
Abhishek L
Ryan Brown
takehirokaneko
Keith Basil
Kate Coyne
Ju Lim

Latest Activity In Projects

Do you want to see at a glance the bugs filed and solved this week? Latest patches submitted for review? Check out the individual project pages on OpenStack Activity Board – Insights.

OpenStack Reactions

[animated gif: youwelcome]

Trivial fix on a review of someone else while he’s asleep so jenkins can pass

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment

by Stefano Maffulli at July 18, 2014 07:56 PM

July 17, 2014

Arx Cruz

One controller to rule them all

These days I had a problem with my environment: I had two controllers: One for production, and the second for development. The production had a public network interface, where you could connect to your vm directly, however, the development...

by Arx Cruz at July 17, 2014 08:27 PM

Cody Bunch

OSCON Lab Materials

tl;dr Download our OSCON lab materials here.

As a follow-up on my coming to OSCON, I thought it prudent to provide some info & downloads for the lab ahead of time.

Lab Materials

While we will have USB keys in the tutorial for everyone, we figure some of y’all might want to get started early. With that in mind, the lab materials can be downloaded here, but be aware, it’s about 4GB of stuff to download.

  • Slides – Both the PPT & PDF of the slides
  • openstackicehouse.ova – The vAPP we will use in the lab
  • OpenStack_command_guide_reference.pdf – A quick reference for OpenStack CLI commands
  • Access_virtualbox_allinone.pdf – A guide for accessing the lab
  • cirros-0.3.1-x86_64-disk.img – Used in the labs
  • Osco Solutions/ – All of the labs we will be doing
  • Couch to OpenStack/ – An additional 12 hours of Getting Started with OpenStack Material
  • VirtualBox/ – Contains the VirtualBox installer for OSX, Linux, and Windows

Really, you can get the materials here

Prerequisites

To be successful in the lab, there are a few things you will need. None of these are too complex or too deep, but having them will improve your experience overall.

  • A laptop with a minimum of 4GB free ram
  • VirtualBox or VMware Fusion/Workstation/Player installed
  • An SSH client. On Windows, Putty works well.

Some Random Statistics

Building the USB keys was an exercise in insanity. The setup looks kinda like this:
https://pbs.twimg.com/media/BstHHTaCMAACUTk.jpg

The fan was added after the first batch nearly melted the USB hub. The smell of burnt silicon was pretty intense.

  • Each key contains about 4GB of data.
  • We’re copying them 24 at a time and seeing:
    • 40 min to finish all 24 disks
    • 45MB/sec (Yes Megabytes) sustained transfer
    • 12,000 IOPS largely write

by OpenStackPro at July 17, 2014 07:56 PM

Arx Cruz

OpenStack 3rd Party CI - Part III - Configuring your puppet recipes

Last time, we talked about Puppetboard. Now let’s start to work with recipes to install our services. For that I’ve created a github project called openstack-puppet-recipes I will continue to update the github as we progress in this series of...

by Arx Cruz at July 17, 2014 07:15 PM

July 16, 2014

Cody Bunch

USB Key Duplication on OSX on the Cheap

Edit: As I got a bit deeper into the copies, a new method was needed.

Common

First, make an image of the usb disk in question. To do this, open Disk Utility, and then:

  1. Click File
  2. Click New
  3. Click “New Image From Folder…”
  4. Select your folder
  5. Wait

Next, find the image file in finder & mount it, record the place it was mounted.
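For what it’s worth, the same image can be created and mounted from the terminal with hdiutil (a sketch; the paths are placeholders):

$ hdiutil create -srcfolder ~/usb-master usbkey.dmg
$ hdiutil attach usbkey.dmg

The attach step prints the mount point, which is what the copy commands below refer to.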

Methodology 1

This is a variant of the work here.

Now that you’ve got the image and it’s mounted, plug in the USB hub containing your keys and run the following from your terminal:

$ diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
... snip
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *8.2 GB     disk3
   1:                 DOS_FAT_32 NO NAME                 8.2 GB     disk3s1

What you are looking for here is the first and last /dev/disk# that represent your USB keys. In my case this is 3 – 23. From there we start the copy:

for i in `jot 21 3`; do asr --noverify --erase --noprompt --source /Volumes/No\ Name --target /dev/disk${i}s1 & done

In the above, note that --source specifies the /Volumes/No\ Name mount point of the image from earlier (if several volumes share that name, the path may pick up a numeric suffix), and `jot 21 3` counts through disks 3 to 23. The loop then copies the image onto each USB disk in parallel.

Methodology 2

This is a variant of the work here.

Now that you’ve got the image and it’s mounted, plug in the USB hub containing your keys and run the following from your terminal:

$ diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
... snip
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *8.2 GB     disk3
   1:                 DOS_FAT_32 NO NAME                 8.2 GB     disk3s1

What you are looking for here is the first and last /dev/disk# that represent your USB keys. In my case this is 3 – 27.

First unmount the disks:

for i in `jot 25 3`; do diskutil unmountDisk /dev/disk${i}; done

Next, use homebrew to install PV if you don’t have it:

brew install pv

Finally start the copy:

sudo dd if=/dev/disk2 |pv| tee >(sudo dd of=/dev/disk3 bs=16m) >(sudo dd of=/dev/disk4 bs=16m) >(sudo dd of=/dev/disk5 bs=16m) >(sudo dd of=/dev/disk6 bs=16m) >(sudo dd of=/dev/disk7 bs=16m) >(sudo dd of=/dev/disk8 bs=16m) >(sudo dd of=/dev/disk9 bs=16m) >(sudo dd of=/dev/disk10 bs=16m) >(sudo dd of=/dev/disk11 bs=16m) >(sudo dd of=/dev/disk12 bs=16m) >(sudo dd of=/dev/disk13 bs=16m) >(sudo dd of=/dev/disk14 bs=16m) >(sudo dd of=/dev/disk15 bs=16m) >(sudo dd of=/dev/disk16 bs=16m) >(sudo dd of=/dev/disk17 bs=16m) >(sudo dd of=/dev/disk18 bs=16m) >(sudo dd of=/dev/disk19 bs=16m) >(sudo dd of=/dev/disk20 bs=16m) >(sudo dd of=/dev/disk21 bs=16m) >(sudo dd of=/dev/disk22 bs=16m) >(sudo dd of=/dev/disk23 bs=16m) >(sudo dd of=/dev/disk24 bs=16m) >(sudo dd of=/dev/disk25 bs=16m) >(sudo dd of=/dev/disk26 bs=16m) | sudo dd of=/dev/disk27 bs=16m

Ok, that is a single line. It is also terrible terrible terrible, but it works. Some notes:
You need a >(sudo dd) section for each disk except the last one. You will also need to change these to match your environment.
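Rather than hand-typing every process substitution, the whole pipeline can be generated (a sketch assuming the same layout as above: source on /dev/disk2, tee branches on disks 3 through 26, and disk27 as the final consumer):

BRANCHES=""
for i in `jot 24 3`; do     # disks 3..26 become tee branches
  BRANCHES="$BRANCHES >(sudo dd of=/dev/disk${i} bs=16m)"
done
eval "sudo dd if=/dev/disk2 | pv | tee $BRANCHES | sudo dd of=/dev/disk27 bs=16m"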

by OpenStackPro at July 16, 2014 09:15 PM

Rob Hirschfeld

OpenStack DefCore Review [interview by Jason Baker]

I was interviewed about DefCore by Jason Baker of Red Hat as part of my participation in OSCON Open Cloud Day (speaking Monday 11:30am).  This is just one of fifteen in a series of speaker interviews covering everything from Docker to Girls in Tech.

This interview serves as a good review of DefCore so I’m reposting it here:

Without giving away too much, what are you discussing at OSCON? What drove the need for DefCore?

I’m going to walk through the impact of the OpenStack DefCore process in real terms for users and operators. I’ll talk about how the process works and how we hope it will make OpenStack users’ lives better. Our goal is to take steps towards interoperability between clouds.

DefCore grew out of a need to answer hard and high stakes questions around OpenStack. Questions like “is Swift required?” and “which parts of OpenStack do I have to ship?” have very serious implications for the OpenStack ecosystem.

It was impossible to reach consensus about these questions in regular board meetings so DefCore stepped back to base principles. We’ve been building up a process that helps us make decisions in a transparent way. That’s very important in an open source community because contributors and users want ground rules for engagement.

It seems like there has been a lot of discussion over the OpenStack listservs over what DefCore is and what it isn’t. What’s your definition?

First, DefCore applies only to commercial uses of the OpenStack name. There are different rules for the integrated code base and community activity. That’s the place of most confusion.

Basically, DefCore establishes the required minimum feature set for OpenStack products.

The longer version includes that it’s a board managed process that’s designed to be very transparent and objective. The long-term objective is to ensure that OpenStack clouds are interoperable in a measurable way and that we also encourage our vendor ecosystem to keep participating in upstream development and creation of tests.

A final important component of DefCore is that we are defending the OpenStack brand. While we want a vibrant ecosystem of vendors, we must first have a community that knows what OpenStack is and trusts that companies using our brand comply with a meaningful baseline.

Are there other open source projects out there using “designated sections” of code to define their product, or is this concept unique to OpenStack? What lessons do you think can be learned from other projects’ control (or lack thereof) of what must be included to retain the use of the project’s name?

I’m not aware of other projects using those exact words. We picked up ‘designated sections’ because the community felt that ‘plug-ins’ and ‘modules’ were too limited and generic. I think the term can be confusing, but it was the best we found.

If you consider designated sections to be plug-ins or modules, then there are other projects with similar concepts. Many successful open source projects (Eclipse, Linux, Samba) are functionally frameworks that have very robust extensibility. These projects encourage people to use their code base creatively and then give back some (not all) of their lessons learned in the form of code contributions. If the scope of returning value to upstream is too broad then sharing back can become onerous and forking ensues.

All projects must work to find the right balance between collaborative areas (which have community overhead to join) and independent modules (which allow small teams to move quickly). From that perspective, I think the concept is very aligned with good engineering design principles.

The key goal is to help the technical and vendor communities know where it’s safe to offer alternatives and where they are expected to work in the upstream. In my opinion, designated sections foster innovation because they allow people to try new ideas and to target specialized use cases without having to fight about which parts get upstreamed.

What is it like to serve as a community elected OpenStack board member? Are there interests you hope to serve that are different from the corporate board spots, or is that distinction even noticeable in practice?

It’s been like trying to row a dragon boat down class III rapids. There are a lot of people with oars in the water but we’re neither all rowing together nor able to fight the current. I do think the community members represent different interests than the sponsored seats but I also think the TC/board seats are different too. Each board member brings a distinct perspective based on their experience and interests. While those perspectives are shaped by their employment, I’m very happy to say that I do not see their corporate affiliation as a factor in their actions or decisions. I can think of specific cases where I’ve seen the opposite: board members have acted outside of their affiliation.

When you look back at how OpenStack has grown and developed over the past four years, what has been your biggest surprise?

Honestly, I’m surprised about how many wheels we’ve had to re-invent. I don’t know if it’s cultural or truly a need created by the size and scope of the project, but it seems like we’ve had to (re)create things that we could have leveraged.

What are you most excited about for the “K” release of OpenStack?

The addition of platform services like database as a Service, DNS as a Service, Firewall as a Service. I think these IaaS “adjacent” services are essential to completing the cloud infrastructure story.

Any final thoughts?

In DefCore, we’ve moved slowly and deliberately to ensure people have a chance to participate. We’ve also pushed some problems into the future so that we could resolve the central issues first. We need the community to speak up (either for or against) in order for us to accelerate: silence means we must pause for more input.


by Rob H at July 16, 2014 07:54 PM

DreamHost

How DreamHost is reinventing itself with OpenStack

This post originally appeared on OpenSource.com: http://opensource.com/business/14/7/dreamhost-and-openstack-love-story

Founded in 1997, DreamHost is a seasoned internet business, home to over 400,000 happy customers, 1.5 million sites and applications, and hundreds of thousands of installs of WordPress, the dominant open source CMS. Open source is in our blood, and has powered every aspect of our services since 1997. DreamHost is built on a foundation of Perl, Linux, Apache, MySQL, and countless other open source projects. In our 16+ years of existence, DreamHost has seen the realities of internet applications and hosting drastically evolve. Our journey to the cloud requires a bit of history and context, so let’s dive right in.

The rise of the black box cloud

Nearly a decade ago, Amazon created the market for cloud infrastructure services with the introduction of the immensely popular S3 for storage and EC2 for compute. The years that followed have been dominated by sweeping changes to the way that infrastructure is consumed and, more importantly, to the underlying design and architecture of software. There has also been a larger, hidden consequence to the rise of opaque cloud infrastructure services.

While the cloud has been revolutionary it has also been largely a black box. The software and systems that power Amazon Web Services, Microsoft Azure, and many other clouds are closed to prying eyes, leaving users in the dark about the implementation of the most critical component of their application stacks. The era prior to the cloud represented the rise of the open internet – Linux, Apache, MySQL, and languages like PHP, Perl, Python, and Ruby, where developers, engineers, and IT organizations had a large degree of transparency about the software that powered their applications. In the early cloud era much of that transparency disappeared.

A new hope

In 2010, two unlikely partners, NASA and Rackspace Hosting, founded the OpenStack project to create open source cloud software for the creation of private and public clouds. In the years since its inception the OpenStack project has exploded, aiming to live up to its potential as the Linux of the cloud. More than 200 companies and countless individuals are now a part of the project, working in concert to create open source software and APIs that power private and public clouds globally.

DreamHost joined OpenStack early in its life, committing code, financial backing, and leadership to the project. We joined the OpenStack Foundation as a Gold member, and DreamHost CEO Simon Anderson was elected to represent us on the OpenStack Foundation Board of Directors. Our commitment to the success of the project runs deep.

Why OpenStack?

DreamHost wouldn’t exist today without a strong commitment to the open source philosophy. We don’t want to live in a future that is again dominated by closed, technically opaque, “magical” cloud platforms. Many traditional hosting customers are interested in the adoption of cloud services, either in addition to, or as a replacement for, their existing shared, VPS, and dedicated hosting, and we believe that they too are looking for a simple and affordable upgrade path. Given our DNA, it makes sense for DreamHost to build our customers what they want using best-of-breed open source software.

Introducing DreamCompute

DreamHost’s first product built on OpenStack is DreamCompute, which allows customers to create virtual machines, block devices, and networks on-demand via the standard OpenStack APIs and command-line tools or via an intuitive web-based user interface. DreamCompute puts more power in the hands of our customers than they’ve ever had access to before, and is built on a large library of open source software. In true DreamHost fashion, even the architecture of DreamCompute is open.
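As a rough sketch, spinning up a server on an OpenStack cloud like DreamCompute using the standard command-line clients of the era might look something like the following (the image name, flavor, and volume size here are purely illustrative placeholders, not actual DreamCompute offerings):

$ nova boot --image ubuntu-14.04 --flavor m1.small my-first-server

$ cinder create --display-name my-volume 10

$ nova volume-attach my-first-server <volume-id> /dev/vdb

The same operations are available through the OpenStack REST APIs, which is what makes clouds like DreamCompute interchangeable with any other standards-compliant OpenStack deployment.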

DreamCompute runs on a mixture of high-end Dell servers running Ubuntu Linux. We have two basic types of servers: storage nodes and hypervisor nodes. The hypervisor nodes are optimized for hosting virtual machines running on top of the open source KVM hypervisor, and feature 64 AMD cores and 192 GB of RAM. Our storage nodes are lower-powered, higher-density servers, each with twelve 3 TB disks, and are running Ceph, the open source, massively distributed, fault tolerant storage system that DreamHost helped build.

DreamCompute also features a “cockpit” pod, which represents the “brain” of the cloud. In the cockpit, we run OpenStack and its supporting services on a mixture of bare metal and virtual machines, including Horizon, Glance, Nova, Neutron, Keystone, and Cinder, along with Apache, HAProxy load balancers, MySQL databases, and RabbitMQ queueing systems. The entire system is configured and managed by Chef, and is monitored using open source tools like logstash, graphite, collectd, and nagios.

Even the networking hardware and software in DreamCompute are based upon open platforms and technology. DreamHost has sourced high-performance, 48-port 10 Gigabit switches directly from manufacturers. The switches run Cumulus Linux, a Linux network operating system from our friends at Cumulus Networks. This unique setup enables us to provision, monitor, and operate our networking infrastructure using the same tools and processes that we use for our compute and storage nodes, greatly reducing operational overhead.

DreamCompute is compatible with the standard OpenStack Compute, Network, Image, and Storage APIs, and is at its core an OpenStack deployment. That said, DreamCompute also has some unique features that set it apart from other clouds. It should come as no surprise that these features are, in fact, built upon open source software that DreamHost created.

Fear the Cephalopod

Every virtual machine in DreamCompute boots from a virtual block device backed by a multi-petabyte Ceph storage cluster. Operating system images themselves are stored in the same cluster as these block devices, enabling DreamCompute to leverage Ceph’s Copy-on-Write (COW) functionality. Rather than downloading the operating system image from a central store to a hypervisor (which is time consuming) and then provisioning a new block device, Ceph enables our virtual machines to boot nearly instantly from a thin-provisioned copy of the OS image. As a result, virtual machines in DreamCompute can be created and fully operational in as little as 40 seconds.
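Under the hood, this maps onto Ceph’s standard RBD snapshot-and-clone layering (which requires format-2 RBD images); a minimal sketch with hypothetical pool and image names might look like this:

$ rbd snap create images/ubuntu-14.04@golden

$ rbd snap protect images/ubuntu-14.04@golden

$ rbd clone images/ubuntu-14.04@golden volumes/instance-0001-disk

The clone shares all unmodified blocks with the protected snapshot, which is why a new boot disk can be created in seconds regardless of the size of the image.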

Ceph also provides DreamCompute users with confidence that their data is safe, as every piece of data that is stored in the cluster is replicated a total of three times. When disks, servers, or racks fail, the Ceph cluster springs into action to automatically heal itself, ensuring that the proper number of replicas exist. When new capacity is added, Ceph responds by immediately putting it to good use, rebalancing data across the cluster.
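For the curious, the replication factor and self-healing behavior described above can be observed with the standard Ceph tooling; a quick sketch, assuming a pool named volumes:

$ ceph osd pool get volumes size

$ ceph health

$ ceph -w

The first command reports the number of replicas kept for the pool, while the last streams live cluster events, including the recovery and rebalancing activity that kicks in after a failure or capacity change.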

Virtualize all the things. Including the network!

Server and storage virtualization are very familiar concepts to most, but network virtualization is a relatively new idea. DreamCompute was built from the ground up to provide full network virtualization for every customer. In DreamCompute, the physical network represents an “underlay,” which is invisible to the customer. A virtual network fabric – an “overlay” – is then layered on top, providing every customer in DreamCompute with a virtual OSI Layer 2 (L2) switch, which is completely isolated at L2 from every other customer.

On top of this virtual L2 network, tenants are provided with a virtualized software router, which provides L3+ services like routing, firewalling, and more. DreamHost has open-sourced this project, named it Akanda, and published it under a liberal open source license on GitHub.

DreamCompute is also built from the ground up to support IPv6, as the exhaustion of the IPv4 address space is nearly upon us. Every virtual machine in DreamCompute is automatically assigned an IPv6 address along with its private IPv4 address.

By connecting network virtualization technology with OpenStack’s Neutron networking APIs, customers have fully programmable control of their network from L2 through L7, with full isolation from other tenants.
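A minimal sketch of what this looks like from the customer’s side, using the Neutron CLI of the day (all names and address ranges here are illustrative only):

$ neutron net-create my-private-net

$ neutron subnet-create my-private-net 10.0.0.0/24 --name my-subnet-v4

$ neutron subnet-create my-private-net fd00:d0:0:1::/64 --ip-version 6 --name my-subnet-v6

$ neutron router-create my-router

$ neutron router-interface-add my-router my-subnet-v4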

The future of the open source cloud is bright

DreamCompute represents the continuation of a long partnership between DreamHost and the open source community. We’re excited to further our contributions to OpenStack, and to be part of a vibrant ecosystem of cloud service providers who provide OpenStack-based services. The future of the open source cloud is very bright, and we’re delighted to be on the forefront.

DreamHost’s DreamCompute is currently in private beta. To register your interest in joining the free beta period, visit DreamCompute and register today.

by Jonathan LaCour at July 16, 2014 07:21 PM