July 04, 2015

Craige McWhirter

How To Delete a Cinder Snapshot with a Status of error or error_deleting With Ceph Block Storage

When deleting a volume snapshot in OpenStack you may sometimes get an error message stating that Cinder was unable to delete the snapshot.

There are a number of reasons why Ceph may report a snapshot as unable to be deleted, but the most common reason in my experience has been that a Cinder client connection to Ceph has not yet been closed, possibly because a client crashed.
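If you suspect a lingering client connection, you can check whether anything still holds a watch on the volume's RBD image before going any further. This is a hedged example rather than part of the original procedure: the rbd status subcommand may not exist on older Ceph releases, in which case rados listwatchers against the image's header object gives the same information.

# rbd status my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46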

If you were to look at the snapshots in Cinder, the status is usually error or error_deleting:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |     Status     |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | snappy:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z |  40  |
| 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error_deleting | snappy:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-05-18T00:00:01Z |  40  |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | snappy:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
| 52c43ec8-e713-4f87-b329-3c681a3d31f2 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error_deleting | snappy:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-24T14:00:02Z |  40  |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | snappy:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z |  40  |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+

When you check Ceph you may find the following snapshot list:

# rbd snap ls my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46
SNAPID NAME                                              SIZE
  2069 snapshot-2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 40960 MB
  2526 snapshot-52c43ec8-e713-4f87-b329-3c681a3d31f2 40960 MB
  2558 snapshot-47fbbfe8-643c-4711-a066-36f247632339 40960 MB

The astute will notice that there are only 3 snapshots listed in Ceph, yet 5 listed in Cinder. We can immediately exclude 47fbbfe8, which is available in both Cinder and Ceph, so there are no issues there.

You will also notice that the snapshots with the status error are not in Ceph, while the two with error_deleting are. My take on this is that for the error status, Cinder never received the message from Ceph confirming that the snapshot had been deleted successfully, whereas for the error_deleting status, Cinder was unsuccessful in offloading the delete request to Ceph.

Each status will need to be handled separately. I'm going to start with the error_deleting snapshots, which are still present in both Cinder and Ceph.
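Before touching the database, it is worth confirming exactly which rows you are about to modify. A minimal sketch, assuming you have credentials for the cinder database handy:

# mysql cinder -e "SELECT id, status, deleted FROM snapshots WHERE volume_id = '3004d6e9-7934-4c95-b3ee-35a69f236e46';"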

In MariaDB, set the status from error_deleting to available:

MariaDB [cinder]> update snapshots set status='available' where id = '2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

MariaDB [cinder]> update snapshots set status='available' where id = '52c43ec8-e713-4f87-b329-3c681a3d31f2';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Check in Cinder that the status of these snapshots has been updated successfully:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |     Status     |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z |  40  |
| 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-05-18T00:00:01Z |  40  |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
| 52c43ec8-e713-4f87-b329-3c681a3d31f2 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-24T14:00:02Z |  40  |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z |  40  |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+

Delete the newly available snapshots from Cinder:

% cinder snapshot-delete 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0
% cinder snapshot-delete 52c43ec8-e713-4f87-b329-3c681a3d31f2

Then check the results in Cinder and Ceph:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |     Status     |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z |  40  |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z |  40  |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+

# rbd snap ls my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46
SNAPID NAME                                              SIZE
  2558 snapshot-47fbbfe8-643c-4711-a066-36f247632339 40960 MB

So we are done with Ceph now, as the error snapshots do not exist there. As they only exist in Cinder, we need to mark them as deleted in the Cinder database:

MariaDB [cinder]> update snapshots set status='deleted', deleted='1' where id = '07d75992-bf3f-4c9c-ab4e-efccdfc2fe02';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

MariaDB [cinder]> update snapshots set status='deleted', deleted='1' where id = 'a595180f-d5c5-4c4b-a18c-ca56561f36cc';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Now check the status in Cinder:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |   Status  |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | available | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+

Now your errant Cinder snapshots have been removed.

Enjoy :-)

by Craige McWhirter at July 04, 2015 07:25 AM

July 03, 2015

OpenStack Blog

OpenStack Community Weekly Newsletter (June 26 – July 3)

Writing Your First OpenStack Application

Ever thought about what it takes to write a scalable cloud application using an OpenStack SDK? Thanks to a small team’s heroic effort, there’s now a guide for that!

Dive into Zuul – Gated commit system

Zuul is software developed by the OpenStack community. It was developed as an efficient gated commit system, allowing projects to merge patches only after they pass a series of tests. It reduces the probability of breaking the master branch, for instance when unit tests or functional tests no longer pass on the tip of master. Fabien Boucher explains how Zuul works and clarifies some concepts through simple examples.

5 years of OpenStack – it’s time to celebrate the community!

OpenStack celebrates its 5th birthday July 19, and we’re celebrating with the entire OpenStack community during July! Cloud interoperability and support for developer productivity have been focuses for the OpenStack project this year, and none of it would be possible without the quickly growing OpenStack community.

The Road to Tokyo

Reports from Previous Events

Relevant Conversations

Deadlines and Contributors Notifications

Security Advisories and Notices

  • None this week

Tips ‘n Tricks

Open Call for Proposals

Recently Merged Specs

Subject | Owner | Project
Implement server instance tagging | Sergey Nikitin | openstack/nova-specs
New ZeroMQ driver implementation details | Oleksii Zamiatin | openstack/oslo-specs
Add user-identity-format-flexibility for oslo.log | Doug Hellmann | openstack/oslo-specs
Enable optional dependencies in OpenStack projects | lifeless | openstack/oslo-specs
Specification for Adding Kafka Driver | Komei Shimamura | openstack/oslo-specs
Add flavor tables to API database | Vineet Menon | openstack/nova-specs
Servicegroup foundational refactoring for Control Plane | Vilobh Meshram | openstack/nova-specs
Add working items to consistent-service-method-names | Ken’ichi Ohmichi | openstack/qa-specs
Cleanup the specs repo | Matthew Treinish | openstack/qa-specs
Add devstack external plugin spec | Chmouel Boudjnah | openstack/qa-specs
Graduate fileutils to oslo.utils and oslo.policy | Steve Martinelli | openstack/oslo-specs
Move email spec to backlog | Flavio Percoco | openstack/zaqar-specs
Add spec for email notification | Fei Long Wang | openstack/zaqar-specs
Enable listing of role assignments in a project hierarchy | henry-nash | openstack/keystone-specs
Configure most important hadoop configs automatically | Vitaly Gridnev | openstack/sahara-specs
Add scheduling edp jobs in sahara | lu huichun | openstack/sahara-specs
Persistent transport | Victoria Martinez de la Cruz | openstack/zaqar-specs
Change QoS API to be consistent | Eran Gampel | openstack/neutron-specs
Nova API Microversions support in NovaClient | Andrey Kurilin | openstack/nova-specs
Propose VMware limits, reservation and shares | garyk | openstack/nova-specs
Spec to Add ‘macvtap’ as vif type to novas libvirt driver. | Andreas Scheuring | openstack/nova-specs
Add spec for more-gettext-support | Peng Wu | openstack/oslo-specs
Moving not implemented specs to backlog | Flavio Percoco | openstack/zaqar-specs
Implement force_detach for safe cleanup | Scott DAngelo | openstack/cinder-specs
Update to CORS specification. | Michael Krotscheck | openstack/openstack-specs
Add requirements management specification. | lifeless | openstack/openstack-specs
Enabling Python 3 for Application Integration Tests | Doug Hellmann | openstack/openstack-specs
Cleanup and removal of StrictABC requirement | Morgan Fainberg | openstack/keystone-specs
Fix resource tracking for operations that move instances between hosts | Nikola Dipanov | openstack/nova-specs
“Get me a network” spec | Sean M. Collins | openstack/neutron-specs
Add spec for tempest plugin interface | Matthew Treinish | openstack/qa-specs
mandatory api limits | gordon chung | openstack/ceilometer-specs
Moved driver interface from backlog to liberty | Ajaya Agrawal | openstack/keystone-specs
Adopt Oslo Guru Meditation Reports | zhangtralon | openstack/ceilometer-specs
Spec for DBaaS(Trove) notification consumption | Rohit Jaiswal | openstack/ceilometer-specs
Declarative snmp metric pollster | Lianhao Lu | openstack/ceilometer-specs
Add is_domain to tokens for projects acting as a domain | henry-nash | openstack/keystone-specs
Clean up tenant resources when one is deleted | Assaf Muller | openstack/neutron-specs
Fixes for generic RAID interface | Devananda van der Veen | openstack/ironic-specs

Upcoming Events

Celebrating 5 Years of OpenStack at OSCON on Wednesday, July 22nd: RSVP

Other News

OpenStack Reactions

Spawning up a new compute node

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at July 03, 2015 03:45 PM

OpenStack Superuser

Superuser weekend reading

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community.

Got something you think we should highlight? Tweet, blog, or email me!

In Case You Missed It

In our favorite headline of the week, Japan's NTT whips out OpenStack cannon at cloud Godzilla AWS, The Register opines: "NTT joined the OpenStack Foundation in May, pledging to use the open-source cloud architecture to strengthen its own public-cloud service. NTT is a hero among OpenStackers for being an early champion and adopter of their religion. In February, NTT announced Elastic Service Infrastructure (ESI), putting OpenStack on Juniper gear."

In a post titled Writing your first OpenStack application, OpenStack's Tom Fifield offers a handy guide aimed at software developers who want to build applications on OpenStack clouds and also shares some best practices for cloud application development.

Peek-a-boo, I see you! Larry Lang, president and CEO at PLUMgrid Inc., did some noodling around with Google Trends to discover that OpenStack leads in search interest. "Judging by interest shown via Google searches, OpenStack is running away from the pack...OpenStack has a long way to go, but it's even gaining on VMware." Try it for yourself using Google Search Trends. We couldn't resist plugging media darling Docker in there - the results are eye-opening.

Rob Hirschfeld, OpenStack individual board member and founder/CEO at startup RackN, also has Docker on the brain..."If OpenStack is a leading indicator, we can expect to see vendor battlegrounds forming around networking and storage. Docker (the company) has a chance to show leadership and build community here yet could cause harm by giving up the arbitrator role to be a contender instead," he writes in a post about Docker's community landscape.

As OpenStack's birthday celebration nears, here's where to find a party near you, wherever you are, from Atlanta to Turkey.

Looking forward to the Tokyo Summit? Here's the first in a series of posts to help get you there. Japan exempts 67 countries from requiring a visa, but if you need one, here's what you need to know.


Cover image by Alan Kotok // CC BY NC

by Nicole Martinelli at July 03, 2015 03:29 PM

Opensource.com

Top 5: Linux lifestyle, Netflix big data, Etsy, OpenStack and more

This week, Opensource.com began two new series. A couple of articles from our OSCON speaker interview series made it into the Top 5 this week, but none from our Mid-Year series quite hit the mark. The Mid-Year series is made up of some fun roundups, so here's the full collection for your reading pleasure. Bookmark these for later! You won't want to miss them.

by Jen Wike Huger at July 03, 2015 02:04 PM

July 02, 2015

Tesora Corp

Short Stack: OpenStack Essentials book, DBaaS, contributors vs. consumers, Project Raisin

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it’s about OpenStack, we’ll find the best links we can to share with you every week. If you like what you see, […]

The post Short Stack: OpenStack Essentials book, DBaaS, contributors vs. consumers, Project Raisin appeared first on Tesora.

by Leslie Barron at July 02, 2015 08:00 PM

Rackspace Developer Blog

OpenStack OSAD and Nagios, against the world

As technology has evolved, infrastructure and application monitoring has changed position. Not so long ago, monitoring was an afterthought when rolling out a new application or standing up a new rack of servers. More recently, I have observed monitoring become one of the first considerations, to the point where it is actually in the initial project plan.

This evolution, while overdue in my mind, is a move in the right direction…and not just for the system admin who gets the 2AM email alert or the application owner who sadly reports a 97% SLA on his app to leadership every month. Truly knowing how your application is affecting your infrastructure is one of the keys to a successful cloud.

With monitoring now in an elevated position, that leaves you to think: what should I use for monitoring? There are plenty of software solutions on the market, many of which solve different problems.

Your choice should be made around the following thoughts:

  • Keep it simple
  • Keep your monitoring close to your infrastructure
  • Create good monitors

To keep it simple, you can't do better than going with Nagios Core. While it may not be the flashiest dashboard visually, it is one of the most powerful and lightweight monitoring applications I have used. With Nagios, you have ultimate control over many aspects of your monitoring ecosystem, ranging from creating custom plugins all the way to explicitly defining execution windows for each host. Administration can be handled directly from the flat files, or you can use one of the many third-party tools, such as NConf. With the launch of the newer version, Nagios XI, more and more of the features previously only found in third-party tools are built right in. Some of the new features that stand out are the advanced graphs, integration with incident management tools, and cleaner SLA reports.

Of course, with great capability sometimes comes great overhead. Typically, I have found that keeping your monitoring close to your infrastructure avoids limiting what you can monitor due to firewall restrictions and the like. I strongly recommend using SNMP (UDP port 161) rather than the NRPE agent: no agent install is needed. Also, I normally stick with plugins written in Perl to ease troubleshooting. Creating ‘good’ monitors is essential to minimizing false alerts, which in time turn into ignored alerts. If you find a service check continuously sending off false alerts, FIX IT! Do not let it linger for days.

Because OpenStack exposes all of its functionality through APIs, monitoring is made easy. Custom plugin scripts can be created to monitor the whole OpenStack stack and to cross-reference any bottlenecks with physical infrastructure problems. This type of proactive monitoring can help prevent the downtime that leads to outages.
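As an illustration of how small such a plugin can be, here is a minimal sketch of an API service check written in the Nagios plugin style. The endpoint URL is an assumption; point it at whichever OpenStack API you want to watch:

#!/bin/bash
# Minimal OpenStack API check in the Nagios plugin style (illustrative sketch).
# ENDPOINT is an assumption -- substitute the API endpoint you want to monitor.
ENDPOINT="http://<keystone host>:5000/v2.0"
if curl -sf -o /dev/null --max-time 10 "$ENDPOINT"; then
    echo "OK - API responding at $ENDPOINT"
    exit 0
else
    echo "CRITICAL - API not responding at $ENDPOINT"
    exit 2
fi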

OpenStack monitoring consists of:

  • Monitoring the physical hardware (base resource consumption)
  • Monitoring the OpenStack API endpoints
  • Monitoring the OpenStack services processes
  • Monitoring your Compute nodes via your Infrastructure nodes

Since I have such a deep-seated love for OSAD (OpenStack Ansible Deployment), used and created by Rackspace, it seemed only fitting to put together a series of Ansible playbooks to handle most of the Nagios and NConf setup. Also, because I love to pay it forward, I have included OSAD-focused Nagios configs (checkcommands, services and a bunch of global Nagios configs), which can be used to monitor your OpenStack OSAD cloud within minutes.

Base prerequisites are:

  • OpenStack OSAD cloud (technically, the Nagios configs can work against any OpenStack deployment with tweaks; playbooks tested against v10.6)
  • Monitoring server to run Nagios and NConf

Let’s get started! Early disclaimer: the steps below will take some time and should not be rushed.


Step 1: Clone Repo

Connect via SSH to the node used to deploy your OSAD cloud (most likely is the first Infrastructure node). Within the root home directory, clone the repo below to pull down the roles you will need.

$ git clone --recursive https://github.com/wbentley15/nagios-openstack.git
Step 2: Examine roles and populate variables

Take a look at the roles, and familiarize yourself with the steps. Find below all the variables for which you will need to supply values. The variable files are located in the group_vars directory. The 'all_containers' and 'hosts' files are meant to be identical, so please supply the same variables below for both.

USER: user to be created on the OSAD nodes to match up against the default Nagios user created. The default user is 'nagios'
SNMP_COMMUNITY: the SNMP community string used for the OSAD nodes and containers
SYS_LOCATION: additional SNMP information (optional)
SYS_CONTACT: additional SNMP information (optional)

The variables needed for the nagios-server variable file are:

DB_NAME: name of the NConf database to be created
DB_USER: root user for the local mysql server
DB_PASS: root user password for the local mysql server
Step 2b: Add the IP address of the Nagios server

Add the IP address of the Nagios server to the hosts file in the root of the playbook directory.


Step 3: Move the playbooks and roles into the OSAD deployment directory

In order to leverage the dynamic inventory capabilities that come with OSAD, the playbooks and roles need to be local to the deployment directory. Trust me, you will like this!

$ cd ~/nagios-openstack
$ mkdir /opt/os-ansible-deployment/rpc_deployment/playbooks/group_vars
$ cp ~/nagios-openstack/group_vars/* /opt/os-ansible-deployment/rpc_deployment/playbooks/group_vars
$ cp -r ~/nagios-openstack/roles/* /opt/os-ansible-deployment/rpc_deployment/roles
$ cp ~/nagios-openstack/base* /opt/os-ansible-deployment/rpc_deployment/playbooks
$ cp ~/nagios-openstack/hosts /opt/os-ansible-deployment/rpc_deployment/playbooks
Step 4: Execute the following playbook to install and configure SNMP on your OSAD cloud:
$ cd /opt/os-ansible-deployment/rpc_deployment/ 
$ ansible-playbook -i inventory/dynamic_inventory.py playbooks/base.yml

In the event the SNMP service does not start the first time, please execute the following commands:

$ ansible all_containers -m shell -a "service snmpd start"
$ ansible hosts -m shell -a "service snmpd start"
Step 5: Execute the following playbook to install and configure Nagios onto your monitoring server:
$ cd playbooks
$ ansible-playbook -i hosts base-nagios.yml

Then connect to the monitoring server via SSH and execute the following commands to set the 'nagiosadmin' user password (used to log into Nagios web dashboard) and to restart Nagios:

$ sudo htpasswd -c /etc/nagios3/htpasswd.users nagiosadmin
$ service nagios3 restart
Step 6: Execute the following playbook to install and configure NConf onto your monitoring server:
$ ansible-playbook -i hosts base-nconf.yml
Step 6b: NConf initial configuration

My attempt to automate this part was not successful, so you have to finish the NConf configuration using the NConf web console. Browse to http://<monitoring server IP>/nconf and follow the prompts to complete the initial configuration. I suggest using the following inputs and keeping the defaults for the others:

DBNAME: same as what you entered in the variables file above
DBUSER: same as what you entered in the variables file above
DBPASS: same as what you entered in the variables file above
NCONFDIR: /var/www/html/nconf
NAGIOS_BIN: /usr/sbin/nagios3
Step 6c: Execute the post NConf playbook:
ansible-playbook -i hosts post-nconf-install.yml
Step 7: Execute the following playbook to configure the OSAD nodes to allow for monitoring via SSH:

In order to monitor the OpenStack processes and APIs running on the local containers, you must run the service checks remotely over SSH. The good news is that the Nagios plugin to do this already exists (check_by_ssh).

$ cd ..
$ ansible-playbook -i inventory/dynamic_inventory.py playbooks/base-infra.yml
Step 7b: Confirm the Nagios and NConf install:

In a browser go to the following URLs:

http://<monitoring server IP>/nagios3
http://<monitoring server IP>/nconf
Step 8: Time to configure Nagios for monitoring OSAD:

Unfortunately, this part does require manual configuration as each installation will differ too much to automate. In the big picture, this will just help you sharpen your Nagios skills. Do not worry, a copy of the Nagios directory was already taken. This step will take some time and should not be rushed.

The first step here is to customize the Nagios configuration files located in the /etc/nagios3/rpc-nagios-configs directory on your monitoring server. All the configuration files are important, but the most critical ones are the advanced_services.cfg and hosts.cfg files.

Within the advanced_services.cfg file, you will need to update each service check with the IP addresses of the containers within your OSAD install. The fastest way to get that information is to execute lxc-ls --fancy on each Infrastructure node and capture the output. Below is an example:

define service {
     service_description          infra1_check_ssh_process_glance-api
     check_command                check_by_ssh_process!<glance container IP>!glance-api
     check_period                 24x7
     notification_period          24x7
     host_name                    <OSAD node name>
     contact_groups               +admins,rpc-openstack-support
     use                          rpc-service
}

Same goes for the hosts.cfg file. Please update the OSAD node names and IP addresses.

define host {
     host_name                     <OSAD node name>
     address                       <OSAD node IP>
     icon_image_alt                Ubuntu 14.04
     icon_image                    base/ubuntu.gif
     statusmap_image               base/ubuntu.gd2
     check_command                 check-host-alive
     check_period                  24x7
     notification_period           24x7
     contact_groups                +admins,rpc-openstack-support
     use                           rpc-node
}

Please also add the following to the bottom of the resources.cfg file located in the root of the Nagios directory (/etc/nagios3):

$USER10$=<random SNMP community string of your choice, keep it simple>

If you are having trouble making the updates to the configs using an editor, do not stress out as the next step will make this process a bit easier.

Step 9: Import Nagios configuration into NConf:

Next, append the contents of the configuration files in the /etc/nagios3/rpc-nagios-configs directory to the current Nagios configuration files (add them to the bottom). Every host, host group, check, service, and contact group is uniquely named so that it does not conflict with the current Nagios setup. Then we will step through the instructions found on the NConf website.

As the NConf tutorial suggests, first run the commands with the '-s' parameter to simulate the import process. Once you are able to run them with no errors, remove the '-s' parameter to do the final import. Connect to the monitoring server via SSH and run the following commands:

$ cd /var/www/html/nconf
$ bin/add_items_from_nagios.pl -c timeperiod -f /path/to/timeperiods.cfg -s
$ bin/add_items_from_nagios.pl -c misccommand -f /path/to/misccommands.cfg -s
$ bin/add_items_from_nagios.pl -c checkcommand -f /path/to/checkcommands.cfg -s
$ bin/add_items_from_nagios.pl -c contact -f /path/to/contacts.cfg -s
$ bin/add_items_from_nagios.pl -c contactgroup -f /path/to/contactgroups.cfg -s 
$ bin/add_items_from_nagios.pl -c host-template -f /path/to/host_templates.cfg -s
$ bin/add_items_from_nagios.pl -c service-template -f /path/to/service_templates.cfg -s
$ bin/add_items_from_nagios.pl -c hostgroup -f /path/to/hostgroups.cfg -s
$ bin/add_items_from_nagios.pl -c host -f /path/to/hosts.cfg -s
$ bin/add_items_from_nagios.pl -c advanced-service -f /path/to/advanced-services.cfg -s

Now you can edit all the Nagios configs within the NConf web console.

Step 10: Execute the post Nagios playbook:
$ cd playbooks
$ ansible-playbook -i hosts post-nagios-install.yml
Step 11: Generate your first Nagios config:

Once you are satisfied with all of your custom Nagios configs (trust me, you will do this a couple of times), click on the 'Generate Nagios config' link in the sidebar of the NConf web console. It will note whether any errors were encountered. From time to time you will see warnings, but they are just that: warnings, nothing urgent.

Last but not least, from the monitoring server, execute the following commands to deploy the Nagios configuration to Nagios (you may need to use sudo):

$ cd /var/www/html/nconf/ADD-ONS
$ ./deploy_local.sh

If you want to get fancy, you can follow the instructions found on the digitalcardboard blog under the 'Configuring NConf to Deploy Nagios Configurations Automatically' section.

Go check out your work in Nagios now!

July 02, 2015 01:00 PM

Tesora Corp

Gartner Examines Database as a Service

Gartner recently published “Market Guide for Database Platform as a Service” that summarizes the current state of and major players in the Database as a Service (DBaaS) market. It suggests that the maturing of DBaaS technologies and a high degree of interest from enterprises will lead to fast growth in this market over the next five […]

The post Gartner Examines Database as a Service appeared first on Tesora.

by Leslie Barron at July 02, 2015 12:30 PM

OpenStack Blog

Writing Your First OpenStack Application

Ever thought about what it takes to write a scalable cloud application using an OpenStack SDK? Thanks to a small team’s heroic effort, there’s now a guide for that!

Christian Berendt (B1 Systems), Sean Collins (Mirantis), James Dempsey (Catalyst IT) and Tom Fifield gathered in Taipei, with Nick Chase live via video link, to produce “Writing Your First OpenStack Application” in just five days. The sprint was organised by the Application Ecosystem Working Group, with the financial support of the OpenStack Foundation.

The new work is aimed at software developers who want to build applications on OpenStack clouds and also shares some best practices for cloud application development.

Inspired by Django’s first app tutorial, where a simple polling app is used to explore the basics of working with Django, “Writing Your First OpenStack Application” uses an app that generates beautiful fractal images as a teaching tool to run through areas like:

  • Creating and destroying compute resources.
  • Scaling available resources up and down.
  • Using Object and Block storage for file and database persistence.
  • Customizing networking for better performance and segregation.
  • Making cloud-related architecture decisions such as turning functions into micro-services and modularizing them.

The guide has been written with a strong preference for the most common API calls, so it will work across a broad spectrum of OpenStack versions. In addition, the authors have paid special attention that the first few sections should work almost regardless of OpenStack cloud configuration.

A core part of the guide’s design is support for multiple SDKs. The initial version was written and tested with the libcloud SDK, but work is underway for python-openstacksdk, pkgcloud and fog which will re-use the text with new code samples.

So, check out “Writing your First OpenStack Application” for libcloud, watch the introductory presentation from the summit, or consider helping complete the samples for other languages.

Taipei 101 (c) James Dempsey

Each post-it note represents an area that had to be written.

Enjoying local Taiwanese food after a hard day’s writing.

by Tom Fifield at July 02, 2015 03:58 AM

Kyle Mestery

Running Docker Machine on HP Helion Public Cloud

Brent Salisbury, otherwise known as networkstatic, has been doing an amazing job writing articles on how to run Docker Machine on various cloud platforms. He’s written about running it on Microsoft Azure, Amazon Web Services, Rackspace Public Cloud, and Digital Ocean. He’s done an amazing job showing you how you can utilize Docker Machine with an existing cloud platform to experiment with Docker and even use it in production running in VMs on public clouds.

Given that Docker Machine supports so many different cloud platforms, I thought I’d help to continue his series and show you how to use Docker Machine with the HP Helion Public Cloud. Since the HP Helion Public Cloud is based on OpenStack, the Docker Machine driver for OpenStack works quite nicely here. Let’s dive in and see how to configure this.

HP Helion Public Cloud

For those interested in following along, you can sign up for a free HP Helion Public Cloud account and get a credit by using the link here. Use this to get access so you can spin up your own instances. Once you do that, you can log in, create VMs and try out Docker Machine as shown below.

Using Docker Machine With HP Helion Public Cloud

There are a few things you’ll need to grab before you create your VMs using Docker Machine:

  1. Image Name
  2. Flavor Name
  3. Tenant ID
  4. Auth URL
  5. Endpoint Type
  6. Region
  7. Floating IP Pool
  8. Username
  9. Password
  10. ssh-user

You can acquire this information from the admin panel of your HP Helion Public Cloud account. Once you have it all, it’s a simple matter of adding it to the command line of the docker-machine command you run. An example is shown below:

mestery$ docker-machine create \
     --driver openstack \
     --openstack-image-name "Ubuntu Server 14.04.1 LTS (amd64 20140927) - Partner Image" \
     --openstack-flavor-name standard.xsmall \
     --openstack-tenant-id <tenant ID> \
     --openstack-auth-url https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/ \
     --openstack-endpoint-type publicURL \
     --openstack-region "region-a.geo-1" \
     --openstack-floatingip-pool Ext-Net \
     --openstack-username <username> \
     --openstack-password <password> \
     --openstack-ssh-user ubuntu \
     test-machine

As you can see, we’re creating a machine using an extra small flavor with an Ubuntu 14.04.1 LTS image. You’ll want to fill in the details around tenant ID, username, and password here.

Some Potential Gotchas

Note that docker-machine will use port 2376 to communicate with the docker daemon running on the VM in the HP Helion Public Cloud. To enable this access remotely, you’ll need to add a security group rule to allow this.
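As a hedged example (assuming the legacy nova CLI of the time and that the instance uses the default security group), the rule could look like this:

mestery$ nova secgroup-add-rule default tcp 2376 2376 0.0.0.0/0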

Another thing to note is that if you end up reusing the same floating IP across a few instances, the step where docker-machine logs into the guest and sets up docker will fail. It’s best to ensure the fingerprint for an older host which was previously using that floating IP is no longer in your ~/.ssh/known_hosts file.
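Clearing the stale entry is a one-liner with standard OpenSSH tooling:

mestery$ ssh-keygen -R <floating IP>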

Using Docker

Now that we’ve created the virtual machine, let’s point the Docker client at the new host:

mestery$ eval "$(docker-machine env test-machine)"

And for kicks, let’s just verify nothing is running on the host:

mestery$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
mestery$ 

Let’s follow some simple examples from networkstatic and try a few things out. First, let’s just run a simple Docker image with busybox and print out something interesting:

mestery$ docker run busybox echo networkstatic is awesome
Unable to find image 'busybox:latest' locally
latest: Pulling from busybox

cf2616975b4a: Pull complete 
6ce2e90b0bc7: Pull complete 
8c2e06607696: Already exists 
busybox:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.

Digest: sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b52004ee8502d
Status: Downloaded newer image for busybox:latest
networkstatic is awesome
mestery$ 

Pretty cool, right? Now, let’s run something a bit more useful:

mestery$ docker run -d -p 8000:80 nginx

We’ll need to add a security group rule to allow port 8000 in so we can get access to our new nginx container running on this VM in the HP Helion Public Cloud. Once we do, let’s run a curl command to see what’s there:

mestery$ curl $(docker-machine ip test-machine):8000

And you should see something like this:

<title>Welcome to nginx!</title>

Closing Thoughts

Docker Machine is a pretty cool tool for exploring Docker. Thanks to Brent for inspiring me to take a peek at this and at how it runs on the HP Helion Public Cloud! I encourage everyone to give Docker Machine a try on your public cloud of choice and get familiar with Docker and containers.

July 02, 2015 01:29 AM

July 01, 2015

eNovance Engineering Teams

Dive into Zuul – Gated commit system

What is Zuul

Zuul is software developed by the OpenStack community. It was developed as an efficient gated commit system, allowing projects to merge patches only after they pass a series of tests. It reduces the probability of breaking the master branch, for instance when unit tests or functional tests no longer pass on the tip of master.

It also performs well for running jobs when specific events occur like when a patch is proposed, merged, … or even when a project is tagged.

In this blog post we’ll try to explain how Zuul works and clarify some concepts through simple examples:

  • The independent pipeline
  • The dependent pipeline
  • The Zuul cloner
  • Cross-project testing

To ease the understanding of this blog post you should be familiar with Gerrit and Jenkins.

We wrote this article with the intent of diving deeper into Zuul use cases and sharing them with interested readers. We also wanted to improve the Zuul integration inside our CI/CD platform, Software Factory.

How does Zuul keep your master branch sane?

Keeping the master branch sane is difficult when:

  • validating a patch for a project takes a long time
  • the number of patch proposals submitted to a project is quite high

Zuul helps the project’s core reviewers (those who can approve the merge of a patch in master) to decide whether a patch can land into the master branch or not by ensuring the patch is always tested over the latest version of master prior to merging.

Zuul is designed to be coupled with a code review software like Gerrit. It allows code reviewers to validate patches on a project and to decide whether a patch is accepted (good to be merged on the master branch) or not (the patch needs more work or should be abandoned).

When the patch submission rate is high, keeping patches rebased on master’s HEAD is difficult, but validating a patch on HEAD is essential to keep the master branch sane. For example, between the moment a core reviewer decides to check a patch (A) and the moment he accepts it by submitting it to master, another reviewer may have merged another patch (B). In that situation, the tip of master might end up in an unexpected, undesirable or even broken state, because the first reviewer has only tested master HEAD + (A) and not master HEAD + (B) + (A).

Instead, Zuul listens to the Gerrit event stream and detects if a patch (we’ll call it (A)) was approved for merging by a core reviewer. If this is the case, Zuul runs the project test suite on top of the project master HEAD with patch (A) applied on it. If another patch (B) was being tested for the same project (therefore pending a merge) prior to the submission of patch (A), Zuul will run the test suite on HEAD + (B) + (A). Depending on the results, Zuul will notify Gerrit to merge the patch on master or to report a test failure. This behavior is handled by the dependent pipeline of Zuul (gate).

Furthermore, let’s say you have a functional test suite that takes at least 2 hours to run. Without a tool like Zuul you could at most merge 12 patches a day. Zuul increases this rate by running the tests in parallel while still respecting the order of acceptance of patches.

High level Zuul architecture

Zuul scheduler

This component listens to the Gerrit event stream and triggers actions to be performed depending on the events, based on a configuration file. The most important configuration file for Zuul is “/etc/zuul/layout.yaml”; it defines various settings such as:

  • the available pipelines
  • the conditions for a patch to enter in a pipeline
  • which jobs to run within a pipeline for a given project or a set of projects
  • how to report a job result
  • custom actions to perform

Zuul merger

The merger’s purpose is to set up temporary Git repositories and branches in order to ease the preparation of job environments. When a patch is scheduled to be tested, the scheduler will ask the merger to prepare a temporary branch (ZUUL_REF) where the patches have been merged on the tip of master of one or more projects, depending on what is required for the job. The Zuul scheduler will then pass details about that temporary branch to the job runner, allowing it to fetch the proper environment to start a job.

Zuul cloner

This is a handy Python script that helps to properly set up a job workspace with the patch(es) to be tested for one or more projects. The scheduler sets some environment variables when triggering the job runner to run a job, and the job runner can use those variables when preparing the workspace. The cloner knows how to interpret those variables, so it can be called at the beginning of a job script. It clones the master branch of every project needed from the main repository and fetches the patch(es) from the ZUUL_REF branch of the Zuul merger’s temporary repository.

Want to try out Zuul ?

In order to help you experiment easily with Zuul, we have set up a Dockerfile that builds a ready-to-use container with all the components you need to get started with commit gating, in case you don’t have your own setup or the time to deploy one properly.

Checkout the exzuul project and follow the README:

(embedded gist: http://gist.github.com/morucci/7549ea9c0502976a5976)

All examples below have been run on the exzuul container.

The container starts these main components:

  • Gerrit (Code Review)
  • Zuul (Gating)
  • Zuul status page
  • Jenkins (Job runner)
  • Apache (with git-http-backend)

Below is the basic architecture running when you start the container. The diagram also gives an idea of how the components interact.

(architecture diagram)

In this blog post we define jobs in Jenkins using Jenkins Job Builder (JJB). You can also find an interesting blog post about it here.

Independent and dependent Pipelines

The dependent and independent pipelines follow different behaviors:

  • The dependent pipeline is best used for commit gating (it merges patches)
  • The independent pipeline can be used to run jobs that can usually be run independently, like these:
    • Run tests to get early feedback on a patch, like smoke tests
    • Run periodic jobs, like checking the availability of external dependencies
    • Run post merge jobs, like building and uploading the project’s documentation
    • Run jobs when a tag is created, like building and uploading a binary for the project

In fact, you can have as many pipelines as you want and bind them to custom events occurring on the Gerrit event stream. That means you can run jobs at any moment during the life cycle of a patch or a project.

The exzuul container bundles Zuul with the following pre-defined pipelines:

  • check (Independent pipeline)
  • gate (Dependent pipeline)

The independent pipeline

Below is the configuration of the independent pipeline named ‘check’ in /etc/zuul/layout.yaml.

- name: check
  description: Newly uploaded patchsets enter this pipeline to receive an initial +/-1 Verified vote from Jenkins.
  failure-message: Build failed.
  manager: IndependentPipelineManager
  precedence: low
  require:
    open: True
    current-patchset: True
  trigger:
    gerrit:
      - event: patchset-created
      - event: comment-added
        comment: (?i)recheck
  start:
    gerrit:
      verified: 0
  success:
    gerrit:
      verified: 1
  failure:
    gerrit:
      verified: -1

According to the trigger section, this pipeline runs jobs if a new patch is submitted (Gerrit event “patchset-created”) or if a comment containing the keyword “recheck” is posted on a review thread (Gerrit event “comment-added”). You can see that the pipeline reports a score of 0 on the “Verified” label when the job starts, -1 if the job fails and +1 if it succeeds.

In order to test that pipeline we will add a small project called ‘democlient’ on Gerrit and we are going to run its unit tests through zuul and jenkins. As the pipeline is already defined, we just need to associate the project name with a pipeline and a job name. In order to do so, add the following in /etc/zuul/layout.yaml under the “projects” section:

- name: democlient
  check:
    - democlient-unit-tests

Validate layout.yaml and force Zuul to reload its configuration by running the following command on the container shell:

(embedded gist: http://gist.github.com/morucci/547d07386db10f9e8bcb)
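As a hedged sketch of the reload half of that gist (Zuul’s scheduler re-reads its layout when it receives SIGHUP; the validation step and the exact process management inside the container may differ):

kill -HUP $(pidof zuul-server)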

Zuul is ready to ask the job runner (Jenkins) to run democlient-unit-tests inside the check pipeline, but we need to define that job on Jenkins first. To do that we use JJB (Jenkins Job Builder).

Edit /etc/jenkins_jobs/jobs/jjb.yaml and add the following:

- job:
     name: democlient-unit-tests
     defaults: global
     builders:
       - shell: |
           env | grep ZUUL
           zuul-cloner http://ci.localdomain:8080 $ZUUL_PROJECT
           cd $ZUUL_PROJECT
           ./run_tests.sh

- project:
     name: democlient
     node: master
     jobs:
       - democlient-unit-tests

Submit the job on Jenkins via JJB by running the following command on the container shell:

(embedded gist: http://gist.github.com/morucci/3bbb6b1e3f36492ab2e4)
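The exact invocation is in the gist; with Jenkins Job Builder it typically boils down to something like the following, assuming JJB picks up its default configuration from /etc/jenkins_jobs/jenkins_jobs.ini:

jenkins-jobs update /etc/jenkins_jobs/jobs/jjb.yaml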

Now initialize the democlient project on the Gerrit UI. For the sake of simplicity we are going to use the default Admin account (already created on Gerrit) to perform user actions on Gerrit:

  • Add your public ssh key to the Gerrit Admin account settings.
  • Create a project called “democlient” using the Admin account on Gerrit. Be sure to check “create an empty commit” before creation.

Configure your local “democlient” repository for reviewing with gerrit, and push the initial code on “democlient” by running these commands on your host:

(embedded gist: http://gist.github.com/morucci/a0d477598ce3c8fa86d1)
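The gist holds the precise commands; the workflow is the standard git-review one, roughly as follows (the Gerrit account, host and SSH port are assumptions based on the container defaults):

git clone ssh://admin@ci.localdomain:29418/democlient
cd democlient
git review -s     # installs the commit-msg hook and the gerrit remote
# add the initial code, including run_tests.sh, then commit it
git review        # pushes the change to Gerrit for review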

You should see that the patch freshly submitted via git review has been “Verified” by Zuul, which gave it a +1 vote.

Your patch has entered the check pipeline because it matches the trigger conditions defined inside the pipeline.


The job related to the project and the check pipeline begins with running zuul-cloner in order to prepare the workspace. We do not rely on any Jenkins plugin to prepare the environment; instead zuul-cloner receives a bunch of environment variables that it uses to fetch the patch.

The most important variables are ZUUL_PROJECT, ZUUL_URL, and ZUUL_REF. Zuul-cloner clones democlient from Gerrit at the current tip of the master branch, then fetches ZUUL_REF from the zuul-merger’s temporary democlient repository. Indeed, the merger has been instructed to prepare a specific branch called ZUUL_REF where the change has been applied.

Below are the logs of the job console where you can see the output of the cloner.

...
+ env
ZUUL_PROJECT=democlient
ZUUL_BRANCH=master
ZUUL_URL=http://ci.localdomain/p
ZUUL_CHANGE=1
ZUUL_CHANGES=democlient:master:refs/changes/01/1/2
ZUUL_REF=refs/zuul/master/Z86a40a16c3064a9ca9f48d590d89e2b7
ZUUL_CHANGE_IDS=1,2
ZUUL_PIPELINE=check
ZUUL_COMMIT=03285f35e11a225af0a6da55d647871ece06cfde
ZUUL_PATCHSET=2
ZUUL_UUID=1d710aa40eea46198143513241f06309
+ zuul-cloner http://ci.localdomain:8080 democlient
INFO:zuul.CloneMapper:Workspace path set to: /var/lib/jenkins/workspace/democlient-unit-tests
INFO:zuul.CloneMapper:Mapping projects to workspace...
INFO:zuul.CloneMapper:  democlient - /var/lib/jenkins/workspace/democlient-unit-tests/democlient
INFO:zuul.CloneMapper:Expansion completed.
INFO:zuul.Cloner:Preparing 1 repositories
INFO:zuul.Cloner:Creating repo democlient from upstream http://ci.localdomain:8080/democlient
INFO:zuul.Cloner:upstream repo has branch master
INFO:zuul.Cloner:Prepared democlient repo with commit 03285f35e11a225af0a6da55d647871ece06cfde
INFO:zuul.Cloner:Prepared all repositories
+ cd democlient
+ ./run_tests.sh
...

We can verify inside zuul-merger’s democlient repository that the tip of ZUUL_REF corresponds to your commit. Run the following command in the container shell:

(embedded gist: http://gist.github.com/morucci/a019487d8430fa6e3efe)
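If you want to reproduce this check yourself, a hedged sketch (the merger’s git directory defaults to /var/lib/zuul/git but your zuul.conf may say otherwise, and the ZUUL_REF value is the one printed in your own job log):

cd /var/lib/zuul/git/democlient
git log --oneline -1 refs/zuul/master/Z86a40a16c3064a9ca9f48d590d89e2b7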

Now let’s say you or someone else submits a patch that depends on your previous, not yet merged patch. Zuul-merger will prepare a branch under a new ZUUL_REF that will include your previous patch and the new one. On the host shell:

(embedded gist: http://gist.github.com/morucci/db0590d38c509fd83a19)


In the job console logs you can see that the variable ZUUL_CHANGES now mentions two changes, separated by a caret:

...
+ env
ZUUL_PROJECT=democlient
...
ZUUL_CHANGES=democlient:master:refs/changes/01/1/2^democlient:master:refs/changes/02/2/1
ZUUL_REF=refs/zuul/master/Z6c1cc4aea576448d9dfdcdaae46c9732
ZUUL_CHANGE_IDS=1,2 2,1
ZUUL_PIPELINE=check
ZUUL_COMMIT=d154875abac954a9ddeccfa7e875a972a3d3353b
ZUUL_PATCHSET=1
ZUUL_UUID=de438964783e41a18535f4a24b36b12e
+ zuul-cloner http://ci.localdomain:8080 democlient
...
INFO:zuul.Cloner:Prepared democlient repo with commit d154875abac954a9ddeccfa7e875a972a3d3353b
INFO:zuul.Cloner:Prepared all repositories
+ cd democlient
+ ./run_tests.sh
...

Let’s have a look at the Zuul-merger democlient repository:

(embedded gist: http://gist.github.com/morucci/db197cc08b7cb357017d)

Zuul-cloner prepared the workspace in order to run the unit tests with both 03285f3 and d154875 included.

The dependent pipeline

Below is the configuration of the dependent pipeline named ‘gate’ in /etc/zuul/layout.yaml.

- name: gate
  description: Changes that have been approved by core developers are enqueued in order in this pipeline
  manager: DependentPipelineManager
  precedence: normal
  require:
    open: True
    current-patchset: True
    approval:
      - verified: [1, 2]
        username: zuul
      - code-review: 2
  trigger:
    gerrit:
      - event: comment-added
        approval:
          - code-review: 2
      - event: comment-added
        approval:
          - verified: 1
        username: zuul
  start:
    gerrit:
      verified: 0
  success:
    gerrit:
      verified: 2
      submit: true
  failure:
    gerrit:
      verified: -2

Two elements are really important in the gate pipeline definition:

  • the manager is “DependentPipelineManager”
  • submit is “True”

The former tells Zuul to handle patches in that pipeline with the DependentPipelineManager logic, meaning that patches are dealt with according to the status of other patches currently in the pipeline. The latter tells Zuul to let Gerrit merge the patch on the branch if the job succeeds.

The trigger section states that a patch will enter the gate pipeline if:

  • Zuul already set a score of +1 “Verified” (when the patch went through the check pipeline)
  • a core reviewer set a score of +2 “Code Review”

If one of these events appears on the Gerrit stream Zuul will check that the patch also matches the “require” section. If that’s the case, the jobs defined for the project under the gate pipeline will run.

Before we experiment with this pipeline, be sure to define the job we want to run for democlient within the gate pipeline. To do so, update the democlient section in /etc/zuul/layout.yaml and add the gate subsection:

- name: democlient
  check:
    - democlient-unit-tests
  gate:
    - democlient-unit-tests

Validate layout.yaml and force Zuul to reload its configuration.

(embedded gist: http://gist.github.com/morucci/547d07386db10f9e8bcb)

Now let’s merge the “push initial code” patch by setting a +2 Code Review on that pending patch. As soon as you set +2, Zuul handles the event and triggers the “democlient-unit-tests” job. The job should pass and the patch should now be merged on master.


Keep ordering

In order to verify that the gate pipeline manages to keep the validation ordering by merging working patches in the same order they appeared in the pipeline, we are going to create 2 patches:

  • p1: A patch where we set a delay inside run_tests.sh (60 s)
  • p2: A patch where we set a delay inside run_tests.sh (30 s)

Then we’ll accept both patches by issuing +2 Code Reviews. The order is important: p1 then p2.

As soon as we accept both patches with +2 Code Reviews, Zuul will have to handle two jobs running in the gate pipeline at the same time:

  • A job with p1 applied on democlient HEAD
  • A job with p1 applied on democlient HEAD then p2 applied on top of p1

On the host shell:

(embedded gist: http://gist.github.com/morucci/fde6d519fd0f537e001a)

You need to wait until both patches have been verified by the check pipeline; then you can set +2 Code Review on each patch. It is important to set +2 on both patches at almost the same time, so that they are in the gate pipeline concurrently. Use the commands below.

You will need both patches’ change numbers from the Gerrit web UI. In my case p1’s was “3,1” and p2’s was “4,1”.

(embedded gist: http://gist.github.com/morucci/404a43a7f44ff669747c)
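The gist contains the exact commands; the same votes can also be cast over the Gerrit SSH API, for example (assuming the Admin account and Gerrit’s default SSH port):

ssh -p 29418 admin@ci.localdomain gerrit review --code-review +2 3,1
ssh -p 29418 admin@ci.localdomain gerrit review --code-review +2 4,1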

Now both patches are in the gate pipeline and should pass the related job, then should be merged by Zuul.


Below are the job console logs for the zuul cloner when testing patch p2 (4,1), where you can see that the patches currently tested in the job environment are (p1) 3,1 and (p2) 4,1.

...
ZUUL_PROJECT=democlient
...
ZUUL_CHANGES=democlient:master:refs/changes/03/3/1^democlient:master:refs/changes/04/4/1
ZUUL_REF=refs/zuul/master/Z316eaf2e04634720bfada0217695d0af
ZUUL_CHANGE_IDS=3,1 4,1
ZUUL_PIPELINE=gate
+ zuul-cloner http://ci.localdomain:8080 democlient
INFO:zuul.CloneMapper:Workspace path set to: /var/lib/jenkins/workspace/democlient-unit-tests@2
INFO:zuul.CloneMapper:Mapping projects to workspace...
INFO:zuul.CloneMapper:  democlient - /var/lib/jenkins/workspace/democlient-unit-tests@2/democlient
INFO:zuul.CloneMapper:Expansion completed.
INFO:zuul.Cloner:Preparing 1 repositories
INFO:zuul.Cloner:Creating repo democlient from upstream http://ci.localdomain:8080/democlient
INFO:zuul.Cloner:upstream repo has branch master
INFO:zuul.Cloner:Prepared democlient repo with commit 2da5892d57fe506ab392d026a37b660007cc783f
INFO:zuul.Cloner:Prepared all repositories
+ cd democlient
+ ./run_tests.sh
...

We can check which patches are behind $ZUUL_REF; the important point is that p1 was applied before p2 when zuul-merger prepared the environment. Since p1 was validated before p2 and is still in the pipeline (its job has not finished), Zuul assumes p1 will pass the job and will be the next patch merged on master. So instead of waiting for the end of the p1 test and its merge, Zuul runs the job for p2 in parallel, with p1 included. This behavior is specific to the dependent pipeline.

Gist: http://gist.github.com/morucci/bd431090d57e87a5b21f
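
One way to reproduce this check, using the upstream URL and the ZUUL_REF value printed in the job log above (a rough sketch; the clone URL, ref and the assumption that zuul-merger serves its temporary refs under ZUUL_URL are specific to this setup):

# Clone the project from the upstream URL used in the job
git clone http://ci.localdomain:8080/democlient
cd democlient
# Fetch the temporary ref prepared by zuul-merger (served under ZUUL_URL)
git fetch http://ci.localdomain/p/democlient refs/zuul/master/Z316eaf2e04634720bfada0217695d0af
# List the patches stacked on top of master inside that ref (p1 first, then p2)
git log --oneline origin/master..FETCH_HEAD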

Also below are the relevant log lines showing that Zuul merged both patches in the correct order. If you look carefully you'll see the jobs for 3,1 (unique-id: d36a3b4f12e643faba03bbaf53bdc571) and 4,1 (unique-id: 108ef7b8a33b4b1a99c601941b0f94da) started at almost the same time, but the job for 4,1 finished before the other. This is expected, as we set SLEEP_DELAY to 30 seconds for 4,1 and 60 seconds for 3,1. The important point is that Zuul kept the order by merging 3,1 before 4,1.

Gist: http://gist.github.com/morucci/e0d29ef5c505a9bab582

Discard broken patches

As previously said, Zuul manages to run jobs in parallel by assuming that patches currently in the gate pipeline will pass their tests and be merged on the master branch. But what happens if a patch in the pipeline fails? Let's try.

We are going to create 3 patches:

  • r1: An empty file addition called “dummy1.txt”
  • r2: A modification in run_tests.sh that breaks the tests if dummy1.txt exists (a sketch follows the gist below)
  • r3: Another file addition “dummy2.txt”

Gist: http://gist.github.com/morucci/63225594866092f1dff2
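
For illustration, r2's modification to run_tests.sh might be a guard like the hypothetical sketch below (the real change is in the gist above):

# Hypothetical guard added to run_tests.sh by r2: fail when dummy1.txt exists
if [ -f dummy1.txt ]; then
    echo "dummy1.txt found: failing the tests"
    exit 1
fi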

You need to wait for all patches to pass through the check pipeline and get the +1 Verified from Zuul. Then accept them for merging by setting +2 Code Review in the correct order (to trigger the bad behavior introduced by r2):

Gist: http://gist.github.com/morucci/5b66abe5a80421d54386


Below are the logs of Zuul:

Gist: http://gist.github.com/morucci/dbcdfb9e3afc90d18c33

Four jobs have been started by Zuul:

  • HEAD + r1
  • HEAD + r1 + r2 (Failed)
  • HEAD + r1 + r2 + r3 (Canceled)
  • HEAD + r1 + r3

From the logs we clearly see that Zuul started the job to verify r3 (7,1) with r1 (5,1) and r2 (6,1) applied on top of master, but since r2 had already failed, Zuul cancelled the job for r3. Zuul then started another job with r1 and r3 applied on top of master, bypassing the faulty patch r2.

If you have a look at the Gerrit web UI you'll see that only r1 and r3 have been merged and r2 has been rejected with a "-2 Verified". It's up to the author to fix it and submit a new patchset.

Four jobs have been started; that also means zuul-merger created four $ZUUL_REF references.

So thanks to the gate pipeline we haven't merged r2 on the master branch. Note that r2 passed the tests in the check pipeline because r1 was not included in that test environment.

In a context where more than one person is allowed to accept a patch, and where jobs can take a long time to run, you can imagine how much time this behavior saves. It minimizes the need to revert commits to fix the master branch.

Cross-project jobs

Let's say you have two project repositories on Gerrit that interact with each other. For instance, this can be the case for:

  • a client and a server
  • a software and its plugins

Suppose you also have a test, such as a functional test, that verifies the plugin interacts correctly with the software. You want to trigger the functional test when a patch is proposed on either the software or the plugin.

  • If a patch is proposed on the software then you want to run the functional test with the patch applied on software master HEAD, and using the plugin master HEAD.
  • If a patch is proposed on the plugin you want to run the functional test with the patch applied on plugin master HEAD, and using the software master HEAD.

Again, zuul-cloner can be used to prepare the functional test's job environment, thanks to the git repositories prepared by zuul-merger.

We are going to test that with a second project called “demolib” that is a dummy library designed to run with “democlient”.

First create the demolib project on Gerrit and set up Zuul/Jenkins for testing this new project.

  • Set up Zuul/Jenkins
  • Create a project called "demolib" using the Admin account on Gerrit (be sure to check "create an empty commit")

In /etc/jenkins_jobs/jobs/jjb.yaml

- job:
     name: demolib-unit-tests
     defaults: global
     builders:
       - shell: |
           env | grep ZUUL
           zuul-cloner http://ci.localdomain:8080 $ZUUL_PROJECT
           cd $ZUUL_PROJECT
           ./run_tests.sh

- job:
     name: demo-functional-tests
     defaults: global
     builders:
       - shell: |
           env | grep ZUUL
           zuul-cloner http://ci.localdomain:8080 democlient
           zuul-cloner http://ci.localdomain:8080 demolib
           cd democlient
           DEMOLIBPATH=../demolib ./run_functional-tests.sh

- project:
     name: democlient
     node: master
     jobs:
       - democlient-unit-tests
       - demo-functional-tests

- project:
     name: demolib
     node: master
     jobs:
       - demolib-unit-tests
       - demo-functional-tests

Above we added a job to run the unit tests for the project demolib.

We also added a job, "demo-functional-tests", that will run the functional tests of democlient; democlient needs demolib to behave as expected.

The demo-functional-tests job uses zuul-cloner to fetch democlient and demolib into the workspace, then starts run_functional-tests.sh, setting the path to the demolib codebase in the job's workspace (a hypothetical sketch of such a script follows).
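
The content of run_functional-tests.sh is not shown in this post; a hypothetical minimal version that honours the DEMOLIBPATH variable could look like this:

#!/bin/bash
# Hypothetical run_functional-tests.sh for democlient
# DEMOLIBPATH points at the demolib checkout prepared by zuul-cloner
DEMOLIBPATH=${DEMOLIBPATH:-../demolib}
if [ ! -d "$DEMOLIBPATH" ]; then
    echo "demolib not found at $DEMOLIBPATH"
    exit 1
fi
# ... exercise democlient against the library found at $DEMOLIBPATH ...
echo "Functional tests passed"
exit 0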

We also need to configure Zuul to associate the additional jobs with the related projects, so modify /etc/zuul/layout.yaml as shown below:

projects:
  - name: democlient
    check:
      - democlient-unit-tests
      - demo-functional-tests
    gate:
      - democlient-unit-tests
      - demo-functional-tests
  - name: demolib
    check:
      - demolib-unit-tests
      - demo-functional-tests
    gate:
      - demolib-unit-tests
      - demo-functional-tests

Here we configure Zuul to start "demolib-unit-tests" in the check and gate pipelines for the demolib project. We also ask Zuul to run "demo-functional-tests" for both projects, again in both the check and gate pipelines.

Run Jenkins Job Builder and restart Zuul:

Gist: http://gist.github.com/morucci/a883bdfb77533decc9de
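
As a rough sketch, this can be done as below; the jenkins_jobs.ini path and the Zuul service names are assumptions for this setup:

# Regenerate the Jenkins jobs from the YAML definitions
jenkins-jobs --conf /etc/jenkins_jobs/jenkins_jobs.ini update /etc/jenkins_jobs/jobs/jjb.yaml
# Restart the Zuul services so the new layout is taken into account
service zuul restart
service zuul-merger restart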

And push the initial code on "demolib" (commands to be performed from your laptop):

Gist: http://gist.github.com/morucci/0c690ce3ed4bc2645de9


By looking at the job console logs of “demo-functional-tests” you can see an output like this:

...
ZUUL_PROJECT=demolib
ZUUL_BRANCH=master
ZUUL_URL=http://ci.localdomain/p
ZUUL_CHANGE=8
ZUUL_CHANGES=demolib:master:refs/changes/08/8/1
ZUUL_REF=refs/zuul/master/Z0d55ecb440c1415ca61cf36ad690e2cd
ZUUL_CHANGE_IDS=8,1
ZUUL_PIPELINE=check
ZUUL_COMMIT=e8c81dd33ece1b0f651a143aff9f0e6a2df2bddd
ZUUL_PATCHSET=1
ZUUL_UUID=84bef62dfb0a44f1bf62df631aacd7d2
+ zuul-cloner http://ci.localdomain:8080 democlient
INFO:zuul.CloneMapper:Workspace path set to: /var/lib/jenkins/workspace/demo-functional-tests
INFO:zuul.CloneMapper:Mapping projects to workspace...
INFO:zuul.CloneMapper:  democlient - /var/lib/jenkins/workspace/demo-functional-tests/democlient
INFO:zuul.CloneMapper:Expansion completed.
INFO:zuul.Cloner:Preparing 1 repositories
INFO:zuul.Cloner:Creating repo democlient from upstream http://ci.localdomain:8080/democlient
INFO:zuul.Cloner:upstream repo has branch master
INFO:zuul.Cloner:Falling back to branch master
INFO:zuul.Cloner:Prepared democlient repo with branch master
INFO:zuul.Cloner:Prepared all repositories
+ zuul-cloner http://ci.localdomain:8080 demolib
INFO:zuul.CloneMapper:Workspace path set to: /var/lib/jenkins/workspace/demo-functional-tests
INFO:zuul.CloneMapper:Mapping projects to workspace...
INFO:zuul.CloneMapper:  demolib - /var/lib/jenkins/workspace/demo-functional-tests/demolib
INFO:zuul.CloneMapper:Expansion completed.
INFO:zuul.Cloner:Preparing 1 repositories
INFO:zuul.Cloner:Creating repo demolib from upstream http://ci.localdomain:8080/demolib
INFO:zuul.Cloner:upstream repo has branch master
INFO:zuul.Cloner:Prepared demolib repo with commit e8c81dd33ece1b0f651a143aff9f0e6a2df2bddd
INFO:zuul.Cloner:Prepared all repositories
+ cd democlient
+ DEMOLIBPATH=../demolib
+ ./run_functional-tests.sh
...

Below is the state of $ZUUL_REF on democlient and demolib:

Gist: http://gist.github.com/morucci/37ad6c4af5f6ef1b4ba7

Here the functional test job, via zuul-cloner, has prepared the job environment by setting demolib to the proper commit fetched from $ZUUL_REF. For democlient, zuul-cloner fell back to the master branch because the merger had not prepared a temporary branch under $ZUUL_REF, since one wasn't needed.

But in some circumstances you’ll have a branch referenced by $ZUUL_REF for multiple projects:

  1. The patch has been accepted and passes through the dependent pipeline "gate", but another patch, on democlient, was accepted earlier and is also in the gate pipeline, ahead of the demolib patch.
  2. The commit message of the demolib patch declares one or more dependencies via the keyword "Depends-On".

You can approve the patch on demolib by setting +2 CR. The patch passes through the gate pipeline and is merged.

Now we will add two more patches and manage to have them go through the dependent pipeline gate at the same time:

Let’s create a patch on democlient and on demolib:

Gist: http://gist.github.com/morucci/032d5a05763f0a487261

Then we accept both patches, c1 (on democlient) then d1 (on demolib):

Gist: http://gist.github.com/morucci/7ad86aef636ed5d7b5f5


Below are the job console logs of demo-functional-tests in the gate pipeline for patch d1 against demolib, where you can see that zuul-cloner fetched the tip of $ZUUL_REF for both projects. Indeed, since demolib and democlient share at least one job with the same name, "demo-functional-tests", Zuul built a shared queue between demolib and democlient and created the same $ZUUL_REF on both repositories.

The gate pipeline behavior explained in the previous section is thus applied across two or more project repositories; patch d1 (on demolib) has been validated via the functional test with patch c1 applied on democlient.

...
ZUUL_PROJECT=demolib
ZUUL_CHANGES=democlient:master:refs/changes/03/3/1^demolib:master:refs/changes/04/4/1
ZUUL_REF=refs/zuul/master/Ze6d383a9614041f9a4e43879996fd1f2
ZUUL_CHANGE_IDS=9,1 10,1
...
+ zuul-cloner http://ci.localdomain:8080 democlient
INFO:zuul.CloneMapper:Workspace path set to: /var/lib/jenkins/workspace/demo-functional-tests
INFO:zuul.CloneMapper:Mapping projects to workspace...
INFO:zuul.CloneMapper:  democlient - /var/lib/jenkins/workspace/demo-functional-tests/democlient
INFO:zuul.CloneMapper:Expansion completed.
INFO:zuul.Cloner:Preparing 1 repositories
INFO:zuul.Cloner:Creating repo democlient from upstream http://ci.localdomain:8080/democlient
INFO:zuul.Cloner:upstream repo has branch master
INFO:zuul.Cloner:Prepared democlient repo with commit 1e2150e51d25505a8a8f26f4c23d1a4fc136ee91
INFO:zuul.Cloner:Prepared all repositories
+ zuul-cloner http://ci.localdomain:8080 demolib
INFO:zuul.CloneMapper:Workspace path set to: /var/lib/jenkins/workspace/demo-functional-tests
INFO:zuul.CloneMapper:Mapping projects to workspace...
INFO:zuul.CloneMapper:  demolib - /var/lib/jenkins/workspace/demo-functional-tests/demolib
INFO:zuul.CloneMapper:Expansion completed.
INFO:zuul.Cloner:Preparing 1 repositories
INFO:zuul.Cloner:Creating repo demolib from upstream http://ci.localdomain:8080/demolib
INFO:zuul.Cloner:upstream repo has branch master
INFO:zuul.Cloner:Prepared demolib repo with commit 600e36667c7850cb856e1281baad502eb05aa7c2
INFO:zuul.Cloner:Prepared all repositories
...

We can verify which commits were under $ZUUL_REF:

Gist: http://gist.github.com/morucci/4e73e8a46f071a7de879

Depends-on

Zuul can detect one or multiple mentions of the following string in a commit message:
"Depends-on: <changeid>"

When a patch needs another one from the same project as its base, it is easy to simply base the patch on that specific parent commit SHA. But when it depends on a patch (not yet merged) from another project, you need to provide the extra information via "Depends-On".
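
As a purely hypothetical example (both Change-Id values below are invented), a democlient commit message declaring a dependency on a demolib patch could look like this:

# Hypothetical commit message; the Change-Id values are invented
Add new feature

Implement the new feature in democlient. It requires the
"Add new function" patch currently under review on demolib.

Depends-On: I6a318cbf4d2e4e95a3f0e4e9b8c7d6a5f4e3d2c1
Change-Id: I1f2e3d4c5b6a79880716253443526170fedcba98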

For instance, let’s assume you want to add a new function to demolib:

Gist: http://gist.github.com/morucci/a969b4162906d2fe0620

Retrieve the Change-Id from your last commit message; with it you'll be able to declare this patch as a dependency in another patch.
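
A minimal way to grab it from the command line:

# Show the Change-Id footer of the most recent commit
git log -1 | grep Change-Id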

Gist: http://gist.github.com/morucci/88642a2971b61b5642d1

Use the “Depends-On” keyword with the Change-Id to indicate that democlient needs that new function in order to implement a new feature.

Gist: http://gist.github.com/morucci/b2f788117fc899409f5f


Below are the logs of demo-functional-tests in the check pipeline for our patch g1 on democlient.

...
ZUUL_PROJECT=democlient
ZUUL_CHANGES=demolib:master:refs/changes/11/11/1^democlient:master:refs/changes/12/12/1
ZUUL_REF=refs/zuul/master/Z18f19080f4c94d4c85bf689ad58588a7
ZUUL_CHANGE_IDS=11,1 12,1
ZUUL_PIPELINE=check
...
+ zuul-cloner http://ci.localdomain:8080 democlient
INFO:zuul.CloneMapper:Workspace path set to: /var/lib/jenkins/workspace/demo-functional-tests
INFO:zuul.CloneMapper:Mapping projects to workspace...
INFO:zuul.CloneMapper:  democlient - /var/lib/jenkins/workspace/demo-functional-tests/democlient
INFO:zuul.CloneMapper:Expansion completed.
INFO:zuul.Cloner:Preparing 1 repositories
INFO:zuul.Cloner:Creating repo democlient from upstream http://ci.localdomain:8080/democlient
INFO:zuul.Cloner:upstream repo has branch master
INFO:zuul.Cloner:Prepared democlient repo with commit c7aeebda47e0548ed7b7dfa4fb43c660c532ee7a
INFO:zuul.Cloner:Prepared all repositories
+ zuul-cloner http://ci.localdomain:8080 demolib
INFO:zuul.CloneMapper:Workspace path set to: /var/lib/jenkins/workspace/demo-functional-tests
INFO:zuul.CloneMapper:Mapping projects to workspace...
INFO:zuul.CloneMapper:  demolib - /var/lib/jenkins/workspace/demo-functional-tests/demolib
INFO:zuul.CloneMapper:Expansion completed.
INFO:zuul.Cloner:Preparing 1 repositories
INFO:zuul.Cloner:Creating repo demolib from upstream http://ci.localdomain:8080/demolib
INFO:zuul.Cloner:upstream repo has branch master
INFO:zuul.Cloner:Prepared demolib repo with commit c6bfc270799e522fe40a95125c8810f0176998bc
INFO:zuul.Cloner:Prepared all repositories
...

If we look at the tips of the $ZUUL_REF branches on the merger repositories, we can see the patch (Add new function) we declared as a dependency of our patch (Add new feature):

Gist: http://gist.github.com/morucci/2b92e796ea63fb69534b

The dependent pipeline “gate” supports “Depends-On” as well.

"Depends-On" is also useful to ensure a patch cannot be merged if a cross-project dependency has not been merged first, or is not ahead of it in the gate pipeline. For instance, here are the Zuul logs if you set +2 CR on "c7aeebd Add new feature" before "Add new function" is merged.

Gist: http://gist.github.com/morucci/f89d2dc2b985e64a9e9d

So how could you use Zuul in your company?

Here are some use cases where Zuul can be useful:

  • You already have Gerrit and Jenkins as a CI platform and you want to perform automatic gating with Zuul (dependent pipeline). In this case Zuul should be easy to integrate in such platform.
  • Your software is based on OpenStack components and you want to be confident that the next patches that will land on the master branch won’t break your software. Zuul can be used to build a “third party CI” and react to reviews.openstack.org events.
  • You want to implement a Continuous Integration/Continuous Deployment workflow for your software; then you can bind the required jobs on pipelines such as check, gate (for Continuous Integration and ensuring code quality) and post (for Continuous Deployment of master’s HEAD in production).

For more information about Zuul have a look at the official documentation.

If you want to see Zuul in action jump here.

As we think Zuul is a really interesting component to have in a software development workflow, we have integrated it into Software Factory, an open source CI/CD platform we develop. You can check out what we do here.

And see it in action here.

by Fabien Boucher at July 01, 2015 10:13 AM

June 30, 2015

OpenStack Superuser

Tips for getting a travel grant to the next OpenStack Summit

OpenStack runs on the power of key contributors.

If you have helped out and want to attend the upcoming OpenStack Summit in Tokyo but need funds for travel, lodging or a conference pass, the Travel Support Program is here for you.

For every Summit, the OpenStack Foundation funds around 20 dedicated contributors to attend the five-day conference. You don't have to be a code jockey, either. In addition to developers and reviewers, the Support program welcomes documentation writers, organizers of user groups around the world, translators, operators and Ask moderators. (The Support program doesn't include university students, however, who are encouraged to apply for a discounted registration pass.)

To boost the odds of getting your application accepted, Superuser talked to Allison Price, marketing coordinator at the OpenStack Foundation, who also participates in voting on the applications.

Although applying is a quick process, remember to frame your request clearly. Spend some time answering the question about why you want to attend the Summit. If you make it all about how you’d like to network or visit the town where the summit is taking place, your request is likely to get voted down.

“The biggest common mistake people make is not conveying their value to the community,” says Price. “Focus on what you can contribute to the discussions or pinpoint sessions that would be useful to your business and you have a much better chance.”

She shared some concrete examples of successful applications:

  • “I have contributed to the OpenStack Dashboard (Horizon) and I’d like to attend Horizon's design sessions with the goal of making stronger contributions in the next cycle…”
  • “I’ve been involved with the OpenStack project since Bexar days and have played a critical role in developing the OpenStack community in my home country. I’m also the founder and the leader of the OpenStack user group here and an OpenStack Ambassador. I plan to keep driving community adoption and involvement in my region.”

And some of those that were too vague to get approved:

  • “I am very interested in cloud computing.”
  • “I want to connect with people in open source community and the OpenStack summit provides a great opportunity to do that.”
  • “I would like to attend mostly for the networking and to meet other OpenStack developers…This event will be a great opportunity for me to make new friends.”

What are some other reasons applications get rejected? Keep it friendly, Price says, noting that applications criticizing competitors or trash-talking other applicants definitely strike the wrong note.

Applications are voted on by the Travel Committee, which is made up of three people each from the User Committee, Board of Directors, OpenStack Ambassadors, Project Team Leads/Technical Committee members and Foundation staff. The composition of the committee is refreshed for each Summit.

Asking your company to pay for part of the expenses or finding a buddy to room with won't influence your chances of getting in, Price says. However, she does recommend at least asking if your company will cover some of the costs, because often your employer is happy to chip in and it allows the Foundation to help more people.

Price has a final recommendation for applicants: if you need a visa to travel, make sure to request it in time. For each Summit, there have been a number of grantees who haven’t made it because of paperwork.

“It’s too bad when people get accepted but can’t make it to the Summit because their documents don’t arrive in time,” she says. “That’s a spot we could’ve filled with another great contributor.”


Cover Photo by Mario Mancuso // CC BY NC; Recipients of the Atlanta Summit grants.

by Nicole Martinelli at June 30, 2015 10:34 PM

Blazing the trail for OpenStack in South Korea

Even in tech-crazy South Korea, Nalee Jang is a rarity. The 35-year-old was one of the country's first software coders for OpenStack. She got in early, and that meant blazing the trail for others.

“When I started out four years ago, there were no user guides, experienced engineers, or instructions available,” says Jang. "Everything I learned, I taught myself.” If going it alone was difficult, it didn't deter her. “I would not give up learning,” she remembers. “I worked hard to develop my knowledge.”

Jang’s hard work and determination paid off - in February 2015, she was appointed senior engineer at Cloudike, where she now manages the installation of OpenStack in cloud storage solutions. She is one of only a handful of women spearheading the industry in Korea, despite the country's reputation for rivaling Silicon Valley.

She also stepped up recently at the OpenStack Summit Vancouver, joining fellow women community members, Shilla Saebi, Rainya Mosher, Elizabeth Joseph, Alexandra Settle and Radha Ratnaparkhi for a panel moderated by Beth Cohen. Sponsored by the Women of OpenStack, the group shared tips on how women can amplify their voices when they are the minority.


Jang (in white jacket) and the panel at her OpenStack Summit Vancouver session.

"This was my first OpenStack Summit, and during the session, I had the opportunity to discuss the OpenStack Korea user group," says Jang. The user group currently counts around 3,800 men and 50 women. "Almost all of the community leaders are male, but I am a woman developer, engineer and user group leader, so I was happy to share my experiences."

A vital support network

With over 20,000 members in 164 countries, OpenStack is the fastest growing open-source community in the world. At its heart is a belief in community sharing, and a drive for the inclusion of all, no matter their age, race, or gender.

This is a sentiment echoed by Jang, who, when she was unable to find the solution to a coding problem back in 2011, would turn to her peers, “When I was stuck, I asked the other OpenStack community members for help.”

Fast forward to 2014, and Jang released “The Art of OpenStack” which was designed to share her experiences and knowledge with others in Korea now starting out in the field. She was also pivotal in developing the country’s first OpenStack seminar, OpenStack Day Korea, held in January of this year. Around 800 participants took part in the event – a huge success considering the virtual absence of the industry in Korea just a few years prior.

To help ease the path for women tech contributors, Jang also mentors young female software engineers.

She has been a long-time adviser to engineering majors at Seoul's Kookmin University, as well as providing career planning and psychosocial support to budding female engineers.

“Working together allows for varied views, insights and the sharing of knowledge.” says Jang.

Women like Jang in the OpenStack community have played a role in bringing more women to the field. They have held workshops, including one at the Vancouver OpenStack Summit, geared partly as an outreach program to introduce women in technical careers to OpenStack.

Jang and her future with OpenStack

“Our mission is simple,” says Jang. “We aim to protect, empower, and promote OpenStack software and the community around it, including users, developers and the entire ecosystem.”

The value of ensuring that women are an equal part of this ecosystem cannot be stressed enough, for while there is strength in numbers, there is also strength in diversity. It has been proven time and time again that companies with women on their board of directors consistently outperform those with all-male teams, while gender-balanced companies demonstrate superior team dynamics and productivity.

The road that brought her to Vancouver was a difficult one, but she is content with the journey.

“To go from being one of the only women in Korea doing this to traveling to Canada to speak at the OpenStack Summit – it’s incredible,” she says. “It's a dream come true.”

Cover Photo by ThomasThomas // CC BY NC

by Superuser at June 30, 2015 10:14 PM

OpenStack Blog

Need a visa for the Tokyo Summit? Here’s what you need to know

Set your travel plans early if you plan to come to OpenStack’s 2015 Tokyo Summit, and even earlier if you need a visa

Tokyo visa application

The Visa application process for Japan will require more time than any of the previous Summits.


What does that mean? 

The 2015 OpenStack Summit, to be held in Tokyo from Oct. 27-30, requires all applicants to have the following information BEFORE they apply for a visa support invitation:

  • Flight
  • Hotel
  • Summit Registration

Due to these requirements, we recommend that summit attendees who need to apply for a visa book fully REFUNDABLE flights and hotel accommodations. All invitation letters are written in Japanese and mailed via regular post from Japan to the traveler. This process takes three to five weeks. Also, once the visa is issued it's only valid for three months, so for the summit, all visa requests have to be received between mid-July and October 1. There are about 70 countries whose citizens do NOT need to apply for a visa to visit Japan, so please look over this list to confirm whether your country is exempt. Additional visa information is now live on our website here: https://www.openstack.org/summit/tokyo-2015/tokyo-and-travel/#visa.

Tokyo travel support program

For each OpenStack Summit, OpenStack assists its key community members with travel. If you are a contributor to OpenStack (developers, documentation writers, organizers of user groups around the world, Ask moderators, translators, PTLs, code reviewers etc.) you are invited to submit a request. Access the application and apply here: https://docs.google.com/a/openstack.org/forms/d/10Ral16vvYbk6FYlsg_t5zKS0QYIIGtZ8vyX97eSIoP4/viewform

by Jay Fankhauser at June 30, 2015 08:30 PM

DreamHost

Open For Brainstorming

Last week Jonathan introduced the newest member of our DreamCloud team, Stefano Maffulli. Take it away, Stefano! 

I have two ears and one mouth, and I like to use them in that proportion. As I start my new job on the DreamCloud team at DreamHost, I want to hear from you.

My goal is to enable developers and entrepreneurs to achieve their dreams using innovative cloud services deeply rooted in open source culture. DreamHost’s values are closely aligned with mine: I’ve spent my career close to the smartest developers on the planet, disrupting the 90s business models of software sold in boxes, pushing for the values of freedom, openness and fair competition.

DreamHost’s Core Values

The dominant cloud services are largely run on opaque, closed, and proprietary systems by faceless corporations. When Jonathan LaCour, VP of Cloud Services, approached me, his concept of a cloud platform built upon a foundation of transparency and open source immediately resonated with me. Today, DreamHost offers DreamObjects and DreamCompute, built on top of OpenStack, Linux, Ceph, and Akanda: all open source projects. You can see the commitment to these values in the faces of the DreamHost developers and operators right there on the website. DreamHost has built a team of strong technical people and placed them within easy reach of customers: when they have a problem or a question, they can hop on IRC and talk to the actual team building the DreamCloud. This is a key difference for me.

When I started advocating GNU/Linux against proprietary alternatives I often said: you can see the individual names and email addresses of the people who have written each line of code of a GNU system; you can see what they write, how they write it: they put their reputation in the code. With proprietary code, you get a black box and code that is probably so bad their authors would be embarrassed for their neighbors to know they’re behind it.

The DreamCloud team, in contrast, wants you to put a face to a name: they’re very proud of the products they build. I’m happy to place myself in their ranks and ready to enable new developers to reap the benefits of OpenStack, Ceph and Akanda.

In the next few weeks, I’ll reach out to existing DreamCloud users to get to know them better: What makes you dream? What keeps you awake at night? If you want to contact me,  please reach out to me on Twitter.  I’m @smaffulli.

Thanks!

by Stefano Maffulli at June 30, 2015 06:35 PM

OpenStack Superuser

OpenStack Essentials: a book to get you grounded in the fundamentals

Reading the manual is one way to get your head out of the clouds.

To help those stumbling with the lofty concepts surrounding OpenStack, Dan Radez, a senior software engineer at Red Hat, wrote a new book called "OpenStack Essentials."

The 182-page book is based on feedback from presentations that Radez has given around the world for people getting their heads around OpenStack. It's also the fruit of his work with TryStack, a free service that offers networking, storage, and compute instances, without having to go all-in with your own hardware.

Superuser talked to Radez about the role of the OpenStack community, what beginners often overlook and the frustrations of DIY learning...


Who will this book help?

Beginners. This book is a written form of the same information I've presented in a “101/Getting Started”-type session at venues around the world.

If you don't know what OpenStack is or would like a more tangible definition of it — or even if you're just trying to get going with some of the basics — then this book is for you. It covers the basic components that make up OpenStack and how they fit together in a step-by-step format.

How did your efforts with TryStack influence the book?

TryStack has brought an awareness that OpenStack is still new and shiny and that there are lots of people who are still trying to figure out how to get started. In the couple of years that I have worked on TryStack, we have had over 10,000 people sign up to try OpenStack through it.

Working with people through TryStack to help them get started helped me realize that it's a real privilege for me to spend each and every day working on OpenStack and there is a real need in the industry to offer materials that cover the basics of OpenStack.

What's the one "essential" thing most people don't get about OpenStack at first?

Networking. Neutron is a really powerful networking model, dynamically creating virtual networks within the cloud. Unfortunately, to get it set up you need to understand some basic networking concepts that most people have not had to deal with before. I started there with OpenStack Neutron, too, and I had to learn some basics that I had not had to deal with before.

The book tries to explain some of these concepts as best as possible in the amount of content the book was slated to contain.

What's the role of the community in the Red Hat Distribution of OpenStack (RDO)?

RDO is upstream OpenStack packaged in RPMs (Red Hat Package Manager). When you install it, you get vanilla OpenStack directly from the upstream releases. What this means is: community is everything.

Since RDO is focused on packaging and testing upstream releases, the OpenStack community is key because their content is what makes up the result of installing RDO.

The RDO community is really focused on transparency. All of the packaging and testing is publicly exposed to the community. This is important for operators and developers so that they see what they are getting and what's being done to validate what they get when they use RDO.

RDO's community is also evolving to keep up with the fast pace that OpenStack develops at. RDO Manager is an installation initiative in the RDO community that takes all the latest and greatest that's happening in the OpenStack community, like Triple-O and Ironic and others, and bakes it all together into an installer to help the operators in the RDO community get OpenStack installed and configured.

Why is a book helpful now — when you have IRC, mailing lists, documentation, etc.?

The internet is a big place and the computer screen is a bit unforgiving when you're trying to pick up a new technology. Having a collection of web pages open that you're trying to piece together to get something working can get frustrating, given the large scope of technologies that make up an OpenStack installation. For me, I find confidence in having a book in my hands that I can hold on to and flip through for reference.

I think there's validity in that mindset, judging by the desks of my colleagues. Most everyone has a stack of a couple of books they used to come up to speed on projects they've worked on over the years. I can only hope to be a book on someone's desk that has helped them get started with OpenStack.

Cover Photo by Kate Ter Haar // CC BY NC

by Nicole Martinelli at June 30, 2015 05:34 PM

StackMasters

Reach for the Cloud

You’d think we’d all be accustomed to the Cloud by now. After all we’ve been hearing about Cloud technologies, Cloud this and Cloud that for over five years. But this is quite far from the truth when it comes to the enterprise Cloud.

See, while consumers were first to embrace Cloud services, which for them were mostly just web apps (like Gmail) or something "out there" that backs up their data (like Dropbox), for enterprises it was a much slower ride.




Most businesses, even now, have only relegated some secondary and mostly trivial services to the Cloud (e.g. adopting Google Apps for Work or collaboration tools like Jira or Asana). And while startups have eagerly adopted IaaS offerings like AWS and Azure, established enterprises have been much more reluctant.

The thing is that the new Cloud model is not just another option, but brings with it a whole new approach that requires a mindset shift — that is, if we want to take full advantage of what it has to offer.

Fully embracing the Cloud means changing how our organisation approaches IT resources, internal services, end-user software applications, and even development environments.

In the process we should also get rid of some misconceptions about the Cloud too.

A lot of managers for example (and even some engineers) believe that the Cloud is some kind of ultimate infrastructure technology that will magically solve any and all availability and performance issues.

That’s not what the Cloud offers. That’s not what any technology offers, to be frank: there’s no silver bullet.

What the Cloud does offer is increased abstraction, the closest thing to "magic" we have in IT. Increased abstraction is what enables your 15-year-old nephew to program in an afternoon what would have taken a team of IBM scientists several months in 1960.

This increased abstraction of the Cloud provides admins and dev ops with the tools to proactively manage, provision and deploy machines, with advanced monitoring, reproducibility, and freedom from vendor lock-ins.

I’m talking about features such as:

  • abstraction of resources, which lets you handle heterogeneous, vendor-independent infrastructure and mixed architectures with ease
  • workload migration/evacuation, which enables you to handle updates and route around problems with minimal impact to users while minimising maintenance windows
  • centralised reporting, giving you insightful views of system health and performance that assist operations management and decision making
  • systematic infrastructure design/blueprints through code, which enables standards-based know-how sharing among IT teams
  • automated machine and software provisioning and maintenance, which eliminates error-prone manual procedures

And while both high and low ends of the spectrum (Fortune-500 enterprises and agile startups) have already embraced the Cloud for all the above traits, there is an enormous amount of enterprises in between that are traditionally reluctant to embrace new technologies, going instead with what they believe to be tried-and-tested traditional IT procedures.

Well, it’s 2015 already. Those procedures have indeed been tried and tested ― and they have been found lacking.

The Cloud is not some novelty to be approached cautiously anymore, it’s the emerging new standard way of doing business. Of course there are a lot of things to be cautious still: not all Clouds are alike, and embracing the Cloud only to find yourself locked-in to some proprietary vendor Cloud platform kind of defeats the purpose.

That's why we bet our company on OpenStack, the industry standard Cloud platform sponsored by world leading vendors like Cisco, IBM, HP, Dell, Red Hat, Rackspace and Canonical, which we believe to be the optimal solution for businesses of any size.

OpenStack is Open Source and has the support of multiple vendors, letting you have full control of your cloud, with broad support for enterprise level virtualisation (from KVM and VMWare to Docker) allowing you to leverage existing products and solutions used at your company.

OpenStack also eliminates perpetual licensing costs and is free of byzantine pricing schemes.

And its simple and intuitive web management interface makes provisioning, managing and operating your infrastructure a piece of cake and empowers scaling out, allowing administrators and operators to respond quickly to any business demand.

As for support, an OpenStack solution hits all the right keys, as it’s both available as Open Source and based on an industry standard with huge support from vendors, consultants and enterprise support shops big and small ― not to mention the availability of a vibrant grassroots community.

As we already noted, companies at both huge and tiny scales have already embraced the Cloud. Large Fortune-500 companies use Cloud based services to streamline, simplify and empower their infrastructure. Small, agile startups practically live in the Cloud, as it offers them the ability to tap resources on demand and compete with big, established players.

It’s time for enterprises between those two extremes, small, medium and large, to reach for the Clouds.

After that, the sky is the limit.

by admin at June 30, 2015 03:40 PM

OpenStack Blog

5 years of OpenStack – it’s time to celebrate the community!

OpenStack celebrates its 5th birthday July 19, and we’re celebrating with the entire OpenStack community during July! Cloud interoperability and support for developer productivity have been focuses for the OpenStack project this year, and none of it would be possible without the quickly growing OpenStack community.

There are now more than 80 global user groups and 27,300 community members, across 165 countries, spanning more than 500 organizations. Over 30 million lines of code have been written in less than five years thanks to our thriving community of developers and users around the world. This calls for a big toast to the OpenStack community members and our users.

 


 

We’ve invited all our user groups to celebrate with us. During the month of July, more than 35 OpenStack birthday parties will be thrown all over the world – celebrating the OpenStack community!  We encourage everyone to find a birthday party in your area and join your fellow community members to toast each other on another great year! If you don’t see a celebration in your area, not to worry – several more parties are to be announced soon. Don’t forget to share your pictures and memories using #OpenStack5Bday.

If you’re attending OSCON, the Foundation invites you to come celebrate the OpenStack Community on Tuesday, July 22nd at the LeftBank Annex to mingle with other community members and Foundation staff. Stay tuned – more details coming soon!

Find a local celebration in your area:

Argentina July 15

Atlanta July 18

Austin July 28

Baden-Württemberg July 15

Bangalore, India July 11

Brazil July 25

Bucharest, Romania June 30

China – ShenZhen July 11

Colorado (Denver Metro/South) July 16

Fort Collins, Colorado July 16

Greece July 1

Hong Kong July 14

Hungary July 16

Israel July 13

Italy July 14

Japan July 13

Kenya, Africa July 11

London July 21

Los Angeles July 16

Moscow, Russia, July 22 

Mumbai, India July 25

New Delhi, India July 11

North Carolina July 23

North Virginia July 7

Paris, France June 30

Philippines June 29

San Francisco Bay Area July 9

Seattle, July 16 

Sevilla, Spain July 1

Slovenia June 23

Stockholm/Nordic July 21

Switzerland July 17

Sydney, Australia July 15

Thailand July 17 & July 18

Tunisia July 22

Turkey July 22

Vancouver, Canada July 16

Vietnam July 4

Virginia July 9

Washington DC Metro Area July 20

by Jay Fankhauser at June 30, 2015 02:49 PM

Opensource.com

New guides and tips for OpenStack

Looking to learn something about OpenStack? You’re not alone. Fortunately, there are a ton of resources out there to get started. Hands on training courses, books, and of course the official documentation are great resources for learning more, regardless of whether you are a beginner or a seasoned IT professional. Even for OpenStack contributors, there’s still plenty to be learned.

by Jason Baker at June 30, 2015 08:00 AM

June 29, 2015

Shannon McFarland

Using OpenStack Heat to Deploy an IPv6-enabled Instance

In this post I will talk about how to use a basic OpenStack Heat template to build a dual-stack (IPv4 and IPv6) Neutron network and router, and launch an instance that will use StateLess Address AutoConfiguration (SLAAC) for IPv6 address assignment.

In the May 2015 post I discussed, in detail, how to build a dual-stack tenant and use a variety of IPv6 address assignment methods (SLAAC, Stateless DHCPv6, Stateful DHCPv6) for OpenStack instances.

To build on the previous post, I wanted to show a basic Heat template for building an IPv4 and IPv6 network with the basic parameters such as CIDR, gateway and pools.  I want Heat to also launch an OpenStack instance (what Heat calls a Server) that attaches to those networks.  Finally, the template will create a new security group that will create security group rules for both IPv4 and IPv6.

The Heat template that I am referring to in this post can be found here: https://github.com/shmcfarl/my-heat-templates/blob/master/single-v6-test.yaml. That specific template is using SLAAC.  You can also take a look at this template which uses Stateless DHCPv6: https://github.com/shmcfarl/my-heat-templates/blob/master/stateless-demo-slb-trusty.yaml. You can modify the template from there to play around with DHCPv6 Stateful. Hint, it’s all in the resource properties of:

ipv6_address_mode: <slaac/dhcpv6-stateless/dhcpv6-stateful>
ipv6_ra_mode: <slaac/dhcpv6-stateless/dhcpv6-stateful>

Heat Template

I am not going to teach you Heat. There are countless resources out there that do a much better job of teaching Heat than I ever could.  A couple of places to start are:

The Heat Orchestration Template (HOT) Guide is a great resource for finding the various parameters, resources and properties that can be used in Heat.

The primary place to dig into IPv6 capabilities within Heat is in the Heat template guide under OS::Neutron::Subnet.  You can jump to it here: http://docs.openstack.org/hot-reference/content/OS__Neutron__Subnet.html.  I am not going to walk through all of what is in the guide but I will point out specific properties that I have used in the example Heat template I referenced before.

Let’s take a look at the IPv6-specific parts of the example template.  In the example template file I have created a parameter section that includes various items such as key, image, flavor and so on.  The IPv6 section includes:

  • The private IPv6 network (2001:db8:cafe:1e::/64)
  • The private IPv6 gateway (2001:db8:cafe:1e::1)
  • The beginning and ending range of the IPv6 address allocation pool (2001:db8:cafe:1e::2 – 2001:db8:cafe:1e:ffff:ffff:ffff:fffe)
private_net_v6:
    type: string
    description: Private IPv6 subnet address
    default: 2001:db8:cafe:1e::/64
private_net_v6_gateway:
    type: string
    description: Private IPv6 network gateway address
    default: 2001:db8:cafe:1e::1
private_net_v6_pool_start:
    type: string
    description: Start of private network IPv6 address allocation pool
    default: 2001:db8:cafe:1e::2
private_net_v6_pool_end:
    type: string
    description: End of private network IPv6 address allocation pool
    default: 2001:db8:cafe:1e:ffff:ffff:ffff:fffe

The next section to look at is the "resources" section, and this is where things go into action. The "private_v6_subnet" resource has various properties, including:

  • Version is IPv6
  • IPv6 address and RA modes are SLAAC
  • The network property (set in the parameter section)
  • The CIDR property, which is "private_net_v6" from the parameter section
  • The gateway IPv6 address is defined in the “private_net_v6_gateway” parameter
  • The allocation pool is defined in the “private_net_v6_pool_start/end” parameters
  private_v6_subnet:
    type: OS::Neutron::Subnet
    properties:
      ip_version: 6
      ipv6_address_mode: slaac
      ipv6_ra_mode: slaac
      network: { get_resource: private_net }
      cidr: { get_param: private_net_v6 }
      gateway_ip: { get_param: private_net_v6_gateway }
      allocation_pools:
        - start: { get_param: private_net_v6_pool_start }
          end: { get_param: private_net_v6_pool_end }

The next IPv6-relevant area of the resources section is "router_interface_v6". In the "router_interface_v6" resource, there is a reference to the previously created "router" resource (see the template file for the full resource list) and to the "private_v6_subnet". This entry simply attaches a new router interface to the private IPv6 subnet.

  router_interface_v6:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: router }
      subnet: { get_resource: private_v6_subnet }

Next, there is the Server (AKA "instance" or "VM") creation section. There is nothing IPv6-specific here. On the network property line, Heat points to "get_resource: private_net", which is the private network that both the IPv4 and IPv6 subnets are associated with. That line, basically, attaches the server to a dual-stack network.

server1:
    type: OS::Nova::Server
    properties:
      name: Server1
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_name }
      networks:
        - network: { get_resource: private_net }
      config_drive: "true"
      user_data_format: RAW
      user_data: |
        #!/bin/bash
      security_groups: [{ get_resource: server_security_group }]

Finally, there is the security group section which enables rules for both IPv4 and IPv6. In this example ports 22, 80 and ICMP are open for IPv4 and IPv6.

server_security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      description: Heat-deployed security group.
      name: heat-security-group
      rules: [
        {remote_ip_prefix: 0.0.0.0/0,
        protocol: tcp,
        port_range_min: 22,
        port_range_max: 22},
        {remote_ip_prefix: 0.0.0.0/0,
        protocol: icmp},
        {remote_ip_prefix: 0.0.0.0/0,
        protocol: tcp,
        port_range_min: 80,
        port_range_max: 80},
        {remote_ip_prefix: "::/0",
        ethertype: IPv6,
        protocol: tcp,
        port_range_min: 22,
        port_range_max: 22},
        {remote_ip_prefix: "::/0",
        ethertype: IPv6,
        protocol: icmp},
        {remote_ip_prefix: "::/0",
        ethertype: IPv6,
        protocol: tcp,
        port_range_min: 80,
        port_range_max: 80}]

Now, let's deploy this template and see how it all looks. I am deploying the Heat "stack" using the "heat stack-create" command (alternatively, you can deploy it using the 'Orchestration > Stacks > Launch Stack' interface in the OpenStack Dashboard). In this example I am running "stack-create" with the "-r" argument to indicate 'rollback' (in the event something goes wrong, I don't want the whole stack to build out) and the "-f" argument to indicate that I am using a file to build the Heat stack. The stack is named "demo-v6":

root@c71-kilo-aio:~$ heat stack-create -r -f Heat-Templates/single-v6-test.yaml demo-v6
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| 688388f5-4ae1-4d39-bf85-6f9a591a4420 | demo-v6    | CREATE_IN_PROGRESS | 2015-06-29T15:44:18Z |
+--------------------------------------+------------+--------------------+----------------------+

After a few minutes, the Heat stack is built:

root@c71-kilo-aio:~$ heat stack-list
+--------------------------------------+------------+-----------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        |
+--------------------------------------+------------+-----------------+----------------------+
| 688388f5-4ae1-4d39-bf85-6f9a591a4420 | demo-v6    | CREATE_COMPLETE | 2015-06-29T15:44:18Z |
+--------------------------------------+------------+-----------------+----------------------+

Here is a messy view of the obligatory OpenStack Dashboard Network Topology view (Note: Some Horizon guru needs to line break the IPv4 and IPv6 addresses for the instances so they are readable ;-)):

[Image: OpenStack Dashboard Network Topology view of the demo-v6 stack]

Here’s a cleaner view of things:

  • Network list – You can see the new Heat-built “test_net” with the two subnets (IPv4/IPv6) as well as the previously built (by the admin) “Public-Network”:
root@c71-kilo-aio:~$ neutron net-list
+--------------------------------------+----------------+------------------------------------------------------------+
| id                                   | name           | subnets                                                    |
+--------------------------------------+----------------+------------------------------------------------------------+
| 2e03a628-e85e-4519-b1bb-a579880be0ae | test_net       | 93764f36-c56b-4c65-b7d7-cb78a694353b 10.10.30.0/24         |
|                                      |                | cafb610a-2aaa-4640-b0f0-8bb4b60cbaf2 2001:db8:cafe:1e::/64 |
| f6a55029-d875-48a8-aab9-1a5a5399592b | Public-Network | dda7d8f1-89a6-40bb-b11b-64a62c103828 192.168.81.0/24       |
|                                      |                | f2107125-c98e-4375-a81f-d0f4d34bdae3 2001:db8:cafe:51::/64 |
+--------------------------------------+----------------+------------------------------------------------------------+
  • Subnet list:
root@c71-kilo-aio:~$ neutron subnet-list
+--------------------------------------+----------------------------------------+-----------------------+---------------------------------------------------------------------------------+
| id                                   | name                                   | cidr                  | allocation_pools                                                                |
+--------------------------------------+----------------------------------------+-----------------------+---------------------------------------------------------------------------------+
| 93764f36-c56b-4c65-b7d7-cb78a694353b | demo-v6-private_subnet-6evpyylqyux7    | 10.10.30.0/24         | {"start": "10.10.30.2", "end": "10.10.30.254"}                                  |
| dda7d8f1-89a6-40bb-b11b-64a62c103828 | Public-Subnet-v4                       | 192.168.81.0/24       | {"start": "192.168.81.5", "end": "192.168.81.254"}                              |
| f2107125-c98e-4375-a81f-d0f4d34bdae3 | Public-Subnet-v6                       | 2001:db8:cafe:51::/64 | {"start": "2001:db8:cafe:51::3", "end": "2001:db8:cafe:51:ffff:ffff:ffff:fffe"} |
| cafb610a-2aaa-4640-b0f0-8bb4b60cbaf2 | demo-v6-private_v6_subnet-vvsmlbkc6sds | 2001:db8:cafe:1e::/64 | {"start": "2001:db8:cafe:1e::2", "end": "2001:db8:cafe:1e:ffff:ffff:ffff:fffe"} |
+--------------------------------------+----------------------------------------+-----------------------+---------------------------------------------------------------------------------+
  • Here is the "subnet-show" output for the Heat-built private IPv6 subnet. The allocation pool range, gateway, IP version, IPv6 address mode and IPv6 RA mode are all defined as we wanted (based on the Heat template):
root@c71-kilo-aio:~$ neutron subnet-show demo-v6-private_v6_subnet-vvsmlbkc6sds
+-------------------+---------------------------------------------------------------------------------+
| Field             | Value                                                                           |
+-------------------+---------------------------------------------------------------------------------+
| allocation_pools  | {"start": "2001:db8:cafe:1e::2", "end": "2001:db8:cafe:1e:ffff:ffff:ffff:fffe"} |
| cidr              | 2001:db8:cafe:1e::/64                                                           |
| dns_nameservers   |                                                                                 |
| enable_dhcp       | True                                                                            |
| gateway_ip        | 2001:db8:cafe:1e::1                                                             |
| host_routes       |                                                                                 |
| id                | cafb610a-2aaa-4640-b0f0-8bb4b60cbaf2                                            |
| ip_version        | 6                                                                               |
| ipv6_address_mode | slaac                                                                           |
| ipv6_ra_mode      | slaac                                                                           |
| name              | demo-v6-private_v6_subnet-vvsmlbkc6sds                                          |
| network_id        | 2e03a628-e85e-4519-b1bb-a579880be0ae                                            |
| subnetpool_id     |                                                                                 |
| tenant_id         | dc52b50429f74aeabb3935eb3e2bcb04                                                |
+-------------------+---------------------------------------------------------------------------------+
  • Router port list – You can see that the router has IPv4/IPv6 addresses on the tenant and public network interfaces:
root@c71-kilo-aio:~$ neutron router-port-list demo-v6-router-txy5s5bcixqd | grep ip_address | sed -e 's#.*ip_address": "\([^"]\+\).*#\1#'
10.10.30.1
2001:db8:cafe:1e::1
192.168.81.75
2001:db8:cafe:51::3f
  • Security Group list:
root@c71-kilo-aio:~$ neutron security-group-list
+---------------------+---------------------+----------------------------------------------------+
| id                  | name                | security_group_rules                               |
+---------------------+---------------------+----------------------------------------------------+
| 69f81e8e-5059-4a... | heat-security-group | egress, IPv4                                       |
|                     |                     | egress, IPv6                                       |
|                     |                     | ingress, IPv4, 22/tcp, remote_ip_prefix: 0.0.0.0/0 |
|                     |                     | ingress, IPv4, 80/tcp, remote_ip_prefix: 0.0.0.0/0 |
|                     |                     | ingress, IPv4, icmp, remote_ip_prefix: 0.0.0.0/0   |
|                     |                     | ingress, IPv6, 22/tcp, remote_ip_prefix: ::/0      |
|                     |                     | ingress, IPv6, 80/tcp, remote_ip_prefix: ::/0      |
|                     |                     | ingress, IPv6, icmp, remote_ip_prefix: ::/0        |
+---------------------+---------------------+----------------------------------------------------+
  • Server/Instance list:
root@c71-kilo-aio:~$ nova list
+--------------------------------------+---------+--------+------------+-------------+-----------------------------------------------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks                                                  |
+--------------------------------------+---------+--------+------------+-------------+-----------------------------------------------------------+
| d7bfc606-f9da-4be5-b3e8-2219882c3da6 | Server1 | ACTIVE | -          | Running     | test_net=10.10.30.3, 2001:db8:cafe:1e:f816:3eff:fea8:7d2c |
+--------------------------------------+---------+--------+------------+-------------+-----------------------------------------------------------+

Finally, inside the instance, you can see that both IPv4 and IPv6 addresses are assigned:

root@c71-kilo-aio:~$ ip netns exec qrouter-d2ff159a-b603-4a3b-b5f7-481bff40613e ssh fedora@2001:db8:cafe:1e:f816:3eff:fea8:7d2c
The authenticity of host '2001:db8:cafe:1e:f816:3eff:fea8:7d2c (2001:db8:cafe:1e:f816:3eff:fea8:7d2c)' can't be established.
ECDSA key fingerprint is 41:e2:ea:28:e5:6d:ae:50:24:81:ad:5e:db:d7:a0:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '2001:db8:cafe:1e:f816:3eff:fea8:7d2c' (ECDSA) to the list of known hosts.
[fedora@server1 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:a8:7d:2c brd ff:ff:ff:ff:ff:ff
    inet 10.10.30.3/24 brd 10.10.30.255 scope global dynamic eth0
       valid_lft 79179sec preferred_lft 79179sec
    inet6 2001:db8:cafe:1e:f816:3eff:fea8:7d2c/64 scope global mngtmpaddr dynamic
       valid_lft 86400sec preferred_lft 14400sec
    inet6 fe80::f816:3eff:fea8:7d2c/64 scope link
       valid_lft forever preferred_lft forever

I hope this gives you a starting point for adding IPv6 to your Heat template collection.

Thanks,
Shannon

by eyepv6(at)gmail(dot)com at June 29, 2015 10:44 PM

Loïc Dachary

Public OpenStack providers useable within the hour

The OpenStack marketplace provides a list of OpenStack public clouds, a few of which enable the user to launch an instance at most one hour after registration.

Enter Cloud Suite has a 2GB RAM, 2 CPU, 40GB Disk instance for 0.06 euros / hour (~40 euros per month), and there is no plan to provide a flavor with only 1 CPU instead of 2. The nova, cinder and neutron APIs are available.

HP Helion Public Cloud has a 2GB RAM, 2 CPU, 10GB Disk instance for 0.05 euros / hour (0.06 USD / hour) (~40 euros per month).

OVH has a 2GB RAM, 1 CPU, 10GB Disk instance for 0.008 euros / hour (~3 euros per month). The nova API is available, but not cinder or neutron.

Rackspace has a 2GB RAM, 1 CPU, 10GB Disk instance for ~40 euros per month (plus ~50 euros / month service fee, regardless of the number of instances). The nova and cinder APIs are available, but not neutron.

by Loic Dachary at June 29, 2015 08:53 AM

Opensource.com

New component versioning, Technical Committee highlights, and more OpenStack news

The Opensource.com weekly look at what is happening in the OpenStack community and the open source cloud at large.

by Jason Baker at June 29, 2015 07:00 AM

June 28, 2015

David Moreau Simard

Openstackclient is better than I thought

When openstackclient came out, I was not a believer. I thought: great, yet another standard (cue XKCD), and I joked about it.

I had been using the ordinary CLI clients like novaclient, keystoneclient and so on. Over time, I guess I got used to their strengths, weaknesses and their quirks for better or for worse.

We’ve started wrapping Openstackclient in the different puppet modules for Openstack. Since Openstackclient provides CSV output, it is easy for us to parse the command outputs.

This prompted me to look closer at Openstackclient as I needed to develop features that would leverage it.

The objective of Openstackclient

OpenStackClient (aka OSC) is a command-line client for OpenStack that brings the command set for Compute, Identity, Image, Object Store and Volume APIs together in a single shell with a uniform command structure.

The primary goal is to provide a unified shell command structure and a common language to describe operations in OpenStack.

Version 1.0 of Openstackclient was released on December 4th, 2014. Let’s see whether Openstackclient has had time to deliver on that promise.

The OpenStack clients’ inconsistencies addressed

  • Show a virtual machine instance ?
# This one is easy enough...
nova show <instance>
# With openstackclient:
openstack server show <instance>
  • Show an aggregate ?
nova aggregate-show <aggregate>
error: argument <subcommand>: invalid choice: u'aggregate-show'
# Oh.. right, it's aggregate-details
nova aggregate-details <aggregate>
# With openstackclient:
openstack aggregate show <aggregate>
  • Show a user ?
keystone user-show <user>
keystone: error: argument <subcommand>: invalid choice: 'user-show'
# Oh.. right, it's user-get
keystone user-get <user>
# With openstackclient:
openstack user show <user>

Hey, this is already better. Openstackclient does more than wrap around the clients to unify and standardize the interfaces, though: it also improves the user experience.

Improving and streamlining the user experience

I thought Openstackclient merely did a one-to-one mapping of current client commands to Openstackclient commands, but I couldn’t have been more wrong. The project standardizes and unifies the commands and parameters, but it also does things like merge redundant commands into existing ones.

I learned this first hand when I took a stab at implementing a feature Openstackclient did not have yet: Cinder QoS management.

In the context of the Cinder QoS implementation, QoS specifications have fields like:

  • id
  • name
  • consumer
  • specs (like IOPS limitations)

QoS specifications are also tied to Cinder volume types; these links are called ‘associations’. If you don’t know how Cinder QoS works, there was a great talk about it at the last Openstack summit.

With cinderclient, if you do a cinder qos-list or a cinder qos-show <qos>, you will not see these associations. Instead, you have a command cinder qos-get-association <qos> to see which volume types are associated to a QoS specification.

So, what I ended up doing in Openstackclient is to show the associations in the commands openstack volume qos list and openstack volume qos show. This made the commands show more information, and “get-qos-association” became redundant, so I removed it.

This is just one of the many examples of improvements you might come across in Openstackclient.
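
To make that concrete, here is a rough sketch of the resulting workflow; the QoS spec and volume type names below are made up, and the exact property keys depend on your storage backend:

# create a QoS spec, tie it to a volume type, then see the association directly
openstack volume qos create --consumer front-end --property maxIOPS=500 gold-iops
openstack volume qos associate gold-iops gold-volume-type
openstack volume qos show gold-iops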

Shout out to the Openstackclient core reviewers Steve Martinelli and Terry Howe for giving me great feedback to get QoS support merged. I also appreciated that they were rigorous about the implementation and standards.

Search by name or ID

Another thing I learned is that, where possible, Openstackclient provides the possibility of doing actions on an item either by name or by ID.

You can tell if the commands support them with a ‘help’ on the command:

openstack help volume type delete
usage: openstack volume type delete [-h] <volume-type>

Delete volume type

positional arguments:
  <volume-type>  Volume type to delete (name or ID) <-------

optional arguments:
    -h, --help     show this help message and exit
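
So, as a quick illustration, both of these forms should work (the name and ID below are placeholders):

# delete by name or by ID
openstack volume type delete ceph-backed
openstack volume type delete d9f2b1c4-1234-4f6a-9c1e-000000000000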

Predictable, customizable and parsable output

Pretty tables are great in most cases, but if you want to do other things with the output, like parse it for automation, you’re going to have a bad time.

For some commands, like list, Openstackclient will provide great standard output formatters:

output formatters:
  output formatter options

  -f {csv,html,json,table,value,yaml}, --format {csv,html,json,table,value,yaml}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated

table formatter:
  --max-width <integer>
                        Maximum display width, 0 to disable

CSV Formatter:
  --quote {all,minimal,none,nonnumeric}
                          when to include quotes, defaults to nonnumeric
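
For example, a couple of invocations that are handy for scripting (the server name is a placeholder, and the exact column names vary slightly between commands):

openstack server list -f csv -c ID -c Name -c Status
openstack server show Server1 -f json
openstack server show Server1 -f value -c status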

Openstackclient is better than I thought

I wear a lot of hats here, so let’s put it all into perspective:

  • From a developer standpoint, developing for Openstackclient was a great, satisfying and rewarding experience.
  • From a user standpoint, I’m still learning to love the consistent experience across the different improved commands.
  • From an operator perspective, Openstackclient will help us ramp up users and customers faster with a client that is easier to learn and use.

What are you waiting for? Do like me: stop laughing about it and just try it out. You’ll be surprised.

by dmsimard at June 28, 2015 10:00 PM

Nir Yechiel

Neutron networking with Red Hat Enterprise Linux OpenStack Platform

(This is a summary version of a talk I gave at Red Hat Summit on June 25th, 2015. Slides are available here)

I was honored to speak for the second time in a row at Red Hat Summit, the premier open source technology event, hosted in Boston this year. As I am now focusing on product management for networking in Red Hat Enterprise Linux OpenStack Platform, I presented Red Hat’s approach to Neutron, the OpenStack networking service.

Since OpenStack is a fairly new project and a new product in Red Hat’s portfolio, I was not sure what level of knowledge to expect from my audience. Therefore I started with a fairly basic overview of Neutron – what it is and some of the common features you can get from its API today. I was very happy to see that most of the people in the audience already seemed familiar with OpenStack and with Neutron, so the overview part was quick.

The next part of my presentation was a deep dive into Neutron when deployed with the ML2/Open vSwitch (OVS) plugin. This is our default configuration when deploying Red Hat Enterprise Linux OpenStack Platform today, and, like any other Red Hat product, it is based on fully open-source components. Since there is so much to cover here (and I only had one hour for the entire talk), I focused on the core elements of the solution and the common features we see customers using today: L2 connectivity, L3 routing and NAT for IPv4, and DHCP for IP address assignment. I explained the theory of operation and used some graphics to describe the backend implementation and how things look on the OpenStack nodes.

The OVS-based solution is our default, but we are also working with a very large number of leading vendors in the industry who provide their own solutions through the use of Neutron plugins. I spent some time describing the various plugins out there, our current partner ecosystem, and Red Hat’s certification program for 3rd party software.

I then covered some of the major recent enhancements introduced in Red Hat Enterprise Linux OpenStack Platform 6 based on the upstream Juno code base: IPv6 support, L3 HA, and distributed virtual router (DVR) – which is still a Technology Preview feature, yet very interesting to our customers.

Overall, I was very happy with this talk and with the number of questions I got at the end. It looks like OpenStack is happening, and more and more customers are interested in finding out more about it. See you next year in San Francisco for Red Hat Summit 2016!


by nyechiel at June 28, 2015 05:45 PM

Maish Saidel-Keesing

Downloading all sessions from the #OpenStack Summit

A question was just posted to the OpenStack mailing list – and this is not the first time I have seen this request.

Can openstack conference video files be downloaded?

A while back I wrote a post about how you can download all the vBrownBag sessions from the past OpenStack summit.

Same thing applies here, with a slight syntax change.

You can use the same tool – youtube-dl (just the version has changed since that post – and therefore some of the syntax is different as well).

Download youtube-dl and make the file executable

curl https://yt-dl.org/downloads/2015.06.25/youtube-dl \
-o /usr/local/bin/youtube-dl
chmod a+rx /usr/local/bin/youtube-dl

The videos are available on the OpenStack Youtube channel.

What you are looking for is all the videos that were uploaded from the Summit, that would mean between May 18th, 2015 and May 30th, 2015.

The command to do that would be

youtube-dl -ci -f best --dateafter 20150518 \
--datebefore 20150529 https://www.youtube.com/user/OpenStackFoundation/videos

The options I have used are:

-c - Force resume of partially downloaded files. By default, youtube-dl will resume downloads if possible.
-i  - Continue on download errors, for example to skip unavailable videos in a playlist.
-f best - Download the best quality video available.
--dateafter - Start after date
--datebefore - Up until the date specified

Be advised: this will take a while – and will use up a decent amount of disk space.
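
If you expect to stop and re-run the command later, youtube-dl’s --download-archive option (assuming your version supports it) keeps a record of finished videos so repeated runs skip what you already have:

youtube-dl -ci -f best --dateafter 20150518 --datebefore 20150529 \
--download-archive downloaded.txt \
https://www.youtube.com/user/OpenStackFoundation/videos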

Happy downloading !!

by Maish Saidel-Keesing (noreply@blogger.com) at June 28, 2015 10:00 AM

OpenStack Reactions

The first time I tried to get certified on OpenStack


by chmouel at June 28, 2015 05:19 AM

June 26, 2015

OpenStack Blog

OpenStack Community Weekly Newsletter (June 19 – 26)

New OpenStack component versioning

Thierry Carrez explains why the Liberty-1 milestone release has some unfamiliar version numbers for familiar projects.

Technical Committee Highlights June 25, 2015

A compute starter kit tag has been approved; it gives a beginner who only wants to get a computing cloud use case started a place to begin. New projects under the OpenStack “Big Tent”: Searchlight, OS-ansible-deployment (OSAD), Solum and the Cue Message Broker service.

Forrester says: ready, set, OpenStack!

A recent report from Forrester gives a major boost to OpenStack adoption, calling its “viability and presence in the market irrefutable.” You can download the entire report for a limited time on the OpenStack.org website.

The Road to Tokyo

Reports from Previous Events

  • None this week

Relevant Conversations

Deadlines and Contributors Notifications

Security Advisories and Notices

Tips ‘n Tricks

Open Call for Proposals

Recently Merged Specs

Subject Owner Project
Clean up tenant resources when one is deleted Assaf Muller openstack/neutron-specs
Fixes for generic RAID interface Devananda van der Veen openstack/ironic-specs
Add spec for reference implementation split Kyle Mestery openstack/neutron-specs
Add Spec Lifecycle Rules to readme Matthew Oliver openstack/swift-specs
Add uuid field to security-groups for server show heijlong openstack/nova-specs
Add spec to enhance PCI passthrough whitelist to support regex Moshe Levi openstack/nova-specs
Moved driver interface from backlog to liberty Ajaya Agrawal openstack/keystone-specs
Update monasca spec with version 5.0.0 Kanagaraj Manickam openstack/heat-specs
Add Zone Exists Event Spec Kiall Mac Innes openstack/designate-specs
VLAN aware VMs Erik Moe openstack/neutron-specs
Allow unaddressed port(without l3 address, subnet) and to boot VM with it Isaku Yamahata openstack/neutron-specs
Uniform Resource Signals Miguel Grinberg openstack/heat-specs
Decompose vendor plugins/drivers for neutron-*aas Doug Wiegley openstack/neutron-specs
Lbaas, use Octavia as reference implementation Doug Wiegley openstack/neutron-specs
MySQL manager refactor Alex Tomic openstack/trove-specs
Add virt-driver CPU thread pinning Stephen Finucane openstack/nova-specs
Implement external physical bridge mapping in linuxbridge Li Ma openstack/neutron-specs
Add port timestamp Zhiyuan Cai openstack/neutron-specs
Add availability_zone support IWAMOTO Toshihiro openstack/neutron-specs
PowerVM Compute Inspector Drew Thorstensen openstack/ceilometer-specs
Add rootwrap-daemon-mode blueprint Yuriy Taraday openstack/nova-specs
Add heat template-version-list command to cmd Oleksii Chuprykov openstack/heat-specs
Add a str_split intrinsic function Steven Hardy openstack/heat-specs
Add spec for more-gettext-support Peng Wu openstack/oslo-specs
trivial: Change file permissions for spec Stephen Finucane openstack/nova-specs
Action listing Tim Hinrichs openstack/congress-specs
libvirt: virtio-net multiqueue Vladik Romanovsky openstack/nova-specs
Spec for adding audit capability using CADF specification. Arun Kant openstack/barbican-specs
libvirt: set admin root password sahid openstack/nova-specs
Report host memory bandwidth as a metric in Nova Sudipta Biswas openstack/nova-specs
Adds Hyper-V Cluster spec Claudiu Belu openstack/nova-specs
Inject NMI to an instance Shiina, Hironori openstack/nova-specs
Add a Distinct Exception for Exceeding Max Retries Ed Leafe openstack/nova-specs
Fix error messages on check-flavor-type Ken’ichi Ohmichi openstack/nova-specs
Add BuildRequest object Andrew Laski openstack/nova-specs
Groups are not included in federated scoped tokens Dolph Mathews openstack/keystone-specs
Add spec for event alarm evaluator Ryota MIBU openstack/ceilometer-specs
nova.network.linux_net refactor Roman Bogorodskiy openstack/nova-specs
user_data modification Alexandre Levine openstack/nova-specs
Add support for Redis replication Peter Stachowski openstack/trove-specs
Adds spec for modeling resources using objects Jay Pipes openstack/nova-specs
Add tooz service group driver Joshua Harlow openstack/nova-specs
Add List of Group-IDs to ACL for Secrets/Containers John Wood openstack/barbican-specs
Specification for spark-jobs-for-cdh-5-3-0 added Alexander openstack/sahara-specs
Scheduler Introduce lightwieght transactional model for HostState Nikola Dipanov openstack/nova-specs
DNS resolution inside of Neutron using Nova instance name Carl Baldwin openstack/neutron-specs
Allow multiple clusters creation simultaneously Telles Mota Vidal Nóbrega openstack/sahara-specs
Update the backlog spec page John Garbutt openstack/nova-specs
Add spec for decoupling auth from API versions to backlog Morgan Fainberg openstack/keystone-specs
Let users restrict stack-update scope Ryan Brown openstack/heat-specs
Update of `support-modify-volume-image-metadata.rst` Dave Chen openstack/cinder-specs
Add ability to abandon environments Dmytro Dovbii openstack/murano-specs
Add the oslo_db enginefacade proposal Matthew Booth openstack/nova-specs
Track cinder capacity notifications XinXiaohui openstack/ceilometer-specs

Upcoming Events

Other News

OpenStack Reactions

Rushing to see if my bug was fixed in the release note


The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

by Stefano Maffulli at June 26, 2015 09:55 PM

OpenStack Superuser

Superuser weekend reading

Here's the news from the OpenStack world you won't want to miss -- the musings, polemics and questions posed by the larger community.

Got something you think we should highlight? Tweet, blog, or email me!

In Case You Missed It

Give and take is a fundamental push-pull of any open-source project. This week, there's an interesting debate about contributors versus consumers of OpenStack.

Kicking things off, Mark van Oppen of Blue Box says that both consumers and contributors are welcome in the OpenStack ecosystem.

"It’s a misconception that everyone who wants to use OpenStack wants to be a project contributor, because the learning curve associated with being able to use OpenStack is rarely counted in an ROI equation," he writes on the Blue Box blog. "In order to take OpenStack to the next level of adoption, we want to enable a broad spectrum of consumers by delivering private infrastructure-as-a-service."

Not everyone agrees. "For an organisation to deliver value with OpenStack it needs to participate in the community, contribute what it wants, and help guide other effort away from pitfalls," writes Roland Chan on the Aptira blog. "Each organisation gets the value from the effort expended by themselves + some fraction of the community’s effort, based on how aligned the organisation is with that effort."

What do you think? This is one of those debates that looks like it will have a long shelf life...

On the more hands-on side, Chris Evans, co-founder of Langton Blue, offers up a meaty primer on OpenStack Cinder 101. He covers the fundamentals of Cinder, how it's implemented, how to provision it, how it works with third-party storage arrays, and more...

For the 30,000-foot-view of the week, Wired has an interesting take on how containers are uniting tech giants.

"In the long run that’s better for everyone. Companies won’t stop trying to create their own innovative extensions to these standards that set them apart from the competition. But at least developers, and ultimately customers, won’t be left with fundamentally incompatible product. That’s progress."


Cover image by Alan Kotok // CC BY-NC

by Superuser at June 26, 2015 08:44 PM

Tesora Corp

Celebrating our funding!

We are excited to announce our recent financing close of $4.5 million in new funding, bringing our total financing to $13.2 million! We welcome our new investor, Rho Canada Ventures, joining our existing investors General Catalyst Partners, CommonAngels Ventures, Point Judith Capital and angel investors. Our new funding will further scale our sales, marketing, and […]

The post Celebrating our funding! appeared first on Tesora.

by Leslie Barron at June 26, 2015 05:39 PM

Cameron Seader

SUSE® OpenStack Cloud 5 Admin Appliance – The Easier Way to Start Your Cloud

If you used the SUSE OpenStack Cloud 4 Admin Appliance, you know it was a downloadable, OpenStack Icehouse-based appliance, which even a non-technical user could get off the ground to deploy an OpenStack cloud. Today, I am excited to tell you about the new Juno-based SUSE OpenStack Cloud 5 Admin Appliance.

With the SUSE OpenStack Cloud 4 release we moved to a single integrated version. After lots of feedback from users it was clear that no one really minded downloading something over 10GB as long as it had everything they needed to start an OpenStack private cloud. In version 5 the download is over 15GB, but it has all of the software you might need, from SLES 11 or SLES 12 compute infrastructure to SUSE Enterprise Storage integration. I was able to integrate the latest SMT mirror repositories at a reduced size, so you have everything you might need to speed your deployment.

The new appliance incorporates all of the needed software and repositories to set up, stage and deploy OpenStack Juno in your sandbox, lab, or production environments. Coupled with it are the added benefits of automated deployment of highly available cloud services; support for mixed-hypervisor clouds containing KVM, Xen, Microsoft Hyper-V, and VMware vSphere; integration of our award-winning SUSE Enterprise Storage; support from our award-winning, worldwide service organization; and integration with SUSE engineered maintenance processes. In addition, there is integration with tools such as SUSE Studio™ and SUSE Manager to help you build and manage your cloud applications.

With the availability of SUSE OpenStack Cloud 5, and based on feedback from partners, vendors and customers deploying OpenStack, it was time to release a new and improved Admin Appliance. This new image incorporates the most common use cases and is flexible enough to add in other components such as SMT (Subscription Management Tool) and SUSE Customer Center registration, so you can keep your cloud infrastructure updated.

The creation of the SUSE OpenStack Cloud 5 Admin Appliance is intended to provide a quick and easy deployment. The partners and vendors we are working with find it useful to quickly test their applications in SUSE OpenStack Cloud and validate their use case. For customers it has become a great tool for deploying production private clouds based on OpenStack.

With version 5.0.x you can proceed with the following to get moving now with OpenStack.

It's important that you start by reading and understanding the Deployment Guide before proceeding. This will give you some insight into the requirements and an overall understanding of what is involved in deploying your own private cloud.

As a companion to the Deployment Guide, we have provided a questionnaire that will help you answer and organize the critical steps discussed in the guide.

To help you get moving quickly the SUSE Cloud OpenStack Admin Appliance Guide provides instructions on using the appliance and details a step-by-step installation.

The most updated guide will always be here

A new fun feature to try out in SUSE OpenStack Cloud 5 is the batch deployment capability. The appliance includes three templates in the /root home directory (NFS.yaml, DRBD.yaml, simple-cloud.yaml).

NFS.yaml will deploy a 2-node controller cluster with NFS shared storage and 2 compute nodes, with all of the common OpenStack services running in the cluster.

DRBD.yaml will deploy a 2-node controller cluster with DRBD replication for the database and messaging queue and 2 compute nodes, with all of the common OpenStack services running in the cluster.

simple-cloud.yaml will deploy 1 controller and 1 compute node with all of the common OpenStack services running in a simple setup.

Now is the time. Go out to http://www.suse.com/suse-cloud-appliances and start downloading version 5, walk through the Appliance Guide, and see how quick and easy it can be to set up OpenStack. Don't stop there. Make it highly available and set up more than one hypervisor, and don't forget to have a lot of fun.

by Cameron Seader (noreply@blogger.com) at June 26, 2015 04:34 PM

Ravello Systems

OpenStack Revisited – Kilo Ravello blueprint to build lab environments on AWS and Google Cloud

OpenStack Cloud Software

Author:
Michael J. Clarkson Jr.
President at Flakjacket Inc., Michael is Red Hat Certified Architect Level II (E,I,X,VA,SA-OSP,DS,A), Cloudera Certified Administrator Apache Hadoop

OpenStack Kilo Blueprint

Now that Kilo has had a bit of soak time, and with the next release of Red Hat OpenStack Platform to be based on it, I thought it was time to revisit OpenStack. Using the same methods as the Juno installation from my previous blog entry, I set up Kilo running on CentOS 7 using the RDO Packstack based release. The blueprint is now available on Ravello Repo, ready for you to kick the tires. The answers file lives in /root/answers.txt on the controller node. Copy the blueprint to your account and go nuts. The VMs have cloud-init, so you will need your SSH keypair. The default user for SSH with the keypair is centos. The password for the root user and the OpenStack users admin and demo is ravellosystems. Once the instance is deployed, the Horizon UI is available at https://PUBLIC.IP.OF.CONTROLLER from any modern browser. Just accept the self-signed certificate at the warning screen.
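
For example, once you know the controller's public IP, something along these lines should get you in (the key path is a placeholder):

ssh -i ~/.ssh/my_ravello_key centos@PUBLIC.IP.OF.CONTROLLER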

Get it on Repo

REPO by Ravello Systems, is a library of public blueprints shared by experts in the infrastructure community.

What’s new in Kilo?

There are a lot of improvements and bugfixes in existing services as well as some new projects beginning to see tech preview adoption. Some of the high points are:

  • Nova performance optimization takes better advantage of the hypervisor’s tuning options.
  • Better support for hypervisors including VMWare, Hyper-V, and of course KVM.
  • NFV features including CPU pinning for VMs, large page support, and NUMA scheduling.
  • Flavor support for Ironic.
  • Support for Federated authentication via Web Single-Sign-On.
  • Trove DBaaS resizing support.
  • Tighter Ceilometer integration.
  • Improved VXLAN and GRE support in Neutron.
  • Subnet allocation in Neutron for better control of which projects are on which subnet/vlan.
  • Tons of new Neutron ML2 plugins.
  • Tighter integration with Ceph and Gluster for backending Cinder and Glance.
  • Better support for Ceph as a full replacement for Swift.
  • Improved Heat functionality including nested stacks.
  • Project Sahara now has full support for Hadoop CDH, Spark, Storm, MapR, HBase, and Zookeeper.
  • Many, many more.

Here are the release notes.

Ravello Repo

What is this Ravello Repo I mentioned earlier? As part of the rollout of our free service for current North American RHCEs, we created a centralized repository on which Ravello users can share the amazing blueprints they create with others in the community. The success of the open source model is proof that sharing is caring and on Repo we continue in that spirit. The Repo is available to any Ravello user and sharing is encouraged. All of the blueprints I’ve referenced on previous blog entries are there along with labs for vSphere, Arista Networks, and Mirantis OpenStack. Come join the party!

Free Service for North American RHCEs

Do you have a current RHCE? Are you from somewhere in North America? We have a free service for you to reward your hard work. Free for personal use you get 1000 vCPU hours every month. This is a use it or lose it service, but you can use the hours as you see fit (as long as you don’t violate our Terms of Service). Sign up here.

The post OpenStack Revisited – Kilo Ravello blueprint to build lab environments on AWS and Google Cloud appeared first on The Ravello Blog.

by Michael J. Clarkson, Jr. from Flakjacket at June 26, 2015 04:29 PM

Dan Smith

Upgrading Nova to Kilo with minimal downtime

Starting in Icehouse, Nova gained the ability to do partial live upgrades. This first step meant that control services (which are mostly stateless) could be upgraded along with the database schema before any of the compute nodes. After that step was done, individual compute nodes could be upgraded one-by-one, even migrating workloads off to newer compute nodes in order to facilitate hardware or platform upgrades in the process.

In the Kilo cycle, Nova made a concerted effort to break that initial atomic chunk of work into two pieces: the database schema upgrades and the code upgrades of the control services. It’s our first stab at this, so it’s not guaranteed to be perfect, but initial testing shows that it worked.

What follows is a high-level guide for doing a rolling Nova upgrade, using Juno-to-Kilo as the example. It’s not detailed enough to blindly follow, but is more intended to give an overview of the steps involved. It’s also untested and not something you should do on a production machine — test this procedure in your environment first and prove (to yourself) that it works.

The following steps also make some assumptions:

  • You’re using nova-network. If you’re using neutron, you are probably okay to do this, but you will want to use care around the compute-resident neutron agent(s) if you’re running them. If you’re installing system-level packages and dependencies, it may be difficult to upgrade Nova or Neutron packages without upgrading both.
  • You’re running non-local conductor (i.e. you have nova-conductor services running and [conductor]/use_local=False in your config). The conductor is a major part of insulating the newer and older services in a meaningful way. Without it, none of this will work.

Step 0: Prepare for what is coming

In order to have multiple versions of nova code running, there is an additional price in the form of extra RPC traffic between the compute nodes and the conductors. Compute nodes will start receiving data they don’t understand and they will start kicking that data back to conductor for help translating it into a format they understand. That may mean you want to start up some extra conductor workers to handle this load. How many additional workers you will need depends on the characteristics of your workload and there is really no rule of thumb to go by here. Also, if you plan to convert your compute nodes fairly quickly, you may need only a little extra overhead. If you have some stubborn compute nodes that will continue to run older code for a long time, they will be a constant source of additional traffic until they’re upgraded.

Further, as soon as you start running Kilo code, the upgraded services will be doing some online data migrations. That will generate some additional load on your database. As with the additional conductor load, the amount and impact depends on how active your cloud is and how much data needs to be migrated.

Step 1: Upgrade the schema

For this, you’ll need to get a copy of Kilo code installed somewhere. This should be a mostly temporary location that has access to the database and won’t affect any other running things. Once you’ve done that, you should be able to apply the schema updates:

$ nova db sync

This should complete rather quickly as it does no invasive data migration or examination.

You should grab the code of whatever you’re going to deploy and run the database sync from that. If you’re installing from pip, use the same package to do this process. If you’re deploying distro packages, use those. Just be careful, regardless of where you do this, to avoid service disruption. It’s probably best to spin up a VM or other sandbox environment from which to perform this action.
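
As a rough sketch of that sandbox approach (this assumes a virtualenv/pip-style install and that your packaging exposes the step as nova-manage; the tarball path is a placeholder, so adapt it to however you actually deploy the code):

# throwaway virtualenv with network access to the cloud's database
virtualenv /tmp/kilo-dbsync && . /tmp/kilo-dbsync/bin/activate
pip install /path/to/nova-kilo.tar.gz   # or install your distro's Kilo nova packages instead
nova-manage --config-file /etc/nova/nova.conf db sync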

Step 2: Pin the compute RPC version

This step ensures that everyone in the cloud will speak the same version of the compute RPC API. Right now, it won’t change anything, but once you start upgrading services, it will ensure that newer services will send messages that are compatible with the old ones.

In nova.conf, set the following pin:

[upgrade_levels]
compute = juno

You should do this on any node that could possibly talk to a compute node. That includes the compute nodes themselves, as they do talk to other compute nodes as well. If you’re not sure which services talk to compute nodes, just be safe and do this everywhere.

You don’t technically need to restart all your services after you’ve made this change, since it’s really mostly important for the newer code. However, it wouldn’t hurt to make sure that everything is happy with this version pin in place before you proceed.

I’ll also point out here that juno is an alias for 3.35. We try to make sure the aliases are there for the given releases, but this doesn’t always happen and it sometimes becomes invalid after changes are backported. This obviously is not a nice user experience, but it is what it is at this point. You can see the aliases, and history, defined in the compute/rpcapi.py file.

Step 3: Upgrade the control services

This is the first step where you actually deploy new code. Make sure that you don’t accidentally overwrite the changes you made in step 2 to your nova.conf, or that your new one includes the version pin. Nova, by convention, supports running a new release with the old release’s config file so you should be able to leave that in place for now.

In this step, you will upgrade everything but the compute nodes. This means nova-api, nova-scheduler, nova-conductor, nova-consoleauth, nova-network, and nova-cert. In reality, this needs to be done fairly atomically. So, shut down all of the affected services, roll the new code, and start them back up. This will result in some downtime for your API, but in reality, it should be easy to quickly perform the swap. In later releases, we’ll reduce the pain felt here by eliminating the need for the control services to go together.

Step 4: Watch and wait

At this point, you’ve got control services running on newer code with compute nodes running old stuff. Hopefully everything is working, and your compute nodes are slamming your conductors with requests for help with the newer versions of things.

Things to be on the lookout for are messages in the compute logs about receiving messages for an unsupported version, as well as version-related failures in the nova-api or nova-conductor logs. This example from the compute log is what you would see, along with some matching messages on the sending-side of calls that expect to receive a response:

Exception during message handling: Endpoint does not support RPC version 4.0. Attempted method: build_and_run_instance

If you see these messages, it means that either you set the pin to an incorrect value, or you missed restarting one of the services to pick up the change. In general, it’s the sender who sent the bad message, so if you see this on a compute node, suspect a conductor or api service as the culprit. Not all messages that the senders send expect a response, so trying to find the bad sender by matching up a compute error with an api error, for example, will not always be possible.

If everything looks good at this point, then you can proceed to the next step.

Step 5: Upgrade computes

This step may take an hour or a month, depending on your requirements. Each compute node can be upgraded independently to the new code at this point. When you do, it will just stop needing to ask conductor to translate things.

Don’t unpin the compute version just yet, even on upgraded nodes. If you do any resize/migrate/etc operations, a newer compute will have to talk to an older one, and the version pin needs to remain in place in order for that to work.

When you upgrade your last compute node, you’re technically done. However, the steps after 5 include some cleanup and homework before you can really declare completion and have that beer you’re waiting for.

Step 6: Drop the version pins

Once all the services are running the new code, you can remove (or comment out) the compute line in the upgrade_levels section and restart your services. This will cause all the services to start sending kilo-level messages.  You could set this to “kilo” instead of commenting it out, but it’s better to leave it unset so that the newest version is always sent. If we were to backport something that was compatible with all the rest of kilo, but you had a pin set, you might be excluded from an important bug fix.

Because all of your services are new enough to accept old and new messages, you can stage the restarts of your services however you like in order to apply this change. It does not need to be atomic.

Step 7: Perform online data migrations

This step is your homework. There is a due date, but it’s a long way off. So, it’s more like a term project. You don’t have to do it now, but you will have to do it before you graduate to Liberty. If you’re responsible and mindful, you’ll get this out of the way early.

If you’re a seasoned stacker, you probably remember previous upgrades where the “db sync” phase was long, painful, and intense on the database. In Kilo, we’ve moved to making those schema updates (hopefully) lightweight, and have moved the heavy lifting to code that can execute at runtime. In fact, when you completed Step 3, you already had some data migrations happening in the background as part of normal operation. As instances are loaded from and saved to the database, those conversions will happen automatically. However, not everything will be migrated this way.

Before you will be able to move to Liberty, you will have to finish all your homework. That means getting all your data migrated to the newer formats. In Kilo, there is only one such migration to be performed, and there is a new nova-manage command to help you do it. The best way to do this is to run small chunks of the upgrade over time until all of the work is done. The size of the chunks you should use depends on your infrastructure and your tolerance for the work being done. If you want to do ten instances at a time, you’d do this over and over:

$ nova-manage migrate_flavor_data --max-number 10

If you have lots of un-migrated instances, you should see something like this:

10 instances matched query, 10 completed

Once you run the command enough times, you should get to the point where it matches zero instances, at which point you know you’re done. If you start getting to the point where you have something like this:

7 instances matched query, 0 completed

…then you still have work to do. Instances that are in a transitional state (such as in the middle of being resized, or in ERROR state) are normally not migrated. Let these instances complete their transition and re-run the migration. Eventually you should be able to get to zero.
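
One low-effort way to chip away at this is a small loop that keeps running batches until a run reports that nothing matched (the batch size and sleep are arbitrary, and instances stuck in a transitional state will keep matching, as noted above):

while true; do
    out=$(nova-manage migrate_flavor_data --max-number 50)
    echo "$out"
    # stop once a run reports that zero instances matched the query
    echo "$out" | grep -q '^0 instances matched' && break
    sleep 10
done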

NOTE: The invocation of this migration function is actually broken in the Kilo release. There are a couple of backport patches proposed that will fix it, but it’s likely not fixed in your packages if you’re reading this soon after the release. Until then, you have a pass to not work on your homework until your distro pulls in the fixes[1][2].

Summary and Next Steps

If you’ve gotten this far, then you’ve upgraded yourself from Juno to Kilo with the minimal amount of downtime allowed by the current technology. It’s not perfect yet, but it’s a lot better than having to schedule the migration at a time where you can tolerate a significant outage window for database upgrades, and where you can take every node in your cluster offline for an atomic code deployment.

Going forward, you can expect this process to continue to get easier. Ideally we will continue to reduce the number of services that need to be upgraded together, including even partial upgrades of individual services. For example, right now you can’t really upgrade your API nodes separately from your conductors, and certainly not half of your conductors before the other half. However, that is where we are headed, and it will allow a much less impactful transition.

As I said at the beginning, this is new stuff. It should work, and it does in our gate testing. However, be diligent about testing it on non-production systems and file bugs against the project if you find gaps and issues.

by Dan at June 26, 2015 04:12 PM

Joshua Hesketh

git.openstack.org adventures

Over the past few months I started to notice occasional issues when cloning repositories (particularly nova) from git.openstack.org.

It would fail with something like

git clone -vvv git://git.openstack.org/openstack/nova .
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

The problem would occur sporadically during our 3rd party CI runs, causing them to fail. Initially these failures were somewhat ignored, as rechecks on the jobs would succeed and the world would be shiny again. However, as they became more prominent the issue needed to be addressed.

When a patch merges in gerrit it is replicated out to 5 different cgit backends (git0[1-5].openstack.org). These are then balanced by two HAProxy frontends which are on a simple DNS round-robin.

                          +-------------------+
                          | git.openstack.org |
                          |    (DNS Lookup)   |
                          +--+-------------+--+
                             |             |
                    +--------+             +--------+
                    |           A records           |
+-------------------v----+                    +-----v------------------+
| git-fe01.openstack.org |                    | git-fe02.openstack.org |
|   (HAProxy frontend)   |                    |   (HAProxy frontend)   |
+-----------+------------+                    +------------+-----------+
            |                                              |
            +-----+                                    +---+
                  |                                    |
            +-----v------------------------------------v-----+
            |    +---------------------+  (source algorithm) |
            |    | git01.openstack.org |                     |
            |    |   +---------------------+                 |
            |    +---| git02.openstack.org |                 |
            |        |   +---------------------+             |
            |        +---| git03.openstack.org |             |
            |            |   +---------------------+         |
            |            +---| git04.openstack.org |         |
            |                |   +---------------------+     |
            |                +---| git05.openstack.org |     |
            |                    |  (HAProxy backend)  |     |
            |                    +---------------------+     |
            +------------------------------------------------+

Reproducing the problem was difficult. At first I was unable to reproduce locally, or even on an isolated turbo-hipster run. Since the problem appeared to be specific to our 3rd party tests (little evidence of it in 1st party runs) I started by adding extra debugging output to git.

We were originally cloning repositories via the git:// protocol. The debugging information was unfortunately limited and provided no useful diagnosis. Switching to https allowed for more CURL output (when using GIT_CURL_VERBOSE=1 and GIT_TRACE=1) but this in itself just created noise. It actually took me a few days to remember that the servers are running arbitrary code anyway (a side effect of testing) and therefore cloning from the potentially insecure http protocol didn’t add any further risk.

Over http we got a little more information, but still nothing that was conclusive at this point:

git clone -vvv http://git.openstack.org/openstack/nova .

error: RPC failed; result=18, HTTP code = 200
fatal: The remote end hung up unexpectedly
fatal: protocol error: bad pack header

After a bit it became more apparent that the problems would occur mostly during high (patch) traffic times. That is, when a lot of tests need to be queued. This led me to think that either the network turbo-hipster was on was flaky when doing multiple git clones in parallel, or the git servers were flaky. The lack of similar upstream failures led me to initially think it was the former. In order to reproduce I decided to use Ansible to do multiple clones of repositories and see if that would uncover the problem. If needed I would have then extended this to orchestrating other parts of turbo-hipster in case the problem was systemic of something else.

Firstly I needed to clone from a bunch of different servers at once to simulate the network failures more closely (rather than doing multiple clones on the one machine or from the one IP in containers, for example). To simplify this I decided to learn some Ansible to launch a bunch of nodes on Rackspace (instead of doing it by hand).

Using the pyrax module I put together a crude playbook to launch a bunch of servers. There are likely much neater and better ways of doing this, but it suited my needs. The playbook takes care of placing appropriate sshkeys so I could continue to use them later.

    ---
    - name: Create VMs
      hosts: localhost
      vars:
        ssh_known_hosts_command: "ssh-keyscan -H -T 10"
        ssh_known_hosts_file: "/root/.ssh/known_hosts"
      tasks:
        - name: Provision a set of instances
          local_action:
            module: rax
            name: "josh-testing-ansible"
            flavor: "4"
            image: "Ubuntu 12.04 LTS (Precise Pangolin) (PVHVM)"
            region: "DFW"
            count: "15"
            group: "raxhosts"
            wait: yes
          register: raxcreate

        - name: Add the instances we created (by public IP) to the group 'raxhosts'
          local_action:
            module: add_host
            hostname: "{{ item.name }}"
            ansible_ssh_host: "{{ item.rax_accessipv4 }}"
            ansible_ssh_pass: "{{ item.rax_adminpass }}"
            groupname: raxhosts
          with_items: raxcreate.success
          when: raxcreate.action == 'create'

        - name: Sleep to give time for the instances to start ssh
          #there is almost certainly a better way of doing this
          pause: seconds=30

        - name: Scan the host key
          shell: "{{ ssh_known_hosts_command}} {{ item.rax_accessipv4 }} &gt;&gt; {{ ssh_known_hosts_file }}"
          with_items: raxcreate.success
          when: raxcreate.action == 'create'

    - name: Set up sshkeys
      hosts: raxhosts
      tasks:
       - name: Push root's pubkey
         authorized_key: user=root key="{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"

From here I can use Ansible to work on those servers using the rax inventory. This allows me to address any nodes within my tenant and then log into them with the seeded sshkey.

The next step of course was to run tests. Firstly I just wanted to reproduce the issue, so in order to do that the playbook crudely sets up an environment where it can simply clone nova multiple times.

    ---
    - name: Prepare servers for git testing
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Install git
          apt: name=git state=present update_cache=yes
        - name: remove nova if it is already cloned
          shell: 'rm -rf nova'

    - name: Clone nova and monitor tcpdump
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Clone nova
          shell: "git clone http://git.openstack.org/openstack/nova"

By default Ansible runs with 5 forked processes, meaning that Ansible would work on 5 servers at a time. We want to exercise git heavily (in the same way turbo-hipster does), so we use the --forks parameter to run the clone on all the servers at once. The plan was to keep launching servers until the error reared its head from the load.
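
Roughly, the invocation looks like this (the playbook and inventory names are placeholders):

ansible-playbook -i rax.py clone_nova.yml --forks 100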

To my surprise this happened with very few nodes (less than 15, but I left that as my minimum testing). To confirm I also ran the tests after launching further nodes to see it fail at 50 and 100 concurrent clones. It turned out that the more I cloned the higher the failure rate percentage was.

Now that I had the problem reproducing, it was time to do some debugging. I modified the playbook to capture tcpdump information during the clone. Initially git was cloning over IPv6 so I turned that off on the nodes to force IPv4 (just in case it was a v6 issue, but the problem did present itself on both networks). I also locked git.openstack.org to one IP rather than randomly hitting both front ends.

    ---
    - name: Prepare servers for git testing
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Install git
          apt: name=git state=present update_cache=yes
        - name: remove nova if it is already cloned
          shell: 'rm -rf nova'

    - name: Clone nova and monitor tcpdump
      hosts: josh-testing-ansible*
      serial: "100%"
      vars:
        cap_file: tcpdump_{{ ansible_hostname }}_{{ ansible_date_time['epoch'] }}.cap
      tasks:
        - name: Disable ipv6 1/3
          sysctl: name="net.ipv6.conf.all.disable_ipv6" value=1 sysctl_set=yes
        - name: Disable ipv6 2/3
          sysctl: name="net.ipv6.conf.default.disable_ipv6" value=1 sysctl_set=yes
        - name: Disable ipv6 3/3
          sysctl: name="net.ipv6.conf.lo.disable_ipv6" value=1 sysctl_set=yes
        - name: Restart networking
          service: name=networking state=restarted
        - name: Lock git.o.o to one host
          lineinfile: dest=/etc/hosts line='23.253.252.15 git.openstack.org' state=present
        - name: start tcpdump
          command: "/usr/sbin/tcpdump -i eth0 -nnvvS -w /tmp/{{ cap_file }}"
          async: 6000000
          poll: 0 
        - name: Clone nova
          shell: "git clone http://git.openstack.org/openstack/nova"
          #shell: "git clone http://github.com/openstack/nova"
          ignore_errors: yes
        - name: kill tcpdump
          command: "/usr/bin/pkill tcpdump"
        - name: compress capture file
          command: "gzip {{ cap_file }} chdir=/tmp"
        - name: grab captured file
          fetch: src=/tmp/{{ cap_file }}.gz dest=/var/www/ flat=yes

This gave us a bunch of compressed capture files that I was then able to seek the help of my colleagues to debug (a particular thanks to Angus Lees). The results from an early run can be seen here: http://119.9.51.216/old/run1/

Gus determined that the problem was due to an RST packet coming from the source at roughly 60 seconds. This indicated it was likely we were hitting a timeout at the server or a firewall during the git-upload-pack phase of the clone.

The solution turned out to be rather straightforward. The git-upload-pack had simply grown too large and would time out depending on the load on the servers. There was a timeout in Apache as well as in the HAProxy config for both frontend and backend responsiveness. The relevant patches can be found at https://review.openstack.org/#/c/192490/ and https://review.openstack.org/#/c/192649/
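
For illustration only (the actual values live in the reviews linked above), the knobs involved are the Apache request timeout and the HAProxy client/server timeouts:

# Apache (httpd.conf / apache2.conf)
Timeout 300

# HAProxy (haproxy.cfg)
defaults
    timeout client  5m
    timeout server  5m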

While upping the timeout avoids the problem, certain projects are clearly pushing the infrastructure to its limits. As such a few changes were made by the infrastructure team (in particular James Blair) to improve git.openstack.org’s responsiveness.

Firstly git.openstack.org is now a higher performance (30GB) instance. This is a large step up from the previous (8GB) instances that were used as the frontend previously. Moving to one frontend additionally meant the HAProxy algorithm could be changed to leastconn to help balance connections better (https://review.openstack.org/#/c/193838/).

          +--------------------+
          | git.openstack.org  |
          | (HAProxy frontend) |
          +----------+---------+
                     |
           (leastconn algorithm)
                     |
     +---------------+---------------+
     |        HAProxy backends       |
     |                               |
     |   git01.openstack.org         |
     |   git02.openstack.org         |
     |   git03.openstack.org         |
     |   git04.openstack.org         |
     |   git05.openstack.org         |
     +-------------------------------+

All that was left was to see if things had improved. I reran the test across 15, 30 and then 45 servers. These were all able to clone nova reliably where they had previously been failing. I then upped it to 100 servers, where the cloning began to fail again.

Post-fix logs for those interested:
http://119.9.51.216/run15/
http://119.9.51.216/run30/
http://119.9.51.216/run45/
http://119.9.51.216/run100/
http://119.9.51.216/run15per100/

At this point, however, I’m basically performing a Distributed Denial of Service attack against git. As such, while the servers aren’t immune to a DDoS the problem appears to be fixed.

by Joshua Hesketh at June 26, 2015 01:47 PM

Loïc Dachary

Setting a custom name server on an OpenStack instance

In an OpenStack tenant that is not allowed to create a network with neutron net-create, the name server can be set via cloudinit. The resolv-conf module, although documented in the examples, is not always available. This can be worked around with

#cloud-config
bootcmd:
 - echo nameserver 4.4.4.4 | tee /etc/resolvconf/resolv.conf.d/head
 - resolvconf -u

for Ubuntu or

#cloud-config
bootcmd:
 - echo nameserver 4.4.4.4 | tee /etc/resolv.conf
 - sed -ie 's/PEERDNS="yes"/PEERDNS="no"/' /etc/sysconfig/network-scripts/ifcfg-eth0

for CentOS.
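
Either file is then passed to the instance at boot through the user-data mechanism; for example (the image, flavor, key and server names below are placeholders):

$ openstack server create --user-data user-data.txt \
    --image <image> --flavor <flavor> \
    --key-name <key> <instance-name>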

by Loic Dachary at June 26, 2015 01:34 PM

Tesora Corp

Short Stack: OpenStack resources, Red Hat’s cloud push, Forrester report, static code analysis, Tesora’s funding

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it’s about OpenStack, we’ll find the best links we can to share with you every week. If you like what you see, […]

The post Short Stack: OpenStack resources, Red Hat’s cloud push, Forrester report, static code analysis, Tesora’s funding appeared first on Tesora.

by Leslie Barron at June 26, 2015 01:09 PM

Thierry Carrez

New OpenStack component versioning

Yesterday we reached the liberty-1 development milestone. You may have noticed from the announcement that the various components released were all using new, different version numbers. What's going on here?

Once upon a time

Since the beginning of OpenStack we've been using two versioning schemes. One was for projects released once every 6 months and following a schedule of development milestones and release candidates. Those would be using a YEAR.N version number (like 2015.1 for Kilo).

Another was used by Swift, which was already mature when OpenStack started, and which released intermediary versions as-needed throughout the cycle. It would use a X.Y.Z version number which looked a lot more like semantic versioning.

At the end of the cycle, we would coordinate a final release that would combine both. For example the "Kilo" release would be made of Nova 2015.1.0, Swift 2.3.0, and everything else at 2015.1.0.

Recent developments

A few things happened over the last two cycles. First, we released more and more libraries, and those would follow a strict X.Y.Z semantic versioning. Those would also have a final release in the cycle, from which a stable branch would be maintained for critical bugfixes and vulnerability fixes. So the portion of commonly-versioned YEAR.N deliverables was fast decreasing.

Second, some projects got more mature and/or more able to release fully-functional intermediary releases as-needed. As a community, we still can't support more than one stable branch every 6 months, so those intermediary releases won't get backports, but past a given maturity step, it's still a great thing to push new features to bleeding-edge users as early and often as we can. For those a YEAR.N synchronized versioning scheme would not work.

The versioning conundrum

At that stage we had three options to handle those projects switching from one model to another. They could keep their 2015.2.0 version and start doing semantic versioning from that -- but that would be highly confusing, when you end up releasing 2017.9.4 sometime in 2016. The second option was to reset the version for projects as they switch. So Ironic would adopt, say, 3.0.0 while all other projects still use 2015.2.0.

The third option was to bite the bullet and drop the YEAR.N versioning at the same time, for all the projects that were still using it. Switching them all to some arbitrary number (say, 12.0.0 since that would be the 12th OpenStack release) would create confusion as projects switching to intermediary releases would slowly drift from the pack (most projects publishing 13.0.0 while some would be at 12.5.2 and others at 13.1.0). So to avoid that confusion, projects would pick purposefully distinct version numbers based on their age.

The change

After discussions at the Vancouver Design Summit and on the mailing-list, we opted for the third option, with an initial number calculated from the number of past integrated releases already published.

It's a clean cut which will reduce ongoing disruption. All components end up with a different, meaningful version number: there are no longer "normal" and "outlier" projects. Additionally, it solves the weird impression we had when we released 2014.2.2 stable versions sometime in 2015.

As far as impact is concerned, distributions will need to make sure to insert an epoch so that package versions sort correctly in their package management systems. If your internal CI pipeline relies on sorting version numbers, it will likely need an adjustment too. For everyone else, it should not have an impact: when Liberty is out, you will upgrade to the liberty version of the components, as you always did.
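
As a hedged illustration of why the epoch matters (dpkg is shown here; the exact epoch value is each distribution's packaging decision, not something set by the release team):

# Without an epoch, the old date-based version sorts above the new semantic one:
dpkg --compare-versions 12.0.0 gt 2015.1.0 || echo "12.0.0 sorts below 2015.1.0"
# Prepending an epoch restores the expected ordering:
dpkg --compare-versions 1:12.0.0 gt 2015.1.0 && echo "1:12.0.0 sorts above 2015.1.0"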

Liberty-1 and the future

The change in versions was pushed last week, and that is why for liberty-1 we published 12.0.0.0b1 for Nova, 8.0.0.0b1 for Keystone, 1.0.0.0b1 for Designate, etc. Those are still on a milestone-based 6-month release cycle, but their "Liberty" final versions won't all be "2015.2.0"; they will be 12.0.0 for Nova, 8.0.0 for Keystone, etc.

To reduce the confusion, the release management team will provide tooling and web pages to describe what each series means in terms of component version numbers (and the other way around).

We hope this future-proof change will bring some more freedom for OpenStack project teams to pick the release model that is the most interesting for them and their user base. For a cycle named "liberty", that sounded like a pretty good time to do it.

by Thierry Carrez at June 26, 2015 12:45 PM

Lars Kellogg-Stedman

OpenStack Networking without DHCP

In an OpenStack environment, cloud-init generally fetches information from the metadata service provided by Nova. It also has support for reading this information from a configuration drive, which under OpenStack means a virtual CD-ROM device attached to your instance containing the same information that would normally be available via the metadata service.

It is possible to generate your network configuration from this configuration drive, rather than relying on the DHCP server provided by your OpenStack environment. In order to do this you will need to make the following changes to your Nova configuration:

  1. You must be using a subnet that does not have a DHCP server. This means that you have created it using neutron subnet-create --disable-dhcp ..., or that you have disabled DHCP on an existing subnet (DHCP is a per-subnet setting, updated with neutron subnet-update).

  2. You must set flat_injected to true in /etc/nova/nova.conf. This causes Nova to embed network configuration information in the metadata placed on the configuration drive.

  3. You must ensure that injected_network_template in /etc/nova/nova.conf points to an appropriately formatted template (see the sketch below).
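
Below is a minimal sketch of those three steps. The network name, CIDR and template path are placeholders, and crudini is just one convenient way to edit nova.conf; any editor works.

# 1. Create a subnet with DHCP disabled (network name and CIDR are examples).
neutron subnet-create --disable-dhcp my-net 10.0.0.0/24

# 2. Enable injection of network configuration into the config drive metadata.
crudini --set /etc/nova/nova.conf DEFAULT flat_injected true

# 3. Point Nova at the interfaces template (path is an example).
crudini --set /etc/nova/nova.conf DEFAULT injected_network_template \
    /etc/nova/interfaces.template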

Cloud-init expects the network configuration information to be presented in the format of a Debian /etc/network/interfaces file, even if you're using it on RHEL (or a derivative). The template is rendered using the Jinja2 template engine, and receives a top-level key called interfaces that contains a list of dictionaries, one for each interface.

A template similar to the following ought to be sufficient:

{% for interface in interfaces %}
auto {{ interface.name }}
iface {{ interface.name }} inet static
  address {{ interface.address }}
  netmask {{ interface.netmask }}
  broadcast {{ interface.broadcast }}
  gateway {{ interface.gateway }}
  dns-nameservers {{ interface.dns }}
{% endfor %}

This will directly populate /etc/network/interfaces on an Ubuntu system, or will get translated into /etc/sysconfig/network-scripts/ifcfg-eth0 on a RHEL system (a RHEL environment can only configure a single network interface using this mechanism).
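
For illustration only (the addresses are invented), a single-interface instance would end up with an /etc/network/interfaces rendered from that template along these lines:

auto eth0
iface eth0 inet static
  address 10.0.0.10
  netmask 255.255.255.0
  broadcast 10.0.0.255
  gateway 10.0.0.1
  dns-nameservers 10.0.0.2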

by Lars Kellogg-Stedman at June 26, 2015 04:00 AM

June 25, 2015

Cloud Platform @ Symantec

Driving from Legacy to Cloud

Symantec is on a journey that started with a legacy of industry-leading security products and services. Through Open Source communities like OpenStack and Hadoop, we are transforming to meet the next-generation security needs of the digital world

Read More

by David T Lin at June 25, 2015 10:34 PM

Matt Fischer

Fernet Tokens in Prod

This post is a follow-up to my previous post about Fernet Tokens which you may want to read first.

Last night we upgraded our production OpenStack to a new version of keystone off of master from a couple weeks ago and at the same time switched on Fernet tokens. This is after we let the change soak in our dev and staging environments for a couple weeks. We used this time to assess performance, look for issues, and figure out our key rotation strategy.

The Upgrade

All of our upgrade process is run via ansible. We cherry-pick the change which includes pointing to the repo with the new keystone along with enabling the Fernet tokens and then let ansible drive puppet to upgrade and switch providers. During the process, we go down to a single keystone node because it simplifies the active/active database setup when running migrations. So when this node is upgraded we take a short outage as the package is installed and then the migrations run. This took about 16 seconds.

Once this is done, the other OpenStack services start freaking out. Because we’ve not upgraded to Kilo yet, our version of Keystone middleware is too dumb to request a new token when the old one stops working. So this means we have to restart services that talk to Keystone. We ended up re-using our “rabbit node died, reboot OpenStack” script and added glance to the list since restarting it is fairly harmless even though it doesn’t talk to rabbit. Due to how the timing works, we don’t start this script until puppet is completely done upgrading the single keystone node, so while the script to restart services is quick, it doesn’t start for about 90 seconds after Keystone is ready. This means that we have an API outage of 1-2 minutes. For us, this is not a big deal, our customers are sensitive to “hey I can’t get to my VM” way more than a few minutes of API outage, especially one that’s during a scheduled maintenance window. This could be optimized down substantially if I manually ran the restarts instead of waiting on the full puppet run (that upgrades keystone) to finish.

Once the first node is done we run a full validation suite of V2 and V3 keystone tests. This is the point at which we can decide to go back if needed. The test suite for us took about 2 minutes.

Once we have one node upgraded, OpenStack is rebooted, and validation passes, we then deploy the new package and token provider to the rest of the nodes and they rejoin the cluster one by one. We started in the opposite region so we’d get an endpoint up in the other DC quickly. This is driven by another ansible job that runs puppet and does the nodes one by one.

All in all we finished in about 30 minutes, most of that time was sitting around. We then stayed an extra 30 to do a full set of OpenStack regression tests and everything was okay.

At the end I also truncated the token table to get back all the disk space it was using.

Key Rotation

We are not using any of the built-in Keystone Fernet key rotation mechanisms. This is because we already have a way to get code and config onto all our nodes and did not want to run the tooling on a keystone node directly. If you do this, then you inadvertently declare one node a master and have to write special code to handle this master node in puppet or ansible (or whatever you are using). Instead we decided to store the keys in eyaml in our hiera config. I wrote a simple python script that decrypts the eyaml and then generates and rotates the keys. Then I will take the output and propose it into our review system. Reviewing eyaml encrypted keys is somewhat useless, but the human step is there to prevent something dumb from happening. For now we’re only using 3 keys, since our tokens last 2 hours, we can’t do two rotations in under two hours. The reviewer would know the last time a rotation was done and the last time one was deployed. Since we don’t deploy anywhere near a two hour window, this should be okay. Eventually we’ll have Jenkins do this work rather than me. We don’t have any firm plans right now on how often we’ll do the key rotation, probably weekly though.
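
For readers unfamiliar with how a Fernet key repository behaves, here is a minimal sketch of the rotation semantics only; it is not the author's eyaml-based tooling, nor keystone-manage fernet_rotate, and the path and key count are assumptions. Key 0 is the staged key, the highest-numbered key is the primary used to issue new tokens, and the remaining keys are kept only to validate older tokens.

KEY_DIR=/etc/keystone/fernet-keys            # assumed location of the key repository
# Promote the staged key (0) to become the new primary (highest index).
NEXT=$(( $(ls "$KEY_DIR" | sort -n | tail -1) + 1 ))
mv "$KEY_DIR/0" "$KEY_DIR/$NEXT"
# Stage a fresh key as 0: 32 random bytes, url-safe base64, like any Fernet key.
head -c 32 /dev/urandom | base64 | tr '+/' '-_' > "$KEY_DIR/0"
# Prune the oldest secondaries so only three keys remain, matching the post.
ls "$KEY_DIR" | grep -v '^0$' | sort -n | head -n -2 | \
    while read old; do rm "$KEY_DIR/$old"; done
# Real tooling must also preserve keystone's file ownership/permissions and
# distribute the same keys to every keystone node before they are used.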

To answer a question that’s come up, there is no outage when you rotate keys, I’ve done five or six rotations including a few in the same day, without any issues.

Performance

I will be doing a full post later on about performance once I have more numbers, but the results so far are that token generation is much faster, while validation is a bit slower. Even if it was about the same, the number of problems and database sync issues that not storing tokens in the DB solves makes them worthwhile. We’re also going to (finally) switch to WSGI and I think that will further enhance performance.

Aftermath

Today one of my colleagues bought a bottle of Fernet-Branca for us. All I can say is that I highly recommend not doing a shot of it. Switching token providers is way less painful. (Video of said shot is here)

by Matt Fischer at June 25, 2015 07:01 PM

OpenStack Blog

Technical Committee Highlights June 25, 2015

Beginners, start your engines

A compute starter kit tag has been approved; it provides a place for a beginner who only wants to get a computing cloud use case started. We discussed some reservations about recommending such a simple starting point, including only using nova-network for inter-VM networks and recommending multi-host for that use case, but we feel the current tagged projects indicate a decent starting point for now. We’ll update it as we see improvements to the starting experience. The projects for an OpenStack starter kit are: cinder, glance, keystone, and nova. Additional tags are being proposed to help with the release mechanisms; a type:service tag and a type:library tag merged this week.

Welcome, new projects

We welcomed new projects to the “we are OpenStack” definition, including:

  • Searchlight, providing indexes and search across cloud resources
  • OS-ansible-deployment (OSAD), deploying OpenStack using Ansible playbooks
  • Solum, managing the application lifecycle and source-to-image workflows
  • Cue Message Broker service project proposal, for deploying and managing message brokers using a REST API

There were several topics we didn’t get to discuss in this meeting due to the longer discussion about the compute starter kit, but we will get to those next week. Check the meeting agenda on the wiki any time you wonder what topics are up for discussion.

Project Team Guide sprint

The sprint for the Project Team Guide was last week, and the authors are going great gangbusters. The goal for this guide is to provide teams a starting point for understanding our philosophy and general thinking about what it means to be an OpenStack project. See the review queue for the work in progress. It’s not published to the web yet, so if you’d like to write or revise anything, propose a patch for review to the project-team-guide repository and build it locally.

Awaiting the M name

The poll for the M release name closed this week and we’re all awaiting the final name selection. Stay tuned to the openstack-dev mailing list for the final M name.

by Anne Gentle at June 25, 2015 05:57 PM

OpenStack Superuser

How OpenStack keeps Nike running smoothly

Nike more or less stumbled into OpenStack.

“We didn’t start out trying to build a private cloud or even necessarily say, 'Hey, we want OpenStack,'” said Peter Bogdanovic, lead architect, tech ops infrastructure at Nike. “We started out with the idea of changing the relationship with managed service providers…We basically wanted to manage our own V-center and V-infrastructure and the dev-ops teams wanted APIs. When we looked at the ways to deliver the APIs to them, we landed on OpenStack as the way that was most obvious.”

Bogdanovic shared his experience on a panel that included Wells Fargo and Adobe discussing OpenStack on VMware in production environments at the OpenStack Summit Vancouver. You can check out the entire 40-minute session online.

Video: https://www.youtube.com/embed/4yv7b5ywylM

Talking about the workloads Nike is running on OpenStack, Bogdanovic said the data center is serving the legacy apps. One of the requirements Nike had when wresting more control over the vSphere infrastructure was that they didn’t want to require any application changes, a requirement that ultimately made those workloads more like pets than cattle, “unfortunately,” he added.

Bogdanovic says that the goal now is to treat it more like cattle and “at least bring some automation to the deployment of these things” to make them shorter-lived. His main “consumer” for OpenStack is the release management group because they own the process by which the application packages are deployed as they move to production.

The transition to OpenStack involved straddling a few hurdles.

“It was hard when it was a managed service, everything would be a ticket,” he said. “There was a lot of resistance over giving up a VM because it took so long to get one created.” They’re currently using Heat paired with the Fog library and although they’ve only been using OpenStack for a short time, Nike is looking to make it a marathon and not a sprint. “We’re really discouraging anybody from using the GUI for anything or even the command line tools…Everything should be checked in. Any change that we make to the infrastructure anywhere should be documented with code and checked into source code management.”

Bogdanovic said that because the company relied so much on managed service providers, bringing those services in-house means measuring up to those same high standards. Teams at Nike are currently focusing on building out server monitoring and incident response capabilities.

“I want [these teams] to be very successful,” he said. “I don't want my phone to ring all the time, but that’s something that still has to be proven in our organization.”

A core ops team of just five people is building it. “We’re going to build mass and then plug into an organization that’s bigger - teams that support the retail stores, etc. There's a bigger organization out there that we plug this into, and then you’re dealing with dozens of people or teams larger than that.”

When asked how he would do it differently in hindsight, Bogdanovic said a “customer-centric” approach would’ve helped speed things along.

“We went at it backwards…We should’ve said, ‘What do our customers want?’ They want these APIs to provision infrastructure and how are we going to provide that?'" That mistake was partly driven by what he termed an “aggressive” stance from the management team above, and could have been tempered by taking more time, he said.

“There was a lot of pressure to just go do it, just get something done.”

Cover Photo by Photon // CC BY NC

by Nicole Martinelli at June 25, 2015 04:18 PM

Deutsche Telekom and TDBank kick off OpenStack Day Israel

At the sixth OpenStack Day Israel featuring more than 25 speakers across three tracks, Deutsche Telekom and TDBank set a tone of real telco operator and enterprise adoption, each presenting their OpenStack case studies and plans for the future.

Organized by Nati Shalom of GigaSpaces and Avner Algom of IGTCloud, OpenStack Israel is a vibrant community made up of technologists at major companies like Cisco and SAP, startups like Stratoscale, and large users like LivePerson which is running 12,000 physical cores in production. Held at former movie theater Cinematheque Tel Aviv, sponsors included Rackspace, Red Hat, IBM, Hewlett-Packard, Kontron and Mellanox.

Axel Clauberg at Deutsche Telekom spoke about the “software-defined operator” and how telcos that have previously been slow moving—taking one to two or more years to launch a new product—must move faster if they want to compete in the current market. Customers want sexy products, and they are even weighing big purchases like cars based on a new set of requirements like connectivity. To get ahead, Deutsche Telekom is embracing OpenStack as the platform for network functions virtualization (NFV), which is basically the ability to dynamically change the function of different network appliances, allowing them to change and deliver new services much more quickly.

The crowd at OpenStack Day Israel. Photo: Lauren Sell.

Services like WhatsApp and Kakao have greatly reduced the use of text messages, leaving telco operators stuck with hefty investments in text-based network appliances. The vision for NFV is the ability to change the function of those network appliances, so as text usage declines or increases, they can switch to supporting 4G data without making a new hardware investment.

Deutsche Telekom is actively pushing the OpenStack community to support NFV through their contributions to the community and Telco Working Group, and, more importantly, proving it in production. At Mobile World Congress in March 2015, they announced their first production NFV workload running OpenStack, a cloud VPN service available in Croatia, Slovakia and Hungary.

In partnership with Cisco, Deutsche Telekom brought the new cloud VPN service to market in less than three months, an incredibly fast time frame for the industry. They are now focused on pan-European activities and moving additional production services onto OpenStack, which we also heard about at the recent OpenStack DACH Day in Berlin.

Marco Ughetti from Telecom Italia shared their plans for OpenStack and NFV. They have been developing an OpenStack geo-distributed test bed along with Italian universities since 2013, and have also been involved in the OpenStack Telco Working Group to help drive forward the NFV use case.

The next user to take the stage was Srinivas Sarathy, head of cloud engineering at TDBank. Based in Canada, TDBank has a $93 billion market cap, nearly 43,000 employees and more than 2,500 retail banking locations, not including its investment institution. Sarathy joined just three months ago from another financial institution where he was implementing a single-vendor, proprietary cloud stack. TDBank attracted him with its open source approach, presenting the opportunity to build a best-of-breed financial services cloud with OpenStack. We initially heard about TDBank’s cloud strategy from Graeme Peacock at the OpenStack Summit Vancouver.

TDBank is embracing a hybrid cloud strategy, building their own private cloud on premise, as well as utilizing hosted private cloud at Rackspace. Additionally, they are using Cloudify for the orchestration layer, SaltStack for configuration management, CloudCruiser for billing and Red Hat Enterprise Linux OpenStack Platform (RHEL OSP) supported by Rackspace as their distribution. They developed their own self-service portal called Storm, so they could customize it for their company and specific use cases. They have been working with niche consulting firm RiskFocus to help integrate these different pieces as well as drive culture change within their organization.

On the culture front, TDBank has aggressively adopted a cloud-first policy. Since January 2015, they have trained over 1,000 developers internally in writing cloud-ready applications utilizing capabilities like TOSCA blueprints, an important open standard they have embraced. They have worked closely with the different lines of business to make their cloud offerings attractive, both in price point and capabilities, to developers across the organization. While the policies and processes are key to driving organizational change, building a cloud that pulls developers to the platform is equally critical. “Cloud first is easy when the cloud works really well,” said Sarathy.

Asked about information security, Sarathy said it is obviously a very critical issue for the financial services industry. He believes cloud has helped improve the security model by increasing the layers of zoning and the logical separation capabilities in their multi-tenant, multi-line-of-business environment without having to sacrifice the efficiency of the cloud.

Cover Photo by Reinhardt König // CC BY NC

by Lauren Sell at June 25, 2015 04:18 PM

Why LivePerson goes hand-in-hand with OpenStack

If you’ve ever engaged with a sales or support rep through a chat window on a website, there’s a good chance you were using the LivePerson real-time chat platform.

Their cloud is also expanding rapidly. LivePerson handed out some impressive metrics at the recent OpenStack Day Israel — their cloud has 12,000 physical cores, 6,000 virtual servers, and over 20,000 virtual cores.

Koby Holzer, director of cloud platforms, spoke at a fireside chat at the sixth edition of the event. Liveperson is headquartered in New York City, but the cloud engineering team is part of a large office that calls Israel home.

The LivePerson team at OpenStack Day Israel. Photo: Lauren Sell.

“OpenStack is a very big deal at LivePerson,” said Holzer. “We want to engage with other large OpenStack users to learn more about what they’re doing as we continue to build out our cloud environment."

A very early adopter, Holzer jumped into OpenStack almost three-and-a-half years ago. At the time, LivePerson did not even have virtualization. They were just using standard Oracle database, web servers and Microsoft Windows. To start their journey, they stood up an OpenStack cloud and a VMware virtualized environment side-by-side, so they would have a backup plan. OpenStack proved to be the right direction for them, and that cloud has since expanded dramatically.

LivePerson now runs OpenStack in seven datacenters, six of which are in production — two each in North America, Europe and Asia Pacific. They’ve managed this supersize scale-up while maintaining their service level agreement with customers — the lifeblood of any software-as-a-service (SaaS) provider. There are five people on their cloud engineering team operating the OpenStack environment, and several dozen total including staff dedicated to around-the-clock support, legacy storage and network engineers.

Holzer says the biggest challenge they’ve faced is upgrading, especially since they were such an early adopter. LivePerson recently upgraded 50 percent of their cloud from Havana to Icehouse, and Holzer said it was a less rocky transition than the early days of migrating from Essex to Folsom. He expects it will continue to smooth out with future releases, but for now it requires a lot of planning and staff time.

Looking forward, Holzer is interested in the Ironic bare metal service and containers, including the use of the new OpenStack Magnum project and Kubernetes. But he added that LivePerson will always have to walk the line between the desire for new features and maintaining their service level.

Cover Photo by Wilma // CC BY NC

by Lauren Sell at June 25, 2015 04:18 PM

At CERN, storage is the key to the universe

It takes a lot of space to unravel the mysteries of the universe. Just ask the operations team at European particle physics laboratory CERN, who face the weighty task of storing the data produced at the Large Hadron Collider (LHC.)

After a two-year hiatus, scientists recently fired up the 17-mile circular collision course that lies under a bucolic patch of the French-Swiss border near Geneva. It’s shaping up to be a great run - they’ve already broken a speed record. They’ll be investigating such lofty subjects as the early universe, dark matter and the Higgs boson or “God particle”— but there’s a reason the real action at CERN was dubbed “turning on the data tap.”

The data storage required to keep pace with the world’s top physicists puts the “ginormous” in big data. For starters, the data gets crunched before protons ever whirl around the circuit. Researchers at four particle detectors (ATLAS, CMS, LHCb, ALICE) first simulate what should happen according to the standard model of physics, then fire up the collider, observe the smashups and compare results from both.

“If it’s different, that means there’s some new physics there, some new particle or something that we didn’t understand,” said Dan van der Ster of the CERN IT data and storage group. “Getting significance in these two steps requires a lot of data. In the simulation step there are CPUs churning out petabytes of Monte Carlo data, but the real data is also on a petabyte scale.”

He gave a run-down of how CERN is using distributed object store and file system Ceph at the OpenStack Summit Vancouver, including Linux tuning tips, thread-caching malloc latency issues, VM boot failures and a mysterious router failure that, however, caused no reported data corruption and no data scrub inconsistencies. (Whew!) (A video of his 40-minute talk is available on YouTube.)

Video: https://www.youtube.com/embed/OopRMUYiY5E

First, the creation story: CERN moved all of its IT infrastructure into virtual machines on OpenStack last year, and they’ve used OpenStack in production since summer 2013. Currently, all the IT core services are on OpenStack and Ceph, and most of the research services are, too. (One exception: CERN’s big batch farms are still not virtual.) The storage engineer also provided some mind-expanding numbers: they’ve currently got close to 5,000 hypervisors, 11,000 instances, 1,800 tenants and roughly that number of users.

After evaluating Ceph in 2013, the research center deployed a 3-petabyte cluster for Cinder/Glance. CERN has shared some of its OpenStack set-up in other talks: “Deep dive into Cern cloud infrastructure,” “Accelerating science with Puppet,”“Running Compute and Data at Scale.”

“We picked Ceph because it had a good design on paper, it looked like the best option for building block storage for OpenStack,” he said. “We called Ceph our organic storage platform, because you could add remote servers with no downtime, ever.” They ran a 150-terabyte test that had all flags flying, so they went ahead and deployed it. Initially deployed with Puppet using eNovance’s Ceph module, but today they use the Ceph disk deployment tool, so it’s “kind of customized,” he added.

Everything at CERN is super-sized. “Our requirements are to grow by 20 to 30 petabytes per year,” he said. One of the challenges they faced is that because it’s a research lab, everything's effectively “free” for the users. That means that scientists are constantly pushing for more input/output operations per second (IOPS). “We had no objective way to decide if the user gets more IOPS,” so he says they now have to prove they have a performance problem before cranking that capacity up. To push boundaries even further, for a couple of weeks van der Ster borrowed 150 servers of 192 terabytes each and it worked — “with some tuning,” he said.

“For less than 10-petabyte clusters, Ceph just works, you don’t have to do much,” van der Ster said.

You can keep up with the latest on OpenStack at CERN through this blog: http://openstack-in-production.blogspot.com/

Cover Photo by Marceline Smith // CC BY NC

by Nicole Martinelli at June 25, 2015 04:18 PM

Loïc Dachary

OpenStack instance name based on its IP address

A DNS has a set of pre-defined names such as:

...
the-re018 10.0.3.18
the-re019 10.0.3.19
...

If nova fixed-ip-reserve is denied by the OpenStack policy and neutron net-create is not available to create a network with the 10.0.3.0/24 subnet that is exclusive to the OpenStack tenant, the naming of the instance must be done after openstack server create completes.
A cloudinit user-data file is created with:

#cloud-config
bootcmd:
 - url=http://169.254.169.254/2009-04-04/meta-data ; \
   ( curl --silent $url/hostname | sed -e 's/\..*//' ; \
     printf "%03d" $(curl --silent $url/local-ipv4 | \
        sed -e 's/.*\.\(.*\)/\1/') \
   ) | \
   tee /etc/hostname
 - hostname $(cat /etc/hostname)
preserve_hostname: true

Where $url/hostname retrieves the prefix of the hostname (multiple instances can have the same name, so two simultaneous instance creations won’t race), $url/local-ipv4 gets the IPv4 address, keeps the last digits (sed -e ‘s/.*\.\(.*\)/\1/’) and pads them with zeros if necessary (printf “%03d”). The hostname is stored in /etc/hostname and displayed in the /var/log/cloud-init.log logs (tee /etc/hostname) for debugging. This is done early in the cloudinit sequence (bootcmd) and the default cloudinit setting of the hostname is disabled (preserve_hostname: true) so that it does not override the custom name set with hostname $(cat /etc/hostname).
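As a quick sanity check of that sed/printf pipeline (illustration only, run locally with the 10.0.3.19 address used in this post):

# Last octet of the address, zero-padded to three digits as in the bootcmd above.
printf "%03d\n" "$(echo 10.0.3.19 | sed -e 's/.*\.\(.*\)/\1/')"    # prints 019
# Prepending the hostname prefix gives the final name, e.g. the-re019.
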
The instance is created with

$ openstack server create \
  --image 'ubuntu-trusty-14.04' \
  --key-name loic \
  --flavor m1.small \
  --user-data user-data.txt \
  -f json \
  --wait \
  the-re
... {"Field": "addresses", "Value": "fsf-lan=10.0.3.19"} ...
... {"Field": "id", "Value": "cd1a8a0f-83f9-4266-bd61-f3e2f583d59d"} ...

Where user-data.txt contains the above cloudinit lines. The IPv4 address returned by openstack server create (10.0.3.19) can then be used to rename the instance with

$ openstack server set --name the-re019 cd1a8a0f-83f9-4266-bd61-f3e2f583d59d

where cd1a8a0f-83f9-4266-bd61-f3e2f583d59d is the unique id of the instance, which is preferred over the the-re name prefix because the prefix could race with another identical openstack server create command.
To verify that the instance name matches the IPv4 address that is pre-set in the DNS:

$ ssh ubuntu@the-re019 hostname
Warning: Permanently added '10.0.3.19' (ECDSA) to the list of known hosts.
the-re019

Thanks to Josh Durgin for suggesting this solution.

by Loic Dachary at June 25, 2015 10:48 AM

June 24, 2015

OpenStack Superuser

Forrester says: ready, set, OpenStack!

A recent report from Forrester gives a major boost to OpenStack adoption, calling its "viability and presence in the market irrefutable.”

You can download the entire report for a limited time on the OpenStack.org website.
 If you’ve been working with OpenStack for more than a few months, the takeaways will sound familiar. However, if you’re an infrastructure/ops person interested in adopting OpenStack and need fuel for the fire, you’re in the right place.

Forrester analysts spoke to eight OpenStack end-users and 10 OpenStack ecosystem vendors to discuss the best practices and common pitfalls faced when adopting OpenStack. They also mined results from the latest OpenStack user surveys, which you can find on Superuser broken down into deployments, business drivers and app developer insights.

Here are the highlights of Forrester’s findings:

  • OpenStack is for “regular” companies. Like Disney, BMW, and Wal-Mart, not necessarily your snazzy digital startups, in other words. The message: this means your company, too.
  • These companies use OpenStack because it’s easy, cheap, prevents vendor lock-in and offers self-service developer access. Those are the easy buy-ins, but the report underlines the staying power of OpenStack. “Its adoption supports a much larger transformation toward agility and development efficiency and is not tied to virtualization or consolidation efforts.”
  • Sure, OpenStack has flaws, but get over it. “All software has its issues. Open source efforts typically suffer from transparency where issues and bugs get blown out of proportion,” says author Lauren E. Nelson, adding that OpenStack adopters know this and push forward anyway.

The most detailed part of the report provides blueprints for how to build your OpenStack — the engine of this adoption, if you will. Start with determining your consumption model (direct or distro), using the OpenStack Marketplace for an updated list of vendor options. Then, it’s time to sketch out what your staffing needs will be — lining up systems architects, operators with open source/Linux chops, an on-call infrastructure service, and modern developers with infrastructure experience.

The most non-obvious part of the team: support from a C-level executive. Forrester advises against going rogue. “The investment that you need to make for an OpenStack initiative is not trivial. Make sure the relevant executive understands the full value, and make this a very early priority because it will help you avoid major economic and political pain as you progress.”

Winding up the report, Forrester’s page of overall recommendations can be summed up thusly: learn open-source culture. Go to the Summits, participate in the community, upstream your custom code and get over the “shock” of publicly-documented bugs. “Get comfortable looking at transparency as an advantage rather than a sign of instability.”

One final word - even if you skim through the report, hit the endnotes: there are two pages of helpful links to articles and other talks that will bolster OpenStack adoption.

Superuser is always looking for stories of OpenStack adoption, contact editor@superuser.org to find out more.

Cover Photo by Daniel Incandela // CC BY NC

by Superuser at June 24, 2015 09:30 PM

Mirantis

Mirantis OpenStack 6.1 offers solid foundation for PaaS deployments, multi-hypervisor clouds, and scaling

The post Mirantis OpenStack 6.1 offers solid foundation for PaaS deployments, multi-hypervisor clouds, and scaling appeared first on Mirantis | The #1 Pure Play OpenStack Company.

Mirantis OpenStack 6.1 is the newest release of our OpenStack cloud, which is a 100% open software environment with open APIs. It provides the Fuel tool for easy deployment and lifecycle management, implements new features and partner drivers, includes production-ready core packages and a point-and-click interface to easily deploy applications from the OpenStack Community Application Catalog based on Murano.

Fuel’s task-based deployment gives you granular control when setting up your cloud environment, providing a framework that allows you to configure OpenStack components to your specifications as you go, rather than deploying a monolith that you’ll need go back and modify.

This release also supports:

  • Deploying workloads from the OpenStack Community Application Catalog 
  • Flexibility of technology choice including vSphere and KVM hypervisors in a single cloud
  • Resilience at scale with 200-node support out-of-the-box
We’ll go into more detail about these features and discuss other infrastructure enhancements in the Mirantis OpenStack 6.1 release that can increase your cloud performance. We’ll also address new out-of-the-box abilities to scale.

Deploy workloads that meet your needs, fast

To help you get the most out of your cloud as expediently as possible, Mirantis 6.1 supports deploying workloads from the OpenStack Community App Catalog. The App Catalog provides on-demand workload deployment on an OpenStack cloud and allows you to share templates for setting up apps and services quickly and easily, with infrastructure settings and scripts to reconcile dependencies. 

What the OpenStack Community App Catalog provides

The OpenStack Community App Catalog enables access to ready-to-use, preconfigured applications that you can deploy with the click of a button, provided as:

  • Ready-to-deploy Murano open source or commercial application packages
  • Heat templates, which are YAML files, for orchestrating infrastructure services such as networking, VMs, and cloud storage
  • Glance images delivered as mountable files with a VM and bootable operating system
  • Application bundles that deliver pre-configured, ready-to-deploy combinations of applications, Heat templates, and Glance images 

The launch of the App Catalog signals OpenStack’s emphasis on meeting the needs of application providers and users in addition to promoting infrastructure development and deployment. With Mirantis OpenStack 6.1, you can use the Community App Catalog to deploy workloads and Platform as a Service (PaaS) frameworks on your Mirantis OpenStack cloud, including:


These solutions, along with others from across the OpenStack community, represent the types of applications and services available through the App Catalog, which includes both free and licensed software, as well as commercial software. Any packages requiring payment or special licensing display that information in the specific packages. In addition, you can contribute your apps to the catalog, though it’s not a requirement.

Now that you know more about the developments in application availability and deployment, we’ll turn to new infrastructure improvements.

Maximize infrastructure flexibility

Mirantis OpenStack 6.1 provides greater infrastructure flexibility that enables you to expand your cloud capabilities with support for a variety of hypervisors, new certified Mirantis partners, a broad range of plugins and drivers, and optimization for Ubuntu as well as CentOS.

VMware support

Mirantis OpenStack 6.1 offers a consistent, simple API and self-service provisioning to extend the value of the VMware infrastructure and supports the use of VMware tools. New additions to support robust VMware options allow you to:

  • Run vSphere and KVM simultaneously to operate multi-hypervisor clouds. See screencast (7 mins).
  • Perform functions such as automated cluster scaling via integration with vCenter metering
  • Pool resources and automate VM placement across one or more vSphere clusters

More Infrastructure Choices

Along with the developments that enable multi-hypervisor clouds and automated cluster scaling and VM placement, Mirantis OpenStack 6.1 also includes new certified partners in the Mirantis Unlocked Technology Solutions program. Mirantis Unlocked ensures product compatibility with Mirantis OpenStack and includes support, provides driver testing and certification, and simplifies configuration and product deployment with Fuel.

New certified partners for Mirantis 6.1 include:

Check out more information on plugins and how to create your own Fuel deployment plugins

Enhance scalability and resiliency

With the production-ready functionality in Mirantis OpenStack 6.1, you’ll be able to expand your cloud capabilities, so here we address scaling enhancements as well as improvements in monitoring, analysis, and system functionality.


Scale it on up

With the features that increase your ability to scale, Mirantis OpenStack 6.1 also offers out-of-the-box certified deployment for up to 200 nodes, an extension that builds on our existing comprehensive hardening of OpenStack at scale. The release also supports more than 200 nodes with tuning.

Take advantage of enhanced infrastructure monitoring and analysis

To increase resiliency, new Fuel plugins for log analysis, monitoring, analytics, and visualization are available in the Mirantis OpenStack 6.1 release, including:

See the Monitoring Best Practices Guide for more information.

Check out new system functionality

System functionality improvements in the Mirantis OpenStack 6.1 release include:

  • Streamlined patch delivery and deployment using familiar Linux package management tools, such as apt-get and yum. See screencast (25 mins).
  • Documentation and tooling for experimental in-service upgrades
  • Optimization for running on Ubuntu 14.04.1, in addition to CentOS 6.5
  • Flexibility to specify deployment source repositories for the host OS, patches, and more
  • Support for the Juno maintenance release 2014.2.2
See the Release Notes for a complete list of features and improvements in Mirantis OpenStack 6.1.

Learn more and download Mirantis OpenStack 6.1 here for a nice evolution in ease-of-use and heightened functionality. You can also take advantage of the new Mirantis OpenStack training course and the “Use Case Validation” professional services offering that validates OpenStack for your specific use/business case before adopting it.

The post Mirantis OpenStack 6.1 offers solid foundation for PaaS deployments, multi-hypervisor clouds, and scaling appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Denise Walters at June 24, 2015 05:17 PM

Source repository flexibility in OpenStack deployment using Fuel

The post Source repository flexibility in OpenStack deployment using Fuel appeared first on Mirantis | The #1 Pure Play OpenStack Company.

When you’re deploying OpenStack, you’re dealing with a lot of code.  Some of it is part of OpenStack itself, some of it is part of host operating systems, and some of it is part of various other components, but all of it needs to come from somewhere — often, a specific repository.  With the 6.1 release of Mirantis OpenStack, the configurations for operating system, OpenStack, and user definable custom repositories are exposed through the Fuel UI, making it easier for users to define specific repositories for packages used in their OpenStack deployment.

One benefit of this control is the ability to introduce a clean separation between different classes of packages, with a way to independently manage upstream Ubuntu repositories, Mirantis repositories, and user’s custom repositories.  

But what if you don’t want to be dependent on external repositories at all?  Mirantis OpenStack 6.1 also includes a brand new tool, fuel-createmirror, which enables you to download the necessary packages from configured public repositories to the Fuel Master node, creating a local repository mirror Fuel can then pull from.

Conveniently, the new tool will also update the repository URI configurations for new or undeployed environments to this new central repository within Fuel for you; you don’t have to manually change the configuration files anymore. In fact, Fuel also includes the ability to directly add repositories right from the UI, as in Figure 1.

Figure 1.  Fuel centralized repositories for Ubuntu and Mirantis OpenStack

Why a local mirror can be crucial for OpenStack deployment

There are many drivers that may push the need for a local, internal set of repositories. Maybe your data centers don’t provide access to the Internet for the compute clusters, or there is a requirement for added safety and validation in auditing downloaded packages first, then hosting repositories locally.  Or maybe you have the need to have custom package repositories for use in deployment.

Of course all this control doesn’t do you much good if you need to choose a “one size fits all” strategy; fortunately, Mirantis OpenStack 6.1 includes the flexibility to provide different repository settings for different environments. For example, in a development environment, you might test new homegrown packages before wider distribution into production.

Whatever the driver, creating and maintaining an in-house repository has several benefits, including:

  • Increased security and control provided by using packages downloaded and verified by internal audits and taking advantage of the latest bug and security fixes (Environments, like financial institutions, that have very strict compliance policies will appreciate this capability!)

  • The convenience of being able to include your own custom packages for the repository

  • Elimination of the requirement for OpenStack nodes to have Internet access

  • Only downloading OpenStack update packages once for deployment to potentially hundreds of nodes

  • Decoupling of the base operating system files, which helps the Mirantis OpenStack and Fuel installation ISO fit on a single 4GB USB flash drive (Think about how much more convenient this makes installing to bare metal!)

To get all of these benefits, all you need to do is run the fuel-createmirror command; offload and stage the packages in one fell swoop. For more information on the fuel-createmirror tool, read the documentation here.
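
As a minimal sketch of that step (run on the Fuel Master node; the bare invocation below reflects only what is described above, so check the fuel-createmirror documentation for version-specific options before relying on it):

# Mirror the configured public Ubuntu/Mirantis repositories onto the Fuel Master
# and repoint new or undeployed environments at the local copies.
fuel-createmirror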

The post Source repository flexibility in OpenStack deployment using Fuel appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Joseph Yep at June 24, 2015 05:14 PM

Swapnil Kulkarni

Attending FUDCon, here’s my wishlist, whats yours?

FUDCon is the Fedora Users and Developers Conference, a major free software event held in various regions around the world, usually annually per region. FUDCon is a mix of sessions. There are talks that range from technology introductions to deep dives, hands-on workshops, BoFs where like-minded people get together to discuss a project or technology and hackfests where contributors work on specific initiatives. Topics include infrastructure, feature development, community building, general management and governance, marketing, testing and QA, packaging, etc. FUDCon is always free to attend for anyone in the world.

See more details about FUDCon Pune 2015

Venue : MIT College of Engineering

Schedule: Friday, June 26, 2015 – 09:00 to Sunday, June 28, 2015 – 17:30

I am attending and presenting at FUDCon, here’s my schedule,

Introduction to Software Defined Storage – Ceph and GlusterFS
Geo-Replication and Disaster Recovery : GlusterFS
Openstack on Fedora, Fedora on Openstack: An Introduction to cloud IaaS
Fedmsg: The message bus of Fedora Infrastructure
Introduction to CentOS Cloud SIG
Contributing to OpenStack 101
Fedora Atomic
Hands-On Kubernetes
Getting Started with IOT development using Fedora on ARM
Orchestration of Docker containers using Openshift v3 and Kubernetes

Find the complete schedule of talks and workshops here

See you at FUDCon :) Cheers!

by coolsvap at June 24, 2015 07:07 AM

Aptira

OpenStack: Product or… yeah, nah

It’s not my turn to aptirablog but as they say: “Someone is wrong on the Internet”. I’m going to start with some fairly uncontroversial statements about OpenStack: 1) OpenStack has value to customers. 2) Program level of control of OpenStack...

The post OpenStack: Product or… yeah, nah appeared first on Aptira OpenStack Services in APAC, Australia, India.

by Roland Chan at June 24, 2015 02:45 AM

June 23, 2015

OpenStack Superuser

How to craft a successful OpenStack Summit proposal

OpenStack summits are conferences for developers, users and administrators of OpenStack Cloud Software.

The community has plenty to say: there were over 1,000 proposals for around 300 talks at the Vancouver Summit last month. For the upcoming Summit in Tokyo, there are 17 Summit tracks, from community building and security to hands-on labs. The deadline for proposals is July 15.

To improve your chances, the Women of OpenStack held a webinar with tips to make your pitch successful. These tips will give anyone who wants to get on stage at this or future Summits a boost. A video of the 50-minute session is available here.

Proposals go from an idea on the back of a cocktail napkin to center stage in a few steps. After you've submitted the proposal, the OpenStack community reviews and votes on all of them. For each track, a lead examines votes and orchestrates them into the final sessions. Track leads see where the votes come from, so if a company stuffs the virtual ballot box to bolster a pitch, they can correct that imbalance. They also keep an eye out for duplicate ideas, often combining them into panel discussions.

Standing tall in the room session featuring (from left to right) Beth Cohen, Nalee Jang, Shilla Saebi, Elizabeth K. Joseph, Radha Ratnaparkhi and Rainya Mosher

Find your audience

Rapid growth of the OpenStack community means that many summit attendees are relative newcomers. At the previous two Summits, around 50-60 percent were first-time attendees.

Attendee data from the OpenStack Summit Vancouver

For each of those Summits, developers made up about 30 percent of attendees; product managers, strategists and architects made up roughly another quarter. Users, operators and sys admins were about 15 percent; CEOs, business developers and marketers about 10 percent each with an “other” category coming in under 10 percent.

“Don’t make knowledge assumptions,” says Anne Gentle, who works at Rackspace on OpenStack documentation and has powered through 11 Summits to date. But you don’t have to propose a talk for beginners, she adds, “be ready to tackle something deeply technical, don’t limit yourself.”

Consider the larger community, too. Your talk doesn’t necessarily have to be about code, says Niki Acosta of Cisco, adding that recent summit panels have explored gender diversity, community building and startup culture.

Set yourself up for success

There are some basic guidelines for getting your pitch noticed: use an informative title (catchy, but not cute — more below), set out a problem and a learning objective in the description, match the format of your talk to a type of session (hands-on, case study), make sure the outline mirrors what you can actually cover in the time allotted and, lastly, show your knowledge about the topic.

Be relevant

Remember that you’re pitching for an OpenStack Summit, not a sales meeting or embarking on a public relations tour. Be honest about who you work for and push your pitch beyond corporate puffery.

Diane Mueller, who works at Red Hat on OpenShift, spells it out this way. “I have corporate masters and we have agendas about getting visibility for our projects and the work we’re doing. But the Summit is all about OpenStack.” Instead of saying “give me an hour to talk about platform-as-a-service,” highlight an aspect of your business that directly relates to OpenStack. “It may be about how you deploy Heat or Docker," she adds, but it’s not a vendor pitch.

While you want to keep the volume on corporate-speak low, all three speakers agreed that the bio is the place to get loud. Make sure to highlight your knowledge of OpenStack and any contributions you’ve made to the community. “Contributors get respect and priority,” Mueller says. “So whatever you’ve done — organizing, documentation, Q/A, volunteering at events — make sure you mention it.”

Be clear and complete

State your intent clearly in the abstract, title and description. The abstract should highlight what the “attendee will gain, rather than what you’re going to say,” Acosta says. “Focus on the voter and the attendee rather than making it all about you.” If English is your second language, proofread closely before submitting. If you’re struggling with the writing, make sure to add links for background, complete your bio and upload a photo.

Gentle notes that although the team regularly gets pitches from around the world and works with speakers whose native tongue isn’t English, making your proposal as clear as possible goes a long way to getting it accepted. For examples, check out the sample proposals at O’Reilly.

“I’ve read some really bad abstracts,” says Mueller. “The worst ones are just one line that says, ‘I’m going to talk about updates to Project X.’”

Nervous? Don’t fly solo

If you’ve got great ideas for a talk but hate the thought of standing up alone in front of an audience, there are a few workarounds. Try finding a co-presenter, bringing a customer or putting together a panel.

“Reach out to people who have the same role as you do at different companies,” says Acosta. “There’s nothing more exciting than a panel with competitors who have drastically different methodologies and strategies.”

Toot your own horn

Make your title captivating — but not too cute — and social-media ready. Voting for your proposal and attendance at your session often depend on the strength of the title.

“Tweet early, tweet often,” says Gentle. “I always get a little nervous around voting time, that’s natural. But trust in the process.”

Start stumping for your proposal as soon as you submit it. Your boss, the PR team and product manager should all be on board; letting your company know early may be key to getting travel approved. Network with your peers to get the word out, too. Finally, remember to vote for yourself. You don’t want to miss out by just one vote.

And, if you don’t get accepted this time, keep trying.

The rate of rejection is “quite high,” Acosta admits. “Don’t be discouraged. It doesn’t mean that your session topic wasn’t good. It just means that yours didn’t make it this time.”

Photos: lead CC-licensed, thanks M1ke-Skydive on Flickr; Standing tall in the room session at the Vancouver Summit courtesy of the OpenStack Foundation.

by Nicole Martinelli at June 23, 2015 09:45 PM

Tesora Corp

We’re hiring, multiple positions!

Interested in joining the Tesora Team? We are looking for talented individuals to fill the following job positions: DevOps Engineer, Account Manager and Marketing Specialist. All of these positions are located at our Cambridge office. We would love to hear more about you and why you think you’d be a perfect fit for one of the positions. Please e-mail your […]

The post We’re hiring, multiple positions! appeared first on Tesora.

by Leslie Barron at June 23, 2015 08:20 PM

DreamHost

From OpenStack to DreamHost: Welcome to the Team, Stefano!

In the Fall of 2011, my wife, 8-month-old daughter, mother, and two dogs jammed themselves into a Honda in Atlanta, Georgia and started driving west to meet me in sunny Los Angeles, California. I had just started a new job that was exciting enough to uproot my family, move across the country, and start a new life thousands of miles away from my family and friends.

Jonathan, VP of Cloud & Development, and his daughter Colette, VP of Everything Adorable

What motivated me to join the DreamTeam? Our audacious purpose! DreamHost’s mission is to enable the world’s entrepreneurs and developers to create, share, and prosper on the internet. We get to dream big, and then watch our dreams be realized through hard work and collaboration. Over the past four years, I’ve helped guide DreamHost’s Cloud products from pie-in-the-sky concepts into real services with thousands of customers. I’m incredibly proud of the DreamTeam for their hard work and dedication to our mission.

Today we’ve got two awesome Cloud services – DreamObjects and DreamCompute. Both are built on open source software, run on open source operating systems, and are powered by incredible communities of developers. DreamObjects enables storage and delivery of data using industry-standard APIs, while DreamCompute is a powerful cloud hosting platform built on OpenStack.

So, what’s next? Well, it’s time we reached out to the masses of developers and entrepreneurs – the builders of the internet; makers of all things great and creative. In an effort to encourage integration with the DreamCloud, and to help developers connect with each other, I’m excited to announce the hiring of Stefano Maffulli as DreamHost’s Director of Cloud Community and Marketing.

Looking forward to new Open Source adventures!

Stefano comes to us from the OpenStack Foundation, where he helped drive massive growth and adoption of OpenStack, the fastest growing open source project in history.

Stefano is just getting started at DreamHost, and will be reaching out to you, our DreamCloud users, to find out what you’re building, what technologies you’re using, and how we can make DreamCloud services the best place for you to connect with your peers and create amazing things.

by Jonathan LaCour at June 23, 2015 06:59 PM

Stefano Maffulli

So long OpenStack community, see you soon

Let’s call it a “ciao” and not an “addio.”

After almost four years (un)managing the OpenStack community, I have decided to move on and join DreamHost’s team to lead the marketing efforts of the DreamCloud. The past three years and 10 months have been amazing: I joined a community of about 300 upstream contributors and saw it grow under my watch to over 3,600. I knew OpenStack would become huge and influence IT as much as the Linux kernel did, and I still think that’s true. I’m really proud to have done my part to make OpenStack as big as it is now.

During these years, I’ve focused on making open source community management more like a system, with proper measurement in place and onboarding programs for new contributors. I believe that open source collaboration is just a different form of engineering, and as such it should be measured correctly in order to be managed better. I am particularly proud of the Activity Board, one of the most sophisticated systems for keeping track of open collaboration. When I started fiddling with data from the developers’ community, there were only rudimentary examples of open source metrics, published by the Eclipse Foundation and the Symbian Foundation. With the help of researchers from the University of Madrid, we built a comprehensive dashboard for weekly tracking of raw numbers and quarterly reports with sophisticated, in-depth analysis. The OpenStack Activity Board may not look as pretty as Stackalytics, but the partnership with its developers makes it possible to tap into the best practices of software engineering metrics. I was lucky enough to find good partners in this journey, and to provide an example that other open source communities have followed, from Wikimedia to Eclipse, Apache and others.

OpenStack Upstream Training is another example of a great partnership: I was looking into a training program to teach developers about open source collaboration when I spoke in Hong Kong to an old friend, Loic Dachary. He told me about his experiment and I was immediately sold on the idea. After a trial run in Atlanta, we scaled the training up for Paris and Vancouver, and community members have already run it twice in Japan. I’m sure OpenStack Upstream Training will also be offered in Tokyo.

It’s not a secret that I can’t stand online forums and that I consider mailing lists a necessary evil. I set up Ask OpenStack hoping to provide a place for users to find answers. It’s working well, with a lot of traffic in the English version and a lot less in the Chinese one. My original roadmap was to provide more languages, but we hit some issues with the software powering it (Askbot) that I hope the infra team and the excellent Marton Kiss can solve rapidly.

On the issue of diversity, both gender and geographic, I’m quite satisfied with the results. I admit that these are hard problems that no single community can solve but each can put a drop in the bucket. I believe the Travel Support Program and constant participation in Outreachy are two such drops that help OpenStack be a welcoming place for people from all over the world and regardless of gender. The Board has also recently formalized a Diversity working group.

Of course I wish I had done some things better, and faster. I’m sorry I didn’t make the CLA easier for casual and independent contributors; I’m glad to see the Board finally taking steps to improve the situation. I also wish I had delivered the OpenStack Groups portal earlier and with more features, but the dependency on OpenStackID and other projects with higher priorities delayed it a lot. Hopefully that portal will catch up.

I will miss the people at the OpenStack Foundation: I’ve rarely worked with such a collection of smart, hard-working people who are also fun to be around. It’s a huge privilege to work with people you actually want to go out with, to talk about life, fun, travel, beer and wine, and not just work.

When we Italians say “ciao,” it means we’re not saying goodbye for long.

So long, OpenStack community, see you around the corner.

by stefano at June 23, 2015 03:07 PM

Gorka Eguileor

Cinder Volume Backup Automation

In my previous post on OpenStack’s volume backups I gave an overview of the current status of Cinder’s Backup service, and I mentioned that some of its current limitations could easily be overcome by scripting a helper tool. In this post I’m going to explain different options for creating such a script and provide one as […]
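
To give a flavour of what such a helper could look like, here is a minimal sketch, not the script from the post: it simply backs up each available volume ID passed on the command line and polls Cinder until the backup completes (the display-name prefix and polling interval are arbitrary choices).

#!/bin/bash
# Minimal illustration: back up each Cinder volume ID given as an argument.
set -e
for vol in "$@"; do
    backup_id=$(cinder backup-create --display-name "auto-${vol}" "${vol}" \
                | awk '/ id /{print $4}')
    # Poll until the backup leaves the "creating" state
    while [ "$(cinder backup-show "${backup_id}" | awk '/ status /{print $4}')" = "creating" ]; do
        sleep 10
    done
done

A real helper would also need to check the final status of each backup and work around the service’s current limitations, which is where a fuller script comes in.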

by blog at June 23, 2015 12:55 PM

Nir Yechiel

IPv6 Prefix Delegation – what is it and how is it going to help OpenStack?

IPv6 offers several ways to assign IP addresses to end hosts. Some of them (SLAAC, stateful DHCPv6, stateless DHCPv6) were already covered in this post. The IPv6 Prefix Delegation mechanism (described in RFC 3769 and RFC 3633) provides “a way of automatically configuring IPv6 prefixes and addresses on routers and hosts” – which sounds like yet another IP assignment option. How does it differ from the other methods? And why do we need it? Let’s try to figure it out.

Understanding the problem

I know that you still find it hard to believe… but IPv6 is here, and with IPv6 there are enough addresses. That means that we can finally design our networks properly and avoid using different kinds of network address translation (NAT) in different places across the network. Clean IPv6 design will use addresses from the Global Unicast Address (GUA) range, which are routable in the public Internet. Since these are globally routed, care needs to be taken to ensure that prefixes configured by one customer do not overlap with prefixes chosen by another.

While SLAAC and DHCPv6 enable simple and automatic host configuration, they do not specify a way to automatically delegate a prefix to a customer site. With IPv6, there is a need to create a hierarchical model in which the service provider allocates prefixes from a set of pools to the customer. The customer then assigns addresses to its end systems out of the predefined pool. This is powerful, as it gives the service provider control over IPv6 prefix assignment and can eliminate potential conflicts in prefix selection.

How does it work?

With Prefix Delegation, a delegating router (Prefix Delegation Server) delegates IPv6 prefixes to a requesting router (Prefix Delegation Client). The requesting router then uses the prefixes to assign global IPv6 addresses to the devices on its internal interfaces. Prefix Delegation is useful when the delegating router does not have information about the topology of the networks in which the requesting router is located; it requires only the identity of the requesting router to choose a prefix for delegation. Prefix Delegation is not a new protocol: it uses DHCPv6 messages as defined in RFC 3633, and is therefore sometimes referred to as DHCPv6 Prefix Delegation. A rough configuration sketch follows the steps below.

DHCPv6 prefix delegation operates as follows:

  1. A delegating router (Server) is provided with IPv6 prefixes to be delegated to requesting routers.
  2. A requesting router (Client) requests one or more prefixes from the delegating router.
  3. The delegating router (Server) chooses prefixes for delegation, and responds with prefixes to the requesting router (Client).
  4. The requesting router (Client) is then responsible for the delegated prefixes.
  5. The final address allocation mechanism in the local network can be performed with SLAAC or stateful/stateless DHCPv6, based on the customer preference. At this step the key thing is the IPv6 prefix and not how it is delivered to end systems.
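
To make the delegating-router side more concrete, here is a rough sketch of a Dibbler server configuration wrapped in a shell snippet; the interface name, prefix and prefix lengths are illustrative assumptions, so check the Dibbler documentation for the exact syntax.

cat > /etc/dibbler/server.conf <<'EOF'
# Delegate /64 prefixes carved out of an illustrative /48 pool
iface "eth1" {
    pd-class {
        pd-pool 2001:db8::/48
        pd-length 64
    }
}
EOF
dibbler-server start

On the requesting router, dibbler-client would be configured to ask for a prefix on its upstream interface, and the delegated prefix can then be advertised downstream with SLAAC or DHCPv6, as in step 5 above.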

IPv6 in OpenStack Neutron

Back in the Icehouse development cycle, the Neutron “subnet” API was enhanced to support IPv6 address assignment options. A reference implementation followed in the Juno cycle, where the dnsmasq and radvd processes were chosen to serve the subnets with Router Advertisements, SLAAC or DHCPv6.

In the current Neutron implementation, tenants must supply a prefix when creating subnets. This is not a big deal for IPv4, as tenants are expected to pick private IPv4 subnets for their networks and NAT is going to take place anyway when reaching external public networks. For IPv6 subnets that use Global Unicast Address (GUA) format, addresses are globally routable and cannot overlap. There is no NAT or floating IP model for IPv6 in Neutron. And if you ask me, there should not be one. GUA is the way to go. But can we just trust the tenants to configure their IPv6 prefixes correctly? Probably not, and that’s why Prefix Delegation is an important feature for OpenStack.
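
For example, with the current API the tenant has to come up with the GUA prefix themselves when creating a subnet, roughly like this (the network name and the documentation prefix are illustrative):

neutron subnet-create --ip-version 6 \
    --ipv6-ra-mode slaac --ipv6-address-mode slaac \
    tenant-net 2001:db8:1234::/64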

An OpenStack administrator may want to simplify the process of subnet prefix selection for the tenants by automatically supplying prefixes for IPv6 subnets from one or more large pools of pre-configured IPv6 prefixes. The tenant would not need to specify any prefix configuration. Prefix Delegation will take care of the address assignment.
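
The exact CLI for this was still being settled in the specification at the time of writing, but the subnet pools introduced in Kilo give a rough idea of what pool-driven prefix allocation looks like; the commands below assume the Kilo/Liberty neutron client options and use illustrative names and a documentation prefix.

# Administrator pre-configures a pool of IPv6 prefixes
neutron subnetpool-create --pool-prefix 2001:db8::/48 \
    --default-prefixlen 64 ipv6-pool

# Tenant requests a subnet without supplying a prefix; one is carved from the pool
neutron subnet-create --ip-version 6 --subnetpool ipv6-pool tenant-net

With Prefix Delegation the idea is the same, except that the prefixes come from an external delegating router rather than from a locally configured pool.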

The code is expected to land in OpenStack Liberty based on this specification. Other than the REST API changes, a PD client would need to run in the Neutron router network namespace whenever a subnet attached to that router requires prefix delegation. Dibbler is an open-source utility that provides a PD client and can be used to supply the required functionality.


by nyechiel at June 23, 2015 02:21 AM

June 22, 2015

eNovance Engineering Teams

Puppet Module Functional Testing with Vagrant, OpenStack and Beaker

This post was originally published on Emilien Macchi’s blog.

During the last OpenStack Summit, I had the pleasure of participating in the Infra sessions, where we agreed on how to do functional testing for both the Puppet OpenStack and Puppet Infra modules, which is a real proof of collaboration between the two groups.

However, I met some people who were still wondering how to test a patch for a Puppet module without affecting their own system by installing OpenStack on it.

This article is short but effective: it’s about testing a Puppet module using Vagrant with the OpenStack provider and Beaker.

Prerequisites

  • Linux, Windows or Mac OS X.
  • You need to install Vagrant on your system.
  • Access to an OpenStack cloud (or use the libvirt Vagrant provider, which is not covered in this article).

Let’s go!

This example will test a specific patch of puppet-keystone on CentOS 7, but you can adapt the Vagrantfile to spawn an Ubuntu Trusty image.
Before running any command, let’s see what we are doing here.

Here is the Vagrantfile used to provision a virtual machine in the OpenStack Cloud. You’ll need to adjust your credentials:
<script src="http://gist.github.com/EmilienM/499c7708cbcd08df529b.js"></script>

This is the script that will be run inside the VM to prepare the system and run the functional tests. You can adapt the module and patchset you want to test:
<script src="http://gist.github.com/EmilienM/c0c89f650e6942767b33.js"></script>

Open a terminal, and let’s start:

vagrant plugin install vagrant-openstack-provider
vagrant box add dummy https://github.com/cloudbau/vagrant-openstack-plugin/raw/master/dummy.box
wget https://gist.githubusercontent.com/EmilienM/c0c89f650e6942767b33/raw/deb41e80cb9cdee799ffc3b7c622905ea54e5526/beaker.sh
wget https://gist.github.com/EmilienM/499c7708cbcd08df529b/raw/9f7a739ee67e2cfed1deb9479949822a442bbd61/Vagrantfile
vagrant up --provider=openstack --debug # --debug will help to see in real-time the ssh output when running the script within the VM

Here is a demo of what happens.

The script and Vagrantfile are here for information, but you’ll probably have to adjust them to test what you like.

My next step is to look at the OpenStack hypervisor provided by Beaker itself, but it’s still experimental.

Happy testing!

by Emilien Macchi at June 22, 2015 05:12 PM

Tesora Corp

Roller Coaster App is now in iTunes!

Our Tesora Roller Coaster app is now officially available in iTunes!  We developed this app specifically for our customized Google Cardboard. Come visit our booth (518) at Red Hat Summit this week to check out this virtual reality roller coaster experience!  You could be lucky enough to take home your own Google Cardboard! Download the app today.  Available in iTunes and […]

The post Roller Coaster App is now in iTunes! appeared first on Tesora.

by Leslie Barron at June 22, 2015 02:56 PM

About

Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology, you should add your OpenStack blog.

Last updated:
July 04, 2015 11:43 AM
All times are UTC.

Powered by:
Planet