October 01, 2016

DreamHost

How DreamHost Builds Its Cloud: Selecting Hard Drives

This is the third in a series of posts about how DreamHost builds its Cloud products. Written by Luke Odom, straight from data center operations.

To begin picking the drives we wanted to use in our new DreamCompute cluster, we first looked at the IO usage and feedback from the beta cluster. The amount of data moving around our beta cluster was very small. Despite the fact that we had over-provisioned and had very little disk activity in our beta cluster, we still received a lot of feedback requesting faster IO. There were three main reasons that throughput was low and latency was high in the beta cluster.

The first reason was our choice of processor and amount of RAM. We used machines of the same specs we used for DreamObjects. This worked well there, where there is a lot of storage but it is not accessed very often. However, in DreamCompute we store much less data, but the data is comparatively much more active. The second reason was density. Ceph functions better when you have fewer drives per machine, and recovery is also faster when you have smaller drives.


With the original DreamCompute cluster, we had machines that each contained 12 Ceph drives. Each drive was 3TB in size, for a total of 36TB of storage per node. This was too dense for our needs. The third problem was with the SAS expanders we used. Our RAID cards had four channels, meaning they could access four drives at a time; however, we had 14 drives attached (12 Ceph drives plus two boot drives). This required us to use a “SAS expander”, a device that sits between the RAID card and the drives, acting as a traffic light. Unfortunately, the SAS expander we were using could only handle enough traffic to fully utilize two lanes. Imagine 14 cars traveling on one two-lane road: as long as the cars are parked most of the time, it’s fine, but if they all want to drive at the same time, traffic gets slow. For the new cluster, we wanted to remove the latency of a SAS expander and use no more drives than the RAID card or motherboard could support directly. We also made sure that we had ample RAM and fast processors to avoid other hardware-related latency.
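Back-of-the-envelope, the contention is easy to see. Assuming 6 Gbit/s lanes (the lane speed is an assumption for illustration, not a figure from our hardware):

echo "scale=2; (2 * 6) / 14" | bc    # ~0.85 Gbit/s ceiling per drive if all 14 drives are busy at once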

There are two broad categories of drives for both the server and consumer markets. There are traditional hard drives (HDDs), which magnetically store data on spinning platters, and solid state drives (SSDs), which store data in integrated memory circuits. SSDs are much faster and have lower latency than HDDs because they don’t have to wait to get the data from a spinning disk. However, SSDs are currently substantially more expensive. From the feedback we had gotten from our customers, fast IO and low latency were very important, so we decided to go with SSDs!

There are three choices for how an SSD can interface with the server:

  • The first is SATA. Most consumer desktops and laptops interface with their attached storage devices using the SATA interface, an improvement on the old PATA interface, which was also commonly a consumer interface. Historically, SATA isn’t used very often in data center environments, as it doesn’t play very well with RAID cards; however, we won’t be putting these drives in a RAID array. SATA also has a max throughput of 6 Gbit/s, which is slower than our other options.
  • Serial Attached SCSI (SAS) was developed from the more enterprise-focused SCSI (pronounced like “scuzzy”). SAS has many advantages, but the main benefit for us is its maximum throughput of 12 Gbit/s, double that of SATA. The major disadvantage of SAS is that it isn’t directly supported by any server chipsets, which means you have to place the drives behind a RAID card even if you don’t need any RAID functionality. SAS drives are also much more expensive because the SAS controllers they use are very expensive.
  • NVMe is an interface designed specifically for SSDs. NVMe drives are more than twice as fast as SATA or SAS SSDs, but they are just beginning to hit the market, so they are much more expensive than SATA or SAS.

All factors considered, we felt that SATA was the best choice. The drives were still four times faster than the drives in our original cluster, and each drive was only 1TB instead of 3TB. We could now measure recovery time after drives were added or failed in hours instead of days.
Another factor to consider with SSDs is how much information each memory cell holds. The three options are one bit per cell (SLC), two bits per cell (MLC), or three bits per cell (TLC). The more information each cell holds, the fewer times it can be rewritten in its lifetime, so durability decreases as data per cell increases. Because of the large price difference between MLC and SLC, manufacturers have also released a couple of intermediary options, like eMLC (MLC with greater endurance and higher over-provisioning) and pSLC (“pseudo SLC”, which uses MLC but only writes one bit per cell). Based on testing in the original DreamCompute cluster, we wouldn’t come near even the TLC limitations for years. However, we purchased MLC-based drives just in case usage was higher than expected. Unfortunately, we had firmware issues with the MLC drives. We got those worked out, but as a precaution we replaced half of the drives in our cluster with a different manufacturer’s TLC-based drives, so we now have a 50/50 mixture of MLC and TLC enterprise drives.

Enterprise SSDs, especially SATA ones, are very similar to the SSDs used in consumer desktops and laptops, with a few critical differences. The biggest difference is the use of an integrated supercapacitor. On enterprise drives, should power fail in the middle of a write, the supercapacitor will keep the drive powered long enough to finish the write it was working on and prevent data corruption. Another difference is the amount of over-provisioning. All SSDs present less storage than they actually have installed, because memory cells sometimes need to be replaced; when that happens, the SSD starts using one of its spare cells. As enterprise SSDs are usually under more stress, more memory cells are allocated as spares. For example, a consumer drive with 512GB of internal storage is usually sold as a 500GB drive with 12GB of over-provisioning. The drives we buy are either 400GB or 480GB, depending on the workload. The firmware for enterprise SSDs is also tuned for an enterprise environment, which is usually much more write-heavy than consumer workloads. For these reasons, we only use enterprise-rated SSDs in DreamCompute.
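Expressed as spare area relative to advertised capacity, the gap is clear. A quick sketch, assuming for illustration that each drive has 512GB of raw flash installed:

echo "scale=1; (512 - 500) * 100 / 500" | bc    # consumer 500GB drive: 2.4% spare area
echo "scale=1; (512 - 480) * 100 / 480" | bc    # enterprise 480GB drive: 6.6% spare area
echo "scale=1; (512 - 400) * 100 / 400" | bc    # enterprise 400GB drive: 28.0% spare area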

The final factor to consider was whether we should put the SSDs behind a RAID card or attach them directly to the motherboard. A RAID array can be created via software, but to add extra features, arrays are often implemented via a separate card. The redundancy that a RAID card provides isn’t needed, because Ceph provides its own redundancy. RAID cards can also protect against data corruption by using a battery or capacitor to power the card so that it can hold incomplete writes in memory during unexpected reboots; this also isn’t needed, as enterprise SSDs have capacitors built in. The final advantage of a RAID card is the onboard cache. The RAID card’s cache isn’t any faster than an SSD’s onboard cache, but by setting each disk up as a single-disk RAID array, we can use the drive’s capacitor-protected cache as a write cache and the RAID card’s cache as a read-ahead cache. Our testing showed a slight increase in speed using this setup.

The systems we plan on using have 10 SATA ports behind two controllers (four on CPU 1, six on CPU 2) on the motherboard. We could also use an 8-port RAID card for the eight Ceph drives and have only the two boot drives directly connected. In the end, we decided the expense and potential for problems weren’t worth the marginal speed increase, and we chose to attach the drives directly to the motherboard.

So the final layout is eight 960GB enterprise SSDs (half TLC, half MLC) attached directly to the motherboard, with four Ceph drives and both boot drives going to one CPU and the other four Ceph drives going to the second CPU.

Stay tuned for more inside looks at our Cloud in the next post, coming soon! 

The post How DreamHost Builds Its Cloud: Selecting Hard Drives appeared first on Welcome to the Official DreamHost Blog.

by Luke Odom at October 01, 2016 05:26 PM

September 30, 2016

OpenStack Blog

OpenStack Developer Mailing List Digest September 24-30

Candidate Proposals for TC are now open

  • Candidate proposals for the Technical Committee (6 positions) are open and will remain open until 2016-10-01, 23:45 UTC.
  • Candidates must submit a text file to the openstack/election repository [1].
  • Candidates for the Technical Committee can be any foundation individual member, except the seven TC members who were elected for a one-year seat in April [2].
  • The election will be held from October 3rd until 23:45 UTC on October 8th.
  • The electorate are foundation individual members that are committers to one of the official projects [3] over the Mitaka-Newton timeframe (September 5, 2015 00:00 UTC to September 4, 2016 23:59 UTC).
  • Current accepted candidates [4]
  • Full thread

Release countdown for week R-0, 3-7 October

  • Focus: Final release week. Most project teams should be preparing for the summit in Barcelona.
  • General notes:
    • Release management team will tag the final Newton release on October 6th.
      • Project teams don’t have to do anything. The release management team will re-tag the commit used in the most recent release candidate listed in openstack/releases.
    • Projects not following the milestone model will not be re-tagged.
    • Cycle-trailing projects will be skipped until the trailing deadline.
  • Release actions
    • Projects not following the milestone-based release model that want stable/newton branches created should talk to the release team about their needs. Unbranched projects include:
      • cloudkitty
      • fuel
      • monasca
      • openstackansible
      • senlin
      • solum
      • tripleo
  • Important dates:
    • Newton final release: October 6th
    • Newton cycle-trailing deadline: October 20th
    • Ocata Design Summit: October 24-28
  • Full thread

Removal of Security and OpenStackSalt Project Teams From the Big Tent (cont.)

  • The change to remove Astara from the big tent was approved by the TC [4].
  • The TC has appointed Piet Kruithof as PTL of the UX team [5].
  • Based on the thread discussion [6] and the team’s engagement, the Security project team will be kept as is, with Rob Clark continuing as PTL [7].
  • The OpenStackSalt team did not produce any deliverable within the Newton cycle. The removal was approved by the current Salt team PTL and the TC [8].
  • Full thread

 

[1] – http://governance.openstack.org/election/#how-to-submit-your-candidacy

[2] – https://wiki.openstack.org/wiki/TC_Elections_April_2016#Results

[3] – http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2016-elections

[4] – https://review.openstack.org/#/c/376609/

[5] – http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-09-27-20.01.html

[6] – http://lists.openstack.org/pipermail/openstack-dev/2016-September/thread.html#104170

[7] – http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-09-27-20.01.html

[8] – https://review.openstack.org/#/c/377906/

by Mike Perez at September 30, 2016 06:45 PM

Tesora Corp

Short Stack: New Guides and Tutorials, How to Find Your First OpenStack Job, and How to Give Your OpenStack Network a Boost

Welcome to the Short Stack, our regular feature where we search for the most intriguing OpenStack news. These links may come from traditional publications or company blogs, but if it’s about OpenStack, we’ll find the best ones.

Here are our latest links:

OpenStack in Focus: Soundbytes from this Week | OStatic

Sam Dean looks back on the OpenStack highlights from this past week. He examined Rich Bowen’s talk on OpenStack at LinuxCon, Red Hat and Mirantis’ new OpenStack distributions, and the Linux Foundation’s collaboration with edX to host a new, free OpenStack class.

5 New OpenStack Guides and Tutorials | Opensource.com

Jason Baker took an in-depth look at five new OpenStack tutorials and guides available to the community. The list includes Christian Berendt’s guide on using Ansible fact caching to speed up Kolla, Cloudwatt’s guide to GlusterFS and ownCloud, Matt Fischer’s instructions for cleaning up databases, Major Hayden’s guide to anti-affinity groups, and Lars Kellogg-Stedman’s tutorial on using YAQL in Heat.

How to Find Your First OpenStack Job | Linux.com

The OpenStack Foundation’s Anne Bertuccio addressed frequently asked questions about working on OpenStack professionally. She recommended assessing qualification levels by scouting OpenStack job postings, using various resources to improve Python proficiency, and locating different sites that post OpenStack jobs.

Gartner Market Guide for Database as a Service Hits the Bullseye | Tesora

Frank Days explored Gartner’s findings in their “Market Guide for Database Platform as a Service”. The report confirmed that the Database as a Service (DBaaS) market continues to gain momentum as products mature, and as acceptance of the cloud increases, companies are beginning to realize their savings potential.

SmartNICS: Give your OpenStack Network a Boost | InfoWorld

The OpenStack Foundation’s April 2016 user survey conveyed a strong community interest in server-based networking. Enterprises are growing more interested in how server-based networking can be deployed, and survey respondents showed a high level of interest in SDN/NFV, containers, and bare-metal deployments. This growing interest can be seen in the biggest datacenters in the world (Google Cloud, Amazon Web Services), which use server-based networking.

The post Short Stack: New Guides and Tutorials, How to Find Your First OpenStack Job, and How to Give Your OpenStack Network a Boost appeared first on Tesora.

by Alex Campanelli at September 30, 2016 05:39 PM

Major Hayden

What’s Happening in OpenStack-Ansible (WHOA) – September 2016

Welcome to the fourth post in the series of What’s Happening in OpenStack-Ansible (WHOA) posts that I’m assembling each month. OpenStack-Ansible is a flexible framework for deploying enterprise-grade OpenStack clouds. In fact, I use OpenStack-Ansible to deploy the OpenStack cloud underneath the virtual machine that runs this blog!

My goal with these posts is to inform more people about what we’re doing in the OpenStack-Ansible community and bring on more contributors to the project.

There are plenty of updates since the last post from August. The race is on to finish up the Newton release and start new developments for the Ocata release! We hope to see lots of contributors in Barcelona!

[Photo: Arc de Triomf, Barcelona, Spain]

New releases

The OpenStack-Ansible releases are announced on the OpenStack development mailing list. Here are the things you need to know:

Newton

The OpenStack-Ansible Newton release is still being finalized this week. The stable/newton branches were created yesterday and stabilization work is ongoing.

Mitaka

Mitaka is the latest stable release available and the latest version is 13.3.4. This release includes some upgrade improvements for ml2 ports and container hostname adjustments.

Liberty

The latest Liberty release, 12.2.4, contains lots of updates and fixes. The updates include a fix for picking up where you left off on a failed upgrade and a fix for duplicated log lines. The security role received some updates to improve performance and reduce unnecessary logging.

Notable discussions

This section covers discussions from the OpenStack-Ansible weekly meetings, IRC channels, mailing lists, or in-person events.

Newton branch

As mentioned earlier, the stable/newton branches have arrived for OpenStack-Ansible! This will allow us to finish stabilizing the Newton release and look ahead to Ocata.

Octavia

Michael Johnson and Jorge Miramontes stopped by our weekly meeting to talk about how Octavia could be implemented in OpenStack-Ansible. Recent Octavia releases have some new features that should be valuable to OpenStack-Ansible deployers.

There is a spec from the Liberty release for deploying Octavia, but we were only able to get LBaaSv2 with the agent deployed. Jorge and Michael are working on a new spec to get Octavia deployed with OpenStack-Ansible.

Testing repo

There’s now a centralized testing repository for all OpenStack-Ansible roles. This allows the developers to share variables, scripts, and test cases between multiple roles. Developers can begin testing new roles with much less effort since the scaffolding for a basic test environment is available in the repository.

You can follow along with the development by watching the central-test-config topic in Gerrit.

Mailing list

The OpenStack-Ansible tag was fairly quiet on the OpenStack Development mailing list during the time frame of this report, but there were a few threads.

Notable developments

This section covers some of the improvements coming to Newton, the upcoming OpenStack release.

OpenStack-Ansible Training

Hastexo is now offering a self-paced course for learning OpenStack-Ansible! It’s called HX201 Cloud Fundamentals for OpenStack-Ansible and it is available now.

Thanks to Florian Haas and Adolfo Brandes for assembling this course!

OpenStack-Ansible powers the OSIC cloud

One of the clouds operated by the OpenStack Innovation Center (OSIC) is powered by OpenStack-Ansible. It’s a dual-stack (IPv4 and IPv6) environment and it provides the most nodes for the OpenStack CI service! If you need to test an application on a large OpenStack cloud, apply for access to the OSIC cluster.

Inventory improvements

The backbone of OpenStack-Ansible is its inventory. The dynamic inventory defines where each service should be deployed, configured and managed. Some recent improvements include exporting inventory for use by other scripts or applications. Ocata should bring even more improvements to the dynamic inventory.
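OpenStack-Ansible ships a small helper for working with the inventory; as a rough sketch of the export use case (the script name and flag here are assumptions from memory, so check your checkout for the exact interface):

cd /opt/openstack-ansible
./scripts/inventory-manage.py --export > /tmp/osa-inventory.json    # hand the JSON to other tooling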

Thanks to Nolan Brubaker for leading this effort!

Install guide

The installation guide has been completely overhauled! It has a more concise, opinionated approach to deployments and this should make the first deployment a little easier for newcomers. OpenStack can be a complex system to deploy and our goal is to provide the cleanest path to a successful deployment.

Thanks to Alex Settle for leading this effort!

Feedback?

The goal of this newsletter is threefold:

  • Keep OpenStack-Ansible developers updated with new changes
  • Inform operators about new features, fixes, and long-term goals
  • Bring more people into the OpenStack-Ansible community to share their use cases, bugs, and code

Please let me know if you spot any errors, areas for improvement, or items that I missed altogether. I’m mhayden on Freenode IRC and you can find me on Twitter anytime.

Photo credit: Mattia Felice Palermo (Own work) CC BY-SA 3.0 es, via Wikimedia Commons

The post What’s Happening in OpenStack-Ansible (WHOA) – September 2016 appeared first on major.io.

by Major Hayden at September 30, 2016 02:33 PM

OpenStack Superuser

How to use OpenStack Cinder for Docker

John Griffith, software engineer at SolidFire, offers this two-part tutorial.

A lot of you may be familiar with Cinder, the OpenStack block storage project that’s been around for quite a while now. If you’re not, you should take a look at the series Ken Hui at Rackspace put together, Laying Cinder Block. That’s a more detailed version of the presentation he and I gave at the Summit in Atlanta a couple of years back.

The context most people think of when they consider Cinder is “block storage for Nova instances.” The fact is, however, that it’s pretty useful outside of the traditional Nova context. You can use Cinder as a powerful “block storage as a service” resource for a number of use cases and environments. A few others and I have been doing this for a number of years now, mostly with Bare Metal systems in our labs.

Given that containers are the hottest thing since sliced bread these days, I thought it would be fun to post a little how-to on using Cinder by itself with containers. For those of you who saw my presentation at the OpenStack Summit in Austin last spring, here’s what I was trying to show you on stage when I lost my WiFi connection.

On that note, here’s part one of a two-part post about using Cinder a little differently than you may have been using it previously. For those who were at OpenStack Days East, this is a follow-up to the presentation I gave there; those slides are also available for those who missed it.

A neat use case that works for me

So one pattern that I’ve found myself using more and more, and that’s kind of interesting, is that my development is a bit mixed. Sometimes I’m doing things on an OpenStack Instance, other times on a workstation or bare metal server, and more recently I’m working in Containers (OK, and sometimes public cloud… but we’ll let that one slide for now).

What I found was that I could use my Cinder deployment (by itself, or as part of a full OpenStack deployment) to manage my storage needs across all three models fairly easily. Even better, I could start working on something in one environment and easily take my data with me to another. I’ve now done this with some build images and database files, not just from my workstation to a Nova Instance, but also now to Docker Containers. As I change my app to run in various environments, I just unplug the storage and plug it back in where I want it. Kinda cool, right?

Cinder doesn’t know or care who’s consuming it (mostly)

The biggest point of all of this is that, for the most part, Cinder doesn’t know (or care) who or what is calling its API, or even really what they’re doing with the resources. It’s mostly just a control plane to bridge the differences between various backend storage devices. What a caller chooses to do with those resources is up to them; it doesn’t really matter whether that’s a person using the response data to type in info on a command line, an automation script on a bare metal server, or, in our case today, a Docker Volume Plugin.

I will reluctantly point out that over the years some things have crept into the code that make Cinder a bit more Nova-aware (which is unfortunate), but it’s still pretty easy to work around. Most of it is just things like requiring Instance UUIDs for attach (that’s a fairly generic thing, though), and some backends have snuck in features that use Nova to help them out. Today I’m focused on just using the LVM driver and iSCSI connectivity. This also works for other iSCSI-based backends, including SolidFire and others.

A Cinder plugin for Docker

Containers are hot, no doubt about it. They’re useful, and they provide some pretty awesome flexibility in terms of development; more importantly, they’re fantastic when it comes to deployment. So, while I’ve been using Cinder against my Bare Metal systems and OpenStack Cloud for a while, last fall I started doing more and more with Containers, so I decided it would be worthwhile to write a Cinder Plugin for Docker that would allow me to use Cinder directly.

Yes, this has been done before

So the first thing people pointed out was that “you can do this already.” Yup, that’s true(ish), there are a number of offerings in the container space that have created plugins for Docker that run on top of and consume Cinder. Some of them even work (some of them don’t). None of them have much involvement from the Cinder community, and none of them are completely Cinder focused.

Anyway, that’s all great, if you use one of those (or choose to use one in the future) good for you! That’s awesome, but if you are looking for something that is “just” Cinder and is not maintained by a storage vendor maybe this will be of interest to you.

Disclosure: Yes, I work for SolidFire/NetApp. There’s no underlying interest here for me though, other than Cinder and cool tech. If you know me from the community, I’m pretty good at keeping company and community interests separate. I’m also fortunate enough to work for some great folks who get it and understand the importance.

Docker Volume Plugins

Docker started offering a Volume API a few releases back; it’s actually really pretty simple. You can read up on it a bit here if you’re interested.

The key here is that it’s pretty simple, and that’s a good… no that’s a GREAT thing. You’ll notice if you poke around on that link that there are quite a few volume plugins available. You also might notice that several of the devices listed out there also have Cinder plugins/drivers.

The Cinder plugin

All of the plugins are pretty similar in how they work; the majority of us chose to use golang and the now-official Docker Plugins Helper module. Between that and the awesome Gophercloud golang bindings for OpenStack, started by Rackspace, creating the plugin was pretty easy. The only real effort was adding some code to do the attach/detach of the Cinder volumes, and even that is pretty trivial: I just added some open-iscsi wrappers to the plugin and voila.

Warning: I’ve only implemented iSCSI… because, well, it’s my favorite. Those that offer other options are more than welcome to submit PR’s if they’re interested.

Here are some of the reasons for creating this plugin rather than just pointing to and using someone else’s:

  1. I happen to consider myself a bit of a Cinder expert
  2. Vendor agnostic (my Github repo is mine)
  3. I’m focused on just Cinder, that’s it (other variants have other motivations)
  4. Licensed under the Unlicense, so you don’t have to be afraid to contribute, sign a EULA, etc. (But please don’t start submitting stuff with copyright headers.)

Let’s get started

Prerequisites

For Ubuntu:

sudo apt-get install open-iscsi

For RHEL variants:

sudo yum install iscsi-initiator-utils

Of course, make sure you have Docker installed; I recommend 1.12 at this point.

Currently the plugin is a simple daemon written in Golang. You’ll need to install it on your Docker Node(s), edit a simple config file to point it at your Cinder Endpoint, and fire it up. I’ve recently created a systemd service for this, which the install script will create and set up for you. So all you have to do is use curl or wget to pull down the install script and run it as root (or with sudo).

curl -sSL https://raw.githubusercontent.com/j-griffith/cinder-docker-driver/master/install.sh \
| sudo sh

Next we just set up a minimal config file so we can talk to our Cinder Endpoint (standard OpenStack stuff here), we’ll just create a json file with the needed credential info:

/var/lib/cinder/dockerdriver/config.json:

{
    "HostUUID": "82ee38eb-821b-42d0-8c61-c4974d0c8536",
    "Endpoint": "http://172.16.140.243:5000/v2.0",
    "Username": "jdg",
    "Password": "NotMyPassword",
    "TenantID": "3dce5dd10b414ac1b942aba8ce8558e7"
}

Now, if you’ve used the install script you can just start the service:

sudo service cinder-docker-driver restart

Verify everything came up properly:

$ sudo service cinder-docker-driver status
● cinder-docker-driver.service - "Cinder Docker Plugin daemon"
Loaded: loaded (/etc/systemd/system/cinder-docker-driver.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2016-09-08 23:44:06 UTC; 29min ago
Main PID: 13917 (cinder-docker-d)
Tasks: 6
Memory: 5.3M
CPU: 290ms
CGroup: /system.slice/cinder-docker-driver.service
└─13917 /usr/bin/cinder-docker-driver &

Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="getVolByName: `cvol-4`"
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="querying volume: {Attachments:[] AvailabilityZone:nova Bootable:false ConsistencyGroupID: CreatedAt:2016-09-08T23:46:13.000000 Description:Docker volume. Encrypted:false Name:cvol-4 VolumeType:lvmdriver-1 ReplicationDriverData: ReplicationE
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="Found Volume ID: 677a821b-e33b-44b5-8fb3-38315e832216"
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=info msg="Remove/Delete Volume: cvol-4"
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="getVolByName: `cvol-4`"
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="querying volume: {Attachments:[] AvailabilityZone:nova Bootable:false ConsistencyGroupID: CreatedAt:2016-09-08T23:46:13.000000 Description:Docker volume. Encrypted:false Name:cvol-4 VolumeType:lvmdriver-1 ReplicationDriverData: ReplicationE
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="Found Volume ID: 677a821b-e33b-44b5-8fb3-38315e832216"
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="Remove/Delete Volume ID: 677a821b-e33b-44b5-8fb3-38315e832216"
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="Response from Delete: {ErrResult:{Result:{Body: Header:map[] Err:}}}\n"
Sep 08 23:47:07 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:47:07Z" level=info msg="List volumes: "

Then you can use journalctl to filter out the relative entries in syslog:

ubuntu@box-1:~$ journalctl -f -u cinder-docker-driver
-- Logs begin at Thu 2016-09-08 20:52:15 UTC. --
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="getVolByName: `cvol-4`"
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="querying volume: {Attachments:[] AvailabilityZone:nova Bootable:false ConsistencyGroupID: CreatedAt:2016-09-08T23:46:13.000000 Description:Docker volume. Encrypted:false Name:cvol-4 VolumeType:lvmdriver-1 ReplicationDriverData: ReplicationExtendedStatus: ReplicationStatus:disabled SnapshotID: SourceVolID: Status:available TenantID:979ddb6183834b9993954ca6de518c5a Metadata:map[readonly:False] Multiattach:false ID:677a821b-e33b-44b5-8fb3-38315e832216 Size:1 UserID:e763fc47a2854b15a9f6b85aea2ae3f1}\n"
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="Found Volume ID: 677a821b-e33b-44b5-8fb3-38315e832216"
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=info msg="Remove/Delete Volume: cvol-4"
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="getVolByName: `cvol-4`"
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="querying volume: {Attachments:[] AvailabilityZone:nova Bootable:false ConsistencyGroupID: CreatedAt:2016-09-08T23:46:13.000000 Description:Docker volume. Encrypted:false Name:cvol-4 VolumeType:lvmdriver-1 ReplicationDriverData: ReplicationExtendedStatus: ReplicationStatus:disabled SnapshotID: SourceVolID: Status:available TenantID:979ddb6183834b9993954ca6de518c5a Metadata:map[readonly:False] Multiattach:false ID:677a821b-e33b-44b5-8fb3-38315e832216 Size:1 UserID:e763fc47a2854b15a9f6b85aea2ae3f1}\n"
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="Found Volume ID: 677a821b-e33b-44b5-8fb3-38315e832216"
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="Remove/Delete Volume ID: 677a821b-e33b-44b5-8fb3-38315e832216"
Sep 08 23:46:59 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:46:59Z" level=debug msg="Response from Delete: {ErrResult:{Result:{Body: Header:map[] Err:}}}\n"
Sep 08 23:47:07 box-1 cinder-docker-driver[13917]: time="2016-09-08T23:47:07Z" level=info msg="List volumes: "
That’s about it; the plugin should get picked up by your Docker daemon and be ready to accept requests. A couple of things to look out for:

    1. You have to use v2.0 as your endpoint for now. Gophercloud chose to implement extensions (like attach) as versioned items rather than the extensions they currently are. Discovery works as long as it ends up being v2.0, but v1 or v3 won’t work. Micro-versioning support is going to create some consumer issues here, I think, but we’ll cross that bridge when we come to it.
    2. If you get weird error messages on startup about Username and/or TenantID, it’s almost always an issue with your TenantID, so start there.
    3. It never hurts to restart the Docker daemon on your node after you install the plugin, just to be safe.
    4. We expect/look for the config.json file in /var/lib/cinder/dockerdriver/, but you can specify another location with the --config flag (see the example below).
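For example, to run the driver by hand with an alternate config location (the binary path matches the systemd unit shown earlier; the config path here is just an illustration):

sudo /usr/bin/cinder-docker-driver --config /etc/cinder/docker-config.json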

Using the Plugin via Docker

So that’s about it as far as install and setup. Now you can access your Cinder service from the Docker volume API. This means you can use the Docker volume CLI, add volume arguments to your docker run commands, and even use volumes in your compose files.

Docker Volume CLI

$ docker volume --help

Usage: docker volume COMMAND

Manage Docker volumes

Options:
      --help   Print usage

Commands:
  create    Create a volume
  inspect   Display detailed information on one or more volumes
  ls        List volumes
  rm        Remove one or more volumes

Run 'docker volume COMMAND --help' for more information on a command.

Let’s look at a simple create example:

$ docker volume create -d cinder -o size=1 --name demovol

Now list it using Docker’s API:

$ docker volume ls
DRIVER              VOLUME NAME
cinder              demovol

Cool… how about checking with Cinder:

$ cinder list

+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
|                  ID                  |   Status  |   Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| 21a42614-c381-4bbc-b1d0-8498014f2d65 | available | demovol |  1   | lvmdriver-1 |  false   |             |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
$

I know, I know…big deal, we can create a volume.

Let’s try something a little more interesting: let’s spin up a container and specify a volume to be created at the same time. We can do this using the docker run command or via a compose file.

Docker Run command

Here’s the syntax to add a persistent storage volume to your container at launch. Notice that the Plugin will first check to see whether the Volume already exists; if it does NOT, the plugin will create it for you using the info provided (or the defaults), and if it does exist, it will just do the attach and mount for you.

$ docker run -v demovol-2:/Data --volume-driver=cinder -i -t ubuntu /bin/bash
$ cinder list

+--------------------------------------+-----------+-----------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  |    Name   | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+-----------+------+-------------+----------+--------------------------------------+
| 08c0c2af-4f91-4488-82a7-96bd020e8c00 |   in-use  | demovol-2 |  1   | lvmdriver-1 |  false   | 969bd5f5-64e3-433e-a248-04c3f7899a45 |
| 21a42614-c381-4bbc-b1d0-8498014f2d65 | available |  demovol  |  1   | lvmdriver-1 |  false   |                                      |
+--------------------------------------+-----------+-----------+------+-------------+----------+--------------------------------------+
$

So now you’ll have a Cinder volume (demovol-2) mounted and formatted with ext4 at /Data inside your container.
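From the shell inside the container started above, a quick sanity check looks something like this (a sketch; the exact device name will vary):

df -h /Data          # should show a ~1GB ext4 filesystem
mount | grep Data    # shows the attached device mounted at /Data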

docker-compose

I like compose; automation is bestest…

web:
  environment:
    - "reschedule:on-node-failure"
    - "constraint:node!=swarm-master"
  image: jgriffith/jgriffith-webbase
  ports:
    - "80:80"
  links:
    - "redis:redis"
redis:
  environment:
    - "reschedule:on-node-failure"
  image: redis
  volumes:
    - "demovol:/data"
  volume_driver: cinder
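Save that as docker-compose.yml and bring the stack up as usual:

$ docker-compose up -d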

Conclusion

So these are pretty trivial examples, but it’s easy to extend them into more complex builds. I’ve been using this setup for a while now to host a number of things, ranging from a Harbor Docker Registry (images and DB backed by separate cinder-volumes and running on a Nova Instance) to backing for a Redis store that I use as a simple message queue for a CI system I run, and another application where I’m running a Redis DB container that persists its data to a Cinder volume. There are quite a few things in my lab using Cinder now; some of those things are Nova Instances, but more and more of them lately are something else.

The key takeaway here, however, is that Cinder is a lot more flexible than people might think. It doesn’t have any real hard dependencies on Nova, Glance, etc. It’s just about managing a pool of storage via the control plane, and it leaves it up to the consumer to connect it however/wherever it wants.

I’d love to hear your feedback and will try to answer any questions you might have. Thanks for reading!

Next time

You can also use this with Swarm. Coming up next, we’ll go through how you can quickly and easily deploy a Swarm cluster on your OpenStack cloud and use the Cinder driver to keep persistent data in your Swarm cluster, automatically moving volumes across nodes in failover scenarios. In the meantime, if you’re interested, check it out; feel free to suggest improvements, raise issues, or submit PRs.

Before you ask… Kubernetes is very different. It doesn’t actually use Docker’s Volume API, and with the exception of some experimental code in the different Cloud Provider drivers, it also doesn’t do things like dynamic provisioning (creating volumes). It’s a completely different animal and does things its own way. TL;DR: if you’re using Kubernetes, this doesn’t really buy you anything.


This post first appeared on John Griffith’s blog. Superuser is always interested in community content, email: editor@superuser.org.

Cover Photo // CC BY NC

The post How to use OpenStack Cinder for Docker appeared first on OpenStack Superuser.

by Isaac Gonzalez at September 30, 2016 11:02 AM

September 29, 2016

Gorka Eguileor

# of DB connections in OpenStack services

The other day someone asked me if the SQLAlchemy connections to the DB were per worker or shared among all workers, and what number of connections should be expected from an OpenStack service. Maybe you have also wondered about this at some point. Wonder no more; here’s a quick write-up summarizing […]

by geguileo at September 29, 2016 07:17 PM

Mirantis

What’s the big deal about running OpenStack in containers?

Ever since containers began their meteoric rise in the technical consciousness, people have been wondering what it would mean for OpenStack. Some of the predictions were dire (that OpenStack would cease to be relevant), some were more practical (that containers are not mini VMs, and anyway, they need resources to run on, and OpenStack still existed to manage those resources).

But there were a few people who realized that there was yet another possibility: that containers could actually save OpenStack.

Look, it’s no secret that deploying and managing OpenStack is difficult at best, and frustratingly impossible at worst. So what if I told you that using Kubernetes and containers could make it easy?

Mirantis has been experimenting with container-based OpenStack for the past several years (since before it was “cool”), and lately we’ve settled on an architecture that enables us to take advantage of the management capabilities and scalability that come with the Kubernetes container orchestration engine. (You might have seen the news that we’ve also acquired TCP Cloud, which will help us jump our R&D forward by about nine months.)

Specifically, using Kubernetes as an OpenStack underlay lets us turn a monolithic software package into discrete services with well-defined APIs that can be freely distributed, orchestrated, recovered, upgraded and replaced — often automatically based on configured business logic.

That said, it’s more than just dropping OpenStack into containers, and talk is cheap. It’s one thing for me to say that Kubernetes makes it easy to deploy OpenStack services.  And frankly, almost anything would be easier than deploying, say, a new controller with today’s systems.

But what if I told you you could turn an empty bare metal node into an OpenStack controller just by adding a couple of tags to it?

Have a look at this video (you’ll have to drop your information in the form, but it just takes a second):

Containerizing the OpenStack Control Plane on Kubernetes: auto-scaling OpenStack services

I know, right? Are you as excited about this as I am?

The post What’s the big deal about running OpenStack in containers? appeared first on Mirantis | The Pure Play OpenStack Company.

by Nick Chase at September 29, 2016 06:40 PM

Dougal Matthews

Debugging TripleO Gate Failures in Mistral

TripleO started making use of Mistral in the OpenStack Newton cycle. To help make sure everything is robust and well tested, a TripleO Jenkins job now runs against all Mistral changes. Hopefully this will prove valuable and useful to both sides: it should avoid Mistral changes breaking TripleO, and it also provides a good, real-world test of Mistral.

However, it does mean that Mistral developers are going to be exposed to TripleO logs and TripleO for the first time.

So, let's look at how to make sense of the huge log files.

Mistral Logs

Each CI run includes all the logs generated by Mistral, so you may want to dive straight in there to see if there is an error. When you click on a failed job, go into the logs directory, then follow through to /var/log/mistral, and you will find an individual log for the api, engine, and executor.

For example: http://logs.openstack.org/67/356567/8/check/gate-tripleo-ci-centos-7-nonha-multinode/fb8b79e/logs/var/log/mistral/

Note: that link will stop working at some point, I am not sure how long the files are kept.
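If you pull the logs down locally, a quick recursive scan can save time (a sketch; the exact file names vary):

grep -rnE "ERROR|Traceback" var/log/mistral/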

console.html

At the root of the logs you will see console.html; this is the main shell output for the CI session and the best place to start, unless you suspect something will be in the Mistral logs.

ERROR DURING PREVIOUS COMMAND

This line is output at various points if the job fails, so search for it. Note that it should be at the start of the line, after the date and time; otherwise you are just seeing a line that could potentially output it but didn’t, because bash echoes each command it runs.

Here is an example where I found that error.

2016-09-26 10:12:11.891987 | + ./..//tripleo-ci/scripts/common_functions.sh:delorean_build_and_serve:L1:   echo ERROR DURING PREVIOUS COMMAND '^^^'
2016-09-26 10:12:11.892018 | ERROR DURING PREVIOUS COMMAND ^^^
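Based on the format above, a grep along these lines (a sketch; adjust the pattern as needed) matches the real marker and skips the echoed commands:

grep -E "^[0-9: .-]+\| ERROR DURING PREVIOUS COMMAND" console.html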

If you couldn't find it, skip to the next section.

If you did, look for the titles; they will be something like the "Building mistral" example below. In this case, I had to go up about 30 lines in the log, and I could see that it failed building Mistral. This means it failed to create the rpm for Mistral; we will come back to possible reasons for this later.

2016-09-26 10:09:49.996482 | #################
2016-09-26 10:09:49.996609 | tripleo.sh -- Building mistral
2016-09-26 10:09:49.996672 | #################

An example failure due to a client regression

I found this by searching for "ERROR DURING PREVIOUS COMMAND" on a failed job, as described above. I found the following Python traceback (I stripped out the date and time to save space).

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 1180, in install
    _post_config(instack_env)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 1161, in _post_config
    auth_url)
TypeError: client() takes at most 1 argument (5 given)
Command 'instack-install-undercloud' returned non-zero exit status 1

This led me to the instack-undercloud project, where I found the spot where we create the mistralclient instance. A bit of digging around in the git history then led me to this change in python-mistralclient: it refactored the client and changed the method signature in a backwards-incompatible way.

Interestingly, the only reason this failed is that the Mistral patch had a Depends-On in its commit message referencing a mistralclient patch. This caused TripleO CI to build a new package for mistralclient at the same time and test against mistralclient master; otherwise it would have tested with the latest released mistralclient.
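For reference, the footer is a single line in the commit message that names a Gerrit Change-Id (the one below is a made-up placeholder):

Depends-On: I0123456789abcdef0123456789abcdef01234567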

If this hadn't happened, the problem wouldn't have been noticed until a new mistralclient release was made and TripleO picked up that new version; at that point the TripleO gate would be broken. This is a good argument for having TripleO gate against mistralclient as well.

What failures at the different stages might mean

During "Building mistral"

In TripleO CI, an rpm is built for Mistral with the patch under review. Typically this can fail if there is a change that impacts packaging, like a new requirement. If that is happening, the requirement needs to be added to the rpm spec. It's quite easy to do, but if there are any issues, ping me and I can help. The spec is on GitHub here, and it uses Gerrit for code reviews, so the process should be familiar.

https://github.com/rdo-packages/mistral-distgit

For example, there was a failure in patch 374813, which added jinja2, until I added that requirement to the rpm spec.

During "tripleo.sh -- Overcloud create started."

At this stage we have actually started to deploy OpenStack. The output can be very noisy once things get going and Heat is doing most of the work. However, there will be various messages stating things like "Started Mistral Workflow…", so look for those; if there are failures here, it could well be a legitimate regression. You probably won't see much information at this point, unfortunately, but you will see the execution ID, which can help direct you to the correct part of the Mistral logs.

No Luck?

If all else fails, you can jump into #tripleo on Freenode and ask for help. Feel free to try pinging me (I'm d0ugal), and if I'm around I'll help. As other cases pop up, I'll try to cover them in additional posts. Since gating Mistral with TripleO is new, I don't have many useful examples yet.

by Dougal Matthews at September 29, 2016 04:31 PM

OpenStack in Production

Hyperthreading in the cloud

The cloud at CERN is used for a variety of purposes, from running personal VMs for development/test and bulk throughput computing to analyse the data from the Large Hadron Collider, to long-running services for the experiments and the organisation.

The configuration of many of the hypervisors is carefully tuned to maximise compute throughput, i.e. getting as much compute work done in a given time, rather than optimising individual job performance. The workloads are also nearly all embarrassingly parallel, i.e. each unit of compute can run without needing to communicate with other jobs. A few workloads, such as QCD, need classical High Performance Computing, but these run on dedicated clusters with Infiniband interconnect, compared to the typical 1Gbit/s or 10Gbit/s Ethernet for a typical hypervisor.

CERN has a public procurement procedure which awards tenders to the bid with the lowest price for a given throughput compliant with the specifications.  The typical CERN hardware configuration is based on a dual socket configuration and must have at least 2GB/core.

Intel provides a capability for doubling the number of logical cores on the underlying processor called simultaneous multithreading (SMT), marketed as Hyper-Threading. From the machine's perspective, this appears as double the number of cores compared to a non-SMT configuration. Enabling SMT requires a BIOS parameter change, so resources need to be defined in advance, with appropriate capacity planning to decide statically which areas of the cloud are SMT on or off.
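A quick way to check which mode a given hypervisor is currently running in is lscpu:

lscpu | grep -E "^(Thread|Core|Socket)"
# "Thread(s) per core: 2" means SMT is on; "Thread(s) per core: 1" means it is off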

The second benefit of an SMT-off configuration is that the memory per core doubles. A server with 32 SMT-on cores and 64GB of memory has 2GB per core; dropping SMT to get 16 cores leads to 4GB per core, which can be useful for some workloads.

Setting the BIOS parameters for a subset of the hypervisors causes multiple difficulties:
  • With older BIOSes, this is a manual operation. New tools are available on the most recent hardware, so this is an operation which can be performed with a Linux program and a reboot.
  • A motherboard replacement requires that the operation is repeated. This can be overlooked as part of the standard repair activities.
  • Capacity planning requires allocation of appropriate blocks of servers. At CERN, we use OpenStack cells to provide a unique hardware configuration, such as a particular processor/memory configuration, and thus dedicated cells need to be created for the SMT-off machines. When these capacities are exceeded, the other unused cloud resources cannot be trivially used; further administrative reconfiguration is required.

The reference benchmark for High Energy Physics is HEPSpec06, a subset of the Spec benchmarks which match the typical instruction workload. Using this, run in parallel on each of the cores in a machine, the throughput provided by a given configuration can be measured.

SMT   VM configuration                       Throughput HS06
On    2 VMs, each 16 cores                   351
On    4 VMs, each 8 cores                    355
Off   1 VM of 16 cores                       284.5
Thus, the total throughput of the server with SMT off is significantly lower (284.5 compared to 351), but the individual core performance is higher (284.5/16 = 17.8 compared to 351/32 = 11.0). Where an experiment workflow is serialised for some of the steps, this higher single-core performance was a significant gain, but at an operational cost.

To find a cheaper approach, the recent addition of NUMA flavors in OpenStack was used. The hypervisors were configured with SMT on, but a flavor was created to use only half of the cores on the server, with 4GB/core, so that the hypervisors were under-committed on cores but fully committed on memory, preventing another VM from being allocated to the unused cores. In our configuration, this was done by adding numa_nodes=2 to the flavor, and the NUMA-aware scheduler does the appropriate allocation.
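A sketch of such a flavor with the nova client (the name, RAM, and disk figures are illustrative; the extra spec key for the NUMA topology is hw:numa_nodes):

nova flavor-create m1.smt-half auto 65536 40 16      # 16 vCPUs, 64GB RAM, 40GB disk
nova flavor-key m1.smt-half set hw:numa_nodes=2      # spread the guest across both NUMA nodes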

This configuration was benchmarked and compared with the SMT On/Off.

SMT   VM configuration                       Throughput HS06
On    2 VMs, each 16 cores                   351
Off   1 VM of 16 cores                       284.5
On    1 VM of 16 cores with numa_nodes=2     283

The new flavor shows similar characteristics to the SMT-off configuration without requiring the BIOS setting change, and it can therefore be deployed without dedicated cells for a particular hardware configuration. The Linux and OpenStack schedulers appear to be allocating the appropriate distribution of cores across the processors.

by Tim Bell (noreply@blogger.com) at September 29, 2016 04:28 PM

Russell Bryant

OVS 2.6 and The First Release of OVN

In January of 2015, the Open vSwitch team announced that they planned to start a new project within OVS called OVN (Open Virtual Network).  The timing could not have been better for me as I was looking around for a new project.  I dove in with a goal of figuring out whether OVN could be a promising next generation of Open vSwitch integration for OpenStack and have been contributing to it ever since.

OVS 2.6.0 has now been released which includes the first non-experimental version of OVN.  As a community we have also built integration with OpenStack, Docker, and Kubernetes.

OVN is a system to support virtual network abstraction. OVN complements the existing capabilities of OVS to add native support for virtual network abstractions, such as virtual L2 and L3 overlays and security groups.

Some high level features of OVN include:

  • Provides virtual networking abstraction for OVS, implemented using L2 and L3 overlays, but can also manage connectivity to physical networks
  • Supports flexible ACLs (security policies) implemented using flows that use OVS connection tracking
  • Native support for distributed L3 routing using OVS flows, with support for both IPv4 and IPv6
  • ARP and IPv6 Neighbor Discovery suppression for known IP-MAC bindings
  • Native support for NAT and load balancing using OVS connection tracking
  • Native fully distributed support for DHCP
  • Works with any OVS datapath (such as the default Linux kernel datapath, DPDK, or Hyper-V) that supports all required features, namely Geneve tunnels and OVS connection tracking (see the datapath feature list in the FAQ for details)
  • Supports L3 gateways from logical to physical networks
  • Supports software-based L2 gateways
  • Supports TOR (Top of Rack) based L2 gateways that implement the hardware_vtep schema
  • Can provide networking for both VMs and containers running inside of those VMs, without a second layer of overlay networking
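As a small taste of the northbound side, creating a logical switch and a port on it looks like this (names and addresses are illustrative):

ovn-nbctl ls-add sw0
ovn-nbctl lsp-add sw0 sw0-port1
ovn-nbctl lsp-set-addresses sw0-port1 "00:00:00:00:00:01 10.0.0.2"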

Support for large scale deployments is a key goal of OVN.  So far, we have seen physical deployments of several hundred nodes.  We’ve also done some larger scale testing by simulating deployments of thousands of nodes using the ovn-scale-test project.

OVN Architecture

Components

[Diagram: OVN architecture]

OVN is a distributed system.  There is a local SDN controller that runs on every host, called ovn-controller.  All of the controllers are coordinated through the southbound database.  There is also a centralized component, ovn-northd, that processes high level configuration placed in the northbound database. OVN’s architecture is discussed in detail in the ovn-architecture document.

OVN uses databases for its control plane. One benefit is that scaling databases is a well-understood problem. OVN currently makes use of ovsdb-server as its database. The use of ovsdb-server is particularly convenient within OVN, as it introduces no new dependencies: ovsdb-server is already in use everywhere OVS is used. However, the project is also currently considering adding support for, or fully migrating to, etcd v3, since v3 includes all of the features we wanted for our system.

We have also found that this database driven architecture is much more reliable than RPC based approaches taken in other systems we have worked with.  In OVN, each instance of ovn-controller is always working with a consistent snapshot of the database.  It maintains a connection to the database and gets a feed of relevant updates as they occur.  If connectivity is interrupted, ovn-controller will always catch back up to the latest consistent snapshot of the relevant database contents and process them.

Logical Flows

OVN introduces a new intermediary representation of the system’s configuration called logical flows.  A typical centralized model would take the desired high level configuration, calculate the required physical flows for the environment, and program the switches on each node with those physical flows.  OVN breaks this problem up into a couple of steps.  It first calculates logical flows, which are similar to physical OpenFlow flows in their expressiveness, but operate only on logical entities.  The logical flows for a given network are identical across the whole environment.  These logical flows are then distributed to the local controller on each node, ovn-controller, which converts logical flows to physical flows.  This means that some deployment-wide computation is done once and the node-specific computation is fully distributed and done local to the node it applies to.
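You can inspect the logical flows that ovn-northd has computed directly from the southbound database:

ovn-sbctl lflow-list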

Logical flows have also proven to be powerful when it comes to implementing features.  As we’ve built up support for new capabilities in the logical flow syntax, most features are now implemented at the logical flow layer, which is much easier to work with than physical flows.

Data Path

OVN implements features natively in OVS wherever possible.  One such example is the implementation of security policies using OVS+conntrack integration.  I wrote about this in more detail previously.  This approach has led to significant data path performance improvements as compared to previous approaches.  The other area this makes a huge impact is how OVN implements distributed L3 routing.  Instead of combining OVS with several other layers of technology, we provide L3 routing purely with OVS flows.  In addition to the performance benefits, we also find this to be much simpler than the alternative approaches that other projects have taken to build routing on top of OVS.  Another benefit is that all of these features work with OVS+DPDK since we don’t rely on Linux kernel-specific features.

Integrations

OpenStack

Integration with OpenStack was developed in parallel with OVN itself.  The OpenStack networking-ovn project contains an ML2 driver for OpenStack Neutron that provides integration with OVN.  It differs from Neutron’s original OVS integration in some significant ways.  It no longer makes use of the Neutron Python agents as all equivalent functionality has been moved into OVN.  As a result, it no longer uses RabbitMQ.  Neutron’s use of RabbitMQ for RPC has been replaced by OVN’s database driven control plane.  The following diagram gives a visual representation of the architecture of Neutron using OVN.  Even more detail can be found in our documented reference architecture.

neutron-ovn-architecture

There are a few different ways to test out OVN integration with OpenStack.  The most popular development environment for OpenStack is called DevStack.  We provide integration with DevStack, including some instructions on how to do simple testing with DevStack.
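
As a sketch of what that looks like, the plugin is enabled with a couple of lines in DevStack’s local.conf; treat the exact repository URL and options as assumptions and consult the networking-ovn instructions for the authoritative form:

  [[local|localrc]]
  enable_plugin networking-ovn https://git.openstack.org/openstack/networking-ovn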

If you’re a Vagrant user, networking-ovn includes a vagrant setup for doing multi-node testing of OVN using DevStack.

The OpenStack TripleO deployment project includes support for OVN as of the OpenStack Newton release.

Finally, we also have manual installation instructions to help with integrating OVN into your own OpenStack environment.

Kubernetes

There is active development on a CNI plugin for OVN to be used with Kubernetes.  One of the key goals for OVN was to support containers from the beginning, not just VMs.  Some important features were added to OVN to help support this integration.  For example, ovn-kubernetes makes use of OVN’s load balancing support, which is built on native load balancing support in OVS.
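
As a rough sketch of that primitive, an OVN load balancer is just a row in the northbound database mapping a virtual IP to backend endpoints; the addresses below are made-up examples:

  # Map a VIP to two backend endpoints
  $ ovn-nbctl create load_balancer 'vips:"172.16.0.10:80"="10.0.0.2:8080,10.0.0.3:8080"'
  # Inspect the resulting row
  $ ovn-nbctl list load_balancer

ovn-kubernetes programs entries like these on behalf of Kubernetes services.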

The README in that repository contains an overview, as well as instructions on how to use it.  There is also support for running an ovn-kubernetes environment using vagrant.

Docker

There is OVN integration with Docker networking, as well.  This currently resides in the main OVS repo, though it could be split out into its own repository in the future, similar to ovn-kubernetes.

Getting Involved

We would love feedback on your experience trying out OVN.  Here are some ways to get involved and provide feedback:

  • OVS and OVN are discussed on the OVS discuss mailing list.
  • OVN development occurs on the OVS development mailing list.
  • OVS and OVN are discussed in #openvswitch on the Freenode IRC network.
  • Development of the OVN Kubernetes integration occurs on GitHub but can be discussed on either the Open vSwitch IRC channel or discuss mailing list.
  • Integration of OVN with OpenStack is discussed in #openstack-neutron-ovn on Freenode, as well as the OpenStack development mailing list.

by russellbryant at September 29, 2016 04:00 PM

Galera Cluster by Codership

Codership is hiring Software Packaging Engineer!

Software Packaging Engineer

Job Description & Main Responsibilities


Codership is looking for a part-time Build and Packaging Engineer. We use Jenkins to build various packages for multiple distros, and the engineer will take ownership of the build pipeline and extend it to cover additional platforms and testing scenarios.

There are plenty of interesting problems to tackle on the job, from crafting .spec files with complex dependencies to automatically testing no-downtime rolling upgrades of a distributed system.

Main Responsibilities

  • take ownership of and maintain Galera Cluster packages on Linux (.RPM and .DEB) and BSD
  • maintain the automatic package building pipeline and extend it to cover additional platforms and testing scenarios


Desired Skills & Requirements


  • knowledge of package-management systems, spec files, build tools, dependencies and package scripting
  • understanding of related technologies such as firewalls, systemd and SELinux
  • exposure to automatic testing using continuous integration tools, virtual machines and containers
  • 2-4 years of software packaging experience
  • a good command of English is required

The company is 100% remote. We use Jenkins, Bash, Python, Docker, Qemu and the Open Build Service in our build system.

Apply today and send your application and CV to jobs@galeracluster.com

by Sakari Keskitalo at September 29, 2016 03:21 PM

Kenneth Hui

Rackspace Builds Highly Available Cloud Using Red Hat OpenStack Platform

uptime

Rackspace is known for our expertise building highly available solutions for our customers, so it should be no surprise that we’ve been able to apply that competency to managed OpenStack solutions for both our upstream offering and Rackspace Private Cloud powered by Red Hat, or RPC-R.

I provided some details in my previous post about RPC-R’s reference architecture. Here, I want to drill down even more into how we worked with Red Hat to build high availability into the control plane of the RPC-R solution.

As we know, component failures will happen in a cloud environment, particularly as that cloud grows in scale. However, users still expect that their OpenStack services and APIs will be always on and always available, even when servers fail. This is particularly important in a public cloud or in a private cloud where resources are being shared by multiple teams.

The starting point for architecting a highly available OpenStack cloud is the control plane. While a control plane outage would not disrupt running workloads, it would prevent users from being able to scale out or scale in in response to business demands. Accordingly, a great deal of effort went into maintaining service uptime as we architected RPC-R, particularly for Red Hat OpenStack Platform, the OpenStack distribution at the core of RPC-R.

To read more about how Rackspace and Red Hat have collaborated on the best OpenStack private cloud solution, please click here to go to my article on the Rackspace blog site.


Filed under: Cloud, Cloud Computing, OpenStack, Private Cloud Tagged: Cloud, Cloud computing, OpenStack, Private Cloud, Rackspace, Red Hat

by kenhui at September 29, 2016 03:00 PM

Fleio Blog

First Fleio OpenStack Billing Release is Now Available to Download: 0.9-alpha

Fleio is an OpenStack billing system and self-service control panel for public cloud service providers. Version 0.9-alpha, the first public release, is now available to download and install on-premises. Fleio’s high-level features are: customer management (user sign-up/on-boarding, manage companies and OpenStack projects); an end-user self-service portal; measures OpenStack cloud resource usage; applies […]

by adrian at September 29, 2016 12:37 PM

OpenStack Superuser

OpenStack Contributor Awards now open

We’d like to introduce the next round of community awards handed out by the Foundation, to be presented at the feedback session of the summit.

Nothing flashy or starchy – the idea is that these are to be a little informal, quirky … but still recognizing the extremely valuable work that we all do to make OpenStack excel.

There are so many different areas worthy of celebration, but we think that there are a few main chunks of the community who need a little love:

  • Those who might not be aware that they are valued, particularly new contributors
  • Those who are the active glue that binds the community together
  • Those who share their hard-earned knowledge with others and mentor
  • Those who challenge assumptions and make us think

As before, rather than starting with a defined set of awards, we’d like to have submissions of names in those broad categories. Then we’ll have a little bit of fun on the back-end and try to come up with something that isn’t just your standard set of award titles (as you can see from the first group of winners) and iterate to success. 😉

The submission form is below, so please submit anyone who you think is deserving of an award!

https://openstackfoundation.formstack.com/forms/community_contributor_award_nomination_form

Awards will be presented during the feedback session on Friday.

Cover Photo // CC BY NC

The post OpenStack Contributor Awards now open appeared first on OpenStack Superuser.

by Tom Fifield at September 29, 2016 11:00 AM

The Official Rackspace Blog

Beyond Virtualization: Replacing VMware with OpenStack

As the mighty reign of VMware comes to a glorious sunset and virtualization becomes a thing of the past, many organizations are seeking out the next best cloud platform replacement. Some see OpenStack as a plausible next step but hesitate due to the many unknowns and differences. The old Sesame Street song, “One of these

The post Beyond Virtualization: Replacing VMware with OpenStack appeared first on The Official Rackspace Blog.

by Walter Bentley at September 29, 2016 10:00 AM

Carlos Camacho

Deployment tips for puppet-tripleo changes

This post will describe different ways of debugging puppet-tripleo changes.

Deploying puppet-tripleo using gerrit patches or source code repositories

In some cases, dependencies should be merged first in order to test newer patches when adding new features to THT. With the following procedure, the user will be able to create the overcloud images using work-in-progress patches from Gerrit code review, without having them merged (for CI testing purposes).

If you are using third-party repos included in the overcloud image, e.g. the puppet-tripleo repository, your changes will not be available in the overcloud until you write them into the overcloud image (by default: overcloud-full.qcow2).

In order to make quick changes to the overcloud image for testing purposes, you can:

Export the paths to your submission by following an In-Progress review:

  export DIB_INSTALLTYPE_puppet_tripleo=source
  export DIB_REPOLOCATION_puppet_tripleo=https://review.openstack.org/openstack/puppet-tripleo
  export DIB_REPOREF_puppet_tripleo=refs/changes/25/310725/14

In order to avoid noise on IRC, it is possible to clone puppet-tripleo and apply the changes from your github account. In some cases this is particularly useful as there is no need to update the patchset number.

  export DIB_INSTALLTYPE_puppet_tripleo=source
  export DIB_REPOLOCATION_puppet_tripleo=https://github.com/<usergoeshere>/puppet-tripleo

Remove previously created images from glance and from the user home folder by running:

  rm -rf /home/stack/overcloud-full.*
  glance image-delete overcloud-full
  glance image-delete overcloud-full-initrd
  glance image-delete overcloud-full-vmlinuz

After this step the images can be recreated by executing:

  ./tripleo-ci/scripts/tripleo.sh --overcloud-images
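
A quick sanity check (not part of the original procedure, but harmless) is to confirm the rebuilt images are registered in glance again:

  glance image-list | grep overcloud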

Debugging puppet-tripleo from overcloud images

For debugging purposes, it is possible to mount the overcloud .qcow2 file:

  #Install the libguestfs tools:
  sudo yum install -y libguestfs-tools

  #Create a temp folder to mount the overcloud-full image:
  mkdir /tmp/overcloud-full

  #Mount the image:
  guestmount -a overcloud-full.qcow2 -i --rw /tmp/overcloud-full

  #Check and validate all the changes to your overcloud image under /tmp/overcloud-full:
  #  For example, the puppet modules live in /tmp/overcloud-full/opt/stack/puppet-modules

  #Unmount the image
  sudo umount /tmp/overcloud-full

From the mounted image it is also possible, for testing purposes, to run the puppet manifests by using puppet apply and including your manifests:

  sudo puppet apply -v --debug --modulepath=/tmp/overcloud-full/opt/stack/puppet-modules -e "include ::tripleo::services::time::ntp"

by Carlos Camacho at September 29, 2016 09:00 AM

Opensource.com

5 new OpenStack tutorials and guides

Every month we gather the best in new community-created instructional materials for OpenStack, the open source cloud computing project.

by Jason Baker at September 29, 2016 07:02 AM

September 28, 2016

Kenneth Hui

The Architecture Behind Rackspace Private Cloud Powered by Red Hat OpenStack

diy

When Rackspace and Red Hat came together to create a managed private cloud, we knew this offering needed to support enterprises looking to transform themselves using the latest technological innovations. Rackspace Private Cloud Powered by Red Hat, RPC-R for short, was architected to meet stringent enterprise requirements such as availability, stability and manageability, while helping these enterprises enter the cloud computing age.

The process for creating RPC-R began with taking the industry’s best OpenStack distribution, Red Hat OpenStack platform, and designing a reference architecture Rackspace could deploy and operate on behalf of our joint customers. Think of it as building the best racing car in the sport, then surrounding it with the best pit crew and equipment.

race-car-rpc-r

To read more about how Rackspace and Red Hat have collaborated on the best OpenStack private cloud solution, please click here to go to my article on the Rackspace blog site.


Filed under: Cloud, Cloud Computing, OpenStack, Private Cloud Tagged: Cloud, Cloud computing, OpenStack, Private Cloud, Rackspace, Red Hat

by kenhui at September 28, 2016 07:00 PM

Chris Dent

OpenStack Ocata TC Candidacy

Despite its name, the Technical Committee has become the part of the OpenStack contributor community that enshrines, defines, and -- in some rare cases -- enforces what it means to be "OpenStack". Meanwhile, the community has seen a great deal of growth and change.

Some of these changes have led to progress and clarity, others have left people confused about how they can best make a contribution and what constraints their contributions must meet (for example, do we all know what it means to be an "official" project?).

Much of the confusion, I think, can be traced to two things:

  • Information is not always clear nor clearly available, despite valiant efforts to maintain a transparent environment for the discussion of policy and process. There is more that can be done to improve engagement and communication. Maybe the TC needs release notes?

  • Agreements are made without the full meaning and implications of those agreements being collectively shared. Most involved think they agree, but there is limited shared understanding, so there is limited effective collaboration. We see this, for example, in the ongoing discussions on "What is OpenStack?". Agreement is claimed without actually existing.

We can fix this, but we need a TC that has a diversity of ideas and experiences. Other candidates will have dramatically different opinions from me. This is good because we must rigorously and vigorously question the status quo and our assumptions. Not to tear things down, but to ensure our ideas are based on present day truths and clear visions of the future. And we must do this, always, where it can be seen and joined and later discovered; gerrit and IRC are not enough.

To have legitimate representation on the Technical Committee we must have voices that bring new ideas, are well informed about history, that protect the needs of existing users and developers, encourage new users and developers, that want to know how, that want to know why. No single person can speak with all these voices.

Several people have encouraged me to run for the TC, valuing my willingness to ask questions, to challenge the status quo and to drive discourse. What I want is to use my voice to bring about frequent and positive reevaluation.

We have a lot of challenges ahead. We want to remain a pleasant, progressive and relevant place to participate. That will require discovering ways to build bridges with other communities and within our own. We need to make greater use of technologies which were not invented here and be more willing to think about the future users, developers and use cases we don't yet have (as there will always be more of those). We need to keep looking and pushing forward.

To that end I'm nominating myself to be a member of the Technical Committee.

If you have specific questions about my goals, my background or anything else, please feel free to ask. I'm on IRC as cdent or send some email. Thank you for your consideration.

by Chris Dent at September 28, 2016 04:42 PM

SUSE Conversations

OpenStack Past, Present and Future

According to the famous quote-master Yogi Berra, “nostalgia ain’t what it used to be”.  While that may be true, the open source community couldn’t resist indulging in some well-earned nostalgia recently.  Having just passed major milestones of 25 years for Linux and 6 years of OpenStack, we’ve had the chance to look back at the …

+read more

The post OpenStack Past, Present and Future appeared first on SUSE Blog.

by Mark_Smith at September 28, 2016 04:24 PM

Tesora Corp

Join Us in Barcelona Next Month!

The Tesora team is heading to the OpenStack Summit in Barcelona next month. From October 25-28, you can find us in the marketplace at Booth B33, as well as leading several presentations on Database as a Service.

If you’re a fan of OpenStack Trove, join us for these talks:

Tuesday, 10/25:

11:25 am- 12:05 pm

Working on OpenStack: My First Year in the Community

5:55 pm- 6:35 pm

Tesora and VMware present – A Complete Guide to Running Your Own DBaaS using OpenStack Trove


Wednesday, 10/26:

11:25 am- 12:55 pm

“Help, I have Trove deployed, how do I build guest images?”

3:55 pm- 4:35 pm

Best Practices for Deploying OpenStack Trove: An Inside Look at Database as a Service Architecture

3:55 pm- 4:35 pm

Stewardship: Bringing More Leadership and Vision to OpenStack 

3:55- 4:35 pm

Lessons from the Community: What I’ve Learned as an OpenStack Day Organizer


Thursday, 10/27:

1:50 pm- 2:30 pm

What’s New with OpenStack Trove in Newton, What’s On Deck for Ocata 

The post Join Us in Barcelona Next Month! appeared first on Tesora.

by Alex Campanelli at September 28, 2016 02:00 PM

OpenStack Superuser

Considering OpenStack? A new book shows you how to get started

If you’re new to OpenStack, it helps to have someone show you around. Two active members of the OpenStack community wrote a book with real-world examples to help you find your way.

Elizabeth K. Joseph has worked in open source for a decade and in OpenStack for over four years. She’s a member of the OpenStack Infra Team and Women of OpenStack.  Contributing author Matt Fischer, principal software engineer at Time Warner Cable who helped deploy their OpenStack cloud, is a frequent speaker at OpenStack Summits and a familiar figure at the Mid-Cycle Ops Meetups.

Their book “Common OpenStack Deployments” from Prentice Hall offers a guide for people interested in OpenStack but new to the technology and its features. The authors provide a walk-through of your first approach to OpenStack and then take you all the way to controlling containers and troubleshooting in about 250 pages.  You can read a sample, buy the book in hard copy or e-versions on the publisher’s website and through online retailers including Safari Books.

Superuser talked to Joseph about this “gentle introduction” to OpenStack and some of her favorite resources for getting up to speed.

[Image: “Common OpenStack Deployments” book cover]

Who will this book help most?

When I came up with the idea for the book two years ago, I was attending a lot of open source conferences where OpenStack was quickly becoming a popular topic. There were introduction talks here and there, but already talks had started diving into specific projects. This brought great value to the OpenStack community early on because we could meet up and collaborate at these conferences, and early operators found a high level of expertise among the speakers.

There was a large group of folks who were left out of the excitement though: ones who were more generalists in the open source community, operators who weren’t in a position to change what they’re using for virtualization or cloud in their organization, and others. When I’d mention I worked on the Infrastructure Team for OpenStack, people would regularly ask me exactly what OpenStack was. “Open source software for building clouds” rarely satisfied them. What do I mean by “cloud?” What services does OpenStack exactly offer? Do I need to completely delete my current virtualization environment to use OpenStack? The book is a gentle introduction that answers all these questions, with real-world examples designed to speak to this audience.

How will this book help people get their heads around OpenStack and its capabilities?

The book starts off by talking about what we mean by cloud and explaining several of the popular projects, like Nova, Neutron and Cinder. One of the most important things I believe people should learn early on is that OpenStack is a collection of projects that interoperate and come together to be this thing we call OpenStack. As a result, OpenStack is incredibly modular, making it so that OpenStack clouds run by different organizations are rarely identical. You define what you need from your cloud, and then you identify, evaluate and implement the components that you need.

With that behind us, the book presents a series of deployment scenarios. The chapters start by explaining a series of use cases describing who may use the particular scenario, say a university looking to give virtual machines to students, or a film company processing a lot of raw data that requires a massive storage back end and transfer of large files. While I kept these examples general, I took many of them from OpenStack Summit keynotes and articles on Superuser, so thank you for showcasing so much great work that organizations are doing with OpenStack!

What I’m most proud of though is that these deployment scenarios are fully open source. A majority of the code for the book is on GitHub under an Apache license and we use the upstream OpenStack Puppet modules. My contributing author Matt Fischer uses these modules at work, and I now use them in OpenStack Infrastructure’s infra-cloud. Matt’s enterprise-level expertise and ability to bring that down into our smaller-scale scenarios were invaluable, and we had a lot of support from folks on the Puppet team like Colleen Murphy and Emilien Macchi. Using the Puppet modules also allowed us to present examples that people can actually modify and start using if they want to diverge from the instructions in the book and start creating their own scenarios. I liked this approach a lot since it’s very different from a lot of examples in other books that had you running things locally in containers or deploying by hand. We wanted to present a clear path to a production deployment.


What’s the role of a book when there are more training programs out now than ever?

I’m really excited that there are so many training programs out there today; it’s an important part of learning for a lot of folks, and we went far too long without having such programs in place.

I see the book as a kind of introductory phase between getting interested in OpenStack and securing a spot in a training program. If you are in a position to make decisions in your organization about your cloud and virtualization usage, you may want to learn about it by reading a book first and later applying for formal training. If you’re looking to expand your skill set on your own time, or haven’t yet convinced your department to send you to training, you can benefit by picking up a book today and getting familiar with the basics, which can help build a case for sending you to formal training.

There are also folks who still prefer books to formal training, and I’m among them. I am better at learning at my own pace and find great value in introductory books that give me an overview. With the overview in my head, I can go back to the project documentation with a greater understanding of the context within which it’s written and dive into specifics.

What’s on your OpenStack bookshelf?

My first stop for everything OpenStack is http://docs.openstack.org/. The OpenStack Documentation team does a lot of amazing work that is often not highlighted enough. They have install guides for multiple distributions and that landing page also links to various operations guides, API references, contributing documents and more. These resources are free and if you’re looking for specific information about a service, they’re definitely where you want to start.

As I was writing the book, I found a lot of value in some of the project-specific books. Off the top of my head, I found “OpenStack Swift,” published by O’Reilly Media, to provide a considerable amount of insight into how the internals of Swift work (especially concepts like The Ring). I also have a copy of “Identity, Authentication, and Access Management in OpenStack: Implementing and Deploying Keystone” on my shelf; doing identity management correctly is incredibly important, and many organizations have some sort of tooling that they want OpenStack to tie into, so having a reference that covers it is important.

Finally, I highly recommend brushing up on your general Linux administration and networking skills. There are a lot of books on this topic out there; you want to find a book or other materials that can walk you through debugging tools (including viewing logs, analyzing TCP dumps and working with stack traces).

OpenStack can sometimes be a bit obtuse with the errors it gives, so having a firm grasp on the core system you’re working with will help you a lot when OpenStack isn’t directly giving you the answers you’re looking for.

Cover Photo // CC BY NC

The post Considering OpenStack? A new book shows you how to get started appeared first on OpenStack Superuser.

by Isaac Gonzalez at September 28, 2016 11:02 AM

The Official Rackspace Blog

Navigating the Multi-Cloud Maze

Apple made big news earlier this year when it moved its iCloud service to the Google Cloud Platform while also continuing to make heavy use of AWS and Azure. But according to Larry Dignan, Editor-in-Chief at ZDNet, the real story is that this move in no way makes them unique: “Let’s replace Apple with any

The post Navigating the Multi-Cloud Maze appeared first on The Official Rackspace Blog.

by John Engates at September 28, 2016 10:00 AM

September 27, 2016

OpenStack Superuser

Why OpenStack and Kubernetes are better together

The pairing of OpenStack and Kubernetes has been likened to the dynamic duo and a May-September romance.

Still, some question why you’d need these robust, community-driven platforms that provide computing and application resources together. At the recent OpenStack Days Silicon Valley, Sumeet Singh of AppFormix launched a fireside chat by asking Craig McLuckie, product manager for Kubernetes at Google, and Brandon Phillips, CTO of CoreOS, why a company would need OpenStack if it were using Kubernetes.

Video: https://www.youtube.com/embed/PXlgbiiRkrU


McLuckie’s response: Today, the world doesn’t fit into a straight-up cloud-native computing box.  The enterprise world is hybrid in terms of the application patterns that people need to use, the types of infrastructure people need access to and the way that people want to run their workloads.

“Enterprise needs a set of familiar infrastructure primitives,” McLuckie adds, “and these are really well embodied in OpenStack today. It’s also interested in transitioning to more cloud-native progressive computing, which is what Kubernetes can provide.”

Companies have invested in virtual machine (VM) computing of the OpenStack variety, and there will continue to be VM users well into the future, said Phillips. As teams begin to demand more cloud-native systems, it makes more sense to blend systems like Kubernetes into the fold, as it were.

McLuckie remembers when companies like eBay, which were heavily invested in OpenStack, started to look at ways to integrate technologies such as containers and cloud-native computing into the traditional infrastructure-as-a-service environment.

There are several problems within each ecosystem that are unaddressed as standalone products. It’s hard to deal with the operationalization of a very rich portfolio of services in OpenStack, said McLuckie, making it tough to roll out OpenStack, version it, update it, and manage it. Cloud-native computing systems like Kubernetes, then, make it a lot easier with a simple logical compute substrate into which you can deploy your services.

There was also a significant gap between the efficiencies customers could get out of more traditional, monolithic, VM-based scheduling versus the much more agile dynamic systems. Technologies like Kubernetes can bring much richer efficiencies to a traditional ecosystem. On the flip side, said McLuckie, customers who were using Kubernetes on its own were asking for extensions that would bring more traditional services, like block and object storage, into the cloud-native ecosystem. OpenStack has a very robust set of services that naturally complements these technologies and integrates well with current physical infrastructures.

“Kubernetes solves a whole bunch of problems and offers a window into a new way of thinking about deploying, managing, and scheduling your applications,” says McLuckie. “OpenStack provides a very robust set of infrastructure primitives that you can bring in together.”

Phillips agreed, pointing to the way his company’s customers were searching for ways to more consistently manage how they deployed OpenStack. His team worked on a way to manage OpenStack as a complex application inside of containers. Now, they use Kubernetes in a similar way, with compassion for the OpenStack operator’s difficult task of deploying, updating, and managing OpenStack.

McLuckie said that the adoption of Kubernetes is far quicker than any other software he’s ever worked with, and that it’s the most widely used container orchestration technology out there. In a sense, added Phillips, the success of OpenStack has led to a larger addressable market in which Kubernetes can thrive.

Still, the community-led Kubernetes and OpenStack projects will benefit greatly from collaborative development. OpenStack provides needed infrastructure, like object storage and Keystone authentication, both of which were brought into the Kubernetes project itself.

“There’s a bunch of opportunities here to mix and match the components,” said Phillips. “Enterprises today have large OpenStack application deployments and want to utilize that stored data but also want to bring containerized applications into the ecosystem.”

The base integration work needs to be done, said McLuckie, and doing that will help optimize the strong synergies between the two projects. But that’s not all the community needs to do, he says.

Some of the needed work, said McLuckie, is around some higher-level concepts, like how next-generation enterprise applications are described, composed and distributed.

“It’s an amazing opportunity to come together across communities that are doing cloud-native work as well as more traditional infrastructure communities like OpenStack,” he said.

The communities need to rally together to help the software vendors that are building distributed systems and deploying them into these environments to have a set of common standards for how to actually describe, package and distribute their applications.

There’s also a potential for deeper integration between OpenStack and Kubernetes, says McLuckie. The idea is to continue to advance container technologies while preserving the best attributes of traditional infrastructure and VM-based models, and then bring that to both communities at the same time.

The trick to enabling this collaboration, said McLuckie, is at the foundation level. The OpenStack Foundation and the Cloud Native Computing Foundation serve overlapping communities already, so why not create working groups that span the two foundations to help cross-pollinate these technologies?

“Hopefully, over time, we can tear down the boundaries that exist between the two foundations themselves, and harmonize them together,” he said.


The post Why OpenStack and Kubernetes are better together appeared first on OpenStack Superuser.

by Rob LeFebvre at September 27, 2016 11:02 AM

September 26, 2016

DreamHost

How DreamHost Builds Its Cloud: Selecting Microprocessors

This is post No. 2 in a series of posts about how DreamHost builds its Cloud products. Written by Luke Odom straight from data center operations. 

In this post, we are going to be looking at what processor we are using in the new DreamCompute cluster, and how we picked it!

A processor is one of the most crucial components of a machine. The processor, also known as the CPU, is the metaphorical brain of the computer. It does all the “thinking” for the computer. A CPU can have one or more cores. A core is the part of the processor that does the actual computing. The first thing to consider is the instruction set. This is the language that the processor speaks. The popular instruction sets for use in servers are Sparc, ARM, and x86-64. Processors using the Sparc instruction set are made by Oracle. While Linux can run on them, the processors are designed to run Solaris, an operating system that is also made by Oracle. ARM is an instruction set that has been around for 30 years and had primarily been used in embedded devices. Cell phones, TVs, and tablets are some of the many places you will find ARM processors.

Recently, we’ve been testing some servers with ARM processors. The advantage of these servers is they use very little power and have many low-power cores, which is useful in environments where you have many processes running at the same time. The most common instruction set you will find in a data center is x86-64. This is the 64-bit version of the x86 instruction set that has been in use since 1978. Almost all consumer laptops, desktops, and enterprise servers are made using processors based on x86-64. Because of this, we decided to use a processor based on x86-64.

Cherry picking

Source: Flickr

The server x86-64 processor manufacturers are AMD and Intel. AMD, which we used in our beta cluster, last released a major x86-64 server processor in 2012. Since that release, AMD has been working on Zen, a new architecture. Unfortunately, Zen has not yet been released. Because of this, the AMD processors currently for sale are much higher wattage and slower than others on the market.

In the time since the last major AMD update, Intel has released three new generations of chips, each faster and more power efficient. This makes Intel the best choice right now, so we decided to use them. Every year or so, Intel releases a new “generation” of processors. With each generation, Intel does one of two things. They either change the transistor size, making the processor smaller and more power efficient, or they keep the transistor size the same and focus on adding features.

Within the server line of processors, there are four generations currently being produced. Ivy Bridge, which was released in 2012, is an upgrade of the older Sandy Bridge processors with smaller transistors and is denoted with a v2 at the end of the processor name for most server processors. Haswell, released in 2013, is a refinement of Ivy Bridge and uses the same transistor size. For servers, it is denoted as v3. Broadwell, released in 2014, was a refinement of the Haswell processor with smaller transistors and is denoted with a v4. Finally, Skylake, released last year, is a refinement of Broadwell at the same transistor size and is denoted with a v5.

Individual product lines within the server category are upgraded at different times. Even though a generation has been released, it may be years before a certain product line begins using that generation, and certain product lines may skip over entire generations. The five current product lines within the server category are Atom, Xeon D, Xeon E3, Xeon E5, and Xeon E7. The Atom line is an ultra-low power line. These aren’t really powerful enough to use in our hypervisors, and the low number of cores would significantly limit the size of virtual machine we could offer. The next line is Xeon D. These are higher wattage and faster than the Atom processors, but still not quite the power we wanted to be able to give our customers.

The next line is the E3. The E3 line is the only server-based line that has been upgraded to Skylake. It has plenty of power, but you are limited to a single processor per system and four cores per processor. The E3 line lacks the density that would make it usable. At the time we were designing the new DreamCompute cluster, the E5 was Haswell-based, but we knew Broadwell was coming soon. As we can only use what is currently being produced, we only looked at the Haswell line. The E5 line is marketed to data centers. Within the E5 line, you have single, dual, and quad processor options with many choices within each of those. The E5 line just might work! The E7 line is the last line we looked at. The E7 line was, at the time, Ivy Bridge-based, though we knew Haswell was coming soon. The E7 line is focused on density. E7s have both quad and octo processor options with up to 18 cores per processor. They are primarily used in environments where you need a single computer to be able to do a lot of work. That made E7s a possible, but not ideal, fit for DreamCompute as we probably don’t want that much density.

Now that we knew what processor lines would work, we needed to consider two more factors. We needed to figure how many processors we wanted in each system and how many cores each processor should have. Originally, processors had a single core, but modern x86-64 processors can have up to 22 cores on a single processor. This is especially useful in a shared resource environment (like DreamCompute) where you don’t want the processes of one user to affect everyone else on that hypervisor. Intel also has a technology called “hyperthreading,” where it presents each core to the operating system twice to allow for more efficient use of each core. You can also put multiple processors in each server. You can have up to eight 18-core processors in a system for a total of 144 cores or 288 threads. Though, as we know from our previous cluster, density isn’t everything. We wanted to be able to balance power and density, and limit the single points of failure. Based on the RAM-to-core ratio we wanted and the maximum density we were willing to have, we decided to test dual and quad processor systems with eight to 14 cores in each system. We tested both the E5 dual and quad processor lines as well as the E7 quad processor line.
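
As a side note, the socket/core/thread split is easy to see from the operating system. On a hypothetical dual-socket box with 14-core hyperthreaded processors, lscpu would report something like the following (illustrative values: 2 sockets x 14 cores x 2 threads = 56 logical CPUs):

  $ lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'
  CPU(s):                56
  Thread(s) per core:    2
  Core(s) per socket:    14
  Socket(s):             2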

To pick the processor we wanted to use, we needed to take into consideration a few other factors. But… more on that in another post coming soon!

The post How DreamHost Builds Its Cloud: Selecting Microprocessors appeared first on Welcome to the Official DreamHost Blog.

by Luke Odom at September 26, 2016 09:20 PM

OpenStack Superuser

OpenStack Day Nordic highlights local financial services, scientific users

“Let’s meet up North!” was the rallying cry of the first OpenStack Day Nordic, which took place in Stockholm, Sweden. More than 300 people from the region who are evaluating OpenStack or building businesses with the technology heeded the call. Hot topics of the day included how OpenStack applies to mid-sized enterprises, compliance and regulated industries, high-powered computing and scientific research and monitoring for cloud operators.

The morning keynotes from Jonathan Bryce, Monty Taylor and Thierry Carrez talked about how the OpenStack community works, bringing together what might be considered competitors in the industry to collaborate around a common goal. They talked about key enterprise use cases to address the question of OpenStack in production and various infrastructure tools and community governance that support the maturing project.

Jan Van Eldik then caught us up on CERN’s continuing use of OpenStack, now running 190,000 cores. His team deployed Magnum this year, pushing their environment to support 2 million requests per second. Now they are looking at the Ironic bare metal project to manage both virtual and physical infrastructure via OpenStack. He said their biggest challenge is managing a variety of legacy hardware, because they are a true brownfield deployment. We also heard from two important new users in the region: Folksam, one of the largest insurance companies in Sweden, and Swedish National Infrastructure for Computing (SNIC), a research cloud available to all Swedish universities.


Folksam has strict requirements around data protection that make traditional public cloud environments untenable, but around 2013-2014, the company also realized they had to do more with less money. They needed to evaluate new architectures to support the vast amount of data they were storing and processing. For example, data is now available to analyze when and where car accidents occur, so cities can better plan traffic management. They also wanted to align their infrastructure strategy with their development strategy for a true DevOps environment.

Folksam realized they had four options: traditional VMware virtualization, public cloud, building a private cloud or using a private cloud managed by a partner. They started down the path of building their own cloud, but realized it wasn’t core to their business. They decided instead to go with City Network, which offered them a community cloud that meets the European Union’s General Data Protection Regulation (GDPR) standards with a mix of public and private cloud services. Roger Ewert of Folksam believes more insurance and financial services companies will be heading down this path, and advised them to think about a multi-cloud and hybrid cloud strategy, consider the cloud maturity of their own organization and plan time and effort to change the organization (not just implement the technology) before they get started.


Salman Toor spoke about his experience with the SNIC cloud, which is running in three different data centers (or regions) with shared authentication and a self-service dashboard. Each region based in a different city has autonomous capabilities, and in total the cloud supports 65 projects, including visualization/data analysis software Chipster, Galaxy and Proteomic. A year ago, they moved from CentOS to Ubuntu because they felt like there was a larger community of support. By the end of 2016, they expect to support 4,000 cores. One of the challenges Toor described is helping researchers understand how to architect applications in the cloud to take full advantage of its distributed and ephemeral nature. His team put together a series of workshops to help with training, which are available on GitHub. He is interested in collaborating with the community and Scientific Working Group on this topic going forward.

The OpenStack ecosystem is definitely growing in the Nordic region. In addition to City Network, Elastx has a public cloud in Sweden, and Norwegian service provider Basefarm sent attendees to the event. Other companies, including IPnett, Cybercom and Layer 8 IT Services, offer a variety of OpenStack services throughout Denmark, Finland, Norway and Sweden.

For an in-depth view into how Folksam is managing its insurance business in the cloud with City Network, check out their presentation at the OpenStack Summit Barcelona, taking place from October 25-28.

Cover Photo // CC BY NC

The post OpenStack Day Nordic highlights local financial services, scientific users appeared first on OpenStack Superuser.

by Lauren Sell at September 26, 2016 04:32 PM

The Official Rackspace Blog

Where to Find Rackspace OpenStack Experts at the Barcelona Summit 2016

It’s been more than six years and one billion server hours since Rackspace announced to the world that we, along with NASA, would be open sourcing a new cloud platform called OpenStack. We’ve been privileged to have a front row seat as this little open source project has grown to become one of the largest

The post Where to Find Rackspace OpenStack Experts at the Barcelona Summit 2016 appeared first on The Official Rackspace Blog.

by Kenneth Hui at September 26, 2016 04:18 PM

Matt Fischer

Writing a Nova Filter Scheduler for Trove

In the process of deploying Trove, we had one simple requirement: “Only run Trove instances on Trove nodes”. Surprisingly, this is a difficult requirement to meet. What follows are our attempts to fix it and what we ended up doing. Some of the things mentioned do not work because of how we want to run our cloud and may not apply to you. Also, this is not deployed in production yet; if I end up trashing or significantly changing this idea I will update the blog.

Option 1 – Use Special Trove Flavors

So you want to only run Trove instances on Trove compute nodes, nova can help you with this. The first option is to enable the deftly named AggregateInstanceExtraSpecsFilter in Nova. If you turn on this filter and then attach extra-specs to your flavors it will work as designed. As an aside, if you’re a software developer, placing warnings in CLI tools that end-users use is only going to cause consternation, for example the warnings below.

mfischer@Matts-MBP-4:~$ openstack aggregate list
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
+------+-------+-------------------+
|   ID | Name  | Availability Zone |
+------+-------+-------------------+
|  417 | M4    | None              |
|  423 | M3    | None              |
| 1525 | Trove | None              |
+------+-------+-------------------+
mfischer@Matts-MBP-4:~$ openstack aggregate show 1525
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
+-------------------+--------------------------------------------------------------------------------------+
| Field             | Value                                                                                |
+-------------------+--------------------------------------------------------------------------------------+
| availability_zone | None                                                                                 |
| created_at        | 2016-05-11T20:35:07.000000                                                           |
| deleted           | False                                                                                |
| deleted_at        | None                                                                                 |
| hosts             | [u'bfd02-compute-trove-005', u'bfd02-compute-trove-004', u'bfd02-compute-trove-003'] |
| id                | 1525                                                                                 |
| name              | Trove                                                                                |
| properties        | CPU='Trove'                                                                          |
| updated_at        | None                                                                                 |
+-------------------+--------------------------------------------------------------------------------------+

Note the properties portion here. This then matches with the special Trove flavors that we made. On the flavors we set the again deftly named aggregate_instance_extra_specs.

mfischer@Matts-MBP-4:~$ openstack flavor show 9050
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
+----------------------------+-------------------------------------------------------------------------------------------+
| Field                      | Value                                                                                     |
+----------------------------+-------------------------------------------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                                                     |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                                                         |
| access_project_ids         | None                                                                                      |
| disk                       | 5                                                                                         |
| id                         | 9050                                                                                      |
| name                       | t4.1CPU.512MB                                                                             |
| os-flavor-access:is_public | True                                                                                      |
| properties                 | aggregate_instance_extra_specs:CPU='Trove'                                                |
| ram                        | 512                                                                                       |
| rxtx_factor                | 1.0                                                                                       |
| swap                       |                                                                                           |
| vcpus                      | 1                                                                                         |
+----------------------------+-------------------------------------------------------------------------------------------+

We do all this currently with puppet automation and facter facts. If you are a trove compute node you get a fact defined and then puppet sticks you in the right host aggregate.

So this solution works but has issues. The problem with new flavors is that everyone sees them, so someone can nova boot anything they want and it will end up on your Trove node, thus violating the main requirement. Enter Option 2.

Option 2 – Set Image Metadata + a Nova Scheduler

In combination with Option 1, we can set special image metadata such that nova will only schedule those images to those nodes. The scheduler that kinda does this is obviously AggregateImagePropertiesIsolation (pro-tip: do not let Nova devs name your child). This scheduler matches metadata like the flavors above except does it on images. Trove images would be tagged with something like trove=true, for example:

openstack image set --property trove=true cirros-tagged

[DEV] root@dev01-build-001:~# openstack image list
+--------------------------------------+----------------+--------+
| ID                                   | Name           | Status |
+--------------------------------------+----------------+--------+
| 846ee606-9559-4fdc-83b9-1ca57895cf92 | cirros-no-tags | active |
| a12fda2c-d2ff-4b7b-b8f0-a8400939df78 | cirros-tagged  | active |
+--------------------------------------+----------------+--------+
[DEV] root@dev01-build-001:~# openstack image show a12fda2c-d2ff-4b7b-b8f0-a8400939df78
+------------------+-----------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                               |
+------------------+-----------------------------------------------------------------------------------------------------+
<snips abound>
| id               | a12fda2c-d2ff-4b7b-b8f0-a8400939df78                                                                |
| properties       | description='', direct_url='rbd://b589a8c7-9b74-49dd-adbf-90733ee1e31a/images/a12fda2c-d2ff-4b7b-   |
|                  | b8f0-a8400939df78/snap', trove='true'                                                               |
+------------------+-----------------------------------------------------------------------------------------------------+

The problem is that the AggregateImagePropertiesIsolation scheduler considers images that do not have the tag at all to be a match. So while this is solvable for images we control and automate, it is not solvable for images that customers upload; they will end up on Trove nodes because they will not have the trove property. You could solve this with cron, but that's terrible for a number of reasons.

Option 2a – Write Your Own Scheduler

So now we just bite the bullet and write our own scheduler. Starting with AggregateImagePropertiesIsolation, we hacked it down to the bare minimum logic, and that looks like this:

from nova.scheduler import filters


class TroveImageFilter(filters.BaseHostFilter):
    """Schedule Trove-tagged images onto Trove hosts and nothing else."""

    def host_passes(self, host_state, spec_obj):
        """Run Trove images on Trove nodes and not anywhere else."""

        # Image properties from the incoming boot request.
        image_props = spec_obj.get('request_spec', {}).\
            get('image', {}).get('properties', {})

        # A Trove host is simply a member of a host aggregate named 'Trove'.
        is_trove_host = False
        for ha in host_state.aggregates:
            if ha.name == 'Trove':
                is_trove_host = True

        # debug prints for is_trove_host here

        # We only care that the tag exists, not what it is set to.
        is_trove_image = 'tesora_edition' in image_props

        if is_trove_image:
            return is_trove_host
        else:
            return not is_trove_host

So what does it do? First it determines whether this is a Trove compute host or not; this is a simple check: are you in a host aggregate called Trove or not? Next we determine if someone is booting a Trove image or not. For this we use the tesora_edition tag, which is present on our Trove images. Note we don’t really care what it’s set to, just that it exists. This logic could clearly be re-worked or made more generic and/or configurable #patcheswelcome.
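
For completeness, tagging an image so this filter treats it as a Trove image is the same kind of property set shown earlier. The image name below is hypothetical, and since the filter only checks that the key exists, the value is arbitrary:

openstack image set --property tesora_edition=true my-trove-guest-image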

Deploying

A few notes on deploying this. Once your python code is shipped you will need to configure it. There are two settings that you need to change:

- scheduler_available_filters - Defines filter classes made available to the scheduler. This setting can be used multiple times.

- scheduler_default_filters - Of the available filters, defines those that the scheduler uses by default.

The scheduler_available_filters option defaults to a setting that basically means “all”, except that doesn’t include your scheduler, just the default ones that ship with Nova, so when you turn this on you need to change both settings. This is a multi-valued string option, which means in basic terms you set it multiple times in your configs, like so:

scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_available_filters=nova_utils.scheduler.trove_image_filter.TroveImageFilter

(Note for Puppet users: The ability to set this as a MultiStringOpt in Nova was not landed until June as commit e7fe8c16)

Once that’s set you need to make it available, I added it to the list of things we’re already using:

scheduler_default_filters = <usual stuff>,TroveImageFilter

Note that available has the path to the class and default has the class name; do this wrong and the scheduler will error out saying it can't find your scheduler.

Once you make these settings, I also highly recommend enabling debug and then bouncing nova-scheduler. With debug on, you will see nova walk the filters and see how it picks the node. Unsurprisingly, it will be impossible to debug without this enabled.

In Action

With this enabled and with 3 compute nodes I launched 6 instances. My setup was as follows:

compute3 – Trove host-aggregate
compute1,2 – Not in Trove host-aggregate

Launch 3 instances with the tagged image, and note they all go to compute3.
Launch 3 instances with the un-tagged image, and note they all go to compute1,2.

Here’s some of the partial output from the scheduler log with debug enabled.

2016-09-23 01:45:56.763 1 DEBUG nova_utils.scheduler.trove_image_filter 
(dev01-compute-002, dev01-compute-002.os.cloud.twc.net) ram:30895 disk:587776 
io_ops:1 instances:1 is NOT a trove node host_passes 
/venv/local/lib/python2.7/site-packages/nova_utils/scheduler/trove_image_filter.py:47
2016-09-23 01:45:56.763 1 DEBUG nova_utils.scheduler.trove_image_filter 
(dev01-compute-003, dev01-compute-003.os.cloud.twc.net) ram:31407 disk:588800
io_ops:0 instances:0 is a trove node host_passes
/venv/local/lib/python2.7/site-packages/nova_utils/scheduler/trove_image_filter.py:44

Conclusion

So although I didn’t really want to, we wrote our own filter scheduler. Since there are lots of good examples out there, we had it working in less than an hour. In fact it took me longer to cherry-pick the puppet fixes I needed and figure out the config options than to write the code.

Writing a nova scheduler filter let us solve a problem that had been bugging us for some time. If you plan on writing your own filter too, you could look at the barebones docs for new filter writing here; note that there’s no section header for this, so look for “To create your own filter”. (When this lands there will be section headers on the page: https://review.openstack.org/#/c/375702/) I’d also recommend that when you’re first working on it you just copy an existing filter and hack on it in the same folder; then you don’t have to deal with changing the scheduler_available_filters setting, since it loads everything in the filters folder.

by Matt Fischer at September 26, 2016 02:30 PM

RDO

Running Tempest on RDO OpenStack Newton

Tempest is a set of integration tests to run against an OpenStack cluster.

What does RDO provide for Tempest?

RDO provides three packages for running tempest against any OpenStack installation.

  • python-tempest : It can be used as a python library and consumed as a dependency for out-of-tree tempest plugins, e.g. the horizon and designate tempest plugins.
  • openstack-tempest : It provides the python tempest library and the executables required for running tempest.
  • openstack-tempest-all : It will install openstack-tempest as well as all the tempest plugins on the system.

Deploy packstack using the latest RDO Newton packages

Roll out a CentOS 7 VM, then follow these steps:

  1. Install rdo-release-newton rpm

     # yum -y install https://rdoproject.org/repos/openstack-newton/rdo-release-newton.rpm
    
  2. Update your CentOS VM and reboot.

     # yum -y update
    
  3. Install openstack-packstack

     # yum install -y openstack-packstack
    
  4. Run packstack by enabling RDO GA testing repo:

     # packstack --enable-rdo-testing=y --allinone
    

    Once the packstack installation is done, we are good to go ahead.

Install tempest and required tempest plugins

  1. Install tempest

    # yum install openstack-tempest
    
  2. Install tempest plugins based on the OpenStack services installed and configured in the deployment.

    Packstack installs horizon, nova, neutron, keystone, cinder, swift, glance, ceilometer, aodh and gnocchi by default. To find out which OpenStack components are installed, just do an rpm query:

     # rpm -qa | grep openstack
    

    Or you can use the openstack-status command for the same information. Just grab the tempest plugins for these services and install them.

     # yum install python-glance-tests python-keystone-tests python-horizon-tests-tempest \
       python-neutron-tests python-cinder-tests python-nova-tests python-swift-tests \
       python-ceilometer-tests python-gnocchi-tests python-aodh-tests
    
  3. To find which tempest plugins are installed:

     # tempest list-plugins
    

    Once done, you are ready to run tempest.

Configuring and Running tempest

  1. Source the admin credentials and switch to a normal user

     # source /root/keystonerc_admin
    
     # su <user>
    
  2. Create a directory from where you want to run tempest

     $ mkdir /home/$USER/tempest; cd /home/$USER/tempest
    
  3. Configure the tempest directory

     $ /usr/share/openstack-tempest-*/tools/configure-tempest-directory
    
  4. Auto-generate the tempest configuration for your deployed OpenStack environment

    $ python tools/config_tempest.py --debug identity.uri $OS_AUTH_URL \
      identity.admin_password  $OS_PASSWORD --create
    

    It will automatically create all the required configuration in etc/tempest.conf
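    For illustration, the generated file will then contain values along these lines (the endpoint and password shown here are placeholders, not real values):

     [identity]
     uri = http://192.0.2.10:5000/v2.0
     admin_password = secret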

  5. To list all the tests

    $ testr list-tests
    

    OR

    $ ostestr -l
    
  6. To run tempest tests:

     $ ostestr
    
  7. To run api and scenario tests (excluding those tagged slow) using ostestr and print the slowest tests after the run:

     $ ostestr --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))'
    
  8. To run specific tests:

     $ python -m testtools.run tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON
    

    OR

     $ ostestr --pdb tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON
    

    ostestr --pdb will call python -m testtools.run under the hood.

Thanks to Luigi, Steve, Daniel, Javier, Alfredo, Alan for the review.

Happy Hacking!

by chandankumar at September 26, 2016 01:33 PM

Hugh Blemings

Lwood-20160925

Introduction

Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here.

Basic Stats for week 19 to 25 September for openstack-dev:

  • ~547 Messages (up a bit over 9.5% relative to last week)
  • ~170 Unique threads (down about 21% relative to last week)

Yet another pretty typical week on the list :)

Notable Discussions – openstack-dev

Architecture Working Group Process

Dean Troyer gives a concise update on where things are up to with the Architecture WG – this important initiative looks to be making good progress.

Removing OpenStackSalt and Security project teams from the Big Tent?

Thierry Carrez started what proved to be a longish thread this week with an email noting that there were no PTL candidates within the election deadline for a number of official project teams – Astara, UX, OpenStackSalt and Security.

Turns out that the Astara project team wished to abandon the project anyway, while for UX the current PTL (Piet) quickly reacted to explain his error and confirm his willingness to continue in the role.

The thread then dealt a bit more specifically with OpenStackSalt – a newer project that seemingly had some misunderstandings about the process – and Security, which has of course been around much longer.

The conclusion by the end of the thread looks to be that both Security and OpenStackSalt will stay within the tent after some satisfactory email and IRC discussions.  Rob Clark’s blog post regarding the situation for the Security project is worth a read too.

Community Contributor Awards nominations open

Kendall Nelson notes that nominations are open for the Community Contributor Awards and will remain so until October 7.

As Kendall puts it: “There are so many people out there who do invaluable work that should be recognized. People that hold the community together, people that make working on OpenStack fun, people that do a lot but aren’t called out for their work, people that speak their mind and aren’t afraid to challenge the norm.” She continues: “Like last time, we won’t have a defined set of awards so we take extra note of what you say about the nominee in your submission to pick the winners.”

Please give some thought to nominating your favourite person or their endeavour for the awards; it’s an easy process and a nice way to give recognition to fellow community members.

PTL Election Concludes, TC Positions Open

Tony Breeds noted that the PTL Election has concluded and the results are detailed in his email.

Next up, as pointed out by Tristan Cacqueray in his email are TC elections with Candidate nominations open now through to 1 October 23:45 UTC.

How do -you- handle the openstack-dev Mailing List

Is the question posed by Josh Harlow in his email to the list from late last week – he’s heard it a few times over the years (I dare say many people have) – and is keen for people to share their own work practices.

If you’re reading this -and- contribute to/read the ML please chuck your thoughts in the etherpad :)

Operators Meetup Feedback

So far just the one from Sean Dague from the ops meetup, related to Nova, but it’s worth a read and hopefully will inspire more to come :)

More Beautiful Music in Barcelona

Last week’s Lwood made mention of Amrith Kumar’s post seeking instrument-wielding musicians – this in turn prompted Neil Jerram to note that there’s a singing group getting together too :)

Notable Discussions – other OpenStack lists

Nothing that leapt out from the other lists this week.

People and Projects

Core nominations & changes

  • [Searchlight] Matt Borland Core Nomination – Travis Tripp
  • [QA] resigning from Tempest core – Marc Koderer
  • [QA] tempest-cores update – Ken Ohmichi

Further Reading & Miscellanea

Don’t forget these excellent sources of OpenStack news – most recent ones linked in each case

This week’s edition of Lwood brought to you by Pink Floyd (A Momentary Lapse of Reason), Bruce Hornsby (Hot House, Levitate) and Bruce Springsteen (High Hopes)

Last but by no means least, thanks, as always, to Rackspace :)

by hugh at September 26, 2016 07:57 AM

Opensource.com

SDN and NFV integration, updated API documentation, and more OpenStack news

Want to follow the OpenStack cloud computing project? Every week we check in with latest from the open source cloud.

by Jason Baker at September 26, 2016 05:00 AM

Enriquez Laura Sofia

The power of community

Hello, what’s up?

I’m so happy to announce that I talked about all I’ve learned in my internship with OpenStack at two events. My internship has finished, but not my contributions.

  1. ARGCONF  at RedHat Arg

I joined vkmc in her talk about The power of community. This was my first talk ever, so it was an important experience for me.

I talked about the contribution workflow. Since this can be a bit confusing for newcomers, it’s good to cover this topic. Workflow and conventions are still an issue for me in such a big community, but luckily it’s easy to find help.

Slides Argconf here

2. LinuxChixArg meetup

The most surprising part of the meetup was that @stringarray, an ex-Outreachy intern, attended!! She joined us and talked about her experience working with GNOME six years ago.

We gave a special place to Outreachy this time, talking about our experiences and our time working on the internship. I hope the attendees enjoyed the meetup as much as I did. We try to share our motivation for FOSS every day.

@cynpy shared our next event: PyCon Argentina.

Something else?

Twitter:

I also joined #OutreachyChat to advise applicants for the incoming round of Outreachy 😀

What’s next?

I’ll attend the OpenStack Summit Barcelona (24th-29th October)!! I can’t wait for it.

I’ll also attend PyCon Ireland!

by enriquetaso at September 26, 2016 02:25 AM

Openstack Security Project

Secure Development in Python

OpenStack is one of the largest Python projects, both in code size and number of contributors. Like any development language, Python has a set of best (and worst) security practices that developers should be aware of to avoid common security pitfalls. One mission of the OpenStack Security Project is to help developers write Python code as securely and easily as possible, so we created two resources to help.

Secure Development Guidelines

The Secure Development Guidelines were created with the goal of making it quick and easy for a developer to learn:

  • What the best practice is
  • An example of the incorrect (insecure!) way of accomplishing a task
  • An example of the correct way of accomplishing a task
  • The consequences of not following best practices
  • Links for further reference

As developers ourselves we’re guilty of more than the occasional copy-paste. The Correct section of each Secure Development Guideline is a perfect source to jump in and get the best-practice code snippet you need.

Bandit

Bandit was built to find common insecure coding practices in Python code. Developed for the OpenStack community by the OSSP, it is the best Python static analysis tool available (in our biased opinion). Like all OSSP resources and tools, Bandit is open source and we encourage people to use it, extend it, and provide feedback.
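To give a flavour of what Bandit catches, here is a small example of our own making (not taken from the guidelines); B602 is one of Bandit’s built-in checks:

    import subprocess

    def list_directory(path):
        # Bandit flags this (B602, subprocess_popen_with_shell_equals_true):
        # with shell=True, a crafted path can inject arbitrary shell commands.
        return subprocess.check_output('ls ' + path, shell=True)

    def list_directory_safe(path):
        # Passing an argument list with no shell avoids the injection.
        return subprocess.check_output(['ls', path])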

If you’re new to Bandit, a good way to get started is by watching this presentation.

Also check out our wiki.

If you have any questions please contact us on the OpenStack Developer Mailing list (using the [Security] tag), or visit us on IRC in #openstack-security on Freenode.

September 26, 2016 12:00 AM

September 24, 2016

OpenStack in Production

Our Cloud in Liberty

We have previously posted experiences with upgrades of OpenStack such as
The upgrade to Liberty for the CERN cloud was completed at the end of August. Working with the upstream OpenStack, Puppet and RDO communities, this went pretty smoothly without any issues, so there is no significant advice to report. We followed the same approach as in the past, gradually upgrading component by component. With the LHC reaching its highest data rates so far this year (over 10PB recorded to tape during June), the upgrades needed to be done without disturbing the running VMs.

Some hypervisors are still on Scientific Linux CERN 6 (Kilo) using Python 2.7 in a software collection, but the backwards compatibility has allowed the rest of the cloud to migrate while we complete the migration of 5,000 VMs from old hardware and SLC6 to new hardware on CentOS 7 over the next few months.

After the migration, we did encounter a few problems:
The first two cases are being backported to Liberty so others who have not upgraded may not see these.

We've already started the Mitaka upgrade with Barbican, Magnum and Heat ahead of the other components as we enable a CERN production container service. The other components will follow in the next six months including the experiences of the migration to Neutron which we'll share at the OpenStack summit in October.



by Tim Bell (noreply@blogger.com) at September 24, 2016 04:25 PM

Elizabeth K. Joseph

OpenStack QA/Infrastructure Meetup in Walldorf

I spent this week in the lovely town of Walldorf, Germany with about 25 of my OpenStack Quality Assurance and Infrastructure colleagues. We were there for a late-cycle sprint, where we all huddled in a couple of rooms for three days to talk, script and code our way through some challenges that are much easier to tackle when all the key players are in a room together. QA and Infra have always been a good match for an event like this since we’re so tightly linked as things QA works on are supported by and tested in the Continuous Integration system we run.

Our venue this time around were the SAP offices in Walldorf. They graciously donated the space to us for this event, and kept us blessedly fed, hydrated and caffeinated throughout the day.

Each day we enjoyed a lovely walk from and to the hotel many of us stayed at. We lucked out and there wasn’t any rain while we were there so we got to take in the best of late summer weather in Germany. Our walk took us through a corn field, past flowers, gave us a nice glimpse at the town of Walldorf on the other side of the highway and then began in on the approach to the SAP buildings of which there are many.

The first day began with an opening from our host at the SAP offices, Marc Koderer, and by the QA project lead Ken’ichi Ohmichi. From there we went through the etherpad for the event to figure out where to begin. A big chunk of the Infrastructure team went to their own room to chat about Zuulv3 and some of the work on Ansible, and a couple of us hung back with the QA team to move some of their work along.

Spending time with the QA folks I learned about future plans for a more useful series of bugday graphs. I also worked with Spencer Krum and Matt Treinish to land a few patches related to the new Firehose service. Firehose is a MQTT-based unified message bus that seeks to encompass all the developer-facing infra alerts and updates in a single stream. This includes job results from Gerrit, updates on bugs from Launchpad, specific logs that are processed by logstash and more. At the beginning of the sprint only Gerrit was feeding into it using germqtt, but by the end of Monday we had Launchpad bugs submitting events over email via lpmqtt. The work was mostly centered around setting up Cyrus with Exim and then configuring the accounts and MX records, and trying to do this all in a way that the rest of the team would be happy with. All seems to have worked out, and at the end of the day Matt sent out an email announcing it: Announcing firehose.openstack.org.

That evening we gathered in the little town of Walldorf to have a couple beers, dinner, and relax in a lovely beer garden for a few hours as the sun went down. It was really nice to catch up with some of my colleagues that I have less day to day contact with. I especially enjoyed catching up with Yolanda and Gema, both of whom I’ve known for years through their past work at Canonical on Ubuntu. The three of us also were walk buddies back to the hotel, before which I demanded a quick photo together.

Tuesday morning we started off by inviting Khai Do over to give a quick demo of the Gerrit verify plugin. Now, Khai is one of us, so what do I mean by “come over”? Of all the places and times in the world, Khai was also at the SAP offices in Walldorf, Germany, but he was there for a Gerrit Hackathon. He brought along another Gerrit contributor and showed us how the verify plugin would replace our somewhat hacked into place Javascript that we currently have on our review pages to give a quick view into the test results. It also offers the ability in the web UI to run rechecks on tests, and will provide a page including history of all results through all the patchsets and queues. They’ve done a great job on it, and I was thrilled to see upstream Gerrit working with us to solve some of our problems.


Khai demos the Gerrit verify plugin

After Khai’s little presentation, I plugged my laptop into the projector and brought up the etherpad so we could spend a few minutes going over work that was done on Monday. A Zuulv3 etherpad had been worked on to capture a lot of the work from the Infrastructure team on Monday. Updates were added to our main etherpad about things other people worked on and reviews that were now pending to complete the work.

Groups then split off again, this time I followed most of the rest of the Infrastructure team into a room where we worked on infra-cloud, our infra-spun, fully open source OpenStack deployment that we started running a chunk of our CI tests on a few weeks ago. The key folks working on it gave a quick introduction and then we dove right into debugging some performance problems that were causing failed initial launches. This took us through poking at the Glance image service, rules in Neutron and defaults in the Puppet modules. A fair amount of multi-player (using screen) debugging was done up on the projector as we shifted around options, took the cloud out of the pool of servers for some time, and spent some time debugging individual compute nodes and instances as we watched what they did when they came up for the first time. In addition to our “vanilla” region, Ricardo Carrillo Cruz also made progress that day on getting our “chocolate” region working (next up: strawberry!).

I also was able to take some time on Tuesday to finally get notice and alert notifications going to our new @openstackinfra Twitter account. Monty Taylor had added support for this months ago, but I had just set up the account and written the patches to land it a few days before. We ran into one snafu, but a quick patch (thanks Andreas Jaeger!) got us on our way to automatically sending out our first Tweet. This will be fun, and I can stop being the unofficial Twitter status bot.

That evening we all piled into cars to head over to the nearby city of Heidelberg for dinner and drinks at Zum Weissen Schwanen (The White Swan). This ended up being our big team dinner. Lots of beers, great conversation and catching up on some discussions we didn’t have during the day. I had a really nice time and during our walk back to the car I got to see Heidelberg Castle light up at night as it looms over the city.

Wednesday kicked off once again at 9AM. For me this day was a lot of talking and chasing down loose ends while I had key people in the room. I also worked on some more Firehose stuff, this time working our way down the path to get logstash also sending data to Firehose. In the midst of which, we embarrassingly brought down our cluster due to a failure to quote strings in the config file, but we did get it back online and then more progress was made after everyone got home on Friday. Still, it was good to get part of the way there during the sprint, and we all learned about the amount of logging (in this case, not much!) our tooling for all this MQTT stuff was providing for us to debug. Never hurts to get a bit more familiar with logstash either.

The final evening was spent once again in Walldorf, this time at the restaurant just across the road from the one we went to on Monday. We weren’t there long enough to grow tired of the limited selection, so we all had a lovely time. My early morning to catch a train meant I stuck to a single beer and left shortly after 8PM with a colleague, but that was plenty late for me.


Photo courtesy of Chris Hoge (source)

Huge thanks to Marc and SAP for hosting us. The spaces worked out really well for everything we needed to get done. I also have to say I really enjoyed my time. I work with some amazing people, and come Thursday morning all I could think was “What a great week! But I better get home so I can get back to work.” Hey! This all was work! Also thanks to Jeremy Stanley, our fearless Infrastructure Project Team Leader who sat this sprint out and kept things going on the home front while we were all focused on the sprint.

A few more photos from our sprint here: https://www.flickr.com/photos/pleia2/albums/72157674174936355

by pleia2 at September 24, 2016 03:30 PM

OpenStack Blog

OpenStack Developer Mailing List Digest September 17-23

Announcing firehose.openstack.org

  • An MQTT-based unified message bus for infra services.
  • This allows a single place to go for consuming messages of events from infra services.
  • Two interfaces for subscribing to topics (see the sketch after this list):
    • MQTT protocol on the default port
    • Websockets over port 80
  • Launchpad and gerrit events are the only things currently sending messages to firehose, but the plan is to expand this.
  • An example [1] of gerritbot on the consuming side, which has support for subscribing to gerrit event stream over MQTT.
  • A spec giving details on firehose [2].
  • Docs on firehose [3].
  • Full thread
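As a rough sketch of consuming the MQTT interface mentioned above, a minimal subscriber using the third-party paho-mqtt library might look like this (assuming anonymous read access on the default MQTT port; the topic wildcard is illustrative):

    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        # Print every event published to the firehose.
        print(msg.topic, msg.payload)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect('firehose.openstack.org', 1883)  # 1883 is the default MQTT port
    client.subscribe('#')  # '#' subscribes to all topics
    client.loop_forever()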

Release countdown for week R-1, 26-30

  • Focus: All teams should be working on release-critical bugs before the final release.
  • General
    • 29th September is the deadline for new release candidates or releases from intermediary projects.
    • Quiet period to follow before the last release candidates on 6th October.
  • Release actions:
    • Projects not following the milestone-based release model who want a stable/newton branch created should talk to the release team.
    • Watch for translation patches and merge them quickly to ensure we have as many user-facing strings translated as possible in the release candidates.
      • If your project has already been branched, make sure those patches are applied to the stable branch.
    • Liaisons for projects with independent deliverables should import the release history by preparing patches to openstack/releases.
  • Important Dates:
    • Newton last RC, 29 September
    • Newton final release, 6 October
    • Newton release schedule [4]
  • Full thread

Removal of Security and OpenStackSalt Project Teams From the Big Tent

  • The Security and OpenStackSalt projects are without PTLs. Leaderless projects default to the Technical Committee to decide what to do with them [5]. A majority of the Technical Committee has agreed to have these projects removed.
  • OpenStackSalt is a relatively new addition to the Big Tent, so if they got their act together, they could be reproposed.
  • We still need to care about security, and we still need a home for the vulnerability management team (VMT). The suggested way forward is to have the VMT apply to be its own official project team, and to have security be a working group.
  • The Mitaka PTL for Security mentions missing the election date, but lists some things the team has been working on:
    • Issuing Security Notes for Glance, Nova, Horizon, Bandit, Neutron and Barbican.
    • Updating the security guide (the book we wrote on securing OpenStack)
    • Hosting a midcycle and inducting new members
    • Supporting the VMT with several embargoed and complex vulnerabilities
    • Building up a security blog
    • Making OpenStack the biggest open source project to ever receive the Core Infrastructure Initiative Best Practices Badge
    • Working on the OpenStack Security Whitepaper
    • Developing CI security tooling such as Bandit
  • One of the Technical Committee members privately received information that explains why the Security PTL was not on top of things. With ~60 teams around there will always be one or two that miss, but here we’re not sure it passes the bar of “non-alignment with the community” that would make the security team unfit to be an official OpenStack team.
  • Full thread

[1] – http://git.openstack.org/cgit/openstack-infra/gerritbot/commit/?id=7c6e57983d499b16b3fabb864cf3b

[2] – http://specs.openstack.org/openstack-infra/infra-specs/specs/firehose.html

[3] – http://docs.openstack.org/infra/system-config/firehose.html

[4] – http://releases.openstack.org/newton/schedule.html

[5] – http://docs.openstack.org/project-team-guide/open-community.html#technical-committee-and-ptl-elections

by Mike Perez at September 24, 2016 02:03 AM

September 23, 2016

OpenStack Superuser

It’s your chance to rate the Superuser Awards nominees

The OpenStack Summit kicks off in less than five weeks and 12 deserving organizations have been nominated to be recognized during the opening keynotes. These organizations are competing within four categories—enterprise, telecom, research / government and public cloud service providers—to win the Superuser Award that will be presented by AT&T, the most recent winner from the OpenStack Summit Austin last April.

For this cycle, the community (that means you!) will review the candidates before the Superuser editorial advisors select the finalists and ultimate winner. There will be one finalist per category to be recognized during the OpenStack Summit Barcelona keynotes and the winner will be recognized on stage.

Check out the nominations below and click through to see each organization’s full nomination. Then, rate the nominees with this survey to select the nominees that you think should be recognized at the OpenStack Summit Barcelona. You have until Friday, September 30 at 11:59 p.m. PT to rate them.

Enterprise

  • MercadoLibre
    • Mercadolibre has recently implemented its own hybrid cloud, multivendor PaaS based on Docker containers within VMs, and the platform is 100 percent in production, with over 300 active applications, self-healing and auto-scaling over OpenStack and two public clouds.

Public cloud service providers

  • City Network
    • City Network focuses on helping regulated companies, mainly in the financial and healthcare industries, with their digital transformation and cloud adoption. By building completely separated cloud services compliant with regulations such as ISO 9001, 27001, 27015, 27018, Basel, Solvency and HIPAA, City Network allows these industries to be truly agile.
  • Cloudwatt
    • The Cloudwatt team has contributed code upstream in nearly all the Openstack core modules, especially Nova, Horizon and Tempest. Cloudwatt infrastructures are deployed on two data centers in France with more than 400 hypervisors, 10,000 VMs and 400 TiB of object storage.
  • DataCentred
    • DataCentred’s public cloud has around 9,000 virtual cores and 8TB of RAM and its team typically operates at 75 percent capacity as the platform scales. It has a global user base with customers distributed among Israel, New Zealand, the U.S. and Asia, and some of its largest customers are U.K. government departments including HMRC, the tax office.
  • Internap
    • Internap’s OpenStack based public cloud and bare metal offering is globally available in seven data centers worldwide. The combined footprint is 17,760 vCPU of compute, 1,756 of bare metal servers and over 1PB of Swift storage. Its infrastructure-as-a-service (IaaS) based on OpenStack currently predominantly powers customer production workloads in gaming, ad tech and web hosting.
  • OVH
    • OVH runs large OpenStack infrastructures: Its main platform, launched in 2015, manages thousands of compute nodes distributed within six different regions. It currently has more than 100,000 instances running in production, and 530,000 instances are spawned per month.
  • T-Systems International GmbH
    • The OpenTelekomCloud public cloud environment contains more than 2,000 servers (compute and storage nodes) in one region and two availability zones. In production since March 2016, the public cloud has hundreds of customers and recently the web presence of a public broadcasting station was using the OpenTelekomCloud during a state election in Germany.
  • UKCloud Ltd
    • UKCloud is operating across two sites, each with two isolated OpenStack deployments to meet its customers’ varying levels of security – in essence, this gives UKCloud four separate production grade OpenStack deployments, each of which is currently built to handle around 1,500 instances.
  • VEXXHOST
    • VEXXHOST has thousands of customers all over the world who host tens of thousands of instances across VEXXHOST’s OpenStack infrastructure. In a single day, its customers run hundreds of deployments, including big data customers that leverage OpenStack as well as CI/CD customers.

Research / Government

  • The INDIGO-DataCloud Consortium
    • The INDIGO-DataCloud Consortium has made efforts to support TOSCA in both OpenStack and other cloud management frameworks by contributing code for the TOSCA parser and Heat translator projects, and it’s expected that the usage of this open standard will ease interoperability.
  • Universidade Federal de Campina Grande (UFCG)
    • UFCG’s team has contributed 109,000 lines of code and 4,300 reviews to OpenStack over the last three years. Its members have also contributed to large blueprints, such as hierarchical multi-tenancy and cross-project Keystone v3 adoption.

Telecom

  • China Mobile
    • Based on OpenStack, China Mobile’s application release cycle has been shortened from half a year to a month, and its big data prediction platform has increased the success rate from around 3 percent before to 15-20 percent now.

You can rate the candidates with this survey until Friday, September 30 at 11:59 p.m.  AT&T won the fourth edition of the Superuser Awards at the Austin Summit, joining CERN, Comcast and NTT Group who were the winners at the Paris, Vancouver and Tokyo OpenStack Summits respectively.

The post It’s your chance to rate the Superuser Awards nominees appeared first on OpenStack Superuser.

by Allison Price at September 23, 2016 04:04 PM

Boden Russell

OpenStack Neutron VMware NSX REST API Extension Reference Now Available

Foreword

The OpenStack neutron team has done a fantastic job consolidating the neutron REST API reference source into the neutron-lib tree (note that this is an ongoing effort). Once built, the resulting documentation is published to the docs site and is what you see when viewing the neutron api-ref. While stadium projects can contribute their api-ref to the neutron-lib tree, non-stadium projects (such as the numerous neutron plugins including the VMware NSX plugin) must publish/maintain their own API reference documentation.


VMware NSX Neutron Plugin REST API Reference

We recently decided the most straightforward place to publish the OpenStack neutron VMware NSX plugin REST api-ref was right alongside the plugin source code. This document is in markdown format and can be found at vmware-nsx/api-ref/rest.md.

OpenStack neutron VMware NSX api-ref rendered in markdown

Moving forward, our goal is to keep the VMware NSX plugin api-ref in sync with the plugin source code so consumers can always find the api-ref for the release of the plugin they are using. Consumers using the VMware NSX neutron plugin for release REL can access the following URL to view its api-ref:

https://github.com/openstack/vmware-nsx/tree/stable/REL/api-ref/rest.md

However since we just committed this documentation, consumers will only be able to access the api-ref using the above URL starting with the Ocata release (for now it can be accessed from the master branch of the plugin source repository).
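For example, once the Ocata release is cut, the Ocata api-ref should be available (assuming the usual stable branch naming) at https://github.com/openstack/vmware-nsx/tree/stable/ocata/api-ref/rest.md.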

We look forward to any feedback on this api-ref so feel free to open a bug, or reach out to me directly.

by boden (noreply@blogger.com) at September 23, 2016 02:59 PM

Tesora Corp

Short Stack: Support is Differentiator in OpenStack Race, Highlights from Oracle OpenWorld, and OpenStack Growth by the Numbers

Welcome to the Short Stack, our regular feature where we search for the most intriguing OpenStack news. These links may come from traditional publications or company blogs, but if it’s about OpenStack, we’ll find the best ones.

Here are our latest links:

Innovate, Collaborate, Replicate success says OpenStack’s Jonathan Bryce | OpenStack Superuser

Rob LeFebvre interviewed OpenStack Foundation Executive Director Jonathan Bryce. Bryce revealed his simple goal for OpenStack: providing the open and accessible tools to give the world all the computing capacity it will need in the future. Bryce said he believes that for OpenStack to keep moving forward, the community needs to continue to innovate, collaborate, and duplicate success.

Support is Now the Differentiator in the OpenStack Race | Linux.com

Sam Dean discussed the current ‘tipping point’ of the OpenStack industry. Dean detailed how competition is leading to market consolidation, and argued that moving forward the key differentiator between major OpenStack players will be support. Dean concluded that, especially in the open source world, companies that provide quality support tend to succeed.

Larry Ellison Lays It On the Line: 5 Highlights From Oracle OpenWorld | Forbes

Rob Preston covered Larry Ellison’s keynotes from this week’s Oracle OpenWorld conference in San Francisco. Ellison’s talks asserted several major points: IT as a utility service is now a reality, security continues to be job number one, AWS is more closed than the IBM mainframe, and Oracle can bring their cloud to your premises.

OpenStack Growth – By the Numbers and In Real Life | Tesora

Tesora’s Frank Days examined projections for OpenStack growth by 451 Research, with estimated OpenStack-related revenue at about $1.7 billion this year alone. Days went on to apply his thoughts on OpenStack growth to a real-life example, giving his observations from the OpenStack Days East event this past August. He cited the enterprises that presented at the event and the number of servers they support running on OpenStack.

Common OpenStack Deployments released | PrincessLeia.com

Elizabeth Krumbach Joseph talked about the release of her second book, Common OpenStack Deployments. Joseph received help from her contributing author, Matt Fischer, and said his Puppet and OpenStack expertise proved very valuable to the book’s publication. Joseph decided to publish this book with the goals of clarifying what exactly OpenStack is, and to detail how to build clouds valuable to a business.

The post Short Stack: Support is Differentiator in OpenStack Race, Highlights from Oracle OpenWorld, and OpenStack Growth by the Numbers appeared first on Tesora.

by Alex Campanelli at September 23, 2016 02:37 PM

IBM OpenTech Team

Huge lineup of IBM speakers at the Barcelona OpenStack Summit

IBM is proud to be a Headline Sponsor for the Barcelona OpenStack Summit. Expectations are high for the upcoming summit as speaker session acceptance rates were extremely competitive for Barcelona. With that in mind, I am extremely pleased to inform you that IBM has a large number of presentations and speakers presenting at this prestigious conference. We have over 50 speakers presenting in 32 different sessions. For this summit, IBM continues its strong focus on OpenStack initiatives that are critical for successful production deployment of OpenStack at scale.

On Tuesday, don’t miss Jesse Proudman and Animesh Singh talking about their lessons learned operating hybrid Cloud Foundry deployments on top of OpenStack and how they used underlying technologies like BOSH, Razon, Ansible, Rally to seamlessly operate them. Also on Tuesday, we have some excellent sessions on container technology and how it intersects with OpenStack projects. The team will cover topics such as Magnum, Kuryr, OCI and CNCF and demonstrate live migration of container workloads between OpenStack Nova hosts. Another can’t miss session is Brian Cline’s Swift at Scale session that highlights IBM SoftLayer’s journey of running Swift in over 22 datacenters and the launching of several multi-facility Swift Clusters.

In our Wednesday sessions, big themes include security and scalability with a heavy focus on networking and containers. Neutron and OVN legend Kyle Mestery and team will be speaking about how to scale an OVN-powered OpenStack. Ton Ngo and Winnie Tsang will discuss how they scale Kubernetes, Swarm, and Mesos container clusters managed by OpenStack Magnum. Also look for a talk on securing container platforms by Salman Baset and Phil Estes and a great hands-on workshop by Fernando Diaz and Elvin Tubillara on securing your cloud.

On Thursday, look for some outstanding sessions on OpenStack Interoperability. IBMers have been leading a large cross-community effort called the OpenStack Interoperability Challenge that has been focused on driving improved OpenStack workload portability by demonstrating best practices for automated deployment of enterprise application workloads that work across a variety of OpenStack clouds. After your fill of OpenStack interoperability, be sure to catch more sessions on OpenStack networking solutions, federated cloud access, OpenStackClient, and cluster management of thousands of VMs using OpenStack Senlin.

The daily summaries presented above are only a sampling of the great IBM speaker sessions you will find at OpenStack Barcelona. A complete list of all IBM speaker sessions is provided below. In addition, IBM is also hosting an IBM Client Day on Wednesday, October 26th starting at 11:15am in Room 116 of the CCIB. This is a full-day event featuring Thought Leaders, Executives, and Community Leaders who will share their market perspective, knowledge of OpenStack, and insights on how to design, develop, and maintain applications built on Open Technologies. We also will have book signings and free book giveaways for the extremely popular OpenStack Keystone book. Stop by the IBM Booth in the Marketplace at lunch time on Tuesday, October 25th to get your copy.  We look forward to seeing you at all these events!

Tuesday, October 25th

Wednesday, October 26th  

Thursday, October 27th  

The post Huge lineup of IBM speakers at the Barcelona OpenStack Summit appeared first on IBM OpenTech.

by Brad Topol at September 23, 2016 01:47 PM

September 22, 2016

Mirantis

Let’s meet in Barcelona at the OpenStack Summit!

As we count down the days to the OpenStack Summit in Barcelona on October 24-28, we’re getting ready to share memorable experiences, knowledge, and fun!

Come to booth C27 to see what we’ve built with OpenStack, and join in an “Easter Egg Hunt” that will test your observational skills and knowledge of OpenStack, Containers, and Mirantis swag from prior summits. If you find enough Easter eggs, you’ll be entered into our prize drawing for a $300 Visa gift card or an OpenStack certification exam from our OpenStack Training team (a $600 value). And as always, we’re giving away more of the awesome swag you’ve come to expect from us.

If you’d like to set up some time at the summit to talk with our team, simply contact us and we’ll schedule a meeting.

Free Training

Mirantis is also providing two FREE training courses based on our standard industry-leading curriculum. If you’re interested in attending, please follow the links below to register:

Mirantis Presentations

Here’s where you can find us during the summit:

TUESDAY OCTOBER 25

Tuesday, 12:15pm-12:55pm
Level: Intermediate
Chasing 1000 nodes scale (Dina Belova and Alex Shaposhnikov, Mirantis; Inria)
Tuesday, 12:15pm-12:55pm
Level: Intermediate
OpenStack: you can take it to the bank! (Ivan Krovyakov, Mirantis; Sberbank)
Tuesday, 3:05pm-3:45pm
Level: Intermediate
Live From Oslo (Oleksii Zamiatin, Mirantis; EasyStack, Red Hat, HP)
Tuesday, 3:55pm-4:35pm
Level: Intermediate
Is your cloud forecast a bit foggy? (Oleksii Zamiatin, Mirantis; EasyStack, Red Hat, HP)
Tuesday, 5:05pm-5:45pm
Level: Intermediate

WEDNESDAY OCTOBER 26

Wednesday, 11:25am-12:05pm
Level: Intermediate
Wednesday, 11:25am-12:05pm
Level: Advanced
Wednesday, 12:15pm-12:55pm
Level: Beginner
The Good, Bad and Ugly: OpenStack Consumption Models (Amar Kapadia, Mirantis; IDC, EMC, Canonical)
Wednesday, 12:15pm-12:55pm
Level: Intermediate
OpenStack Journey in Tieto Elastic Cloud (Jakub Pavlík, Mirantis TCP Cloud; Tieto)
Wednesday, 2:15pm-3:45pm
Level: Intermediate
User Committee Session (Hana Sulcova, Mirantis TCP Cloud; Comcast, Workday, MIT)
Wednesday, 3:55pm-4:35pm
Level: Beginner
Lessons from the Community: What I’ve Learned As An OpenStack Day Organizer (Hana Sulcova, Mirantis TCP Cloud; Tesora, GigaSpaces, CloudDon, Intel, Huawei)
Wednesday, 3:05pm-3:45pm
Level: Beginner
Glare – a unified binary repository for OpenStack (Mike Fedosin and Kairat Kushaev, Mirantis)
Wednesday, 3:55pm-4:30pm
Level: Intermediate
Wednesday, 3:55pm-4:35pm
Level: Intermediate
Is OpenStack Neutron production ready for large scale deployments? (Oleg Bondarev, Satish Salagame and Elena Ezhova, Mirantis)
Wednesday, 5:05pm-5:45pm
Level: Beginner

THURSDAY OCTOBER 27

Thursday, 9:00am-9:40am
Level: Intermediate
Sleep Better at Night: OpenStack Cloud Auto­-Healing (Mykyta Gubenko and Alexander Sakhnov, Mirantis)
Thursday, 11:00am-11:40am
Level: Advanced
OpenStack on Kubernetes – Lessons learned (Sergey Lukjanov, Mirantis; Intel, CoreOS)
Thursday, 11:00am-11:40am
Level: Intermediate
Thursday, 11:50am-12:30pm
Level: Intermediate
Kubernetes SDN Performance and Architecture Evaluation at Scale (Jakub Pavlík and Marek Celoud, Mirantis TCP Cloud)
Thursday, 3:30pm-4:10pm
Level: Advanced
Ironic Grenade: Blowing up our upgrades. (Vasyl Saienko, Mirantis; Intel)
Thursday, 3:30pm-4:10pm
Level: Beginner
Thursday, 5:30pm-6:10pm
Level: Beginner
What’s new in OpenStack File Share Services (Manila) (Gregory Elkinbard, Mirantis; NetApp)

The post Let’s meet in Barcelona at the OpenStack Summit! appeared first on Mirantis | The Pure Play OpenStack Company.

by Dave Van Everen at September 22, 2016 04:08 PM

Gorka Eguileor

Cinder’s Ceph Replication Sneak peek

Have you been dying to try out the Volume Replication functionality in OpenStack but you didn’t have enterprise-level storage with replication features lying around for you to play with? Then you are in luck, because thanks to Ceph’s new RBD mirroring functionality and Jon Bernard’s work on Cinder, you can now have the […]

by geguileo at September 22, 2016 12:58 PM

OpenStack Superuser

What DevOps can (and can’t) do for you

All companies are software companies, says Luke Kanies, founder and CEO of Puppet, an open-source configuration management company.  As such, they have to optimize their practices to deliver high quality software to their end users quickly and reliably.

According to five years of survey data conducted by Puppet, the set of tools and practices known as DevOps has been shown to improve the frequency of software deployment by a factor of 200, resulting in 2,500 times shorter lead times between the time of idea and production, or problem discovery and customer benefit from fixing that problem.

Many companies, however, believe that the costs of changing their current technology infrastructure to one that uses DevOps are too high. As a result, many of these companies will fail to become great software companies.

On stage recently at OpenStack Days: Silicon Valley, Kanies explained to attendees why adopting DevOps tools and practices within their organizations is paramount to their current and future success.

“If you’re able to move past the barriers that exist in most organizations today, you’ll be able to keep the great people on your team happy, spend more time creating value instead of responding to outages and problems, and deliver better software faster,” says Kanies.

There are six big myths about using DevOps that Kanies wants to dispel.

MYTH: There’s no direct customer or business value for adopting DevOps practices.

Reality: The problem is more that companies don’t know how to really understand their customers and then create software that actually addresses those issues. DevOps, says Kanies, delivers reliable products, delivers software faster, seeks to optimize processes, and introduces real measurability.

MYTH: There’s no significant return on investment in applying DevOps principles to legacy applications.

Reality: The reality is, says Kanies, that 98 percent of the world runs in legacy environments. Using DevOps is not an all or nothing proposition and is often simpler than it appears. The largest returns often come from unexpected areas when teams start utilizing DevOps practices in legacy environments, while ignoring these older systems may undermine other efforts within the company.

Teams need to start to work with DevOps across the entire infrastructure, but it doesn’t need to be all at once.

“There’s a ton of value in using DevOps across your entire organization,” says Kanies, “even if you’re not going to go rebuild and completely automate your entire setup.”

MYTH: DevOps only works with ‘unicorn’ companies and not traditional enterprise businesses.

Reality: DevOps is the new normal, says Kanies, and it benefits with reduced time to market, lower mean time to recovery (MTTR), and high levels of employee engagement. It works as well for traditional, mature organizations as it does for newer startups.

MYTH: Improvement via DevOps principles requires spare time and people that we simply don’t have

Reality: Many companies waste time doing things manually that software can do more reliably and faster. DevOps, says Kanies, is more often about reclaiming the time focused on current inefficiencies and using it for more sustained, long-term efficiency strategies.

This leaves more time  for value-adding activities and can actually free up capacity within organizations.

MYTH: Regulatory and compliance requirements preclude the adoption of DevOps principles.

Reality: It’s not against the rules to automate compliance and regulatory activities, says Kanies. In fact, adding in audit and compliance systems to DevOps practices makes the processes easier to audit, easier to understand, and easier to secure.

“If you have an automation platform that doesn’t do all these at once,” says Kanies, “you’re most likely going to fail in some relevant and miserable way.”

MYTH: We don’t have any problems that adopting DevOps principles and practices would fix.

Reality: In fact, DevOps principles and practices allow teams to improve their efforts, move faster, and eliminate the most frustrating parts of work. This allows organizations to consistently deliver a better software experience, and hence a better product, to their customers.

“You and your teams and all the companies around us have a choice: whether you want to aggressively move into this world,” says Kanies, “or let your competitors do it first.”

The data makes clear the benefits of adopting DevOps. The survey says that companies that adopt DevOps are pulling away faster than ever.

You can download the DevOps report here or catch his 20-minute talk here: https://www.youtube.com/embed/aBCplH6BX0s

Cover Photo // CC BY NC

The post What DevOps can (and can’t) do for you appeared first on OpenStack Superuser.

by Rob LeFebvre at September 22, 2016 11:02 AM

Openstack Security Project

Maturing the Security Project

This blog article is intended to address the recent discussions on the openstack-dev mailing list, following the suggestion by Thierry on behalf of the TC that the OpenStack Security Project “should be removed from the Big Tent” because the security team failed to nominate and elect a project team lead (PTL) for the next release cycle. This process is required for all active project teams, and the TC sees the OSSP missing this deadline, again, as a failure in community engagement.

Back in the early days of the Security Project being in the big tent I missed the election deadline for my nomination. Pure oversight on my part; I was new to the role of PTL, having been grandfathered in from the working group, and I simply didn’t realise what was required for elections. Missing a nomination once is bad, so missing the most recent nomination window is obviously very bad and raises questions over the level of engagement we have in the community, particularly as everyone in the OSSP also missed the email sent to highlight the closing nomination window (it’s the one on the 19th)…

PTL election reminder

Unfortunately, during the nomination window I was temporarily distracted dealing with some local issues. I’ve discussed these with a member of the TC, who recognises that it was a temporary thing that’s unlikely to happen in the future; however, the bell has been rung and we must decide how to proceed.

Maturing the Security Project

Missing two nominations reflects badly on a project team and leads to several understandable questions being asked: Who are these people? Are they an active team? Should they be moved outside of the big tent?

These are understandable questions, and I feel that my on-thread response addressed them for the most part. What I want to focus on here are the things that we need to do to be a better part of the community and to ensure that project teams and the TC are both aware of what we do and how we help improve security in OpenStack.

We know from the feedback we’ve had from downstream OpenStack consumers that our work is valued; we need to better demonstrate that value within the OpenStack community. I think a good place to start is to look at the Project Team Guide, examine what we are already doing, and see where we fall short. Of course this doesn’t include the good things we do, like providing CI tooling for security, threat analysis etc., but it covers the minimum boxes that we should be ticking as a project team and that I should be driving as PTL.

I want to be clear, I think that the Security Project is doing great things to enhance security in OpenStack. We need to become a better community player though, through doing so I expect new opportunities to innovate on security and create new ways to make OpenStack more secure.

Score Card

I’m proposing a score card for the security project, to ensure we’re doing all that we should be doing and to identify those areas where we need to improve. I’ve based this on the Project Team Guide.

Requirement | Status | Notes
Open Code | Achieved | All code in git and licensed appropriately
Open Design | Achieved | All design is open to the public, conducted at summits etc.
Open Development | Achieved | We follow standard OpenStack best practice
Open Community | Needs Improvement | We have a gap around the mailing lists that we need to address
Public Meetings on IRC | Achieved | 1700 UTC Thursdays, #openstack-meeting-alt
Project IRC channel | Achieved | #openstack-security
Community Support Channels | Mostly Achieved | We are strong on Launchpad and IRC, which is where 90% of our workload comes from; however, we need to pay more attention to the ML and ask.openstack.org
Planet OpenStack | Achieved | This security blog posts to Planet OpenStack
Participate in Design Summits | Achieved | Regular, very well attended sessions
Release Management | Achieved | We have a number of software projects that we created to support or enhance security in OpenStack. As they are not directly consumed by OpenStack operators they have not been part of the normal release cycle; instead we follow the independent release model.
Support Phases | Needs Improvement | Traditionally we have not followed the normal support phases for our projects because they have not been directly consumed by downstream OpenStack users. However, there is a clear opportunity to get more in line with the rest of the OpenStack community here. This should make things like rolling Bandit changes out through CI easier.
Testing | Achieved++ | All of our software and documentation efforts have appropriate gate tests in place. Functional and unit tests are in place where appropriate. We have also built tooling that other teams are using for security gate tests in their projects. We are not just testing; we are also testing our integration with the projects that have adopted us.
Vulnerability Management | Achieved | Our software projects don’t have the vulnerability managed tag; however, as the OSSP we do triage any security issues in our own software following standard processes. This is best demonstrated by the recent XSS issue in Bandit: https://bugs.launchpad.net/bandit/+bug/1612988
Documentation | Achieved | We have a lot of documentation out there for customers and consumers of OpenStack: OSSNs, security.openstack.org, the security guide, as well as developer documentation such as that for Anchor and Bandit

The Four Opens

To paraphrase from the OpenStack documentation, it’s important that any project participating in the big tent adopt and practice the “four opens”: Open Source, Open Design, Open Development and Open Community.

For the most part we have done a good job of following these: all of our code is developed under the appropriate Apache licenses, and all of our documentation efforts, like the security guide, security notes, threat analysis etc., are conducted openly and use the same peer-review tools as our code projects. We develop new ideas in the open, attend design summits and encourage new contributions.

Where we have not done such a good job is with the Open Community goal. Of course our team is open to new ideas and new contributions but we have not been as big of a participant in the larger community as we could have been. Our work with the VMT typically means that teams are driven toward us when they require our assistance.

I’d like to expand a little bit more on what Open Community means and where we can improve. OpenStack has some very good documentation on this topic but again I’ll paraphrase here.

Public Meetings on IRC: This is something that the security project has always done. We can be found on #openstack-meeting-alt at 1700 UTC every Thursday. Our meetings are public and logged, we have a standing public agenda that any developer is welcome to contribute to if they want to participate in the meeting, and we also welcome people dropping by with questions, comments etc.

Mailing Lists: When the Security Project first formed we were a working group; we had a separate mailing list that didn’t get used for much, and for legacy reasons that I can’t remember (we’ve been doing security for OpenStack since Essex) it was a private list. As I said, it didn’t get used much in our day-to-day work, and I think that’s a bad practice that we carried across to our big tent operations.

Largely I think this disconnect from the mailing list has arisen because it was not our experience that we needed to use it. Most of our work has always come from teams reaching out directly to us, typically via IRC. I think it will always be the case that teams will be more active on one communication medium than another but I fully accept that to meet our obligations under the four opens we must find a way to work more effectively on the mailing lists.

Community Support Channels: We manage all of our bugs on Launchpad; that’s the primary way we interact with the VMT. Our IRC channel is reasonably active, but as we’ve described above we certainly need to do better on the mailing lists.

Impact of removing Security from the big-tent

Although I think it’s been addressed a number of times on the mailing-list thread I’d like to reiterate two themes from the responses regarding concerns of removing Security from the OpenStack big-tent.

Legitimacy: As can be gleaned from this blog, we haven’t done the best job of making the wider OpenStack community aware of what it is that we do; even some teams who are running Bandit in their gate might not realise that it’s a tool we created to make OpenStack more secure. However, even with teams that haven’t heard of us, we are able to quickly gain traction when they see that we are a ‘proper’ OpenStack project. The truth of the matter is that, in the way most people see OpenStack, you’re either in the tent or you’re largely an irrelevance. We know this because we started outside of the tent and found it much harder to engage with teams where we could see there were obvious security issues. Being outside of the big tent would make it very difficult for us to act as an authority for signing off that a project has taken reasonable security steps before applying for the vulnerability:managed tag, a relatively recent change.

Investment: Running any OpenStack project requires investment; very few projects succeed based only on people working on them in their spare time. For the most part, investment here means giving people time to contribute to Security as part of their working week, providing funding for spaces for meetings and mid-cycles, and covering the time and expenses of contributors travelling to design summits. It’s no secret from looking around OpenStack that some historically big contributors have been scaling back the number of people they send to summits, the number of active contributors they maintain, and so on. Having been in the position of lobbying various corporations for support in these areas, I cannot imagine a scenario where we could leave the big tent and continue to dedicate the time we have to the efforts currently in place.

Without the legitimacy we have from being part of the big-tent we will not get the investment required to deliver and enhance security within OpenStack.

Moving Forward

I think it’s clear by now that I want the Security Project to have the opportunity to stay within the big-tent. I’d like to continue on as PTL at least through a period of maturing the Security Project to ensure that our baseline operations are aligned with what the wider community expects of any big-tent project.

I want the opportunity to improve the score card above and have us achieving everything on that list. I see no reason why we can’t begin acting on these things now, and our status can easily be judged on this basis during the next election cycle.

September 22, 2016 12:00 AM

September 21, 2016

Dougal Matthews

Debugging Mistral in TripleO

During the OpenStack Newton cycle the TripleO project started to use Mistral, the OpenStack Workflow service. This has allowed us to provide an interface to TripleO that is used by the CLI and the new GUI (and potentially other consumers in the future).

We naturally had to debug a few problems along the way. This post goes through the steps to track down such an issue.

Mistral Primer

In TripleO, we call Mistral in two different ways: either by starting Workflows or by directly calling Actions. Workflows are started by creating Executions; an Execution represents the running workflow. Running actions are represented by Action Executions. Since Workflows are typically made up of a number of action calls (or sub-workflow calls), starting a Workflow will start one or more Executions and one or more Action Executions.

Unfortunately it isn't always clear if you are calling a workflow or an action directly. So, first things first: what is happening in Mistral? Let's list the workflow executions and the action executions.

A small gotcha: unfortunately, when we call Mistral actions directly they won't be stored in the Mistral database and won't show up in mistral action-execution-list. This is because by default they run synchronously and the result isn't saved. We can tell Mistral to save the result, but then the action runs asynchronously, which defeats the purpose of calling actions directly. I am working to change this behaviour. Direct action calls can be seen in the Mistral log.
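For reference, a direct action call that saves its result looks something like this (std.noop is a built-in Mistral action, used here purely for illustration; --save-result is the python-mistralclient flag that persists the outcome, at the cost of running asynchronously as noted above):

$ mistral run-action std.noop '{}' --save-result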

$ openstack workflow execution list
$ openstack action execution list
# or
$ mistral execution-list
$ mistral action-execution-list

These commands can be generally useful if you are waiting for something to finish and want to look for signs of progress.

The most important columns to pay attention to are "Workflow name" and "State" in both, then the "State info" in the execution list and the "Task name" in the action execution list.

Finding the error

Okay, so something has gone wrong. Maybe you have an error that mentions Mistral or you noticed an error state in either the execution list or action execution list.

First check the executions. If you have a Workflow in an error state, you often want to look at the action executions, unless there is an error in the workflow itself. The output here should give us enough information to tell whether the problem is in the workflow or in one of the actions.

mistral execution-list | grep "ERROR";
# Grab the execution ID from above.
mistral execution-get $EXECUTION_ID;
mistral execution-get-output $EXECUTION_ID;

Then check the actions. Often these are more useful to look at, but you first want to know which workflow execution you are debugging.

# Also look at the actions
mistral action-execution-list;
mistral action-execution-get-output $ACTION_ID;

NOTE: Sometimes an action execution is in the ERROR state, but that is expected. For example, in some workflows we check if a Swift container exists, and it is an "ERROR" if it doesn't; it just changes the Workflow logic.

Hopefully this gives you some idea what is going on, but you may need to look into the logs for the full traceback...

Logs

The Mistral log is very detailed and useful for in-depth debugging. To follow it and look for messages from the TripleO actions, or for ERRORs, I find this very useful:

tail -f /var/log/mistral/mistral-server.log | grep "ERROR\|tripleo_common";

Common-ish Problems

A couple of problems I've seen a few times and how they can be spotted.

  • "Error response from Zaqar. Code: 503. Title: Service temporarily unavailable. Description: Messages could not be enqueued. Please try again in a few seconds.."

Sometimes workflows will fail when sending messages to Zaqar; this is how the result of a workflow is reported. Unfortunately this is hard to debug. You can usually safely retry the full workflow, or retry the individual task.

mistral task-list;
# Find the ID for the failed task.
mistral task-rerun $ID;

Hopefully we can resolve this issue: https://bugs.launchpad.net/tripleo/+bug/1626103

  • Another problem? I shall add to this as they come up!

by Dougal Matthews at September 21, 2016 08:31 PM

Kenneth Hui

Assembling OpenStack as a Service with Rackspace and Red Hat

I am a huge comic book movie fan, loving the resurgence in popularity of superhero movies over the past decade. I particularly enjoy superhero team movies such as the Marvel Cinematic Universe’s “Avengers.” There is something spectacular about watching individual heroes, powerful in their own right, accomplish even more by working together towards a common goal.

On a less heroic scale, truly innovative things can be done when IT industry experts come together to solve user problems. Building and running private clouds is one of those problems.

So what happens when the #1 open source software vendor and the #1 operator of OpenStack clouds come together? You get Rackspace Private Cloud powered by Red Hat — the Red Hat OpenStack Platform with Rackspace Fanatical Support.

Rackspace Private Cloud powered by Red Hat, or RPC-R for short, is a prescriptive Rackspace and Red Hat solution that makes OpenStack private cloud easy to consume by delivering it as a managed service.

To read more about how Rackspace and Red Hat have collaborated on the best OpenStack private cloud solution, please click here to go to my article on the Rackspace blog site.



by kenhui at September 21, 2016 04:02 PM

OpenStack Superuser

Update on OpenStack Gerrit-StoryBoard integration

Collaboration is what keeps the lights on at OpenStack, whether that’s developers working together on a single project or people ironing out the wrinkles across the 58 big tent projects.

Superuser got this update on the integration of StoryBoard, the cross-project task tracking system, and Gerrit, the code review set-up, from Zara Zaimeche, software developer at Codethink.

Naturally, these updates are also the fruit of teamwork: Zaimeche gives special thanks to Khai Do of HPE for the bulk of the Gerrit integration work, Codethink’s Adam Coldrick for the boards/worklists implementation and the Infra team for guiding the process. “This is the big one! And I am as happy as a clam,” Zaimeche says.

Here’s what you can expect and how to get involved.

StoryBoard is now integrated with Gerrit

To use this incredible new power in StoryBoard, find the task ID to the left of the task you’re about to send a patch for.

Then put:

Task: $task_id

into the commit message. When the patch is sent, this will update the status of the relevant task in StoryBoard and post a comment linking to the Gerrit change. Stories also have unique IDs, found to the left of each story in the list of stories, so if you include:

Story: $story_id

you can easily browse from Gerrit to the related StoryBoard story. There is an example of the syntax here: https://review.openstack.org/#/c/355079/
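Putting both together, a complete commit message might look something like this (the subject line and IDs here are invented for illustration):

Add retry handling to the widget importer

Task: 4287
Story: 2000123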

If you’d like to try it out yourself but don’t have any pressing patches to send, you can make a story over at our test instance, https://storyboard-dev.openstack.org , and then send a nonsense patch to a project in review-dev (https://review-dev.openstack.org/), citing the relevant task and/or story.

Worklists and boards are more discoverable

Now logged-out users can easily find the lists of worklists and boards, and users can filter them by title, or by the tasks or stories they contain. You’ll find them on the main sidebar, just below the ‘dashboard’ option. A worklist lets you order a custom todo list (e.g. to convey priority), or provide a handy filter of tasks/stories (e.g. show all ‘todo’ tasks in story foo). A board allows you to create several lists side-by-side, so that you can track the movement of tasks. This means you can, say, create a board with ‘todo’, ‘review’, and ‘merged’ lanes, filtered by project, and the contents of these will update as people send patches to Gerrit. Here’s an example: https://storyboard.openstack.org/#!/board/14

More usable developer docs

Matthew Bodkin has updated our developer docs so that they can be used to launch a StoryBoard instance. They should be functional now (we aim for the stars). He’s also helped with multiple misc UI fixes recently, so thanks again.

What’s next

There is a TC Ocata goal to remove incubated Oslo code, which affects two StoryBoard projects (the API and the Python client).

I’ve made a story for it over here: https://storyboard.openstack.org/#!/story/2000707

If other affected projects are using StoryBoard, it makes sense to list tasks there so they’re easier to find. This is exactly the sort of cross-project work that StoryBoard is designed for, so let’s give it a workout!

I could do with some guidance or examples on removing and replacing the incubated Oslo code (especially for the Python client, which uses the old apiclient module). If people are interested in running scripts against StoryBoard and doing more specific browsing and filtering of results, our Python client is the answer, so I’m interested in a) tidying it up and b) finding out people’s workflows and how they would expect to interact with the Python client from the command line.
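For anyone who wants to experiment, basic use of the client looks roughly like this; this is a sketch based on the python-storyboardclient documentation, with the public instance mentioned above as the endpoint.

from storyboardclient.v1 import client

# Point the client at a StoryBoard API endpoint.
storyboard = client.Client(api_url="https://storyboard.openstack.org/api")

# List stories and print their titles.
for story in storyboard.stories.list():
    print(story.title)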

Get involved!

The StoryBoard meeting is at 15:00 UTC every Wednesday in #openstack-meeting. We are also always available in #storyboard, for chatter (and occasionally development). Happy task-tracking! 🙂

Cover Photo // CC BY NC

The post Update on OpenStack Gerrit-StoryBoard integration appeared first on OpenStack Superuser.

by Nicole Martinelli at September 21, 2016 11:02 AM

Aptira

Testing your OpenStack with Tempest – Part 2

Tempest is the official OpenStack test suite. It runs integration tests against an OpenStack cloud to validate the cloud’s health and find its (potential) problems. Tempest contains a list of test classes in three categories: API, scenario, and stress. API tests validate API functionality; scenario tests simulate complex multi-step operations; stress tests run several jobs in parallel to see if the service can sustain a high workload. Tempest uses its own client implementation rather than the existing Python clients so that it can send fake or invalid requests to check whether APIs are implemented correctly.

Setting up Tempest is a challenge, as the documentation is not very clear, and some problems may be caused not by the configuration of Tempest but by the setup of your cloud. Fortunately, there are several tools that help users take advantage of Tempest without going through too much pain. In this series, we show you how to run Tempest, standalone or with the help of another tool.

In Part 1 we described how to use the RefStack client to run Tempest tests. RefStack is a tool developed to assist DefCore capability testing. In Part 2, I will show you how to run Tempest tests with Rally.

Rally is a benchmarking tool that tests the scalability of an OpenStack cloud. Its main job is to run workloads in parallel in order to detect performance issues in a cloud, or to find the configuration model that achieves the best performance. It also provides a feature to call Tempest and run tests against a cloud to ‘verify’ its functionality. That is the feature we will explore in this blog post.

Rally’s documentation is very clear on installation and on setting up Tempest, so I will just list the key steps here. We assume you are running Rally against an All-in-One OpenStack that was installed by following Part 1’s instructions.

Firstly, we install Rally. The Rally team provides a script that automates the download and installation, which makes things a lot easier.

wget -q -O- https://raw.githubusercontent.com/openstack/rally/master/install_rally.sh | bash

Then we need to add a deployment to Rally. The easiest way is to do it via environment variables.
source openrc
rally deployment create --fromenv --name=mycloud

Next we can install Tempest in Rally.
rally verify install

The above command does a default installation, and Tempest will be put into a virtual environment. You can also specify a particular version of Tempest to install, and/or install Tempest into the system library path.
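For example, something along these lines (the option names follow the Rally verify documentation of this era; the Tempest version tag is illustrative):

rally verify install --version 12.0.0 --system-wide

Next we generate a Tempest configuration.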
rally verify genconfig

This command generates a default config file for Tempest. To view it, run the following command, which also prints the file’s path so you can tweak it according to Part 1.
rally verify showconfig

Then we can start running Tempest. If nothing is specified, it will run all tests.
rally verify start

If we just want to run the DefCore tests, as in Part 1, we can download the test list, e.g.
wget https://refstack.openstack.org/api/v1/guidelines/2016.01/tests?type=required -O defcore.txt

and update these variables in the auto-generated Tempest config file (as we did in Part 1):
[compute]
image_ref = [your cirros image uuid]
image_ref_alt = [your cirros-alt image uuid]
flavor_ref = [your flavor uuid]
flavor_ref_alt = [your flavor uuid]
fixed_network_name = [your fixed network name]
[validation]
image_ssh_password = cubswin:)

The good thing is that Rally allows you to use auto-generated accounts, so you don’t need to create any for the test.

Then run Rally with this list.
rally verify start --tests-file defcore.txt

Here is an example output.
...
======
Totals
======
Ran: 76 tests in 399.0000 sec.
- Passed: 75
- Skipped: 1
- Expected Fail: 0
- Unexpected Success: 0
- Failed: 0
Sum of execute time for each test: 593.3815 sec.
==============
Worker Balance
==============
- Worker 0 (14 tests) => 0:05:59.292310
- Worker 1 (23 tests) => 0:02:39.609214
- Worker 2 (22 tests) => 0:05:24.914187
- Worker 3 (17 tests) => 0:00:16.433136
2016-09-20 02:39:58.942 15238 INFO rally.verification.tempest.tempest [-] Verification ef55832c-d61c-4129-a052-2b7beacbc4d4 | Completed: Run verification.
2016-09-20 02:39:58.943 15238 INFO rally.verification.tempest.tempest [-] Verification ef55832c-d61c-4129-a052-2b7beacbc4d4 | Starting: Saving verification results.
2016-09-20 02:39:59.126 15238 INFO rally.verification.tempest.tempest [-] Verification ef55832c-d61c-4129-a052-2b7beacbc4d4 | Completed: Saving verification results.
Verification UUID: ef55832c-d61c-4129-a052-2b7beacbc4d4

Rally uses multiple threads to run tests, so it is quite fast.
You can then view the result in a table or in HTML format.
# rally verify show
Total results of verification:
+--------------------------------------+--------------------------------------+----------+-------+----------+----------------------------+----------+
| UUID | Deployment UUID | Set name | Tests | Failures | Created at | Status |
+--------------------------------------+--------------------------------------+----------+-------+----------+----------------------------+----------+
| ef55832c-d61c-4129-a052-2b7beacbc4d4 | 71fc6dc5-ca05-49d0-badc-0ef5b3f58225 | | 76 | 0 | 2016-09-20 02:33:16.006425 | finished |
+--------------------------------------+--------------------------------------+----------+-------+----------+----------------------------+----------+
Tests:
+-----------------------------------------------------------------------------------------------------------------------------------+---------+---------+
| name | time | status |
+-----------------------------------------------------------------------------------------------------------------------------------+---------+---------+
| tempest.api.compute.images.test_images.ImagesTestJSON.test_delete_saving_image | 10.859 | success |
| tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_delete_image | 7.689 | success |
| tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_image_specify_multibyte_character_image_name | 7.144 | success |
| tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.test_list_images_filter_by_changes_since | 0.106 | success |
| tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.test_list_images_filter_by_name | 0.097 | success |
| tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.test_list_images_filter_by_server_id | 0.110 | success |
| tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.test_list_images_filter_by_server_ref | 0.201 | success |
| tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.test_list_images_filter_by_status | 0.164 | success |
| tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.test_list_images_filter_by_type | 0.172 | success |
| tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.test_list_images_limit_results | 0.260 | success |
| tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.test_list_images_with_detail_filter_by_changes_since | 0.162 | success |
| tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.test_list_images_with_detail_filter_by_name | 0.127 | success |
| tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.test_list_images_with_detail_filter_by_server_ref | 0.289 | success |
| tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.test_list_images_with_detail_filter_by_status | 0.251 | success |
| tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.test_list_images_with_detail_filter_by_type | 0.180 | success |
| tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.test_list_images_with_detail_limit_results | 0.103 | success |
| tempest.api.compute.images.test_list_images.ListImagesTestJSON.test_get_image | 0.842 | success |
| tempest.api.compute.images.test_list_images.ListImagesTestJSON.test_list_images | 0.707 | success |
| tempest.api.compute.images.test_list_images.ListImagesTestJSON.test_list_images_with_detail | 0.147 | success |
| tempest.api.compute.servers.test_create_server.ServersTestJSON.test_host_name_is_same_as_server_name | 32.323 | success |
| tempest.api.compute.servers.test_create_server.ServersTestJSON.test_list_servers_with_detail | 0.298 | success |
| tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_created_server_vcpus | 0.821 | success |
| tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_host_name_is_same_as_server_name | 29.806 | success |
| tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_list_servers_with_detail | 0.258 | success |
| tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_verify_created_server_vcpus | 0.426 | success |
| tempest.api.compute.servers.test_instance_actions.InstanceActionsTestJSON.test_get_instance_action | 0.332 | success |
| tempest.api.compute.servers.test_instance_actions.InstanceActionsTestJSON.test_list_instance_actions | 3.442 | success |
| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_flavor | 0.441 | success |
| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_image | 0.341 | success |
| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_server_name | 0.432 | success |
| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_server_status | 0.359 | success |
| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_limit_results | 0.519 | success |
| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_flavor | 0.298 | success |
| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_image | 0.100 | success |
| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_limit | 0.136 | success |
| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_server_name | 0.165 | success |
| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_server_status | 0.108 | success |
| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filtered_by_ip | 0.496 | success |
| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filtered_by_ip_regex | 0.007 | skip |
| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filtered_by_name_wildcard | 0.254 | success |
| tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_limits | 0.082 | success |
| tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_lock_unlock_server | 37.387 | success |
| tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_rebuild_server | 80.857 | success |
| tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_confirm | 41.418 | success |
| tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_revert | 50.959 | success |
| tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_stop_start_server | 44.271 | success |
| tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_delete_server_metadata_item | 1.025 | success |
| tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_get_server_metadata_item | 0.910 | success |
| tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_list_server_metadata | 0.624 | success |
| tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_set_server_metadata | 0.832 | success |
| tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_set_server_metadata_item | 0.977 | success |
| tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_update_server_metadata | 0.678 | success |
| tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_server_with_admin_password | 2.509 | success |
| tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_specify_keypair | 7.713 | success |
| tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_with_existing_server_name | 15.460 | success |
| tempest.api.compute.servers.test_servers.ServersTestJSON.test_update_access_server_address | 9.006 | success |
| tempest.api.compute.servers.test_servers.ServersTestJSON.test_update_server_name | 7.960 | success |
| tempest.api.compute.test_quotas.QuotasTestJSON.test_get_default_quotas | 0.427 | success |
| tempest.api.compute.test_quotas.QuotasTestJSON.test_get_quotas | 0.127 | success |
| tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_attach_detach_volume | 128.877 | success |
| tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_list_get_volume_attachments | 42.423 | success |
| tempest.api.compute.volumes.test_volumes_list.VolumesTestJSON.test_volume_list | 0.142 | success |
| tempest.api.compute.volumes.test_volumes_list.VolumesTestJSON.test_volume_list_with_details | 0.142 | success |
| tempest.api.identity.v3.test_tokens.TokensV3Test.test_create_token | 0.147 | success |
| tempest.api.image.v2.test_images.ListImagesTest.test_list_no_params | 0.100 | success |
| tempest.api.object_storage.test_object_expiry.ObjectExpiryTest.test_get_object_after_expiry_time | 12.339 | success |
| tempest.api.object_storage.test_object_services.ObjectTest.test_copy_object_2d_way | 1.457 | success |
| tempest.api.object_storage.test_object_services.ObjectTest.test_copy_object_across_containers | 0.482 | success |
| tempest.api.object_storage.test_object_services.ObjectTest.test_copy_object_in_same_container | 0.214 | success |
| tempest.api.object_storage.test_object_services.ObjectTest.test_copy_object_to_itself | 0.186 | success |
| tempest.api.object_storage.test_object_services.ObjectTest.test_delete_object | 0.113 | success |
| tempest.api.object_storage.test_object_services.ObjectTest.test_get_object_if_different | 0.096 | success |
| tempest.api.object_storage.test_object_services.ObjectTest.test_object_upload_in_segments | 1.027 | success |
| tempest.api.object_storage.test_object_temp_url.ObjectTempUrlTest.test_get_object_using_temp_url | 0.057 | success |
| tempest.api.object_storage.test_object_temp_url.ObjectTempUrlTest.test_put_object_using_temp_url | 0.106 | success |
| tempest.api.object_storage.test_object_version.ContainerTest.test_versioned_container | 1.281 | success |
+-----------------------------------------------------------------------------------------------------------------------------------+---------+---------+

To output results to an HTML file, run
rally verify detail --html > /tmp/test.html

Then you can open it in a web browser.

The post Testing your OpenStack with Tempest – Part 2 appeared first on Aptira OpenStack Services in Australia Asia Europe.

by Shunde Zhang at September 21, 2016 03:16 AM

Adam Young

Distinct RBAC Policy Rules

The ever-elusive bug 968696 is still out there, due in no small part to the distributed nature of the policy mechanism. One question I asked myself as I chased this beastie is: how many distinct policy rules do we actually have to implement? This is an interesting question because, if we can find an automated way to answer it, that can lead to an automated way of transforming the policy rules themselves, and thus to a more unified approach to policy.

The set of policy files used in a TripleO overcloud contains around 1400 lines of rules:

$ find /tmp/policy -name \*.json | xargs wc -l
   73 /tmp/policy/etc/sahara/policy.json
   61 /tmp/policy/etc/glance/policy.json
  138 /tmp/policy/etc/cinder/policy.json
   42 /tmp/policy/etc/gnocchi/policy.json
   20 /tmp/policy/etc/aodh/policy.json
   74 /tmp/policy/etc/ironic/policy.json
  214 /tmp/policy/etc/neutron/policy.json
  257 /tmp/policy/etc/nova/policy.json
  198 /tmp/policy/etc/keystone/policy.json
   18 /tmp/policy/etc/ceilometer/policy.json
  135 /tmp/policy/etc/manila/policy.json
    3 /tmp/policy/etc/heat/policy.json
   88 /tmp/policy/auth_token_scoped.json
  140 /tmp/policy/auth_v3_token_scoped.json
 1461 total

Granted, that count might not be all distinct rule lines, as some rules span multiple lines, though most seem to be on a single line; there is some whitespace, too.

Many of the rules, while written differently, can map to the same implementation. For example:

“rule: False”

can reduce to

“False”

which is the same as

“!”

All are instances of oslo_policy._checks.FalseCheck.
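A quick way to convince yourself of this is a minimal sketch using the same oslo.policy loader as the analysis script below; it relies on the fact that oslo.policy fails closed, parsing anything it cannot understand (such as a bare “False”) into a FalseCheck.

from oslo_policy import policy
import oslo_policy._checks as _checks

# "!" is the canonical never-allow rule; a bare "False" cannot be
# parsed, so oslo.policy falls back to a FalseCheck for it too.
rules = policy.Rules.load('{"a": "!", "b": "False"}', "default")
print(isinstance(rules["a"], _checks.FalseCheck))  # True
print(isinstance(rules["b"], _checks.FalseCheck))  # True

A “rule:False” check is slightly different: it parses to a RuleCheck that dereferences a rule named “False”, which (unless someone defines one) does not exist, so it also always fails.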

With that in mind, I gathered up the set of policy files deployed on a Tripleo overcloud and hacked together some analysis.

Note: Nova embeds its policy rules in code now. In order to convert them to an old-style policy file, you need to run a command line tool:

oslopolicy-policy-generator --namespace nova --output-file /tmp/policy/etc/nova/policy.json

Ironic does something similar, but uses

oslopolicy-sample-generator --namespace=ironic.api --output-file=/tmp/policy/etc/ironic/policy.json

I’ve attached my source code at the bottom of this article. Running the code provides the following summary:

55 unique rules found

The longest rule belongs to Ironic:

OR(OR(OR((ROLE:admin)(ROLE:administrator))AND(OR((tenant == demo)(tenant == baremetal))(ROLE:baremetal_admin)))AND(OR((tenant == demo)(tenant == baremetal))OR((ROLE:observer)(ROLE:baremetal_observer))))

Some look somewhat repetitive, such as

OR((ROLE:admin)(is_admin == 1))

And some downright dangerous:

NOT( (ROLE:heat_stack_user)

This is dangerous, as there are ways to work around having an explicit role in your token.

Many are indications of places where we want to use implied roles, such as:

  1. OR((ROLE:admin)(ROLE:administrator))
  2. OR((ROLE:admin)(ROLE:advsvc)
  3. (ROLE:admin)
  4. (ROLE:advsvc)
  5. (ROLE:service)

 

This is the set of keys that appear more than once:

9 context_is_admin
4 admin_api
2 owner
6 admin_or_owner
2 service:index
2 segregation
7 default

Doing a grep for context_is_admin shows all of them with the following rule:
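For instance, with the files collected under /tmp/policy as above, a recursive grep along these lines does the job (the exact invocation is illustrative):

grep -rh '"context_is_admin"' /tmp/policy/etc/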

"context_is_admin": "role:admin",

admin_api is roughly the same:

cinder/policy.json: "admin_api": "is_admin:True",
ironic/policy.json: "admin_api": "role:admin or role:administrator"
nova/policy.json:   "admin_api": "is_admin:True"
manila/policy.json: "admin_api": "is_admin:True",

I think these are supposed to include the new check for is_admin_project as well.

Owner is defined two different ways in two files:

neutron/policy.json:  "owner": "tenant_id:%(tenant_id)s",
keystone/policy.json: "owner": "user_id:%(user_id)s",

Keystone’s meaning is that the user matches, whereas Neutron’s is a project scope check. Both rules should change.

Admin or owner shows the same variety:

cinder/policy.json:    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
aodh/policy.json:      "admin_or_owner": "rule:context_is_admin or project_id:%(project_id)s",
neutron/policy.json:   "admin_or_owner": "rule:context_is_admin or rule:owner",
nova/policy.json:      "admin_or_owner": "is_admin:True or project_id:%(project_id)s"
keystone/policy.json:  "admin_or_owner": "rule:admin_required or rule:owner",
manila/policy.json:    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",

Keystone is the odd one out here, with owner again meaning “user matches.”

Segregation is another rule that means admin:

aodh/policy.json:       "segregation": "rule:context_is_admin",
ceilometer/policy.json: "segregation": "rule:context_is_admin",

Probably the trickiest one to deal with is default, as that is a magic term used when a rule is not defined:

sahara/policy.json:   "default": "",
glance/policy.json:   "default": "role:admin",
cinder/policy.json:   "default": "rule:admin_or_owner",
aodh/policy.json:     "default": "rule:admin_or_owner",
neutron/policy.json:  "default": "rule:admin_or_owner",
keystone/policy.json: "default": "rule:admin_required",
manila/policy.json:   "default": "rule:admin_or_owner",

There seem to be three catch-all approaches:

  1. require admin,
  2. look for a project match but let admin override
  3. let anyone execute the API.

This is the only rule that cannot be made globally unique across all the files.

Here is the complete list of suffixes.  The format is not strict policy format; I munged it to look for duplicates.

(ROLE:admin)
(ROLE:advsvc)
(ROLE:service)
(field == address_scopes:shared=True)
(field == networks:router:external=True)
(field == networks:shared=True)
(field == port:device_owner=~^network:)
(field == subnetpools:shared=True)
(group == nobody)
(is_admin == False)
(is_admin == True)
(is_public_api == True)
(project_id == %(project_id)s)
(project_id == %(resource.project_id)s)
(tenant_id == %(tenant_id)s)
(user_id == %(target.token.user_id)s)
(user_id == %(trust.trustor_user_id)s)
(user_id == %(user_id)s)
AND(OR((tenant == demo)(tenant == baremetal))OR((ROLE:observer)(ROLE:baremetal_observer)))
AND(OR(NOT( (field == rbac_policy:target_tenant=*) (ROLE:admin))OR((ROLE:admin)(tenant_id == %(tenant_id)s)))
FALSE
NOT( (ROLE:heat_stack_user) 
OR((ROLE:admin)(ROLE:administrator))
OR((ROLE:admin)(ROLE:advsvc))
OR((ROLE:admin)(is_admin == 1))
OR((ROLE:admin)(project_id == %(created_by_project_id)s))
OR((ROLE:admin)(project_id == %(project_id)s))
OR((ROLE:admin)(tenant_id == %(network:tenant_id)s))
OR((ROLE:admin)(tenant_id == %(tenant_id)s))
OR((ROLE:advsvc)OR((ROLE:admin)(tenant_id == %(network:tenant_id)s)))
OR((ROLE:advsvc)OR((tenant_id == %(tenant_id)s)OR((ROLE:admin)(tenant_id == %(network:tenant_id)s))))
OR((is_admin == True)(project_id == %(project_id)s))
OR((is_admin == True)(quota_class == %(quota_class)s))
OR((is_admin == True)(user_id == %(user_id)s))
OR((tenant == demo)(tenant == baremetal))
OR((tenant_id == %(tenant_id)s)OR((ROLE:admin)(tenant_id == %(network:tenant_id)s)))
OR(NOT( (field == port:device_owner=~^network:) (ROLE:advsvc)OR((ROLE:admin)(tenant_id == %(network:tenant_id)s)))
OR(NOT( (field == rbac_policy:target_tenant=*) (ROLE:admin))
OR(OR((ROLE:admin)(ROLE:administrator))AND(OR((tenant == demo)(tenant == baremetal))(ROLE:baremetal_admin)))
OR(OR((ROLE:admin)(is_admin == 1))(ROLE:service))
OR(OR((ROLE:admin)(is_admin == 1))(project_id == %(target.project.id)s))
OR(OR((ROLE:admin)(is_admin == 1))(token.project.domain.id == %(target.domain.id)s))
OR(OR((ROLE:admin)(is_admin == 1))(user_id == %(target.token.user_id)s))
OR(OR((ROLE:admin)(is_admin == 1))(user_id == %(user_id)s))
OR(OR((ROLE:admin)(is_admin == 1))AND((user_id == %(user_id)s)(user_id == %(target.credential.user_id)s)))
OR(OR((ROLE:admin)(project_id == %(created_by_project_id)s))(project_id == %(project_id)s))
OR(OR((ROLE:admin)(project_id == %(created_by_project_id)s))(project_id == %(resource.project_id)s))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(ROLE:advsvc))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(field == address_scopes:shared=True))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(field == networks:shared=True)(field == networks:router:external=True)(ROLE:advsvc))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(field == networks:shared=True))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(field == subnetpools:shared=True))
OR(OR(OR((ROLE:admin)(ROLE:administrator))AND(OR((tenant == demo)(tenant == baremetal))(ROLE:baremetal_admin)))AND(OR((tenant == demo)(tenant == baremetal))OR((ROLE:observer)(ROLE:baremetal_observer))))
OR(OR(OR((ROLE:admin)(is_admin == 1))(ROLE:service))(user_id == %(target.token.user_id)s))

Here is the source code I used to analyze the policy files:

#!/usr/bin/env python

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import sys

from oslo_policy import policy
import oslo_policy._checks as _checks


def display_suffix(rules, rule):

    if isinstance (rule, _checks.RuleCheck):
        return display_suffix(rules, rules[rule.match.__str__()])

    if isinstance (rule, _checks.OrCheck):
        answer =  'OR('
        for subrule in rule.rules:
            answer += display_suffix(rules, subrule)
        answer +=  ')'
    elif isinstance (rule, _checks.AndCheck):
        answer =  'AND('
        for subrule in rule.rules:
            answer += display_suffix(rules, subrule)
        answer +=  ')'
    elif isinstance (rule, _checks.TrueCheck):
        answer =  "TRUE"
    elif isinstance (rule, _checks.FalseCheck):
        answer =  "FALSE"
    elif isinstance (rule, _checks.RoleCheck):       
        answer =  ("(ROLE:%s)" % rule.match)
    elif isinstance (rule, _checks.GenericCheck):       
        answer =  ("(%s == %s)" % (rule.kind, rule.match))
    elif isinstance (rule, _checks.NotCheck):
        # Close the parenthesis so negated rules render balanced.
        answer =  'NOT( %s )' % display_suffix(rules, rule.rule)
    else:        
        answer =  (rule)
    return answer

class Tool():
    def __init__(self):
        self.prefixes = dict()
        self.suffixes = dict()

    def add(self, policy_file):
        policy_data = policy_file.read()
        rules = policy.Rules.load(policy_data, "default")
        for key, rule in rules.items():
            suffix = display_suffix(rules, rule)
            self.prefixes[key] = self.prefixes.get(key, 0) + 1
            self.suffixes[suffix] = self.suffixes.get(suffix, 0) + 1

    def report(self):
        suffixes = sorted(self.suffixes.keys())
        for suffix in suffixes:
            print (suffix)
        print ("%d unique rules found" % len(suffixes))
        for prefix, count in self.prefixes.items():
            if count > 1:
                print ("%d %s" % (count, prefix))
        
def main(argv=sys.argv[1:]):
    tool = Tool()
    policy_dir = "/tmp/policy"
    name = 'policy.json'
    for root, dirs, files in os.walk(policy_dir):
        if name in files:
            policy_file_path = os.path.join(root, name)
            print (policy_file_path)
            with open(policy_file_path, 'r') as policy_file:
                tool.add(policy_file)
    tool.report()

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))

by Adam Young at September 21, 2016 01:52 AM

September 20, 2016

SUSE Conversations

Från atomer till galaxer med OpenStack

By Carl Linden. For companies that use Linux in the data center, are looking at Docker technology, and are building enormously powerful storage solutions out of different systems with Ceph, OpenStack is certainly nothing strange. But few know that it began with a challenge for NASA, which had a number of different systems it was trying to integrate and standardize. Out of Project …

+read more

The post Från atomer till galaxer med OpenStack appeared first on SUSE Blog.

by mnavarr at September 20, 2016 05:04 PM

Red Hat Stack

Red Hat OpenStack Platform and Tesora Database-as-a-Service Platform: What’s New

As OpenStack users build or migrate more applications and services for private cloud deployment, they are expanding their plans for how these deployments will be serviced by non-core, emerging components. Based on the April 2016 OpenStack User Survey (see page 35), Trove is among the top “as a service” non-core components that OpenStack users are deploying or plan to deploy on top of the core components. This comes as no surprise, as every application requires a database, and Trove provides OpenStack with an integrated Database-as-a-Service option that works smoothly with the core OpenStack services.

Recently, Red Hat and Tesora jointly announced that we have collaborated to certify Tesora Database as a Service (“DBaaS”) Platform on the Red Hat OpenStack Platform. When we at Red Hat announced our strategic decision to focus our development and contribution efforts on the core OpenStack services, we did so with confidence, due in large part to our expanded relationship with Tesora. Tesora is a recognized thought leader and the top contributor to upstream OpenStack Trove. They understand the needs of the Trove community, but more importantly they have a reputation for understanding, and focusing on, the needs of those developing and supporting applications running in a heterogeneous database environment. Adding Tesora DBaaS Platform as a certified workload on top of Red Hat OpenStack Platform addresses our customer requirements and provides an immediate, production-ready DBaaS option that can be deployed within their current Red Hat OpenStack Platform 8 and higher environments.

What’s New for Red Hat OpenStack Platform Users?  

From a technical and operational standpoint, this move offers users the following benefits:

  • The Tesora DBaaS Platform is now tested and certified as a supported workload on top of Red Hat OpenStack Platform 8 and higher. From a technical standpoint the Tesora solution is fully compatible with the core OpenStack service APIs and includes all upstream features and bug fixes. The Tesora Trove controller is a drop-in replacement for the upstream Trove controller so users can more easily move to the Tesora solution with minimal effort.
  • Tesora has addressed the database “golden” image building and maintenance problem. Most organizations use multiple database platforms depending on their specific business needs. Building, tuning, securing and maintaining each database platform requires a specific DBA skillset and some level of administrative time and effort. Cloud-enabled DBaaS enables DBAs and developers to be more productive in these areas, but there is still a gap when it comes to creating, testing and maintaining “golden” images for each database platform used in a heterogeneous environment. The OpenStack Trove community provides tooling for this in the form of disk-image-builder (“DIB”). While DIB is functional and enables the development of standardized database images, it is very verbose and complex to maintain, specifically for images created for operating systems that are commercially supported and maintained, such as Red Hat Enterprise Linux (RHEL); in short, updating DIB-rendered images is often difficult and time consuming. Tesora has addressed this problem by providing production-ready solutions, pre-built database images, and support for the SQL, NoSQL, open source, and proprietary database platforms that are most commonly used in heterogeneous environments. Red Hat OpenStack Platform users can now deploy Tesora DBaaS Platform on top of their existing environments and can provide their users with DBaaS across 15 different database platforms, using out-of-the-box, pre-built, certified and easily updatable images running on Red Hat Enterprise Linux and other commercially supported and maintained operating systems.
  • Tesora DBaaS Platform CLI and Dashboard make database creation and administration quicker and easier across different database platforms. The biggest challenge most DBAs and developers face is managing and monitoring the databases under their care. Tesora DBaaS Platform provides a rich set of CLI commands, APIs and a web-based GUI that provide a common interface across heterogeneous platforms. Users are shielded from syntax and platform specifics, so they can focus on the “what” they need to do vs the “how” they will do it for things like backup and recovery, database creation, configuration, replication and clustering.  
  • Tesora DBaaS Platform can be deployed on top of an existing Red Hat OpenStack Platform 8 and higher deployment. Users can gain the immediate benefits of DBaaS with little to no change to their existing Red Hat OpenStack Platform deployment. As mentioned earlier, the Tesora solution is fully compatible with the core OpenStack (and Red Hat OpenStack Platform) service APIs. Red Hat users can easily install and configure the Trove DBaaS controller, dashboard, and database images with minimal disruption to the other OpenStack services. For new and existing Red Hat OpenStack Platform deployments, there are some things that should be taken into consideration:
    • Red Hat OpenStack Platform director is used to deploy and upgrade the OpenStack core components. Users who choose not to use director to deploy and upgrade Red Hat OpenStack Platform will have a different support and upgrade path from those who do.
    • Tesora deployment and upgrades have yet to be integrated with Red Hat OpenStack Platform director. Tesora provides installation and upgrade guides for manually deploying and upgrading Tesora DBaaS Platform on top of an existing Red Hat OpenStack Platform installation. Director integration is in the works, but Tesora has provided no delivery timeframe on when this will happen.
    • Red Hat and Tesora are collaborating to support our joint customers. While you will need a support agreement with each of us, the support experience should be pleasant.

I speak for everyone at Red Hat and Tesora when I say we believe this certification and ongoing collaboration will bring tremendous benefit and value add to our customer base from both a technical and business standpoint and we look forward to your feedback.

For more information:

* Tesora Press Release http://www.tesora.com/press-releases/tesora-collaborates-red-hat-deliver-certified-openstack-based-database-service-platform/

 

by Rob Young, Principal Product Manager, Red Hat at September 20, 2016 02:30 PM

OpenStack Superuser

Innovate, collaborate, replicate success says OpenStack’s Jonathan Bryce

OpenStack’s Executive Director Jonathan Bryce has a simple yet compelling goal: to give the world all the computing capacity it’s going to need in the very near future.

In the keynote given at the recent OpenStack Days Silicon Valley in Mountain View, California, Bryce explained that he’d also like to make sure that the much-needed capacity isn’t locked up within only a few organizations, governments, or companies.

“Having an open, viable, and powerful set of tools to enable that is extremely critical,” says Bryce.

A lot of the players in the same space as OpenStack, who provide stakeholders with the compute power and storage capacity they need to bring innovative products to market or the public space, treat it as a zero-sum game, as if taking market share and becoming the top dog were the most important thing.

Bryce thinks about it differently.

“Because of the scope of what this industry is now able to do,” Bryce says, “it’s really a positive-sum game.”

It’s this sense of growth and collaboration that informs OpenStack’s current aim. The Foundation owes its base to software, events and users, but it’s also more than the sum of those parts.

“What we’ve really created,” says Bryce, “is a competitive marketplace for ideas that really values implementation and contribution.”

To keep moving forward, Bryce thinks there are three things OpenStack must do: innovate, collaborate, and duplicate success.

Video: https://www.youtube.com/watch?v=2JhodK6zGKU

Innovate

The technology industry never slows down, so it’s time for OpenStack to look to the future by studying what users are starting to do now. OpenStack users are already working in areas like the “internet of things,” machine learning, artificial intelligence, and massive data analytics.

“It’s critical,” says Bryce, “that OpenStack continues to expand what it supports in these workloads.”

It’s not time to circle the wagons and double down on what OpenStack does now, but rather to continue to grow critical services and the ability to provide cloud services and infrastructure for what its users will be doing in the near future.

Collaborate

In its continuing effort to find ways to push the envelope for new ideas and services for its users, OpenStack must also collaborate.

“This is an opportunity right now to put together incredible tools that are being built in different communities in new ways to build the best system for users and application developers,” says Bryce.

Just as the LAMP stack (Linux, Apache, MySQL, and PHP) empowered developers to implement powerful web applications and change the way users see the internet itself, OpenStack — as part of a collaborative effort among open source technologies — will power the next cloud frameworks and change the way people use networks, cloud computing, and data storage.

This is how OpenStack will support collaboration among technologies to give users reliable, scalable, programmable infrastructure so they can go out and do amazing things.

Duplicate success

OpenStack, says Bryce, is a serious technology for cloud computing. Historically, it’s been challenging to set up, requiring a lot of technical knowledge and help from the community. The last couple of releases, though, have focused quite a bit on making OpenStack much easier to install, upgrade, and manage.

“A lot of the challenges that users face go beyond the technology,” he says, “to the culture and the processes and the people that they have to work with inside of their organizations.”

OpenStack has to learn how to help share knowledge and duplicate successes across a wider variety of users. There are incredible success stories out there, with dozens of clouds with multiple thousands of nodes running great workloads.

“I think that one of the things we need to do a better job of is sharing those and making it easier to duplicate that,” says Bryce.

The OpenStack Foundation has begun this process with some useful consumable content for users looking to use OpenStack to power their cloud computing workloads. Found at openstack.org/enterprise, these books cover not just the technology or how to get started from a software perspective, but also what needs to be considered from an organizational standpoint to put together a successful cloud computing strategy.

The Foundation has also launched a program to empower users to become a Certified OpenStack Administrator, further supporting the platform and the community itself. Talent is a challenge with every new technology; with more OpenStack experts out there, says Bryce, the platform and technology can become even more ubiquitous.

Ultimately, says Bryce, OpenStack needs to keep the innovation flowing and keep welcoming new ideas and concepts. It needs to make sure to collaborate with other important technologies and keep finding ways to put them together to create new value. In addition, OpenStack has to share the successes and the information that successful users are coming away with to continue its phenomenal growth into the premier open source cloud computing platform in the years ahead.

Cover Photo // CC BY NC

The post Innovate, collaborate, replicate success says OpenStack’s Jonathan Bryce appeared first on OpenStack Superuser.

by Rob LeFebvre at September 20, 2016 11:02 AM

Fleio Blog

OpenStack billing features – Fleio 1.0 preview (part 2 of 2)

You saw the billing options you have for block devices and disk images. Today I’d like to show you the OpenStack billing features for compute instances and network traffic. Fleio can apply a cost to compute instances based on multiple attributes: existence over time (of instance); vCPUs: number of virtual CPU cores; root_gb: root file system size […]

by adrian at September 20, 2016 09:33 AM

September 19, 2016

Hugh Blemings

Lwood-20160918

Introduction

Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here.

Basic Stats for week 12 to 18 September for openstack-dev:

  • ~499 Messages (down about 10% relative to last week)
  • ~216 Unique threads (up about 35% relative to last week)

Another pretty typical week on the list: message count down a bit from last week, but the thread count up, as there weren’t any particularly long threads to jiggle the metrics (a single message is counted as a thread).

Notable Discussions – openstack-dev

New OpenStack Security Notices

MongoDB guest instance allows any user to connect [OSSN-0066]

From the summary: “When creating a new MongoDB single instance or cluster the default setting in MongoDB `security.authorization` was set as disabled. This resulted in no need to provide user credentials to connect to the mongo instance and perform read / write operations from any network that is attached on instance create.” The original email or the OSSN itself has more information.

Deleted Glance image IDs may be reassigned [OSSN-0075]

From the summary (a paraphrase, so errors are mine): “It is possible for image IDs from deleted images to be reassigned to other images. This creates the possibility that, by creating a nefarious image that shares the ID of a previously deleted but trusted image, the nefarious image can be booted without the user realising it was quietly changed.” The original post and/or the OSSN has more information.

Writing down OpenStack Principles – thread continues and a great quote

The thread that started last week with a post from Chris Dent trundled along a bit more this week with a few more messages. Most of the substantive commentary seems to have moved to the review that Thierry Carrez created; the last message in the thread notes that Thierry has now posted a revised version which seeks to incorporate the various bits of feedback.

A brief side discussion popped up on the thread between Clay Gerrard and Thierry, where they both acknowledged somewhat different views but also an appreciation for the other’s willingness to embrace new information and change their outlook as appropriate. Collaborative open source development at its best there, I reckon.

I’ll close this item with a quote from Thierry’s email which is, I think, one of the most eloquent summaries I’ve read of the relationship between governance and code in an open source project.

“It is important for open source projects to have a strong governance model, but it is only the frame that holds the canvas and defines the space. The important part is the painting.”

Nicely put :)

 

Stewardship Working Group (SWG) meeting report

The SWG was mentioned in Lwood-20160717 – as Amrith Kumar noted in the post in question, the group was set up by the Technical Committee (TC) with the intent that this small group would “review the leadership, communication, and decision making processes of the TC and OpenStack projects as a whole, and propose a set of improvements to the TC.”

During the past week Colette Alexander posted an update on recent activities. Of note is that work is under way to refine the vision for what the SWG will accomplish in Barcelona, and feedback is sought from the community.

Election Season Continues

This week marked the end of the PTL nomination period. As Tristan Cacqueray notes here, there were four projects (Astara, OpenStack Salt, OpenStack UX and Security) without candidates, so the TC will appoint their PTLs. Six projects had more than one PTL candidate and so will have an election: Freezer, Ironic, Keystone, Kolla, Magnum and Quality Assurance. There’s a full list of candidates below or on the official site here.

At the time of writing the election itself has just kicked off and will run until 23:45 September 25, 2016 (UTC). If you’re eligible, please vote! :)

End of Cycle Retrospectives / Postmortems

As Newton draws to an end, projects are starting to do retrospectives. Three I spotted were for Keystone (Steve Martinelli), Neutron (Armando Migliaccio) and Nova (Matt Riedemann), with more likely over the next few weeks. These are all works in progress, so if you’ve something constructive to contribute, please do!

Beautiful Music in Barcelona

While the gathering proposed may not quite reach the vocal, choral and orchestral grandeur of this, if you’re a musician and will be at the Barcelona Summit, please read Amrith Kumar’s post here.

Amrith asks “would y’all musicians who plan to bring your gear to Barcelona please start a little thread here on the ML and let’s get a band going?”.  While I won’t alas be in Barcelona I’ve had the good fortune to be involved in these sorts of FOSS meets music gatherings in the past back in the Canonical days – it’s a ton of fun and I commend it to you :)

Notable Discussions – other OpenStack lists

Nothing that leapt out from the other lists this week.

Upcoming OpenStack Events

Best I can tell, no OpenStack-related events were mentioned this week.  Don’t forget the OpenStack Foundation’s Events Page for a list of general events that is frequently updated.

People and Projects

PTLs stepping down

PTL Candidates

Core nominations & changes

Further Reading & Miscellanea

Don’t forget these excellent sources of OpenStack news – most recent ones linked in each case

This week’s edition of Lwood brought to you by the background noise of Brunswick, Victoria.  Not as tuneful as Weather Report last week, but a pleasant hum and bustle nonetheless :)  Oh and a quick reprise of Barcelona featuring Freddie Mercury and Montserrat Caballé.

Last but by no means least, thanks, as always, to Rackspace :)

by hugh at September 19, 2016 01:02 PM

OpenStack Superuser

Update on API information in OpenStack

Earlier this year, I reported on our progress in updating the many bits of API information for OpenStack services in “What’s next for OpenStack application developer guides?” Since then, we have gathered even more information and lessons learned as OpenStack has also grown to add more and more REST API services.

At last count in the TC governance file, there are 28 REST API services under the OpenStack framework. This release, we have migrated many of those services from WADL and collected the links to API information in the projects.yaml file so that documentation links are published for each project. The collection is now available at developer.openstack.org/api-guide/quick-start/.

Here are some common questions and answers.

What drove the switch over from Web Application Description Language?

Back in 2010, when we first started documenting OpenStack APIs, WADL was a great way to do that. It was a standard submitted to the W3C, and we had hired an XML tools specialist at Rackspace to ensure we could do two things with WADL: filter requests to make sure they were valid for our cloud, and keep the docs true to the implementation. It was a win-win and we were documenting two APIs this way: Object Storage and Compute.

Fast-forward to 2014. The WADL specification from 2009 had not gained traction as a standard, the doc-specific tools team was disbanded, and trying to do two things (filter and document) with one tool (WADL) proved unwieldy. Plus, it’s not easy to community-source updates on a super complex toolchain like XML and WADL. Not to mention the allergy many developers have to XML. We needed a new solution for documenting API reference information.

For the Liberty and Mitaka releases, I worked on two specifications to transform the API documentation. The main problems to solve are summarized as:

  • API reference information has been historically difficult to maintain using community or company resources due to the specialized nature of both the tools and the REST API knowledge.
  • API reference information needs to be sourced where API writers, API developers, contributor developers, and the API working group members can review together.

By switching from WADL, we could move to a format that was more modern and fitting for developers to maintain going forward. We sure made a good run at getting OpenAPI (Swagger) working. I believe parts of OpenAPI can be useful to the OpenStack community, but the lesson learned from WADL for me was that dual-purpose solutions eventually only have one purpose, and many solutions do not primarily exist for docs. Once we looked at using OpenAPI for a large API like Compute and evaluated whether we could document microversions, we opted for a new solution using RST (reStructuredText) plus YAML (YAML Ain’t Markup Language) and a lot of web development to make sure we could get decent display and interaction with the reference parts. And, by using standard formats that OpenStack developers already use heavily, we made it easy for developers to contribute.
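As a rough sketch of the resulting source format (modeled on the os-api-ref Sphinx extension that renders these pages; the method and parameter names here are illustrative), an api-ref RST file pairs directives with entries in a shared parameters.yaml:

.. rest_method:: GET /v2.1/servers

List servers, with optional paging parameters.

.. rest_parameters:: parameters.yaml

   - limit: limit
   - marker: marker

with parameters.yaml holding the reusable definitions:

limit:
  description: Requests a page size of items.
  in: query
  required: false
  type: integer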

What advantages do you see in re-using the new format for downstream vendor documentation?

Our vision is to make developer.openstack.org a useful place for developers who are using OpenStack cloud resources to get the information they need to make requests against OpenStack endpoints. API reference information enables Software Development Kit (SDK) developers to create code, examples, and documentation that you use to create OpenStack cloud applications in the language of your choice. By using a format whose maintenance our community can share, and which produces better web experiences, we can offer the most accurate docs across more REST API services.

The results are looking really quite good. Take a look at a list of all the APIs documented this way, and the Compute API and DNS API sites offer great examples of the new API reference, maintained by the project team rather than a central docs team.

We’re still working on a bit more interface “glue” to let users browse across all the OpenStack APIs, and with feature freeze at the end of August, we got all the moving parts of our tools (a Sphinx theme and a Sphinx extension) to fit together.

This was an amazing cross-purpose, cross-function, and cross-project effort accomplished by Karen Bradshaw, Sean Dague, Graham Hayes, Doug Hellmann, Auggy Ragwitz, and Russell Sim with consistent assistance from the indefatigable Andreas Jaeger. This work had to span three releases (18 months), so we’re super happy to have this new framework in place and in use.

What can I do to try out this new API doc framework?

If you’re interested in contributing to these docs, go to a local copy, or clone your favorite (or least favorite, ha!) OpenStack service repository and look for an api-ref directory. If there’s not one already, create it using the contributor documentation.

If you’re already active in a project with an api-ref directory, try building the docs locally with:

$ tox -e api-ref

Read what your project’s API docs say, test it against actual requests and responses, and see if you can add to the accuracy of the documentation by making sure the docs match reality. As always, I’m here to help, and you can ask questions on the openstack-dev list with the [api] or [doc] topic, as you take a closer look.
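On that note, one hedged way to spot-check the Compute docs against a live cloud (the controller endpoint is a placeholder for your own deployment):

$ TOKEN=$(openstack token issue -f value -c id)
$ curl -s -H "X-Auth-Token: $TOKEN" http://controller:8774/v2.1/servers | python -m json.tool

Compare the fields in the JSON response with the documented response parameters, and propose a patch for anything that differs.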

 

Cover Photo // CC BY NC

The post Update on API information in OpenStack appeared first on OpenStack Superuser.

by Anne Gentle at September 19, 2016 11:02 AM

Opensource.com

Making installation easy, Hackathon winners, and more OpenStack news

Explore what's happening this week in OpenStack, the collection of open source projects for building your own cloud.

by Jason Baker at September 19, 2016 05:00 AM

September 16, 2016

OpenStack Blog

OpenStack Developer Mailing List Digest September 10-16

Nominations for OpenStack PTLs Are Now Open

  • Will remain open until September 18 23:45 UTC
  • Submit a text file to the openstack/election repository [1].
    • File name convention: $cycle_name/$project_name/$ircname.txt
  • In order to be an eligible candidate (and be allowed to vote) you need to have contributed an accepted patch to one of the program projects during the Mitaka-Newton timeframe.
  • Additional information [2].
  • Approved candidates [3]
  • Elections will run from September 19, 2016 00:00 UTC until September 25, 2016 23:45 UTC
  • Full thread

Ocata Design Summit – Proposed Slot Allocation

  • Proposed slot allocation for project teams at the Ocata design summit in Barcelona [4], based on requests current PTLs have made and adjusted for the limited space available.
  • Kendall Nelson and Thierry will start laying out those sessions over the available rooms and time slots.
  • Constraints (e.g. Manila not wanting to overlap with Cinder) should be communicated to Thierry ASAP.
  • If you don’t plan to use all of your slots, let Thierry know so they can be given to a team that needs them.
  • Start working with your team on content you’d like to cover at the summit and warm up those etherpads!
  • Full thread

OpenStack Principles

  • A set of OpenStack principles is proposed [5] to accurately capture existing tribal knowledge as a prerequisite for being able to have an open and productive discussion about changing it.
  • The last time the majority of the Technical Committee was together, it was realized that a set of unspoken assumptions was being carried and used to judge things.
    • These are being captured to empower everyone to actually be able to challenge and discuss them.
  • The principles were drafted by various TC members who have governance history and know these principles, in an attempt to document that history and answer commonly asked questions. They are not by any means final, and the community should participate in discussing them.
  • Full thread

API Working Group News

  • Recently merged guidelines
    • URIs [6]
    • Links [7]
    • Version string being parsable [8]
  • Guidelines Under review
    • Add a warning about JSON expectations. [9]
  • Full thread

 

[1] – http://governance.openstack.org/election/#how-to-submit-your-candidacy

[2] – https://governance.openstack.org/election/

[3] – http://governance.openstack.org/election/#ocata-ptl-candidates

[4] – http://lists.openstack.org/pipermail/openstack-dev/2016-September/103560.html

[5] – https://review.openstack.org/#/c/357260/5

[6] – https://review.openstack.org/322194

[7] – https://review.openstack.org/354266

[8] – https://review.openstack.org/346846

[9] – https://review.openstack.org/#/c/364460/

 

by Mike Perez at September 16, 2016 03:49 PM

Red Hat Stack

Install your OpenStack Cloud before lunchtime

Figure 1. The inner workings of QuickStart Cloud Installer

What if I told you that you can have your OpenStack Cloud environment setup before you have to stop for lunch?

Would you be surprised?

Could you do that today?

In most cases I am betting your answer would be “not possible, not even on your best day”. Not to worry, a solution is here and it’s called the QuickStart Cloud Installer (QCI).

Let’s take a look at the background of where this Cloud tool came from, how it evolved and where it is headed.

 

Born from need

As products like Red Hat Cloud Suite emerge onto the technology scene, they exemplify the need for companies to be able to support a range of infrastructure and application development use cases.

The problem: how do you streamline the setup of such intricate and complex solutions?

Figure 2. Getting the installation of complex infrastructure solutions down from a month, to days, to just hours, based on testing by Red Hat.

It started in 2013 with research into how the product Red Hat Cloud Infrastructure (RHCI) was being deployed by Red Hat customers. That information was used to start an effort creating several simple, reproducible installation guides that could cut down the time needed to install the products involved.

The final product installation documentation brought the deployment time for this infrastructure solution down to just several days, instead of a month. Figure 2 shows the progress made across successive efforts at installing RHCI.

The next evolution included Satellite and OpenShift offerings that you now find in the Red Hat Cloud Suite solution. This brought more complexity into the installation process and a push was made to go beyond just documentation. An installation effort commenced that had to bring together all the products, deal with their configurations and manage it all to a full deployment in a faster time frame than several days.

 

How it works

QCI progressed and expanded by functioning as an extension (plugin) of Satellite with intentional roadmap alignment. It uses specific product plugins that interface with each product’s individual API, so that they can be used both for individual product installations and for complete solution installs.

Figure 1 shows you the architectural layout of QCI as it relates to Satellite. See the online documentation for the versions supported by QCI at the time of this writing; we expect to update the documentation on a regular basis as products that QCI supports are released.

The installer, when first started, spins up the Fusor Installer. This is a plugin to Foreman and is used to perform the initial setup such as networking and provisioning within Satellite to be used later in the deployment.

Some of the deployment steps depend on the path you have chosen when specifying the products you wish to install:

  • If an RHV with CloudForms deployment is chosen, QCI calls Puppet modules for configuring and setting up the RHV environment. It installs RHV-M and runs Python scripts which set up the RHV Datacenter.
  • CloudForms management engine is deployed as a Satellite resource and as such can be launched on top of RHV.
  • Most of the OpenShift product deployment uses Ansible to facilitate the installation and setup of the environment.
  • OpenStack uses what is known as the TripleO installation, meaning OpenStack installed On OpenStack (hence the three O’s). It uses an all-in-one ISO image containing OpenStack which then deploys a customized version configured through the QCI user interface.

The two deployment patterns supported by QCI are RHCI and Red Hat Cloud Suite.

Now here is the unbelievable part we suggested in the title: both deployment patterns can be installed in under four hours.

Figure 3. The timeline from pushing the deploy button to completion of your OpenStack deployment.

Yes, you can arrive at work in the morning and have your OpenStack Cloud infrastructure set up by the time you have to break for lunch!

Figure 3 shows you a condensed timeline of our testing of the RHCI installation as an example, but the same is possible with Red Hat Cloud Suite.

 

The future is bright

We can’t think of anything brighter for you than a future where you can reduce deployment times for your complex Cloud infrastructure, but there are more positive points to take note of when you leverage QCI:

  • Simple fully integrated deployments of RHCI and Red Hat Cloud Suite requiring only minimal documentation.
  • Easy to use, single graphical web-based user interface for deploying all products.
  • Leverages existing Red Hat Storage (Ceph and Gluster) deployments for Red Hat Virtualization, Red Hat OpenStack, and OpenShift product installations.
  • Integrated with Red Hat’s Customer Portal for automated subscription management.
  • Avoid the need for costly consultants when deploying proof-of-concept environments.

With this in mind, the team behind this technology is busy looking at expanding into more products and solutions within the Red Hat portfolio. Who knows, maybe the next step could be including partner technologies or other third-party solutions?

There is no time like the present to dive right in and take QCI for a spin, and be sure to let us know what you think of it.

(This article written together with Red Hat Software Engineer Nenad Peric)

by Eric D. Schabell at September 16, 2016 12:00 PM

OpenStack Superuser

How to run a load-balanced service in Docker containers

This article offers a step-by-step guide on setting up a load-balanced service deployed on Docker containers using OpenStack VMs. The installation consists of an Nginx load balancer and multiple upstream nodes located in two deployments.

The issue

Let’s imagine that we plan to deploy an application that is expected to be heavily used. The assumption is that a single server can’t handle all incoming requests, so we need multiple instances. Moreover, the application we launch runs complex calculations that fully utilize the server for a long time, so a single server instance can’t meet the expected performance requirements. Or we simply want to deploy an application across multiple instances to ensure that if one instance fails, we still have another instance operating.

Docker allows us to easily and efficiently run multiple instances of the same service. Docker containers are designed to be built up very quickly on a VM, regardless of the underlying layers.

Nevertheless, such an installation leaves us with multiple containers running as separate objects. When building such an infrastructure, it is desirable to keep all the instances available over one URL. This requirement can be satisfied by adding a load balancer node.

Our goal

Our goal is to set up an installation that has an Nginx reverse proxy server at the front and a set of upstream servers handling the requests. The Nginx server is the one directly communicating with clients; clients don’t receive any information about the particular upstream server handling their requests. The responses appear to come directly from the reverse proxy server.

In addition to this functionality, Nginx provides health checks. These checks ensure that the nodes behind the load balancer are still operating. If one of the servers stops responding, Nginx stops forwarding requests to the failed node.
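With the open source version of Nginx these are passive checks, tuned per upstream server through the max_fails and fail_timeout parameters. A minimal sketch (the values and addresses are illustrative):

upstream servers {
server 1.2.3.4:8080 max_fails=3 fail_timeout=30s;
server 5.6.7.8:8080 max_fails=3 fail_timeout=30s;
}

After three failed attempts, a node is considered unavailable for 30 seconds before Nginx tries it again.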

How to build the infrastructure

First, we need to create Docker hosts in which we can run the containers. If you are not familiar with Docker, we recommend reading more about launching Docker hosts and containers in our previous article Docker containers on OpenStack VMs first. In this entry, we launch two OpenStack VMs running Ubuntu 14.04 on different deployments. The easiest way to launch the VMs is to use the Dashboard.

The VMs being launched should have a public IP address in order to be directly addressable over the internet. We also need to keep TCP port 2376 open for docker-machine to communicate with the Docker daemons, TCP port 80 (HTTP) open in order to access the load balancer, and ports 8080 and 8081 open so that the reverse proxy server can reach the upstream servers that will be accessible on those ports.
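As a sketch, with the legacy nova CLI the rules might look like the following (the security group name and CIDR are illustrative; use whichever tooling your deployment prefers):

# nova secgroup-add-rule default tcp 80 80 0.0.0.0/0
# nova secgroup-add-rule default tcp 2376 2376 0.0.0.0/0
# nova secgroup-add-rule default tcp 8080 8081 0.0.0.0/0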

Once the VMs are launched and operating, we make them Docker hosts with docker-machine using the generic driver. We can do so by executing this command from the local environment for both VMs:


# docker-machine create -d generic \
--generic-ip-address 1.2.3.4 \
--generic-ssh-key ~/.ssh/id_rsa \
--generic-ssh-user ubuntu \
lb1

(The IP address, SSH key path and machine name above are placeholders; 1.2.3.4 and lb1 match the first VM in the listing below. Repeat with 5.6.7.8 and lb2 for the second VM.)

After the execution, we have two Docker hosts running. We can check them with the ls command:

# docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
lb1 - generic Running tcp://1.2.3.4:2376 v1.11.0
lb2 - generic Running tcp://5.6.7.8:2376 v1.11.0

Now, we can run the containers on the hosts. We launch the upstream nodes and the reverse proxy. In this article, tutum/hello-world was chosen as an image for running the upstream nodes because it enables us to differentiate the particular containers from one another. We launch two containers from the hello-world image on both VMs. Moreover, we launch the load balancer on one of the hosts. There is an image called nginx in which Nginx is already set up. First, on instance lb1:

# eval $(docker-machine env lb1)
# docker run -d --name con1 -p 8080:80 tutum/hello-world
# docker run -d --name con2 -p 8081:80 tutum/hello-world
# docker run -d --name nginx1 -p 80:80 nginx

Let’s check the created containers on this host:

# docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
4a9a75a82ecf nginx "nginx -g 'daemon off" 16 minutes ago
Up 16 minutes 0.0.0.0:80->80/tcp, 443/tcp nginx1
b28434a2e5d4 tutum/hello-world "/bin/sh -c 'php-fpm " 18 minutes ago
Up 18 minutes 0.0.0.0:8081->80/tcp con2
3df4a2e5d86b tutum/hello-world "/bin/sh -c 'php-fpm " 19 minutes ago
Up 18 minutes 0.0.0.0:8080->80/tcp con1

And then for the second instance, but this time without launching the Nginx load balancer from the nginx image:

# eval $(docker-machine env lb2)
# docker run -d --name con3 -p 8080:80 tutum/hello-world
# docker run -d --name con4 -p 8081:80 tutum/hello-world
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0a35034ac307 tutum/hello-world "/bin/sh -c 'php-fpm " 15 minutes ago Up 15 minutes 0.0.0.0:8081->80/tcp con4
f223f0eed6a0 tutum/hello-world "/bin/sh -c 'php-fpm " 15 minutes ago Up 15 minutes 0.0.0.0:8080->80/tcp con3

How to configure the reverse proxy server

In the current state we have an Nginx welcome page accessible at 1.2.3.4 and four instances of hello-world web application, all with slightly different content at 1.2.3.4:8080, 1.2.3.4:8081, 5.6.7.8:8080, and 5.6.7.8:8081. Now, we have to configure the Nginx node to become a load balancer and a reverse proxy server.

We show only the basic configuration, which provides load balancing where all nodes are equal. There are many more possible configurations, enabling for example priorities (primary and secondary nodes), load balancing methods (round-robin, ip-hash), or weights for servers; a sketch of those options follows this paragraph. To find out more about load balancing configurations, we recommend reading the Nginx load balancing guide or the entry Understanding the Nginx Configuration File Structure and Configuration Contexts on the DigitalOcean blog.
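For illustration, a hedged variant of the upstream block using those options (ip_hash and weight are standard Nginx directives; the addresses are the same placeholders used below):

upstream servers {
# pin each client to one upstream node based on the client IP
ip_hash;
# send roughly twice as many requests to the first node
server 1.2.3.4:8080 weight=2;
server 5.6.7.8:8080;
}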

The configuration file we want to create in the container is /etc/nginx/conf.d/default.conf. We can execute commands in a container using the docker exec command; the convenient way is to establish a new shell session to the container. The nginx image has Bash available at /bin/bash:

# eval $(docker-machine env lb1)
# docker exec -it nginx1 /bin/bash

Executing this command creates a Bash session so that we can insert the desired content in the configuration file.

# echo "upstream servers {
server 1.2.3.4:8080;
server 5.6.7.8:8080;
server 1.2.3.4:8081;
server 5.6.7.8:8081;
}

# This server accepts all traffic to the port 80 and passes it to the upstream.
# Notice that the upstream name and the proxy_pass need to match.

server {
listen 80;

location / {
proxy_pass http://servers;
}
}" > /etc/nginx/conf.d/default.conf

The upstream servers section specifies the four upstream servers that we created before; they will be accessed over the given addresses. Because no other approach was specified, the default round-robin load balancing is used.

The server section defines the node’s function as a load balancer, setting proxy_pass to the group of servers defined above. The server listens on port 80.

Be aware that the listen port 80 inside the container does not have to correspond to the port the server is accessible at (1.2.3.4:80). If ports other than -p 80:80 had been published when starting the container, this directive would change accordingly. For example, if -p 8080:8081 were used, we would need to access port 8081 via 1.2.3.4:8080 and use listen 8081; in the configuration file.

The configuration file is properly configured now, so we can terminate the Bash session:

# exit

Back in our local environment, we restart the container so that the changes to the configuration file are loaded and the node begins to operate as a load balancer.

# docker restart nginx1

Review the infrastructure

The system is running now, so we can check its functionality. We can type http://1.2.3.4 into our browser and see the hello-world app content. When we reload the page, the displayed hostname changes; that means that another upstream server has responded to our request.

Simulate a failover

Let’s test the health checking feature. First, we stop one of the containers:

# docker stop con1

Then access http://1.2.3.4 several times and check the hostnames that are being displayed – only three different hostnames now alternate. This means that the stopped container no longer receives requests.

Now we start the container again:

# docker start con1

After a short delay, you should see four different hosts responding again.

Conclusion

Nginx is an efficient way to perform load balancing in order to provide failover, increase availability, extend the fleet of application servers, or unify the access point to an installation. Docker containers allow you to quickly spawn multiple instances of the same type on various nodes. Combined, they form an easy and powerful mechanism for solving challenges like these.

This post first appeared on the Cloud&Heat blog. Superuser is always interested in community content, email: editor@superuser.org.

Cover Photo // CC BY NC

The post How to run a load-balanced service in Docker containers appeared first on OpenStack Superuser.

by Superuser at September 16, 2016 11:02 AM

Alessandro Pilotti

Coriolis v2 – Adding DRaaS to Cloud Migrations

 

We’re excited to announce that our Coriolis cloud migration project introduced earlier this year just reached an important new milestone. With the addition of new features specifically aimed at business continuity and Disaster Recovery as a Service (DRaaS), Coriolis now offers both one-off migrations and constantly up-to-date replicas of virtual workloads between different clouds.

 

Business continuity in cloud migrations and DRaaS

Coriolis v2 introduces the concept of migration replicas. A migration replica is a complete copy of one or more virtual machines on a separate cloud environment. For example, VMware virtual machines can be replicated on OpenStack, AWS or Azure. The replica is obtained by incrementally copying (replicating) the virtual machines’ data from the source environment to the target, without interfering with any running workload. A migration replica can then be finalized by automatically applying the required changes to adapt it to the target environment (the migration phase).

Here are the objectives that we set for Coriolis V2:


  • No agent is needed inside the virtual machine itself: the user’s workload is a “black box” which means that Coriolis doesn’t require any guest access or credentials.


  • Guest OS independent: supports Windows, Linux, etc.


  • Virtual machines can be either powered on or off when replicating data; the replication must not impact business continuity.


  • Replica data transfers must be incremental to allow synchronization between source and destination with the minimum amount of network IO.


  • In the case of running VMs, the replicated content must retain an application-consistent state when available (e.g. VMware snapshot quiescing, VSS app consistency, etc).


  • All operations must be resilient as transient failures are to be expected in the cloud world.


  • Source and target hypervisor technology can differ: e.g. ESXi on a source vSphere, and KVM or Hyper-V on a target OpenStack.


  • In case of unavailability of the source environment (e.g. an unrecoverable failure), the virtual machines must be able to start on the target, retaining data and configurations, allowing a fast recovery.


  • Scalability: any number of replicas can be performed at a given time, limited only by the underlying resource availability, storage or network IO and QoS rules.


  • Testing: replicated VMs can be started anytime on the target environment to perform integration testing of guest operating systems and applications, without impacting the replica process or the source environment.


  • Run anywhere: all Coriolis components can be deployed as virtual, physical or containerized workloads / appliances anywhere, including directly on the source or target cloud.

Coriolis implements all the above requirements by leveraging the available cloud and virtualization technology (e.g. Changed Block Tracking in VMware) and cloud API abstractions (e.g. Cinder Storage Snapshot API in OpenStack) with a scalable and fully automated microservice based architecture.

 

How can Coriolis migration replicas be created and managed?

Coriolis offers a rich REST API that can be consumed with our CLI tools, web GUI (coming soon) or any 3rd party tools.
Coriolis identity and authentication are managed via Keystone, which allows a large number of backends including common options like OpenLDAP, Active Directory, Azure Federation, etc.
Source or target cloud credentials can be managed via Barbican, another project originating within the OpenStack ecosystem, for the added security benefits.

[Diagram: the replica creation and migration workflow]

Create a new replica

In this example two VMware VMs named “WebServer” and “DBServer” are going to be replicated to OpenStack (on any hypervisor, e.g.: KVM, Hyper-V or ESXi). The following command will create a configuration for the consistent replication of our “WebServer” and “DBServer” instances:

coriolis replica create --origin-provider vmware_vsphere --destination-provider openstack --origin-connection-secret $VMWARE_SECRET_REF --destination-connection-secret $OPENSTACK_SECRET_REF --instance DBServer --instance WebServer --target-environment "$TARGET_ENV"

Credentials for the source or target clouds can be securely stored in Barbican and safely referenced to Coriolis via their href:

barbican secret store -n "vsphere_connection_info" -t "text/plain" -p '{"host": "vsphere.local", "port": 443, "username": "user@vsphere.local", "password": "MyPassword", "allow_untrusted": true}'

barbican secret store -n "openstack_connection_info" -t "text/plain" -p '{"identity_api_version": 3, "username": "demo", "password": "MyPassword", "project_name": "demo", "user_domain_name": "default", "project_domain_name": "default", "auth_url": "http://openstack.local:35357/v3", "allow_untrusted": true}'

Note: if OpenStack credentials are omitted, the same Keystone token used by Coriolis is employed.

Lastly, the “target environment” parameter contains information about how to map virtual resources (such as networks, flavors etc) between source and target environments:

TARGET_ENV='{"network_map": {"VM Network Local": "public", "VM Network": "private"}, "flavor_name": "m1.small"}'

Replica configurations can be easily managed with the following commands:

replica list
replica show
replica create
replica delete
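
For example, assuming $REPLICA_ID holds the ID returned at creation time, a replica’s configuration and execution history can be inspected with:

coriolis replica show $REPLICA_ID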

 

Executing the replica

A replica can be executed at any time, e.g. on a weekly, daily or hourly basis, or even continuously:

coriolis replica execute $REPLICA_ID
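
As a sketch, a recurring execution could be driven by a plain crontab entry on the machine hosting the CLI (the replica ID is a placeholder):

# run the replica nightly at 02:00
0 2 * * * coriolis replica execute <replica-id>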

The replica execution is fully asynchronous and divided into multiple steps, each of which may be monitored with:

coriolis replica execution show $REPLICA_ID $EXECUTION_ID

During the first execution, a full data replication is performed while subsequent runs involve only the changes that occurred since the previous successful replica execution.

Optionally, you can also automatically shut down the source VM before the replication starts by adding the “--shutdown-instances” command line option.

The replica process may involve the creation of temporary resources (such as virtual machines and volumes) on both the source and target infrastructures, similar to what may be seen in the following replica execution output snippet:

id: b60ea340-da5e-40c7-9d2b-087c86fc952a
task_type: DEPLOY_REPLICA_TARGET_RESOURCES
instance: DBServer
status: COMPLETED
depends_on: b0cbd181-dba9-4908-89f5-1519f15fe3eb
progress_updates:
2016-08-29T08:29:31.000000 Creating migration worker instance keypair
2016-08-29T08:29:32.000000 Creating migration worker instance Neutron port
2016-08-29T08:29:34.000000 Adding migration worker instance floating IP
2016-08-29T08:29:45.000000 Adding migration worker instance security group
2016-08-29T08:29:47.000000 Waiting for connectivity on host: 10.89.13.211:22
2016-08-29T08:30:06.000000 Attaching volume to worker instance

id: bc04e461-ca2d-482e-90c4-70378522dfe7
task_type: REPLICATE_DISKS
instance: DBServer
status: COMPLETED
depends_on: 68674a95-7bb8-40ad-8cbc-d9598237ea91, b60ea340-da5e-40c7-9d2b-087c86fc952a
exception_details:
progress_updates:
2016-08-29T08:30:26.000000 Creating snapshot
2016-08-29T08:30:32.000000 Performing incremental CBT replica for disk: [afsan1] Windows 2012 R2/Disk1.vmdk. Disk size: 14,294,967,296. Changed blocks size: 1,983,040
2016-08-29T08:30:40.000000 Disk [afsan1] Windows 2012 R2/Disk1.vmdk replica progress: 100%
2016-08-29T08:30:48.000000 Removing snapshot

The replication process will perform an app consistent snapshot on the guest OS when available (e.g. VMware quiescing). This ensures that the replicated data is application consistent in case of transactional activities running in the guest OS (e.g. Oracle, MS SQL Server, Exchange Server, etc).

 

How to start a replicated VM on the target cloud

Here’s where Coriolis truly excels: getting virtual machines to run across a wide range of hypervisors and clouds. Replicating the content of a VM is just part of the process; getting the machine to actually run on the target cloud also requires actions particular to the underlying target hypervisor and cloud deployment tools. For example, some guest operating systems will require specific synthetic drivers to run in the best possible way on each of the individual hypervisors such as KVM, ESXi or Hyper-V, while OpenStack needs cloud-init / cloudbase-init configured for proper guest initialization, Azure the WALinuxAgent, and so on. All the required steps are fully automated and transparent for the user.

Once the replication is completed, the source environment becomes redundant thanks to the replica made available on the destination cloud. Thus, in the event of an actual disaster (the source VM is deleted, corrupted, etc.), we can quickly bring the system back up on our target cloud with minimal loss of continuity.
The following command will snapshot the migrated disks / volumes and start the migration process asynchronously:

coriolis migration deploy replica $REPLICA_ID

Here’s some sample output:

task_type: CREATE_REPLICA_DISK_SNAPSHOTS
instance: WebServer
status: COMPLETED
depends_on:
exception_details:
progress_updates:
2016-08-28T21:06:30.000000 Creating replica disk snapshots

id: be587123-f01e-4ddc-a9e2-1474d7b6ca4c
task_type: DEPLOY_REPLICA_INSTANCE
instance: WebServer
status: COMPLETED
depends_on: f3bc535b-70b9-43b2-a6bd-0ec6fb9f21c6
exception_details:
progress_updates:
2016-08-28T21:06:39.000000 Creating migration worker instance Neutron port
2016-08-28T21:06:39.000000 Creating migration worker instance keypair
2016-08-28T21:06:41.000000 Adding migration worker instance floating IP
2016-08-28T21:06:53.000000 Adding migration worker instance security group
2016-08-28T21:06:55.000000 Waiting for connectivity on host: 10.89.13.226:22
2016-08-28T21:07:08.000000 Attaching volume to worker instance
2016-08-28T21:07:15.000000 Attaching volume to worker instance
2016-08-28T21:07:21.000000 Preparing instance for target platform
2016-08-28T21:07:21.000000 Connecting to SSH host: 10.89.13.226:22
2016-08-28T21:07:24.000000 Discovering and mounting OS partitions
2016-08-28T21:08:01.000000 OS being migrated: ('CentOS Linux', '7.2.1511')
2016-08-28T21:08:04.000000 Removing packages: ['open-vm-tools', 'hyperv-daemons']
2016-08-28T21:08:07.000000 Adding packages: ['dracut-config-generic', 'cloud-init', 'cloud-utils', 'parted', 'git', 'cloud-utils-growpart']
2016-08-28T21:08:40.000000 Generating initrd for kernel: 3.10.0-327.el7.x86_64
2016-08-28T21:09:43.000000 Dismounting OS partitions
2016-08-28T21:09:45.000000 Removing worker instance resources
2016-08-28T21:09:56.000000 Renaming volumes
2016-08-28T21:09:57.000000 Ensuring volumes are bootable
2016-08-28T21:09:57.000000 Creating Neutron ports for migrated instance
2016-08-28T21:09:59.000000 Creating migrated instance
2016-08-28T21:10:20.000000 Deleting Glance images

id: d4ebd9b7-76a9-46b5-b7c1-fbfed7aa5b99
task_type: DELETE_REPLICA_DISK_SNAPSHOTS
instance: WebServer
status: COMPLETED
depends_on: be587123-f01e-4ddc-a9e2-1474d7b6ca4c
exception_details:
progress_updates:
2016-08-28T21:10:22.000000 Removing replica disk snapshots

Congratulations, your workload has been successfully migrated!

 

What about testing the replicated workloads?

One important requirement is periodically testing the replicated workloads on the destination cloud. This can easily be done with Coriolis, even while the replicated resources on the source environment continue running unaffected. This also opens the way for numerous testing scenarios (e.g. integration / functional / white box / black box) to be performed on the replicated workloads at any time, providing reassurance that the workloads will operate as expected and allowing a smooth recovery in the event of an emergency.

 

What are the supported guest operating systems?

One of the advantages of Coriolis is that the replica is guest OS agnostic, so differences between guest operating systems come only into play when performing the migration. Coriolis supports migrations for all Windows versions (Windows Server 2008, 2008 R2, 2012, 2012 R2, 2016 including Nano Server, Windows 7, 8, 8.1, 10), Ubuntu 12.04 and above, RHEL / CentOS 6.x and 7.x, SUSE SLE 11 and 12, openSUSE, Debian and Fedora. Support for other Linux distributions can also be added on demand.

 

How can you deploy Coriolis?

Just contact us for pricing and additional info! We are also offering a demo appliance that can be easily deployed in your environment to evaluate Coriolis!

 

[Video demos: https://www.youtube-nocookie.com/embed/YBfGTNEb-8Q and https://www.youtube-nocookie.com/embed/XFTj9q3BK54]

The post Coriolis v2 – Adding DRaaS to Cloud Migrations appeared first on Cloudbase Solutions.

by Alessandro Pilotti at September 16, 2016 11:00 AM

About

Planet OpenStack is a collection of thoughts from the developers and other key players of the OpenStack projects. If you are working on OpenStack technology you should add your OpenStack blog.

Subscriptions

Last updated:
October 01, 2016 06:48 PM
All times are UTC.

Powered by:
Planet