April 17, 2014

SUSE Conversations

OpenStack Icehouse – Cool for Enterprises

OpenStack Icehouse has arrived with spring, but this ninth release of the leading open source project for building Infrastructure-as-a-Service clouds won’t melt under user demands. This is good, since user interest in OpenStack seems to know no limits as evidenced by the three new and sixteen total languages in which OpenStack Dashboard is now available. …

+read more

by Douglas Jarvis at April 17, 2014 06:13 PM

Cody Bunch

Basic Hardening with User Data / Cloud-Init

The ‘cloud’ presents new and interesting issues around hardening an instance. If you buy into the cattle vs. puppies or cloudy application building mentality, your instances or VMs will be very short-lived (hours, weeks, maybe months). The internet, however, doesn’t much care how short-lived they are, and the attacks will begin as soon as you attach to it. Thus the need to harden at build time. What follows is a quick guide on how to do this as part of the ‘cloud-init’ process.

Note: Yes, you can use pre-hardened images. However, keeping them up to date and deploying them everywhere presents its own set of challenges.

Note the second: The steps below only harden some of the things, and only do an ‘ok’ job at it. You will want to build this out as needed per your policies / regulatory requirements.

What is User Data / Cloud-Init?

A full discussion of cloud init is a bit beyond the scope of what we’re doing here. That said, there are some details that will help you understand what we’re doing and how it works. From the OpenStack User Guide:

“User data is the mechanism by which a user can pass information contained in a local file to an instance at launch time. The typical use case is to pass something like a shell script or a configuration file as user data.”

and:

“To do something useful with the user data, you must configure the virtual machine image to run a service on boot that gets user data from the metadata service and takes some action based on the contents of the data. The cloud-init package does exactly this. This package is compatible with the Compute metadata service and the Compute configuration drive.”

Using User Data / Cloud-Init to Harden an Instance

Now that we have an understanding of the mechanism for running boot-time scripts in your images, the next step is to do just that. For this example we’ll be using an Ubuntu 12.04 image. First, create a file with the following contents:

See the gist at https://gist.github.com/bunchc/10999677 for the script contents.
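If the gist is unavailable, a minimal boot-time hardening sketch in cloud-config form might look like the following. Note that the package names and settings here are illustrative assumptions, not the contents of the author's actual script:

```yaml
#cloud-config
# Illustrative hardening steps; adjust to your own policies.
package_update: true
package_upgrade: true
packages:
  - fail2ban
  - unattended-upgrades
runcmd:
  # Disable password and root logins over SSH
  - sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
  - sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
  - service ssh restart
  # Default-deny firewall, allowing only SSH
  - ufw allow 22/tcp
  - ufw --force enable
```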

Now that you have that in a file, you’ll need to fire off a command similar to:

nova boot --user-data ./myfile.txt --image myimage myinstance

Resources

  1. http://docs.openstack.org/user-guide/content/user-data.html
  2. http://plusbryan.com/my-first-5-minutes-on-a-server-or-essential-security-for-linux-servers
  3. http://www.thefanclub.co.za/how-to/how-secure-ubuntu-1204-lts-server-part-1-basics

by OpenStackPro at April 17, 2014 05:35 PM

Matt Fischer

Keystone: Using Stacked Auth (SQL & LDAP)

Most large companies have an Active Directory or LDAP system to manage identity and integrating this with OpenStack makes sense for some obvious reasons, like not having multiple passwords and automatically de-authenticating users who leave the company. However doing this in practice can be a bit tricky. OpenStack needs some basic service accounts. Then your users probably want their own service accounts, since they don’t want to stick their AD password in a Jenkins config. So multiply the number of teams that may use your OpenStack by 4 or 5 and you may have a ballpark estimate of the number of service accounts you need. Then take that number to your AD team, and be sure to run it by your security guys too. This is the response you might get.

Note: All security guys wear cloaks.

The way you can tell how a conversation with a security guy is going is to count the number of facepalms they do.

In this setup you probably also do not have write access to your LDAP box. This means that you need a way to track role assignment outside of LDAP, using the SQL back-end (a subject I’ve previously covered).

So this leaves us with an interesting problem, fortunately one with a few solutions. The one we chose was to use a “stacked” or “hybrid” auth in Keystone: one where service accounts live in Keystone’s SQL back-end, and if users fail to authenticate there, they fall back to LDAP. For role assignment we will store everything in Keystone’s SQL back-end. The good news is that Ionuț Arțăriși from SUSE has already written a driver that does exactly what you need. His driver for Identity and Assignment is available here on github and has a nice README to get you started. I did have a few issues getting it to work, which I will cover below; hopefully this saves you some time in setting this up.

The way Ionut’s driver works is pretty simple: try the username and password first locally against SQL. If that fails, try to use the combo to bind to LDAP. This saves the time of having to bind with a service account and search, so it’s simpler and quicker. You will need to be able to at least bind RO with your standard AD creds for this to work, though.
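The control flow of such a stacked driver can be sketched as follows. This is a simplification, not Ionut's actual code, and the two backend callables are hypothetical stand-ins for Keystone's SQL identity driver and an LDAP simple bind:

```python
# Sketch of stacked (SQL-first, LDAP-fallback) password authentication.
# sql_check and ldap_bind are stand-ins for the real backends.

def authenticate(user, password, sql_check, ldap_bind):
    """Try the local SQL backend first; fall back to an LDAP bind."""
    if sql_check(user, password):      # service accounts live here
        return "sql"
    if ldap_bind(user, password):      # everyone else binds RO to AD/LDAP
        return "ldap"
    raise ValueError("authentication failed")
```

Because the user's own credentials are used for the LDAP bind, no service-account bind-and-search round trip is needed.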

From this point forward, you probably will only have the issues I have if you have an AD/LDAP server with a large number of users. If you’re in a lab environment, just following the README in Ionut’s driver is all you need. For everyone else, read on…

Successes and Problems

After configuring the LDAP driver, I switched over, enabled hybrid identity and restarted keystone. The authentication module seemed to be working great. I did my first test by doing some basic auth and then running a user-list. The auth worked, although my user lacked any roles that allowed him to do a user list. When I set the SERVICE_TOKEN/ENDPOINT and re-ran the user list, well, that was interesting. I noticed that, since I had enabled the secret LDAP debug option (hopefully soon to be non-secret), it was doing A TON of queries, and I eventually hit the server max before setting page_size=1000. Below is an animation that shows what it was like running keystone user-list:

This isn’t really animated, but it is still an accurate representation of how slow it was.

It did eventually finish, but it took 5 minutes and 25 seconds, and ain’t nobody got time for that. One simple and obvious fix for this is to just modify the driver to not pull LDAP users in the list. There are too many to manage anyway, so I made the change and now it only showed local accounts. This was a simple change to the bottom of the driver, to just return SQL users and not query LDAP.

root@mfisch:~# keystone user-list
+----------------------------------+------------+---------+--------------------+
|                id                |    name    | enabled |       email        |
+----------------------------------+------------+---------+--------------------+
| e0ede62ebcb64e489a41f4a9b18cd63c | admin      | True    | root@localhost     |
| f9d76feb7081463f973eeda82c918547 | ceilometer | True    | root@localhost     |
| 176e232176c74164a42e2f773695671b | cinder     | True    | cinder@localhost   |
| fc8007f82b7542e88ca28fab5eda1b5c | glance     | True    | glance@localhost   |
| 635b4022aafc4f5488c048109c7b1665 | heat       | True    | heat@localhost     |
| b3175fc7b62748679b4c958fe89fbdf0 | heat-cfn   | True    | heat-cfn@localhost |
| 4873cd322d0d4582908002b619e3940a | neutron    | True    | neutron@localhost  |
| 37ef623c60474bf5a384a2047719acf4 | nova       | True    | nova@localhost     |
| ff07bd356b8d4959be4326a43c297db0 | swift      | True    | swift@localhost    |
+----------------------------------+------------+---------+--------------------+

After making this change, I went to add the admin role to my AD user and found I had a second problem. It seemed to be relying on my user being in the user list before it would assign them a role.

root@mfisch:~# keystone user-role-add --user-id MyADUser --role-id admin --tenant-id 6031675b7618458b8bb85b92857532b06
No user with a name or ID of 'MyADUser' exists.

I didn’t really want to wait five minutes to assign or list a role, so this needed fixing.

Solutions

It was late on Friday afternoon when I started toying with some solutions for the role issue. The first solution is to add OpenStack users into a special ou so that you can limit the scope of the search. This is done via the user_filter setting in keystone.conf. If this is easy for your organization, it’s a simple fix. It also allows tools like Horizon to show all your users. If you do this you can obviously undo the change to not return LDAP users.
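If your organization can place OpenStack users in a dedicated group or OU, the filter is a one-line change in keystone.conf. The DN below is a made-up example; substitute your own directory layout:

```ini
[ldap]
# Restrict user queries to members of a dedicated OpenStack group
# (example DN -- adjust to your directory).
user_filter = (memberOf=cn=openstack-users,ou=groups,dc=example,dc=com)
```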

The second idea that came to mind was that every time an AD user authenticated, I could just make a local copy of that user, without a password and set up in a way that they’d be unable to authenticate locally. I did manage to get this code functional, but it has a few issues. The most obvious one is that when users leave the company, there’s no corresponding clean-up of their records. The second is that it doesn’t allow for any changes to things like names or emails once they’ve authenticated; in general it’s bad to keep two copies of this info anyway. I did, however, manage to get this code working, and it’s available on github if you ever have a use for it. It is not well tested and not very clean in its current state. With this change, I had this output (truncated):

root@mfisch:~# keystone user-list
+----------------------------------+------------+---------+--------------------+
|                id                |    name    | enabled |       email        |
+----------------------------------+------------+---------+--------------------+
| MyADUser                         | MyADUser   | True    | my corporate email |
| e0ede62ebcb64e489a41f4a9b18cd63c | admin      | True    | root@localhost     |
| f9d76feb7081463f973eeda82c918547 | ceilometer | True    | root@localhost     |
...
+----------------------------------+------------+---------+--------------------+

Due to the concerns raised above, we decided to drop this idea.

My feelings on this solution…

The final idea, which came after some discussions, was to figure out why the heck a user list was needed to add a role. I dug into the client code, hacked a fix, and then asked on IRC. It turns out I wasn’t the first person to notice it. The bug (#1189933) is that when the user ID is not a UUID, the client thinks you must be passing in a name and does a search. The fix for this is to pull a copy of python-keystoneclient that’s 0.4 or newer. We had 0.3.2 in our repo. If you upgrade this package to the latest (0.7.x) you will need a copy of python-six that’s 1.4.0 or newer. This should all be cake if you’re using the Ubuntu Cloud Archive.
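The heart of that bug is a heuristic along these lines: anything that does not parse as a UUID is assumed to be a name, which triggers the (slow) user search. This is a simplified reconstruction for illustration, not the actual client code:

```python
import uuid

def is_uuid_like(value):
    """Return True if value parses as a hex UUID (dashes optional)."""
    try:
        return uuid.UUID(value).hex == value.replace("-", "").lower()
    except (ValueError, AttributeError, TypeError):
        return False

# A Keystone-generated ID passes the check; an AD username does not,
# so older clients fell back to listing/searching every user.
```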

Final State

In the final state of things, I have taken the code from Ionut and:

  1. disabled listing of LDAP users in the driver
  2. upgraded python-six and python-keystoneclient
  3. packaged and installed Ionut’s drivers (as soon as I figure out the license for them, I’ll publish the packaging branch)
  4. reconfigured keystone via puppet

So far it’s all working great, with just one caveat: if users leave your organization they will still have roles and tenants defined in Keystone’s SQL. You will probably need to define a process for cleaning those up occasionally. As long as the users are deactivated in AD/LDAP, though, the role/tenant mappings for them should be relatively harmless cruft.

by Matt Fischer at April 17, 2014 04:42 PM

OpenStack Blog

Open Mic Spotlight: Joshua Hesketh

This post is part of the OpenStack Open Mic series to spotlight the people who have helped make OpenStack successful. Each week, a new contributor will step up to the mic and answer five questions about OpenStack, cloud, careers and what they do for fun. If you’re interested in being featured, please choose five questions from this form and submit!

Joshua Hesketh is a software developer for Rackspace Australia working on upstream OpenStack. He works from his home in Hobart, Tasmania. Joshua is currently President of Linux Australia, previously the co-chair for PyCon Australia and a key organizer for linux.conf.au. He has an interest in robotics, having recently completed a degree in mechatronic engineering. Check out his blog here.

1. Finish the sentences. OpenStack is great for _______. OpenStack is bad for ______.

OpenStack is great for freedom. OpenStack is bad for proprietary competitors.

2. How did you learn to code? Are you self-taught or did you learn in college? On-the-job?

I’m self-taught – which might explain some of my bad habits! I learned a bit during university while picking up most of my knowledge from being involved in open source projects.

3. What does “open source” mean to you?

To me, open source is a superior development model in which everybody wins – from the users, to the developers and businesses involved. Much more value can be gained from using open source where you can build on the shoulders of giants, collaborate on complicated problems and avoid vendor lock-ins. As users you have the flexibility to use a product to its fullest potential whilst, as developers, having the ability to modify and customize it as needed.

4. Where is your favorite place to code? In the office, at a local coffee shop, in bed?

I love working from home. I get to wake up to this view every morning. When I’m not at my home office I spend hours at my favourite cafe, Villino, working while enjoying a flat white.

5. What drew you to OpenStack?

One of the big drawcards for me is the community within OpenStack which is really special. It’s such a large and active project, with hundreds of developers all working in unison. The sense of community is reflected in everyone being nice, approachable and willing to go out of their way to help solve your problem. Everybody is working towards the same goal – to better OpenStack.

This is one of the great success stories of the project – being able to scale its developer base so well. Granted, there are still issues in the getting started pipeline as a consequence of size, but overall the project is very well managed. I am a very big fan of the structure and operation of the OpenStack Foundation. The membership models and egalitarianism are very well set out.

by OpenStack at April 17, 2014 04:25 PM

Florian Haas

Hello Icehouse!

The OpenStack Icehouse release is here, and it's time to give another shout-out to all those fantastic OpenStack developers and contributors out there. Thanks for building something great!

And we have two humble announcements of our own to make on this day:

read more

by florian at April 17, 2014 03:48 PM

Adam Young

Configuring mod_nss for Horizon

Horizon is the Web Dashboard for OpenStack. Since it manages some very sensitive information, it should be accessed via SSL. I’ve written up in the past how to do this for a generic web server. Here is how to apply that approach to Horizon.

These instructions are based on a Fedora 20 and packstack install.

As a sanity check, point a browser at your Horizon server before making any changes. If the hostname was not set before you installed packstack, you might get a bad request header exception suggesting you need to set ALLOWED_HOSTS. If so, edit /etc/openstack-dashboard/local_settings:

ALLOWED_HOSTS = ['192.168.187.13','ayoungf20packstack.cloudlab.freeipa.org', 'localhost', ]

Once Horizon has been shown to work on port 80, proceed to install the Apache HTTPD module for NSS:

sudo yum install mod_nss

While this normally works for HTTPD, something is different with packstack; all of the HTTPD module loading is done with files in /etc/httpd/conf.d/ whereas the mod_nss RPM assumes the Fedora approach of putting them in /etc/httpd/conf.modules.d/. I suspect it has to do with the use of Puppet. To adapt mod_nss to the packstack format, after installing mod_nss, you need to mv the file:

sudo mv /etc/httpd/conf.modules.d/10-nss.conf   /etc/httpd/conf.d/nss.load

Note that mv keeps SELinux happy, but cp does not; use ls -Z to confirm:

$ ls -Z /etc/httpd/conf.d/nss.load 
-rw-r--r--. root root system_u:object_r:httpd_config_t:s0 /etc/httpd/conf.d/nss.load

If you get a bad context there, the cheating way to fix it is to yum erase mod_nss, rerun yum install mod_nss, and then do the mv. That is what I did.

Edit /etc/httpd/conf.d/nss.conf:

#Listen 8443
Listen 443

and in the virtual host entry change 8443 to 443

Add the following to /etc/httpd/conf.d/openstack-dashboard.conf

<VirtualHost *:80>
   ServerName ayoungf20packstack.cloudlab.freeipa.org
   Redirect permanent / https://ayoungf20packstack.cloudlab.freeipa.org/dashboard/
</VirtualHost>

replacing ayoungf20packstack.cloudlab.freeipa.org with your hostname.

Lower in the same file, in the <Directory> section, add:

  NSSRequireSSL

to enable SSL.

SSL certificates really should not be self-signed. To have a real security strategy, your X509 certificates should be managed via a Certificate Authority. Dogtag PKI provides one, and is deployed with FreeIPA. So, for this setup, the Horizon server is registered as an IPA client.

There will be a self-signed certificate in the NSS database from the install. We need to remove it:

sudo certutil -d /etc/httpd/alias/ -D -n Server-Cert

In order to fetch the certificates for this server, we use the IPA command that tells certmonger to fetch and track the certificate.

ipa service-add HTTP/`hostname`
sudo ipa-getcert request -d /etc/httpd/alias -n Server-Cert -K HTTP/`hostname` -N CN=`hostname`,O=cloudlab.freeipa.org

If you forgot to add the service before requesting the cert, as I did on my first iteration, the request is on hold: it will be serviced in 12 (I think) hours by certmonger resubmitting it, but you can speed up the process:

sudo getcert resubmit -n Server-Cert  -d /etc/httpd/alias

You can now see the certificate with:

 sudo certutil -d /etc/httpd/alias/ -L -n Server-Cert

Now, if you restart the HTTPD server,

sudo systemctl restart httpd.service

and point a browser at http://hostname, you should get redirected to https://hostname/dashboard and a functioning Horizon application.

Note that for devstack, the steps are comparable, but different:

  • No need to mv the 10-nss.conf file from modules
  • The Horizon application is put into /etc/httpd/conf.d/horizon.conf
  • The Horizon app is in a virtual host of <VirtualHost *:80>; you can’t just change this to 443, or you lose all of the config from nss.conf. The two VirtualHost sections should probably be merged.

by Adam Young at April 17, 2014 02:21 PM

Andreas Jaeger

Changes for importing translations to OpenStack

After each commit, most OpenStack projects send updated files for translation to Transifex. Also, every morning any translated files get merged back to the projects as a "proposal" - a normal review with all the changes in it.
Quite a lot of projects had broken translation files - files with duplicate entries that Transifex rejected - and thus no updates happened for some time. The problem is that these jobs run automatically and nobody gets notified when they fail.

Clark Boylan, Devananda van der Veen and myself have recently looked into this and produced several fixes to get this all working properly:
  • The broken translation files (see bug 1299349) have been fixed in time for the Icehouse release.
  • The gating has been enhanced so that no broken files can go in again.
  • The scripts that send the translations to and retrieve them from transifex have been greatly improved.
The scripts have been analyzed and a couple of problems fixed, so no more obsolete entries should show up. Additionally, the proposal scripts - those that download from Transifex - have been changed to not propose any files where only dates or line numbers have changed. This last change is a great optimization for the projects. For example, the sahara project got a proposal every morning that contained two line changes for each .po file - just updates of the dates. Now they do not get any update at all unless there is a real change in the translation files. A real change is either a new translation or a change of strings. For example, today's update to the operations-guide contained only a single file (see this review), since the only change was a translation update by the Czech translation team - and sahara did not get an update at all.
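The "real change" test can be approximated by stripping the volatile lines from a .po file before comparing. This is a rough simplification of what the proposal scripts do, not the actual infra code:

```python
def significant_lines(po_text):
    """Drop lines that change on every export: header dates and the
    '#: file:line' source-location comments."""
    drop = ('"POT-Creation-Date', '"PO-Revision-Date', '#:')
    return [line for line in po_text.splitlines()
            if not line.lstrip().startswith(drop)]

def has_real_change(old_po, new_po):
    """True only if translations or strings differ, not just dates/lines."""
    return significant_lines(old_po) != significant_lines(new_po)
```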

New User: OpenStack Proposal Bot

Previously the translation jobs were run as the user "Jenkins"; they have now been changed to "OpenStack Proposal Bot".

Now, another magic script showed up and welcomed the Proposal Bot to OpenStack. Unfortunately, the workflow did not work for most projects - the bot forgot to sign the contributor license agreement. I'm sure the infra team will find a way to have the bot sign the CLA, and starting tomorrow the import of translations will work again.

by Andreas Jaeger (noreply@blogger.com) at April 17, 2014 08:14 AM

April 16, 2014

The Official Rackspace Blog » OpenStack

What Can Object Storage Do For Me Today?

Introduction

In my previous blog post on Object Storage, I provided an overview of what Object Storage is, and how it compares to conventional storage platforms. In this post, I will discuss what benefits Object Storage can provide for you today.  As there are a variety of solutions to choose from, each offering different pros, cons and price-points, I will focus on OpenStack Swift, the open-source Object Storage component of OpenStack, as it is vendor-agnostic and freely available to everyone.

Three Key Advantages

Price

Object Storage vs. Block/File Storage

Block and File Storage solutions may be cheaper than Object Storage at small data sizes. In the low tens of TB, the economics of Object Storage are not very compelling. The overhead of 3x replication and having dedicated management/network infrastructure is significant. However, most commodity storage solutions begin to become challenging to scale once more than a single node worth of storage is required. By the time you need 10 or more devices, you are generally either taking on a large amount of administrative overhead (managing volumes/LUNs), or are starting to look at expensive proprietary solutions. This is the point at which Object Storage begins hitting its stride.

Object Storage prices today are typically less than $0.10/GB per month depending on platform and quantity. In the private cloud space, these costs can be significantly lower: you pay a premium to a public cloud provider both in terms of their profit margin and in terms of renting resources on a utility basis.

Object Storage vs. Tape

Simply put, when you have valuable data, the only way to store it more cheaply than Object Storage is on tape. Tape is still by far the cheapest option for long-term cold storage, and rumors of its death have been greatly exaggerated.

On the other hand, tape is basically a black hole for data from the standpoint of day-to-day operations. Tape is not appropriate for data that needs to be accessed regularly, or for data that might need to be retrieved rapidly at some point in the future.  In short, tape is not suitable for data that needs to be alive to any extent.

Object Storage provides the closest durability and cost profile to tape on top of a solution that provides hot storage, and can, for example, act as a backend for Hadoop. There are a variety of other useful features, such as multiple ways to span data across several geographical locations, access control, the ability to make content public, CDN integration and others, that tape cannot provide.

Durability

By combining 3N redundancy, intelligent data placement, automatic recovery of lost or corrupted objects and automated handling of drive failures (ensuring 3N redundancy even in the period prior to drive replacement), Object Storage provides extremely high levels of durability when compared to conventional storage options. There are also ways to automatically synchronize data between multiple clusters in separate geographical regions, providing durability characteristics that suit virtually any use case.

Without going into excruciating detail, the type of events that would cause permanent data loss in a large-scale Object Storage cluster would typically be catastrophic in nature (the sort of thing that only having a secondary site would protect against: something that is also quite easy to do with an Object Storage platform).

An interesting mathematical analysis of data loss characteristics indicates a worst-case mean time to data loss of over 150 years using consumer quality drives with 3N redundancy. Imagine the cost and complexity of achieving that type of durability number using conventional storage technologies!
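As a back-of-envelope illustration of how such numbers arise (the parameters below are illustrative assumptions, not the figures from the cited analysis), a common approximation for a triple-replicated placement group is MTTF³ / (6 · MTTR²), divided by the number of groups in the cluster:

```python
# Back-of-envelope MTTDL for 3x replication. All parameters are
# illustrative assumptions, not measured values.
mttf_h = 300_000      # assumed consumer-drive mean time to failure (hours)
mttr_h = 72           # assumed time to re-replicate a lost copy (hours)
groups = 100_000      # assumed number of 3-replica placement groups

# A group loses data only if both remaining replicas fail during the
# repair window: MTTDL_group ~= MTTF^3 / (6 * MTTR^2).
mttdl_group_h = mttf_h ** 3 / (6 * mttr_h ** 2)
mttdl_cluster_years = mttdl_group_h / groups / (24 * 365)

print(round(mttdl_cluster_years))  # on the order of a thousand years
```

Even with pessimistic inputs, the result comfortably clears the 150-year worst case quoted above.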

Features

While enterprise storage offerings typically offer a variety of compelling features (albeit with a corresponding price tag attached), the feature-set on most free/open-source storage products is sorely lacking when it comes to multi-petabyte storage requirements. This is another place where Object Storage systems shine.  Swift Object Storage features include:

  • Self-Healing: Automatic identification of failed drives and replication of data to preserve 3N redundancy.
  • Massively Scalable (with linear performance improvements with scale): No single points of failure and full horizontal scalability of all services means that environment-level performance increases steadily along with the size of the environment.
  • Scalable Metadata: Decentralized object metadata means that billions of objects with megabytes of metadata per object can be managed without performance degradation or the need for an external database to manage data (something that becomes prohibitive at scale). Note: One current downside to Object Storage is that metadata search is currently challenging to implement. We will discuss this point further in an upcoming post on the future of Object Storage.
  • Multi-region: Swift supports two methods of implementing multi-region clusters. First, specific containers (analogous to S3 buckets) can be set to synchronize between two distinct Swift clusters (“container sync”).  Second, a true multi-region cluster can be set up where replicas are distributed across two or more clusters.
  • Secure Multi-tenancy: Swift Object Storage handles multiple accounts, and allows for total isolation of the data associated with an account (in other words, users cannot access one another’s data). There is also the ability to share data between accounts, or even publicly.

Summary

Object Storage provides a variety of compelling benefits, and should be considered as a “first-class” shared storage option when designing scalable infrastructures.

by Jonathan Kelly at April 16, 2014 07:00 PM

Red Hat Stack

The Road To High Availability for OpenStack

Why is OpenStack High Availability Important?
Many organizations choose OpenStack for its distributed architecture and its ability to deliver an Infrastructure-as-a-Service environment for scale-out applications to run on top of, whether for private on-premise clouds or public clouds. It is quite common for OpenStack to run mission-critical applications. OpenStack itself is commonly deployed in a Controller/Network-Node/Computes layout, where the controller runs management services such as nova-scheduler, which determines how to dispatch compute resources, and the Keystone service, which handles authentication and authorization for all services.

Although failure of the controller node would not cause disruption to already running application workloads on top of OpenStack, for organizations running production applications it is critical to provide 99.999% uptime of the control plane of their cloud, and deploy the controller in a highly available configuration so that OpenStack services are accessible at all times and applications can scale-out or scale-in according to workloads.

Addressing High Availability Needs
Deploying a highly available controller for OpenStack can be achieved in various configurations, each serving a certain set of demands and introducing a growing set of prerequisites. An OpenStack environment consists of stateless, shared-nothing services that serve their APIs - Keystone, Glance, Swift, Nova-scheduler, Nova-api, Neutron, Horizon, Heat, Ceilometer, etc. - and underlying infrastructure components that OpenStack services use to communicate and save persistent data: a MariaDB database, and a message broker - RabbitMQ - for inter-service communication.

Maintaining OpenStack services’ availability and uptime can be achieved with a fairly simple Active/Passive cluster configuration and a virtual IP address forwarding communication to the active node. As the load and demand on OpenStack services grow, organizations are interested in the ability to add nodes and scale out the controller plane. Building a scale-out controller requires setting up all services and infrastructure components (database and message broker) in an Active/Active configuration, confirming that they are capable of adding more nodes to the cluster as load grows, and balancing the API request load between the nodes.

High Availability Architecture for RHEL-OSP (Red Hat Enterprise Linux OpenStack Platform)
We are heavily investing to provide a fully supported, out of the box, Active/Active high availability solution for OpenStack services and underlying infrastructure components based on mature industry proven open source technologies. In a multi-controller layout services run on all controller nodes in a highly available clustered configuration.

The OpenStack Platform 5.0 high availability solution uses Pacemaker to construct Active/Active clusters for OpenStack services and the HAProxy load balancer. API calls are load balanced through clustered HAProxy in front of the services, where every service has its own virtual IP (VIP). Such a setup makes it easy to customize layouts and segregate services as needed. Galera is used to synchronize the persistent data layer across the running database nodes. Galera is a synchronous multi-master cluster for the MariaDB database, handling synchronous replication. This enables an Active/Active scale-out of the database layer without requiring shared storage.
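As an example, the Keystone public API might be balanced across three controllers with an HAProxy stanza along these lines (the IPs and server names here are illustrative placeholders, not part of the product's shipped configuration):

```
listen keystone_public
    bind 192.0.2.10:5000          # virtual IP (VIP) for the Keystone API
    balance roundrobin
    option tcpka
    server controller1 192.0.2.11:5000 check inter 2000 rise 2 fall 5
    server controller2 192.0.2.12:5000 check inter 2000 rise 2 fall 5
    server controller3 192.0.2.13:5000 check inter 2000 rise 2 fall 5
```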

Pacemaker clustered load balancer

Out Of The Box With Foreman Openstack Installer
To make the high availability solution for OpenStack Platform 5.0 extremely easy to consume and set up, we are fully integrating it with a project named StayPuft. StayPuft is a Foreman user interface plugin which aims to make it easy to deploy complex production OpenStack deployments. StayPuft will be delivered as part of the Foreman OpenStack Installer for OpenStack Platform 5.0.

Foreman OpenStack Deployment


by Arthur Berezin at April 16, 2014 04:30 PM

Mirantis

Trusted Cloud computing with Intel TXT: The challenge

Interested in hearing more about this topic? Christian will be co-presenting Learning to Trust the Cloud/Securing OpenStack with Intel Trusted Computing at the OpenStack Summit in Atlanta on May 12 at 2:50pm.

Editor’s Note:  We published this post earlier this year, but with the topic’s selection for this year’s summit — congratulations, Christian! — we thought we’d feature it again. We hope you find it useful.

In today’s connected environments, attacks on compute infrastructure are ubiquitous. Major players have been compromised by hackers and malware, with damages inflicted both to their reputation and their business. Protecting the infrastructure from external and internal threats is an important part of operating production grade cloud environments.

Approaches to meeting this challenge range from hoping it will not hit one’s own environment to fully locking down the environment with heavy gatekeepers and restrictive policies.

What is Intel TXT?

Intel Trusted Execution Technology (TXT) is a combination of hardware and software aimed at securing the execution of sensitive workloads.

In contrast to solutions that protect the Operating System, Intel TXT builds a chain of trust from the system firmware all the way to the server or hypervisor to prevent attacks on system firmware or BIOS, MBR, boot loader, OS and hypervisor.

Every component in this chain is verified against known good states and, depending on the result, marked either trusted or untrusted.

This approach allows detection not only of threats to the OS itself, such as viruses, but also of attacks on the configuration and even manipulation of the server’s boot firmware and hardware. When a breach is detected, workloads that require secure execution cannot be executed on this server.

How does this approach meet the challenge?

An entity attacking a cloud environment can choose many different paths, but unless the workload itself is attacked, the attack will leave traces in one or more of the Platform Control Registers (PCR). A changed value in a PCR results in a break in the chain of trust, which is then detected by TXT and leads to the resource being marked as untrusted. This happens both during boot and while the platform is running.

When a workload is scheduled, the trust status of the potential compute resources is verified by the scheduler. New workloads are only scheduled on compute resources that are still trusted.

Components to an Intel TXT environment

Servers used in Intel TXT contain a number of components to allow calculation of the required fingerprints and enable a trusted environment.


  • The Processor and server chipset must support Intel TXT extensions.
  • A TPM module (usually third party silicon) allows storing vendor and owner policies for ‘known good’ state.
  • The TPM also allows signing of PCR values that are transmitted to the attestation service.
  • BIOS and OS must contain Authenticated Code Modules (ACM) to build a complete chain of trust for TXT support.
  • A Launch Control Policy (LCP) allowing comparison of Platform Control Registers (PCR) with known good values.


During system boot and operation, the PCRs get populated with values that can then be compared locally with values in the TPM and remotely with known good values on the Attestation Server.

The OpenAttestation Server is a service that can be run on a separate compute resource, or if desired, on a virtual machine. It confirms or denies the Trusted status of a system based on PCR values submitted to it.

Integration with OpenStack

OpenStack Grizzly and newer versions provide a Trusted Filter for Filter Scheduler that uses Intel TXT to schedule workloads requiring trusted execution only to trusted compute resources. Clusters can have both trusted and untrusted compute resources.

Workloads not requiring trusted execution can be scheduled on any node, depending on utilization, while workloads with a trusted execution requirement will be scheduled only to trusted nodes.
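On the Nova side, this boils down to adding the TrustedFilter to the scheduler’s filter list and pointing it at the attestation service. A sketch, with placeholder host names and values modeled on the trusted-compute-pools documentation:

```ini
# nova.conf fragment (illustrative values)
[DEFAULT]
scheduler_default_filters = AvailabilityZoneFilter,RamFilter,ComputeFilter,TrustedFilter

[trusted_computing]
attestation_server = oat.example.com
attestation_port = 8443
attestation_server_ca_file = /etc/nova/oat_ca.crt
attestation_api_url = /OpenAttestationWebServices/V1.0
attestation_auth_blob = i-am-openstack
```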

An overview of the flow of information on a trusted compute request can be found in the OpenStack documentation:


Fig. 1:  Trusted computing attestation process. (Source: http://docs.openstack.org/grizzly/openstack-compute/admin/content/trusted-compute-pools.html, License: Apache 2.0)

In this graph the OpenAttestation Server is communicating with all compute nodes to determine a pool of trusted resources [1]. An API request is received with trust_lvl set to trusted [2]. The scheduler reaches out to the OpenAttestation Server to determine a trusted resource [3] via a RESTful API call. Upon receiving a pool of trusted resources, the scheduler schedules the workload on a machine inside the trusted compute pool [4].

Implementation example of an OpenStack cluster with Intel TXT support

Intel TXT support can be added to an existing cluster, provided the hardware is TXT capable. In this case, an existing cluster was modified.

An OpenAttestation Server was set up first. It is possible to use a VM inside the cluster to deploy OpenAttestation, but for security and maintainability reasons, the decision was made to use an external host for this environment. At the moment OpenAttestation 1.6, 2.0 and 2.1 are available, but Intel recommended using 1.6 for this project.

The TPM hardware and Intel TXT were enabled in the BIOS of all compute nodes that were to be designated as trusted. OpenAttestation Clients were installed on the controllers and all trusted compute nodes. Initial values of the PCRs were then added to the OpenAttestation Server.

The OpenStack configuration was modified to use TrustedScheduler instead of the default scheduler.

A trusted flavor was created to allow distinction between workloads that require trusted execution and workloads that don’t. To achieve this, the trust:trusted_host flag must be set in the newly created instance. Alternatively one or more existing flavors can be designated as trusted.
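A sketch of what this looks like with the nova CLI (flavor name, ID and sizes here are arbitrary):

```shell
# Create a flavor and mark it as requiring a trusted host
nova flavor-create m1.trusted 100 2048 20 2
nova flavor-key m1.trusted set trust:trusted_host=trusted

# Instances booted with this flavor are only scheduled onto attested nodes
nova boot --flavor m1.trusted --image my-image --key-name my-key trusted-vm
```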

It was demonstrated that workloads scheduled with the trusted flavor would only be scheduled onto the trusted compute nodes, whereas workloads that were launched with a non-trusted flavor would be scheduled to any host.

Further security considerations

Attacks on an application that do not leave traces at the system level (e.g. SQL injection) will not trip the Trusted status of a compute node in Intel TXT. This is correct and expected behavior, because in this case the application, not the platform, is compromised. Applications requiring a high trust level must be secured separately with a combination of secure design, development best practices and thorough testing.

Finally, operational security with monitoring, security tests and audits is necessary to spot threats before they develop into breaches.

Conclusion

Platform security with Intel TXT is a big step in the direction of securing OpenStack cloud platforms with a chain of trust spanning hardware, firmware and operating system. Integration into OpenStack is available and has proven reliable. Being able to run a cluster with both trusted and untrusted hosts strikes a balance between administration effort and security requirements.

However, Intel TXT is not a monolithic security solution; in conjunction with application-level security, monitoring and security audits, it is a cornerstone of a successful cloud security concept.

The post Trusted Cloud computing with Intel TXT: The challenge appeared first on Mirantis | The #1 Pure Play OpenStack Company.

by Christian Huebner at April 16, 2014 12:51 PM

Red Hat Stack

An Icehouse Sneak Peek – OpenStack Networking (Neutron)

Today’s datacenter networks contain more devices than ever before; servers, switches, routers, storage systems, dedicated network equipment and security appliances – many of which are further divided into virtual machines and virtual networks. Traditional network management techniques generally fall short of providing a truly scalable, automated approach to managing these next-generation networks. Users expect more control and flexibility with quicker provisioning and monitoring.

OpenStack Networking (Neutron) is a pluggable, scalable and API-driven system for managing networks and IP addresses. Like other aspects of the cloud operating system, it can be used by administrators and users to increase the value of existing datacenter infrastructure. Neutron prevents the network from being the bottleneck or limiting factor in a cloud deployment and gives users real self service over their network configurations.
Starting in the Folsom release, OpenStack Networking, then called Quantum, became a core and supported part of the OpenStack platform, and is considered to be one of the most exciting projects, with great innovation around network virtualization and software-defined networking (SDN). The general availability of Icehouse, the ninth release of OpenStack, is just around the corner, so I would like to highlight some of the key features and enhancements made by the contributors in the community to Neutron.

Modular Layer 2 (ML2) plugin enhancements
The Modular Layer 2 (ML2) plugin is a framework allowing OpenStack Networking to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers. It currently works with the existing Open vSwitch, Linux bridge, and Hyper-V L2 agents as what ML2 defines as ‘MechanismDrivers’, and is intended to replace and deprecate the monolithic plugins associated with those agents. The ML2 framework is also intended to greatly simplify adding support for new L2 networking technologies, requiring much less initial and ongoing effort than would be required to add a new monolithic core plugin.

Starting with Icehouse, ML2 will support SR-IOV PCI passthrough; SR-IOV is a specification that allows a PCIe device to appear to be multiple separate physical PCIe devices. In the case of networking, this allows assigning a Virtual Function of an SR-IOV capable network card to a virtual machine as a network device (vNIC). This enables network traffic to bypass the software switch layer, which can offer better performance characteristics. SR-IOV introduced the need to enhance ML2 with support for requesting a certain ‘vnic_type’ to be bound on a Neutron port. Depending on the application, a Neutron port can now be requested to be realized as a normal vNIC (implemented using a virtio NIC) or as PCI passthrough (using SR-IOV).
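Assuming an SR-IOV-capable deployment, requesting such a port might look like this with the neutron CLI (network and image names are placeholders):

```shell
# Request a port realized as an SR-IOV virtual function instead of a virtio vNIC
neutron port-create private --binding:vnic_type direct

# Boot an instance attached to that port via its UUID
nova boot --flavor m1.small --image my-image --nic port-id=<port-uuid> sriov-vm
```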

Operational status of floating IPs
OpenStack instances receive a private IP address through which they can reach each other. In order to access these instances from other machines in the environment and from external networks like the Internet, the instances need to be allocated a “floating IP” – which is implemented on the neutron-l3-agent as NAT (iptables) rules. Originally, floating IPs did not have an operational status, so the user had no way to check whether the floating IP had actually been created or not. This functionality was added in Icehouse through a change in the core API.
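As an illustration, the new attribute appears when querying a floating IP (the ID is a placeholder, and the exact statuses reported depend on the plugin):

```shell
# Icehouse adds a 'status' field to the floating IP attributes,
# e.g. ACTIVE once the l3-agent has installed the NAT rules
neutron floatingip-show <floatingip-id>
```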

Enhanced scheduling of virtual routers
Neutron has an API extension to allow administrators and tenants to create virtual routers in order to route between networks and to access external networks. Virtual routers are scheduled in the neutron-l3-agent, which uses the Linux IP stack with iptables to perform L3 forwarding and NAT. In order to support multiple routers with potentially overlapping IP addresses across the environment, the l3-agent defaults to using Linux network namespaces to provide isolated forwarding instances.

Depending on your OpenStack cloud design, you may want to use more than one l3-agent, usually to provide fault-tolerance or high-availability, or to distribute the load of this crucial component. The default scheduler available in Neutron (‘chance’) places virtual routers on l3-agents randomly, which may or may not fit your design. Starting with Icehouse, it will be possible to use a more efficient scheduler (‘leastrouter’) that schedules a virtual router on the l3-agent with the lowest number of currently running routers.
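Selecting the scheduler is a one-line change in neutron.conf; a sketch (the driver path below matches the Icehouse source tree, so verify it against your release):

```ini
# neutron.conf
[DEFAULT]
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler
```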

OpenDaylight plugin for Neutron
The OpenDaylight Project is a collaborative open source project hosted by the Linux Foundation. The goal of the project is to accelerate the adoption of software-defined-networking (SDN) and create a solid foundation for Network Functions Virtualization (NFV).

OpenDaylight now has a MechanismDriver for the ML2 plugin to enable communication between Neutron and OpenDaylight. OpenDaylight offers an SDN controller which uses OVSDB (Open vSwitch Database Management Protocol) for southbound configuration of vSwitches, and can now manage network connectivity and initiate GRE or VXLAN tunnels for OpenStack Compute nodes.

New third-party plugins
As mentioned before, Neutron has a pluggable infrastructure which uses plugins to introduce advanced network capabilities. Alongside the open-source plugins (or ML2 MechanismDrivers) such as Linux bridge, Open vSwitch or OpenDaylight, different networking vendors offer plugins to interact with their hardware or software solutions. Plugins to be included in the Icehouse release come from Brocade, Big Switch Networks, Embrane, Midonet, Mellanox, Nuage, Radware, Ryu, IBM, and others.

Continuation of Nova network
With the introduction of Neutron, development effort on the initial networking code that remains part of the Compute component (Nova) has gradually lessened. While many still use nova-network in production, there has been a plan to remove the code in favor of the more flexible and feature-rich OpenStack Networking. As there are still some use-cases where nova-network fits, and as there is no clean migration path from nova-network to Neutron, it was decided that nova-network will not be deprecated in the Icehouse release. In addition, to a limited degree, patches to nova-network will now again begin to be accepted to support deployments in production.


Get Started with OpenStack Icehouse
If you want to try out OpenStack Icehouse, or to check out some of the above features yourself, please visit our RDO site. We have documentation to help you get started, forums where you can connect with other users, and community-supported packages of the most up-to-date OpenStack releases available for download. You can also try the OpenDaylight integration with Neutron in this simple step-by-step procedure.
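For reference, the RDO quickstart on a fresh RHEL/CentOS machine boils down to a few commands; treat this as a sketch, since the release RPM URL changes over time:

```shell
# Enable the RDO repository and install the all-in-one installer
sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
sudo yum install -y openstack-packstack

# Deploy a single-node OpenStack cloud
packstack --allinone
```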

If you are looking for enterprise-level support, or information on partner certification, Red Hat also offers Red Hat Enterprise Linux OpenStack Platform.

by Nir Yechiel at April 16, 2014 12:00 PM

Steve Hardy

Heat auth model updates - part 2 "Stack Domain Users"

As promised, here's the second part of my updates on the Heat auth model, following on from part 1 describing our use of Keystone trusts.

This post will cover details of the recently implemented instance-users blueprint, which makes use of keystone domains to contain users related to credentials which are deployed inside instances created by heat.  If you just want to know how the new stuff works, you can skip to the last sections :)

So...why does heat create users at all?

Let's start with a bit of context.  Heat has historically needed to do some or all of the following:
  1. Provide metadata to agents inside instances, which poll for changes and apply the configuration expressed in the metadata to the instance.
  2. Signal completion of some action, typically configuration of software on a VM after it is booted (because nova moves the state of a VM to "Active" as soon as it spawns it, not when heat has fully configured it)
  3. Provide application level status or metrics from inside the instance, e.g to allow AutoScaling actions to be performed in response to some measure of performance or quality of service.
Heat provides APIs which enable all of these things, but all of those APIs require some sort of authentication, e.g. credentials, so that whatever agent is running on the instance is able to access them.  So credentials must be deployed inside the instance. Here's how things work if you're using the heat-cfntools agents:

heat-cfntools agents data-flow with CFN-compatible API's


The heat-cfntools agents use signed requests, which require an ec2 keypair created via keystone. The keypair is then used to sign requests to the heat CloudFormation- and CloudWatch-compatible APIs, which are authenticated by heat via signature validation (using the keystone ec2tokens extension).
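For context, such an ec2 keypair can be created with the keystone CLI; a sketch with placeholder IDs:

```shell
# Create an ec2-style access/secret keypair for a user in a tenant
keystone ec2-credentials-create --user-id <user-id> --tenant-id <tenant-id>

# The returned access/secret pair is what the in-instance agent uses to sign
# requests; heat verifies the signatures via keystone's ec2tokens extension.
```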

The problem is, ec2 keypairs are associated with a user.  And we don't want to deploy credentials directly related to the stack owner, otherwise any compromise of the (implicitly untrusted) instance could result in a cascading compromise where an attacker could take control of anything the stack-owning user has permission to access.

I've used cfntools/ec2tokens as an example, but the same issue exists if you use any credential available via keystone (token, username/password) which can be used to authenticate with the heat APIs.

So we need separation/isolation of the credentials deployed in the instance, such that we can limit the access allowed to the minimum necessary to make heat work.  Our first attempt at this did the following:
  • Create a new user in keystone, in the same project as the stack owner (either explicitly in the template via User and AccessKey resources, or for some resources such as WaitConditionHandle and ScalingPolicy we do it internally to obtain an ec2 keypair for generation of a pre-signed URL)
  • Add the "heat stack user" to a special role, default "heat_stack_user" (configurable via the heat_stack_user_role in heat.conf)
  • Limit the API surface accessible to the "heat_stack_user" via policy.json, with the expectation that access to other services will be restricted in a similar way, or denied completely via network separation/firewall rules.
This approach is flawed, and led to this long-standing bug; there are multiple problems:
  • It requires the user creating the stack to have permissions to create users in keystone, which typically requires administrative roles.
  • It doesn't provide complete separation - even with the policy rules, it's possible a compromised stack could abuse the credentials (for example obtaining metadata for some other stack created by the user in the same project)
  • It clutters the user list for the project with spurious (from the user/operator perspective) users who aren't "real" users; the users are a heat implementation detail, yet we're exposing them to the end user.

Hmm, that sounds bad, what's the alternative?

Well, we've been considering that for quite some time ;) Multiple solutions were discussed:
  • Delegating a subset of user roles via trusts (rejected because token expiry is not optional, and separation from the stack owner is desired, e.g we don't really want to delegate or impersonate them from the instance, we just need an identity which can be verified as related to the stack)
  • Rolling our own auth mechanism based on some random "token" (some folks were in favour of this, but I'm opposed to it, I think we should stick to orchestration and leverage or improve what's in keystone instead of taking on the burden and security risk of maintaining our own auth scheme)
  • Using the keystone OAuth extension to use OAuth keypairs and signed requests.  (This was rejected due to lack of keystoneclient support, e.g client API and auth middleware, maybe we'll revisit enabling this as an option in some future release).
  • Isolating the in-instance users by creating them in a completely separate heat-specific keystone domain.  This idea was first suggested by Adam Young, and is what we ended up implementing for Icehouse.

"Stack Domain Users", the details..

The new approach is, effectively, an optimisation of the existing implementation.  We encapsulate all stack-defined users (i.e. users created as a result of things contained in a heat template) in a separate domain, which is created specifically to contain things related only to heat stacks.  A user is created which is the "domain admin", and heat uses that user to manage the lifecycle of the users in the "stack user domain".

There are two aspects of this I'll discuss below, firstly what deployers need to do to enable stack domain users in Heat (Icehouse or later), and secondly what actually happens when you create a stack, and how it addresses the previously identified problems:

When deploying heat:

  • A special keystone domain is created, e.g. one called "heat", and the ID is set in the "stack_user_domain" option in heat.conf
  • A user with sufficient permissions to create/delete projects and users in the "heat" domain is created, e.g. in devstack a user called "heat_domain_admin" is created and given the admin role on the heat domain.
  • The username/password for the domain admin user is set in heat.conf (stack_domain_admin and stack_domain_admin_password).  This user administers "stack domain users" on behalf of stack owners, so they no longer need to be admins themselves, and the risk of this escalation path is limited because the heat_domain_admin is only given administrative permission for the "heat" domain.
This is all done automatically for you when using recent devstack, but if you're deploying via some other method, you need to use python-openstackclient (which is the only CLI interface to v3 keystone) to create the domain and user:

Create the domain:
$OS_TOKEN refers to a token, e.g the service admin token or some other valid token for a user with sufficient roles to create users and domains.
$KS_ENDPOINT_V3 refers to the v3 keystone endpoint, e.g http://<keystone>:5000/v3 where <keystone> is the IP address or resolvable name for the keystone service.

openstack --os-token $OS_TOKEN --os-url=$KS_ENDPOINT_V3 --os-identity-api-version=3 domain create heat --description "Owns users and projects created by heat"
The domain ID is returned by this command, and is referred to as $HEAT_DOMAIN_ID below.

Create the user:
openstack --os-token $OS_TOKEN --os-url=$KS_ENDPOINT_V3 --os-identity-api-version=3 user create --password $PASSWORD --domain $HEAT_DOMAIN_ID heat_domain_admin --description "Manages users and projects created by heat"
The user ID is returned by this command and is referred to as $DOMAIN_ADMIN_ID below:

Make the user a domain admin:
openstack --os-token $OS_TOKEN --os-url=$KS_ENDPOINT_V3 --os-identity-api-version=3 role add --user $DOMAIN_ADMIN_ID --domain $HEAT_DOMAIN_ID admin

Then you need to add the domain ID, username and password from these steps to heat.conf:

stack_domain_admin_password = <password>
stack_domain_admin = heat_domain_admin
stack_user_domain = <domain id returned from domain create above>

When a user creates a stack:

  • We create a new "stack domain project" in the "heat" domain, if the stack contains any resources which require creation of a "stack domain user"
  • For any resources which require a user, we create the user in the "stack domain project", which is associated with the heat stack in the heat database, but is completely separate and unrelated (from an authentication perspective) to the stack owner's project
  • The users created in the stack domain are still assigned the heat_stack_user role, so as before the API surface they can access is limited via policy.json
  • When API requests are processed, we do an internal lookup, and allow stack details for a given stack to be retrieved from the database for both the stack owner's project (the default API path to the stack), and also the "stack domain project", subject to the policy.json restrictions.
To clarify that last point, that means there are now two paths which can result in retrieval of the same data via the heat API, e.g for resource-metadata:

GET v1/​{stack_owner_project_id}​/stacks/​{stack_name}​/​{stack_id}​/resources/​{resource_name}​/metadata

or

GET v1/​{stack_domain_project_id}​/stacks/​{stack_name}​/​{stack_id}​/resources/​{resource_name}​/metadata

The stack owner would use the former (e.g. via "heat resource-metadata {stack_name} {resource_name}"), and any agents in the instance will use the latter.

This solves all of the problems identified previously:
  • The stack owner no longer requires admin roles, because the heat_domain_admin user administers stack domain users
  • There is complete separation, the users created in the stack domain project cannot access any resources other than those explicitly allowed by heat, any attempt to access other stacks, or any other resource owned by the stack-owner will fail.
  • The list of users in the stack-owner project is unaffected, because we've created a completely different project in another domain.
Hopefully that provides a fairly clear picture of the new feature, and how it works - it should be transparent to users but I'm hoping this information may be useful to deployers when adopting the functionality for Icehouse.

The main gap still to be investigated is how we handle situations where keystone is backed by a read-only directory (e.g. LDAP).  My expectation is that it can be solved via the keystone capability to have different identity drivers per domain, so you could, for example, have domains containing human users backed by LDAP, and the heat domain backed by SQL.  My understanding is that there are outstanding issues to be solved for Juno in keystone, but I will post a future update when I've had time to do some testing and figure out what works.

That is all, respect if you managed to read it all! ;)

by Steve Hardy (noreply@blogger.com) at April 16, 2014 10:14 AM

Percona

‘Open Source Appreciation Day’ draws OpenStack, MySQL and CentOS faithful

Open Source Appreciation Day Brings Together OpenStack, MySQL, and CentOS Communities

210 people registered for the inaugural “Open Source Appreciation Day” March 31 in Santa Clara, Calif. The event will be held each year at Percona Live henceforth.

To kick off the Percona Live MySQL Conference & Expo 2014, Percona held the first “Open Source Appreciation Day” on Monday, March 31st. Over 210 people registered and the day’s two free events focused on CentOS and OpenStack.

The OpenStack Today event brought together members of the OpenStack community and MySQL experts in an afternoon of talks and sharing of best practices for both technologies. After a brief welcome message from Peter Zaitsev, co-founder and CEO of Percona, Florian Haas shared an introduction to OpenStack including its history and the basics of how it works.

Jay Pipes delivered lessons from the field based on his years of OpenStack experience at AT&T, at Mirantis, and as a frequent code contributor to the project. Jay Janssen, a Percona managing consultant, complemented Jay Pipes’ talk with a MySQL expert’s perspective of OpenStack. He also shared ways to achieve High Availability using the latest version of Galera (Galera 3) and other new features found in the open source Percona XtraDB Cluster 5.6.

Amrith Kumar’s presentation focused on the latest happenings in project Trove, OpenStack’s evolving DBaaS component, and Tesora’s growing involvement. Amrith also won quote of the day for his response to a question about the difference between “elastic” and “scalable.” Amrith: “The waistband on my trousers is elastic. It is not scalable.” Sandro Mazziotta wrapped up the event by sharing the challenges and opportunities of OpenStack from both an integrator as well as operator point of view based on the customer experiences of eNovance.

OpenStack Today was made possible with the support of our sponsors, Tesora and hastexo. Here are links to presentations from the OpenStack Today event. Any missing presentations will soon be added to the OpenStack Today event page.

Speakers in the CentOS Dojo Santa Clara event shared information about the current status of CentOS, the exciting road ahead, and best practices in key areas such as system administration, running MySQL, and administration tools. Here’s a rundown of topics and presentations from the event. Any missing presentations will soon be added to the CentOS Dojo Santa Clara event page.

  • Welcome and Housekeeping
    Karsten Wade, CentOS Engineering Manager, Red Hat
  • The New CentOS Project
    Karsten Wade, CentOS Engineering Manager, Red Hat
  • Systems Automation and Metrics at Pinterest
    Jeremy Carroll, Operations Engineer, Pinterest
  • Software Collections on CentOS
    Joe Brockmeier, Open Source & Standards, Red Hat
  • Two Years Living Your Future
    Joe Miller, Lead Systems Engineer, Pantheon
  • Running MySQL on CentOS Linux
    Peter Zaitsev, CEO and Co-Founder, Percona
  • Notes on MariaDB 10
    Michael Widenius, Founder and CTO, MariaDB Foundation
  • Happy Tools
    Jordan Sissel, Systems Engineer, DreamHost

Thank you to all of the presenters at the Open Source Appreciation Day events and to all of the attendees for joining.

I hope to see you all again this November 3-4 at Percona Live London. The Percona Live MySQL Conference and Expo 2015 will also return to the Hyatt Santa Clara and Santa Clara Convention Center from April 13-16, 2015 – watch for more details in the coming months!

The post ‘Open Source Appreciation Day’ draws OpenStack, MySQL and CentOS faithful appeared first on MySQL Performance Blog.

by Matt Griffin at April 16, 2014 10:00 AM

Opensource.com

Giving rise to the cloud with OpenStack Heat

Heat for OpenStack orchestration

Setting up an application server in the cloud isn't that hard if you're familiar with the tools and your application's requirements. But what if you needed to do it dozens or hundreds of times, maybe even in one day? Enter Heat, the OpenStack Orchestration project. Heat provides a templating system for rolling out infrastructure within OpenStack to automate the process and attach the right resources to each new instance of your application.
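A minimal template gives the flavor of it. This sketch uses the HOT format (heat_template_version 2013-05-23); the image and flavor names are placeholders for whatever exists in your cloud:

```yaml
heat_template_version: 2013-05-23

description: Boot a single application server

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: fedora-20
      flavor: m1.small
```

Launching it is then a single call, e.g. `heat stack-create -f server.yaml my_stack`, and the same template can be reused for every new instance of the application.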

read more

by Jason Baker at April 16, 2014 09:00 AM

April 15, 2014

Cloudwatt

Deploy Horizon From Source With Apache And Ssl

Some companies may deploy OpenStack clouds without the Horizon Dashboard interface, so you may wish to deploy your own Horizon instance, either on a VM hosted on the OpenStack infrastructure or, why not, on your own computer. Well, this is possible.

However, your concern is that plain HTTP is insecure... especially if hosted on a VM or machine accessible from the Internet. So you want an SSL connection.

The issue is that SSL certificates can cost money, but for personal usage, self-signed certificates will do the job at no cost, and easy-rsa will make their management easy :-)
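(As an aside, if you only need one certificate and no CA of your own, a single openssl command produces an equivalent password-less self-signed pair; the CN below is a placeholder for your server's FQDN:)

```shell
# Generate a self-signed certificate and password-less key in one step
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=horizon.example.com"
```

easy-rsa, as used below, is still the nicer option when you want a small CA that can sign certificates for several servers.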

Note: even though you will run your own Horizon instance, you will not gain extra privileges; it will just add your favorite "life easy-making GUI" on top of OpenStack :-)

Requirements:

On Centos/RHEL 6.x x86_64:

# Apache with SSL and wsgi support
sudo yum install httpd mod_ssl mod_wsgi
# EPEL repos
rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# GIT to retrieve sources
sudo yum install git git-review
sudo yum install python-virtualenv
# cryptography requirements
sudo yum install gcc libffi-devel python-devel openssl-devel

On Ubuntu:

# Apache with SSL and wsgi support
sudo apt-get install apache2 libapache2-mod-wsgi
# GIT to retrieve sources
sudo apt-get install git git-review
sudo apt-get install python-virtualenv
# cryptography requirements
sudo apt-get install build-essential libssl-dev libffi-dev python-dev

Create an "horizon" user:

On Centos/RHEL:

useradd -d /home/horizon -m -g apache horizon

On Ubuntu:

useradd -d /home/horizon -m -s /bin/bash -g www-data horizon

sudo permissions for the horizon user:

If you want to be able to "sudo" from the horizon user (for convenience):

sudo su -c "echo 'horizon ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/horizon_user"
sudo chmod 0440 /etc/sudoers.d/horizon_user

The server will run under the "apache" or "www-data" user (depending on the distribution), so there is no risk of privilege escalation due to this sudo permission. If after deployment you want to remove the horizon user's sudo permissions to feel reassured, just type:

sudo rm -f /etc/sudoers.d/horizon_user

switch to the horizon user:

sudo su - horizon

Generate your SSL certificates:

Centos/RHEL:

sudo yum install easy-rsa
cp -r /usr/share/easy-rsa/2.0 ~/easy-rsa

On Ubuntu:

sudo apt-get install easy-rsa
cp -r /usr/share/easy-rsa ~/easy-rsa

NOTE: depending on your Ubuntu version, you might not find the easy-rsa package.

This package was recently stripped out of OpenVPN, so if you do not have an easy-rsa package, you can install OpenVPN and copy the easy-rsa scripts (and uninstall OpenVPN if you do not want to keep it):

sudo apt-get install openvpn libpkcs11-helper1 liblzo2-2
cp -r /usr/share/doc/openvpn/examples/easy-rsa/2.0 ~/easy-rsa
cp ~/easy-rsa/openssl-1.0.0.cnf ~/easy-rsa/openssl.cnf
# If you do not want to use or keep OpenVPN, you can now remove it:
sudo apt-get purge openvpn

Generate the certificates:

Edit the vars file in your ~/easy-rsa directory and adapt all the export KEY_* variables to your liking (especially KEY_SIZE, KEY_COUNTRY, KEY_PROVINCE, KEY_CITY, KEY_ORG, KEY_EMAIL and KEY_OU), and then source this file:

source ./vars

and initialize certificates:

./clean-all
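For illustration, here is the sort of vars customization meant above, written to a scratch file so the example is self-contained (all values are placeholders; in practice, edit ~/easy-rsa/vars itself):

```shell
# Example KEY_* settings (placeholder values -- substitute your own organization's details).
# Written to /tmp purely to illustrate; edit ~/easy-rsa/vars in place in practice.
cat > /tmp/vars.example <<'EOF'
export KEY_SIZE=2048
export KEY_COUNTRY="US"
export KEY_PROVINCE="TX"
export KEY_CITY="Austin"
export KEY_ORG="Example Org"
export KEY_EMAIL="admin@example.com"
export KEY_OU="Cloud"
EOF
source /tmp/vars.example
echo "Key size: $KEY_SIZE"
```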

Create your own CA:

./build-ca

Create your server's certificate:

./build-key-server My_Server_Name

Hit the "enter" key when prompted for a password.

This creates a password-less private key, which is usually considered bad practice; we do it this way here for convenience, because your server cannot enter a password in order to use the certificate. Requiring a password for a server key is also bad practice, since most users of such certificates end up putting the clear-text password in a configuration file so that init scripts can use the certificate automatically.

Hit the "y" key when prompted to Sign the certificate, and when prompted to commit.

In the keys subdirectory you will now see something like this:

-rw-r--r--. 1 horizon apache 5625 Apr  2 14:35 01.pem
-rw-r--r--. 1 horizon apache 1809 Apr  2 14:32 ca.crt
-rw-------. 1 horizon apache 1704 Apr  2 14:32 ca.key
-rw-r--r--. 1 horizon apache  152 Apr  2 14:35 index.txt
-rw-r--r--. 1 horizon apache   21 Apr  2 14:35 index.txt.attr
-rw-r--r--. 1 horizon apache    0 Apr  2 14:31 index.txt.old
-rw-r--r--. 1 horizon apache 5625 Apr  2 14:35 My_Server_Name.crt
-rw-r--r--. 1 horizon apache 1102 Apr  2 14:35 My_Server_Name.csr
-rw-------. 1 horizon apache 1708 Apr  2 14:35 My_Server_Name.key
-rw-r--r--. 1 horizon apache    3 Apr  2 14:35 serial
-rw-r--r--. 1 horizon apache    3 Apr  2 14:31 serial.old

apache will need read access to My_Server_Name.key:

chmod g+rx keys
chmod g+r keys/My_Server_Name.key
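To see what these permission bits give the web server's group, here is a self-contained sketch that reproduces the layout in a scratch directory (illustration only; the real key lives under ~/easy-rsa/keys):

```shell
# Reproduce the key permissions in /tmp to inspect the resulting mode
# (run the chmod commands above on the real keys directory)
mkdir -p /tmp/keydemo/keys
touch /tmp/keydemo/keys/My_Server_Name.key
chmod 600 /tmp/keydemo/keys/My_Server_Name.key   # private key starts owner-only
chmod g+rx /tmp/keydemo/keys                     # group can enter the directory
chmod g+r /tmp/keydemo/keys/My_Server_Name.key   # group can read the key
stat -c '%A' /tmp/keydemo/keys/My_Server_Name.key   # -> -rw-r-----
```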

NOTE:

These are self-signed certificates, usually made for testing or pre-deployment. Since your browser cannot verify the identity of your website when accessing your server, it will display a "This Connection Is Untrusted" alert page saying it is an untrusted site. This is normal. To avoid the message you can either bypass the warning or import the ca.crt file into your browser (the latter works only if, when prompted for the server name by the ./build-key-server command, you gave the server the same hostname as the FQDN you use to access it; otherwise you will get a "Certificate is only valid for (site name)" warning instead).
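One way to check which hostname a certificate was issued for is openssl's x509 subcommand. The sketch below generates a throwaway self-signed certificate purely to demonstrate (the CN value is a placeholder; inspect your real My_Server_Name.crt the same way):

```shell
# Generate a throwaway self-signed certificate for demonstration only
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/demo.key -out /tmp/demo.crt \
    -subj "/CN=horizon.example.com"
# Show the subject; the CN must match the FQDN you browse to, or the
# browser will warn that the certificate is only valid for another name
openssl x509 -in /tmp/demo.crt -noout -subject
```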

Get the Horizon source:

Clone horizon sources:

git clone git://git.openstack.org/openstack/horizon.git

You will now see a horizon directory (under your own "horizon" user's /home/horizon directory if you created one previously).

change to this new horizon directory:

cd ~/horizon

Horizon needs Python dependencies that your OS's packaging system may not provide in the proper versions, so it is best to use a virtual environment and install the Python packages without any conflict with your distribution's packages:

virtualenv --no-site-packages .venv
source .venv/bin/activate
pip install -Ur requirements.txt

If some packages fail to compile with an error like the following (it sometimes happens when your locale is not strictly limited to ASCII):

  UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 126: ordinal not in range(128)

then try the last command again but prefixed with LC_ALL=C:

LC_ALL=C pip install -Ur requirements.txt

Configure your local_settings:

cd openstack_dashboard/local/
cp local_settings.py.example local_settings.py

Edit local_settings.py with your favorite editor: set DEBUG = False, configure OPENSTACK_API_VERSIONS and OPENSTACK_HOST, and uncomment:

  CSRF_COOKIE_SECURE = True
  SESSION_COOKIE_SECURE = True

With DEBUG = False, you need to set ALLOWED_HOSTS to a list of strings representing the host/domain names used to access your Horizon site. If you have not registered any hostname yet, add the server's IP (as a string) to the list so that you can access Horizon via its IP in your browser. See ALLOWED_HOSTS for detailed information.
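Putting these settings together, the relevant lines of local_settings.py might look like this (hostname and IP are placeholders; the snippet is written to /tmp here purely so the example is self-contained -- edit the real file in place):

```shell
# Example settings fragment (placeholder hostname/IP -- use your own)
cat > /tmp/local_settings.snippet <<'EOF'
DEBUG = False
ALLOWED_HOSTS = ['horizon.example.com', '203.0.113.10']
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
EOF
grep ALLOWED_HOSTS /tmp/local_settings.snippet
```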

You also have to edit SECRET_KEY.

If you use SECRET_KEY = secret_key.generate_or_read_from_file(os.path.join(LOCAL_PATH, '.secret_key_store')), the apache (or www-data) user will need write access to this file (.secret_key_store), because it is created the first time you launch Horizon. Instead, you can set SECRET_KEY to a string (e.g.: SECRET_KEY = 'a unique sentence no one can guess').

SECRET_KEY is used to provide cryptographic signing and should be set to a unique, unpredictable value. Running Horizon with a known SECRET_KEY defeats many of Horizon's security protections, and can lead to privilege escalation and remote code execution vulnerabilities. Horizon will refuse to start if SECRET_KEY is not set.
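If you opt for a static string, one way to obtain an unpredictable value is to generate it with openssl (a sketch; paste the printed line into local_settings.py):

```shell
# Generate 32 random bytes, hex-encoded, as an unpredictable SECRET_KEY value
SECRET_KEY_VALUE=$(openssl rand -hex 32)
echo "SECRET_KEY = '$SECRET_KEY_VALUE'"
```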

If you use Self-signed certificates uncomment:

  OPENSTACK_SSL_NO_VERIFY = True

Otherwise, uncomment:

 OPENSTACK_SSL_CACERT = '/path/to/cacert.pem'

and set the path to the CA certificate provided by your Certificate Authority.

Get the apache configuration script:

If the Web deployment configuration script isn't yet merged (see Change I6397ba01: Created a makewebconf command.) you can cherry-pick it:

git checkout -b web-conf-generation-script
git fetch https://review.openstack.org/openstack/horizon refs/changes/68/82468/6 && git cherry-pick FETCH_HEAD

This patch adds a Django management command that creates a wsgi file with virtual environment detection, as well as an apache configuration file. We will use this command.

Go back to the ~/horizon directory (where the manage.py file is located):

cd ~/horizon

Activate your virtual environment if you have not already done so (in a bash shell your prompt is usually prefixed with "(.venv)" when it is active; if typing echo $VIRTUAL_ENV returns nothing, you still need to source it):

source .venv/bin/activate

Create the wsgi file:

We use the Web deployment configuration script:

python manage.py make_web_conf --wsgi

Collect static files:

We gather all the static files which apache will have to serve (they will be placed in the directory defined by STATIC_ROOT in the local_settings.py file):

python manage.py collectstatic

Compile .pyc files:

If apache does not have write access, it won't be able to write .pyc files during code execution, which drastically slows down Python's performance.

Instead of relying on the code execution to compile the bytecode .pyc files, we create (compile) them manually:

python -m compileall .
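As a self-contained illustration of what compileall does, here is the same step run against a scratch directory (the real command above runs against ~/horizon; the PY variable just picks whichever Python interpreter is on your PATH):

```shell
# Demonstrate bytecode pre-compilation in a scratch directory
PY=$(command -v python || command -v python3)
mkdir -p /tmp/pycdemo
echo "x = 1" > /tmp/pycdemo/mod.py
"$PY" -m compileall -q /tmp/pycdemo
# The compiled bytecode now exists alongside (or under __pycache__ of) the source
find /tmp/pycdemo -name '*.pyc'
```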

Give apache some permissions:

We give apache read access to files, execute permission on directories, and write permission to the static files directory:

sudo chmod -R g+r ~/
find ~/ -type d -exec sudo chmod g+x {} \;
find ~/horizon/static -type d -exec chmod g+w {} \;

Create your apache configuration file:

We use the Web deployment configuration script again:

python manage.py make_web_conf --apache --ssl \
--sslcert=/home/horizon/easy-rsa/keys/My_Server_Name.crt \
--sslkey=/home/horizon/easy-rsa/keys/My_Server_Name.key \
--mail=your.email@youdomain.com > horizon.conf

And move this configuration file to your apache conf directory:

Centos/RHEL Apache configuration file:

sudo mv horizon.conf /etc/httpd/conf.d/
sudo chown root:root /etc/httpd/conf.d/horizon.conf

edit /etc/httpd/conf/httpd.conf and replace:

  #NameVirtualHost *:80

by:

  NameVirtualHost *:443
  WSGISocketPrefix /var/run/wsgi

To start Apache:

sudo service httpd start

To restart Apache:

sudo service httpd restart

Logs are available in /var/log/httpd/openstack_dashboard-error.log and /var/log/httpd/openstack_dashboard-access.log.

Ubuntu Apache configuration file:

sudo mv horizon.conf /etc/apache2/sites-available/horizon
sudo chown root:root /etc/apache2/sites-available/horizon
sudo a2ensite horizon
sudo a2enmod ssl

To start Apache:

sudo service apache2 start

To restart Apache:

sudo service apache2 reload

Logs are available in /var/log/apache2/openstack_dashboard-error.log and /var/log/apache2/openstack_dashboard-access.log.

Notes about unscoped tokens:

Some cloud providers do not let you log in with an unscoped token, and the horizon logs will say your login failed even though you entered the proper password.

If this is the case, you may need to modify your .venv/lib/python2.7/site-packages/openstack_auth/backend.py (or .venv/lib/python2.6/site-packages/openstack_auth/backend.py) file like this:

change the try block around line 134 from:

                try:
                    client = keystone_client.Client(
                        tenant_id=project.id,
                        token=unscoped_auth_ref.auth_token,
                        auth_url=auth_url,
                        insecure=insecure,
                        cacert=ca_cert,
                        debug=settings.DEBUG)

to:

                try:
                    client = keystone_client.Client(
                        tenant_id=project.id,
                        #token=unscoped_auth_ref.auth_token,
                        user_domain_name=user_domain_name,
                        username=username,
                        password=password,
                        auth_url=auth_url,
                        insecure=insecure,
                        debug=settings.DEBUG)

Keep up to date:

Once Horizon is deployed, staying up to date is easy:

git checkout master
git remote update && git pull --ff-only origin master
source .venv/bin/activate
pip install -Ur requirements.txt  # you might need to redo the unscoped tokens change
find . -name "*.pyc" -delete
python -m compileall .
python manage.py collectstatic
chmod -R g+r ~/horizon
find ~/horizon -type d -exec chmod g+x {} \;
find ~/horizon/static -type d -exec chmod g+w {} \;

And restart apache.

Centos/RHEL:

sudo service httpd restart

Ubuntu:

sudo service apache2 reload
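For convenience, the update steps above can be collected into a small helper script (a sketch under this guide's assumptions: hypothetical file name, paths as used above, and --noinput added so collectstatic runs unattended; remember that the unscoped-token change may need to be reapplied after pip install):

```shell
# Write (but do not yet run) a helper script bundling the update steps above
cat > /tmp/update_horizon.sh <<'EOF'
#!/bin/bash
set -e
cd ~/horizon
git checkout master
git remote update && git pull --ff-only origin master
source .venv/bin/activate
pip install -Ur requirements.txt
find . -name "*.pyc" -delete
python -m compileall .
python manage.py collectstatic --noinput
chmod -R g+r ~/horizon
find ~/horizon -type d -exec chmod g+x {} \;
find ~/horizon/static -type d -exec chmod g+w {} \;
# then restart apache: httpd on CentOS/RHEL, apache2 on Ubuntu
EOF
chmod +x /tmp/update_horizon.sh
```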

Enjoy your Horizon GUI, and feel free to review the Change I6397ba01: Created a makewebconf command. patch, or to add suggestions to the Web deployment configuration script Blueprint.

by Yves-Gwenael at April 15, 2014 10:00 PM

Red Hat Stack

What’s New in Icehouse Storage

The latest OpenStack 2014.1 release introduces many important new features across the OpenStack storage services, including advanced block storage Quality of Service, a new API to support disaster recovery between OpenStack deployments, a new advanced multi-location strategy for the OpenStack Image service, and many improvements to authentication, replication and metadata in OpenStack Object Storage.

Here is a Sneak Peek of the upcoming Icehouse release:

Block Storage (Cinder)
The Icehouse release includes many quality and compatibility improvements, such as improved block storage load distribution in the Cinder scheduler (replacing the Simple/Chance scheduler with FilterScheduler), advancing to the latest TaskFlow support in volume create, support for quota delete, and support for automated FC SAN zone/access control management for Fibre Channel volumes, which reduces pre-zoning complexity in cloud orchestration and prevents unrestricted fabric access.

Here is a zoom-in to some of the Key New Features in Block Storage:

Advanced Storage Quality of Service
Cinder has support for multiple back-ends. By default, volumes will be allocated between back-ends to balance allocated space.  Cinder volume types can be used to have finer-grained control over where volumes will be allocated. Each volume type contains a set of key-value pairs called extra specs. Volume types are typically set in advance by the admin and can be managed using the cinder client.  These keys are interpreted by the Cinder scheduler and used to make placement decisions according to the capabilities of the available back-ends.

Volume types can be used to provide users with different storage tiers that can have different performance levels (such as an HDD tier, a mixed HDD-SSD tier, or an SSD tier), as well as different resiliency levels (selection of different RAID levels) and features (such as compression).

Users can then specify the tier they want when creating a volume. The volume retype capability added in the Icehouse release extends this functionality to allow users to change a volume's type after its creation. This is useful for changing quality-of-service settings (for example, a volume that sees heavy usage over time and needs a higher service tier). It also supports changing volume-type feature properties (marking a volume as compressed/uncompressed, etc.).

The new API allows vendor-supplied drivers to support migration when retyping. The migration is policy-based, where the policy is passed via scheduler hints.
When retyping, the scheduler checks whether the volume's current host can accept the new type. If the current host is suitable, its manager is called, which calls upon the driver to change the volume's type.

A Step towards Disaster Recovery
An important disaster recovery building block was added to the Cinder Backup API in the Icehouse release, to allow resumption in case your OpenStack cloud deployment goes up in flames, falls off a cliff, or suffers any event that leaves the service state corrupted. Cinder Backup already supports backing up the data; however, to support real disaster recovery between OpenStack deployments, you must be able to fully restore volumes to their original state, including the Cinder database metadata. The Cinder Backup API was extended in Icehouse to support this new functionality through the existing backup/restore API.

The new API supports:
1. Export and import backup service metadata
2. Client support for export and import backup service metadata

This capability sets the scene for the next planned step in OpenStack disaster recovery, that will be designed to extend the Cinder API to support volume replication (that is currently in the works for Juno release).

OpenStack Image Service (Glance)

The Icehouse release has many image service improvements that include:

  • Enhanced NFS back-end support, allowing users to configure multiple NFS servers as a backend using the filesystem store, with disks mounted under a single directory.
  • Improved image size attribution was introduced to resolve the confusion between the file size and the actual size of the uploaded image, by adding a new virtual_size attribute: `size` refers to the file size and `virtual_size` to the image's virtual size. The latter is useful only for some image types, such as qcow2. The improved image size attribution eases the consumption of its value by Nova, Cinder and other tools relying on it.
  • Better calculation of storage quotas

Advanced Multi-Location Strategy
Support for multi-locations was introduced to Glance in the Havana release, to enable the image domain object to fetch data from multiple locations and to allow the glance client to consume images from multiple backend stores. Starting with the Icehouse release, a new image location selection strategy was added to the Glance image service to select the best back-end storage. Another benefit of this capability is improved consumption performance: the end user can consume images faster, both in terms of 'download' transport handling on the API server side and on the Glance client side, which obtains locations through the standard 'direct URL' interface. These new strategy selection functions are shared between the API server side and the client side.

OpenStack Object Storage (Swift)
Although the biggest Swift feature planned for Icehouse (Storage Policies) is now expected to land only in Juno, there were many other improvements to authentication, replication and metadata. Here is a zoom-in on some of the key new features you can expect to see in Swift with the Icehouse release:

Account-level ACLs and ACL format v2 (TempAuth)
Accounts now have a new privileged header to represent ACLs or any other form of account-level access control. The value of the header is a JSON dictionary string to be interpreted by the auth system.

Container to Container Synchronization
New support for sync realms was added to allow simpler configuration of container sync. A sync realm is a set of clusters that have agreed to allow container syncing with each other, as all the contents of a container can be mirrored to another container through background synchronization. Swift cluster operators can configure their cluster to allow/accept sync requests to/from other clusters, and the user specifies where to sync their container along with a secret synchronization key.

The key is the overall cluster-to-cluster key used in combination with the external users’ key that they set on their containers’ X-Container-Sync-Key metadata header values. These keys will be used to sign each request the container sync daemon makes and used to validate each incoming container sync request. The swift-container-sync does the job of sending updates to the remote container. This is done by scanning the local devices for container databases and checking for x-container-sync-to and x-container-sync-key metadata values. If they exist, newer rows since the last sync will trigger PUTs or DELETEs to the other container.

Object Replicator – Next Generation leveraging “SSYNC”
The Object Replicator role in Swift encapsulates most of the logic and data needed by the object replication process; a new replicator implementation is set to replace good old RSYNC with backend PUTs and DELETEs. The initial implementation of object replication simply performed an RSYNC to push data from a local partition to all remote servers it was expected to exist on. While this performed adequately at small scale, replication times skyrocketed once directory structures could no longer be held in RAM.

We now use a modification of this scheme in which a hash of the contents for each suffix directory is saved to a per-partition hashes file. The hash for a suffix directory is invalidated when the contents of that suffix directory are modified.

SSYNC's primary goals are reliability, correctness, and speed in syncing extremely large filesystems over fast, local network connections. Work continues on this new SSYNC method, in which RSYNC is not used at all and all-Swift code transfers the objects instead. At first, SSYNC will simply strive to emulate the RSYNC behavior. Once deemed stable, it will open the way for future improvements in replication, since code can easily be added to the replication path instead of altering the RSYNC code base and distributing such modifications. For now, it is not recommended to use SSYNC on production clusters: it is an experimental feature, and in its current implementation it is probably going to be a bit slower than RSYNC, but if all goes according to plan it will end up much faster.

Other notable Icehouse improvements that were added include:

Swift proxy server discoverable capabilities allow clients to retrieve configuration information programmatically; the response includes information about the cluster and can be used by clients to determine which features the cluster supports. Early quorum responses allow the proxy to respond to many types of requests as soon as it has a quorum. The python-swiftclient dependency was removed, support for system-level metadata on accounts and containers was added, the swift-container-info and swift-account-info tools were added, and various bugs were fixed, such as the ring-builder crash when a ring partition was assigned to deleted, zero-weighted and normal devices.

Other key Swift features that made good progress in Icehouse and will probably land in the Juno release include erasure-coded storage (in addition to replicas), which will enable a cluster to reduce its overall storage footprint while maintaining a high degree of durability, and sharding of large containers: as containers grow, their performance suffers, and sharding them (transparently to the user) would allow containers to grow without bound.

by Sean Cohen at April 15, 2014 09:40 PM

The Official Rackspace Blog » OpenStack

Why We Craft OpenStack (Featuring Rackspace Product Manager Jarret Raim)

As OpenStack Summit Atlanta fast approaches, we wanted to dig deeper into the past, present and future of OpenStack. In this video series, we hear straight from some of OpenStack’s top contributors from Rackspace about how the fast-growing open source project has evolved, what it needs to continue thriving, what it means to them personally, and why they are active contributors.

In the video below, Jarret Raim, Rackspace Product Manager, discusses his work and passion for the security space within the open source community.

<iframe allowfullscreen="" frameborder="0" height="360" src="http://www.youtube.com/embed/hrKYdlThQvM" width="640"></iframe>

Jarret, along with Paul Kehrer of Rackspace, presented at PyCon 2014 last week in Montreal. During their talk, they encouraged the Python community to always remember and contribute to the entire open source story, with all of its projects, history and tools, and related this to how the open source ecosystem – from Linux to Python to OpenStack – is crucial to their work with security and cryptography.

Be sure to check out the first installment of the “Why We Craft OpenStack” video series featuring Kurt Griffiths, Rackspace Principal Architect. We look forward to seeing you in Atlanta for OpenStack Summit, May 12 through May 16.

by Vyjayanthi Vadrevu at April 15, 2014 05:00 PM

April 14, 2014

Tesora Corp

Database as a Service in the Private Cloud; Tell us what’s important to you


Database as a service is already a big deal in the public cloud. Amazon’s RDS, DynamoDB and Redshift are some of the most rapidly growing services in Amazon's rapidly growing portfolio. Other players like Rackspace, HP and Salesforce also have strong offerings in this space.

Gartner recently reported that the public cloud continues to grow with a projected 5 year CAGR of 17% and that Database Management Systems are the number one area of growth within that, with an amazing 5 year CAGR of 39.8%.1

At the same time, private cloud capabilities are expanding rapidly. IT organizations are seeking to provide the same self-service experience that their users can now get in the public cloud. Basic compute and storage services are becoming commonplace, but database services are still just emerging in the private cloud.

We believe this has to change and would like your input to help shape the evolution of database capabilities for the private cloud.

Share your opinion by taking this short survey. (It’s only 11 questions.) Complete the survey and you’ll be entered into a drawing to win one of 10 Amex gift cards that we’re giving away.

Results will be reported at the OpenStack Summit in Atlanta. Let your voice be heard and take this short survey today.

1Source: Gartner, IT Spending Forecast, 3Q13 Update, Public Cloud Services Forecast 3Q13 Update, September 2013

by 86 at April 14, 2014 11:12 PM

The Official Rackspace Blog » OpenStack

Going Hybrid With vSphere And OpenStack

“OpenStack is on the cusp of major adoption.”  How many times have you heard a vendor or analyst say that or some variation of it in the past 12 months?

The fact is that many companies are still evaluating OpenStack and trying to determine how it should fit into their overall IT strategies. That should not be a surprise given the disruptive nature of a technology like cloud computing.

I’ve argued in the past that OpenStack is in the process of crossing the chasm from acceptance by early technology adopters to acceptance by the early majority/mainstream.

Over the past 12 months, I have spoken with a number of companies in the early majority camp that are currently conducting OpenStack evaluations or using it in small projects. Almost all of these users are current VMware customers with legacy application workloads. They understand that OpenStack is not something they can ignore, but they often struggle to understand its true value and how it should impact their current vSphere deployments. The questions they frequently ask include:

  • Is OpenStack a free and open-sourced hypervisor that can be used to replace our current ESXi servers?
  • Is there feature parity between OpenStack and vSphere (e.g., High Availability, vMotion, Distributed Resource Scheduling)?
  • Can we and should we use OpenStack together with vSphere?
  • Are there reasons why we may want to run both?

My approach to answering these questions generally focuses on trying to bring clarity in three areas:

  1. Level-setting on the differences between legacy and cloud workloads
  2. Walking through different legacy infrastructure and cloud consumption models
  3. Talking through the various options for using vSphere with OpenStack.

Virtualization and Cloud

The starting point for me, when I speak with customers about vSphere and OpenStack, is to highlight the different design philosophies behind legacy infrastructures and cloud computing. Here I want to focus on legacy infrastructures that are built on virtualization technologies, such as the ESX hypervisor and vSphere, which came into prominence as a technology to virtualize many smaller servers so they can be consolidated on to a few large servers. This worked very well since most servers were hosting applications with monolithic architectures, such as Oracle or Microsoft Exchange. Today, each instance of this type of legacy application is still typically encapsulated in a single virtual machine and grows by scaling up on a single physical server running the ESXi hypervisor. High availability can be achieved by running a clustered version of the application, such as Oracle Real Application Clusters; however, this can be an expensive and overly complex solution and most applications do not have such functionality. Most VMware shops choose to run their application servers as virtual machines in vSphere clusters and depend on features such as vSphere HA and vMotion to provide infrastructure resiliency and redundancy. While these solutions work well, they also require certain architecture choices to be made, such as reliance on shared storage, that make scaling out the infrastructure difficult.

Cloud computing, however, was created to accommodate a different class of applications, such as MongoDB and Hadoop. Cloud platforms like OpenStack are designed to be used with applications that have a distributed architecture where application components are distributed across multiple physical or virtual servers. These applications are generally designed to grow by scaling out across multiple servers so as demand increases, resources can be expanded by adding more application instances and re-balancing workloads across those instances. Another design principle behind cloud platforms such as OpenStack is that given the distributed nature of these applications, ownership for resiliency belongs to the applications and not the underlying infrastructure. This approach is often misunderstood by folks in the VMware space as a shortcoming and immaturity in the OpenStack platform; “lacking” features such as vSphere HA are seen as a potential warning sign that OpenStack is not ready for production usage.

However, this reflects a misunderstanding of the differing design principles of legacy and cloud. Distributed applications that run on cloud platforms (what the industry often terms “cloud native” applications) have lowered the barrier for building resiliency, both in terms of cost and of usability. By moving application resiliency up the stack, cloud platforms remove the need for shared-everything architectural decisions such as the use of shared storage. This promotes the use of commodity hardware as an option for running a cloud platform and creates an architecture that enables rapid scaling out of the infrastructure. It is also the architecture best suited for next-generation large-scale application environments, where failure is expected and must be designed for at multiple layers, not simply at the infrastructure layer.

Consumption Models

Once a customer understands the differing design principles, we can talk about the different infrastructure consumption models. It is important to differentiate between virtualized infrastructure consumption and cloud consumption here as well.

Along with running bare-metal servers and a virtualization technology such as vSphere in their own data center, companies can consume managed hosting offerings such as Rackspace's Dedicated vCenter offering or VMware's vCloud Hybrid Services offering. Both are built on VMware technologies and offer off-prem virtualization solutions to augment customers' on-prem vSphere deployments. Both are designed and ideally suited for legacy applications that do not require rapid scaling and rely on the virtualized infrastructure to provide application availability and resiliency.

In contrast, cloud consumption typically begins with public cloud usage and may later include private cloud deployments. In this space, the focus is on accommodating next-generation applications and being able to scale and to provision resources quickly, often using commodity hardware.

vSphere with OpenStack

It should be clear by now that one size does NOT fit all when it comes to building out infrastructure for different workloads. Rackspace has customers that, because of the distributed nature of their applications, have been able to move directly to our OpenStack-powered public cloud and/or our private cloud. However, most customers have legacy applications, often running on a bare-metal or virtualized infrastructure, that cannot be easily rewritten to use a cloud platform such as OpenStack. For these customers, co-existence, not replacement, is the route they will take in adopting OpenStack within their bare-metal and vSphere-dominated data centers. This route typically forks into one of three paths:

Infrastructure Silos

This is the route customers choose most frequently. Typically, it involves keeping existing legacy applications running on the vSphere environment while building new applications on a separate OpenStack cloud. While this is the least disruptive route to adopting OpenStack, it also perpetuates IT silos and adds the complexity and overhead of managing two completely distinct environments, since separate operations teams are often required.

Multi-Hypervisor Integration

Another possible route is to leverage the work VMware has done to integrate vSphere into OpenStack. This is similar to the silos route in that legacy workloads continue running on vSphere while next-generation workloads run on a hypervisor such as KVM or Xen. In this case, OpenStack is used as the control plane to manage a multi-hypervisor cloud, consolidating cloud management while allowing applications to be hosted in the environment best suited for them. The primary drawback to this architecture is that the vSphere integration with OpenStack is very new, and there are rough edges that still need to be refined, such as how resources are scheduled.

Best-Fit Hybrid Solution

Given the current state of affairs, Rackspace has adopted a best-fit hybrid solution as the best and most mature route for OpenStack adoption. This is similar to the silos approach in terms of separating the control planes for vSphere and OpenStack and focusing on applying the hybrid solution architecture to ensure that applications are hosted in the infrastructure that best fits their needs. However, the goal here is to maintain separate infrastructures and integrate the operations teams for the two environments so that they work together to build integrated solutions. One key is to utilize technologies, such as Rackspace’s RackConnect, to tie these infrastructures together so each can leverage the other as appropriate. One use case is a multi-tier application with a distributed web-tier running on an OpenStack-powered Rackspace Private Cloud that is connected via RackConnect to an Oracle database running on a managed vSphere cluster. In this hybrid solution, the Rackspace Public Cloud can also be used to enable bursting up of web-tier workloads from the private cloud.

Want To Learn More?

Rackspace is committed to providing managed services for both OpenStack and VMware technologies to offer our customers a best-fit hybrid solution. A large part of that commitment is educating the VMware community on how it can use OpenStack to build a true cloud platform while continuing to leverage its VMware investment where appropriate. Throughout the year, Rackspace will sponsor several VMware User Group (VMUG) conferences where I will speak more in depth about vSphere with OpenStack. The table below lists the VMUG conferences at which I am scheduled to speak. I encourage VMware administrators, architects or partners to attend these events and to participate in my sessions. The reality is that a growing number of companies are either deploying or evaluating OpenStack as their private cloud platform and will look for technologists who can help them understand how to leverage OpenStack, either as a replacement for or as a complementary platform to their existing vSphere environments.

I look forward to hitting the road and continuing the work I’ve been doing as an official OpenStack Ambassador and a VMware vExpert educating technologists on the value and power of vSphere with OpenStack. More importantly, I am excited to meet as many folks as I can in the VMware and OpenStack communities. As always, I invite you all to engage with me and help me make what I present useful to as many people as possible.

by Kenneth Hui at April 14, 2014 07:00 PM

Cloudscaling Corporate Blog

The Top Three Things To Bookmark for the Juno Summit

I can’t believe that it’s less than a month before the upcoming Juno Summit.  As you start putting together your plans for the Summit, I wanted to highlight some items to look forward to from the Cloudscaling team.

#1: Four Cloudscaling sessions have been selected for the Summit.

We appreciate the explicit endorsement from the community – the topics reflect our experience and leadership in the space – and we are very excited to share! Here is a recap of the four Cloudscaling sessions – you can simply add them to your Summit schedule by clicking on the links:

  • Hybrid Cloud Landmines: Architecting Apps to Avoid Problems (Randy Bias & Drew Smith): Application portability? Cross-cloud replication and bursting? These are hard issues with serious implications that we’ll examine in detail, digging into the subtleties between simply implementing a cross-cloud API and actually using it to get real work done. We’ll provide a set of recommendations on how to architect your application from the ground up to be hybrid cloud friendly.

  • Open Cloud System: OpenStack Architected Like AWS (Randy Bias): At Cloudscaling, we spend a great deal of effort operationalizing OpenStack for customers and ensuring it’s compatible with leading public clouds. In this session, we’ll detail the configuration and design decisions we make in real world OpenStack deployments.

  • OpenStack Block Storage: Performance vs. Reliability (Randy Bias & Joseph Glanville): In this presentation, we’ll talk about selecting and designing the right kind of storage for your use case. We’ll also walk you through how we built our block storage solution in Open Cloud System, its design principles, and explain why it is a better version of AWS Elastic Block Storage (EBS) in several key ways. (I regret we have to pull the plug on this presentation due to time constraints!  Hopefully in fall.  Best, –Randy)

#2: For the second Summit in a row, we are sharing our booth space with Quanta.

The reason is simple – a significant majority of Cloudscaling customers use Quanta servers and storage equipment for their OpenStack-powered clouds (and did I mention what a great team they are to work with?). While OpenStack’s role is to ultimately abstract the underlying physical infrastructure, we have found Quanta hardware to be a perfect complement to elastic OpenStack-powered clouds. The Quanta team will be bringing a few racks of their datacenter products that are best suited for building modern OpenStack clouds. Stop by our booth!

#3: Open Cloud System (OCS) product announcement.

OK, I know it’s a teaser. It should be no surprise that Icehouse will be central to the OCS announcement, but we have a few additional items up our sleeves to share, including:

  • New OCS management capabilities

  • Additional AWS compatible features

So, it’s an understatement to say that we will be busy between now and the Summit. But any opportunity to meaningfully interact with the community, our customers and our partners is worth its weight in gold. We look forward to seeing you in Atlanta!

by Azmir Mohamed at April 14, 2014 06:56 PM

ICCLab

Floating IPs management in Openstack

OpenStack is generally well suited to typical use cases, and there is rarely reason to tinker with the advanced options and features available. Normally you would plan your public IP address usage and management well in advance, but if you are an experimental lab like ours, things are often handled in an ad-hoc manner. Recently, we ran into a unique problem that took us some time to solve.

We manage a full 160.xxx.xxx.xxx/24 block of 256 public IP addresses. Due to an underestimated user demand forecast, we ended up with a floating-ip pool in our external cloud that was woefully inadequate. One solution was to remove the external network altogether and recreate a new one with a larger floating-ip pool. The challenge was that we had real users with experiments running on our cloud, and destroying the external network was not an option.

So here is what we did to add more floating ips to the pool without even stopping or restarting any of the neutron services -

  1. Log onto your openstack controller node
  2. Read the neutron configuration file (usually located at /etc/neutron/neutron.conf)
  3. Locate the connection string – this will tell you where the neutron database is located
  4. Depending on the database type (mysql, sqlite) use appropriate database managers (ours was using sqlite)
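Steps 2 and 3 can be combined in a quick one-liner; a minimal sketch, assuming the default configuration path (override NEUTRON_CONF if yours differs):

```shell
# Extract the SQLAlchemy connection URL from neutron.conf (path is the
# usual default, not guaranteed). The URL scheme (sqlite://, mysql://)
# tells you which database manager to use in the next step.
NEUTRON_CONF=${NEUTRON_CONF:-/etc/neutron/neutron.conf}
connection=$(grep -E '^[[:space:]]*connection[[:space:]]*=' "$NEUTRON_CONF" 2>/dev/null \
  | head -n1 | cut -d= -f2- | tr -d ' ')
echo "neutron database: $connection"
```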

I will next show what to do to add more IPs to the floating pool using sqlite3; this can easily be adapted for MySQL.

$ sqlite3 /var/lib/neutron/ovs.sqlite
SQLite version 3.7.9 2011-11-01 00:52:41
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .tables

The list of tables used by neutron dumped by the previous command will be similar to -

agents ovs_tunnel_endpoints
allowedaddresspairs ovs_vlan_allocations
dnsnameservers portbindingports
externalnetworks ports
extradhcpopts quotas
floatingips routerl3agentbindings
ipallocationpools routerroutes
ipallocations routers
ipavailabilityranges securitygroupportbindings
networkdhcpagentbindings securitygrouprules
networks securitygroups
ovs_network_bindings subnetroutes
ovs_tunnel_allocations subnets

The tables that are of interest to us are -

  • ipallocationpools
  • ipavailabilityranges

Next, look at the schema of these tables; this will shed more light on what needs to be modified -

sqlite> .schema ipavailabilityranges
CREATE TABLE ipavailabilityranges (
allocation_pool_id VARCHAR(36) NOT NULL,
first_ip VARCHAR(64) NOT NULL,
last_ip VARCHAR(64) NOT NULL,
PRIMARY KEY (allocation_pool_id, first_ip, last_ip),
FOREIGN KEY(allocation_pool_id) REFERENCES ipallocationpools (id) ON DELETE CASCADE
);
sqlite> .schema ipallocationpools
CREATE TABLE ipallocationpools (
id VARCHAR(36) NOT NULL,
subnet_id VARCHAR(36),
first_ip VARCHAR(64) NOT NULL,
last_ip VARCHAR(64) NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY(subnet_id) REFERENCES subnets (id) ON DELETE CASCADE
);
sqlite>

Next, look at the contents of these tables; for brevity, only partial outputs are shown below. I have also masked some of the IP addresses with xxx; replace these with real values when using this guide.

sqlite> select * from ipallocationpools;
b5a7b8b4-ad10-4d92-b877-e406df8ceb91|f0034b20-3566-4f9f-a6d5-b725c02f98fc|10.10.10.2|10.10.10.254
7bca3261-e578-4cfa-bba1-51ba6eae7791|765adcdf-72a4-4e07-8860-f443c7b9098b|160.xxx.xxx.32|160.xxx.xxx.80
a9994f70-2b9a-45f3-b5db-31ccc6cb7e90|72250c58-5fda-4d1b-a847-b71b432ea218|10.10.1.2|10.10.1.254
23032620-731a-4092-9509-7591b53b5ddf|12849c1f-4456-4fc1-bea6-444cce4f1ac6|10.10.2.2|10.10.2.254
fcf22336-2bd6-4e1c-92cd-e33af0b23ad9|bcf1082d-50d5-4ebc-a311-7e0618096356|10.10.11.2|10.10.11.254
bc961a47-4902-4ca2-b4f4-c5fd581a364e|09b79d08-aa92-4b99-b1fd-61d5f31d3351|10.10.25.2|10.10.25.254
sqlite> select * from ipavailabilityranges;
b5a7b8b4-ad10-4d92-b877-e406df8ceb91|10.10.10.6|10.10.10.254
a9994f70-2b9a-45f3-b5db-31ccc6cb7e90|10.10.1.2|10.10.1.2
7bca3261-e578-4cfa-bba1-51ba6eae7791|160.xxx.xxx.74|160.xxx.xxx.74
7bca3261-e578-4cfa-bba1-51ba6eae7791|160.xxx.xxx.75|160.xxx.xxx.75

Looking at the above two outputs, it is immediately clear what needs to be done next in order to add more IPs to the floating-ip range.

  1. modify the floating-ip record in the ipallocationpools table, extend the first_ip and/or last_ip value(s)
  2. for each new ip address to be added in the pool, create an entry in the ipavailabilityranges table with first_ip same as last_ip value (set to the actual IP address)

As an example, say I want to extend my pool from 160.xxx.xxx.80 to 160.xxx.xxx.82; this is what I would do:

sqlite> update ipallocationpools set last_ip='160.xxx.xxx.82' where first_ip='160.xxx.xxx.32';
sqlite> insert into ipavailabilityranges values ('7bca3261-e578-4cfa-bba1-51ba6eae7791', '160.xxx.xxx.81', '160.xxx.xxx.81');
sqlite> insert into ipavailabilityranges values ('7bca3261-e578-4cfa-bba1-51ba6eae7791', '160.xxx.xxx.82', '160.xxx.xxx.82');
sqlite> .exit

And that’s all: you have 2 additional IPs available for use from your floating-ip pool, and you don’t even need to restart any of the neutron services. Just make sure that the allocation_pool_id in each new ipavailabilityranges row matches the id of the corresponding ipallocationpools entry.
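If you find yourself doing this often, the insert step can be wrapped in a small helper; a hedged sketch assuming the sqlite3 CLI and the table layout shown above (the function name is mine, not part of neutron):

```shell
# add_floating_ip DB POOL_ID IP
# Appends one address to the availability ranges table, mirroring the
# manual INSERT above. Remember to also extend last_ip in
# ipallocationpools, as in the UPDATE statement shown earlier.
add_floating_ip() {
  db=$1; pool_id=$2; ip=$3
  sqlite3 "$db" \
    "INSERT INTO ipavailabilityranges VALUES ('$pool_id', '$ip', '$ip');"
}
```

Usage would look like `add_floating_ip /var/lib/neutron/ovs.sqlite 7bca3261-e578-4cfa-bba1-51ba6eae7791 160.xxx.xxx.83` (substitute your real DB path, pool id and address).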

by Piyush Harsh at April 14, 2014 09:22 AM

April 13, 2014

Loïc Dachary

HOWTO migrate an AMI from Essex to a bootable volume on Havana

A snapshot of an Essex OpenStack instance contains an AMI ext3 file system. It is rsync’ed to a partitioned volume in the Havana cluster. After installing grub from chroot, a new instance can be booted from the volume.

The instance snapshot from the Essex cluster is mounted on the host containing the glance directory:

$ mount /var/lib/glance/images/493e2e71-f100-403c-98a0-0ccbafec6fbd /mnt

A volume is created in a Havana cluster and attached to a temporary instance for the duration of the migration. A single partition is created and formatted as ext3.

root@migrate:~# fdisk /dev/vdc
Command (m for help): p
Disk /dev/vdc: 32.2 GB, 32212254720 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/vdc1            2048    62914559    31456256   83  Linux
root@migrate:~# mkfs.ext3 /dev/vdc1

The content of the AMI is copied from the glance host to the newly created file system:

root@migrate:~# mount /dev/vdc1 /mnt
root@migrate:~# rsync -avHz --numeric-ids glance.host.net:/mnt/ /mnt/

Grub is installed after mounting the special file systems:

migrate:/# mount --bind /dev /mnt/dev && mount --bind /dev/pts /mnt/dev/pts && mount --bind /proc /mnt/proc && mount --bind /sys /mnt/sys
migrate:~# chroot /mnt
migrate:/# grub-install /dev/vdc
Searching for GRUB installation directory ... found: /boot/grub
Installation finished. No error reported.
This is the contents of the device map /boot/grub/device.map.
Check if this is correct or not. If any of the lines is incorrect,
fix it and re-run the script `grub-install'.
(hd0)	/dev/vda
(hd1)	/dev/vdb
(hd2)	/dev/vdc
migrate:/# update-grub
Searching for GRUB installation directory ... found: /boot/grub
Searching for default file ... found: /boot/grub/default
Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
Searching for splash image ... none found, skipping ...
Found kernel: /boot/vmlinuz-2.6.26-2-686
Updating /boot/grub/menu.lst ... done

Detach the volume from the temporary instance:

migrate:~# umount /mnt/dev/pts && umount /mnt/dev && umount /mnt/proc && umount /mnt/sys
migrate:~# umount /mnt
migrate:~# nova volume-detach migrate b4bdd79f-e9e8-4cd1-9826-a122d9d626f6

Booting from the volume may fail if grub is misconfigured. It can be fixed from the console.

by Loic Dachary at April 13, 2014 05:34 PM

April 11, 2014

Solinea

Beyond Installation: Managing Your OpenStack Cloud


Please join Solinea’s Ken Pepple as he talks about best practices for managing your OpenStack Cloud. This webinar is part of the O’Reilly Webcast Series.

In this webinar Ken will cover the following topics:

  • Tools that OpenStack cloud operators are using to manage their cloud
  • Gathering important data from your cloud
  • The most important metrics to monitor in your cloud
  • The impact of the Icehouse release on your management strategy

by Seth Fox (seth@solinea.com) at April 11, 2014 05:47 PM

Ben Nemec

My Devtest Workflow

I said in a previous post that I would write something up about my devtest workflow once I had it nailed down a bit more, and although it's an always-evolving thing, I think I've got a pretty good setup at this point so I figured I'd go ahead and write it up. Instead of including scripts and such here, I'm just going to link to my Github repo where all of this stuff is stored and reference those scripts in this post.

The first step is to source devtest.rc, which I generally just add to my ~/.bashrc so it always gets done when I start a new session. That sets up a bunch of environment variables that are either required or convenient for development environments. Some of them may be unnecessarily setting variables to their defaults, but this way they are handy if I need to change them for some reason. Also, note that this is for Fedora 20 - other distros may need slightly tweaked values (NODE_DIST at the very least).

Some notes about devtest.rc:

  • LIBVIRT_DEFAULT_URI - I find that this is needed for the virtual power manager to work properly on Fedora. I really need to revisit making this work correctly upstream (assuming it doesn't already - I haven't tried it without setting this in a while).
  • NODE_ARCH - Set to amd64 so I can use the wheels on my pypi-mirror.
  • PYPI_MIRROR_URL* - Set this to your local pypi-mirror, or remove the pypi element from DIB_COMMON_ELEMENTS if you don't have one.
  • MESSAGING_PROVIDER - Makes it easier to switch from rabbitmq to qpid. I should probably get something similar merged upstream for this too.
  • *_EXTRA_ARGS - Adding -u to these causes devtest to use uncompressed images, which is faster if you have the disk space for it.
  • DIB_REPO* - These can be used to override the git refs that devtest will use. This can be handy when there's a pending change you need, or a change was merged recently that broke devtest.
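As a rough illustration, a stripped-down devtest.rc along the lines described above might look like this. The specific values are assumptions for a Fedora 20 host, not the author's actual file (which lives in his GitHub repo):

```shell
# Illustrative devtest.rc sketch - every value here is an assumption;
# adjust for your own distro and mirror setup.
export NODE_DIST="fedora"                    # distro for built images
export NODE_ARCH="amd64"                     # match your pypi-mirror wheels
export LIBVIRT_DEFAULT_URI="qemu:///system"  # virtual power manager on Fedora
export MESSAGING_PROVIDER="rabbitmq"         # or "qpid"
export DIB_COMMON_ELEMENTS="pypi"            # drop if you have no local mirror
export SEED_DIB_EXTRA_ARGS="-u"              # one of the *_EXTRA_ARGS knobs above
```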

Once devtest.rc is sourced, it should be possible to run-devtest. This script is simply a wrapper that passes --trash-my-machine to devtest.sh so you don't have to type it every time. It also passes through any options to the script, so if you want to run with cached images (for example) you could run-devtest -c.

While devtest is running, you can keep track of its progress with dtstatus. I run it with watch so it updates every two seconds: watch dtstatus. This just tells you which step (ramdisk, seed, undercloud, overcloud) devtest is on, and any information it has about what that step is currently doing (building images and waiting for deployment, at the moment). This is another thing I need to investigate adding to upstream TripleO.

by bnemec at April 11, 2014 05:39 PM

Cameron Seader

Quickly Setting-up an OpenStack Cloud with the SUSE Cloud 3 Admin Appliance

In an effort to make OpenStack available to the non-tech user, and to make it appear much less of a heavy-lifting project for them, I have created the SUSE Cloud 3 Admin Appliance. Having worked with so many partners, vendors, and customers deploying OpenStack with SUSE Cloud, I realized that SUSE had some great tools that would enable me to create something they could use to easily deploy, test, and discover OpenStack on their own without a whole lot of effort. SUSE has integrated Crowbar/Chef as part of the installation framework for our enterprise OpenStack distribution – SUSE Cloud – to improve the speed of deploying and managing OpenStack clouds. This has allowed us to be flexible in our deployments when working with partners and software vendors and to provide greater ease of use.

The creation of the SUSE Cloud 3 Admin Appliance is intended to provide a quick and easy deployment. The partners and vendors we are working with find it useful to quickly test their applications in SUSE Cloud and validate their use. Beyond those cases it has become a great tool for deploying your production private cloud based on OpenStack.

I have developed two different appliances and you can find them here:

Standard v1.0.1: SUSE Cloud 3 Admin Standard
Embedded v1.0.1: SUSE Cloud 3 Admin Embedded

Standard has a process which will mirror all of the required repositories to the Admin Server.

Embedded has all of the required repositories in the image ready for you to consume. It might take a little longer to download, but might be worth the wait if you need something portable that can quickly load a private cloud.

This is version 1.0.x

It’s important that you answer several questions before proceeding. You can find those questions in the SUSE Cloud 3 Deployment Guide.

This Questionnaire will help you as a companion to the Deployment Guide. SUSE Cloud Questionnaire

This guide on using the appliance can help walk you through step by step. SUSE Cloud Admin Appliance Guide

- This version contains the GM version of SUSE Cloud 3
- Disabled IPv6
- Added motd (message of the day) to reflect next steps
- Updated logos and wallpaper to align with product
- Updated init and firstboot process and alignment with YaST firstboot

Enjoy!

by Cameron Seader (noreply@blogger.com) at April 11, 2014 05:02 PM

Rob Hirschfeld

Running with scissors > DefCore “must-pass” Road Show Starts [VIDEOS]

The OpenStack DefCore committee has been very active during this cycle turning the core definition principles into an actual list of “must-pass” capabilities (working page).  This in turn gives the community something tangible enough to review and evaluate.

Capabilities Selection

TL;DR!  We appreciate those in the community who have been patient enough to help define and learn the process we’re using to make selections; however, we also recognize that most people want to jump to the results.

This week, we started a “DefCore roadshow” with the goal of learning how to make this huge body of capabilities, process and impact easier to digest (draft write-up for review & Troy Toman’s notes).  So far we’ve had two great sessions on this topic.  We took notes and recorded at both meetups (San Francisco & Austin).

My takeaways of these initial meetups are:

  • Jump to the Capabilities right away, the process history is not needed up front
  • You need more graphics – specifically, one for the selection criteria (what do you think of my 1st attempt?)
  • Work from some examples of scored capabilities
  • Include some specific use-cases with a user, 2 types of private cloud and a public cloud to help show the impact

Overall, people like what they are hearing.  It makes sense and decisions are justified.

We need more feedback!  Please help us figure out how to explain this for the broader community.


by Rob H at April 11, 2014 02:58 PM

Assaf Muller

What Does Open Source Mean to Me?

I gained some development experience in various freelance projects and figured I’d apply for a development position during my last semester of Computer Science studies. I sought a student role in a large corporation so that I wouldn’t be relied upon too heavily, as I wanted to prioritize my studies (Please see ‘You have your entire life left to work’ and similar cliches). I applied to a bunch of places, including Red Hat – my would-be boss gave a talk in my school about open source culture and internship positions, otherwise I would never have heard about a Linux company in a Microsoft-dominated nation. Microsoft has solid contracts with the Israeli Defense Force, and with the Israeli high tech being led mostly by ex-IDF officers, CTOs tend to go with Microsoft technology. In any case, Red Hat had an internship position in the networking team of a virtualization product (I had networking experience from my army service), it paid generously, their offices were close by, it all lined up.

At this point, open source meant nothing to me.

At Red Hat, I started working on a project called oVirt. While it has an impressive user base, and its Q&A mailing list gets a healthy amount of traffic, it does not have a significant development community outside of Red Hat. Here I started experiencing the efforts that go into building an expansive open source community. Open source is not free, contrary to popular belief – it is, in fact, quite costly for a project at oVirt’s stage. For example, when working in a closed source company and designing a new feature, normally you would write a specification down, discuss it with your team members, and get going. In oVirt, you’d share the specification first with the rest of the community. The resulting discussion can take weeks, and with a time-based release schedule that inherent delay must be factored in during planning. All communication must be performed on public (and archived) media such as mailing lists and IRC channels. Hallway discussions are natural but frowned upon when it comes to feature design and other aspects of the development process that should be shared with the community. Then comes the review process. I’m a big believer in peer reviews, regardless of whether the project is open or closed, but surely in an open source project the review process is much more keenly felt. One of the key elements to building a community is taking the time to review code submitted by non-Red Hatters. You could never hope to get an active development community going if code sits in the repository for weeks, attracting no attention. To this end, code review becomes part of your job description. Some people do it quite well; some people, like me, have a lot of room to improve. I find reviewing code infinitely harder than writing it. In fact, I find it so hard that I must force myself to do it, doubly so when the code is written by a faceless community member who cannot knock a basketball over my head if I don’t review his code (Dear mankind: please don’t ever invent that technology).

At this point, open source was a burden for me.

Six months back I was moved to another project called OpenStack. Still in the same team, under the same boss, just working on another project. OpenStack, while comparable to oVirt technologically, is very different from oVirt in the sense that it has a huge development community. By huge, I mean thousands strong. OpenStack is composed of sub-projects – the networking project alone has hundreds of developers working on it regularly. At the time I was moved, I was the only Israeli developer working on it. The rest of the Red Hat OpenStack team was located in the Czech Republic and in the US. As you can imagine, a lot of self-learning was to be had. Conveniently, the (community maintained) OpenStack documentation is excellent. My team mates were no longer working for the same company I was, nor were they down the hall. I did most of my work with individuals spread all over the world. I met some at FOSDEM this past February (probably the highlight of the event for me), at which point I began to understand the importance of building personal relationships, and I will expand on this below.

The beauty of open source and the basis of a meritocracy is that the strongest idea wins. You might stumble upon an infuriating bug which might seem like the most important issue facing the project (And, in fact, humanity). You start working on it, submit a patch, and quickly discover that nobody gives a shit about your bug. Instead of being frustrated by the difficulty of moving forward, I learned two lessons:

  1. Building personal relationships is the only way to drive change
  2. ‘The community’ can realign your understanding of what is important

Maybe there is good reason nobody cares about that bug. Maybe it was a waste of time working on it – not because the patch was not accepted (in time, or at all), but because it was just a waste of time. Maybe that bug was just not important, and you should have invested your time working on anything else. There are more issues than available resources, and your choice of what to tackle is more important than the urgency of what’s in front of you.

In addition to navigating between the perceived urgency of issues, the community can help you reflect and choose the better solution. I always love hearing people’s ideas, and this concept is expressed beautifully in the review system. Getting criticism from strangers and collaborators alike always constitutes a learning experience. Luckily, OpenStack is being developed by very smart individuals who can help you understand if your solution is terrible, or simply realign your trajectory. I find that it’s sometimes even helpful to get feedback from people with opposing interests – perhaps together you can form a solution that will answer all use cases in a generalized manner. Such a solution might just end up being of higher quality than one that would have dealt only with your own customer’s needs.

At this point, open source is obvious to me.


by assafmuller at April 11, 2014 01:08 PM

OpenStack Blog

OpenStack Community Weekly Newsletter (Apr 4 – 11)

Take the OpenStack User Survey and Change the (OpenStack) World

Still a few more hours to fill in the OpenStack User Survey.

Heartbleed

The bug discovered affecting OpenSSL and “breaking” the internet doesn’t directly touch OpenStack, but it can lead to OpenStack compromise. The scope of the problem discovered this week is extremely broad, though, and I think it’s worth spending some more time learning about it. Mark McLoughlin has collected an impressive amount of links where you can learn more.

Security auditing of OpenStack releases

Nathan Kinder started a conversation about how to deal with high-level security related questions about OpenStack.

How to govern a project on the scale of OpenStack

How an open source project is governed can matter just as much as the features it supports, the speed at which it runs, or the code that underlies it. Some open source projects have what I call a “benevolent dictator for life.” Others are outgrowths of corporate projects that, while open, still have their goals and code led by the company that manages it. And of course, there are thousands of projects out there that are written and managed by a single person or a small group of people for whom governance is less of an issue than insuring project sustainability.

Introducing the OpenStack SDK for PHP

Marrying OpenStack with one of the most popular programming languages on the planet. Write applications to interact with OpenStack public and private clouds, using the APIs. The OpenStack SDK for PHP is meant to be by the community and for the community. It will be able to work with clouds from a variety of vendors or vanilla OpenStack setups.

Tearing down obstacles to OpenStack documentation contributions

Rip. Shred. Tear. Let’s gather up the obstacles to documentation contribution and tear them down one by one. I’ve designed a survey with the help of the OpenStack docs team to determine blockers for docs contributions. If you’ve contributed to OpenStack, please fill it out.

The road to Juno Summit – Atlanta 2014

Security Advisories and Notices

Tips ‘n Tricks

Reports from Previous Events

Upcoming Events

Other News

Welcome New Reviewers and Developers

Jason Kincl
Manish Godara
Choe, Cheng-Dae
Jason Ni
Juan Antonio Osorio Robles
vishal yadav
Peter Jönsson
Amrith
vaibhav
Doug Shelley
Victor Boivie
Aimon Bustardo
Marc Abramowitz
Igor Duarte Cardoso

Latest Activity In Projects

Do you want to see at a glance the bugs filed and solved this week? Latest patches submitted for review? Check out the individual project pages on OpenStack Activity Board – Insights.

OpenStack 2014 T-Shirt Design Winner

The colorful design will debut on T-Shirts at PyCon in Montreal this week, and will be distributed at upcoming events worldwide.

‘Glowing stack’ by Jaewin Ong

The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

 

by Stefano Maffulli at April 11, 2014 12:08 PM

April 10, 2014

Cody Bunch

How to Make A LOT of Devstack with Vagrant

Sometimes you need to make A LOT of devstack instances, and on different providers. That is, some in the cloud, others locally, and then some. At least, I keep finding myself in situations where this happens, so I made a little Vagrantfile to handle the details for me. Here is the Vagrantfile I use to make that happen. After you look the file over, I’ll explain a few key spots and show you how to use it:

<script src="https://gist.github.com/bunchc/10413090.js"></script>

Now let’s talk about the interesting spots:

  • Line 5: This defines the host name and number of devstacks you need (2 in this case). It also defines the starting IP if you are using this locally.
  • Lines 8 – 71: These are the bits that get you ready to run the devstacks.
  • Lines 88 – 93: To make this work on the Rackspace cloud, modify these lines as needed.
  • Lines 97 – 118: These are some VMware Fusion performance hacks. Note that line 116 sets memory to 6 GB; change that if doing this locally.

If you intend to use the Rackspace provider, you’ll need it installed; you can get that from here.
That’s about it for the interesting parts. Now to use this, you can bring up all the machines at once using:

vagrant up --provider=vmware_fusion or vagrant up --provider=rackspace

To bring them up one at a time:

vagrant up devstack-01 --provider=rackspace

Once things are online, you still need to access and run stack.sh. To do that:

vagrant ssh devstack-01
su - stack
cd ~/devstack
./stack.sh

by OpenStackPro at April 10, 2014 07:20 PM

SUSE Conversations

Take the OpenStack User Survey and Change the (OpenStack) World

Many organizations are taking advantage of OpenStack services to deliver IT resources faster to the line of business. Whether speeding the development, testing and deployment of new applications or rapidly implementing an online marketing campaign, OpenStack clouds are significantly impacting business results. If you are an operator or consumer of OpenStack services at one of …

+read more

by Douglas Jarvis at April 10, 2014 06:35 PM

OpenStack Blog

Open Mic Spotlight: Steve Martinelli

This post is part of the OpenStack Open Mic series to spotlight the people who have helped make OpenStack successful. Each week, a new contributor will step up to the mic and answer five questions about OpenStack, cloud, careers and what they do for fun. If you’re interested in being featured, please choose five questions from this form and submit!

Steve Martinelli is an OpenStack Active Technical Contributor and a Keystone Core Developer located at the IBM Canada Lab. He primarily focuses on enabling Keystone to better integrate into enterprise environments. Steve was responsible for adding OAuth support to Keystone and is currently adding Federated Identity support to Keystone. In his spare time he also contributes to OpenStackClient as a Core Developer. Though usually swamped with code reviews, his summer Wednesday nights are reserved for playing in the IBM softball league. You can follow him on Twitter @stevebot

1. Get creative — create an original OpenStack gif or haiku!

Here’s a haiku:

Code, test, submit patch.
Oh no, forgot to rebase.
Jenkins, I failed you.

If we’re talking gifs, I can’t compete with: http://openstackreactions.enovance.com/.

2. How did you learn to code? Are you self-taught or did you learn in college? On-the-job?

I learned to code at school, but I’ve learned how to support, test, and build projects while working. When learning a new language, I avoid using books. I generally use an online tutorial to get a development environment up and running, then have the API handy while I poke around. When getting ramped up on an existing project, like Keystone, I find that going through the code, documentation, and running the test suite with a debugger enabled is enormously helpful.

3. What does “open source” mean to you?

My inner developer wants to say … ‘Free as in Beer, Speech and Love’: http://www.flickr.com/photos/joshuamckenty/6747269389/

But, I’ve learned that it’s much more than that. ‘Open source’ software can drive and accelerate an industry. It can ensure many companies agree upon a standard, and move on to the more interesting aspects of what the technology can do.

4. Where is your favorite place to code? In the office, at a local coffee shop, in bed?

It depends on what I’m doing that day. If it’s something that requires a lot of thinking, then I like to work from my desk at home, where it’s relatively free of distractions, and very quiet. If I’m just dabbling in code, or working on something more ‘mechanical’, then I’m good as long as I have a place to sit.

5. What is your favorite example of OpenStack in production (besides yours, of course!)?

I really like what the folks at CERN are doing. They are really pushing for Keystone to have Federated Identity support. Plus, who doesn’t like smashing subatomic particles together at nearly the speed of light?!

by OpenStack at April 10, 2014 04:24 PM

Anne Gentle

Tearing down obstacles to OpenStack documentation contributions

Rip. Shred. Tear. Let’s gather up the obstacles to documentation contribution and tear them down one by one. I’ve designed a survey with the help of the OpenStack docs team to determine blockers for docs contributions. If you’ve contributed to OpenStack, please fill it out here:

https://docs.google.com/forms/d/136-BssH-OxjVo8vNoOD-gW4x8fDFpvixbgCfeV1w_do/viewform

I want to use this survey to avoid shouting opinions and instead make sure we gather data first. This survey helps us find the biggest barriers so that we can build the best collaboration systems for documentation on OpenStack. Here are the obstacles culled from discussions in the community:

  • The git/gerrit workflow isn’t in my normal work environment
  • The DocBook and WADL (XML source) tools are not in my normal work environment
  • My team or manager doesn’t value documentation so we don’t make time for it
  • Every time I want to contribute to docs, I can’t figure out where to put the information I know
  • When I’ve tried to patch documentation, the review process was difficult or took too long
  • When I’ve contributed to docs, developers changed things without concern for docs, so my efforts were wasted
  • Testing doc patches requires an OpenStack environment I don’t have set up or access to in a lab
  • I think someone else should write the documentation, not me
  • I would only contribute documentation if I were paid to do so

Based on the input from the survey, I want to gather requirements for doc collaboration.

We have different docs for different audiences:

  • cross-project docs for deploy/install/config: openstack-manuals
  • API docs references, standards: api-site and others

These are written with the git/gerrit method. I want to talk about standing up a new docs site that serves our requirements:

Experience:
Solution must be completely open source
Content must be available online
Content must be indexable by search engines
Content must be searchable
Content should be easily cross-linked by topic and type (priority:low)
Enable comments, ratings, and analytics (or ask.openstack.org integration) (priority:low)

Distribution:
Readers must get versions of technical content specific to version of product
Modular authoring of content
Graphic and text content should be stored as files, not in a database
Consumers must get technical content in PDF, html, video, audio
Workflow for review and approval prior to publishing content

Authoring:
Content must be re-usable across authors and personas (Single source)
Must support many content authors with multiple authoring tools
Existing content must migrate smoothly
All content versions need to be comparable (diff) across versions
Content must be organizationally segregated based on user personas
Draft content must be reviewable in HTML
Link maintenance – Links must update with little manual maintenance to avoid broken links and link validation

Please take the survey and make your voice heard! Also please join us at a cross-project session at the OpenStack Summit to discuss doc contributions. We’ll go over the results there. The survey is open until the first week of May.

by annegentle at April 10, 2014 04:17 PM

Steve Hardy

Heat auth model updates - part 1 Trusts

Over the last few months I've spent a lot of my time looking at ways to rework the heat auth model, in an attempt to solve two long-standing issues:


  1. Requirement to pass a password when creating a stack which may perform deferred orchestration actions (for example AutoScaling adjustments)
  2. Requirement for users to have administrative roles when creating certain types of resource.


So, fixes to these issues have been happening (in Havana and Icehouse respectively), but discussions with various folks indicate significant confusion about differentiating the two changes, probably because I've not got around to writing up the documentation yet (it's in progress, honest!) ;)

In an attempt to clear up the confusion, and provide some documentation ahead of the upcoming Icehouse Heat release, I'm planning to cover each feature in this and a subsequent post - below is a discussion of the "Requirement to pass a password" problem, and the method used to solve it.




What? Passwords? Don't we pass tokens?

Well, yes, mostly we do. However, the problem with tokens is that they expire, and we have no way of knowing how long a stack may exist for, so we can't store user tokens to do deferred operations after the initial creation of the heat stack (not that it would be a good idea from a security perspective either...)

So in previous versions of heat, we've required the user to pass a password (yes, even if they are passing us a token), which we'd then encrypt and store in the heat database, such that we can later obtain a token to act on behalf of the user and do whatever deferred operations are required during the lifetime of the stack.  It's not a nice design, but when it was implemented, Trusts did not exist in Keystone so there was no viable alternative.  Here's exactly what happens:


  • User requests stack creation, providing a token and username/password (python-heatclient or Horizon normally requests the token for you)
  • If the stack contains any resources marked as requiring deferred operations heat will fail validation checks if no username/password is provided
  • The username/password are encrypted and stored in the heat DB
  • Stack creation is completed
  • At some later stage we retrieve the credentials and request another token on behalf of the user; the token is not limited in scope and provides access to all roles of the stack owner.

Clearly this is suboptimal, and it is the reason for this strange additional password box in horizon:

You already entered your password, right?!



Happily, after discussions with Adam Young, Trusts were implemented during Grizzly and Heat integrated with the functionality during the Havana cycle.  I get the impression not that many people have yet adopted it, so I'm hoping we can move towards making the new trusts based method the default, which has already happened for devstack quite recently.


Keystone Trusts 101

So, in describing the solution to Heat storing passwords, I will be referring to Keystone Trusts, because that is the method used to implement the solution.  There's quite a bit of good information out there, including the Keystone Wiki, Adam Young's blog and the API documentation, but here's a quick summary of terminology which should be sufficient to understand how we're using trusts in Heat:

Trusts are a keystone extension, which provide a method to enable delegation, and optionally impersonation via keystone.  The key terminology is trustor (the user delegating) and trustee (the user being delegated to).

To create a trust, the trustor (in this case the user creating the heat stack) provides keystone with the following information:


  • The ID of the trustee (who you want to delegate to, in this case the heat service user)
  • The roles to be delegated (configurable via the heat configuration file, but it needs to contain whatever roles are required to perform the deferred operations on the users behalf, e.g launching a nova instance in response to an AutoScaling event)
  • Whether to enable impersonation

Keystone then provides a trust_id, which can be consumed by the trustee (and only the trustee) to obtain a trust scoped token.  This token is limited in scope, such that the trustee has access only to the delegated roles, along with effective impersonation of the trustor, if impersonation was selected when creating the trust.
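To build some intuition for the delegation flow just described, here is a small illustrative Python model. This is a toy sketch for the blog reader, not Keystone's actual implementation; all names (SimpleTrustStore, consume, etc.) are invented for illustration:

```python
import uuid

class SimpleTrustStore:
    """Toy model of Keystone trusts: a trustor delegates roles to a trustee."""

    def __init__(self):
        self._trusts = {}

    def create_trust(self, trustor, trustee, roles, impersonation=True):
        """The trustor registers a delegation and receives a trust_id."""
        trust_id = uuid.uuid4().hex
        self._trusts[trust_id] = {
            "trustor": trustor,
            "trustee": trustee,
            "roles": list(roles),
            "impersonation": impersonation,
        }
        return trust_id

    def consume(self, trust_id, requester):
        """Exchange a trust_id for a scoped 'token'. Trustee-only."""
        trust = self._trusts[trust_id]
        # Only the trustee may consume the trust_id, which is why a stored
        # trust_id is useless to an attacker who is not the heat service user.
        if requester != trust["trustee"]:
            raise PermissionError("only the trustee can consume this trust")
        # The resulting token carries only the delegated roles and, with
        # impersonation enabled, acts as the trustor.
        return {
            "acts_as": trust["trustor"] if trust["impersonation"] else requester,
            "roles": trust["roles"],
        }

store = SimpleTrustStore()
tid = store.create_trust("stack_owner", "heat_service_user", ["heat_stack_owner"])
token = store.consume(tid, "heat_service_user")
print(token["acts_as"], token["roles"])
```

Note how the model captures the two security properties discussed above: the token is scoped to the delegated role subset, and the trust_id is only consumable by the trustee.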


Phew! Ok so how did you fix it?

Basically we now do the following:

  • User creates a stack via an API request (only the token is required)
  • Heat uses the token to create a trust between the stack owner (trustor) and the heat service user (trustee), delegating a special role (or roles) as defined in the trusts_delegated_roles list in the heat configuration file.  By default heat sets this to "heat_stack_owner", so this role must exist and the user creating the stack must have this role assigned in the project in which they are creating the stack.  Deployers may modify this list to reflect local RBAC policy, e.g to ensure the heat process can only access those services expected while impersonating a stack owner.
  • Heat stores the trust id in the heat DB (still encrypted, although in theory it doesn't need to be since it's useless to anyone other than the trustee, e.g the heat service user)
  • When a deferred operation is required, Heat retrieves the trust id, and requests a trust scoped token which enables the service user to impersonate the stack owner for the duration of the deferred operation, e.g to launch some nova instances on behalf of the stack owner in response to an AutoScaling event.

The advantages of this approach are hopefully clear, but to clarify:
  • It's better for users, we no longer require a password and can provide full functionality when provided with just a token (like all other OpenStack services... and we can kill the Horizon password box, yay!)
  • It's more secure, as we no longer store any credentials or other data which could be used by any attacker - the trust_id can only be consumed by the trustee (the heat service user).
  • It provides much more granular control of what can be done by heat in deferred operations, e.g if the stack owner has administrative roles, there's no need to delegate them to Heat, just the subset required.

I'd encourage everyone to switch to using this feature. Enabling it is simple: first, update your heat.conf file to have the following lines:


deferred_auth_method=trusts
trusts_delegated_roles=heat_stack_owner

Hopefully this will become the default for Heat in Juno.

Then ensure all users creating heat stacks have the "heat_stack_owner" role (or whatever roles you want them to delegate to the heat service user based on your local RBAC policies).

That is all, more coming soon on "stack domain users" which is new for Icehouse and resolves the second problem mentioned at the start of this post! :)

by Steve Hardy (noreply@blogger.com) at April 10, 2014 02:12 PM

Red Hat Stack

Repost: KVM Virtualization – Refining the Virtual World with Red Hat Enterprise Linux 7 Beta

Originally posted on January 29, 2014 by Bhavna Sarathy

Are the virtualization enhancements to Red Hat Enterprise Linux 7 beta relevant to your own day-to-day operations?

Read the full blog post where Bhavna Sarathy gives a deep dive and learn what’s new in the beta release and how the enhancements relate to your business.

http://rhelblog.redhat.com/2014/01/29/kvm-virtualization/

by Maria Gallegos at April 10, 2014 01:00 PM

Mark McLoughlin

Heartbleed

Watching #heartbleed (aka CVE-2014-0160) fly by in my twitter stream this week, I keep wishing we could all just pause time for a couple of weeks and properly reflect on all the angles here.

Some of the things I’d love to have more time to dig into:

by markmc at April 10, 2014 10:04 AM

Opensource.com

How to govern a project on the scale of OpenStack

Managing collaborative open source projects

How an open source project is governed can matter just as much as the features it supports, the speed at which it runs, or the code that underlies it. Some open source projects have what we might call a "benevolent dictator for life." Others are outgrowths of corporate projects that, while open, still have their goals and code led by the company that manages it. And of course, there are thousands of projects out there that are written and managed by a single person or a small group of people for whom governance is less of an issue than ensuring project sustainability.

read more

by Jason Baker at April 10, 2014 09:00 AM

Florian Haas

Catch Martin Loschwitz and Syed Armani talking about Ceph this week!

Just coming off Percona Live 2014, we're speaking at two more conferences this week: the Open Source Data Center conference in Berlin, and the OpenStack India meetup in Mumbai. We'll have two of our best speakers covering Ceph at both of these events.

read more

by florian at April 10, 2014 08:22 AM

April 09, 2014

OpenStack Blog

OpenStack 2014 T-Shirt Design Winner

The 2014 T-shirt design contest is a wrap! Thank you to everyone who shared their creativity and original designs with us this year.
We are excited to reveal our winner, Jaewin Ong of Singapore! This colorful design will debut on T-Shirts at PyCon in Montreal this week, and will be distributed at upcoming events worldwide.
OpenStack T-Shirt Design
We wanted to learn more about the creative mind behind the design, so we asked Jaewin a few questions:
What was your inspiration for this design?
  • The inspiration was actually the OpenStack logo! Since the logo is already of significance, I thought it would be cool to manipulate it with bright colors and superimposing the outline with itself.
How long have you been designing?
  • My first design was for a T-Shirt, incidentally, during my freshman year in university. The T-Shirts were printed and sold to raise funds for a committee I was involved in. And I started out with MS Paint! I’ve come a long way.
Where are you currently working?
  • I’m currently a junior in university pursuing a degree in Electrical and Electronic Engineering.
In what way are you involved in OpenStack?
  • I’m afraid to say that my involvement with OpenStack is minimal. Although, I had some experience with Python during my internship. Otherwise, I do find cloud computing to be rather complex and I admire people who do it.
Do you publish any of your other work online?
  • I don’t publish my work because I’m doing this out of interest. I would be grateful when it comes to a point where I’m publishing my work for other purposes besides interest.
Is there anything else you might like to share about yourself?
  • I constantly look for opportunities like this to improve myself. It might not be a big deal to some, but it’s a big deal to me!
Congratulations, Jaewin!
Want to see your design on a future OpenStack T-Shirt? Stay tuned on our blog as we announce upcoming design contests!

 

by Allison Price at April 09, 2014 07:13 PM

Adam Young

Teaching Horizon to Share

Horizon is The OpenStack Dashboard. It is a Django (Python) web app. During a default installation, Horizon has resources at one level under the main hostname in the URL scheme. For example, authentication is under http://hostname/auth.

Devstack performs single system deployments. Packstack has an “all-in-one” option that does the same thing. If these deployment tools are going to deploy other services via HTTPD, Horizon needs to be taught how to share the URL space. Fortunately, this is not hard to do.

A naive approach might just say “why not reserve suburls for known applications, like Keystone or Glance?” There are two reasons that this is not a decent approach. First, you do not know what other applications OpenStack might need to deploy in the future. Second, Django has a well known mechanism that allows an administrator to deploy additional web UI functionality alongside the existing UI. Thus, an end deployer could very well have a custom web UI for Keystone running at http://hostname/dashboard/keystone. We can’t block that. Django provides the means for a very neat solution.

In horizon, you can get away with setting the WEBROOT variable. I made my changes in openstack_dashboard/settings.py as I intend to submit this as a patch:

WEBROOT = '/dashboard'
LOGIN_URL = WEBROOT + '/auth/login/'
LOGOUT_URL = WEBROOT + '/auth/logout/'
# LOGIN_REDIRECT_URL can be used as an alternative for
# HORIZON_CONFIG.user_home, if user_home is not set.
# Do not set it to '/home/', as this will cause circular redirect loop
LOGIN_REDIRECT_URL = WEBROOT

However, you can achieve the same effect for your deployments by setting the values in openstack_dashboard/local/local_settings.py for devstack or the comparable file in a Puppet or Chef based install.

At this point, Django is no longer managing the root URL for your application. If you want to make the transition seamless, you need to provide a redirect from http://hostname to http://hostname/dashboard. This is an Apache HTTPD configuration issue, and not something you can do inside of Django.

On a Fedora based install, you will find a file /etc/httpd/conf.d/welcome.conf. In a default deployment, it points to the Fedora “Welcome” page.

Alias /.noindex.html /usr/share/httpd/noindex/index.html

There are many ways to use this. Here’s how I did it. It might not be the neatest. Feel free to comment if you have something better.

Create a file /var/www/html/index.html with the following body:

<META HTTP-EQUIV="Refresh" Content="0; URL=/dashboard/">

Then, modify welcome.conf to have:

Alias /.noindex.html /var/www/html/index.html

Now users that land on http://hostname are redirected to http://hostname/dashboard.
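If you'd rather skip the placeholder HTML file, a server-side redirect achieves the same effect. This is an alternative sketch, assuming the stock Fedora httpd with mod_alias loaded, rather than the approach the post describes:

```apache
# Alternative welcome.conf: redirect only the bare root URL to the dashboard,
# leaving /dashboard and any other suburls untouched.
RedirectMatch "^/$" "/dashboard/"
```

The regex anchor matters: matching only `^/$` avoids accidentally rewriting requests for other applications deployed under the same hostname.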

by Adam Young at April 09, 2014 06:41 PM

Christian Berendt

Another Vagrant box with Devstack and Ubuntu 14.04

Like Sean Dague, I really like Vagrant and created a box with Ubuntu 14.04 and Devstack a few days ago. The box is available for use with VirtualBox on the Vagrant Cloud. The Vagrantfile and Puppet manifest used are available on GitHub in the repository berendt/vagrant-devstack. Have a look at the local.conf used for the enabled services and available images.

To use the prepared box, simply install Vagrant and fetch the box with vagrant box add berendt/devstack-ubuntu-14.04-amd64. Now you can use the box with the following Vagrantfile. After typing vagrant up, the start of the box will take approximately 10 minutes. Afterwards you can log in with vagrant ssh. The dashboard is available at http://localhost:8080/. Add 127.0.0.1 openstack.site to /etc/hosts to be able to use the VNC sessions.

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  config.vm.box = "berendt/devstack-ubuntu-14.04-amd64"

  config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
  config.vm.network "forwarded_port", guest: 6080, host: 6080, host_ip: "127.0.0.1"

  config.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--memory", "16384"]
      vb.customize ["modifyvm", :id, "--cpus", "8"]
  end

  config.vm.provision "shell", privileged: false, inline: "/home/vagrant/devstack/stack.sh"

end

by berendt at April 09, 2014 04:35 PM

Red Hat Stack

Demonstration of Red Hat Enterprise Virtualization 3.3

<iframe class="youtube-player" frameborder="0" height="368" src="http://www.youtube.com/embed/RbRtRoxTmvI?version=3&amp;rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" type="text/html" width="600"></iframe>

by Russ McMullin at April 09, 2014 04:16 PM

IT Professionals worldwide are getting certified on Red Hat Enterprise Linux OpenStack Platform

<iframe class="youtube-player" frameborder="0" height="368" src="http://www.youtube.com/embed/2jMVb4VJv_Y?version=3&amp;rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" type="text/html" width="600"></iframe>

by Russ McMullin at April 09, 2014 04:15 PM

The Official Rackspace Blog » OpenStack

Why We Craft OpenStack (Featuring Rackspace Principal Architect Kurt Griffiths)

As OpenStack Summit Atlanta fast approaches, we wanted to dig deeper into the past, present and future of OpenStack. In this video series, we hear straight from some of OpenStack’s top contributors from Rackspace about how the fast-growing open source project has evolved, what it needs to continue thriving, what it means to them personally, and why they are active contributors.

Here, Kurt Griffiths, Rackspace Principal Architect, discusses how a strong community is imperative to OpenStack’s future success. According to Kurt, “Our best hope for the future of OpenStack is to create a culture of farming ideas and vetting them and letting them grow up.”

<iframe allowfullscreen="" frameborder="0" height="360" src="http://www.youtube.com/embed/wvD8wVLu42g" width="640"></iframe>

Stay tuned for more insight from Rackspace’s OpenStack contributors. And we look forward to seeing you in Atlanta for OpenStack Summit May 12 through May 16.

by Vyjayanthi Vadrevu at April 09, 2014 04:00 PM

Tesora Corp

Short Stack: Inside Telstra's OpenStack Build and Cisco CIO talks OpenStack

Welcome to the Short Stack, our weekly feature where we search for the most intriguing OpenStack links to share with you. These links may come from traditional publications or company blogs, but if it's about OpenStack, we'll find the best links we can to share with you every week. If you like what you see, please consider subscribing. Here we go with this week's links:

HP realigns object storage strategy around OpenStack Swift | 451 Research [$$$]

More evidence that OpenStack is going mainstream as HP has committed to OpenStack as part of its object storage strategy. As 451 Research points out in this report, this isn't isolated but part of an overall strategy on HP's part to build their systems around OpenStack.

Why does it matter that OpenStack is open source? | opensource.com

It's hard to explain why open source matters when it comes to cloud infrastructure, but it comes down to a single way of operating. When you use an open source system like OpenStack, you know that for the most part you can move between OpenStack vendors, and you will find a familiar environment when you do, and that's not to be minimized.

Inside Telstra’s OpenStack build | iTnews.com.au

Telstra, for those of you who aren't familiar with it, is Australia's largest telco. It struck a deal last week with Cisco Systems in which Cisco will operate a cloud service for Telstra based on Red Hat's OpenStack framework. The thinking behind the deal is that this could offer a way for Telstra to set up a cloud service that they believe can compete with AWS.

Cisco CIO Talks SDN, OpenStack & IT Challenges | Information Week

Cisco CIO Rebecca Jacoby met reporters last week at Interop, and she had a lot to say about the changing role of IT in general. She also pointed out that Cisco is going with OpenStack because she believes it can scale, and she believes in the value of an open source and standards-based approach to cloud infrastructure.

Mirantis scores IaaS deal with Parallels in second OpenStack win | Business Cloud News

This deal provides a way for Parallels Automation Service to use Mirantis OpenStack to deliver IaaS to Parallels customers. The deal allows Parallels customers to buy these services on a metered basis and use only what they need within the OpenStack environment. This comes on the heels of the biggest OpenStack deal to date: a five year deal worth an estimated $30 million with Ericsson, as reported by The Wall Street Journal.

by 693 at April 09, 2014 04:00 AM


Solinea

Webinar: OpenStack Icehouse Preview

OpenStack's Icehouse release will be available on April 17th. The latest OpenStack iteration introduces a bevy of great new features across all of the OpenStack programs, as well as the usual bug fixes and operational improvements. In this webinar, we will review the core programs and show you what’s new in this release:

  • Trove - New integrated program that provides Database as a Service
  • Compute (Nova) - Improved Scheduler performance and upgradeability enhancements
  • Object Storage (Swift) - Updated Storage policies to create more flexible deployment options
  • Block Storage (Cinder) - Enhanced Cinder Scheduler that improves block storage load distribution
  • Networking (Neutron) - Updates to floating IPs that optimize manageability
  • Overview of updates for Dashboard (Horizon) / Telemetry (Ceilometer) / Identity (Keystone) / and Image Service (Glance)

by Seth Fox (seth@solinea.com) at April 09, 2014 12:00 AM

April 08, 2014

Cloudwatt

How To Store Incremental Backups On The Cloudwatt Object Store Using Duplicity

In this article, we will show you how to store your backups in the Cloudwatt object store. For this, we'll be using Duplicity as a backup tool:

  • It's available for all Linux distributions
  • It has a range of interesting features, like incremental backups or encryption
  • It provides a native support for Swift as a backend

Here are the steps to follow to accomplish common tasks

Preliminary work

Before you start there are a few steps you need to do:

  1. Install Duplicity (make sure to pick a version >= 0.6.22)
  2. Install python-swiftclient and python-keystoneclient (usually best installed with pip)
  3. Set your environment variables to point at your Swift account

I'm assuming you already have your Cloudwatt credentials available. Here are the environment variables needed:

  • SWIFT_USERNAME
  • SWIFT_TENANTNAME
  • SWIFT_PASSWORD
  • SWIFT_AUTHURL

These variables are pretty self-descriptive, so I'm not going to detail how to use them. In case you're not very familiar with the Unix shell, here's a simple script I use to set my environment correctly:

    function set_environment {
      local username
      local tenant
      local password
      local authurl

      if [[ $# -gt 0 ]]; then
        username=$1
        tenant=$2
        password=$3
        authurl=$4
      else
        username=$OS_USERNAME
        tenant=$OS_TENANT_NAME
        password=$OS_PASSWORD
        authurl=$OS_AUTH_URL
      fi

      export SWIFT_USERNAME=$username
      export SWIFT_TENANTNAME=$tenant
      export SWIFT_PASSWORD=$password
      export SWIFT_AUTHURL=$authurl
    }

And here's how I use it:

    $ set_environment myusername mytenantname mypassword https://identity.fr1.cloudwatt.com/v2.0

Note that these environment variables are the only way to give Duplicity the Swift credentials, so make sure to set them correctly.
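Because these variables are the only channel for passing credentials, it can save a confusing failure to check them before running a backup. Here's a small pre-flight check in Python (the helper name and logic are my own, not part of Duplicity):

```python
# Hypothetical pre-flight check: verify that the Swift credentials
# Duplicity expects are present in the environment before running a
# backup. Not part of Duplicity itself.
import os

REQUIRED = ['SWIFT_USERNAME', 'SWIFT_TENANTNAME',
            'SWIFT_PASSWORD', 'SWIFT_AUTHURL']

def missing_swift_vars(environ=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED if not environ.get(name)]

# Example with an explicit dict instead of the real environment:
env = {'SWIFT_USERNAME': 'myusername', 'SWIFT_TENANTNAME': 'mytenantname'}
print(missing_swift_vars(env))  # ['SWIFT_PASSWORD', 'SWIFT_AUTHURL']
```

If the list is non-empty, fix your environment (for instance with the set_environment function above) before invoking duplicity.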

Store backups on Swift

Here's how I can backup a local directory:

    $ duplicity ~/my_directory swift://my_backup_container

Since it's the first run, Duplicity will run a full backup of the directory. Note that by default it will also encrypt the backup with GnuPG, so it will ask you for a passphrase interactively. If you don't want to enter the passphrase interactively, for instance in a backup script, you can use the PASSPHRASE variable, like this:

    $ PASSPHRASE=secret duplicity ~/my_directory swift://my_backup_container

Turning off encryption

If you don't need to secure your backups, you can turn off encryption with a simple command line option, like this:

    $ duplicity --no-encryption ~/my_directory swift://my_backup_container

Subsequent runs

Subsequent runs will create incremental backups, and in case you're using encryption, make sure you provide the same passphrase you used during the first run.

Restore backups

Restoring a backup is as easy as:

    $ duplicity swift://my_backup_container ~/my_directory_restored

Note that ~/my_directory_restored must not exist, as it will be created by Duplicity.

Going further

For any advanced use of Duplicity, you should check the documentation, the man page (man duplicity) and the myriad of tutorials and blog posts you can find all over the web. Duplicity is a widely used program, so you'll find enough info to cover every common use case.

by Joe H. Rahme at April 08, 2014 10:00 PM

Flavio Percoco

MongoDB 2.6 is out, Marconi will benefit from it

Those of you following MongoDB's development closely know that the new stable version (2.6) is out and that it brings lots of improvements and new features.

Since there are already presentations, documentation and general information about this new release, I wanted to take the chance to evaluate those changes from Marconi's perspective. Specifically, I wanted to evaluate which of the changes in this new version will help improve Marconi's MongoDB storage driver.

Index Intersection

For a long time, the only way to have a query use an index on 2 or more fields was a compound index. Although compound indexes still exist and are recommended for several scenarios, it is now possible to intersect 2 indexes per query, which means that queries like this one are now possible:

> db.post.ensureIndex({a: 1})
> db.post.ensureIndex({t: 1})
> db.post.insert({t: "yasdasdasdasdaso", a: 673453})
> db.post.find({t: "mmh", a: {"$lt": 5}}).explain() // Complex Plan

If you've followed Marconi's development, you may know that it depends heavily on compound indexes in order to have fully index-covered queries. With the addition of index intersection, it is now possible to relax some of the compound indexes. For example, the two indexes ACTIVE_INDEX_FIELDS and COUNTING_INDEX_FIELDS could be simplified into:

ACTIVE_INDEX_FIELDS = [
    ('k', 1), # Used for sorting and paging, must come before range queries
]

COUNTING_INDEX_FIELDS = [
    (PROJ_QUEUE, 1), # Project will be unique, so put first
    ('c.e', 1), # Used for filtering out claimed messages
]

Note that the index {p_q: 1, 'c.e': 1} is one of the most used ones in Marconi right now.
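To build intuition for what index intersection buys us, here's a toy Python model of the idea: each single-field index answers its own predicate, and the candidate sets are intersected afterwards. This is only a sketch of the concept, not how MongoDB actually plans queries:

```python
# Toy model of index intersection: two single-field "indexes"
# (field value -> set of document ids) answered independently,
# then intersected. Illustrative only, not MongoDB internals.
from collections import defaultdict

docs = {
    1: {'t': 'mmh', 'a': 3},
    2: {'t': 'mmh', 'a': 10},
    3: {'t': 'other', 'a': 1},
}

# Build one index per field.
idx_t = defaultdict(set)
idx_a = defaultdict(set)
for _id, doc in docs.items():
    idx_t[doc['t']].add(_id)
    idx_a[doc['a']].add(_id)

# Query: {t: "mmh", a: {"$lt": 5}} -- each predicate is answered
# from its own index, then the candidate sets are intersected.
from_t = idx_t['mmh']
from_a = set().union(*(ids for val, ids in idx_a.items() if val < 5))
result = from_t & from_a
print(sorted(result))  # [1] -- only doc 1 matches both predicates
```

Before 2.6, serving that query from indexes alone would have required a compound index on both fields.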

New Bulk Semantics

Marconi supports posting several messages at the same time, that is, bulk posts. Depending on the storage driver, this has to be implemented differently. In the case of the MongoDB driver, Marconi relies on MongoDB's bulk inserts. Although we could have used continueOnError on previous MongoDB versions, we came up with a 2-step insert process that ensures that either all or none of the messages are posted. This was done for several reasons, one of them being that the old bulk inserts didn't have great semantics, and that those semantics weren't extended to updates.

In version 2.6, MongoDB introduced ordered bulk inserts. For Marconi, this means it can rely on more deterministic behaviour when doing bulk inserts. The determinism comes from the fact that with ordered bulk inserts it is now possible to know the exact status of the insert in case of failures.

There are more things to analyse before being able to remove the 2-step inserts but this new feature definitely solves one of them.
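As a rough illustration of why ordered bulk inserts are more deterministic, here's a toy Python model (my own sketch, not pymongo's API): execution proceeds in order and stops at the first failure, so the caller knows exactly which messages made it in:

```python
# Sketch of ordered bulk-insert semantics (an illustrative model,
# not pymongo): inserts are applied in order and execution stops at
# the first failure, leaving a deterministic, known state behind.
def ordered_bulk_insert(store, messages):
    """Insert messages in order; stop and report on the first duplicate id."""
    inserted = 0
    for msg in messages:
        if msg['_id'] in store:
            # Everything before `inserted` is in the store; everything
            # from this index on was never attempted.
            return {'nInserted': inserted, 'failedIndex': inserted}
        store[msg['_id']] = msg
        inserted += 1
    return {'nInserted': inserted, 'failedIndex': None}

store = {}
msgs = [{'_id': 1}, {'_id': 2}, {'_id': 2}, {'_id': 3}]
result = ordered_bulk_insert(store, msgs)
print(result)  # {'nInserted': 2, 'failedIndex': 2}
```

With unordered (continueOnError-style) inserts, by contrast, any subset of the batch might have been applied, which is exactly what forced the 2-step process described above.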

$min / $max

This is another very cool feature to have. As of now, Marconi relies on TTL collections to delete expired messages and claims. Unfortunately, when creating a claim, there wasn't a way to update the message expiration date if the message would have expired before the claim did. With the new $min/$max operators, it is now possible to do all of this in a single operation.

new_values = {'e': message_expiration, 't': message_ttl}
collection = msg_ctrl._collection(queue, project)
updated = collection.update({'_id': {'$in': ids},
                             'c.e': {'$lte': now}},
                            {'$set': {'c': meta},
                             '$max': new_values},
                            upsert=False,
                            multi=True)['n']

In other words, we'll be able to simplify this piece of code.
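For intuition, the $max operator behaves roughly like this pure-Python sketch (an illustrative model, not MongoDB itself): a field is only updated when the new value is greater than the current one:

```python
# Pure-Python model of MongoDB's $max update operator: set each field
# to the new value only if it is greater than (or missing from) the
# current document. Illustrative sketch only.
def apply_max(doc, new_values):
    for field, value in new_values.items():
        if field not in doc or value > doc[field]:
            doc[field] = value
    return doc

# A message whose expiry ('e') would lapse before the claim does gets
# its expiry pushed out; a TTL ('t') that is already large enough is
# left untouched.
message = {'e': 100, 't': 60}
apply_max(message, {'e': 300, 't': 60})
print(message)  # {'e': 300, 't': 60}
```

This is what lets the claim-creation path extend a message's lifetime and set the claim metadata in one round trip.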

Other things

There are several other new features and improvements that I'm very excited about. For instance, MongoDB 2.6 brings in RBAC (Role Based Access Control) down to the collection level. Although Marconi allows users to secure their databases, it doesn't directly rely on MongoDB's auth features. However, the new RBAC allows for a more secure distribution of messages throughout the database instance. It could be possible to create roles based on Keystone roles and let the database enforce that for us. Whether or not this is a good idea is out of the scope of this post, though.

MongoDB 2.6 also brings a brand new write protocol that integrates write operations with write concerns. The default write concern is safe-writes, which means that write failures are reported immediately.

Overall, this is a really exciting MongoDB release for me and for Marconi's team. Besides bringing several fixes and enhancements, it also brings new features that will make the storage driver simpler and safer. Please read the full release notes for more information.

by FlaPer87 at April 08, 2014 02:40 PM

Red Hat Stack

Experience enterprise infrastructure for yourself at Red Hat Summit 2014

By Jonathan Gershater, Principal Product Marketing Manager, Red Hat

This year marks the 10th anniversary of Red Hat Summit and for the first time in San Francisco, April 14-17! At the Infrastructure as a Service zone of the Red Hat Booth, there will be demos of our cloud and virtualization technologies.

We’ll be showing a live demonstration of the latest OpenStack innovations with Red Hat Enterprise Linux OpenStack Platform 4, based on the Havana release. If you’ve ever been interested in learning more about what OpenStack is, or are already experienced with OpenStack and would like to see the latest feature enhancements, be sure to stop by for a chat with an IaaS expert. We’ll be showing the Horizon dashboard, images, tenants, volumes, and networks with an easy point-and-click interface to:

  • Launch a virtual machine instance
  • Attach storage
  • Connect to networks
  • Suspend or terminate a virtual instance
  • Create tenants
  • View usage
  • and much more…

In addition, we’ll also share some info on how to use the command line tools. Along with scriptable methods to automate the above tasks, you can experience the power of Red Hat Enterprise Linux OpenStack Platform to automate daily tasks that provision, de-provision, and re-provision cloud infrastructure resources.

There is also a demo of Red Hat Enterprise Virtualization 3.3. This customer-proven traditional virtualization solution provides everything you need to virtualize traditional enterprise servers and virtual desktop workloads. This document compares the pricing of Red Hat® Enterprise Virtualization with the pricing of VMware vSphere 5.5 for server virtualization. As the use cases detailed here reveal, Red Hat Enterprise Virtualization can cost 50-80% less than VMware vSphere 5.5. Download this guide for details.

Perhaps you’re looking for an alternative virtualization infrastructure to help combat expensive proprietary infrastructures and avoid vendor lock-in risks, or maybe you’re already a customer and would like to experience the latest features and functionality. Either way, join us at the IaaS Zone of the Red Hat booth to experience it for yourself. We’ll be showcasing basic to advanced capabilities, such as:

  • Live migration of virtual machines between hosts
  • Templates
  • Integrated Red Hat Storage
  • Self-hosted Red Hat Enterprise Virtualization Manager
  • Integration with a Directory Server for authentication and authorization
  • Snapshots
  • Integration with the OpenStack image service
  • Cloud-Init provisions virtual machines with initial setup (including networking, SSH keys, timezone, user data injection, and more).
  • Browser-based administrator portal
  • Browser-based self-service user portal
  • And much more…

For an end-to-end, enterprise-ready solution, be sure to ask about Red Hat Cloud Infrastructure. Red Hat Cloud infrastructure provides a complete IaaS cloud management solution integrating with your existing VMware, Amazon Web Services, Red Hat Enterprise Virtualization and OpenStack infrastructure.
At this demo, you’ll view:

  • Self-Service Provisioning
  • Workload management
  • Chargeback/showback
  • Deploying N-tier services vs simple VMs
  • Public cloud bursting/flexing to Amazon
  • And much more…

With 10 years of experience putting on these Summits, they only get better. With an incredible lineup of sessions, labs, and hands-on workshops, I’m confident this will be the best Red Hat Summit yet! I look forward to seeing you in San Francisco! Register using code TWT14 for $1000 off on-site pricing: http://www.redhat.com/summit/?sc_cid=70160000000cQzdAAE
by Maria Gallegos at April 08, 2014 01:00 PM

Opensource.com

Why open infrastructure matters in the cloud

Building open infrastructure

When reading a recent article by Red Hat CEO Jim Whitehurst, I was struck by a comparison made between OpenStack and the interstate highway system. The article in Wall Street and Technology, called "OpenStack: Five things every executive needs to know," mostly focused on the high points of where OpenStack is in its development cycle. But the highway analogy stuck with me.

read more

by Jason Baker at April 08, 2014 09:00 AM

Sean Roberts

Why I Support Mark McClain for OpenStack Neutron PTL

I am biased. I admit it. Mark McClain is a friend and a fellow Yahoo. I think he is doing a good job as the current OpenStack Neutron PTL and he deserves another 6 month term. That being said, I have many friends in this business and they are all good people. There are no […]

by sean roberts at April 08, 2014 06:18 AM

April 07, 2014

OpenStack Blog

OpenStack Day Events April – May – June 2014

Several upcoming OpenStack Day events are taking place around the world. Please join us in spreading the word and register soon. We hope to see you there!
OpenStack Day Mexico in Mexico City – April 29
  • When: Tuesday, April 29, 2014
  • Where: World Trade Center Mexico
  • Tickets: Tickets are MXN $200.00, covering all meals, workshops and conferences. Register quickly! 
OpenStack CEE Day in Budapest – May 26
OpenStack in Action 5! in Paris – May 28
  • Attendees will be provided with the raw materials to engage with the community, become a consumer of the technology and take part in its evolution
  • When: Wednesday, May 28, 2014
  • Where: CAP 15
  • Admission is free, so register to get an overview of the OpenStack technology, projects updates, challenges, best practices and roadmap for all audiences
#1 OpenStack Day in Milan – May 30
  • When: Friday, May 30, 2014
  • Where: Via Privata Stefanardo de Vimercate
OpenStack Israel in Tel Aviv-Yafo – June 2
Hear about OpenStack’s Icehouse Release from industry thought leaders and local OpenStack users. Following the conference, attend a 3-day training course on the current OpenStack Havana Release.
  • When: Monday, June 2, 2014
  • Where: Arts Hall HaBama Herzliya
  • Tickets: We’re expecting +300 OpenStack users, prospective users, ecosystem members and developers to attend, so register quickly!
With an anticipated 500+ attendees from all sectors of London’s wide and diverse tech community and an exciting line-up of speakers and exhibitors, this will be the UK’s largest OpenStack-related event this year!
  • When: Wednesday, June 4, 2014
  • Where: 155 Bishopsgate
  • Tickets: The early bird rate expires on May 14th, so register quickly before prices increase!
If you are interested in organizing an OpenStack Day event in your area, please contact events@openstack.org.

 

by Allison Price at April 07, 2014 07:45 PM

Matt Farina

Introducing the OpenStack SDK for PHP

OpenStack has become the go-to way to build open source public or private clouds. PHP is one of the most popular programming languages on the planet and the dominant server-side language of the web.

What if you want to marry PHP and OpenStack? To have PHP work with OpenStack APIs, consuming services or managing them? Until recently the efforts have focused on SDKs and language bindings for companies like Rackspace and HP. The libraries from these companies may support some of OpenStack, but they have been focused on the stacks provided by those companies rather than all of the options.

With the OpenStack SDK for PHP this changes. This SDK is meant to be by the community and for the community. It will be able to work with clouds from a variety of vendors or vanilla OpenStack setups.

What Will It Provide?

This SDK is targeted at meeting the needs of application developers, as opposed to those who are building or deploying OpenStack itself. We want to make it easy to use OpenStack for the long tail of applications written in PHP. To that end it will:

  • Provide a language binding. You can work with OpenStack APIs using regular PHP.
  • Contain documentation explaining how the code works and how to get going with OpenStack. We don't expect application developers to understand OpenStack when they first approach it.
  • Share code examples for working with OpenStack. This includes writing code that will easily work with different vendors.
  • Be framework agnostic. We're supporting the efforts of the PHP Framework Interoperability Group (FIG). If you're using Symfony, the Zend Framework, or something else meant to be interoperable we want to make it easy to work with OpenStack.

Where Are We At?

The SDK is currently under development but moving fairly quickly. The near-term goal is to arrive at a stable API with support for the Identity Service (the basis for authentication and authorization) and Object Storage. Once the SDK API is in place we'll move to support more services and add more documentation.

We are also at a good point for others to get involved in the effort. If you want to learn more about what's going on or contribute, now is a good time to come around.

How To Get Involved?

If you're interested in getting involved there are a few ways you can get going.

  1. Find us in IRC and strike up a conversation. I'm mfer on Freenode and you can find me in the room #openstack-sdks.
  2. Check out the OpenStack wiki page about the PHP SDK.
  3. Join us for a meeting in IRC. The next one is this Wednesday at 11:30am ET (15:30 UTC).
  4. Poke around the code on Github. Note, Github is a mirror of the code. If you're interested in getting involved in the development process please jump into IRC or the documentation to learn more about it.

If you have any questions or suggestions please let us know. We hope to help applications successfully use OpenStack.

Continue Reading »

April 07, 2014 04:25 PM